list_installed_docsets
Lists all documentation sets installed in Dash to help users identify available resources for reference and search.
Instructions
List all installed documentation sets in Dash. An empty list is returned if the user has no docsets installed. Results are automatically truncated if they would exceed 25,000 tokens.
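For orientation, a successful result serializes the `DocsetResults` model shown under Implementation Reference. The sketch below uses hypothetical docset values, since the actual list depends on what is installed in Dash.

```python
# Hypothetical shape of a successful result (values are illustrative only);
# field names follow the DocsetResults / DocsetResult models shown below.
example_result = {
    "docsets": [
        {
            "name": "Python 3",
            "identifier": "python3",
            "platform": "python",
            "full_text_search": "enabled",
            "notice": None,
        },
    ],
    "error": None,
}
```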
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *No arguments* | | | |
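Because the tool takes no arguments, a client simply passes an empty `arguments` object. A minimal sketch of the corresponding MCP `tools/call` request, written here as a Python dict (the request `id` and transport details vary by client):

```python
# Minimal sketch of an MCP tools/call request for this tool; the request id
# is chosen by the client, and the empty "arguments" object reflects the
# no-argument input schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_installed_docsets",
        "arguments": {},
    },
}
```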
Implementation Reference
- src/dash_mcp_server/server.py:186-240 (handler): implements the `list_installed_docsets` tool. It fetches docsets from the Dash API, handles errors, applies token-limit truncation using `estimate_tokens`, and returns a structured `DocsetResults`. (An illustrative sketch of the payload this handler parses appears after this list.)

  ```python
  @mcp.tool()
  async def list_installed_docsets(ctx: Context) -> DocsetResults:
      """List all installed documentation sets in Dash. An empty list is returned if the user has no docsets installed. Results are automatically truncated if they would exceed 25,000 tokens."""
      try:
          base_url = await working_api_base_url(ctx)
          if base_url is None:
              return DocsetResults(error="Failed to connect to Dash API Server. Please ensure Dash is running and the API server is enabled (in Dash Settings > Integration).")

          await ctx.debug("Fetching installed docsets from Dash API")
          with httpx.Client(timeout=30.0) as client:
              response = client.get(f"{base_url}/docsets/list")
              response.raise_for_status()
              result = response.json()

          docsets = result.get("docsets", [])
          await ctx.info(f"Found {len(docsets)} installed docsets")

          # Build result list with token limit checking
          token_limit = 25000
          current_tokens = 100  # Base overhead for response structure
          limited_docsets = []

          for docset in docsets:
              docset_info = DocsetResult(
                  name=docset["name"],
                  identifier=docset["identifier"],
                  platform=docset["platform"],
                  full_text_search=docset["full_text_search"],
                  notice=docset.get("notice")
              )

              # Estimate tokens for this docset
              docset_tokens = estimate_tokens(docset_info)
              if current_tokens + docset_tokens > token_limit:
                  await ctx.warning(f"Token limit reached. Returning {len(limited_docsets)} of {len(docsets)} docsets to stay under 25k token limit.")
                  break

              limited_docsets.append(docset_info)
              current_tokens += docset_tokens

          if len(limited_docsets) < len(docsets):
              await ctx.info(f"Returned {len(limited_docsets)} docsets (truncated from {len(docsets)} due to token limit)")

          return DocsetResults(docsets=limited_docsets)
      except httpx.HTTPStatusError as e:
          if e.response.status_code == 404:
              await ctx.warning("No docsets found. Install some in Settings > Downloads.")
              return DocsetResults(error="No docsets found. Instruct the user to install some docsets in Settings > Downloads.")
          return DocsetResults(error=f"HTTP error: {e}")
      except Exception as e:
          await ctx.error(f"Failed to get installed docsets: {e}")
          return DocsetResults(error=f"Failed to get installed docsets: {e}")
  ```
- Pydantic model defining the output schema for the `list_installed_docsets` tool:

  ```python
  class DocsetResults(BaseModel):
      """Result from listing docsets."""
      docsets: list[DocsetResult] = Field(description="List of installed docsets", default_factory=list)
      error: Optional[str] = Field(description="Error message if there was an issue", default=None)
  ```
- Pydantic model for individual docset information used in `DocsetResults` (a short construction sketch appears after this list):

  ```python
  class DocsetResult(BaseModel):
      """Information about a docset."""
      name: str = Field(description="Display name of the docset")
      identifier: str = Field(description="Unique identifier")
      platform: str = Field(description="Platform/type of the docset")
      full_text_search: str = Field(description="Full-text search status: 'not supported', 'disabled', 'indexing', or 'enabled'")
      notice: Optional[str] = Field(description="Optional notice about the docset status", default=None)
  ```
- src/dash_mcp_server/server.py:186-186 (registration): MCP decorator that registers the `list_installed_docsets` function as a tool.

  ```python
  @mcp.tool()
  ```
- Helper function used by the handler to estimate tokens for the truncation logic (worked examples of the heuristic appear after this list):

  ```python
  def estimate_tokens(obj) -> int:
      """Estimate token count for a serialized object. Rough approximation: 1 token ≈ 4 characters."""
      if isinstance(obj, str):
          return max(1, len(obj) // 4)
      elif isinstance(obj, (list, tuple)):
          return sum(estimate_tokens(item) for item in obj)
      elif isinstance(obj, dict):
          return sum(estimate_tokens(k) + estimate_tokens(v) for k, v in obj.items())
      elif hasattr(obj, 'model_dump'):  # Pydantic model
          return estimate_tokens(obj.model_dump())
      else:
          return max(1, len(str(obj)) // 4)
  ```
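For the handler above, the Dash API's `/docsets/list` endpoint is expected to return a JSON object with a top-level `docsets` array. The sketch below shows only the keys the handler reads, with illustrative values that are not taken from a real Dash installation.

```python
# Hypothetical /docsets/list payload as parsed by the handler; only the keys
# the handler reads are shown, and all values are illustrative.
example_api_response = {
    "docsets": [
        {
            "name": "Swift",
            "identifier": "swift",
            "platform": "apple",
            "full_text_search": "enabled",
            # "notice" is optional; docset.get("notice") yields None when absent
        },
    ],
}
```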
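As a quick sanity check of the output shape, the Pydantic models can be instantiated directly. This is a sketch assuming the model definitions above, not an excerpt from the project's tests.

```python
# Sketch: building a DocsetResults value by hand (assumes the models above).
result = DocsetResults(
    docsets=[
        DocsetResult(
            name="NumPy",
            identifier="numpy",
            platform="python",
            full_text_search="enabled",
        )
    ]
)
assert result.error is None       # error defaults to None on success
print(result.model_dump_json())   # JSON serialization of the result model
```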
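A few worked examples of the `estimate_tokens` heuristic (roughly 4 characters per token, with a floor of 1 token per value):

```python
# Worked examples of the 4-characters-per-token heuristic above.
assert estimate_tokens("abcdefgh") == 2         # 8 chars // 4
assert estimate_tokens("hi") == 1               # floor of 1 token
assert estimate_tokens({"name": "NumPy"}) == 2  # key (1) + value (1)
assert estimate_tokens(["abcd", "efgh"]) == 2   # sum over list items
```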