
UCSC Genome Browser MCP Server

by hlydecker

list_public_hubs

Retrieve all publicly available track hubs from the UCSC Genome Browser to access organized genomic datasets for research and analysis.

Instructions

List all available public track hubs in the UCSC Genome Browser.

Input Schema

Name | Required | Description | Default

No arguments
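Because the schema is an empty object, a call to this tool carries an empty arguments map. A minimal sketch of the corresponding MCP `tools/call` JSON-RPC payload (the request `id` is arbitrary and chosen here for illustration):

```python
import json

# Minimal MCP tools/call request for a zero-argument tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_public_hubs",
        "arguments": {},  # empty object: the schema declares no properties
    },
}

print(json.dumps(request))
```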

Implementation Reference

  • Handler code within the @app.call_tool() dispatcher that constructs the UCSC API URL for /list/publicHubs and fetches the list of public track hubs.
    elif name == "list_public_hubs":
        url = build_api_url("/list/publicHubs", {})
        result = await make_api_request(url)
  • Tool registration in @app.list_tools(), defining the tool name, description, and empty input schema.
    Tool(
        name="list_public_hubs",
        description="List all available public track hubs in the UCSC Genome Browser.",
        inputSchema={
            "type": "object",
            "properties": {}
        }
    ),
  • Input schema definition for the tool, which requires no parameters.
    inputSchema={
        "type": "object",
        "properties": {}
    }
  • Helper function to construct UCSC API URLs, used by the list_public_hubs handler.
    def build_api_url(endpoint: str, params: dict[str, Any]) -> str:
        """Build the complete API URL with parameters."""
        # Filter out None values
        filtered_params = {k: v for k, v in params.items() if v is not None}
        
        # Convert parameters to URL format (using semicolons as per UCSC API spec)
        if filtered_params:
            param_str = ";".join(f"{k}={v}" for k, v in filtered_params.items())
            return f"{BASE_URL}{endpoint}?{param_str}"
        return f"{BASE_URL}{endpoint}"
  • Helper function to perform HTTP GET requests to UCSC API and parse JSON response, used by the handler.
    async def make_api_request(url: str) -> dict[str, Any]:
        """Make an HTTP request to the UCSC API and return JSON response."""
        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.get(url)
            response.raise_for_status()
            return response.json()
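The URL-building helper above can be exercised standalone. A small sketch, assuming the server's `BASE_URL` constant is the public UCSC REST endpoint `https://api.genome.ucsc.edu` (the actual constant may differ):

```python
from typing import Any

# Assumed base URL for the UCSC REST API; the server's real constant may differ.
BASE_URL = "https://api.genome.ucsc.edu"

def build_api_url(endpoint: str, params: dict[str, Any]) -> str:
    """Build the complete API URL, joining parameters with semicolons."""
    # Drop parameters that were not supplied
    filtered_params = {k: v for k, v in params.items() if v is not None}
    if filtered_params:
        param_str = ";".join(f"{k}={v}" for k, v in filtered_params.items())
        return f"{BASE_URL}{endpoint}?{param_str}"
    return f"{BASE_URL}{endpoint}"

# list_public_hubs passes an empty dict, so the bare endpoint is returned:
print(build_api_url("/list/publicHubs", {}))

# A hypothetical parameterised call shows the semicolon-joined query string:
print(build_api_url("/list/tracks", {"genome": "hg38", "trackLeavesOnly": 1}))
```

Note the semicolon separator: the UCSC API accepts `;`-delimited parameters, which is why the helper does not use the more common `&`.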
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. Naming it a list operation implies it is likely read-only and non-destructive, but the description neither confirms this nor adds details such as rate limits, authentication needs, or response format. This leaves gaps in understanding the tool's behavior beyond its basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that directly states the tool's purpose without any redundant or unnecessary information. It's front-loaded and efficiently conveys the essential information, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has no parameters, no annotations, and no output schema, the minimal description covers the basic purpose. However, for a list operation in a server with many sibling tools, it lacks details on output format, limitations, and how it fits into the broader workflow, leaving it adequate at best, with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema defines no parameters, so no parameter documentation is needed, and the description appropriately adds none. It could still have mentioned implicit constraints (e.g., that no filtering options exist). With nothing to document, a baseline score of 4 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all available public track hubs in the UCSC Genome Browser'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_hub_genomes' or 'list_ucsc_genomes', which might have overlapping domains, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage based on the tool name alone, which is insufficient for effective tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

