
BioContextAI Knowledgebase MCP

Official

bc_get_drug_statistics

Retrieve FDA drug database statistics including top sponsors, dosage forms, administration routes, and marketing statuses to analyze pharmaceutical data trends.

Instructions

Get general statistics about the FDA Drugs@FDA database. Includes top sponsors, dosage forms, routes, marketing status.

Returns: dict: Top sponsors, dosage_forms, administration_routes, marketing_statuses with counts or error message.
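For orientation, openFDA count queries return ranked `term`/`count` pairs under a `results` key, so each value in the tool's output dict plausibly has the following shape (the sponsor names and counts below are illustrative placeholders, not real FDA data):

```python
# Hypothetical example of the dict returned by bc_get_drug_statistics.
# Each key holds a raw openFDA count response: a list of {"term", "count"}
# pairs under "results". Only two of the four categories are shown.
sample_statistics = {
    "top_sponsors": {
        "results": [
            {"term": "EXAMPLE PHARMA INC", "count": 1234},
            {"term": "ANOTHER PHARMA LLC", "count": 987},
        ]
    },
    "dosage_forms": {
        "results": [
            {"term": "TABLET", "count": 5678},
            {"term": "INJECTABLE", "count": 2345},
        ]
    },
}

# Pull the most frequent term from each category.
top_by_category = {
    key: value["results"][0]["term"] for key, value in sample_statistics.items()
}
print(top_by_category)
```

Since count responses are already sorted by descending count, the first entry of each `results` list is the top term for that category.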

Input Schema

No arguments.

Output Schema

No arguments.

Implementation Reference

  • Handler function for the tool logic, decorated with @core_mcp.tool(). Fetches and aggregates drug statistics from the FDA Drugs@FDA API including top sponsors, dosage forms, administration routes, and marketing statuses.
    @core_mcp.tool()
    def get_drug_statistics() -> dict:
        """Get general statistics about the FDA Drugs@FDA database. Includes top sponsors, dosage forms, routes, marketing status.
    
        Returns:
            dict: Top sponsors, dosage_forms, administration_routes, marketing_statuses with counts or error message.
        """
        statistics = {}
    
        try:
            # Get top sponsors
            base_url = "https://api.fda.gov/drug/drugsfda.json"
            sponsors_response = requests.get(base_url, params={"count": "sponsor_name", "limit": 10})  # type: ignore
            sponsors_response.raise_for_status()
            statistics["top_sponsors"] = sponsors_response.json()
    
            # Get dosage forms
            dosage_response = requests.get(base_url, params={"count": "products.dosage_form", "limit": 15})  # type: ignore
            dosage_response.raise_for_status()
            statistics["dosage_forms"] = dosage_response.json()
    
            # Get routes of administration
            routes_response = requests.get(base_url, params={"count": "products.route", "limit": 15})  # type: ignore
            routes_response.raise_for_status()
            statistics["administration_routes"] = routes_response.json()
    
            # Get marketing statuses
            status_response = requests.get(base_url, params={"count": "products.marketing_status", "limit": 10})  # type: ignore
            status_response.raise_for_status()
            statistics["marketing_statuses"] = status_response.json()
    
            return statistics
    
        except requests.exceptions.RequestException as e:
            return {"error": f"Failed to fetch FDA drug statistics: {e!s}"}
  • Registers the core_mcp server (containing the get_drug_statistics tool) into the main BioContextAI MCP application with the prefix 'bc' (from slugify('BC')), making the tool available as 'bc_get_drug_statistics'.
    logger.info("Setting up MCP server...")
    for mcp in [core_mcp, *(await get_openapi_mcps())]:
        await mcp_app.import_server(
            mcp,
            slugify(mcp.name),
        )
  • Imports the openfda module, triggering the registration of get_drug_statistics on core_mcp via its @core_mcp.tool() decorator when the module is loaded.
    from .openfda import *
  • Exposes the get_drug_statistics handler function for import, facilitating its registration.
    from ._count_drugs import count_drugs_by_field, get_drug_statistics
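The handler above issues four near-identical count queries. The aggregation step can be sketched as a loop over `(key, field, limit)` tuples, with the HTTP call injected so the logic runs offline (a sketch for illustration, not the actual module code):

```python
BASE_URL = "https://api.fda.gov/drug/drugsfda.json"

# (result key, openFDA field to count, number of buckets) -- taken from the handler
COUNT_FIELDS = [
    ("top_sponsors", "sponsor_name", 10),
    ("dosage_forms", "products.dosage_form", 15),
    ("administration_routes", "products.route", 15),
    ("marketing_statuses", "products.marketing_status", 10),
]

def aggregate_statistics(fetch):
    """Run all four count queries through an injected fetch(field, limit).

    In the real handler, fetch would call requests.get(BASE_URL, ...) and
    raise_for_status(); here it is a parameter so the aggregation logic can
    be exercised without network access.
    """
    try:
        return {key: fetch(field, limit) for key, field, limit in COUNT_FIELDS}
    except Exception as e:  # the real handler narrows this to RequestException
        return {"error": f"Failed to fetch FDA drug statistics: {e!s}"}

# Offline demonstration with a stub fetcher mimicking openFDA's count format.
stub = lambda field, limit: {"results": [{"term": field, "count": limit}]}
stats = aggregate_statistics(stub)
print(sorted(stats))
```

Injecting the fetch function this way also makes the error path easy to test: a stub that raises produces the same `{"error": ...}` dict the tool returns on request failure.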
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It mentions the return format ('dict: Top sponsors, dosage_forms, administration_routes, marketing_statuses with counts or error message') which adds some value, but doesn't describe important behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, data freshness, or what triggers the error message. For a tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: one stating the purpose with specific examples, and another describing the return format. It's appropriately sized for a zero-parameter tool. While every sentence earns its place, the second sentence could be slightly more polished (e.g., clarifying that it returns a dictionary with those keys).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is minimally adequate. It states what statistics are retrieved and the return format. However, for a tool with no annotations, it should provide more behavioral context (e.g., is this a cached summary? real-time? any limitations?). The existence of an output schema means it doesn't need to explain return values in detail, but other contextual gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage (empty schema). The description appropriately doesn't discuss parameters since none exist. It focuses correctly on what the tool does rather than parameter details. A baseline of 4 is appropriate for zero-parameter tools when the description addresses the tool's function.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get general statistics about the FDA Drugs@FDA database' with specific examples of what statistics are included (top sponsors, dosage forms, routes, marketing status). It uses a specific verb ('Get') and identifies the resource (FDA Drugs@FDA database statistics). However, it doesn't explicitly differentiate from sibling tools like 'bc_count_drugs_by_field' which might also provide statistical counts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, constraints, or comparison to sibling tools like 'bc_count_drugs_by_field' or 'bc_search_drugs_fda' that might overlap in functionality. The agent receives no usage context beyond the basic purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
