Glama
kukapay

solana-launchpads-mcp

get_daily_tokens_deployed

Track daily token deployments across Solana memecoin launchpads. Fetch deployment counts by platform and optionally view data as percentages of daily totals.

Instructions

Retrieve the daily count of tokens deployed by Solana memecoin launchpads.

This tool fetches data from a Dune Analytics query and pivots it to show the number of tokens
deployed per day by each platform. Optionally, it can return the data as percentages of the
total daily deployments.

Args:
    return_percent (bool, optional): If True, returns the data as percentages of total daily
        deployments for each platform. Defaults to False.
    limit (int, optional): Maximum number of rows to fetch from the Dune query. Defaults to 1000.

Returns:
    str: A markdown-formatted table of daily token deployments by platform, or an error message
        if the query fails.
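
For concreteness, an MCP tool call requesting percentage output might look like the following (the argument values here are hypothetical):

```json
{
  "name": "get_daily_tokens_deployed",
  "arguments": {
    "return_percent": true,
    "limit": 500
  }
}
```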

Input Schema

| Name           | Required | Description | Default |
|----------------|----------|-------------|---------|
| return_percent | No       |             | false   |
| limit          | No       |             | 1000    |

Implementation Reference

  • main.py:46-73 (handler)
    The handler function that executes the tool logic: fetches latest results from Dune query ID 4010816, processes the data using pandas to pivot daily token counts by platform, optionally computes percentages, sorts by date descending, and returns a markdown table or error string.
    def get_daily_tokens_deployed(return_percent: bool = False, limit: int = 1000) -> str:
        """
        Retrieve the daily count of tokens deployed by Solana memecoin launchpads.
    
        This tool fetches data from a Dune Analytics query and pivots it to show the number of tokens
        deployed per day by each platform. Optionally, it can return the data as percentages of the
        total daily deployments.
    
        Args:
            return_percent (bool, optional): If True, returns the data as percentages of total daily
                deployments for each platform. Defaults to False.
            limit (int, optional): Maximum number of rows to fetch from the Dune query. Defaults to 1000.
    
        Returns:
            str: A markdown-formatted table of daily token deployments by platform, or an error message
                if the query fails.
        """
        try:
            data = get_latest_result(4010816, limit=limit)
            df = pd.DataFrame(data)
            df['date'] = pd.to_datetime(df['date_time']).dt.date
            pivot_df = df.pivot(index='date', columns='platform', values='daily_token_count')
            pivot_df = pivot_df.sort_index(ascending=False)
            if return_percent:
                pivot_df = pivot_df.div(pivot_df.sum(axis=1), axis=0).round(3)
            return pivot_df.to_markdown()
        except Exception as e:
            return str(e)
  • main.py:45-45 (registration)
    The @mcp.tool() decorator registers the get_daily_tokens_deployed function as an MCP tool.
    @mcp.tool()
  • main.py:21-44 (helper)
    Supporting helper function used by the tool to fetch the latest execution results from the Dune Analytics API for the specified query ID.
    def get_latest_result(query_id: int, limit: int = 1000) -> list:
        """
        Fetch the latest results from a Dune Analytics query.
    
        Args:
            query_id (int): The ID of the Dune query to fetch results from.
            limit (int, optional): Maximum number of rows to return. Defaults to 1000.
    
        Returns:
            list: A list of dictionaries containing the query results, or an empty list if the request fails.
    
        Raises:
            httpx.HTTPStatusError: If the API request fails due to a client or server error.
        """
        url = f"{BASE_URL}/query/{query_id}/results"
        params = {"limit": limit}
        with httpx.Client() as client:
            response = client.get(url, params=params, headers=HEADERS, timeout=300)
            response.raise_for_status()
            data = response.json()
            
        result_data = data.get("result", {}).get("rows", [])
        return result_data
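The helper references module-level `BASE_URL` and `HEADERS` constants that are not shown in this excerpt. A plausible setup, assuming Dune's v1 API and its standard API-key header (the environment-variable name is an assumption), might be:

```python
import os

# Assumed module-level configuration for get_latest_result; not shown in
# the excerpt above. Dune's v1 API authenticates via the X-Dune-API-Key
# header. DUNE_API_KEY is a hypothetical environment variable name.
BASE_URL = "https://api.dune.com/api/v1"
HEADERS = {"X-Dune-API-Key": os.environ.get("DUNE_API_KEY", "")}
```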
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing: data source (Dune Analytics query), transformation (pivoting to show per-platform counts), optional percentage calculation, error handling (returns error message on query failure), and output format (markdown table). It doesn't mention rate limits, authentication needs, or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with purpose statement, implementation details, parameter explanations, and return format, all in four focused sentences. Every sentence earns its place with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no annotations and no output schema, the description provides excellent coverage of purpose, parameters, and return format. It could improve by mentioning data freshness, query execution time, or rate limits, but covers the essentials well given the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters: 'return_percent' controls percentage vs. absolute value formatting with clear default, and 'limit' specifies maximum rows from the query with default value. The description adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('retrieve the daily count'), resource ('tokens deployed by Solana memecoin launchpads'), and data source ('fetches data from a Dune Analytics query'). It distinguishes from siblings by focusing on token deployments rather than addresses, graduates, or graduation rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing token deployment trends and offers an optional percentage format, but doesn't explicitly state when to use this tool versus alternatives or provide context about prerequisites. No sibling tool comparisons or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
