drasticstatic

hummingbot-mcp

explore_geckoterminal

Explore DEX market data from GeckoTerminal with no API key required. List networks, DEXes, trending or top pools, pool details, token info, OHLCV candles, and recent trades across multiple blockchains.

Instructions

Explore DEX market data from GeckoTerminal (free, no API key needed).

Progressive discovery flow:
1. action="networks" → List all supported networks (solana, eth, bsc, ...)
2. action="dexes" + network → List DEXes on a network
3. action="trending_pools" (+ network) → Trending pools globally or per network
4. action="top_pools" + network (+ dex_id) → Top pools by volume on a network/dex
5. action="new_pools" (+ network) → Recently created pools
6. action="pool_detail" + network + pool_address → Detailed info for one pool
7. action="multi_pools" + network + pool_addresses → Compare multiple pools
8. action="token_pools" + network + token_address → Top pools for a token
9. action="token_info" + network + token_address → Token details (price, mcap, fdv)
10. action="ohlcv" + network + pool_address → OHLCV candle data
11. action="trades" + network + pool_address → Recent trades

Args:
    action: The data to retrieve.
    network: Network ID (e.g., 'solana', 'eth', 'bsc'). Required for most actions.
    dex_id: DEX ID filter for top_pools (e.g., 'raydium', 'uniswap_v3').
    pool_address: Pool contract address (for pool_detail, ohlcv, trades).
    pool_addresses: List of pool addresses (for multi_pools).
    token_address: Token contract address (for token_pools, token_info).
    timeframe: OHLCV interval (default: '1h'). Options: 1m, 5m, 15m, 1h, 4h, 12h, 1d.
    before_timestamp: Fetch OHLCV candles before this unix timestamp (pagination).
    currency: OHLCV price currency, 'usd' or 'token' (default: 'usd').
    token: Which token's price for OHLCV, 'base' or 'quote' (default: 'base').
    limit: Max OHLCV candles to return (default: 1000).
    trade_volume_filter: Min trade volume in USD to filter trades (optional).

Input Schema

Name                | Required | Description | Default
--------------------|----------|-------------|--------
action              | Yes      |             |
network             | No       |             |
dex_id              | No       |             |
pool_address        | No       |             |
pool_addresses      | No       |             |
token_address       | No       |             |
timeframe           | No       |             | 1h
before_timestamp    | No       |             |
currency            | No       |             | usd
token               | No       |             | base
limit               | No       |             |
trade_volume_filter | No       |             |
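The flattened table above can be read as a JSON Schema fragment, sketched here as a Python dict. Property types are assumptions inferred from the handler signature shown below, the `enum` values come from the action list, and the defaults come from the documented Args:

```python
input_schema = {
    "type": "object",
    "properties": {
        "action": {"type": "string", "enum": [
            "networks", "dexes", "trending_pools", "top_pools", "new_pools",
            "pool_detail", "multi_pools", "token_pools", "token_info",
            "ohlcv", "trades",
        ]},
        "network": {"type": "string"},
        "dex_id": {"type": "string"},
        "pool_address": {"type": "string"},
        "pool_addresses": {"type": "array", "items": {"type": "string"}},
        "token_address": {"type": "string"},
        "timeframe": {"type": "string", "default": "1h"},
        "before_timestamp": {"type": "integer"},
        "currency": {"type": "string", "default": "usd"},
        "token": {"type": "string", "default": "base"},
        "limit": {"type": "integer", "default": 1000},
        "trade_volume_filter": {"type": "number"},
    },
    "required": ["action"],
}
```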

Implementation Reference

  • Main handler implementation for explore_geckoterminal tool. Executes GeckoTerminal API actions (networks, dexes, trending_pools, top_pools, new_pools, pool_detail, multi_pools, token_pools, token_info, ohlcv, trades) and returns formatted results.
    async def explore_geckoterminal(
        action: str,
        network: str | None = None,
        dex_id: str | None = None,
        pool_address: str | None = None,
        pool_addresses: list[str] | None = None,
        token_address: str | None = None,
        timeframe: str = "1h",
        before_timestamp: int | None = None,
        currency: str = "usd",
        token: str = "base",
        limit: int = 1000,
        trade_volume_filter: float | None = None,
    ) -> dict[str, Any]:
        """Execute a GeckoTerminal API action and return formatted results."""
        import aiohttp
    
        async with aiohttp.ClientSession() as session:
    
            async def _get(path: str, params: dict | None = None) -> dict:
                url = f"{BASE_URL}/{path}"
                headers = {"Accept": "application/json;version=20230302"}
                async with session.get(url, headers=headers, params=params) as resp:
                    resp.raise_for_status()
                    return await resp.json()
    
            # ── Networks ─────────────────────────────────────────────────
            if action == "networks":
                data = await _get("networks")
                networks = _extract_networks(data)
                return {"formatted_output": f"Available Networks ({len(networks)}):\n\n{format_networks_table(networks)}"}
    
            # ── DEXes by network ─────────────────────────────────────────
            elif action == "dexes":
                if not network:
                    raise ToolError("'network' is required for action='dexes'")
                data = await _get(f"networks/{network}/dexes")
                dexes = _extract_dexes(data)
                return {"formatted_output": f"DEXes on {network} ({len(dexes)}):\n\n{format_dexes_table(dexes)}"}
    
            # ── Trending pools ───────────────────────────────────────────
            elif action == "trending_pools":
                if network:
                    data = await _get(f"networks/{network}/trending_pools")
                    title = f"Trending Pools on {network}"
                else:
                    data = await _get("networks/trending_pools")
                    title = "Trending Pools (All Networks)"
                pools = _extract_pools(data)
                return {"formatted_output": f"{title} ({len(pools)}):\n\n{format_pools_table(pools)}"}
    
            # ── Top pools ────────────────────────────────────────────────
            elif action == "top_pools":
                if not network:
                    raise ToolError("'network' is required for action='top_pools'")
                if dex_id:
                    data = await _get(f"networks/{network}/dexes/{dex_id}/pools")
                    title = f"Top Pools on {network} / {dex_id}"
                else:
                    data = await _get(f"networks/{network}/pools")
                    title = f"Top Pools on {network}"
                pools = _extract_pools(data)
                return {"formatted_output": f"{title} ({len(pools)}):\n\n{format_pools_table(pools)}"}
    
            # ── New pools ────────────────────────────────────────────────
            elif action == "new_pools":
                if network:
                    data = await _get(f"networks/{network}/new_pools")
                    title = f"New Pools on {network}"
                else:
                    data = await _get("networks/new_pools")
                    title = "New Pools (All Networks)"
                pools = _extract_pools(data)
                return {"formatted_output": f"{title} ({len(pools)}):\n\n{format_pools_table(pools)}"}
    
            # ── Pool detail ──────────────────────────────────────────────
            elif action == "pool_detail":
                if not network or not pool_address:
                    raise ToolError("'network' and 'pool_address' are required for action='pool_detail'")
                data = await _get(f"networks/{network}/pools/{pool_address}")
                pools = _extract_pools(data)
                if pools:
                    p = pools[0]
                    lines = [
                        f"Pool: {p.get('name', 'N/A')}",
                        f"Address: {p.get('address', 'N/A')}",
                        f"DEX: {p.get('dex_id', 'N/A')}",
                        f"Base Token Price: {format_currency(p.get('base_token_price_usd'), decimals=6)}",
                        f"Quote Token Price: {format_currency(p.get('quote_token_price_usd'), decimals=6)}",
                        f"Reserve (USD): {format_number(p.get('reserve_in_usd'))}",
                        f"FDV: {format_number(p.get('fdv_usd'))}",
                        f"Market Cap: {format_number(p.get('market_cap_usd'))}",
                        f"24h Volume: {format_number(p.get('volume_h24'))}",
                        f"1h Change: {p.get('price_change_h1', 'N/A')}%",
                        f"24h Change: {p.get('price_change_h24', 'N/A')}%",
                        f"24h Txns: {p.get('txns_h24_buys', 'N/A')} buys / {p.get('txns_h24_sells', 'N/A')} sells",
                        f"Created: {p.get('pool_created_at', 'N/A')}",
                    ]
                    return {"formatted_output": "\n".join(lines)}
                return {"formatted_output": "Pool not found."}
    
            # ── Multiple pools ───────────────────────────────────────────
            elif action == "multi_pools":
                if not network or not pool_addresses:
                    raise ToolError("'network' and 'pool_addresses' are required for action='multi_pools'")
                addresses_str = ",".join(pool_addresses)
                data = await _get(f"networks/{network}/pools/multi/{addresses_str}")
                pools = _extract_pools(data)
                return {"formatted_output": f"Pools on {network} ({len(pools)}):\n\n{format_pools_table(pools)}"}
    
            # ── Pools by token ───────────────────────────────────────────
            elif action == "token_pools":
                if not network or not token_address:
                    raise ToolError("'network' and 'token_address' are required for action='token_pools'")
                data = await _get(f"networks/{network}/tokens/{token_address}/pools")
                pools = _extract_pools(data)
                return {"formatted_output": f"Top Pools for token on {network} ({len(pools)}):\n\n{format_pools_table(pools)}"}
    
            # ── Token info ───────────────────────────────────────────────
            elif action == "token_info":
                if not network or not token_address:
                    raise ToolError("'network' and 'token_address' are required for action='token_info'")
                data = await _get(f"networks/{network}/tokens/{token_address}")
                token_data = _extract_token_info(data)
                return {"formatted_output": format_token_info(token_data)}
    
            # ── OHLCV candles ────────────────────────────────────────────
            elif action == "ohlcv":
                if not network or not pool_address:
                    raise ToolError("'network' and 'pool_address' are required for action='ohlcv'")
                tf_unit, tf_period = _parse_timeframe(timeframe)
                params: dict[str, Any] = {"aggregate": tf_period, "limit": limit, "currency": currency, "token": token}
                if before_timestamp:
                    params["before_timestamp"] = before_timestamp
                data = await _get(f"networks/{network}/pools/{pool_address}/ohlcv/{tf_unit}", params=params)
                candles = _extract_ohlcv(data)
                # Sort by timestamp ascending and deduplicate
                seen: set[int] = set()
                unique = []
                for c in candles:
                    if c["timestamp"] not in seen:
                        seen.add(c["timestamp"])
                        unique.append(c)
                unique.sort(key=lambda x: x["timestamp"])
                return {"formatted_output": (
                    f"OHLCV for pool {truncate_address(pool_address)} on {network} ({timeframe}, {len(unique)} candles):\n\n"
                    f"{format_ohlcv_table(unique)}"
                )}
    
            # ── Trades ───────────────────────────────────────────────────
            elif action == "trades":
                if not network or not pool_address:
                    raise ToolError("'network' and 'pool_address' are required for action='trades'")
                params = {}
                if trade_volume_filter is not None:
                    params["trade_volume_in_usd_greater_than"] = trade_volume_filter
                data = await _get(f"networks/{network}/pools/{pool_address}/trades", params=params or None)
                trades = _extract_trades(data)
                return {"formatted_output": (
                    f"Recent Trades for pool {truncate_address(pool_address)} on {network} ({len(trades)}):\n\n"
                    f"{format_trades_table(trades)}"
                )}
    
            else:
                raise ToolError(
                    f"Unknown action '{action}'. Available actions: networks, dexes, trending_pools, top_pools, "
                    f"new_pools, pool_detail, multi_pools, token_pools, token_info, ohlcv, trades"
                )
  • MCP tool registration using @mcp.tool() decorator. Defines the public API with type annotations (Literal for action), docstring, and calls the implementation.
    @mcp.tool()
    @handle_errors("explore GeckoTerminal")
    async def explore_geckoterminal(
            action: Literal[
                "networks", "dexes", "trending_pools", "top_pools", "new_pools",
                "pool_detail", "multi_pools", "token_pools", "token_info", "ohlcv", "trades",
            ],
            network: str | None = None,
            dex_id: str | None = None,
            pool_address: str | None = None,
            pool_addresses: list[str] | None = None,
            token_address: str | None = None,
            timeframe: str = "1h",
            before_timestamp: int | None = None,
            currency: str = "usd",
            token: str = "base",
            limit: int = 1000,
            trade_volume_filter: float | None = None,
    ) -> str:
        """Explore DEX market data from GeckoTerminal (free, no API key needed).
    
        Progressive discovery flow:
        1. action="networks" → List all supported networks (solana, eth, bsc, ...)
        2. action="dexes" + network → List DEXes on a network
        3. action="trending_pools" (+ network) → Trending pools globally or per network
        4. action="top_pools" + network (+ dex_id) → Top pools by volume on a network/dex
        5. action="new_pools" (+ network) → Recently created pools
        6. action="pool_detail" + network + pool_address → Detailed info for one pool
        7. action="multi_pools" + network + pool_addresses → Compare multiple pools
        8. action="token_pools" + network + token_address → Top pools for a token
        9. action="token_info" + network + token_address → Token details (price, mcap, fdv)
        10. action="ohlcv" + network + pool_address → OHLCV candle data
        11. action="trades" + network + pool_address → Recent trades
    
        Args:
            action: The data to retrieve.
            network: Network ID (e.g., 'solana', 'eth', 'bsc'). Required for most actions.
            dex_id: DEX ID filter for top_pools (e.g., 'raydium', 'uniswap_v3').
            pool_address: Pool contract address (for pool_detail, ohlcv, trades).
            pool_addresses: List of pool addresses (for multi_pools).
            token_address: Token contract address (for token_pools, token_info).
            timeframe: OHLCV interval (default: '1h'). Options: 1m, 5m, 15m, 1h, 4h, 12h, 1d.
            before_timestamp: Fetch OHLCV candles before this unix timestamp (pagination).
            currency: OHLCV price currency, 'usd' or 'token' (default: 'usd').
            token: Which token's price for OHLCV, 'base' or 'quote' (default: 'base').
            limit: Max OHLCV candles to return (default: 1000).
            trade_volume_filter: Min trade volume in USD to filter trades (optional).
        """
        result = await explore_geckoterminal_impl(
            action=action,
            network=network,
            dex_id=dex_id,
            pool_address=pool_address,
            pool_addresses=pool_addresses,
            token_address=token_address,
            timeframe=timeframe,
            before_timestamp=before_timestamp,
            currency=currency,
            token=token,
            limit=limit,
            trade_volume_filter=trade_volume_filter,
        )
        return result.get("formatted_output", str(result))
  • Schema/type definitions for the tool parameters including Literal constraint on action and optional parameters like network, pool_address, timeframe, limit, etc.
    async def explore_geckoterminal(
            action: Literal[
                "networks", "dexes", "trending_pools", "top_pools", "new_pools",
                "pool_detail", "multi_pools", "token_pools", "token_info", "ohlcv", "trades",
            ],
            network: str | None = None,
            dex_id: str | None = None,
            pool_address: str | None = None,
            pool_addresses: list[str] | None = None,
            token_address: str | None = None,
            timeframe: str = "1h",
            before_timestamp: int | None = None,
            currency: str = "usd",
            token: str = "base",
            limit: int = 1000,
            trade_volume_filter: float | None = None,
    ) -> str: ...
  • Import statement that aliases the handler implementation for registration in the MCP server.
    from hummingbot_mcp.tools.geckoterminal import explore_geckoterminal as explore_geckoterminal_impl
    from hummingbot_mcp.tools import history as history_tools
  • Constants (BASE_URL, OHLCV_TIMEFRAMES, TIMEFRAME_UNIT_MAP) used by the handler for API calls and data parsing.
    BASE_URL = "https://api.geckoterminal.com/api/v2"
    
    OHLCV_TIMEFRAMES = ["1m", "5m", "15m", "1h", "4h", "12h", "1d"]
    
    TIMEFRAME_UNIT_MAP = {"m": "minute", "h": "hour", "d": "day"}
    
    
    def _parse_timeframe(timeframe: str) -> tuple[str, str]:
        """Parse '1h' into ('hour', '1')."""
        if timeframe not in OHLCV_TIMEFRAMES:
            raise ToolError(f"Unsupported timeframe '{timeframe}'. Use one of: {OHLCV_TIMEFRAMES}")
        period, unit = timeframe[:-1], timeframe[-1]
        return TIMEFRAME_UNIT_MAP[unit], period
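A quick standalone illustration of the timeframe parsing above (a sketch: `ValueError` stands in for the tool's `ToolError`):

```python
TIMEFRAME_UNIT_MAP = {"m": "minute", "h": "hour", "d": "day"}
OHLCV_TIMEFRAMES = ["1m", "5m", "15m", "1h", "4h", "12h", "1d"]

def parse_timeframe(timeframe: str) -> tuple[str, str]:
    """Split e.g. '15m' into the (unit, period) pair the OHLCV endpoint expects."""
    if timeframe not in OHLCV_TIMEFRAMES:
        raise ValueError(f"Unsupported timeframe {timeframe!r}. Use one of: {OHLCV_TIMEFRAMES}")
    return TIMEFRAME_UNIT_MAP[timeframe[-1]], timeframe[:-1]

print(parse_timeframe("15m"))  # → ('minute', '15')
print(parse_timeframe("1d"))   # → ('day', '1')
```

Note that only the whitelisted timeframes pass: `"2h"`, for instance, is rejected even though it would parse cleanly.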
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains that all actions are read-only data retrieval and notes that the API is free with no key required. It does not disclose rate limits or error handling, but for a data-exploration tool the behavioral coverage is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a numbered list of actions, brief explanations, and a separate Args section. It is front-loaded with the overall purpose and each sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 12 parameters, no output schema, and no annotations, the description is fairly complete. It covers all actions and their parameter dependencies. A minor gap is that it does not explicitly list which actions require 'network', but 'Required for most actions' is sufficient guidance for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0% description coverage, so the description compensates by listing all parameters and explaining their relationship to actions in the progressive flow. It adds context beyond the schema, such as when each parameter is required and the meaning of action enum values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool explores DEX market data from GeckoTerminal and enumerates 11 specific actions. It distinguishes itself from sibling tools like explore_dex_pools by being specific to GeckoTerminal and providing a progressive discovery flow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a progressive discovery flow indicating the order of actions and their required parameters. It does not explicitly state when not to use the tool or compare to alternatives, but the structured steps imply appropriate usage contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
