desk3
by desk3

get_puell_multiple

Calculate the Puell Multiple to assess Bitcoin mining revenue pressure, identify market undervaluation for potential buys, and detect overvaluation for sell opportunities.

Instructions

The Puell Multiple assesses Bitcoin miners' revenue by dividing daily issuance (in USD) by its 365-day average, reflecting the mining pressure in the market. Low values (green areas) indicate undervaluation and strong historical buy areas, while high values (red areas) indicate overvaluation and potential sell opportunities. It provides insight into market cycles from the perspective of miners.
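The ratio described above can be sketched in a few lines. This is an illustrative helper, not part of the Desk3 server; the function name and input list are assumptions, with issuance supplied as a daily USD series, oldest first:

```python
from statistics import mean

def puell_multiple(daily_issuance_usd: list[float]) -> float:
    """Puell Multiple = today's issuance (USD) / its 365-day moving average.

    `daily_issuance_usd` is a hypothetical input: daily BTC issuance in USD,
    oldest first, covering at least 365 days.
    """
    if len(daily_issuance_usd) < 365:
        raise ValueError("need at least 365 days of issuance data")
    window = daily_issuance_usd[-365:]
    return daily_issuance_usd[-1] / mean(window)

# Perfectly flat issuance yields a multiple of 1.0;
# a revenue spike on the last day pushes it above 1.
flat = [30_000_000.0] * 365
print(puell_multiple(flat))  # → 1.0
```

In practice the tool below returns this value precomputed by the Desk3 API, so no local calculation is needed.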

Input Schema

No arguments.

Implementation Reference

  • Core handler function that performs the API request to retrieve Puell Multiple data from the Desk3 API endpoint.
    async def get_puell_multiple() -> dict[str, Any]:
        """
        Get Puell Multiple data.
        :return: Puell Multiple data assessing Bitcoin miners' revenue by dividing daily issuance by its 365-day average
        """
        url = 'https://mcp.desk3.io/v1/market/puell-multiple'
        try:
            return request_api('get', url)
        except Exception as e:
            raise RuntimeError(f"Failed to fetch Puell Multiple data: {e}") from e
  • Tool registration within the list_tools handler, specifying the tool name, detailed description, and input schema (no parameters required).
    types.Tool(
        name="get_puell_multiple",
        description="The Puell Multiple assesses Bitcoin miners' revenue by dividing daily issuance (in USD) by its 365-day average. This reflects the mining pressure in the market. Low values (green areas) indicate undervaluation and strong historical buy areas, while high values (red areas) indicate overvaluation and potential sell opportunities. It provides insight into market cycles from the perspective of miners",
        inputSchema={
            "type": "object",
            "properties": {},
            "required": [],
        },
    ),
  • JSON Schema definition for the tool input, indicating no required properties.
    inputSchema={
        "type": "object",
        "properties": {},
        "required": [],
    },
  • Tool dispatch handler in the call_tool function that invokes the core get_puell_multiple function and returns the JSON-formatted result as TextContent.
    case "get_puell_multiple":
        try:
            data = await get_puell_multiple()
            return [
                types.TextContent(
                    type="text",
                    text=json.dumps(data, indent=2),
                )
            ]
        except Exception as e:
            raise RuntimeError(f"Failed to fetch Puell Multiple data: {e}") from e
  • Shared helper function used by all API-fetching tools, including get_puell_multiple, to make authenticated HTTP requests to the Desk3 API.
    def request_api(method: str, url: str, params: dict | None = None, data: dict | None = None) -> Any:
        headers = {
            'Accepts': 'application/json',
            'X-DESK3_PRO_API_KEY': API_KEY,
        }
        try:
            logging.info(f"Requesting {method.upper()} {url} params={params} data={data}")
            if method.lower() == 'get':
                response = requests.get(url, headers=headers, params=params)
            elif method.lower() == 'post':
                response = requests.post(url, headers=headers, json=data)
            else:
                raise ValueError(f"Unsupported HTTP method: {method}")
            response.raise_for_status()
            logging.info(f"Response {response.status_code} for {url}")
            return response.json()
        except Exception as e:
            logging.error(f"Error during {method.upper()} {url}: {e}")
            raise
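Once the JSON payload is fetched, a caller might bucket the reading into the green/red zones the description mentions. A minimal sketch; the 0.5 and 4.0 thresholds are illustrative assumptions, not values defined by the Desk3 API:

```python
def classify_puell(multiple: float) -> str:
    """Map a Puell Multiple reading onto undervalued/neutral/overvalued zones.

    The 0.5 / 4.0 cutoffs are illustrative only; tune them to your own
    model of the historical bands.
    """
    if multiple < 0.5:
        return "undervalued (historical buy zone)"
    if multiple > 4.0:
        return "overvalued (potential sell zone)"
    return "neutral"

print(classify_puell(0.3))  # → undervalued (historical buy zone)
print(classify_puell(1.2))  # → neutral
```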
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's purpose and interpretation of results (e.g., low/high values indicating market conditions), which adds context beyond basic functionality. However, it doesn't cover aspects like rate limits, error conditions, or data freshness, leaving gaps in behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized at four sentences, with the first sentence clearly stating the tool's purpose. Each sentence adds value: the first explains the calculation, the second its market reflection, the third interprets values, and the fourth provides insight perspective. It could be slightly more front-loaded by emphasizing the tool's action earlier.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a financial indicator with interpretive context), no annotations, and no output schema, the description is moderately complete. It explains what the tool calculates and how to interpret results, but it lacks details on output format, data sources, or potential limitations, which would be helpful for an AI agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't discuss parameters, which is appropriate here. It earns a baseline 4 because the schema fully covers the absence of parameters, and the description focuses on the tool's output interpretation instead.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: 'assesses Bitcoin miners' revenue by dividing daily issuance (in USD) by its 365-day average.' It provides a specific verb ('assesses') and resource ('Bitcoin miners' revenue'), though it doesn't explicitly differentiate from sibling tools like 'get_cycle_indicators' or 'get_cycles' that might also relate to market cycles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by explaining that low values indicate undervaluation/buy areas and high values indicate overvaluation/sell opportunities, which suggests when this tool might be useful for market analysis. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_fear_greed_index' or 'get_cycle_indicators', nor does it provide exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
