
Coinversaa Pulse

Official

pulse_cohort_history

Track daily aggregate PnL, trade count, and activity for a trader cohort. Identify trends like increasing bearishness over time.

Instructions

Get historical performance data for a specific trader cohort over time. Shows how a tier's aggregate PnL, trade count, and activity have changed day-by-day. Use to spot trends like 'smart_money has been increasingly bearish over the last month.'

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| useToonFormat | No | Return data in compact toon format. Set to false for standard JSON. | true |
| tierType | Yes | Tier category: 'pnl' for profit tiers, 'size' for volume tiers | |
| tier | Yes | Tier name. PnL tiers: money_printer, smart_money, grinder, humble_earner, exit_liquidity, semi_rekt, full_rekt, giga_rekt. Size tiers: leviathan, tidal_whale, whale, small_whale, apex_predator, dolphin, fish, shrimp | |
| days | No | Number of days of history to return (1-365) | 30 |
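For illustration, a hypothetical set of arguments is sketched below. The tier names come from the schema above, but the surrounding call envelope depends on your MCP client, so treat this as an example payload rather than a definitive invocation.

```typescript
// Hypothetical arguments for pulse_cohort_history; tier names come from the
// schema above, while the call envelope depends on the MCP client in use.
const args = {
  useToonFormat: false, // false = standard JSON instead of compact toon format
  tierType: "pnl",      // "pnl" for profit tiers, "size" for volume tiers
  tier: "smart_money",  // must be a tier belonging to the chosen tierType
  days: 30,             // 1-365; the server defaults to 30 when omitted
};
console.log(JSON.stringify(args));
// → {"useToonFormat":false,"tierType":"pnl","tier":"smart_money","days":30}
```

Note that tier and tierType interact: a 'size' tier name such as whale paired with tierType 'pnl' would not match any cohort.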

Implementation Reference

  • The tool registration and handler for 'pulse_cohort_history'. It takes useToonFormat, tierType (pnl|size), tier, and days parameters, then calls the API at /pulse/cohorts/{tierType}/{tier}/history with the days parameter.

```typescript
if (shouldRegister("pulse_cohort_history")) server.registerTool(
  "pulse_cohort_history",
  {
    description: "Get historical performance data for a specific trader cohort over time. Shows how a tier's aggregate PnL, trade count, and activity have changed day-by-day. Use to spot trends like 'smart_money has been increasingly bearish over the last month.'",
    inputSchema: {
      useToonFormat: useToonFormatSchema,
      tierType: z.enum(["pnl", "size"]).describe("Tier category: 'pnl' for profit tiers, 'size' for volume tiers"),
      tier: tierSchema,
      days: z.number().min(1).max(365).default(30).describe("Number of days of history to return"),
    },
  },
  async ({ useToonFormat, tierType, tier, days }) =>
    toolResult(
      await callAPI(useToonFormat, `/pulse/cohorts/${tierType}/${tier}/history`, { days: String(days) })
    )
);
```
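The handler interpolates tierType and tier into the path and passes days through as a query parameter. A minimal sketch of the resulting request URL follows; callAPI is the server's internal helper, so the exact query encoding is an assumption.

```typescript
// Sketch of the URL the registered handler would request. `callAPI` is the
// server's internal helper, so the precise encoding here is an assumption.
function cohortHistoryURL(tierType: "pnl" | "size", tier: string, days = 30): string {
  const query = new URLSearchParams({ days: String(days) });
  return `/pulse/cohorts/${tierType}/${tier}/history?${query.toString()}`;
}

console.log(cohortHistoryURL("pnl", "smart_money"));
// → /pulse/cohorts/pnl/smart_money/history?days=30
```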
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read operation ('Get') but does not explicitly state idempotency, safety, or potential side effects. Missing details like pagination, data availability, or error handling reduce transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences: the first states purpose, the second gives a use case. It is front-loaded and free of fluff. However, the lack of a title is a minor structural gap.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters and no output schema, the description is incomplete. It does not explain the output format, the effect of useToonFormat, or how it differs from sibling tools. Agents would need additional information to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds a usage example ('smart_money') but does not elaborate on parameter meaning beyond the schema. It mentions the metrics returned but not how the parameters shape the output. Adequate, but it adds little beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves historical performance data for a trader cohort, specifies the metrics (PnL, trade count, activity), and provides a concrete use case. However, it does not explicitly differentiate from sibling tools like pulse_cohort_performance_daily or pulse_cohort_bias_history, slightly reducing clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives a clear use case (spotting trends) but fails to provide guidance on when to use this tool versus alternatives (e.g., pulse_cohort_bias_history for bias trends or pulse_cohort_positions for current positions). No 'when not to use' or alternative suggestions are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
