hyperd.sentiment.token

Retrieve a token's sentiment score (0-100) from recent Farcaster discussions, including band, volume, trend, and sample casts. Cost: $0.05 USDC.

Instructions

Get a token's sentiment score (0-100) from recent Farcaster discussion. Returns score, band (very_negative to very_positive), volume, trend, sample casts. Costs $0.05 in USDC.

Input Schema

  Name    Required  Description                      Default
  token   Yes       Token symbol or name
  window  No        Days (1-30) or "24h" / "7d"      "24h"
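As an illustration, the "window" constraint can be checked client-side before paying for the call. This is a sketch under the assumption that the server accepts an integer day count from 1 to 30 or the literal strings "24h"/"7d"; isValidWindow is a hypothetical helper, not part of the server.

```typescript
// Hypothetical client-side check of the "window" parameter, assuming the
// server accepts an integer day count 1-30 or the literals "24h"/"7d".
function isValidWindow(window?: string): boolean {
  if (window === undefined) return true; // omitted: defaults to "24h" server-side
  if (window === "24h" || window === "7d") return true;
  const days = Number(window);
  return Number.isInteger(days) && days >= 1 && days <= 30;
}

console.log(isValidWindow());      // true (default applies)
console.log(isValidWindow("7d"));  // true
console.log(isValidWindow("15"));  // true
console.log(isValidWindow("45"));  // false (out of range)
```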

Implementation Reference

  • src/server.ts:347-355 (registration)
    Tool registration for hyperd.sentiment.token using the McpServer.tool() method. Registers it with description, Zod schema (token: string, window: optional string), and a handler that delegates to paidGet('/api/sentiment/token').
    server.tool(
      "hyperd.sentiment.token",
      "Get a token's sentiment score (0-100) from recent Farcaster discussion. Returns score, band (very_negative to very_positive), volume, trend, sample casts. Costs $0.05 in USDC.",
      {
        token: z.string().describe("Token symbol or name"),
        window: z.string().optional().describe('Days (1-30) or "24h" / "7d". Default "24h".'),
      },
      async (args) => asText(await paidGet("/api/sentiment/token", args)),
    );
  • The handler — an async arrow function that calls paidGet('/api/sentiment/token', args) and wraps the result with asText(). This is the tool's execution logic.
    async (args) => asText(await paidGet("/api/sentiment/token", args)),
  • Input schema defined with Zod: 'token' (required string, token symbol or name) and 'window' (optional string, days or '24h'/'7d').
    {
      token: z.string().describe("Token symbol or name"),
      window: z.string().optional().describe('Days (1-30) or "24h" / "7d". Default "24h".'),
    },
  • The paidGet helper function that constructs the URL, appends query params, and executes an x402-signed HTTP GET to the API endpoint.
    async function paidGet(
      path: string,
      query: Record<string, string | number | boolean | undefined>,
    ): Promise<unknown> {
      if (!httpClient) {
        throw new Error(WALLET_NOT_CONFIGURED_MSG);
      }
    
      const url = new URL(`${API_BASE}${path}`);
      for (const [k, v] of Object.entries(query)) {
        if (v !== undefined && v !== "" && v !== null) url.searchParams.set(k, String(v));
      }
      return paidRequest("GET", url, undefined);
    }
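The skip-empty-params behavior of paidGet can be reproduced in isolation. The sketch below extracts just the URL-building step, with a placeholder API_BASE and without the x402 signing; it is not the server's actual module.

```typescript
// Standalone sketch of paidGet's query-string step: undefined, null, and
// empty-string values are skipped; everything else is stringified.
// API_BASE is a placeholder, not the real endpoint.
const API_BASE = "https://example.invalid";

function buildUrl(
  path: string,
  query: Record<string, string | number | boolean | undefined | null>,
): string {
  const url = new URL(`${API_BASE}${path}`);
  for (const [k, v] of Object.entries(query)) {
    if (v !== undefined && v !== "" && v !== null) url.searchParams.set(k, String(v));
  }
  return url.toString();
}

console.log(buildUrl("/api/sentiment/token", { token: "ETH", window: undefined }));
// https://example.invalid/api/sentiment/token?token=ETH
```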
  • The asText helper that wraps JSON response into MCP text content format.
    function asText(data: unknown) {
      return { content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }] };
    }
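A quick usage sketch of asText, showing the MCP text-content shape it produces. The sample sentiment values are invented for illustration.

```typescript
// asText wraps arbitrary JSON into MCP's text content format.
function asText(data: unknown) {
  return { content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }] };
}

// Invented sample payload, just to show the wrapping.
const result = asText({ score: 72, band: "positive" });
console.log(result.content[0].type);                  // "text"
console.log(JSON.parse(result.content[0].text).score); // 72
```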
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses cost ($0.05 USDC) and output objects (score, band, volume, trend, sample casts). Lacks explicit statement on idempotency or failure behavior, but 'Get' implies read-only. Since annotations are absent, the description does a reasonable job conveying behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no redundant information. First sentence covers purpose and output; second adds cost. Front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple query tool with no output schema, the description explains output fields and cost. Missing error handling details (e.g., unknown token) but adequate for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the description adds no meaning beyond the schema. Both parameters are adequately described there, but the description neither elaborates further nor includes examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it gets a token's sentiment score from Farcaster discussion, specifying range (0-100) and output fields. Differentiates from siblings like hyperd.pricing.get by focusing on sentiment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like hyperd.token.info or hyperd.pricing.get. The description implies use for sentiment analysis but offers no when-not-to-use context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
