Glama
LamboPoewert

MadeOnSol — Solana memecoin intelligence

madeonsol_tokens_batch_buyer_quality

Read-only · Idempotent

Batch score buyer quality for up to 50 Solana token mints, leveraging a shared cache for reduced cost on warm entries.

Instructions

Bulk buyer-quality scoring for up to 50 mints in one call. Shares the 5-min LRU cache with the single-mint endpoint — already-warm mints return at ~zero cost. Response includes cache_hits counter.
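The cache behavior described above can be sketched as a TTL-bounded LRU. This is an illustrative model only: the class name, capacity, and internals are assumptions, not taken from the server source; only the 5-minute TTL comes from the description.

```typescript
// Sketch of a 5-minute TTL LRU cache like the one the description implies.
// Capacity and naming are assumptions for illustration.
type Entry<V> = { value: V; expiresAt: number };

class TtlLruCache<V> {
  private map = new Map<string, Entry<V>>();
  constructor(
    private maxEntries = 1000,        // assumed capacity
    private ttlMs = 5 * 60 * 1000,    // 5-minute TTL from the tool description
  ) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // expired entry counts as a miss
      this.map.delete(key);
      return undefined;
    }
    // Re-insert to mark as most recently used (Map preserves insertion order).
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    // Evict the least recently used entry (first key in insertion order).
    if (this.map.size >= this.maxEntries) {
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
    this.map.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

Under this model, a warm mint is answered from `get()` without touching the upstream scorer, which is what makes repeat batch calls near-free.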

Input Schema

| Name  | Required | Description                    | Default |
| ----- | -------- | ------------------------------ | ------- |
| mints | Yes      | 1–50 base58 Solana token mints | —       |
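A client can mirror these constraints before calling the tool. The helper below is hypothetical (the server validates with Zod, not this code); the base58 regex and the 32–44 character bounds are standard for Solana public keys, not taken from this server.

```typescript
// Hypothetical client-side pre-check mirroring the schema: 1-50 base58 mints.
// Base58 alphabet excludes 0, O, I, and l; Solana pubkeys are 32-44 chars.
const BASE58_MINT = /^[1-9A-HJ-NP-Za-km-z]{32,44}$/;

function validateMints(mints: string[]): string[] {
  if (mints.length < 1 || mints.length > 50) {
    throw new Error(`expected 1-50 mints, got ${mints.length}`);
  }
  const bad = mints.filter((m) => !BASE58_MINT.test(m));
  if (bad.length > 0) {
    throw new Error(`not base58 mints: ${bad.join(", ")}`);
  }
  return mints;
}
```

Rejecting malformed input locally avoids spending a round trip on a request the server-side schema would refuse anyway.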

Implementation Reference

  • src/index.ts:644-652 (registration)
    Tool registration with server.tool() — defines the tool name, description, schema (mints array 1-50), read-only annotations, and the handler that calls restQuery().
    server.tool(
      "madeonsol_tokens_batch_buyer_quality",
      "Bulk buyer-quality scoring for up to 50 mints in one call. Shares the 5-min LRU cache with the single-mint endpoint — already-warm mints return at ~zero cost. Response includes cache_hits counter.",
      { mints: z.array(z.string()).min(1).max(50).describe("1–50 base58 Solana token mints") },
      { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true },
      async ({ mints }) => ({
        content: [{ type: "text" as const, text: await restQuery("POST", "/tokens/batch/buyer-quality", { mints }) }],
      })
    );
  • Handler function — sends a POST request to /api/v1/tokens/batch/buyer-quality with { mints } payload and returns the text response.
      async ({ mints }) => ({
        content: [{ type: "text" as const, text: await restQuery("POST", "/tokens/batch/buyer-quality", { mints }) }],
      })
  • Input schema using Zod: accepts an array of strings (base58 token mint addresses), min 1 max 50.
    { mints: z.array(z.string()).min(1).max(50).describe("1–50 base58 Solana token mints") },
    { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true },
  • Helper function restQuery() — makes authenticated HTTP requests to the API v1 endpoints and returns the JSON string response or an error message. Used by the handler to call POST /tokens/batch/buyer-quality.
    async function restQuery(method: string, path: string, body?: unknown): Promise<string> {
      const headers: Record<string, string> = {
        "Content-Type": "application/json",
        ...apiKeyHeaders(),
      };
      const res = await fetch(`${BASE_URL}/api/v1${path}`, {
        method,
        headers,
        ...(body ? { body: JSON.stringify(body) } : {}),
      });
      if (!res.ok) {
        const text = await res.text().catch(() => "");
        return `Error ${res.status}: ${text}`;
      }
      return JSON.stringify(await res.json(), null, 2);
    }
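When a caller has more than 50 mints, the work has to be split across calls. A minimal chunking sketch follows; `callBatchBuyerQuality` is a hypothetical stand-in for invoking the MCP tool, not a function from this server.

```typescript
// Split a list into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Hypothetical driver: score an arbitrary number of mints by issuing
// sequential batch calls of at most 50 mints each (the tool's hard limit).
async function scoreAllMints(
  mints: string[],
  callBatchBuyerQuality: (batch: string[]) => Promise<string>,
): Promise<string[]> {
  const results: string[] = [];
  for (const batch of chunk(mints, 50)) {
    results.push(await callBatchBuyerQuality(batch));
  }
  return results;
}
```

Sequential batches also play well with the shared cache: any mint repeated across batches within the 5-minute window returns as a cache hit.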
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Provides valuable behavioral details beyond annotations: LRU cache with 5-min TTL, shared cache with single-mint endpoint, zero-cost for warm mints, and response includes cache_hits counter. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, each adding value: purpose, cache behavior, response detail. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers key points: batch size, cache, response element. Lacks output format details but given no output schema and low complexity, is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already describes 'mints' parameter with constraints and description (1–50 base58). Description adds no new parameter info beyond cache context, so baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it performs bulk buyer-quality scoring on up to 50 mints, distinguishing it from the single-mint sibling tool 'madeonsol_token_buyer_quality' via cache sharing note.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies the tool should be used when scoring multiple mints in one call, and notes cache sharing with the single-mint endpoint. However, it lacks explicit when-not-to-use guidance or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/LamboPoewert/mcp-server-madeonsol'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.