Glama

List score configurations

listScoreConfigs

Retrieve score configurations (definitions for score names, ranges, and categories). Supports pagination for managing large result sets.

Instructions

List score configurations (definitions for score names, ranges, and categories).

Input Schema

Name  | Required | Description              | Default
------|----------|--------------------------|--------
page  | No       | Page number              | 1
limit | No       | Items per page (max 100) | 50
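The pagination rules above can be sketched as a small standalone validator (a hypothetical sketch; the server itself validates via its zod schema, and the helper name below is an assumption, not part of the server's API):

```typescript
// Illustrative sketch of the pagination constraints from the schema above:
// page is a positive integer (default 1); limit is 1..100 (default 50).
interface PaginationArgs {
  page?: number;
  limit?: number;
}

function normalizePagination(args: PaginationArgs): { page: number; limit: number } {
  const page = args.page ?? 1;
  const limit = args.limit ?? 50;
  if (!Number.isInteger(page) || page < 1) {
    throw new RangeError("page must be a positive integer");
  }
  if (!Number.isInteger(limit) || limit < 1 || limit > 100) {
    throw new RangeError("limit must be an integer between 1 and 100");
  }
  return { page, limit };
}
```

Calling `normalizePagination({})` yields the documented defaults, while out-of-range values are rejected rather than silently clamped.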

Implementation Reference

  • src/tools.ts:131-140 (registration)
    Registration of the 'listScoreConfigs' tool via server.registerTool(...).
    server.registerTool(
      "listScoreConfigs",
      {
        title: "List score configurations",
        description:
          "List score configurations (definitions for score names, ranges, and categories).",
        inputSchema: { ...paginationShape },
      },
      async (args) => asJson(await client.get("/api/public/score-configs", args)),
    );
  • Handler function for listScoreConfigs: calls GET /api/public/score-configs with pagination args.
    async (args) => asJson(await client.get("/api/public/score-configs", args)),
  • Input schema: paginationShape defines optional page and limit fields used by listScoreConfigs.
    export const paginationShape = {
      page: z.number().int().positive().optional().describe("Page number (default 1)"),
      limit: z
        .number()
        .int()
        .min(1)
        .max(100)
        .optional()
        .describe("Items per page (default 50, max 100)"),
    };
  • Helper function asJson wraps data in MCP content response format.
    const asJson = (data: unknown) => ({
      content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }],
    });
  • Helper enc for URI-encoding URL parameters.
    const enc = encodeURIComponent;
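Putting the helpers together, the response shape produced by `asJson` can be exercised on its own (the sample payload below is invented for illustration; a real response would come from the Langfuse `/api/public/score-configs` endpoint):

```typescript
// Standalone copy of the asJson helper shown above: wraps arbitrary data
// in the MCP text-content response shape.
const asJson = (data: unknown) => ({
  content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }],
});

// Invented sample payload, for illustration only.
const sample = { data: [{ name: "accuracy", dataType: "NUMERIC" }] };
const res = asJson(sample);

// res.content[0].text holds pretty-printed JSON that round-trips cleanly:
const parsed = JSON.parse(res.content[0].text);
```

Because the handler returns the raw API body serialized this way, agents receive the full JSON payload as a single text content block.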
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description provides only a basic definition. It does not disclose behavioral aspects like whether the listing is paginated (though implied by parameters), authentication needs, rate limits, or any side effects. Additional context on what constitutes a 'score configuration' could improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the purpose with a clarifying parenthetical. It is front-loaded with the action and resource, and every word is meaningful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 parameters, no output schema, no annotations), the description adequately covers purpose but lacks details about return format, sorting, filtering, or error scenarios. It is minimally complete but could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both page and limit parameters having descriptions. The description does not add extra meaning beyond what the schema already provides, so a baseline score of 3 is warranted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list' and the resource 'score configurations' with an explicit clarification that these are definitions for score names, ranges, and categories. It distinguishes well from siblings like listScores (which lists actual scores) and getScoreConfig (which gets a single config).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is given on when to use this tool versus alternatives such as getScoreConfig or listScores. The description does not mention exclusions, prerequisites, or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
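One way to address the Behavior and Usage Guidelines findings above would be a richer registration config. The sketch below is a hypothetical revision, not the server's published code; the annotation fields follow the MCP tool-annotation convention, and the expanded wording is a suggestion only:

```typescript
// Hypothetical revised tool config (suggestion, not the actual implementation).
const revisedConfig = {
  title: "List score configurations",
  description:
    "List score configurations (definitions for score names, ranges, and categories). " +
    "Read-only; requires Langfuse API credentials. Results are paginated. " +
    "Use getScoreConfig to fetch a single config, or listScores for actual score values.",
  annotations: {
    readOnlyHint: true,   // discloses that the tool has no side effects
    idempotentHint: true, // repeated calls with the same args return the same page
  },
};
```

A config along these lines would disclose auth requirements and side-effect behavior, and give agents explicit cross-tool guidance of the "use X instead of Y when Z" form.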
