
OpenXAI MCP Server

by Cappybara12

get_leaderboard

Retrieve ranked performance results for AI explanation methods on specified datasets and metrics to compare evaluation outcomes.

Instructions

Get leaderboard results for explanation methods

Input Schema

Name     Required  Description                            Default
dataset  No        Dataset name to get leaderboard for    —
metric   No        Metric to sort leaderboard by          —
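
Both parameters are optional; when they are omitted, the handler falls back to 'german' and 'PGI'. For illustration, a tools/call request for this tool could look like the following (a hypothetical example, not taken from the repository):

  // Hypothetical MCP tools/call request for get_leaderboard;
  // both arguments may be dropped to use the handler's defaults.
  const request = {
    jsonrpc: '2.0',
    id: 1,
    method: 'tools/call',
    params: {
      name: 'get_leaderboard',
      arguments: { dataset: 'german', metric: 'PGI' }
    }
  };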

Implementation Reference

  • The main handler function for the 'get_leaderboard' tool. It returns a hardcoded sample leaderboard ranking several explanation methods; the dataset and metric parameters are echoed back in the output header but do not change the rankings.
    async getLeaderboard(dataset, metric) {
      const sampleLeaderboard = {
        dataset: dataset || 'german',
        metric: metric || 'PGI',
        rankings: [
          { rank: 1, method: 'SHAP', score: 0.87, model: 'XGBoost' },
          { rank: 2, method: 'LIME', score: 0.82, model: 'XGBoost' },
          { rank: 3, method: 'Integrated Gradients', score: 0.78, model: 'Neural Network' },
          { rank: 4, method: 'Gradient × Input', score: 0.75, model: 'Neural Network' },
          { rank: 5, method: 'Guided Backprop', score: 0.71, model: 'Neural Network' }
        ],
        updated: new Date().toISOString()
      };
    
      return {
        content: [
          {
            type: 'text',
            text: `OpenXAI Leaderboard\n\n` +
                  `Dataset: ${sampleLeaderboard.dataset}\n` +
                  `Metric: ${sampleLeaderboard.metric}\n` +
                  `Last Updated: ${sampleLeaderboard.updated}\n\n` +
                  `Rankings:\n` +
                  JSON.stringify(sampleLeaderboard.rankings, null, 2) +
                  `\n\nNote: This is a sample leaderboard. Visit https://open-xai.github.io/ for actual leaderboard data.`
          }
        ]
      };
    }
  • Input schema definition for the get_leaderboard tool, specifying optional dataset and metric parameters.
    inputSchema: {
      type: 'object',
      properties: {
        dataset: {
          type: 'string',
          description: 'Dataset name to get leaderboard for'
        },
        metric: {
          type: 'string',
          description: 'Metric to sort leaderboard by'
        }
      },
      required: []
    }
  • index.js:279-280 (registration)
    Registration of the get_leaderboard tool handler in the CallToolRequestSchema switch statement, dispatching calls to the getLeaderboard method.
    case 'get_leaderboard':
      return await this.getLeaderboard(args.dataset, args.metric);
  • index.js:199-216 (registration)
    Tool registration in the ListToolsRequestSchema response, including name, description, and input schema (see the sketch after this list for how both registrations fit into the server setup).
    {
      name: 'get_leaderboard',
      description: 'Get leaderboard results for explanation methods',
      inputSchema: {
        type: 'object',
        properties: {
          dataset: {
            type: 'string',
            description: 'Dataset name to get leaderboard for'
          },
          metric: {
            type: 'string',
            description: 'Metric to sort leaderboard by'
          }
        },
        required: []
      }
    },
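
For context, the two snippets above are excerpts; in a typical MCP server built on @modelcontextprotocol/sdk they sit inside the ListToolsRequestSchema and CallToolRequestSchema handlers. The sketch below shows one plausible arrangement; the class name, server metadata, and surrounding structure are assumptions, not the actual contents of index.js.

  import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  import {
    CallToolRequestSchema,
    ListToolsRequestSchema
  } from '@modelcontextprotocol/sdk/types.js';

  class OpenXAIServer {
    constructor() {
      // Server name and version are placeholders.
      this.server = new Server(
        { name: 'openxai-mcp', version: '1.0.0' },
        { capabilities: { tools: {} } }
      );
      this.setupHandlers();
    }

    setupHandlers() {
      // Advertise the tool (the ListToolsRequestSchema registration above).
      this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
        tools: [
          {
            name: 'get_leaderboard',
            description: 'Get leaderboard results for explanation methods',
            inputSchema: { /* schema as shown above */ }
          }
          // ...other tools
        ]
      }));

      // Dispatch calls (the CallToolRequestSchema switch above).
      this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
        const { name, arguments: args } = request.params;
        switch (name) {
          case 'get_leaderboard':
            return await this.getLeaderboard(args.dataset, args.metric);
          default:
            throw new Error(`Unknown tool: ${name}`);
        }
      });
    }

    // getLeaderboard(dataset, metric) { ... } as shown above
  }
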
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool retrieves results but doesn't disclose behavioral traits such as whether it's read-only, requires authentication, has rate limits, returns paginated data, or what format the leaderboard results take. This leaves significant gaps for a tool that likely returns structured ranking data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
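
One concrete way to close this gap: recent MCP spec revisions allow an optional annotations object on the tool definition, which can declare hints such as read-only or idempotent behavior. A sketch of what that might look like for this tool, assuming the server adopted that spec feature (it does not currently):

  {
    name: 'get_leaderboard',
    description: 'Get leaderboard results for explanation methods. Read-only; returns sample data, not live results.',
    inputSchema: { /* as above */ },
    annotations: {
      title: 'Get OpenXAI Leaderboard',
      readOnlyHint: true,       // no side effects on any system
      destructiveHint: false,
      idempotentHint: true,     // identical inputs return the same rankings (timestamp aside)
      openWorldHint: false      // currently hardcoded sample data, no external calls
    }
  }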

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and appropriately sized for a simple retrieval tool, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It doesn't explain what the leaderboard results include (e.g., rankings, scores, methods), how they're structured, or any behavioral context. For a tool with two parameters and likely complex output, this leaves too much unspecified for reliable agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('dataset' and 'metric') with descriptions. The description adds no additional meaning beyond what the schema provides, such as examples of valid datasets or metrics, or how they affect the leaderboard. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
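
One low-cost improvement would be to push that extra meaning into the schema itself, since both fallback values already exist in the handler. A sketch (only 'german' and 'PGI' are known valid values from the code; any fuller list would have to come from the OpenXAI project):

  dataset: {
    type: 'string',
    description: 'Dataset name to get leaderboard for, e.g. "german"; defaults to "german" when omitted',
    default: 'german'
  },
  metric: {
    type: 'string',
    description: 'Metric to sort the leaderboard by, e.g. "PGI"; defaults to "PGI" when omitted',
    default: 'PGI'
  }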

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get leaderboard results for explanation methods'. It specifies the verb ('Get') and resource ('leaderboard results'), and distinguishes it from siblings like 'list_metrics' or 'list_explainers' by focusing on ranked results. However, it doesn't explicitly differentiate from all siblings (e.g., 'evaluate_explanation' might also involve ranking).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when not to use it, or how it differs from siblings like 'evaluate_explanation' or 'list_metrics'. The agent must infer usage from the name and context alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
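
A revised description could carry that guidance directly. A hypothetical rewrite, using the sibling tool names mentioned in this review and the behavior visible in the handler (their exact roles are presumed from their names):

  description:
    'Get ranked leaderboard results for explanation methods on a dataset. ' +
    'Read-only; currently returns sample rankings rather than live data from https://open-xai.github.io/. ' +
    'Use list_metrics or list_explainers first to discover valid metric and method names; ' +
    'use evaluate_explanation to score a single explanation instead of comparing methods.'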
