HF Dataset MCP

by cfahlgren1

validate_dataset

Verify that a Hugging Face dataset is accessible and identify which dataset viewer features are available for it.

Instructions

Check if a dataset is accessible and which viewer features are available

Input Schema

Name      Required   Description                              Default
dataset   Yes        Dataset ID (e.g., 'stanfordnlp/imdb')
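
A minimal example of invoking the tool from an MCP client, assuming the standard @modelcontextprotocol/sdk Client API; the client construction and connection are omitted, and the dataset ID is the one from the schema example.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";

    // Assumes `client` is already constructed and connected to the HF Dataset MCP server.
    declare const client: Client;

    // Hypothetical invocation; callTool follows the SDK's Client API.
    const result = await client.callTool({
      name: "validate_dataset",
      arguments: { dataset: "stanfordnlp/imdb" },
    });
    // result.content[0].text holds the JSON string returned by the tool.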

Implementation Reference

  • The handler function for the validate_dataset tool, which calls the internal fetchDatasetViewer helper (a hedged sketch of this helper appears after the last reference below) to validate a dataset.
    async ({ dataset }) => {
      const data = await fetchDatasetViewer<ValidResponse>("/is-valid", {
        dataset,
      });
    
      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify(data, null, 2),
          },
        ],
      };
    }
  • Input schema for the validate_dataset tool using zod.
    {
      dataset: z.string().describe("Dataset ID (e.g., 'stanfordnlp/imdb')"),
    },
  • Registration of the validate_dataset tool within the MCP server.
    server.tool(
      "validate_dataset",
      "Check if a dataset is accessible and which viewer features are available",
      {
        dataset: z.string().describe("Dataset ID (e.g., 'stanfordnlp/imdb')"),
      },
      async ({ dataset }) => {
        const data = await fetchDatasetViewer<ValidResponse>("/is-valid", {
          dataset,
        });
    
        return {
          content: [
            {
              type: "text" as const,
              text: JSON.stringify(data, null, 2),
            },
          ],
        };
      }
    );
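  • The internal fetchDatasetViewer helper is referenced above, but its source is not shown on this page. Below is a minimal sketch of what such a helper might look like, assuming it wraps the public Hugging Face dataset viewer API at https://datasets-server.huggingface.co; the base URL, query handling, and error handling are assumptions, not the server's actual implementation.
    // Hypothetical sketch of fetchDatasetViewer (not the actual source).
    const DATASET_VIEWER_BASE_URL = "https://datasets-server.huggingface.co";
    
    async function fetchDatasetViewer<T>(
      path: string,
      params: Record<string, string>
    ): Promise<T> {
      // Builds e.g. /is-valid?dataset=stanfordnlp/imdb and returns the parsed JSON.
      const url = new URL(`${DATASET_VIEWER_BASE_URL}${path}`);
      for (const [key, value] of Object.entries(params)) {
        url.searchParams.set(key, value);
      }
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Dataset viewer request failed with status ${response.status}`);
      }
      return (await response.json()) as T;
    }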
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full disclosure burden. While it implies a read-only 'check' operation, it fails to define what 'accessible' means (existence vs. permissions), what specific 'viewer features' are evaluated, how errors are signaled, or what return structure to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
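
For reference, the /is-valid route of the public Hugging Face dataset viewer API reports each viewer feature as a boolean. A hedged sketch of what the ValidResponse payload likely contains follows; the field names are assumptions drawn from that public API, not confirmed from this server's source.

    // Assumed shape of the /is-valid payload (field names are assumptions).
    interface ValidResponse {
      viewer: boolean;     // full dataset viewer available
      preview: boolean;    // first-rows preview available
      search: boolean;     // full-text search enabled
      filter: boolean;     // row filtering enabled
      statistics: boolean; // column statistics available
    }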

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no filler text. Key information (action, target, scope) is front-loaded and every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema and annotations, the description fails to describe the return value structure or payload. It also does not clarify how this validation differs from simply fetching the dataset's info.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'dataset' parameter includes a description and example), the baseline score applies. The tool description adds no parameter-specific context, but the schema is self-sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Check') and identifies the specific resource and aspects being validated (dataset accessibility and 'viewer features'). However, it does not explicitly differentiate itself from the sibling tool 'get_dataset_info', which also retrieves dataset metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'get_dataset_info' or 'search_dataset', nor does it mention prerequisites or conditions for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

