
HF Dataset MCP

by cfahlgren1

search_datasets

Search and filter Hugging Face datasets by name, tags, author, or description to find relevant data for machine learning projects.

Instructions

Find datasets on the Hugging Face Hub by name, tag, or author

Input Schema

| Name      | Required | Description                                                              | Default |
|-----------|----------|--------------------------------------------------------------------------|---------|
| search    | No       | Query to match against dataset names and descriptions                    | —       |
| author    | No       | Filter by dataset owner (user or organization)                           | —       |
| filter    | No       | Tag filters (e.g., task_categories:text-classification, language:en)     | —       |
| sort      | No       | Sort order for results                                                   | —       |
| direction | No       | Sort direction                                                           | desc    |
| limit     | No       | Max results to return (max: 100)                                         | 20      |
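To illustrate the parameter shapes (the query terms below are only examples, not taken from the source), a typical arguments object for this tool might look like:

```json
{
  "search": "sentiment",
  "filter": ["task_categories:text-classification", "language:en"],
  "sort": "downloads",
  "direction": "desc",
  "limit": 10
}
```

Note that `filter` entries use the Hub's `key:value` tag syntax, and every parameter is optional.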

Implementation Reference

  • The handler function for the search_datasets tool, which processes arguments and fetches datasets from the Hugging Face Hub.
    async ({ search, author, filter, sort, direction, limit }) => {
      const params: Record<string, string | number | string[] | undefined> = {
        search,
        author,
        filter,
        sort,
        direction: direction === "asc" ? "1" : direction === "desc" ? "-1" : undefined,
        limit: limit ?? 20,
      };
    
      const datasets = await fetchHub<DatasetInfo[]>("/api/datasets", params);
    
      const results = datasets.map((d) => ({
        id: d.id,
        author: d.author,
        description: d.description?.slice(0, 200),
        downloads: d.downloads,
        likes: d.likes,
        trending_score: d.trendingScore,
        tags: d.tags?.slice(0, 10),
        last_modified: d.lastModified,
        private: d.private,
        gated: d.gated,
      }));
    
      return {
        content: [
          {
            type: "text" as const,
            text: JSON.stringify(results, null, 2),
          },
        ],
      };
    }
  • Zod schema defining the input parameters for the search_datasets tool.
    {
      search: z
        .string()
        .optional()
        .describe("Query to match against dataset names and descriptions"),
      author: z
        .string()
        .optional()
        .describe("Filter by dataset owner (user or organization)"),
      filter: z
        .array(z.string())
        .optional()
        .describe(
          "Tag filters (e.g., task_categories:text-classification, language:en)"
        ),
      sort: z
        .enum([
          "trending_score",
          "downloads",
          "likes",
          "created_at",
          "last_modified",
        ])
        .optional()
        .describe("Sort order for results"),
      direction: z
        .enum(["asc", "desc"])
        .optional()
        .describe("Sort direction (default: desc)"),
      limit: z
        .number()
        .int()
        .min(1)
        .max(100)
        .optional()
        .describe("Max results to return (default: 20, max: 100)"),
    },
  • Registration function for the search_datasets tool.
    export function registerSearchDatasets(server: McpServer) {
      server.tool(
        "search_datasets",
        "Find datasets on the Hugging Face Hub by name, tag, or author",
        {
          search: z
            .string()
            .optional()
            .describe("Query to match against dataset names and descriptions"),
          author: z
            .string()
            .optional()
            .describe("Filter by dataset owner (user or organization)"),
          filter: z
            .array(z.string())
            .optional()
            .describe(
              "Tag filters (e.g., task_categories:text-classification, language:en)"
            ),
          sort: z
            .enum([
              "trending_score",
              "downloads",
              "likes",
              "created_at",
              "last_modified",
            ])
            .optional()
            .describe("Sort order for results"),
          direction: z
            .enum(["asc", "desc"])
            .optional()
            .describe("Sort direction (default: desc)"),
          limit: z
            .number()
            .int()
            .min(1)
            .max(100)
            .optional()
            .describe("Max results to return (default: 20, max: 100)"),
        },
        async ({ search, author, filter, sort, direction, limit }) => {
          const params: Record<string, string | number | string[] | undefined> = {
            search,
            author,
            filter,
            sort,
            direction: direction === "asc" ? "1" : direction === "desc" ? "-1" : undefined,
            limit: limit ?? 20,
          };
    
          const datasets = await fetchHub<DatasetInfo[]>("/api/datasets", params);
    
          const results = datasets.map((d) => ({
            id: d.id,
            author: d.author,
            description: d.description?.slice(0, 200),
            downloads: d.downloads,
            likes: d.likes,
            trending_score: d.trendingScore,
            tags: d.tags?.slice(0, 10),
            last_modified: d.lastModified,
            private: d.private,
            gated: d.gated,
          }));
    
          return {
            content: [
              {
                type: "text" as const,
                text: JSON.stringify(results, null, 2),
              },
            ],
          };
        }
      );
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden of behavioral disclosure. It successfully identifies the external service (Hugging Face Hub) and search scope. However, it omits critical behavioral details: return format (list of metadata objects), pagination behavior (only limit is mentioned, no offset/cursor), and whether private datasets require authentication.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It front-loads the action and resource, placing optional search dimensions at the end where they serve as supporting context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich input schema (6 well-documented parameters, 100% coverage) and lack of output schema, the description provides sufficient context for an AI to understand the tool's role. A minor gap remains regarding the return value structure, though the schema completeness reduces the burden on the description to explain inputs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds value by mapping the conceptual search terms ('name, tag, or author') to the parameter purposes, helping the agent understand the relationship between the 'search', 'filter', and 'author' parameters. It could further clarify the 'filter' syntax (key:value pairs).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Find'), a specific resource ('datasets on the Hugging Face Hub'), and search dimensions ('by name, tag, or author'). However, it does not distinguish this tool from the sibling 'search_dataset' (if distinct), nor does it clarify when to use it versus 'get_dataset_info' for already-known datasets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states what the tool does but provides no explicit guidance on when to use it versus alternatives like 'get_dataset_info' or 'filter_rows'. There is no mention of prerequisites, required auth, or when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
