AgentBase1
by AgentBase1

list_categories

Retrieve all categories in the OpenClaw registry, with file counts and descriptions, to see what types of instruction files are available.

Instructions

List all categories in the OpenClaw registry with file counts and descriptions. Use this to understand what types of instruction files are available.

Input Schema

This tool takes no arguments.

Implementation Reference

  • The handleListCategories() function implements the list_categories tool logic. It fetches the registry index, counts files per category, and returns a formatted list of categories with descriptions and file counts (a sketch of the rendered output follows this list).
    async function handleListCategories() {
      // Fetch the registry index; fetchIndex() is defined elsewhere in the server source.
      const index = await fetchIndex();
      const entries = index.entries || [];

      // Human-readable blurbs for each known category.
      const categoryInfo = {
        'system-prompts': 'Full agent identity and behavior definitions — the complete personality and rules for an agent',
        'skills': 'Scoped capability modules for specific tasks — drop into any agent to add a capability',
        'workflows': 'Multi-step sequential or conditional process instructions',
        'tool-definitions': 'Function schemas, API patterns, and tool usage instructions',
        'domain-packs': 'Deep field context — industry knowledge, terminology, and domain standards',
        'safety-filters': 'Output validation, content filtering, and harm detection patterns',
        'orchestration': 'Multi-agent coordination and handoff protocols'
      };

      // Tally files per category; entries with unknown categories are ignored.
      const counts = {};
      for (const cat of index.categories) counts[cat] = 0;
      for (const e of entries) {
        if (counts[e.category] !== undefined) counts[e.category]++;
      }

      // One formatted block per category: bold name, file count, description.
      const lines = index.categories.map(cat => {
        const count = counts[cat] || 0;
        const desc = categoryInfo[cat] || '';
        return `**${cat}** (${count} file${count === 1 ? '' : 's'})\n  ${desc}`;
      });

      // Return a single MCP text content item.
      return {
        content: [{
          type: 'text',
          text: `OpenClaw Registry — ${index.count} total files\n\n${lines.join('\n\n')}\n\nUse search_registry with category filter to browse files in any category.`
        }]
      };
    }
  • The tool definition in the TOOLS array specifies the name 'list_categories', the description, and an inputSchema with an empty properties object (the tool takes no parameters).
    {
      name: 'list_categories',
      description: 'List all categories in the OpenClaw registry with file counts and descriptions. Use this to understand what types of instruction files are available.',
      inputSchema: {
        type: 'object',
        properties: {}
      }
    },
  • index.js:259-263 (registration)
    The switch statement in the CallToolRequestSchema handler that dispatches the 'list_categories' tool call to the handleListCategories function.
    switch (name) {
      case 'search_registry': return await handleSearchRegistry(args || {});
      case 'get_instruction': return await handleGetInstruction(args || {});
      case 'list_categories': return await handleListCategories();
      case 'get_featured': return await handleGetFeatured();
  • index.js:253 (registration)
    Server registration of the TOOLS array via ListToolsRequestSchema handler, which exposes list_categories to MCP clients.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));
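
For illustration, here is roughly what the handler's text output would look like for a hypothetical index containing three system prompts and one workflow (the counts and total are invented for the example):

    OpenClaw Registry — 4 total files

    **system-prompts** (3 files)
      Full agent identity and behavior definitions — the complete personality and rules for an agent

    **workflows** (1 file)
      Multi-step sequential or conditional process instructions

    Use search_registry with category filter to browse files in any category.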
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It says that the tool lists categories with file counts and descriptions, but doesn't cover critical aspects such as whether the operation is read-only, potential rate limits, authentication needs, or how results are structured (e.g., pagination). For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
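
A minimal sketch of how the tool definition could close this gap with MCP tool annotations: the annotation fields (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from the MCP spec, while the values shown are assumptions about this server's behavior rather than anything confirmed by its source.

    {
      name: 'list_categories',
      description: 'List all categories in the OpenClaw registry with file counts and descriptions. Use this to understand what types of instruction files are available.',
      inputSchema: { type: 'object', properties: {} },
      // Assumed values: the handler only reads a remote index.
      annotations: {
        readOnlyHint: true,    // no writes or side effects
        destructiveHint: false,
        idempotentHint: true,  // repeated calls return the same view of the index
        openWorldHint: true    // reaches out to the external OpenClaw registry
      }
    }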

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose and followed by a usage hint. Every sentence adds value without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is adequate for basic understanding but incomplete. It covers what the tool does and a usage context, but lacks behavioral details (e.g., safety, performance) that would be important even for simple tools, especially without annotations to fill those gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
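
If the server targets a recent MCP spec revision with output schema support, one option for the "no output schema" gap is to declare the result shape on the tool definition. The shape below is invented from what the handler formats into text, and the handler would also need to return a matching structuredContent payload alongside the text:

    {
      name: 'list_categories',
      inputSchema: { type: 'object', properties: {} },
      // Hypothetical output schema; not present in the actual server source.
      outputSchema: {
        type: 'object',
        properties: {
          count: { type: 'number', description: 'Total files in the registry' },
          categories: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                name: { type: 'string' },
                files: { type: 'number' },
                description: { type: 'string' }
              }
            }
          }
        }
      }
    }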

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100% (since there are no parameters to describe). The description appropriately doesn't add parameter details beyond what the schema provides, which is minimal. A baseline of 4 is applied for zero-parameter tools as they inherently require less parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all categories') and the resource ('OpenClaw registry'), and specifies what information is returned ('file counts and descriptions'). It distinguishes itself from sibling tools by focusing on categories rather than individual instructions or searches, though it doesn't explicitly contrast with siblings such as get_featured.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context ('to understand what types of instruction files are available'), suggesting this tool is for discovery and overview. However, it lacks explicit guidance on when to use this versus alternatives like search_registry or get_instruction, and doesn't mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
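
For instance, a revised description along these lines would make the routing explicit; the wording is invented here, and the sibling-tool behavior is inferred only from the dispatch table above:

    description:
      'List all categories in the OpenClaw registry with file counts and descriptions. ' +
      'Read-only; takes no arguments. Call this first to discover what kinds of ' +
      'instruction files exist, then use search_registry with a category filter to ' +
      'browse a category, or get_instruction to fetch a specific file.'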

