Count Objects

dual_count_objects
Read-only · Idempotent

Count objects matching specific filter criteria in the DUAL Web3 Operating System without retrieving full object data.

Instructions

Count objects matching filter criteria without returning the full objects.

Input Schema

Name     Required   Description                        Default
filter   Yes        Filter criteria (same as search)   (none)
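Since `filter` is an open-ended record, its keys follow the DUAL API's search semantics rather than a fixed schema. A hypothetical invocation might look like the sketch below; the keys inside `filter` (`type`, `status`) are illustrative assumptions, not documented API fields.

```typescript
// Sketch of the arguments an agent would pass to dual_count_objects.
// The keys inside `filter` are hypothetical examples; consult the DUAL
// search documentation for the real filter fields.
const params: { filter: Record<string, unknown> } = {
  filter: { type: "document", status: "active" },
};

console.log(JSON.stringify(params));
```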

Implementation Reference

  • The 'dual_count_objects' tool registration and handler. Makes a POST request to 'objects/count' API endpoint with filter criteria, returns the count of matching objects.
    server.registerTool("dual_count_objects", {
      title: "Count Objects",
      description: "Count objects matching filter criteria without returning the full objects.",
      inputSchema: {
        filter: z.record(z.unknown()).describe("Filter criteria (same as search)"),
      },
      annotations: { readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true },
    }, async (params) => {
      try {
        const res = await makeApiRequest<{ count: number }>("objects/count", "POST", params);
        return textResult(`Count: ${res.count}`);
      } catch (e) { return errorResult(handleApiError(e)); }
    });
  • Input schema for dual_count_objects tool - accepts a 'filter' parameter as a record of key-value pairs for filtering criteria.
    inputSchema: {
      filter: z.record(z.unknown()).describe("Filter criteria (same as search)"),
    },
  • makeApiRequest helper function that performs the actual HTTP request to the DUAL API. Handles authentication headers, timeout, and returns typed response data.
    export async function makeApiRequest<T>(
      endpoint: string,
      method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE" = "GET",
      data?: unknown,
      params?: Record<string, unknown>,
      options?: { timeout?: number; multipart?: boolean }
    ): Promise<T> {
      const config: AxiosRequestConfig = {
        method,
        url: `${API_BASE_URL}/${endpoint}`,
        headers: getAuthHeaders(),
        timeout: options?.timeout ?? 30000,
      };
    
      if (data !== undefined) config.data = data;
      if (params) config.params = params;
      if (options?.multipart) {
        config.headers = { ...config.headers, "Content-Type": "multipart/form-data" };
      }
    
      const response = await axios(config);
      return response.data as T;
    }
  • textResult and errorResult helper functions that format the MCP tool response content in the standard format.
    /** Standard text content response */
    export function textResult(text: string) {
      return { content: [{ type: "text" as const, text }] };
    }
    
    /** Standard error content response */
    export function errorResult(text: string) {
      return { content: [{ type: "text" as const, text }], isError: true as const };
    }
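The response helpers above are small enough to sanity-check in isolation. The sketch below reproduces them and shows the MCP content shape the handler emits for a successful count and for an API failure; the failure message string is a hypothetical example, not the server's actual wording.

```typescript
/** Standard text content response (reproduced from the helpers above). */
function textResult(text: string) {
  return { content: [{ type: "text" as const, text }] };
}

/** Standard error content response (reproduced from the helpers above). */
function errorResult(text: string) {
  return { content: [{ type: "text" as const, text }], isError: true as const };
}

// A successful count response, as produced by the handler's happy path.
const ok = textResult(`Count: ${42}`);
// A failure response; the message text here is a hypothetical example.
const fail = errorResult("DUAL API request failed");

console.log(ok.content[0].text); // prints "Count: 42"
console.log(fail.isError);       // prints true
```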
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide strong behavioral hints: readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds minimal context beyond this, only noting that it counts objects without returning full objects. It doesn't disclose additional behavioral traits like rate limits, authentication needs, or what openWorldHint means in practice. With annotations covering the core safety profile, a 3 is appropriate: the description adds some value but not rich behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Count objects matching filter criteria without returning the full objects.' It is front-loaded with the core purpose and includes a key constraint. There is zero waste, and every word earns its place, making it appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter), rich annotations covering safety and behavior, and no output schema, the description is reasonably complete. It clearly states the purpose and key constraint (no full objects returned). It could still say more about what the count response looks like (e.g., a number versus a JSON structure) or how the filter works, but with the annotations and schema coverage it's mostly adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'filter' parameter fully documented in the schema as 'Filter criteria (same as search).' The description doesn't add any meaningful semantics beyond what the schema provides, such as examples of filter usage or how it differs from search. Given the high schema coverage, the baseline score of 3 is correct, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Count objects matching filter criteria without returning the full objects.' It specifies the verb ('count'), resource ('objects'), and scope ('matching filter criteria'), distinguishing it from siblings like 'dual_list_objects' or 'dual_search_objects' that return full objects. However, it doesn't explicitly differentiate from 'dual_public_get_stats' or other counting tools, keeping it at a 4 rather than a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'without returning the full objects,' suggesting this tool should be used when only a count is needed rather than object details. However, it doesn't explicitly state when to use this tool versus alternatives like 'dual_list_objects' or 'dual_search_objects,' nor does it mention any prerequisites or exclusions. The guidance is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
