Glama
whisper-sec

WhisperGraph MCP Server

Official

List WhisperGraph Labels

list_labels
Read-only · Idempotent

Discover all node labels in WhisperGraph with their node counts to avoid hallucinated labels and choose the best anchor for your query.

Instructions

List all node labels in WhisperGraph with their counts.

Use this BEFORE writing a query when you're not sure which label to anchor on. It rules out hallucinated labels (e.g. there is no DOMAIN or FQDN — only HOSTNAME) and tells you which labels are large (HOSTNAME, IPV4) vs small (RIR, COUNTRY).

Returns: an array of {label, count} rows. Cached server-side for 5 minutes.

Tip: pair with describe_label to verify which properties exist on a label before referencing them in WHERE clauses.
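The {label, count} rows lend themselves to anchor selection. A hypothetical consumer might pick the smallest matching label so the query scans fewer nodes; the LabelRow type and pickAnchor helper below are illustrative sketches, not part of the server:

```typescript
// Assumed row shape, based on the description's "{label, count} rows".
type LabelRow = { label: string; count: number };

// Prefer anchoring a query on the smallest candidate label, since small
// labels (RIR, COUNTRY) scan fewer nodes than large ones (HOSTNAME, IPV4).
function pickAnchor(rows: LabelRow[], candidates: string[]): string | undefined {
  return rows
    .filter((r) => candidates.includes(r.label))
    .sort((a, b) => a.count - b.count)[0]?.label;
}

const rows: LabelRow[] = [
  { label: "HOSTNAME", count: 1_200_000 },
  { label: "IPV4", count: 800_000 },
  { label: "RIR", count: 5 },
];

// pickAnchor(rows, ["HOSTNAME", "RIR"]) → "RIR"
```

If no candidate label exists in the graph, pickAnchor returns undefined, which is itself a useful signal that the label was hallucinated.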

Input Schema


No arguments

Output Schema

Name: labels
Required: Yes

Implementation Reference

  • listLabels() handler method in SchemaTools class — the core logic that executes the tool. Delegates to fetchLabelRows() which runs CALL db.labels() and caches results for 5 minutes.
    async listLabels(credential: Credential | null): Promise<{ labels: Row[] }> {
      return { labels: await this.fetchLabelRows(credential) };
    }
  • fetchLabelRows() private helper — executes the backend Cypher query CALL db.labels(), caches results in TtlCache, and returns empty array on failure (best-effort).
    private async fetchLabelRows(credential: Credential | null): Promise<Row[]> {
      const cached = this.labelsCache.get(LABELS_KEY);
      if (cached) return cached;
    
      let rows: Row[];
      try {
        const raw = await this.backend.execute("CALL db.labels()", undefined, credential);
        rows = raw.rows ?? [];
      } catch (error) {
        // Schema introspection is best-effort: an unreachable backend yields an
        // empty list rather than failing the tool call.
        log.warn(`list_labels failed: ${describeError(error)}`);
        rows = [];
      }
      this.labelsCache.set(LABELS_KEY, rows);
      return rows;
    }
  • src/server.ts:124-138 (registration)
    Tool registration via server.registerTool('list_labels', ...) — defines inputSchema (empty), outputSchema ({ labels: z.array(rowSchema) }), links to handler via schemaTools.listLabels().
    server.registerTool(
      "list_labels",
      {
        title: "List WhisperGraph Labels",
        description: LIST_LABELS_DESCRIPTION,
        inputSchema: {},
        outputSchema: { labels: z.array(rowSchema) },
        annotations: READ_ONLY_ANNOTATIONS,
      },
      async (_args, extra) => {
        const credential = resolveCredential(extra.requestInfo?.headers, config.apiKey);
        const result = await schemaTools.listLabels(credential);
        return toolResult(result);
      },
    );
  • Output schema definition for list_labels: { labels: z.array(rowSchema) } where rowSchema is z.record(z.string(), z.unknown()).
    outputSchema: { labels: z.array(rowSchema) },
    annotations: READ_ONLY_ANNOTATIONS,
  • LIST_LABELS_DESCRIPTION constant — the user-facing description used in the tool registration.
    export const LIST_LABELS_DESCRIPTION = `List all node labels in WhisperGraph with their counts.
    
    Use this BEFORE writing a query when you're not sure which label to anchor on. It rules out hallucinated labels (e.g. there is no DOMAIN or FQDN — only HOSTNAME) and tells you which labels are large (HOSTNAME, IPV4) vs small (RIR, COUNTRY).
    
    Returns: an array of {label, count} rows. Cached server-side for 5 minutes.
    
    Tip: pair with describe_label to verify which properties exist on a label before referencing them in WHERE clauses.`;
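The TtlCache used by fetchLabelRows() is not shown in the reference above. A minimal sketch that matches its get/set usage and the stated 5-minute TTL could look like the following; apart from the method names and the TTL, every detail here is an assumption:

```typescript
// Hypothetical minimal TTL cache matching the get/set calls in
// fetchLabelRows(). The real TtlCache implementation is not shown in
// this reference.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  // Default TTL of 5 minutes, matching the documented cache window.
  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Expired: drop the entry so the caller re-fetches from the backend.
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

On this sketch, a miss or an expired entry yields undefined, so the `if (cached) return cached;` guard in fetchLabelRows() falls through to the backend query. Note that an empty array is truthy in JavaScript, which means the best-effort failure path also caches its empty result for the TTL window.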
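Because rowSchema is z.record(z.string(), z.unknown()), each row is an open-ended object rather than a fixed {label, count} shape. A dependency-free sketch of the same runtime check, with illustrative function names, might be:

```typescript
// Dependency-free equivalent of z.record(z.string(), z.unknown()):
// each row must be a plain object; its values are left unvalidated.
type Row = Record<string, unknown>;

function isRow(value: unknown): value is Row {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}

// Validates the full { labels: Row[] } result shape declared in
// the tool's outputSchema.
function isLabelsResult(value: unknown): value is { labels: Row[] } {
  if (typeof value !== "object" || value === null) return false;
  const labels = (value as { labels?: unknown }).labels;
  return Array.isArray(labels) && labels.every(isRow);
}
```

An open-ended record keeps the schema forward-compatible: if the backend adds columns beyond label and count, existing clients still validate.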
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description goes beyond those annotations by disclosing the 5-minute server-side cache, and nothing in it contradicts them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact, and every part earns its place: main purpose, usage guidance, return format, caching behavior, and a cross-reference tip. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers everything an agent needs: purpose, usage guidance, return format, caching, and a cross-reference to its sibling tool. An output schema is also declared, so the return-format information is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool takes no parameters, so schema coverage is trivially complete and the description needs no parameter documentation; 4/5 is the baseline score for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool lists all node labels with their counts, and it is distinguished from its sibling describe_label by the explicit advice to use it before writing queries to avoid hallucinated labels.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use ('BEFORE writing a query when you're not sure which label to anchor on') and what it prevents (hallucinated labels), providing concrete examples.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
