get-dashboards

Retrieve all dashboards from Datadog to discover available dashboards and their IDs for further exploration and management.

Instructions

Retrieve a list of all dashboards from Datadog. Useful for discovering available dashboards and their IDs for further exploration.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| filterConfigured | No | — | — |
| limit | No | — | 100 |
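
As a hedged illustration, a call to this tool might pass arguments like the following (field names are from the schema above; the values are placeholders):

```json
{
  "filterConfigured": true,
  "limit": 25
}
```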

Implementation Reference

  • The core execution handler for the 'get-dashboards' tool. It uses the Datadog DashboardsApi to list all dashboards, applies an optional client-side limit, and returns the results. Note that the filterConfigured parameter is accepted but not currently applied.
    execute: async (params: GetDashboardsParams) => {
      try {
        const { filterConfigured, limit } = params;
    
        const apiInstance = new v1.DashboardsApi(configuration);
    
        // Fetch all dashboards; optional query parameters are not used here
        const response = await apiInstance.listDashboards();
    
        // Note: filterConfigured is destructured above but no filtering is
        // applied; the flag is currently a no-op
        let filteredDashboards = response.dashboards || [];
    
        // Apply client-side limit if specified
        if (limit && filteredDashboards.length > limit) {
          filteredDashboards = filteredDashboards.slice(0, limit);
        }
    
        return {
          ...response,
          dashboards: filteredDashboards
        };
      } catch (error) {
        console.error("Error fetching dashboards:", error);
        throw error;
      }
    }
  • src/index.ts:119-132 (registration)
    Registers the 'get-dashboards' tool with the MCP server, specifying its name, description, and input schema, and wiring the call to the handler's execute function.
    server.tool(
      "get-dashboards",
      "Retrieve a list of all dashboards from Datadog. Useful for discovering available dashboards and their IDs for further exploration.",
      {
        filterConfigured: z.boolean().optional(),
        limit: z.number().default(100)
      },
      async (args) => {
        const result = await getDashboards.execute(args);
        return {
          content: [{ type: "text", text: JSON.stringify(result) }]
        };
      }
    );
  • Zod input schema defining optional parameters: filterConfigured (boolean) and limit (number, default 100) for validating tool inputs.
    {
      filterConfigured: z.boolean().optional(),
      limit: z.number().default(100)
    },
  • Helper function to initialize the Datadog API client configuration using environment variables for auth and site.
    initialize: () => {
      const configOpts = {
        authMethods: {
          apiKeyAuth: process.env.DD_API_KEY,
          appKeyAuth: process.env.DD_APP_KEY
        }
      };
    
      configuration = client.createConfiguration(configOpts);
    
      if (process.env.DD_SITE) {
        configuration.setServerVariables({
          site: process.env.DD_SITE
        });
      }
    },
  • TypeScript type definition matching the input schema for getDashboards parameters.
    type GetDashboardsParams = {
      filterConfigured?: boolean;
      limit?: number;
    };
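
The handler above accepts filterConfigured but never applies it. A minimal sketch of what combined client-side filtering and limiting could look like, under the assumption that "configured" means the dashboard summary has an author handle set (the real semantics are not defined in the source, and the type below is illustrative rather than the exact Datadog client type):

```typescript
// Hypothetical shape of one entry from listDashboards(); field names are
// illustrative, not the exact Datadog client types.
type DashboardSummary = {
  id: string;
  title: string;
  authorHandle?: string;
};

// Sketch of client-side filtering plus limiting. "Configured" is assumed
// here to mean the dashboard has an author handle.
function applyClientSideFilters(
  dashboards: DashboardSummary[],
  filterConfigured?: boolean,
  limit?: number
): DashboardSummary[] {
  let result = dashboards;
  if (filterConfigured) {
    result = result.filter((d) => d.authorHandle !== undefined);
  }
  if (limit !== undefined && result.length > limit) {
    result = result.slice(0, limit);
  }
  return result;
}
```

The handler could call this on `response.dashboards || []` before returning, which would make the current "Apply client-side filtering" comment accurate.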
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions retrieving a list and discovering IDs, but doesn't disclose critical traits like whether this is a read-only operation, potential rate limits, authentication requirements, pagination behavior (implied by 'limit' parameter but not explained), or what the return format looks like. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.
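
On authentication requirements specifically: the initializer shown earlier reads credentials and site from environment variables. A placeholder setup sketch (values are not real; DD_SITE is optional, and the client's default site applies when it is unset):

```shell
# Placeholder credentials -- replace with real Datadog keys.
export DD_API_KEY="your-api-key"
export DD_APP_KEY="your-app-key"
# Optional: target an alternate Datadog site.
export DD_SITE="datadoghq.eu"
```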

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
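
One way to close this gap would be MCP tool annotations. A hedged sketch of the hints this tool could declare, with field names following the MCP ToolAnnotations shape (the server currently sets none of these, so the values below are inferred from the handler's observed read-only behavior):

```typescript
// Hypothetical annotations for get-dashboards; values reflect what the
// handler appears to do (it only reads from the Datadog API).
const getDashboardsAnnotations = {
  title: "Get Dashboards",
  readOnlyHint: true,       // lists dashboards, mutates nothing
  destructiveHint: false,   // no deletes or updates
  idempotentHint: true,     // repeated calls have no additional effect
  openWorldHint: true       // interacts with an external service (Datadog)
};
```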

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise: two sentences that directly state the purpose and utility. It is front-loaded with the core action and avoids unnecessary detail. It could say slightly more about parameters or behavior, but it remains efficient without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a retrieval tool with 2 parameters), no annotations, no output schema, and 0% schema description coverage, the description is incomplete. It covers the basic purpose but lacks details on parameters, return values, behavioral constraints, and differentiation from siblings. For a tool in this context, it should provide more comprehensive guidance to be fully useful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, meaning parameters 'filterConfigured' and 'limit' are undocumented in the schema. The description doesn't mention these parameters at all, failing to compensate for the lack of schema documentation. It doesn't explain what 'filterConfigured' does or how 'limit' affects the retrieval, leaving their semantics unclear. With two parameters and no coverage, the description adds no value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieve a list of all dashboards') and resource ('dashboards from Datadog'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get-dashboard' (singular) or 'get-monitors', which might have overlapping functionality. The mention of discovering IDs for further exploration adds useful context but doesn't fully establish uniqueness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it's 'useful for discovering available dashboards and their IDs for further exploration', which suggests when to use it (for initial discovery). However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get-dashboard' (singular) or 'get-monitors', nor does it mention any exclusions or prerequisites. The guidance is present but incomplete.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
