
Grafana MCP Server

by 0xteamhq

get_assertions

Retrieve assertion summaries for monitoring entities by specifying type, name, environment, time range, and other parameters to analyze system health and performance.

Instructions

Get assertion summary for a given entity with its type, name, env, site, namespace, and time range

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| endTime | Yes | The end time in RFC3339 format | |
| entityName | Yes | The name of the entity | |
| entityType | Yes | The type of the entity (e.g., Service, Node, Pod) | |
| env | No | The environment of the entity | |
| namespace | No | The namespace of the entity | |
| site | No | The site of the entity | |
| startTime | Yes | The start time in RFC3339 format | |

Implementation Reference

  • The async handler function implementing the get_assertions tool. It creates an Asserts client, builds query parameters from input, fetches assertions via API, formats the response with entity details, time range, assertion list, and summary statistics (total, critical, warning, info). Handles errors gracefully.
    handler: async (params, context: ToolContext) => {
      try {
        const client = createAssertsClient(context.config.grafanaConfig);
        
        // Build the query parameters
        const queryParams: any = {
          entity_type: params.entityType,
          entity_name: params.entityName,
          start_time: params.startTime,
          end_time: params.endTime,
        };
        
        if (params.env) queryParams.env = params.env;
        if (params.site) queryParams.site = params.site;
        if (params.namespace) queryParams.namespace = params.namespace;
        
        const response = await client.get('/api/v1/assertions', { params: queryParams });
        
        const assertions = response.data.assertions || [];
        
        // Format the response
        const formatted = {
          entity: {
            type: params.entityType,
            name: params.entityName,
            env: params.env,
            site: params.site,
            namespace: params.namespace,
          },
          timeRange: {
            start: params.startTime,
            end: params.endTime,
          },
          assertions: assertions.map((assertion: any) => ({
            id: assertion.id,
            name: assertion.name,
            status: assertion.status,
            severity: assertion.severity,
            message: assertion.message,
            lastTriggered: assertion.last_triggered,
            count: assertion.count,
          })),
          summary: {
            total: assertions.length,
            critical: assertions.filter((a: any) => a.severity === 'critical').length,
            warning: assertions.filter((a: any) => a.severity === 'warning').length,
            info: assertions.filter((a: any) => a.severity === 'info').length,
          },
        };
        
        return createToolResult(formatted);
      } catch (error: any) {
        return createErrorResult(error.response?.data?.message || error.message);
      }
    },
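The summary block in the handler above tallies assertions by severity. As a standalone sketch (the `Assertion` shape is an assumption inferred from the fields the handler reads, not a type exported by the server):

```typescript
// Minimal sketch of the severity tally performed in the handler above.
// The Assertion interface is assumed from the fields the handler maps.
interface Assertion {
  id: string;
  name: string;
  severity: 'critical' | 'warning' | 'info';
}

function summarize(assertions: Assertion[]) {
  // Count assertions matching a given severity level.
  const bySeverity = (level: Assertion['severity']) =>
    assertions.filter((a) => a.severity === level).length;
  return {
    total: assertions.length,
    critical: bySeverity('critical'),
    warning: bySeverity('warning'),
    info: bySeverity('info'),
  };
}
```

Each severity count is an independent `filter` pass, matching the handler's implementation; for large assertion lists a single reduce would avoid the repeated scans.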
  • Zod input schema GetAssertionsSchema defining parameters: entityType, entityName (required), env/site/namespace (optional), startTime, endTime.
    const GetAssertionsSchema = z.object({
      entityType: z.string().describe('The type of the entity (e.g., Service, Node, Pod)'),
      entityName: z.string().describe('The name of the entity'),
      env: z.string().optional().describe('The environment of the entity'),
      site: z.string().optional().describe('The site of the entity'),
      namespace: z.string().optional().describe('The namespace of the entity'),
      startTime: z.string().describe('The start time in RFC3339 format'),
      endTime: z.string().describe('The end time in RFC3339 format'),
    });
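An input object conforming to this schema might look like the following (entity names and timestamps are illustrative; the required-field check mirrors the schema rather than calling Zod):

```typescript
// Illustrative arguments for get_assertions; all values are made up.
const exampleParams = {
  entityType: 'Service',
  entityName: 'checkout',
  namespace: 'prod-us', // optional, like env and site
  startTime: '2024-05-01T00:00:00Z', // RFC3339
  endTime: '2024-05-01T06:00:00Z',
};

// Manual mirror of the schema's required fields, for illustration only;
// the server itself would validate via GetAssertionsSchema.parse().
const REQUIRED = ['entityType', 'entityName', 'startTime', 'endTime'] as const;
const missing = REQUIRED.filter((k) => !(k in exampleParams));
```

A valid call leaves `missing` empty; omitting, say, `endTime` would surface it there, just as Zod would reject the object.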
  • The registerAssertsTools function that registers the getAssertions tool with the MCP server.
    export function registerAssertsTools(server: any) {
      server.registerTool(getAssertions);
    }
  • src/cli.ts:134-136 (registration)
    Conditional registration of asserts tools (including get_assertions) in the main CLI entrypoint, called if 'asserts' category is enabled.
    if (enabledTools.has('asserts')) {
      registerAssertsTools(server);
    }
  • Helper function createAssertsClient that configures axios client for Asserts API, handling auth (token/apiKey), base URL adjustment for grafana.net, and headers.
    function createAssertsClient(config: any) {
      const headers: any = {
        'User-Agent': 'mcp-grafana/1.0.0',
        'Content-Type': 'application/json',
      };
      
      if (config.serviceAccountToken) {
        headers['Authorization'] = `Bearer ${config.serviceAccountToken}`;
      } else if (config.apiKey) {
        headers['Authorization'] = `Bearer ${config.apiKey}`;
      }
      
      // Asserts uses a different base URL pattern
      const baseUrl = config.url.replace(/\/$/, '');
      const assertsUrl = baseUrl.includes('grafana.net')
        ? baseUrl.replace('grafana.net', 'asserts.grafana.net')
        : `${baseUrl}/api/plugins/grafana-asserts-app/resources`;
      
      return axios.create({
        baseURL: assertsUrl,
        headers,
        timeout: 30000,
      });
    }
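The base-URL selection above can be isolated as a pure function (extracted here for illustration; `deriveAssertsUrl` is not a name from the source):

```typescript
// Pure sketch of the base-URL logic in createAssertsClient above:
// Grafana Cloud URLs are rewritten to the asserts subdomain, while
// self-hosted instances route through the Asserts app plugin resources path.
function deriveAssertsUrl(grafanaUrl: string): string {
  const baseUrl = grafanaUrl.replace(/\/$/, ''); // drop one trailing slash
  return baseUrl.includes('grafana.net')
    ? baseUrl.replace('grafana.net', 'asserts.grafana.net')
    : `${baseUrl}/api/plugins/grafana-asserts-app/resources`;
}
```

So `https://myorg.grafana.net/` maps to `https://myorg.asserts.grafana.net`, while a self-hosted `https://grafana.example.com` gets the plugin resources suffix.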
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Get assertion summary', which implies a read-only operation, but it doesn't specify permissions, rate limits, pagination, or what the summary includes (e.g., format, fields). For a tool with 7 parameters and no annotations, this is a significant gap in transparency about how the tool behaves beyond basic input requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
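The missing disclosure could be supplied via MCP tool annotations. A hypothetical annotations object for this tool might look like the following (field names follow the MCP `ToolAnnotations` interface; the values are illustrative suggestions, not taken from the server):

```typescript
// Hypothetical MCP ToolAnnotations for get_assertions; values are suggestions.
const annotations = {
  title: 'Get Assertions',
  readOnlyHint: true,     // only queries the Asserts API, never writes
  destructiveHint: false, // no state is modified
  idempotentHint: true,   // same inputs over the same window yield the same data
  openWorldHint: true,    // reaches out to an external Grafana/Asserts endpoint
};
```

With these hints in place, an agent can verify the read-only claim structurally instead of inferring it from the verb 'Get'.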

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that lists all key parameters without unnecessary words. It's appropriately sized and front-loaded with the main purpose. It could be slightly more structured, separating the purpose from parameter details, but it remains clear and concise overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what an 'assertion summary' is, what the output looks like, or behavioral aspects like error handling. For a tool retrieving summaries with multiple filters, more context is needed to guide effective use, especially without annotations or output schema to fill in gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists parameters (type, name, env, site, namespace, time range), but the input schema has 100% description coverage, providing detailed descriptions for all 7 parameters (e.g., 'The end time in RFC3339 format'). The description adds minimal value beyond the schema, as it doesn't explain relationships between parameters or provide additional context. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get assertion summary for a given entity' with specific parameters (type, name, env, site, namespace, time range). It uses a specific verb ('Get') and resource ('assertion summary'), making the purpose clear. However, it doesn't differentiate from sibling tools like 'get_sift_analysis' or 'get_dashboard_summary' which might also retrieve summaries, so it doesn't fully distinguish from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lists parameters but doesn't mention prerequisites, exclusions, or compare it to sibling tools like 'get_sift_analysis' or 'get_dashboard_summary'. There's only implied usage based on needing an 'assertion summary' for an entity, but no explicit context or alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
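Pulling the review's findings together, a revised description could disclose behavior, output shape, auth, and tool selection in one string. The wording below is hypothetical, drafted to illustrate the gaps above, not text from the actual server:

```typescript
// Hypothetical revised description for get_assertions, addressing the
// behavior, completeness, and usage-guidance gaps noted in the review.
const revisedDescription =
  'Read-only: fetch an assertion summary (id, name, status, severity, message, ' +
  'last-triggered time, count, plus totals by severity) for one entity from the ' +
  'Grafana Asserts API. Requires a Grafana service account token or API key. ' +
  'Use when investigating the health of a specific Service, Node, or Pod over a ' +
  'time range; for investigation-style root-cause analysis, prefer ' +
  'get_sift_analysis instead.';
```

Note how the string front-loads the read-only behavior and output fields before the auth requirement and the tool-selection guidance, in line with the review's recommendations.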
