SendGrid MCP Server

by deyikong

Get Email Statistics by Subuser

get_subuser_stats

Retrieve email performance statistics for SendGrid subusers within specified date ranges to analyze engagement metrics and campaign effectiveness.

Instructions

Retrieve email statistics for specific subusers

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| subusers | Yes | Comma-separated list of subuser names to retrieve stats for | |
| start_date | Yes | Start date in YYYY-MM-DD format | |
| end_date | No | End date in YYYY-MM-DD format (defaults to today) | |
| aggregated_by | No | How to group the statistics | day |
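
To make the parameter semantics concrete, here is a small sketch of how these inputs map onto the request URL. The URL template mirrors the handler's own string construction; the subuser names and dates are example values, not real accounts.

```typescript
// Builds the stats URL the same way the handler does: required parameters
// go in first, optional ones are appended only when provided.
function buildStatsUrl(args: {
  subusers: string;
  start_date: string;
  end_date?: string;
  aggregated_by?: string;
}): string {
  let url = `https://api.sendgrid.com/v3/subusers/stats?subusers=${encodeURIComponent(args.subusers)}&start_date=${args.start_date}`;
  if (args.end_date) url += `&end_date=${args.end_date}`;
  if (args.aggregated_by) url += `&aggregated_by=${args.aggregated_by}`;
  return url;
}

// Example: weekly stats for two subusers over January 2024.
const url = buildStatsUrl({
  subusers: "alice,bob",
  start_date: "2024-01-01",
  end_date: "2024-01-31",
  aggregated_by: "week",
});
// → "https://api.sendgrid.com/v3/subusers/stats?subusers=alice%2Cbob&start_date=2024-01-01&end_date=2024-01-31&aggregated_by=week"
```

Note that only `subusers` is passed through `encodeURIComponent`; the date and aggregation values are interpolated as-is, so they must already match the documented formats.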

Implementation Reference

  • The handler function that fetches email statistics for specified subusers from the SendGrid API.
    handler: async ({ subusers, start_date, end_date, aggregated_by }: { subusers: string; start_date: string; end_date?: string; aggregated_by?: string }): Promise<ToolResult> => {
      let url = `https://api.sendgrid.com/v3/subusers/stats?subusers=${encodeURIComponent(subusers)}&start_date=${start_date}`;
      if (end_date) url += `&end_date=${end_date}`;
      if (aggregated_by) url += `&aggregated_by=${aggregated_by}`;
      
      const result = await makeRequest(url);
      return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
    },
  • Zod schema for validating the input parameters of the get_subuser_stats tool.
    inputSchema: {
      subusers: z.string().describe("Comma-separated list of subuser names to retrieve stats for"),
      start_date: z.string().describe("Start date in YYYY-MM-DD format"),
      end_date: z.string().optional().describe("End date in YYYY-MM-DD format (defaults to today)"),
      aggregated_by: z.enum(["day", "week", "month"]).optional().default("day").describe("How to group the statistics"),
    },
  • The complete tool definition for get_subuser_stats, including config, schema, and handler, within the exported statsTools object.
    get_subuser_stats: {
      config: {
        title: "Get Email Statistics by Subuser",
        description: "Retrieve email statistics for specific subusers",
        inputSchema: {
          subusers: z.string().describe("Comma-separated list of subuser names to retrieve stats for"),
          start_date: z.string().describe("Start date in YYYY-MM-DD format"),
          end_date: z.string().optional().describe("End date in YYYY-MM-DD format (defaults to today)"),
          aggregated_by: z.enum(["day", "week", "month"]).optional().default("day").describe("How to group the statistics"),
        },
      },
      handler: async ({ subusers, start_date, end_date, aggregated_by }: { subusers: string; start_date: string; end_date?: string; aggregated_by?: string }): Promise<ToolResult> => {
        let url = `https://api.sendgrid.com/v3/subusers/stats?subusers=${encodeURIComponent(subusers)}&start_date=${start_date}`;
        if (end_date) url += `&end_date=${end_date}`;
        if (aggregated_by) url += `&aggregated_by=${aggregated_by}`;
        
        const result = await makeRequest(url);
        return { content: [{ type: "text", text: JSON.stringify(result, null, 2) }] };
      },
    },
  • Registration of statsTools (which includes get_subuser_stats) into the aggregate allTools export.
    import { statsTools } from "./stats.js";
    import { templateTools } from "./templates.js";
    
    export const allTools = {
      ...automationTools,
      ...campaignTools,
      ...contactTools,
      ...mailTools,
      ...miscTools,
      ...statsTools,
      ...templateTools,
    };
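
The handler delegates the HTTP call to a `makeRequest` helper that is not shown in the listing above. The sketch below is an assumption about what such a helper might look like: the function names, the explicit `apiKey` parameter, and the error handling are all hypothetical (in the server itself the key would likely be read from an environment variable such as `SENDGRID_API_KEY` and bound internally, since the handler calls `makeRequest(url)` with a single argument). The SendGrid v3 API authenticates with a Bearer token in the `Authorization` header.

```typescript
// Hypothetical request headers for the SendGrid v3 API.
export function authHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// A minimal GET wrapper built on the global fetch (Node 18+). This is a
// sketch, not the server's actual implementation.
export async function makeRequest(
  url: string,
  apiKey: string,
): Promise<unknown> {
  const res = await fetch(url, { headers: authHeaders(apiKey) });
  if (!res.ok) {
    // Surface the status and body so the tool result explains the failure.
    throw new Error(`SendGrid API error ${res.status}: ${await res.text()}`);
  }
  return res.json();
}
```

A wrapper like this is where auth failures, rate limiting (HTTP 429), and other error responses would surface, which is relevant to the behavioral-disclosure critique in the review below.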
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It doesn't specify whether this is a read-only operation, what permissions are required, whether there are rate limits, what the response format looks like, or if there are pagination considerations. The description merely restates the basic function without adding behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core purpose without any wasted words. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what statistics are returned, how results are structured, or any behavioral constraints. The agent would need to rely heavily on the schema and potentially trial-and-error to understand this tool's full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, providing complete documentation for all 4 parameters. The description adds no additional parameter semantics beyond what's already in the schema, so it meets the baseline of 3 for adequate coverage when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Retrieve') and resource ('email statistics for specific subusers'), making the purpose immediately understandable. It distinguishes this tool from other stats tools by specifying 'subusers' as the target, though it doesn't explicitly contrast with sibling tools like 'get_global_stats' or 'get_category_stats'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_global_stats' or 'get_category_stats'. It doesn't mention prerequisites, constraints, or typical use cases, leaving the agent to infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
