Glama
jonathan-politzki

Smartlead Simplified MCP Server

smartlead_get_mailbox_summary

Retrieve mailbox performance summaries for Smart Delivery tests, showing overall metrics across all campaigns with configurable pagination.

Instructions

Get the list of mailboxes used for any Smart Delivery test with overall performance across all tests.

Input Schema

Name     Required   Description                     Default
limit    No         Number of tests to retrieve     10
offset   No         Offset for pagination           0
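The defaults listed above are applied inside the handler via destructuring. A minimal sketch of that behavior (the MailboxSummaryParams interface and resolveParams helper are illustrative, not part of the server):

```typescript
// Mirrors the input schema: both paging fields are optional.
interface MailboxSummaryParams {
  limit?: number;
  offset?: number;
}

// Sketch of how the handler's destructuring fills in the documented defaults.
function resolveParams(args: MailboxSummaryParams): Required<MailboxSummaryParams> {
  const { limit = 10, offset = 0 } = args;
  return { limit, offset };
}

console.log(resolveParams({}));            // { limit: 10, offset: 0 }
console.log(resolveParams({ limit: 25 })); // { limit: 25, offset: 0 }
```

Omitting a field yields its default; an explicit value always wins.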

Implementation Reference

  • The primary handler function implements the tool logic: it validates input with isMailboxSummaryParams, constructs a SmartDelivery API client, performs a GET request to `/spam-test/report/mailboxes-summary` with optional limit/offset pagination, and returns a formatted JSON response or error content.
    async function handleGetMailboxSummary(
      args: unknown, 
      apiClient: AxiosInstance,
      withRetry: <T>(operation: () => Promise<T>, context: string) => Promise<T>
    ) {
      if (!isMailboxSummaryParams(args)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          'Invalid arguments for smartlead_get_mailbox_summary'
        );
      }
    
      try {
        const smartDeliveryClient = createSmartDeliveryClient(apiClient);
        const { limit = 10, offset = 0 } = args;
        
        const response = await withRetry(
          async () => smartDeliveryClient.get(`/spam-test/report/mailboxes-summary?limit=${limit}&offset=${offset}`),
          'get mailbox summary'
        );
    
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(response.data, null, 2),
            },
          ],
          isError: false,
        };
      } catch (error: any) {
        return {
          content: [{ 
            type: 'text', 
            text: `API Error: ${error.response?.data?.message || error.message}` 
          }],
          isError: true,
        };
      }
    }
  • Tool metadata and input schema definition, specifying optional integer parameters for limit and offset.
    export const GET_MAILBOX_SUMMARY_TOOL: CategoryTool = {
      name: 'smartlead_get_mailbox_summary',
      description: 'Get the list of mailboxes used for any Smart Delivery test with overall performance across all tests.',
      category: ToolCategory.SMART_DELIVERY,
      inputSchema: {
        type: 'object',
        properties: {
          limit: {
            type: 'integer',
            description: 'Number of tests to retrieve (default: 10)',
          },
          offset: {
            type: 'integer',
            description: 'Offset for pagination (default: 0)',
          },
        },
      },
    };
  • Dispatch case in handleSmartDeliveryTool switch statement that registers and routes the tool name to its handler function.
    case 'smartlead_get_mailbox_summary': {
      return handleGetMailboxSummary(args, apiClient, withRetry);
    }
  • Type guard for input validation in the handler; accepts any non-null object since parameters are optional.
    export function isMailboxSummaryParams(args: unknown): args is MailboxSummaryParams {
      return typeof args === 'object' && args !== null;
    }
  • Inclusion of the tool in the smartDeliveryTools export array for registry import and registration.
    GET_MAILBOX_SUMMARY_TOOL,
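Since the shipped type guard accepts any non-null object, a stricter variant could also verify that limit and offset, when present, are non-negative integers. The sketch below is hypothetical (not part of the server); isStrictMailboxSummaryParams and okPagingValue are names introduced here for illustration:

```typescript
interface MailboxSummaryParams {
  limit?: number;
  offset?: number;
}

// Hypothetical stricter guard: still allows both fields to be absent,
// but rejects non-integer or negative values when they are supplied.
function isStrictMailboxSummaryParams(args: unknown): args is MailboxSummaryParams {
  if (typeof args !== 'object' || args === null) return false;
  const { limit, offset } = args as Record<string, unknown>;
  const okPagingValue = (v: unknown): boolean =>
    v === undefined || (typeof v === 'number' && Number.isInteger(v) && v >= 0);
  return okPagingValue(limit) && okPagingValue(offset);
}

console.log(isStrictMailboxSummaryParams({}));              // true
console.log(isStrictMailboxSummaryParams({ limit: 10 }));   // true
console.log(isStrictMailboxSummaryParams({ limit: -1 }));   // false
console.log(isStrictMailboxSummaryParams('not an object')); // false
```

A guard like this would surface bad paging values as an InvalidParams error up front instead of passing them through to the API.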
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'overall performance across all tests,' hinting at aggregated data, but lacks details on permissions, rate limits, error handling, or response format. For a tool with no annotations, this leaves significant gaps in understanding its operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key action and resource. It avoids unnecessary words, though it could be slightly more structured by separating purpose from performance details. Overall, it's concise and well-formed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on output (e.g., what 'overall performance' includes), error cases, or integration with sibling tools. This leaves room for improvement in providing complete context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for 'limit' and 'offset' parameters. The description doesn't add any semantic details beyond the schema (e.g., it doesn't explain what 'overall performance' entails or how pagination works in practice), so it meets the baseline score of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
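To make the pagination semantics concrete, the sketch below walks limit/offset pages until a short page signals the end. It is an assumption-laden illustration: fetchPage and the in-memory ALL_ITEMS dataset stand in for the real `/spam-test/report/mailboxes-summary` call, whose actual response shape is not documented here.

```typescript
// Hypothetical stand-in for the real API call: returns up to `limit`
// items starting at `offset` from a fixed in-memory dataset.
const ALL_ITEMS = Array.from({ length: 23 }, (_, i) => `mailbox-${i}`);

async function fetchPage(limit: number, offset: number): Promise<string[]> {
  return ALL_ITEMS.slice(offset, offset + limit);
}

// Walk pages of `limit` items; a page shorter than `limit` is the last one.
async function fetchAll(limit = 10): Promise<string[]> {
  const results: string[] = [];
  for (let offset = 0; ; offset += limit) {
    const page = await fetchPage(limit, offset);
    results.push(...page);
    if (page.length < limit) break; // last page reached
  }
  return results;
}

fetchAll().then((items) => console.log(items.length)); // 23
```

With 23 items and the default limit of 10, this issues three requests (offsets 0, 10, 20) and stops after the final 3-item page.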

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get the list of mailboxes') and the resource ('used for any Smart Delivery test'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'smartlead_get_mailbox_count' or 'smartlead_get_sender_accounts', which might offer related mailbox information, so it misses the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. For example, it doesn't compare to 'smartlead_get_mailbox_count' (which might provide counts rather than lists) or 'smartlead_get_sender_accounts' (which could list sender accounts instead of mailboxes). Without such context, an agent might struggle to choose appropriately among similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
