Glama

get_cost_by_service

Retrieve AWS cost breakdowns by service for specific date ranges to analyze spending patterns and identify cost drivers.

Instructions

Retrieves AWS costs broken down by service for the specified date range.

Input Schema

Name        Required  Description                       Default
start_date  No        Start date in YYYY-MM-DD format.  7 days before today
end_date    No        End date in YYYY-MM-DD format.    today (UTC)
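Both parameters are optional, so a hypothetical MCP tools/call request against this schema might look like the following (the dates are illustrative; omitting them falls back to the defaults above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_cost_by_service",
    "arguments": {
      "start_date": "2024-01-01",
      "end_date": "2024-01-07"
    }
  }
}
```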

Implementation Reference

  • Handler implementation for the 'get_cost_by_service' tool. It fetches AWS costs broken down by service using CostExplorerClient's GetCostAndUsageCommand, grouped by SERVICE dimension, for a specified date range (default last 7 days). Returns JSON with date, service, cost, and unit.
    if (name === "get_cost_by_service") {
        // Default to the last 7 days (UTC) when no dates are provided.
        const endDate = (args as any)?.end_date || new Date().toISOString().split('T')[0];
        const startDate = (args as any)?.start_date || new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().split('T')[0];
    
        const command = new GetCostAndUsageCommand({
            TimePeriod: { Start: startDate, End: endDate },
            Granularity: "DAILY",
            Metrics: ["UnblendedCost"],
            GroupBy: [{ Type: "DIMENSION", Key: "SERVICE" }]
        });
        const response = await costExplorerClient.send(command);
    
        // Flatten the per-day results into one array. The "?? []" guards keep
        // days with no Groups from leaving undefined entries in the output.
        const costs = response.ResultsByTime?.flatMap(r =>
            (r.Groups ?? []).map(g => ({
                Date: r.TimePeriod?.Start,
                Service: g.Keys?.[0],
                Cost: g.Metrics?.UnblendedCost?.Amount,
                Unit: g.Metrics?.UnblendedCost?.Unit
            }))
        ) ?? [];
    
        return {
            content: [{ type: "text", text: JSON.stringify(costs, null, 2) }]
        };
    }
  • src/index.ts:192-207 (registration)
    Tool registration in ListTools response, including name, description, and input schema definition.
        name: "get_cost_by_service",
        description: "Retrieves AWS costs broken down by service for the specified date range.",
        inputSchema: {
            type: "object",
            properties: {
                start_date: {
                    type: "string",
                    description: "Start date in YYYY-MM-DD format."
                },
                end_date: {
                    type: "string",
                    description: "End date in YYYY-MM-DD format."
                }
            }
        }
    },
  • Initialization of the CostExplorerClient used by the get_cost_by_service handler.
    const costExplorerClient = new CostExplorerClient({});
  • Import of CostExplorerClient and relevant commands used in the tool.
    import { CostExplorerClient, GetCostAndUsageCommand, GetCostForecastCommand, GetAnomaliesCommand, GetSavingsPlansUtilizationCommand, GetReservationUtilizationCommand } from "@aws-sdk/client-cost-explorer";
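The handler's flatMap/map pipeline can be exercised in isolation against a mock Cost Explorer response to show the output shape an agent receives. This is a sketch, not code from the server; the dates, service names, and amounts are invented:

```typescript
// Minimal local types mirroring the relevant parts of the SDK response.
type Group = {
    Keys?: string[];
    Metrics?: { UnblendedCost?: { Amount?: string; Unit?: string } };
};
type ResultByTime = { TimePeriod?: { Start?: string }; Groups?: Group[] };

// Mock ResultsByTime for a single day with two services (values invented).
const resultsByTime: ResultByTime[] = [
    {
        TimePeriod: { Start: "2024-01-01" },
        Groups: [
            { Keys: ["Amazon EC2"], Metrics: { UnblendedCost: { Amount: "12.34", Unit: "USD" } } },
            { Keys: ["Amazon S3"], Metrics: { UnblendedCost: { Amount: "0.56", Unit: "USD" } } },
        ],
    },
];

// Same flatten-and-project step as in the handler above.
const costs = resultsByTime.flatMap(r =>
    (r.Groups ?? []).map(g => ({
        Date: r.TimePeriod?.Start,
        Service: g.Keys?.[0],
        Cost: g.Metrics?.UnblendedCost?.Amount,
        Unit: g.Metrics?.UnblendedCost?.Unit,
    }))
);

console.log(JSON.stringify(costs, null, 2));
```

Each element carries Date, Service, Cost, and Unit, which is the JSON the tool serializes into its text content block.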
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool retrieves cost data but omits critical details: whether it requires specific AWS permissions, whether it is read-only, how it handles large date ranges, and what the output format looks like. For a data retrieval tool with no annotation coverage, this leaves significant gaps.
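For instance, the description could name the read-only permission the underlying call needs. The GetCostAndUsage API is governed by the ce:GetCostAndUsage IAM action, so a minimal caller policy would look something like the following sketch (not taken from the server's own docs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ce:GetCostAndUsage",
      "Resource": "*"
    }
  ]
}
```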

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple retrieval tool with well-documented parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no nested objects) and high schema coverage, the description is adequate but incomplete. It lacks output schema information and behavioral context, which is notable since no annotations are provided. It's minimally viable but has clear gaps in usage guidance and transparency.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting both parameters (start_date and end_date) with format details. The description adds no additional parameter information beyond what's in the schema, so it meets the baseline score of 3 where the schema does the heavy lifting.
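Note that the schema documents the YYYY-MM-DD format but nothing enforces it at runtime. A handler could add a simple guard; this is a hypothetical helper, not present in the source:

```typescript
// Hypothetical guard: verify a string is a well-formed YYYY-MM-DD calendar date.
function isValidDateParam(s: string): boolean {
    if (!/^\d{4}-\d{2}-\d{2}$/.test(s)) return false;
    const d = new Date(s + "T00:00:00Z");
    // Round-trip through toISOString to reject impossible dates like 2024-02-30.
    return !Number.isNaN(d.getTime()) && d.toISOString().startsWith(s);
}

console.log(isValidDateParam("2024-01-31")); // true
console.log(isValidDateParam("2024-02-30")); // false
console.log(isValidDateParam("01/31/2024")); // false
```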

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Retrieves') and resource ('AWS costs broken down by service'), making it easy to understand what it does. However, it doesn't explicitly distinguish itself from sibling tools like 'get_cost_breakdown' or 'get_recent_cost', which might have overlapping functionality, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'get_cost_breakdown' and 'get_recent_cost' present, there's no indication of differences in scope, granularity, or context, leaving the agent to guess based on names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
