therealsachin

Langfuse MCP Server

usage_by_service

Analyze usage and cost breakdown by service or feature tag over a specified time period to identify spending patterns and optimize resource allocation.

Instructions

Analyze usage and cost by service/feature tag over a time period.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| from | Yes | Start timestamp (ISO 8601) | |
| to | Yes | End timestamp (ISO 8601) | |
| serviceTagKey | No | Tag key for service identification | "service" |
| environment | No | Optional environment filter | |
| limit | No | Maximum number of services to return | 20 |
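For illustration, a call supplying every parameter might look like the following sketch (all values are hypothetical; only `from` and `to` are required, the rest fall back to their defaults):

```typescript
// Hypothetical arguments for a usage_by_service call (values illustrative).
const exampleArgs = {
  from: '2024-06-01T00:00:00Z', // start of the analysis window (ISO 8601)
  to: '2024-06-30T23:59:59Z',   // end of the analysis window (ISO 8601)
  serviceTagKey: 'service',     // matches tags of the form "service:<name>"
  environment: 'production',    // optional environment filter
  limit: 10,                    // cap on the number of services returned
};

console.log(JSON.stringify(exampleArgs));
```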

Implementation Reference

  • The core handler function implementing the usage_by_service tool. It queries trace metrics grouped by tags, extracts service names from tags (e.g., 'service:chatbot'), aggregates cost/tokens/count per service, sorts by cost, and returns formatted JSON.
    export async function usageByService(
      client: LangfuseAnalyticsClient,
      args: z.infer<typeof usageByServiceSchema>
    ) {
      const filters: any[] = [];
    
      if (args.environment) {
        filters.push({
          column: 'environment',
          operator: 'equals',
          value: args.environment,
          type: 'string',
        });
      }
    
      const response = await client.getMetrics({
        view: 'traces',
        from: args.from,
        to: args.to,
        metrics: [
          { measure: 'totalCost', aggregation: 'sum' },
          { measure: 'totalTokens', aggregation: 'sum' },
          { measure: 'count', aggregation: 'count' },
        ],
        dimensions: [{ field: 'tags' }],
        filters,
      });
    
      // Post-process to extract service from tags
      const serviceMap = new Map<string, ServiceUsage>();
    
      if (response.data && Array.isArray(response.data)) {
        response.data.forEach((row: any) => {
          const tags = Array.isArray(row.tags) ? row.tags : [];
    
          // Find service tag
          const serviceTag = tags.find((tag: string) =>
            typeof tag === 'string' && tag.startsWith(`${args.serviceTagKey}:`)
          );
    
          if (serviceTag) {
        // slice (not split) preserves any ':' inside the service name itself
        const service = serviceTag.slice(`${args.serviceTagKey}:`.length);
            const existing = serviceMap.get(service) || {
              service,
              totalCost: 0,
              totalTokens: 0,
              traceCount: 0,
            };
    
            serviceMap.set(service, {
              service,
              totalCost: existing.totalCost + (row.totalCost_sum || 0),
              totalTokens: existing.totalTokens + (row.totalTokens_sum || 0),
              traceCount: existing.traceCount + (row.count_count || 0),
            });
          }
        });
      }
    
      const serviceUsages = Array.from(serviceMap.values())
        .sort((a, b) => b.totalCost - a.totalCost)
        .slice(0, args.limit);
    
      return {
        content: [
          {
            type: 'text' as const,
            text: JSON.stringify(
              {
                projectId: client.getProjectId(),
                from: args.from,
                to: args.to,
                serviceTagKey: args.serviceTagKey,
                services: serviceUsages,
              },
              null,
              2
            ),
          },
        ],
      };
    }
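The tag-extraction and aggregation step above can be sketched in isolation. The function name, sample rows, and field names below mirror the handler but are invented for illustration; real rows come from the Langfuse metrics response:

```typescript
// Minimal standalone sketch of the per-service aggregation performed by the handler.
interface ServiceUsage {
  service: string;
  totalCost: number;
  totalTokens: number;
  traceCount: number;
}

function aggregateByServiceTag(rows: any[], serviceTagKey: string): ServiceUsage[] {
  const map = new Map<string, ServiceUsage>();
  const prefix = `${serviceTagKey}:`;

  for (const row of rows) {
    const tags: string[] = Array.isArray(row.tags) ? row.tags : [];
    // Find the first tag shaped like "service:<name>"
    const tag = tags.find((t) => typeof t === 'string' && t.startsWith(prefix));
    if (!tag) continue;

    const service = tag.slice(prefix.length);
    const prev = map.get(service) ?? { service, totalCost: 0, totalTokens: 0, traceCount: 0 };
    map.set(service, {
      service,
      totalCost: prev.totalCost + (row.totalCost_sum || 0),
      totalTokens: prev.totalTokens + (row.totalTokens_sum || 0),
      traceCount: prev.traceCount + (row.count_count || 0),
    });
  }

  // Sort by cost, most expensive service first
  return [...map.values()].sort((a, b) => b.totalCost - a.totalCost);
}

// Made-up rows: two traces tagged "service:chatbot", one "service:search".
const rows = [
  { tags: ['service:chatbot', 'env:prod'], totalCost_sum: 1.5, totalTokens_sum: 1000, count_count: 10 },
  { tags: ['service:chatbot'], totalCost_sum: 0.5, totalTokens_sum: 200, count_count: 2 },
  { tags: ['service:search'], totalCost_sum: 3.0, totalTokens_sum: 3000, count_count: 5 },
];

console.log(JSON.stringify(aggregateByServiceTag(rows, 'service')));
```

Note that rows whose tags contain no `service:` prefix are silently dropped, so untagged traces never appear in the breakdown.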
  • Zod input schema defining parameters for the usage_by_service tool: datetime range, service tag key, optional environment filter and limit.
    export const usageByServiceSchema = z.object({
      from: z.string().datetime(),
      to: z.string().datetime(),
      serviceTagKey: z.string().default('service'),
      environment: z.string().optional(),
      limit: z.number().optional().default(20),
    });
  • src/index.ts:1015-1018 (registration)
    Registration in the main tool dispatcher switch statement: parses arguments using the schema and invokes the handler.
    case 'usage_by_service': {
      const args = usageByServiceSchema.parse(request.params.arguments);
      return await usageByService(this.client, args);
    }
  • src/index.ts:212-244 (registration)
    Tool metadata registration in the listTools response, including name, description, and input schema for client discovery.
    {
      name: 'usage_by_service',
      description:
        'Analyze usage and cost by service/feature tag over a time period.',
      inputSchema: {
        type: 'object',
        properties: {
          from: {
            type: 'string',
            format: 'date-time',
            description: 'Start timestamp (ISO 8601)',
          },
          to: {
            type: 'string',
            format: 'date-time',
            description: 'End timestamp (ISO 8601)',
          },
          serviceTagKey: {
            type: 'string',
            description: 'Tag key for service identification (default: "service")',
          },
          environment: {
            type: 'string',
            description: 'Optional environment filter',
          },
          limit: {
            type: 'number',
            description: 'Maximum number of services to return (default: 20)',
          },
        },
        required: ['from', 'to'],
      },
    },
  • src/index.ts:55-55 (registration)
    Import statement bringing in the handler function and schema for use in the server.
    import { usageByService, usageByServiceSchema } from './tools/usage-by-service.js';
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it indicates this is an analysis/read operation (implied by 'analyze'), it doesn't disclose important behavioral aspects like whether this requires specific permissions, what format the analysis returns, whether there are rate limits, or how the 'limit' parameter affects results. The description is too minimal for a tool with 5 parameters and no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence that efficiently communicates the core purpose. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded, with the main action and scope stated upfront.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what the analysis returns, how results are structured, what 'usage and cost' metrics are included, or how the tool behaves with different parameter combinations. The minimal description leaves too many questions unanswered for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing clear documentation for all 5 parameters. The description adds minimal value beyond the schema - it mentions 'by service/feature tag' which relates to the 'serviceTagKey' parameter, and 'over a time period' which relates to 'from' and 'to' parameters. However, it doesn't provide additional context about parameter interactions or usage patterns beyond what's already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze usage and cost by service/feature tag over a time period.' It specifies the verb ('analyze'), resource ('usage and cost'), and scope ('by service/feature tag over a time period'). However, it doesn't explicitly distinguish this from sibling tools like 'usage_by_model' or 'get_cost_analysis', which would require more specific differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions analyzing 'by service/feature tag' but doesn't explain when this is preferable to other cost/usage analysis tools like 'usage_by_model' or 'get_cost_analysis'. There are no explicit when/when-not statements or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
