
Langfuse MCP Server

An MCP server for querying Langfuse analytics, cost metrics, and usage data across multiple projects.

Features

  • Multi-project support with environment-based configuration

  • Cost and usage analytics by model, service, and environment

  • Trace analysis and debugging tools

  • Metrics API integration for aggregated analytics

Installation

Option 1: Using npx (When Published)

```bash
# No installation needed - run directly with npx
npx langfuse-mcp
```

Note: The package is configured for npm publishing but has not been published yet. Use the local development option below for now.

Option 2: Local Development

```bash
git clone https://github.com/therealsachin/langfuse-mcp-server.git
cd langfuse-mcp-server
npm install
npm run build
```

Configuration

Set environment variables for each Langfuse project:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-xxx
LANGFUSE_SECRET_KEY=sk-lf-xxx
LANGFUSE_BASEURL=https://us.cloud.langfuse.com
```
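The server's `config.ts` loads these variables at startup. A minimal TypeScript sketch of such a loader, assuming only the three variables shown above (the `LangfuseProject` type and function name are illustrative, not the actual implementation):

```typescript
// Sketch of an environment-based project config loader.
// LangfuseProject and loadProjectFromEnv are illustrative names.
interface LangfuseProject {
  publicKey: string;
  secretKey: string;
  baseUrl: string;
}

function loadProjectFromEnv(env: Record<string, string | undefined>): LangfuseProject {
  const publicKey = env.LANGFUSE_PUBLIC_KEY;
  const secretKey = env.LANGFUSE_SECRET_KEY;
  if (!publicKey || !secretKey) {
    throw new Error("LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY are required");
  }
  return {
    publicKey,
    secretKey,
    // Default to Langfuse Cloud (US) when no base URL is set.
    baseUrl: env.LANGFUSE_BASEURL ?? "https://us.cloud.langfuse.com",
  };
}
```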

Available Tools

Core Tools (original)

  1. list_projects - List all configured Langfuse projects

  2. project_overview - Get cost, tokens, and trace summary for a project

  3. usage_by_model - Break down usage and cost by AI model

  4. usage_by_service - Analyze usage by service/feature tag

  5. top_expensive_traces - Find the most expensive traces

  6. get_trace_detail - Get detailed information about a specific trace

Extended Tools (requested)

  1. get_projects - Alias for list_projects (list available Langfuse projects)

  2. get_metrics - Query aggregated metrics (costs, tokens, counts) with flexible filtering

  3. get_traces - Fetch traces with comprehensive filtering options

  4. get_observations - Get LLM generations/spans with details and filtering

  5. get_cost_analysis - Specialized cost breakdowns by model/user/daily trends

  6. get_daily_metrics - Daily usage trends and patterns with averages

Usage with Claude Desktop

Add to your claude_desktop_config.json:

Option 1: Using npx (When Published)

```json
{
  "mcpServers": {
    "langfuse-analytics": {
      "command": "npx",
      "args": ["langfuse-mcp"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-lf-xxx",
        "LANGFUSE_SECRET_KEY": "sk-lf-xxx",
        "LANGFUSE_BASEURL": "https://us.cloud.langfuse.com"
      }
    }
  }
}
```

Option 2: Local Installation

```json
{
  "mcpServers": {
    "langfuse-analytics": {
      "command": "node",
      "args": ["/path/to/langfuse-mcp/build/index.js"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-lf-xxx",
        "LANGFUSE_SECRET_KEY": "sk-lf-xxx",
        "LANGFUSE_BASEURL": "https://us.cloud.langfuse.com"
      }
    }
  }
}
```

Example Queries

Once integrated with Claude Desktop, you can ask questions like:

  • "Show me the cost overview for the last 7 days"

  • "Which AI models are most expensive this month?"

  • "Find the top 10 most expensive traces from yesterday"

  • "Break down usage by service for the production environment"

  • "Show me details for trace xyz-123"

Development

```bash
# Watch mode for development
npm run watch

# Test with MCP Inspector
npm run inspector

# Test endpoints
npm run test
```

Publishing to NPM

To make the package available via npx langfuse-mcp:

```bash
# Login to npm (first time only)
npm login

# Publish the package
npm publish

# Test global installation
npx langfuse-mcp
```

Project Structure

```
src/
├── index.ts               # Main server entry point
├── config.ts              # Project configuration loader
├── langfuse-client.ts     # Langfuse client wrapper
├── types.ts               # TypeScript type definitions
└── tools/                 # All 12 MCP tools
    ├── list-projects.ts
    ├── project-overview.ts
    ├── usage-by-model.ts
    ├── usage-by-service.ts
    ├── top-expensive-traces.ts
    ├── get-trace-detail.ts
    ├── get-projects.ts        # Alias for list-projects
    ├── get-metrics.ts         # Aggregated metrics
    ├── get-traces.ts          # Trace filtering
    ├── get-observations.ts    # LLM generations
    ├── get-cost-analysis.ts   # Cost breakdowns
    └── get-daily-metrics.ts   # Daily trends
```

API Integration

This server uses the Langfuse public API endpoints:

  • /api/public/metrics - For aggregated analytics using GET with JSON query parameter

  • /api/public/metrics/daily - For daily usage metrics and cost breakdowns

  • /api/public/traces - For trace listing, filtering, and individual trace retrieval

  • /api/public/observations - For detailed observation analysis and LLM generation metrics

API Implementation Notes:

  • Metrics API: Uses GET method with URL-encoded JSON in the query parameter

  • Traces API: Supports advanced filtering, pagination, and ordering

  • Observations API: Provides detailed LLM generation and span data

  • Daily Metrics API: Specialized endpoint for daily aggregated usage statistics

All authentication is handled server-side using Basic Auth with your Langfuse API keys.
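Putting the notes above together, a metrics request can be assembled roughly like this (a sketch, not the server's actual client code; the example `query` shape is an assumption):

```typescript
// Sketch: build a GET request to /api/public/metrics with the JSON query
// URL-encoded into the `query` parameter, authenticated via Basic Auth.
function buildMetricsRequest(
  baseUrl: string,
  publicKey: string,
  secretKey: string,
  query: object,
): { url: string; headers: Record<string, string> } {
  const encoded = encodeURIComponent(JSON.stringify(query));
  return {
    url: `${baseUrl}/api/public/metrics?query=${encoded}`,
    headers: {
      // Basic Auth: base64("publicKey:secretKey")
      Authorization: `Basic ${Buffer.from(`${publicKey}:${secretKey}`).toString("base64")}`,
    },
  };
}
```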

Troubleshooting

✅ Fixed: 405 Method Not Allowed Errors

Previous Issue: Earlier versions encountered "405 Method Not Allowed" errors due to incorrect API usage.

Solution: This has been FIXED in the current version by using the correct Langfuse API implementation:

  • Metrics API: Now uses GET method with URL-encoded JSON query parameter (correct approach)

  • Traces API: Uses the actual /api/public/traces endpoint with proper filtering

  • Observations API: Uses /api/public/observations endpoint with correct parameters

  • Daily Metrics: Uses specialized /api/public/metrics/daily endpoint

✅ Fixed: Cost Values Returning as Zero

Previous Issue: Cost analysis tools were returning zero values even when actual cost data existed.

Solution: This has been FIXED by correcting field name mapping in API response parsing:

  • Metrics API Response Structure: The API returns aggregated field names like totalCost_sum, count_count, totalTokens_sum

  • Updated Field Access: All tools now use correct aggregated field names instead of direct field names

  • Daily Metrics Integration: Cost analysis now uses getDailyMetrics API for cleaner daily cost breakdowns

  • Affected Tools: get-cost-analysis, get-metrics, usage-by-model, usage-by-service, project-overview, get-daily-metrics
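A minimal sketch of that field mapping (the response row shape is simplified for illustration; only the aggregated field names come from the notes above):

```typescript
// Sketch: metrics rows come back with aggregated names such as
// totalCost_sum, totalTokens_sum, and count_count - not plain field names.
interface MetricsRow {
  [field: string]: number | string | undefined;
}

function readAggregates(row: MetricsRow) {
  return {
    totalCost: Number(row["totalCost_sum"] ?? 0),
    totalTokens: Number(row["totalTokens_sum"] ?? 0),
    count: Number(row["count_count"] ?? 0),
  };
}
```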

✅ Fixed: Response Size and API Parameter Issues

Previous Issues:

  1. get_observations returning responses exceeding MCP token limits (200k+ tokens)

  2. get_traces returning 400 Bad Request errors

Solutions Applied:

  • get_observations Response Size Control:

    • Added includeInputOutput: false parameter (default) to exclude large prompt/response content

    • Added truncateContent: 500 parameter to limit content size when included

    • Reduced default limit from 25 to 10 observations

    • Content truncation for input/output fields when enabled

  • get_traces API Parameter Fixes:

    • Added parameter validation for orderBy field

    • Enhanced error logging with full request details for debugging

    • Added proper error handling with detailed error responses
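The content-truncation step can be sketched as follows (a simplified standalone helper, not the server's exact code; the 500-character default comes from the `truncateContent: 500` parameter above):

```typescript
// Sketch: limit input/output content to `maxChars` characters when
// includeInputOutput is enabled; non-strings are serialized first.
function truncateContent(value: unknown, maxChars = 500): string {
  const text = typeof value === "string" ? value : JSON.stringify(value);
  return text.length <= maxChars ? text : `${text.slice(0, maxChars)}… [truncated]`;
}
```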

✅ Fixed: Cost Analysis Data Aggregation

Previous Issue: Cost analysis was showing zero values for total costs and model breakdowns while daily data worked correctly.

Root Cause: The Metrics API field mapping was still incorrect despite earlier fixes.

Solution: Switched to using the working Daily Metrics API data for all aggregations:

  • Total Cost Calculation: Now sums from daily data instead of broken metrics API

  • Model Breakdown: Extracts and aggregates model costs from daily usage data

  • Daily Breakdown: Optimized to reuse already-fetched daily data

  • User Breakdown: Still uses metrics API but with enhanced debugging

Result:

  • totalCost now shows correct values (sum of daily costs)

  • byModel now populated with real model cost breakdowns

  • byDay continues to work perfectly

  • 🔍 byUser includes debugging to identify any remaining field mapping issues
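The daily-data aggregation described above can be sketched like this (the `DailyRow` shape is a simplified assumption based on the Daily Metrics fields mentioned: `totalCost`, `totalUsage`):

```typescript
// Sketch: derive totalCost and a per-model breakdown from daily metrics rows.
// The row shape is assumed for illustration; the real API returns more fields.
interface DailyRow {
  date: string;
  totalCost: number;
  usage: { model: string; totalCost: number; totalUsage: number }[];
}

function aggregateDaily(days: DailyRow[]) {
  const byModel = new Map<string, { totalCost: number; totalTokens: number }>();
  let totalCost = 0;
  for (const day of days) {
    totalCost += day.totalCost; // total = sum of daily costs
    for (const u of day.usage) {
      const m = byModel.get(u.model) ?? { totalCost: 0, totalTokens: 0 };
      m.totalCost += u.totalCost;
      m.totalTokens += u.totalUsage;
      byModel.set(u.model, m);
    }
  }
  return { totalCost, byModel };
}
```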

✅ Fixed: usage_by_model Showing Zero Costs/Tokens

Previous Issue: usage_by_model showed observation counts correctly but all costs and tokens as zero.

Root Cause: Same metrics API field mapping issue affecting cost calculations.

Solution: Applied the same daily metrics approach used in cost analysis:

  • Primary Method: Uses getDailyMetrics API to aggregate model costs and tokens from daily usage breakdowns

  • Fallback Method: Falls back to original metrics API with enhanced debugging if daily API fails

  • Data Aggregation: Properly extracts totalCost, totalUsage, and countObservations from daily data

Result:

  • ✅ Models now show real totalCost values instead of 0

  • ✅ Models now show real totalTokens values instead of 0

  • observationCount continues to work correctly

Performance Considerations

API Efficiency: The server now uses native Langfuse endpoints efficiently:

  • Metrics queries are processed server-side by Langfuse for optimal performance

  • Trace and observation filtering happens at the API level to reduce data transfer

  • Daily metrics use the specialized endpoint for pre-aggregated data

Environment Variables

Make sure these environment variables are properly set:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-xxx                    # Your Langfuse public key
LANGFUSE_SECRET_KEY=sk-lf-xxx                    # Your Langfuse secret key
LANGFUSE_BASEURL=https://us.cloud.langfuse.com   # Your Langfuse instance URL
```