# Langfuse MCP Server
An MCP server for querying Langfuse analytics, cost metrics, and usage data across multiple projects.
## Features

- Multi-project support with environment-based configuration
- Cost and usage analytics by model, service, and environment
- Trace analysis and debugging tools
- Metrics API integration for aggregated analytics
## Installation

### Option 1: Using npx (When Published)

**Note:** The package is configured for npm publishing but is not yet published. Use the local development option below for now.
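Once published, the server is intended to be launched with a single command; the package name comes from the Publishing to NPM section below:

```bash
# After the package is published to npm:
npx langfuse-mcp
```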
### Option 2: Local Development
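Until then, a typical local setup might look like the following sketch; the repository URL placeholder and the `build` script name are assumptions, so check `package.json` for the actual scripts:

```bash
# Clone, install dependencies, and build (script names are assumptions)
git clone <repository-url>
cd langfuse-mcp
npm install
npm run build
```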
## Configuration

Set environment variables for each Langfuse project:
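For example, a two-project setup could use per-project prefixes like the following; the exact variable names are an assumption and should be checked against the server's source, but each Langfuse project has its own public/secret key pair and (for Langfuse Cloud) the base URL `https://cloud.langfuse.com`:

```bash
# Hypothetical naming scheme: one key pair (and optional base URL) per project prefix
export LANGFUSE_PROJECT1_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_PROJECT1_SECRET_KEY="sk-lf-..."
export LANGFUSE_PROJECT1_BASE_URL="https://cloud.langfuse.com"

export LANGFUSE_PROJECT2_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_PROJECT2_SECRET_KEY="sk-lf-..."
export LANGFUSE_PROJECT2_BASE_URL="https://cloud.langfuse.com"
```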
## Available Tools

### Core Tools (original)

- `list_projects` - List all configured Langfuse projects
- `project_overview` - Get cost, token, and trace summary for a project
- `usage_by_model` - Break down usage and cost by AI model
- `usage_by_service` - Analyze usage by service/feature tag
- `top_expensive_traces` - Find the most expensive traces
- `get_trace_detail` - Get detailed information about a specific trace
### Extended Tools (requested)

- `get_projects` - Alias for `list_projects` (list available Langfuse projects)
- `get_metrics` - Query aggregated metrics (costs, tokens, counts) with flexible filtering
- `get_traces` - Fetch traces with comprehensive filtering options
- `get_observations` - Get LLM generations/spans with details and filtering
- `get_cost_analysis` - Specialized cost breakdowns by model/user/daily trends
- `get_daily_metrics` - Daily usage trends and patterns with averages
## Usage with Claude Desktop

Add to your `claude_desktop_config.json`:

### Option 1: Using npx (When Published)
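A minimal entry could look like this once the package is published; the server key and the environment variable names follow the hypothetical scheme from the Configuration section:

```json
{
  "mcpServers": {
    "langfuse": {
      "command": "npx",
      "args": ["-y", "langfuse-mcp"],
      "env": {
        "LANGFUSE_PROJECT1_PUBLIC_KEY": "pk-lf-...",
        "LANGFUSE_PROJECT1_SECRET_KEY": "sk-lf-...",
        "LANGFUSE_PROJECT1_BASE_URL": "https://cloud.langfuse.com"
      }
    }
  }
}
```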
### Option 2: Local Installation
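For a local checkout, point Claude Desktop at the built entry file instead; the `dist/index.js` path is an assumption about the build output location:

```json
{
  "mcpServers": {
    "langfuse": {
      "command": "node",
      "args": ["/absolute/path/to/langfuse-mcp/dist/index.js"],
      "env": {
        "LANGFUSE_PROJECT1_PUBLIC_KEY": "pk-lf-...",
        "LANGFUSE_PROJECT1_SECRET_KEY": "sk-lf-...",
        "LANGFUSE_PROJECT1_BASE_URL": "https://cloud.langfuse.com"
      }
    }
  }
}
```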
## Example Queries

Once integrated with Claude Desktop, you can ask questions like:

- "Show me the cost overview for the last 7 days"
- "Which AI models are most expensive this month?"
- "Find the top 10 most expensive traces from yesterday"
- "Break down usage by service for the production environment"
- "Show me details for trace xyz-123"
## Development

### Publishing to NPM

To make the package available via `npx langfuse-mcp`:
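The usual npm flow is roughly as follows, assuming the package name `langfuse-mcp` and an existing build script:

```bash
npm login                      # authenticate with the npm registry
npm run build                  # assumption: a build script compiles the server
npm publish --access public    # publish so `npx langfuse-mcp` resolves
```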
## Project Structure
## API Integration

This server uses the Langfuse public API endpoints:

- `/api/public/metrics` - For aggregated analytics using GET with a JSON query parameter
- `/api/public/metrics/daily` - For daily usage metrics and cost breakdowns
- `/api/public/traces` - For trace listing, filtering, and individual trace retrieval
- `/api/public/observations` - For detailed observation analysis and LLM generation metrics

**API Implementation Notes:**

- **Metrics API**: Uses the GET method with URL-encoded JSON in the `query` parameter
- **Traces API**: Supports advanced filtering, pagination, and ordering
- **Observations API**: Provides detailed LLM generation and span data
- **Daily Metrics API**: Specialized endpoint for daily aggregated usage statistics
All authentication is handled server-side using Basic Auth with your Langfuse API keys.
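As a rough sketch of what such a request looks like (the query fields shown are illustrative; consult the Langfuse Metrics API reference for the exact schema, and note that the environment variable names reuse the hypothetical scheme from the Configuration section):

```typescript
// Sketch: call the Langfuse Metrics API with a URL-encoded JSON `query` parameter.
const baseUrl = process.env.LANGFUSE_PROJECT1_BASE_URL ?? "https://cloud.langfuse.com";
const auth = Buffer.from(
  `${process.env.LANGFUSE_PROJECT1_PUBLIC_KEY}:${process.env.LANGFUSE_PROJECT1_SECRET_KEY}`
).toString("base64");

// Illustrative query: total cost over a one-week window, aggregated server-side by Langfuse.
const query = {
  view: "traces",
  metrics: [{ measure: "totalCost", aggregation: "sum" }],
  fromTimestamp: "2024-01-01T00:00:00Z",
  toTimestamp: "2024-01-08T00:00:00Z",
};

const url = `${baseUrl}/api/public/metrics?query=${encodeURIComponent(JSON.stringify(query))}`;
const response = await fetch(url, {
  method: "GET", // the Metrics API expects GET, not POST
  headers: { Authorization: `Basic ${auth}` },
});
const { data } = await response.json(); // rows with aggregated fields such as totalCost_sum
```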
## Troubleshooting

### ✅ Fixed: 405 Method Not Allowed Errors

**Previous Issue**: Earlier versions encountered "405 Method Not Allowed" errors due to incorrect API usage.

**Solution**: Fixed in the current version by calling the Langfuse API correctly:

- **Metrics API**: Now uses the GET method with a URL-encoded JSON `query` parameter (the correct approach)
- **Traces API**: Uses the actual `/api/public/traces` endpoint with proper filtering
- **Observations API**: Uses the `/api/public/observations` endpoint with correct parameters
- **Daily Metrics**: Uses the specialized `/api/public/metrics/daily` endpoint
### ✅ Fixed: Cost Values Returning as Zero

**Previous Issue**: Cost analysis tools returned zero values even when actual cost data existed.

**Solution**: Fixed by correcting the field name mapping in API response parsing:

- **Metrics API Response Structure**: The API returns aggregated field names such as `totalCost_sum`, `count_count`, and `totalTokens_sum`
- **Updated Field Access**: All tools now use the correct aggregated field names instead of the direct field names
- **Daily Metrics Integration**: Cost analysis now uses the `getDailyMetrics` API for cleaner daily cost breakdowns
- **Affected Tools**: `get_cost_analysis`, `get_metrics`, `usage_by_model`, `usage_by_service`, `project_overview`, `get_daily_metrics`
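In code, the fix boils down to reading the aggregated column names from each returned row; a minimal sketch, assuming a `data` array of rows shaped like the response structure above:

```typescript
// Sketch: read aggregated field names (e.g. totalCost_sum) instead of raw field names.
interface MetricsRow {
  totalCost_sum?: number;
  totalTokens_sum?: number;
  count_count?: number;
  [key: string]: unknown;
}

function summarize(rows: MetricsRow[]) {
  return rows.reduce(
    (acc, row) => ({
      totalCost: acc.totalCost + (row.totalCost_sum ?? 0),
      totalTokens: acc.totalTokens + (row.totalTokens_sum ?? 0),
      count: acc.count + (row.count_count ?? 0),
    }),
    { totalCost: 0, totalTokens: 0, count: 0 }
  );
}
```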
### ✅ Fixed: Response Size and API Parameter Issues

**Previous Issues**:

- `get_observations` returning responses exceeding MCP token limits (200k+ tokens)
- `get_traces` returning 400 Bad Request errors

**Solutions Applied**:

**`get_observations` response size control** (see the sketch after this list):

- Added an `includeInputOutput: false` parameter (default) to exclude large prompt/response content
- Added a `truncateContent: 500` parameter to limit content size when it is included
- Reduced the default limit from 25 to 10 observations
- Content truncation for input/output fields when enabled
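Conceptually, the size control works like this sketch; the parameter names match the list above, while the observation shape is simplified:

```typescript
// Sketch: drop or truncate input/output fields before returning observations over MCP.
interface Observation {
  id: string;
  input?: unknown;
  output?: unknown;
  [key: string]: unknown;
}

function shrinkObservation(
  obs: Observation,
  includeInputOutput = false,
  truncateContent = 500
): Observation {
  if (!includeInputOutput) {
    // Default: strip large prompt/response content entirely.
    const copy = { ...obs };
    delete copy.input;
    delete copy.output;
    return copy;
  }
  // Otherwise keep the content but clip it to `truncateContent` characters.
  const clip = (value: unknown) => {
    const text = typeof value === "string" ? value : JSON.stringify(value);
    return text && text.length > truncateContent ? text.slice(0, truncateContent) + "…" : text;
  };
  return { ...obs, input: clip(obs.input), output: clip(obs.output) };
}
```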
**`get_traces` API parameter fixes:**

- Added parameter validation for the `orderBy` field
- Enhanced error logging with full request details for debugging
- Added proper error handling with detailed error responses
### ✅ Fixed: Cost Analysis Data Aggregation

**Previous Issue**: Cost analysis showed zero values for total costs and model breakdowns while daily data worked correctly.

**Root Cause**: The Metrics API field mapping was still incorrect despite the earlier fixes.

**Solution**: Switched to using the working Daily Metrics API data for all aggregations:

- **Total Cost Calculation**: Now sums from daily data instead of the broken Metrics API query
- **Model Breakdown**: Extracts and aggregates model costs from the daily usage data
- **Daily Breakdown**: Optimized to reuse the already-fetched daily data
- **User Breakdown**: Still uses the Metrics API, but with enhanced debugging
**Result** (see the aggregation sketch below):

- ✅ `totalCost` now shows correct values (the sum of daily costs)
- ✅ `byModel` is now populated with real model cost breakdowns
- ✅ `byDay` continues to work correctly
- 🔍 `byUser` includes debugging to identify any remaining field mapping issues
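A minimal sketch of that aggregation, assuming the Daily Metrics API returns one entry per day with a per-model `usage` array using the field names listed in the `usage_by_model` fix below:

```typescript
// Sketch: derive totalCost and a per-model breakdown from daily metrics data.
interface DailyUsage {
  model: string;
  totalCost: number;
  totalUsage: number;
  countObservations: number;
}
interface DailyEntry {
  date: string;
  totalCost: number;
  usage: DailyUsage[];
}

function aggregateDaily(days: DailyEntry[]) {
  const byModel: Record<string, { totalCost: number; totalTokens: number; observationCount: number }> = {};
  let totalCost = 0;
  for (const day of days) {
    totalCost += day.totalCost; // total cost is the sum of daily costs
    for (const u of day.usage) {
      const entry = (byModel[u.model] ??= { totalCost: 0, totalTokens: 0, observationCount: 0 });
      entry.totalCost += u.totalCost;
      entry.totalTokens += u.totalUsage;
      entry.observationCount += u.countObservations;
    }
  }
  return { totalCost, byModel, byDay: days };
}
```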
### ✅ Fixed: `usage_by_model` Showing Zero Costs/Tokens

**Previous Issue**: `usage_by_model` reported observation counts correctly but showed all costs and tokens as zero.

**Root Cause**: The same Metrics API field mapping issue that affected cost calculations.

**Solution**: Applied the same daily-metrics approach used in cost analysis:

- **Primary Method**: Uses the `getDailyMetrics` API to aggregate model costs and tokens from the daily usage breakdowns
- **Fallback Method**: Falls back to the original Metrics API, with enhanced debugging, if the daily API fails
- **Data Aggregation**: Properly extracts `totalCost`, `totalUsage`, and `countObservations` from the daily data
**Result**:

- ✅ Models now show real `totalCost` values instead of 0
- ✅ Models now show real `totalTokens` values instead of 0
- ✅ `observationCount` continues to work correctly
## Performance Considerations

**API Efficiency**: The server now uses the native Langfuse endpoints efficiently:

- Metrics queries are processed server-side by Langfuse for optimal performance
- Trace and observation filtering happens at the API level to reduce data transfer
- Daily metrics use the specialized endpoint for pre-aggregated data
## Environment Variables

Make sure these environment variables are properly set:
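Using the hypothetical naming scheme from the Configuration section, that means at least one configured project, for example:

```bash
# One key pair (and base URL) per configured project prefix — names are assumptions
LANGFUSE_PROJECT1_PUBLIC_KEY=pk-lf-...
LANGFUSE_PROJECT1_SECRET_KEY=sk-lf-...
LANGFUSE_PROJECT1_BASE_URL=https://cloud.langfuse.com
```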