Glama / avivsinai

langfuse-mcp

Server Configuration

Describes the environment variables required to run the server.

LANGFUSE_HOST (required): Langfuse instance URL. Use https://cloud.langfuse.com for Langfuse Cloud.
LANGFUSE_PUBLIC_KEY (required): Langfuse Public Key from Settings -> API Keys.
LANGFUSE_SECRET_KEY (required): Langfuse Secret Key from Settings -> API Keys.
LANGFUSE_MCP_READ_ONLY (optional): If set to true, disables all write operations for safer read-only access.
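These variables are typically supplied through the MCP client configuration rather than the shell. A minimal sketch of a Claude Desktop-style `mcpServers` entry; the `uvx langfuse-mcp` launch command and the key values are assumptions, so adapt them to how you actually install and run the server:

```json
{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["langfuse-mcp"],
      "env": {
        "LANGFUSE_HOST": "https://cloud.langfuse.com",
        "LANGFUSE_PUBLIC_KEY": "pk-lf-...",
        "LANGFUSE_SECRET_KEY": "sk-lf-...",
        "LANGFUSE_MCP_READ_ONLY": "true"
      }
    }
  }
}
```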

Capabilities

Features and capabilities supported by this server

tools: { "listChanged": false }
prompts: { "listChanged": false }
resources: { "subscribe": false, "listChanged": false }
experimental: {}

Tools

Functions exposed to the LLM to take actions

list_datasets

List all datasets in the project with pagination.

Returns metadata about datasets including name, description, item count, and timestamps.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  page: Page number for pagination (starts at 1)
  limit: Maximum items per page (max 100)

Returns:
  A dictionary containing:
  - data: List of dataset metadata objects
  - metadata: Pagination info (page, limit, total)
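The pagination contract above (page starts at 1, limit capped at 100, metadata carrying page/limit/total) can be sketched with a helper that walks every page; `fetch_page` below is a hypothetical stand-in for the real tool call, not part of this server:

```python
def paginate(fetch_page, limit=100):
    """Collect all items from a list_datasets-style paginated response."""
    items, page = [], 1
    while True:
        resp = fetch_page(page=page, limit=limit)
        items.extend(resp["data"])
        meta = resp["metadata"]
        # stop once every item reported by the server has been seen
        if page * meta["limit"] >= meta["total"] or not resp["data"]:
            break
        page += 1
    return items

# stub standing in for the real tool call
def fake_fetch(page, limit):
    all_items = [{"name": f"ds-{i}"} for i in range(5)]
    start = (page - 1) * limit
    return {"data": all_items[start:start + limit],
            "metadata": {"page": page, "limit": limit, "total": len(all_items)}}

print(len(paginate(fake_fetch, limit=2)))  # 5
```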
get_dataset

Get a specific dataset by name.

Retrieves dataset details including metadata and item count.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  name: The name of the dataset to fetch

Returns:
  A dictionary containing dataset details:
  - id: Unique dataset identifier
  - name: Dataset name
  - description: Dataset description
  - metadata: Custom metadata
  - items: List of dataset items (if included by the API)
  - runs: List of dataset runs (if included by the API)
list_dataset_items

List items in a dataset with pagination and optional filtering.

Returns dataset items with their input, expected output, and metadata.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  dataset_name: The name of the dataset to list items from
  source_trace_id: Optional filter by source trace ID
  source_observation_id: Optional filter by source observation ID
  page: Page number for pagination (starts at 1)
  limit: Maximum items per page (max 100)
  output_mode: How to format the response data

Returns:
  A dictionary containing:
  - data: List of dataset item objects
  - metadata: Pagination info (page, limit, total, dataset_name)
get_dataset_item

Get a specific dataset item by ID.

Retrieves the full dataset item including input, expected output, metadata, and linked traces.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  item_id: The ID of the dataset item to fetch
  output_mode: How to format the response data

Returns:
  A dictionary containing the dataset item details:
  - id: Unique item identifier
  - datasetId: Parent dataset ID
  - input: Input data for the item
  - expectedOutput: Expected output data
  - metadata: Custom metadata
  - sourceTraceId: Linked trace ID (if any)
  - sourceObservationId: Linked observation ID (if any)
  - status: Item status (ACTIVE or ARCHIVED)
create_dataset

Create a new dataset in the project.

Datasets are used to store evaluation test cases with input/expected output pairs.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  name: Name for the new dataset (must be unique)
  description: Optional description
  metadata: Optional custom metadata

Returns:
  A dictionary containing the created dataset details:
  - id: Unique dataset identifier
  - name: Dataset name
  - description: Dataset description
  - metadata: Custom metadata
  - createdAt: Creation timestamp
create_dataset_item

Create a new item in a dataset, or update if item_id already exists.

Dataset items store input/expected output pairs for evaluation. If item_id is provided and already exists, the item will be updated (upsert behavior).

Args:
  ctx: Context object containing lifespan context with Langfuse client
  dataset_name: Name of the target dataset
  input: Input data for the item
  expected_output: Expected output for evaluation
  metadata: Optional custom metadata
  source_trace_id: Optional linked trace ID
  source_observation_id: Optional linked observation ID
  item_id: Optional custom ID (enables upsert)
  status: Item status (ACTIVE or ARCHIVED)

Returns:
  A dictionary containing the created/updated item details
delete_dataset_item

Delete a dataset item by ID.

This is a permanent deletion and cannot be undone.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  item_id: The ID of the dataset item to delete

Returns:
  A dictionary confirming the deletion
find_exceptions

Get exception counts grouped by file path, function, or type.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  age: Number of minutes to look back (positive integer, max 7 days/10080 minutes)
  group_by: How to group exceptions - "file" groups by filename, "function" groups by function name, or "type" groups by exception type

Returns:
  List of exception counts grouped by the specified category (file, function, or type)
find_exceptions_in_file

Get detailed exception info for a specific file.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  filepath: Path to the file to search for exceptions (full path including extension)
  age: Number of minutes to look back (positive integer, max 7 days/10080 minutes)
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: List of summarized exception details
  - full_json_string: String containing the full JSON response
  - full_json_file: List of summarized exception details with file save info
get_exception_details

Get detailed exception info for a trace/span.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  trace_id: The ID of the trace to analyze for exceptions (unique identifier string)
  span_id: Optional span ID to filter by specific span (unique identifier string)
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: List of summarized exception details
  - full_json_string: String containing the full JSON response
  - full_json_file: List of summarized exception details with file save info
get_error_count

Get the number of traces with exceptions in the last N minutes.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  age: Number of minutes to look back (positive integer, max 7 days/10080 minutes)

Returns:
  Dictionary with error statistics including trace count, observation count, and exception count
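Several tools above share the age parameter and its documented 7-day cap. A minimal sketch of that constraint as client-side validation; the helper itself is hypothetical (the server enforces its own limits):

```python
MAX_AGE_MINUTES = 7 * 24 * 60  # documented cap: 7 days = 10080 minutes

def validate_age(age: int) -> int:
    """Check an age lookback value against the documented bounds."""
    if not isinstance(age, int) or age <= 0:
        raise ValueError("age must be a positive integer number of minutes")
    if age > MAX_AGE_MINUTES:
        raise ValueError(f"age may not exceed {MAX_AGE_MINUTES} minutes (7 days)")
    return age

print(validate_age(1440))  # 24 hours -> 1440
```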
fetch_observations

Get observations filtered by type and other criteria.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  type: The observation type to filter by (SPAN, GENERATION, or EVENT)
  age: Minutes ago to start looking (e.g., 1440 for 24 hours)
  name: Optional name filter (string pattern to match)
  user_id: Optional user ID filter (exact match)
  trace_id: Optional trace ID filter (exact match)
  parent_observation_id: Optional parent observation ID filter (exact match)
  page: Page number for pagination (starts at 1)
  limit: Maximum number of observations to return per page
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: List of summarized observation objects
  - full_json_string: String containing the full JSON response
  - full_json_file: List of summarized observation objects with file save info
fetch_observation

Get a single observation by ID.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  observation_id: The ID of the observation to fetch (unique identifier string)
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: Summarized observation object
  - full_json_string: String containing the full JSON response
  - full_json_file: Summarized observation object with file save info
get_prompt

Fetch a specific prompt by name with resolved dependencies.

Retrieves a prompt from Langfuse with all dependency tags resolved. Uses the SDK's built-in caching for optimal performance.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  name: The name of the prompt to fetch
  label: Optional label to fetch (e.g., 'production'). Cannot be used with version.
  version: Optional specific version number. Cannot be used with label.

Returns:
  A dictionary containing the prompt details:
  - id: Unique prompt identifier
  - name: Prompt name
  - version: Version number
  - type: 'text' or 'chat'
  - prompt: The prompt content (string for text, list for chat)
  - labels: List of labels assigned to this version
  - tags: List of tags
  - config: Model configuration (temperature, model, etc.)

Raises:
  ValueError: If both label and version are specified
  LookupError: If prompt not found
get_prompt_unresolved

Fetch a specific prompt by name WITHOUT resolving dependencies.

Returns raw prompt content with dependency tags intact (e.g., @@@langfusePrompt:name=xxx@@@) when the SDK supports resolve=false. Otherwise returns the resolved prompt and marks metadata.resolved=True. Useful for analyzing prompt composition and debugging dependency chains.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  name: The name of the prompt to fetch
  label: Optional label to fetch. Cannot be used with version.
  version: Optional specific version number. Cannot be used with label.

Returns:
  A dictionary containing the raw prompt details with dependency tags preserved.

Raises:
  ValueError: If both label and version are specified
  LookupError: If prompt not found
list_prompts

List and filter prompts in the project.

Returns metadata about prompts including versions, labels, tags, and last updated time.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  name: Optional filter by exact prompt name
  label: Optional filter by label on any version
  tag: Optional filter by tag
  page: Page number for pagination (starts at 1)
  limit: Maximum items per page (max 100)

Returns:
  A dictionary containing:
  - data: List of prompt metadata objects
  - metadata: Pagination info (page, limit, total)
create_text_prompt

Create a new text prompt version in Langfuse.

Prompts are immutable; creating a new version is the only way to update prompt content. Labels are unique across versions - assigning a label here will move it from other versions.

create_chat_prompt

Create a new chat prompt version in Langfuse.

Chat prompts are arrays of role/content messages. Prompts are immutable; create a new version to update content. Labels are unique across versions.
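To illustrate the role/content message array a chat prompt holds, a hypothetical two-message prompt; the {{variable}} placeholder follows Langfuse's templating convention, and the message text itself is invented:

```python
# hypothetical chat prompt body: a list of role/content messages
chat_prompt = [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "Summarize {{topic}} in one paragraph."},
]

# every message carries both a role and a content field
assert all({"role", "content"} <= set(msg) for msg in chat_prompt)
```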

update_prompt_labels

Update labels for a specific prompt version.

This is the only supported mutation for existing prompts. Provided labels are added to the version (existing labels are preserved). Labels are unique across versions, and the 'latest' label is managed by Langfuse.

get_data_schema

Get schema of trace, span and event objects.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  dummy: Unused parameter for API compatibility (can be left empty)

Returns:
  String containing the detailed schema definitions for traces, spans, events, and other core Langfuse data structures
fetch_sessions

Get a list of sessions in the current project.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  age: Minutes ago to start looking (e.g., 1440 for 24 hours)
  page: Page number for pagination (starts at 1)
  limit: Maximum number of sessions to return per page
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: List of summarized session objects
  - full_json_string: String containing the full JSON response
  - full_json_file: List of summarized session objects with file save info
get_session_details

Get detailed information about a specific session.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  session_id: The ID of the session to retrieve (unique identifier string)
  include_observations: If True, fetch and include the full observation objects instead of just IDs. Use this when you need access to system prompts, model parameters, or other details stored within observations. Significantly increases response time but provides complete data.
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: Summarized session details object
  - full_json_string: String containing the full JSON response
  - full_json_file: Summarized session details object with file save info

Usage Tips:
  - For quick browsing: use include_observations=False with output_mode="compact"
  - For full data but viewable in responses: use include_observations=True with output_mode="compact"
  - For complete data dumps: use include_observations=True with output_mode="full_json_file"
get_user_sessions

Get sessions for a user within a time range.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  user_id: The ID of the user to retrieve sessions for (unique identifier string)
  age: Minutes ago to start looking (e.g., 1440 for 24 hours)
  include_observations: If True, fetch and include the full observation objects instead of just IDs. Use this when you need access to system prompts, model parameters, or other details stored within observations. Significantly increases response time but provides complete data.
  output_mode: Controls the output format and detail level

Returns:
  Based on output_mode:
  - compact: List of summarized session objects
  - full_json_string: String containing the full JSON response
  - full_json_file: List of summarized session objects with file save info

Usage Tips:
  - For quick browsing: use include_observations=False with output_mode="compact"
  - For full data but viewable in responses: use include_observations=True with output_mode="compact"
  - For complete data dumps: use include_observations=True with output_mode="full_json_file"
fetch_traces

Find traces based on filters. All filter parameters are optional.

fetch_trace

Get a single trace by ID with full details.

Args:
  ctx: Context object containing lifespan context with Langfuse client
  trace_id: The ID of the trace to fetch (unique identifier string)
  include_observations: If True, fetch and include the full observation objects instead of just IDs. Use this when you need access to system prompts, model parameters, or other details stored within observations. Significantly increases response time but provides complete data.
  output_mode: Controls the output format and detail level

Returns:
  One of the following based on output_mode:
  - For 'compact' and 'full_json_file': a response dictionary with the structure:
    {
      "data": Single trace object,
      "metadata": {
        "file_path": Path to saved file (only for full_json_file mode),
        "file_info": File save details (only for full_json_file mode)
      }
    }
  - For 'full_json_string': a string containing the full JSON response

Usage Tips:
  - For quick browsing: use include_observations=False with output_mode="compact"
  - For full data but viewable in responses: use include_observations=True with output_mode="compact"
  - For complete data dumps: use include_observations=True with output_mode="full_json_file"
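The output_mode shapes described for fetch_trace can be sketched with a small unpacking helper; the response dictionaries below are stubs modeled on the documented structure, not real server output:

```python
import json

def unpack_trace_response(resp, output_mode):
    """Extract the trace data from a fetch_trace-style response."""
    if output_mode == "full_json_string":
        # a string containing the full JSON response
        return json.loads(resp)
    trace = resp["data"]  # single trace object
    if output_mode == "full_json_file":
        # metadata carries the saved-file path in this mode
        print("saved to:", resp["metadata"].get("file_path"))
    return trace

# stub response modeled on the documented 'compact' shape
compact = {"data": {"id": "tr-1"}, "metadata": {}}
print(unpack_trace_response(compact, "compact")["id"])  # tr-1
```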

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/avivsinai/langfuse-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.