
LangSmith MCP Server

Official
by langchain-ai

fetch_runs

Retrieve LangSmith runs for analytics and export using flexible filters, query language, and trace-level constraints to explore traces, tools, and chains.

Instructions

Fetch LangSmith runs (traces, tools, chains, etc.) from one or more projects using flexible filters, query language expressions, and trace-level constraints.


🧩 PURPOSE

This is a general-purpose LangSmith run fetcher designed for analytics, trace export, and automated exploration.

It wraps client.list_runs() with complete support for:

  • Multiple project names or IDs

  • The Filter Query Language (FQL) for precise queries

  • Hierarchical filtering across trace trees

  • Sorting and result limiting

It returns raw run data suitable for further analysis or export.
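Because `project_name` may be either a plain name or a JSON array string, the tool has to normalize it before calling the SDK. A minimal sketch of how that normalization might look (the helper name and logic are assumptions, not the server's actual code):

```python
import json

def normalize_project_names(project_name: str) -> list[str]:
    """Accept a single project name or a JSON array string
    (e.g. '["project1", "project2"]') and return a list of names.
    Hypothetical helper for illustration only."""
    stripped = project_name.strip()
    if stripped.startswith("["):
        names = json.loads(stripped)
        if not isinstance(names, list):
            raise ValueError("expected a JSON array of project names")
        return [str(n) for n in names]
    return [project_name]

# The resulting list could then be forwarded to the LangSmith SDK, e.g.:
# client.list_runs(project_name=names, run_type="llm", limit=50)
```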


βš™οΈ PARAMETERS

project_name : str The project name to fetch runs from. For multiple projects, use a JSON array string (e.g., '["project1", "project2"]').

trace_id : str, optional Return only runs that belong to a specific trace tree. It is a UUID string, e.g. "123e4567-e89b-12d3-a456-426614174000".

run_type : str, optional Filter runs by type (e.g. "llm", "chain", "tool", "retriever").

error : str, optional Filter by error status: "true" for errored runs, "false" for successful runs.

is_root : str, optional Filter root traces: "true" for only top-level traces, "false" to exclude roots. If not provided, returns all runs.

filter : str, optional A Filter Query Language (FQL) expression that filters runs by fields, metadata, tags, feedback, latency, or time.

─── Common field names ───

  • `id`, `name`, `run_type`

  • `start_time`, `end_time`

  • `latency`

  • `total_tokens`

  • `error`

  • `tags`

  • `feedback_key`, `feedback_score`

  • `metadata_key`, `metadata_value`

  • `execution_order`

─── Supported comparators ───

  • `eq`, `neq` → equal / not equal

  • `gt`, `gte`, `lt`, `lte` → numeric or time comparisons

  • `has` → tag or metadata contains value

  • `search` → substring or full-text match

  • `and`, `or`, `not` → logical operators

─── Examples ───

```python
'gt(latency, "5s")'                               # took longer than 5 seconds
'neq(error, null)'                                # errored runs
'has(tags, "beta")'                               # runs tagged "beta"
'and(eq(name,"ChatOpenAI"), eq(run_type,"llm"))'  # named & typed runs
'search("image classification")'                  # full-text search
```
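Since FQL expressions are plain strings, they can be composed programmatically rather than hand-written. A small sketch of hypothetical helpers (not part of the LangSmith SDK) that build the comparator syntax shown above:

```python
def fql(op: str, *args: str) -> str:
    """Build a single FQL call string, e.g. fql("eq", "run_type", '"llm"').
    Hypothetical helper for composing filter expressions."""
    return f'{op}({",".join(args)})'

def quoted(value: str) -> str:
    """Quote a literal value for use inside an FQL expression."""
    return f'"{value}"'

# Compose: and(eq(name,"ChatOpenAI"),eq(run_type,"llm"))
expr = fql("and",
           fql("eq", "name", quoted("ChatOpenAI")),
           fql("eq", "run_type", quoted("llm")))
```

The resulting string can be passed directly as the `filter`, `trace_filter`, or `tree_filter` argument.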

trace_filter : str, optional Filter applied to the root run in each trace tree. Lets you select child runs based on root attributes or feedback.

Example:
```python
'and(eq(feedback_key,"user_score"), eq(feedback_score,1))'
```
→ return runs whose root trace has a user_score of 1.

tree_filter : str, optional Filter applied to any run in the trace tree (including siblings or children). Example:
```python
'eq(name,"ExpandQuery")'
```
→ return runs if any run in their trace had that name.

order_by : str, default "-start_time" Sort field; prefix with "-" for descending order.

limit : int, default 50 Maximum number of runs to return.

reference_example_id : str, optional Filter runs by reference example ID. Returns only runs associated with the specified dataset example ID.

format_type : str, default "pretty" Output format for extracted messages. Options:

  • "pretty" (default): Human-readable formatted text focusing on human/AI/tool message exchanges

  • "json": Pretty-printed JSON format

  • "raw": Compact single-line JSON format

When format_type is set, the tool extracts messages from runs and formats them, making it ideal for conversational AI agents that care about message exchanges rather than full trace details. The response returns only the formatted output:

  • `formatted`: Formatted string representation of messages

When format_type is not set, the response returns:

  • `runs`: Full run data

📤 RETURNS

Dict[str, Any] Dictionary containing:

  • If format_type is set: {"formatted": str} - formatted string representation of messages

  • If format_type is not set: {"runs": List[Dict]} - list of LangSmith run dictionaries
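Because the return shape depends on whether format_type was supplied, a caller can dispatch on the keys present in the response. A minimal sketch (the helper name is an assumption):

```python
from typing import Any

def unpack_fetch_runs(response: dict[str, Any]) -> Any:
    """Return the formatted string when format_type was set,
    otherwise the list of run dictionaries.
    Hypothetical convenience wrapper around the documented return shape."""
    if "formatted" in response:
        return response["formatted"]
    return response.get("runs", [])
```

For example, `unpack_fetch_runs({"formatted": "Human: hi"})` yields the string, while `unpack_fetch_runs({"runs": [...]})` yields the list.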


🧪 EXAMPLES

1️⃣ Get latest 10 root runs

runs = fetch_runs("alpha-project", is_root="true", limit=10)

2️⃣ Get all tool runs that errored

runs = fetch_runs("alpha-project", run_type="tool", error="true")

3️⃣ Get all runs that took >5s and have tag "experimental"

runs = fetch_runs("alpha-project", filter='and(gt(latency,"5s"), has(tags,"experimental"))')

4️⃣ Get all runs in a specific conversation thread

thread_id = "abc-123"
fql = f'and(in(metadata_key, ["session_id","conversation_id","thread_id"]), eq(metadata_value, "{thread_id}"))'
runs = fetch_runs("alpha-project", is_root="true", filter=fql)

5️⃣ List all runs called "extractor" whose root trace has feedback user_score=1

runs = fetch_runs(
    "alpha-project",
    filter='eq(name,"extractor")',
    trace_filter='and(eq(feedback_key,"user_score"), eq(feedback_score,1))',
)

6️⃣ List all runs that started after a timestamp and either errored or got low feedback

fql = 'and(gt(start_time,"2023-07-15T12:34:56Z"), or(neq(error,null), and(eq(feedback_key,"Correctness"), eq(feedback_score,0.0))))'
runs = fetch_runs("alpha-project", filter=fql)

7️⃣ Get formatted messages for conversational AI (default: pretty format)

# Returns formatted messages focusing on human/AI/tool exchanges
result = fetch_runs("alpha-project", limit=10, format_type="pretty")
# result["formatted"] contains human-readable formatted messages

8️⃣ Get messages in JSON format

result = fetch_runs("alpha-project", limit=10, format_type="json")
# result["formatted"] contains the messages as a pretty-printed JSON string

🧠 NOTES FOR AGENTS

  • Use this to query LangSmith data sources dynamically.

  • Compose FQL strings programmatically based on your intent.

  • Combine filter, trace_filter, and tree_filter for hierarchical logic.

  • Always verify that project_name matches an existing LangSmith project.

  • Returned run dictionaries have fields like id, name, run_type, inputs, outputs, error, start_time, end_time, latency, metadata, feedback, etc.

  • If the trace is big, save it to a file (if you have this ability) and analyze it locally.

  • For conversational AI agents: Use format_type="pretty" (default) to get human-readable message exchanges focusing on human/AI/tool messages rather than full trace details.
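Following the note about large traces, persisting a result to disk before analyzing it locally might look like this sketch (file layout and helper names are illustrative assumptions):

```python
import json
from pathlib import Path

def save_runs(runs: list[dict], path: str) -> Path:
    """Dump raw run dictionaries to a JSON file for local analysis.
    default=str makes non-JSON types (e.g. datetimes) serializable."""
    out = Path(path)
    out.write_text(json.dumps(runs, indent=2, default=str))
    return out

def count_errors(path: str) -> int:
    """Example local analysis: count runs with a non-empty error field."""
    runs = json.loads(Path(path).read_text())
    return sum(1 for r in runs if r.get("error"))
```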

Input Schema

| Name | Required | Default |
|------|----------|---------|
| project_name | Yes | |
| trace_id | No | |
| run_type | No | |
| error | No | |
| is_root | No | |
| filter | No | |
| trace_filter | No | |
| tree_filter | No | -start_time |
| limit | No | |
| reference_example_id | No | |
| format_type | No | pretty |
