
LangSmith MCP Server

Official
by langchain-ai

list_examples

Retrieve dataset examples from LangSmith with filtering by ID, metadata, splits, or version to support data analysis and model evaluation workflows.

Instructions

Fetch examples from a LangSmith dataset with advanced filtering options.

Note: Either dataset_id, dataset_name, or example_ids must be provided. If multiple are provided, they are used in order of precedence: example_ids, dataset_id, dataset_name.
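The precedence rule above can be sketched as a small helper. This is purely illustrative of the documented behavior (example_ids, then dataset_id, then dataset_name), not the server's actual implementation; the function name `resolve_target` is hypothetical.

```python
from typing import Optional


def resolve_target(
    example_ids: Optional[str] = None,
    dataset_id: Optional[str] = None,
    dataset_name: Optional[str] = None,
) -> tuple[str, str]:
    """Pick the identifier list_examples would honor, per the documented precedence."""
    if example_ids is not None:
        return ("example_ids", example_ids)
    if dataset_id is not None:
        return ("dataset_id", dataset_id)
    if dataset_name is not None:
        return ("dataset_name", dataset_name)
    # Mirrors the documented requirement that at least one be provided.
    raise ValueError("Either dataset_id, dataset_name, or example_ids must be provided")
```

For instance, passing both `dataset_id` and `dataset_name` resolves to the dataset ID, since it ranks higher in the precedence order.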

Args:
- dataset_id (Optional[str]): Dataset ID to retrieve examples from
- dataset_name (Optional[str]): Dataset name to retrieve examples from
- example_ids (Optional[str]): Specific example IDs as a JSON array string (e.g., '["id1", "id2"]') or a single ID
- limit (int): Maximum number of examples to return (default: 10)
- offset (int): Number of examples to skip (default: 0)
- filter (Optional[str]): Filter string using LangSmith query syntax (e.g., 'has(metadata, {"key": "value"})')
- metadata (Optional[str]): Metadata to filter by as a JSON object string (e.g., '{"key": "value"}')
- splits (Optional[str]): Dataset splits as a JSON array string (e.g., '["train", "test"]') or a single split
- inline_s3_urls (Optional[str]): Whether to inline S3 URLs: "true" or "false" (default: SDK default if not specified)
- include_attachments (Optional[str]): Whether to include attachments: "true" or "false" (default: SDK default if not specified)
- as_of (Optional[str]): Dataset version tag OR ISO timestamp to retrieve examples as of that version/time
- ctx: FastMCP context (automatically provided)

Returns: Dict[str, Any]: Dictionary containing the examples and metadata, or an error message if the examples cannot be retrieved
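A hypothetical argument payload illustrates the documented conventions: list- and object-valued parameters are passed as JSON strings, and boolean-like flags are the strings "true"/"false". All values below are placeholders, not real dataset identifiers.

```python
import json

# Illustrative arguments for a list_examples call (placeholder values).
arguments = {
    "dataset_name": "my-dataset",
    "limit": 25,
    "offset": 0,
    # List- and object-valued parameters are JSON strings, not native lists/dicts:
    "metadata": json.dumps({"key": "value"}),
    "splits": json.dumps(["train", "test"]),
    # LangSmith query syntax filter, passed as a plain string:
    "filter": 'has(metadata, {"key": "value"})',
    # Boolean-like flags are string-valued:
    "inline_s3_urls": "false",
    "include_attachments": "true",
    # Version pin: a dataset tag or an ISO timestamp
    "as_of": "2024-01-01T00:00:00Z",
}
```

Note that `example_ids` is omitted here; including it would take precedence over `dataset_name` per the usage note above.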

Input Schema

Name | Required | Description | Default
--- | --- | --- | ---
dataset_id | No | |
dataset_name | No | |
example_ids | No | |
filter | No | |
metadata | No | |
splits | No | |
inline_s3_urls | No | |
include_attachments | No | |
as_of | No | |
limit | No | |
offset | No | |

Output Schema

No arguments

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the tool fetches with filtering and returns a dictionary or error, but lacks details on permissions, rate limits, side effects, or pagination behavior. The description adds some context (e.g., precedence rules, default behaviors) but doesn't fully disclose behavioral traits for a complex tool with 11 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with a purpose statement, a usage note, parameter details, and return info. It is appropriately sized for a complex tool; the parameter list is lengthy but necessary. Every sentence adds value, and the key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (11 parameters, no annotations, 0% schema description coverage), the description is quite complete. It covers purpose, usage rules, parameter semantics, and return values. An output schema exists, so return details aren't strictly needed. The main gap is the lack of behavioral context such as permissions or side effects, which prevents a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides detailed semantics for all 11 parameters, including data types, defaults, formats (e.g., JSON strings), and examples. This goes well beyond what the bare schema offers, making parameter usage clear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch examples from a LangSmith dataset with advanced filtering options.' It specifies the verb ('fetch'), resource ('examples'), and scope ('LangSmith dataset'), and distinguishes it from siblings like 'read_example' (singular) and 'list_datasets' (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Note: Either dataset_id, dataset_name, or example_ids must be provided.' It also clarifies precedence rules when multiple identifiers are supplied. However, it doesn't explicitly contrast with alternatives like 'read_example' or 'update_examples', which a perfect score would require.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
