
langfuse-mcp

list_prompts

Browse and filter your Langfuse project prompts to access metadata such as versions, labels, tags, and last updated time.

Instructions

List and filter prompts in the project.

Returns metadata about prompts including versions, labels, tags, and last updated time.

Args:
    ctx: Context object containing lifespan context with Langfuse client
    name: Optional filter by exact prompt name
    label: Optional filter by label on any version
    tag: Optional filter by tag
    page: Page number for pagination (starts at 1)
    limit: Maximum items per page (max 100)

Returns:
    A dictionary containing:
    - data: List of prompt metadata objects
    - metadata: Pagination info (page, limit, total)
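The pagination contract above can be sketched in client code. This is illustrative only: `call_list_prompts` is a stand-in for however your MCP client actually invokes the langfuse-mcp tool, and the stubbed response simply mirrors the documented shape (`data` plus `metadata` with `page`, `limit`, `total`).

```python
# Illustrative sketch, not the real client: `call_list_prompts` stubs the
# documented response shape of the `list_prompts` tool.

def call_list_prompts(page=1, limit=50, **filters):
    """Stub returning the documented shape; replace with a real MCP tool call."""
    prompts = [{"name": f"prompt-{i}", "versions": [1], "labels": [], "tags": []}
               for i in range(120)]
    start, end = (page - 1) * limit, page * limit
    return {
        "data": prompts[start:end],
        "metadata": {"page": page, "limit": limit, "total": len(prompts)},
    }

def fetch_all_prompts(limit=50):
    """Walk pages until `total` is exhausted, collecting every prompt."""
    results, page = [], 1
    while True:
        resp = call_list_prompts(page=page, limit=limit)
        results.extend(resp["data"])
        if page * resp["metadata"]["limit"] >= resp["metadata"]["total"]:
            return results
        page += 1
```

Because pagination starts at page 1, the loop terminates once `page * limit` reaches the reported `total`.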

Input Schema

Name    Required  Description
name    No        Filter by exact prompt name
label   No        Filter by label (e.g., 'production', 'staging')
tag     No        Filter by tag
page    No        Page number for pagination (starts at 1)
limit   No        Items per page (max 100)
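The constraints in the schema (page numbering starts at 1, limit capped at 100, all filters optional) can be enforced client-side before issuing the call. This is a hypothetical helper for illustration, not part of the langfuse-mcp API:

```python
# Illustrative client-side validation of the documented constraints
# before building the tool-call arguments. All filters are optional.

def build_list_prompts_args(name=None, label=None, tag=None, page=1, limit=50):
    if page < 1:
        raise ValueError("page numbering starts at 1")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    args = {"page": page, "limit": limit}
    # Only include filters the caller actually set.
    for key, value in (("name", name), ("label", label), ("tag", tag)):
        if value is not None:
            args[key] = value
    return args
```

Omitting unset filters keeps the request minimal and avoids sending nulls the server would have to interpret.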

Output Schema


No arguments

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description documents a 'ctx' argument that is not present in the input schema, a discrepancy that misleads agents about the required inputs. No annotations are provided, so the description must carry the full burden, yet it does not state whether the operation is read-only or has any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
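The annotations the review finds missing are defined by the MCP specification as behavioral hints on a tool. The hint names below (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) come from that spec; the values shown are an assumption about what a read-only listing tool like list_prompts would plausibly declare, not what the server actually publishes.

```python
# Sketch of MCP tool annotations a read-only listing tool could declare.
# Hint names are from the MCP spec; the values here are assumed, not taken
# from the langfuse-mcp server itself.

list_prompts_annotations = {
    "title": "List Prompts",
    "readOnlyHint": True,       # only reads prompt metadata, no mutations
    "destructiveHint": False,   # nothing is deleted or overwritten
    "idempotentHint": True,     # same arguments yield the same listing
    "openWorldHint": True,      # talks to the external Langfuse API
}
```

With such annotations present, the prose description could stop carrying the full burden of disclosing side effects.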

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description has a clear structure with Args and Returns sections, and the first sentence states the purpose. However, the Args section repeats information already present in the schema, making the description longer than necessary.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the return value structure (list of prompt metadata and pagination info), which is useful since no output schema is defined. However, it omits error conditions, rate limits, and authentication requirements, and the extraneous 'ctx' parameter further reduces completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description's parameter entries largely repeat the schema (e.g., 'Filter by exact prompt name'), and mentioning the 'ctx' parameter, which is absent from the schema, is confusing rather than informative. The description adds little semantic value beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
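The 'ctx' discrepancy likely arises because MCP server frameworks typically inject the context object themselves and exclude it when generating the input schema from the function signature, while a verbatim docstring still mentions it. The sketch below is a generic illustration of that pattern (not the actual framework code): it derives the agent-visible parameter list by skipping the injected 'ctx'.

```python
# Generic illustration (assumed, not framework source) of why 'ctx' appears
# in a docstring but not in the published input schema: the server injects it
# and omits it from schema generation.

import inspect

def list_prompts(ctx, name=None, label=None, tag=None, page=1, limit=50):
    """Tool function; 'ctx' is injected by the server, not supplied by the agent."""

def input_schema_params(func):
    """Parameter names an agent would actually see in the input schema."""
    return [p for p in inspect.signature(func).parameters if p != "ctx"]
```

Here `input_schema_params(list_prompts)` yields only the five filter and pagination parameters, matching the published schema above.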

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description begins with 'List and filter prompts in the project', clearly stating the verb (list/filter) and resource (prompts). It explains the returned metadata (versions, labels, tags, last updated time), which distinguishes it from sibling tools like get_prompt that likely return full details. However, it does not explicitly contrast with siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists filter parameters (name, label, tag, page, limit), implying when to use the tool (filtering prompts). But it does not provide explicit guidance on when to use this versus alternatives like get_prompt or create_prompt, nor does it mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/avivsinai/langfuse-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.