
allure-testops-mcp

allure_list_test_cases

Read-only · Idempotent

List test cases from an Allure project, optionally filtered by automation status and owner username. Returns id, name, automated, status, layer, and tags (tags provided when owner is set).

Instructions

List test cases for a project with optional automation and ownership filters.

Each TC carries id, name, automated, status, layer (e.g. UNIT, API, E2E), the createdBy / lastModifiedBy audit usernames, and a flat list of tag names. Caveat: the audit fields and tags are only populated when owner is set, because Allure's plain /testcase endpoint returns a compact projection that omits them — the owner path uses __search which returns the full projection.

Args:
- project_id: Allure project ID.
- ctx: MCP Context (auto-injected by FastMCP for progress reporting).
- automated: True — only automated, False — only manual, None — both.
- owner: Optional Allure username. When set, the response is narrowed to TCs where createdBy = owner OR lastModifiedBy = owner (case-sensitive, exact match), enforced server-side via Allure's RQL __search endpoint. The username must match [A-Za-z0-9._@-]+ — anything else is rejected at the MCP input layer (Pydantic pattern) to prevent RQL injection.

    **Why "creator/modifier" and not "owner".** Allure TestOps does
    not expose a separate ``owner`` field in RQL on most
    deployments — the closest stable proxy for "TCs I touched" is
    the union of ``createdBy`` and ``lastModifiedBy``.

    **Trade-off when ``owner`` is set.** ``__search`` does not
    accept the ``automated`` query parameter, so an ``automated``
    filter combined with ``owner`` is applied **client-side after
    the page is fetched**. ``pagination`` then reflects the raw
    owner-filtered set, not the further automation-filtered view —
    a fetched page of 50 may shrink. Raise ``size`` (max 200) or
    iterate ``page`` for full coverage.
- page: 0-based page index.
- size: Items per page (1-200; default 50).
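The injection guard described above can be sketched in plain Python (a minimal stand-in for the Pydantic pattern check; the function name is invented for illustration):

```python
import re

# Pattern quoted in the docstring: rejects anything that could break out
# of a quoted RQL string (spaces, quotes, parentheses, operators, ...).
OWNER_PATTERN = re.compile(r"[A-Za-z0-9._@-]+")

def validate_owner(owner: str) -> str:
    """Reject usernames that could be abused for RQL injection."""
    if not OWNER_PATTERN.fullmatch(owner):
        raise ValueError(f"invalid owner username: {owner!r}")
    return owner

validate_owner("jdoe")            # accepted
validate_owner("j.doe@corp.com")  # accepted
# validate_owner('x" or true')    # would raise ValueError
```

Because the pattern is applied with a full match, even a single disallowed character anywhere in the string is enough to reject it.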

Returns: dict with keys:
- project_id (int)
- count (int): items in this response (post any client-side automated filter)
- pagination (dict): raw Allure paging
- test_cases (list): each item carries id / name / automated / status / layer / created_by / last_modified_by / tags
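A minimal sketch of how count and pagination can diverge when owner forces the automated filter client-side (all data here is invented for illustration):

```python
# Simulated raw page from Allure's __search endpoint: owner filtering is
# server-side, but 'automated' must then be applied client-side.
raw_page = {
    "content": [
        {"id": 1, "name": "login works", "automated": True},
        {"id": 2, "name": "logout works", "automated": False},
        {"id": 3, "name": "signup works", "automated": True},
    ],
    "totalElements": 3,
}

automated = False  # caller asked for manual TCs only
test_cases = [tc for tc in raw_page["content"] if tc["automated"] is automated]

result = {
    "project_id": 63,
    "count": len(test_cases),  # 1 — after the client-side filter
    "pagination": {"total": raw_page["totalElements"]},  # 3 — raw owner-filtered set
    "test_cases": test_cases,
}
```

This is the "a fetched page of 50 may shrink" effect: count reflects the filtered view, while pagination still describes the raw owner-filtered set.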

Examples:
- "How many manual TCs does project 63 have?" -> project_id=63, automated=False, read pagination.total
- "First 200 automated TCs" -> automated=True, size=200
- "My manual TCs in project 63" -> project_id=63, automated=False, owner="jdoe", size=200
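Iterating page for full coverage, as the trade-off note suggests, might look like this sketch (call_tool is a hypothetical stand-in for the real MCP tool invocation, backed here by fake data):

```python
def call_tool(page: int, size: int = 50) -> dict:
    """Hypothetical stand-in for the MCP call; serves 120 fake TCs."""
    total = 120
    start = page * size
    items = [
        {"id": i, "automated": i % 2 == 0}
        for i in range(start, min(start + size, total))
    ]
    return {"test_cases": items, "pagination": {"total": total}}

# Accumulate every page, then apply the automation filter locally, so a
# client-side 'automated' filter cannot silently drop coverage.
all_tcs, page = [], 0
while True:
    resp = call_tool(page)
    all_tcs.extend(resp["test_cases"])
    page += 1
    if len(all_tcs) >= resp["pagination"]["total"]:
        break

manual = [tc for tc in all_tcs if not tc["automated"]]
```

Raising size toward the 200 cap reduces the number of round trips the loop needs.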

Don't use when:
- You need just the automation % (``allure_get_project_statistics``).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | Allure project ID. | |
| automated | No | True: only automated. False: only manual. None: both. | |
| owner | No | Allure username — narrows the result to TCs where the user is the creator OR the last modifier. Applied server-side via Allure RQL (see docstring). | |
| page | No | 0-based page. | |
| size | No | Items per page (1-200). | 50 |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | Allure project ID (int). | |
| count | Yes | Items in this response, after any client-side automated filter (int). | |
| pagination | Yes | Raw Allure paging (dict). | |
| test_cases | Yes | List of TC dicts: id / name / automated / status / layer / created_by / last_modified_by / tags. | |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses critical behavioral details beyond annotations: audit fields and tags are only populated when owner is set (Allure endpoint constraint), owner is actually a union of createdBy and lastModifiedBy, the automated filter becomes client-side when owner is used, pagination implications, and the regex pattern preventing RQL injection. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with Args, Returns, Examples, and a 'Don't use when' section. It is front-loaded with the primary purpose, and every section earns its place by providing necessary context without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, optional filters, server-side vs client-side behavior), the description covers all aspects: input semantics, output structure, edge cases (owner limitation), and practical examples. The presence of an output schema does not reduce the value of the detailed return format explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema description coverage, the description adds significant semantic value: explains why owner maps to creator/modifier, details the trade-off with automated filtering, provides pattern explanation and injection prevention context, and gives concrete examples for using automated, page, and size parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists test cases for a project with optional automation and ownership filters. It details specific fields returned (id, name, automated, status, layer, etc.), distinguishing it from sibling tools like allure_get_project_statistics, which returns summary data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage examples (e.g., 'How many manual TCs does project 63 have?'), a 'Don't use when' section pointing to allure_get_project_statistics for automation percentages, and detailed context on when to set owner and associated trade-offs (e.g., client-side filtering).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mshegolev/allure-testops-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.