tailtest_setup

Bootstraps tailtest in a Cline project: detects language/framework, writes .clinerules and workflows, seeds Memory Bank with tailtestContext.md, initializes config, and returns a reload warning.

Instructions

Bootstrap entry point for tailtest in a Cline project. Detects language / framework / runner, writes the .clinerules/ rule pack, writes .clinerules/workflows/ slash workflows, seeds Memory Bank with tailtestContext.md (a 7th file alongside the 6 core ones; existing files are not overwritten), and initialises .tailtest/config.json + session.json. Returns a structured report including the user-facing 'reload required' warning (Cline does not auto-reload .clinerules mid-conversation).

Input Schema

Name          Required  Description                                                          Default
project_root  No        Project directory. Defaults to the current working directory.
mode          No        manual (default): user invokes /tailtest-test after edits.
                        auto: tailtest fires after every edit (requires Cline auto-approve).
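For illustration, a call supplying both optional parameters could pass an arguments object like the following (the path is a made-up example):

```python
import json

# Illustrative arguments for tailtest_setup; both keys are optional and
# the project path here is purely an example.
arguments = {"project_root": "/home/user/myapp", "mode": "auto"}
payload = json.dumps(arguments)
```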

Implementation Reference

  • The `setup()` function is the main handler for the tailtest_setup tool. It detects the project language/framework/runner, copies .clinerules/ templates, seeds Memory Bank with tailtestContext.md, initializes .tailtest/ state files, and returns a structured report.
    def setup(
        project_root: str | None = None,
        mode: str = "manual",
    ) -> dict[str, Any]:
        """Bootstrap tailtest in a Cline project.
    
        Args:
            project_root: project directory. Defaults to cwd.
            mode: "manual" (default) or "auto". Manual is safer for first-time
                users; auto requires Cline auto-approve enabled. The user can
                switch modes later via `/tailtest-mode auto` or `/tailtest-mode manual`.
    
        Returns:
            Dict with detected stack info, lists of files written and skipped,
            next-steps prose for the user, and a clear "reload required" warning.
        """
        project_root = project_root or os.getcwd()
        project_root = os.path.abspath(project_root)
    
        if mode not in ("manual", "auto"):
            raise ValueError(f"Invalid mode {mode!r}. Use 'manual' or 'auto'.")
    
        # Detection
        language = _detect_project_language(project_root)
        framework = _detect_framework(language, project_root) if language != "unknown" else None
        runner = _detect_runner(language, project_root) if language != "unknown" else "unknown"
        test_dir = _detect_test_dir(language, project_root)
    
        results: dict[str, list[str]] = {"written": [], "skipped": []}
    
        # 1. Write the .clinerules/ pack
        _copy_templates("clinerules", os.path.join(project_root, ".clinerules"), results)
    
        # 2. Write the .clinerules/workflows/ slash workflows
        _copy_templates(
            "workflows", os.path.join(project_root, ".clinerules", "workflows"), results
        )
    
        # 3. Seed Memory Bank tailtestContext.md
        _write_tailtest_context(
            project_root, language, framework, runner, test_dir, mode, results
        )
    
        # 4. Initialise .tailtest/ state files
        _write_tailtest_state(project_root, mode, results)
    
        next_steps = (
            f"Setup complete. Detected {language}"
            + (f" with {framework}" if framework else "")
            + f". Mode: {mode}.\n\n"
            "**Important:** Cline does not auto-reload .clinerules mid-conversation. "
            "Start a new conversation (or reload the window) to activate tailtest.\n\n"
            + (
                "After reload, edit any source file and tailtest will fire automatically.\n"
                "Auto mode requires Cline auto-approve enabled for: 'Edit files (workspace)', "
                "'Execute safe commands', 'Use MCP servers'.\n"
                if mode == "auto"
                else "After reload, run `/tailtest-test <file>` to test a specific file. "
                "To switch to auto mode later, run `/tailtest-mode auto`.\n"
            )
            + "\nFor advanced features:\n"
            "- `/tailtest hunt <file>` -- one-shot adversarial pass on a specific file\n"
            "- Set `\"depth\": \"adversarial\"` in `.tailtest/config.json` for project-wide bug-hunting"
        )
    
        memory_bank_existed = all(
            os.path.exists(os.path.join(project_root, "memory-bank", f))
            for f in MEMORY_BANK_CORE_FILES
        )
    
        return {
            "project_root": project_root,
            "detected": {
                "language": language,
                "framework": framework,
                "runner": runner,
                "test_dir": test_dir,
            },
            "mode": mode,
            "memory_bank_pre_existed": memory_bank_existed,
            "files_written": results["written"],
            "files_skipped": results["skipped"],
            "reload_required": True,
            "next_steps": next_steps,
        }
  • The input schema for tailtest_setup is defined in the Tool registration within list_tools(). It accepts optional 'project_root' (string) and 'mode' (enum: 'manual' or 'auto', default 'manual').
    Tool(
        name="tailtest_setup",
        description=(
            "Bootstrap entry point for tailtest in a Cline project. Detects language / "
            "framework / runner, writes the .clinerules/ rule pack, writes "
            ".clinerules/workflows/ slash workflows, seeds Memory Bank with "
            "tailtestContext.md (a 7th file alongside the 6 core ones; existing files are "
            "not overwritten), and initialises .tailtest/config.json + session.json. "
            "Returns a structured report including the user-facing 'reload required' "
            "warning (Cline does not auto-reload .clinerules mid-conversation)."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "project_root": {
                    "type": "string",
                    "description": "Project directory. Defaults to the current working directory.",
                },
                "mode": {
                    "type": "string",
                    "enum": ["manual", "auto"],
                    "description": (
                        "manual (default): user invokes /tailtest-test after edits. "
                        "auto: tailtest fires after every edit (requires Cline auto-approve)."
                    ),
                },
            },
            "additionalProperties": False,
        },
    ),
  • The tool is registered as a Tool object with name='tailtest_setup' in the list_tools() method of the MCP Server, using the same description and inputSchema shown above. The registration list ends with:
        # V14.4+ tools registered here:
        # tailtest_baseline_state, tailtest_runner_for_lang, tailtest_security_status
    ]
  • The call_tool() dispatcher routes requests with name 'tailtest_setup' to the setup() function from .tools.setup.
    if name == "tailtest_setup":
        from .tools.setup import setup
        import json as _json
    
        result = setup(
            project_root=arguments.get("project_root"),
            mode=arguments.get("mode", "manual"),
        )
        return [TextContent(type="text", text=_json.dumps(result, indent=2))]
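The routing pattern can be sketched without the MCP types; `dispatch()` and its inline handler below are hypothetical reductions of `call_tool()`, showing only name-based routing, schema defaults, and JSON serialisation:

```python
import json

def dispatch(name: str, arguments: dict) -> str:
    # Hypothetical mini-dispatcher mirroring call_tool(): route by tool
    # name, apply the schema default for mode, return the result as JSON.
    handlers = {
        "tailtest_setup": lambda args: {
            "mode": args.get("mode", "manual"),
            "project_root": args.get("project_root", "."),
        },
    }
    if name not in handlers:
        raise ValueError(f"Unknown tool: {name}")
    return json.dumps(handlers[name](arguments), indent=2)

text = dispatch("tailtest_setup", {})
```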
  • Helper functions used by the setup handler: _detect_runner(), _detect_framework(), _detect_test_dir(), _detect_project_language(), _copy_templates(), _write_tailtest_context(), and _write_tailtest_state().
    def _detect_runner(language: str, project_root: str) -> str:
        """Pick a sensible default runner per language. V14.3 lightweight."""
        if language == "python":
            # pytest is the default for Python projects regardless of whether
            # pyproject.toml or setup.py is present.
            return "pytest"
        if language in ("typescript", "javascript"):
            pkg = os.path.join(project_root, "package.json")
            if os.path.exists(pkg):
                try:
                    with open(pkg) as f:
                        text = f.read().lower()
                    if "vitest" in text:
                        return "vitest"
                    if "jest" in text:
                        return "jest"
                except OSError:
                    pass
            return "vitest"
        if language == "go":
            return "go test"
        if language == "ruby":
            return "rspec"
        if language == "rust":
            return "cargo test"
        if language == "java":
            return "mvn test"
        if language == "kotlin":
            return "gradle test"
        if language == "csharp":
            return "dotnet test"
        if language == "php":
            return "phpunit"
        return "unknown"
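The package.json sniffing has one subtlety worth isolating: vitest takes precedence when both runners appear in the file, and vitest is also the fallback when neither is named. A minimal sketch of just that branch (`runner_from_package_json` is a hypothetical name):

```python
def runner_from_package_json(text: str) -> str:
    # Same precedence as _detect_runner: vitest wins over jest, and vitest
    # is also the fallback when neither runner is mentioned.
    text = text.lower()
    if "vitest" in text:
        return "vitest"
    if "jest" in text:
        return "jest"
    return "vitest"
```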
    
    
    def _detect_framework(language: str, project_root: str) -> str | None:
        """Match the lightweight detection used by pick_template / scenario_plan."""
        if language == "python":
            for f in ("requirements.txt", "pyproject.toml", "setup.py"):
                path = os.path.join(project_root, f)
                if os.path.exists(path):
                    try:
                        with open(path) as fh:
                            text = fh.read().lower()
                        if "fastapi" in text:
                            return "fastapi"
                        if "flask" in text:
                            return "flask"
                        if "django" in text:
                            return "django"
                    except OSError:
                        pass
        elif language == "typescript":
            pkg = os.path.join(project_root, "package.json")
            if os.path.exists(pkg):
                try:
                    with open(pkg) as fh:
                        text = fh.read().lower()
                    if "@nestjs/" in text:
                        return "nestjs"
                except OSError:
                    pass
        elif language == "java":
            for f in ("pom.xml", "build.gradle"):
                path = os.path.join(project_root, f)
                if os.path.exists(path):
                    try:
                        with open(path) as fh:
                            text = fh.read().lower()
                        if "spring-boot" in text or "springframework" in text:
                            return "spring"
                    except OSError:
                        pass
        return None
    
    
    def _detect_test_dir(language: str, project_root: str) -> str:
        """Default test directory per language. V14.3 lightweight."""
        if language == "python":
            if os.path.isdir(os.path.join(project_root, "tests")):
                return "tests/"
            if os.path.isdir(os.path.join(project_root, "test")):
                return "test/"
            return "tests/"
        if language in ("typescript", "javascript"):
            if os.path.isdir(os.path.join(project_root, "__tests__")):
                return "__tests__/"
            if os.path.isdir(os.path.join(project_root, "tests")):
                return "tests/"
            return "__tests__/"
        if language == "go":
            return "(co-located)"
        if language == "java":
            return "src/test/java/"
        if language == "kotlin":
            return "src/test/kotlin/"
        if language == "csharp":
            return "tests/"
        return "tests/"
    
    
    def _detect_project_language(project_root: str) -> str:
        """Pick the dominant language by counting source files. V14.3 lightweight."""
        counts: dict[str, int] = {}
        for root, dirs, files in os.walk(project_root):
            # Skip noise dirs
            dirs[:] = [
                d
                for d in dirs
                if d not in {"node_modules", ".venv", "venv", ".git", "dist", "build", "__pycache__", "target"}
            ]
            for f in files:
                lang = detect_language(f)
                if lang:
                    counts[lang] = counts.get(lang, 0) + 1
            if sum(counts.values()) > 200:
                break
        if not counts:
            return "unknown"
        return max(counts.items(), key=lambda kv: kv[1])[0]
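The counting logic can be sketched with a toy extension map; `EXT_LANG` and `dominant_language` below are illustrative stand-ins for `detect_language()`, whose real implementation lives elsewhere in the module:

```python
from collections import Counter

# Stand-in extension map for detect_language(); only the counting and
# early-exit logic of _detect_project_language is illustrated here.
EXT_LANG = {".py": "python", ".ts": "typescript", ".go": "go"}

def dominant_language(filenames: list[str], cap: int = 200) -> str:
    counts = Counter()
    for name in filenames:
        for ext, lang in EXT_LANG.items():
            if name.endswith(ext):
                counts[lang] += 1
                break
        if sum(counts.values()) > cap:  # same early-exit cap as the source
            break
    return counts.most_common(1)[0][0] if counts else "unknown"
```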
    
    
    def _copy_templates(template_subdir: str, dest_dir: str, results: dict[str, list[str]]) -> None:
        """Copy each .md file from the template subdirectory to dest_dir.
    
        Records each file in results['written'] or results['skipped'] (when the
        destination file already exists).
        """
        src_dir = os.path.join(_TEMPLATES_DIR, template_subdir)
        if not os.path.isdir(src_dir):
            results["skipped"].append(f"{template_subdir}/ (no templates bundled)")
            return
        os.makedirs(dest_dir, exist_ok=True)
        for fname in os.listdir(src_dir):
            if not fname.endswith(".md"):
                continue
            src = os.path.join(src_dir, fname)
            dst = os.path.join(dest_dir, fname)
            if os.path.exists(dst):
                results["skipped"].append(f"{dest_dir}/{fname} (exists; not overwritten)")
                continue
            shutil.copyfile(src, dst)
            results["written"].append(f"{dest_dir}/{fname}")
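The never-overwrite rule in `_copy_templates` reduces to a copy-if-absent primitive. A self-contained sketch (`copy_if_absent` is a hypothetical helper, exercised against a temporary directory):

```python
import os
import shutil
import tempfile

def copy_if_absent(src: str, dst: str) -> bool:
    # Mirrors _copy_templates' never-overwrite rule: True when the file
    # was copied, False when the destination already existed.
    if os.path.exists(dst):
        return False
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copyfile(src, dst)
    return True

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "rule.md")
    with open(src, "w") as f:
        f.write("# rule")
    dst = os.path.join(tmp, "dest", "rule.md")
    first = copy_if_absent(src, dst)   # written
    second = copy_if_absent(src, dst)  # skipped: already exists
```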
    
    
    def _write_tailtest_context(
        project_root: str,
        language: str,
        framework: str | None,
        runner: str,
        test_dir: str,
        mode: str,
        results: dict[str, list[str]],
    ) -> None:
        """Seed Memory Bank with `tailtestContext.md`.
    
        Detects whether Memory Bank exists. If yes, writes alongside the 6 core
        files as a 7th file. If not, creates `memory-bank/` and writes only
        `tailtestContext.md` (we do not initialise the full Memory Bank to avoid
        cluttering projects that did not opt in).
        """
        mb_dir = os.path.join(project_root, "memory-bank")
        target = os.path.join(mb_dir, "tailtestContext.md")
        if os.path.exists(target):
            results["skipped"].append(f"{target} (exists; not overwritten)")
            return
    
        template_path = os.path.join(_TEMPLATES_DIR, "memory-bank", "tailtestContext.md")
        if not os.path.exists(template_path):
            results["skipped"].append("memory-bank/tailtestContext.md (template missing)")
            return
    
        with open(template_path) as f:
            content = f.read()
    
        content = (
            content.replace("{{detection_date}}", _dt.date.today().isoformat())
            .replace("{{plugin_version}}", __version__)
            .replace("{{mode}}", mode)
            .replace("{{language}}", language)
            .replace("{{framework}}", framework or "(none detected)")
            .replace("{{runner}}", runner)
            .replace("{{test_dir}}", test_dir)
            .replace("{{depth}}", "standard")
            .replace("{{baseline_count}}", "0")
        )
    
        os.makedirs(mb_dir, exist_ok=True)
        with open(target, "w") as f:
            f.write(content)
        results["written"].append(target)
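The `{{placeholder}}` substitution in `_write_tailtest_context` generalises to a small helper; `fill_placeholders` is a hypothetical sketch of the same replace chain, including the behaviour that unknown placeholders are left intact rather than raising:

```python
def fill_placeholders(template: str, values: dict) -> str:
    # Same {{key}} convention as the tailtestContext.md template; unknown
    # placeholders survive untouched.
    for key, val in values.items():
        template = template.replace("{{" + key + "}}", val)
    return template

seeded = fill_placeholders(
    "mode: {{mode}} | lang: {{language}} | extra: {{unknown}}",
    {"mode": "manual", "language": "python"},
)
```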
    
    
    def _write_tailtest_state(
        project_root: str,
        mode: str,
        results: dict[str, list[str]],
    ) -> None:
        """Initialise `.tailtest/config.json` and `.tailtest/session.json`."""
        tt_dir = os.path.join(project_root, ".tailtest")
        os.makedirs(tt_dir, exist_ok=True)
    
        config_path = os.path.join(tt_dir, "config.json")
        if not os.path.exists(config_path):
            with open(config_path, "w") as f:
                json.dump({"depth": "standard", "mode": mode}, f, indent=2)
            results["written"].append(config_path)
        else:
            results["skipped"].append(f"{config_path} (exists; not overwritten)")
    
        session_path = os.path.join(tt_dir, "session.json")
        if not os.path.exists(session_path):
            with open(session_path, "w") as f:
                json.dump(
                    {
                        "pending_files": [],
                        "generated_tests": {},
                        "fix_attempts": {},
                        "deferred_failures": [],
                        "runners": {},
                        "ramp_up": False,
                        "paused": False,
                        "mode": mode,
                    },
                    f,
                    indent=2,
                )
            results["written"].append(session_path)
        else:
            results["skipped"].append(f"{session_path} (exists; not overwritten)")
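Both state files follow one create-only pattern; `init_json` below is a hypothetical reduction of it, demonstrating that an existing file always wins over the new payload:

```python
import json
import os
import tempfile

def init_json(path: str, payload: dict) -> str:
    # Create-only semantics, as in _write_tailtest_state: never overwrite.
    if os.path.exists(path):
        return "skipped"
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return "written"

with tempfile.TemporaryDirectory() as tmp:
    cfg = os.path.join(tmp, "config.json")
    first = init_json(cfg, {"depth": "standard", "mode": "manual"})
    again = init_json(cfg, {"depth": "adversarial", "mode": "auto"})
    with open(cfg) as f:
        saved = json.load(f)
```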
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure. It discloses that files are written (.clinerules/, .tailtest/ config, Memory Bank), that existing files are not overwritten, and that a user-facing reload warning is returned. It does not mention authorization requirements or possible errors, but it covers the key side effects sufficiently.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single dense sentence with multiple clauses. While it contains necessary information, it could be broken into clearer points for easier parsing. It is not excessively long, but structure could be improved.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple file writes, memory seeding) and lack of output schema, the description covers all primary behaviors. It notes that the return includes a reload warning but does not detail the structured report format. However, for a bootstrap setup tool, this level of detail is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with descriptions. The tool description adds context beyond the schema by explaining the mode enum values ('manual' vs 'auto') and their implications (manual requires invocation, auto fires after edits). This helps the agent choose appropriately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a 'Bootstrap entry point' and enumerates specific actions (detect language, write rule packs, seed memory bank, initialize config). This distinguishes it from sibling tools like tailtest_classify_failures or tailtest_scenario_plan, which handle other lifecycle phases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies it is for initial setup ('Bootstrap entry point'), but does not explicitly state when to use this tool versus alternatives or mention prerequisites (e.g., 'Call this first before other tailtest tools'). No exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
