tailtest_pick_template

Identify the correct test template for a given source file by analyzing its framework and language, returning baseline scenarios, test patterns, and file paths. Falls back to the language baseline when no framework matches.

Instructions

Return the full framework R2 template for a given source file: language baseline scenarios, framework baseline scenarios, framework-specific test pattern (e.g., NestJS Test.createTestingModule, Spring @WebMvcTest, Flask test_client), and test file path pattern. Returns just language baseline when no framework matches.

Input Schema

file_path (string, required): Relative or absolute path to the source file under test.
project_root (string, optional): Project root directory. Defaults to the current working directory.
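
For illustration, a well-formed arguments payload for this tool might look like the following; both paths are hypothetical examples.

    {
      "file_path": "app/routes/users.py",
      "project_root": "/home/dev/my-flask-app"
    }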

Implementation Reference

  • The main handler function `pick_template()` that executes the tool logic. It detects the language and framework, then returns the full framework R2 template including baseline scenarios, framework-specific scenarios, test patterns, and instructions (a standalone usage sketch follows this list).
    def pick_template(file_path: str, project_root: str | None = None) -> dict[str, Any]:
        """Return the full framework R2 template for the given source file.
    
        Args:
            file_path: relative or absolute path to the source file.
            project_root: project root for framework detection. Defaults to cwd.
    
        Returns:
            Dict with: language, framework (or None), language_baseline,
            framework_template (or None), test_file_path_pattern, instructions.
    
        If no framework matches, returns just the language baseline.
        """
        project_root = project_root or os.getcwd()
    
        language = detect_language(file_path) or "unknown"
        framework = _detect_framework(language, project_root)
    
        language_baseline = LANGUAGE_BASELINES.get(language, [])
        framework_template = FRAMEWORK_TEMPLATES.get(framework) if framework else None
    
        instructions = (
            f"Use the language baseline scenarios for {language}: "
            f"{', '.join(language_baseline) or 'none specified'}. "
        )
        if framework_template:
            instructions += (
                f"Add framework-specific scenarios for {framework} on top: "
                f"{'; '.join(framework_template['baseline_scenarios'])}. "
                f"Follow the test pattern: {framework_template['test_pattern']} "
            )
        else:
            instructions += "No framework template matched; use language baseline only. "
    
        return {
            "file_path": file_path,
            "language": language,
            "framework": framework,
            "language_baseline": language_baseline,
            "framework_template": framework_template,
            "test_file_path_pattern": (
                framework_template["test_file_path"] if framework_template else None
            ),
            "instructions": instructions,
        }
  • Tool registration with inputSchema for 'tailtest_pick_template': defines file_path (required string) and project_root (optional string) as input properties; a server-wiring sketch follows this list.
    name="tailtest_pick_template",
    description=(
        "Return the full framework R2 template for a given source file: language "
        "baseline scenarios, framework baseline scenarios, framework-specific test "
        "pattern (e.g., NestJS Test.createTestingModule, Spring @WebMvcTest, Flask "
        "test_client), and test file path pattern. Returns just language baseline "
        "when no framework matches."
    ),
    inputSchema={
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Relative or absolute path to the source file under test.",
            },
            "project_root": {
                "type": "string",
                "description": "Project root directory. Defaults to the current working directory.",
            },
        },
        "required": ["file_path"],
        "additionalProperties": False,
    },
  • Dispatch/handler registration in call_tool(): when name=='tailtest_pick_template', imports pick_template from .tools.pick_template and calls it with arguments, returning JSON result.
    if name == "tailtest_pick_template":
        from .tools.pick_template import pick_template
        import json as _json
    
        result = pick_template(
            file_path=arguments["file_path"],
            project_root=arguments.get("project_root"),
        )
        return [TextContent(type="text", text=_json.dumps(result, indent=2))]
  • Helper function `_detect_framework()` that detects framework from project files (requirements.txt, pyproject.toml, package.json, pom.xml, build.gradle, .csproj) based on the detected language.
    def _detect_framework(language: str, project_root: str) -> str | None:
        """Detect framework from project files. V14.2 lightweight version."""
        if language == "python":
            for f in ("requirements.txt", "pyproject.toml", "setup.py"):
                path = os.path.join(project_root, f)
                if os.path.exists(path):
                    try:
                        with open(path) as fh:
                            text = fh.read().lower()
                        if "fastapi" in text:
                            return "fastapi"
                        if "flask" in text:
                            return "flask"
                        if "django" in text:
                            return "django"
                    except OSError:
                        pass
        elif language == "typescript":
            pkg = os.path.join(project_root, "package.json")
            if os.path.exists(pkg):
                try:
                    with open(pkg) as fh:
                        text = fh.read().lower()
                    if "@nestjs/" in text:
                        return "nestjs"
                except OSError:
                    pass
        elif language == "java":
            pom = os.path.join(project_root, "pom.xml")
            gradle = os.path.join(project_root, "build.gradle")
            for path in (pom, gradle):
                if os.path.exists(path):
                    try:
                        with open(path) as fh:
                            text = fh.read().lower()
                        if "spring-boot" in text or "springframework" in text:
                            return "spring"
                    except OSError:
                        pass
        elif language == "kotlin":
            gradle = os.path.join(project_root, "build.gradle.kts")
            if os.path.exists(gradle):
                return "kotlin"
        elif language == "csharp":
            # Any .csproj triggers the C# template
            for f in os.listdir(project_root) if os.path.isdir(project_root) else []:
                if f.endswith(".csproj"):
                    return "csharp"
        return None
  • Helper function `detect_language()` used by pick_template to determine the language from file extension via LANGUAGE_MAP.
    def detect_language(file_path: str) -> Optional[str]:
        """Return the language name for a file path, or None if not recognised."""
        _, ext = os.path.splitext(file_path)
        return LANGUAGE_MAP.get(ext.lower())
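
To make the lookups above concrete, here is a rough, hedged sketch of how the module-level tables that `pick_template()` reads might be shaped, together with a small driver that exercises the function end to end. Only the key names (`baseline_scenarios`, `test_pattern`, `test_file_path`) and the lookup structure come from the code above; the table values, file names, and sample paths are illustrative assumptions, and the driver assumes `pick_template()`, `_detect_framework()`, and `detect_language()` from the excerpts above are defined in the same module.

    import json
    import os
    import tempfile

    # Illustrative stand-ins for the module-level tables the implementation reads.
    # Only the key names mirror the code above; every value here is invented.
    LANGUAGE_MAP = {".py": "python", ".ts": "typescript", ".java": "java", ".kt": "kotlin", ".cs": "csharp"}
    LANGUAGE_BASELINES = {
        "python": ["happy path", "invalid input raises", "boundary values"],
        "typescript": ["happy path", "rejected promise propagates"],
    }
    FRAMEWORK_TEMPLATES = {
        "flask": {
            "baseline_scenarios": [
                "200 response for a valid request via test_client",
                "404 for a missing resource",
            ],
            "test_pattern": "Create app.test_client() in a pytest fixture and assert on responses.",
            "test_file_path": "tests/test_<module>.py",
        },
    }

    if __name__ == "__main__":
        # Build a throwaway project that declares Flask so _detect_framework()
        # resolves "python" -> "flask", then run the tool logic end to end.
        with tempfile.TemporaryDirectory() as root:
            with open(os.path.join(root, "requirements.txt"), "w") as fh:
                fh.write("flask==3.0.0\n")
            result = pick_template("app/routes/users.py", project_root=root)
            print(json.dumps(result, indent=2))

With a framework match the result carries both the language baseline and the framework template; without one, framework, framework_template, and test_file_path_pattern come back as None and the instructions fall back to the language baseline alone.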
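
The registration and dispatch excerpts are fragments of a larger server module. As a rough orientation only, they typically slot into list_tools/call_tool handlers along the lines sketched below; the imports, decorator names, and server name assume the official `mcp` Python SDK and may differ by SDK version, and the schema and description are abbreviated.

    from mcp.server import Server          # assumed SDK import; adjust to the actual project layout
    from mcp.types import Tool, TextContent

    server = Server("tailtest")             # hypothetical server name

    @server.list_tools()
    async def list_tools() -> list[Tool]:
        # The full Tool(...) registration from the excerpt above belongs in this list.
        return [
            Tool(
                name="tailtest_pick_template",
                description="Return the full framework R2 template for a given source file...",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "file_path": {"type": "string"},
                        "project_root": {"type": "string"},
                    },
                    "required": ["file_path"],
                    "additionalProperties": False,
                },
            )
        ]

    @server.call_tool()
    async def call_tool(name: str, arguments: dict) -> list[TextContent]:
        # The dispatch excerpt above is one branch of this handler.
        if name == "tailtest_pick_template":
            import json
            from .tools.pick_template import pick_template

            result = pick_template(
                file_path=arguments["file_path"],
                project_root=arguments.get("project_root"),
            )
            return [TextContent(type="text", text=json.dumps(result, indent=2))]
        raise ValueError(f"Unknown tool: {name}")
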
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses that the tool returns only language baseline when no framework matches, but does not explain side effects, permissions, or output format. This is adequate but could be more transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences, front-loading the main purpose. The second sentence is slightly redundant but not excessive. It strikes a good balance between clarity and brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema or annotations, the description gives a solid overview of what is returned. It mentions the conditional behavior and the components included. However, it does not specify the output format or how it relates to sibling tools, which would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are fully described in the input schema (100% coverage). The description adds overall context but does not provide additional details beyond the schema. Thus, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns the full framework R2 template for a given source file, listing specific components (language baseline scenarios, framework baseline scenarios, test pattern, file path pattern). It clearly distinguishes from sibling tools like tailtest_classify_failures or tailtest_setup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide guidance on when to use this tool versus alternatives. There is no mention of when not to use it or which sibling tool is appropriate for different scenarios. The only condition mentioned is when no framework matches, but no alternatives are suggested.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
