
validate_component_logging

Checks that each component method logs its output with a BaseModel and flags the defensive dict-access antipattern self.params.get(...).

Instructions

AST-check that each component's required method calls the matching self.log_<component>_output(...) with a BaseModel instance (not a dict, not a wrong schema).

Also flags ``self.params.get(...)`` anywhere in the file (PRM-004 — the defensive dict-access antipattern on the params container).

Returns ``{"any_errors": bool, "findings": [...]}``.
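
For concreteness, here is a sketch of component code the check accepts versus code it flags. The class, method, log-call, and schema names come from the contract table below; the EntrySignalOutput fields and the inline annotations are hypothetical.

    # entry.py (passes): the required method logs a matching BaseModel instance,
    # resolved through the local-variable binding.
    class entry_rule:
        def generate_signal(self, bar):
            out = EntrySignalOutput(signal="long", confidence=0.8)  # hypothetical fields
            self.log_entry_output(out)
            return out

    # entry.py (flagged): a dict literal instead of a BaseModel (the VAL-006 case)
    # plus defensive dict-access on the params container (PRM-004).
    class entry_rule:
        def generate_signal(self, bar):
            threshold = self.params.get("threshold", 0.5)   # PRM-004
            self.log_entry_output({"signal": "long"})       # dict, not a BaseModel
            return None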
    

Input Schema

Name          Required  Description  Default
strategy_dir  Yes       —            —

Implementation Reference

  • Core implementation: the 'validate_component_logging' function. For each of the four component files (entry.py, exit.py, risk.py, sizer.py), it parses the AST, checks that the required method calls self.log_<component>_output(...) with a matching BaseModel (not a dict or a wrong schema), and flags the self.params.get(...) antipattern. Returns a Report.
    import ast
    from pathlib import Path

    # Report, _CONTRACT, and the _find/_check helpers are defined alongside
    # this function (see the excerpts below).

    def validate_component_logging(strategy_dir: "Path | str") -> Report:
        """Return a Report with logging-convention findings for each of the
        4 required component files in ``strategy_dir``."""
        strategy_dir = Path(strategy_dir)
        report = Report()
    
        for file_name, (class_name, method_name, log_call, expected_schema) in _CONTRACT.items():
            file_path = strategy_dir / file_name
            if not file_path.exists():
                continue
    
            try:
                tree = ast.parse(file_path.read_text(encoding="utf-8"))
            except SyntaxError:
                continue  # upstream concern
    
            method = _find_method_in_class(tree, class_name, method_name)
            if method is None:
                continue  # STR-003 territory (component_signatures handles this)
    
            _check_log_call(
                method, log_call, expected_schema, file_path, class_name, method_name, report,
            )
            _check_params_dict_get(tree, file_path, report)
    
        return report
  • MCP tool registration: 'validate_component_logging' is registered as a FastMCP server tool via the @server.tool() decorator. It delegates to the core implementation in echolon.strategy.validators.component_logging.
    @server.tool()
    def validate_component_logging(strategy_dir: str) -> dict:
        """AST-check that each component's required method calls the
        matching ``self.log_<component>_output(...)`` with a BaseModel
        instance (not a dict, not a wrong schema).
    
        Also flags ``self.params.get(...)`` anywhere in the file (PRM-004 —
        defensive dict-access antipattern on the params container).
    
        Returns ``{"any_errors": bool, "findings": [...]}``.
        """
        from echolon.strategy.validators.component_logging import (
            validate_component_logging as _impl,
        )
        return _impl(strategy_dir=strategy_dir).to_dict()
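For illustration, one way an agent-side client could invoke the registered tool. This assumes the FastMCP 2.x client API (fastmcp.Client with call_tool) and a local server URL; treat both as assumptions rather than documented usage.

    import asyncio
    from fastmcp import Client

    async def main() -> None:
        # URL is a placeholder for wherever the echolon server is running.
        async with Client("http://localhost:8000/mcp") as client:
            result = await client.call_tool(
                "validate_component_logging",
                {"strategy_dir": "strategies/my_strategy"},
            )
            print(result)  # {"any_errors": ..., "findings": [...]}

    asyncio.run(main())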
  • Contract schema: _CONTRACT dict maps file names to (class_name, method_name, log_call_name, expected_schema_identifier) defining the 4 required components.
    from typing import Dict, Tuple

    # Maps file name → (class_name, method_name, log_call_name, expected_schema_identifier)
    _CONTRACT: Dict[str, Tuple[str, str, str, str]] = {
        "entry.py": ("entry_rule",     "generate_signal", "log_entry_output", "EntrySignalOutput"),
        "exit.py":  ("exit_rule",      "should_exit",     "log_exit_output",  "ExitSignalOutput"),
        "risk.py":  ("risk_manager",   "can_trade",       "log_risk_output",  "RiskOutput"),
        "sizer.py": ("position_sizer", "calculate_size",  "log_sizer_output", "SizerOutput"),
    }
  • AST helper functions: _is_self_attr_call, _resolve_name_binding, _call_function_tail, and _resolve_arg_type_identifier (excerpted below), plus _find_method_in_class, _check_log_call, and _check_params_dict_get — all supporting the AST-based validation logic.
    import ast
    from typing import List, Optional

    def _is_self_attr_call(node: ast.Call, attr_name: str) -> bool:
        """True if ``node`` is a call of the form ``self.<attr_name>(...)``."""
        func = node.func
        return (
            isinstance(func, ast.Attribute)
            and func.attr == attr_name
            and isinstance(func.value, ast.Name)
            and func.value.id == "self"
        )
    
    
    def _resolve_name_binding(
        method_body: List[ast.stmt], target_name: str,
    ) -> Optional[ast.AST]:
        """Scan the method body for ``<target_name> = <rhs>`` assignments and
        return the RHS of the first match. Used to trace a log call's local-
        variable argument back to its instantiation site."""
        for stmt in method_body:
            if isinstance(stmt, ast.Assign):
                for tgt in stmt.targets:
                    if isinstance(tgt, ast.Name) and tgt.id == target_name:
                        return stmt.value
        return None
    
    
    def _call_function_tail(call: ast.Call) -> Optional[str]:
        """Return the trailing identifier of a call's function expression."""
        func = call.func
        if isinstance(func, ast.Name):
            return func.id
        if isinstance(func, ast.Attribute):
            return func.attr
        return None
    
    
    def _resolve_arg_type_identifier(
        arg: ast.AST, method_body: List[ast.stmt],
    ) -> str:
        """Best-effort identification of the type passed as the log-call arg.
    
        Returns a string describing what was passed:
        - ``"<SchemaName>"`` — looks like a BaseModel instantiation (we
          matched ``XxxOutput(...)`` either inline or via a local variable).
        - ``"dict"`` — dict literal.
        - ``"unknown"`` — couldn't resolve. Do NOT raise VAL-006 on this —
          the agent may be using an approach we don't recognize (FP insurance).
        """
        if isinstance(arg, ast.Dict):
            return "dict"
    
        # Inline call: self.log_entry_output(EntrySignalOutput(...))
        if isinstance(arg, ast.Call):
            tail = _call_function_tail(arg)
            if tail:
                return tail
    
        # Local variable: self.log_entry_output(out), where ``out = EntrySignalOutput(...)``.
        if isinstance(arg, ast.Name):
            rhs = _resolve_name_binding(method_body, arg.id)
            if rhs is None:
                return "unknown"
            if isinstance(rhs, ast.Dict):
                return "dict"
            if isinstance(rhs, ast.Call):
                tail = _call_function_tail(rhs)
                if tail:
                    return tail
    
        return "unknown"
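
The three helpers named above but not excerpted (_find_method_in_class, _check_log_call, _check_params_dict_get) do what their names suggest. As a minimal sketch of the first, written as an assumption about the real implementation rather than a copy of it:

    def _find_method_in_class(
        tree: ast.Module, class_name: str, method_name: str,
    ) -> Optional[ast.FunctionDef]:
        """Return the ``method_name`` node defined on ``class_name``, else None."""
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and node.name == class_name:
                for item in node.body:
                    if isinstance(item, ast.FunctionDef) and item.name == method_name:
                        return item
        return None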
  • Report/Finding helpers: dataclasses used by validate_component_logging to structure findings and produce dict output via to_dict().
    from dataclasses import asdict, dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class Finding:
        """One issue surfaced by a validator.
    
        ``code`` is an error-catalog code (STR-*, VAL-*, PRM-*, IND-*, BT-*).
        ``message`` is a short human-readable summary.
        ``context`` carries the structured key/value fix-template fields from
        the catalog entry so downstream consumers can format remediation
        guidance deterministically.
        """
    
        code: str
        message: str
        context: Dict[str, Any] = field(default_factory=dict)
    
    
    @dataclass
    class Report:
        """Aggregate of findings from one validator call.
    
        A validator returns one ``Report``. The caller can inspect
        ``any_errors`` as a one-shot gate, or iterate ``findings`` to surface
        every issue at once. ``to_dict()`` produces the JSON-serializable
        form the MCP tool wrappers return to agents.
        """
    
        findings: List[Finding] = field(default_factory=list)
    
        def add(self, finding: Finding) -> None:
            self.findings.append(finding)
    
        @property
        def any_errors(self) -> bool:
            return len(self.findings) > 0
    
        def to_dict(self) -> Dict[str, Any]:
            return {
                "any_errors": self.any_errors,
                "findings": [asdict(f) for f in self.findings],
            }
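
Putting it together, a call plus the serialized shape the MCP wrapper hands back. The finding shown is illustrative; real messages and context fields come from the error catalog.

    report = validate_component_logging("strategies/my_strategy")
    print(report.to_dict())
    # Illustrative output:
    # {"any_errors": True,
    #  "findings": [{"code": "PRM-004",
    #                "message": "self.params.get(...) used in entry.py",
    #                "context": {"file": "entry.py"}}]}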
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full disclosure burden, yet it only says the tool performs an AST check and returns findings. It does not disclose error handling, dependencies, or limitations, though the core behavior is adequately described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise and front-loaded with the main purpose, though the triple-quoted docstring formatting slightly hurts readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, no annotations, and 0% schema coverage, the description is the sole source of documentation. It explains the checks and the return format but lacks parameter guidance, leaving it somewhat incomplete for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has one parameter (strategy_dir) with 0% description coverage, and the tool description does not explain its meaning or usage. An agent gets no help determining what value to provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it is an AST check for component logging, specifying that it verifies the required methods log a BaseModel and flags the self.params.get() antipattern. It distinguishes itself from sibling tools by its specific focus on logging validation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives no explicit guidance on when to use this tool versus other similar validation tools (e.g., validate_component_integration), nor any context on prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
