check_code

Scan LSL code for AI-generated pitfalls: nonexistent functions, unsupported syntax, reserved word misuses. Returns line numbers and suggestions for each issue.

Instructions

Scan an LSL code snippet for known AI-generated pitfalls.

Checks for nonexistent function calls, unsupported syntax (ternary operators, switch statements), reserved words used as variable names, and other patterns from the pitfalls database.

Call this on any LSL you generate before presenting it to the user. Returns line numbers and suggestions for each issue found.

Args: code: Raw LSL source code as a string.

Input Schema

Name    Required    Description    Default
code    Yes

Implementation Reference

  • The `check_code` MCP tool registration (via @mcp.tool decorator) that delegates to `lsl_check_code()` in tools/pitfalls.py. This is the entrypoint exposed to MCP clients.
    @mcp.tool()
    def check_code(code: str) -> dict:
        """
        Scan an LSL code snippet for known AI-generated pitfalls.
    
        Checks for nonexistent function calls, unsupported syntax (ternary
        operators, switch statements), reserved words used as variable names,
        and other patterns from the pitfalls database.
    
        Call this on any LSL you generate before presenting it to the user.
        Returns line numbers and suggestions for each issue found.
    
        Args:
            code: Raw LSL source code as a string.
        """
        log.info("check_code(%d chars)", len(code))
        return lsl_check_code(code)
  • The core implementation of `lsl_check_code()` which scans LSL code snippets for known AI-generated pitfalls. Checks for nonexistent function calls, unsupported syntax (ternary, switch), reserved word usage, and additional patterns from the pitfalls database.
    def lsl_check_code(code: str) -> dict:
        """
        Scan an LSL code snippet for known AI-generated pitfalls.
    
        Checks for:
          - Nonexistent function calls
          - Unsupported syntax (ternary operators, switch statements)
          - Reserved words used as variable names
          - Other patterns from the pitfalls database
    
        Does NOT perform full LSL compilation or type checking — use the
        in-world script editor for that. This tool catches the specific class
        of mistakes AI tools commonly make.
    
        Args:
            code: Raw LSL source code as a string.
    
        Returns:
            dict with keys:
                clean    — True if no issues found
                issues   — list of detected issues, each with:
                               pitfall_id, category, title, line, match, suggestion
        """
        if not code or not code.strip():
            return {"clean": True, "issues": [], "note": "Empty input."}
    
        con    = _connect()
        issues = []
        lines  = code.splitlines()
    
        # ── 1. Nonexistent function calls ────────────────────────────────────────
    
        # Pull all nonexistent_functions pitfalls from DB
        fake_rows = con.execute(
            "SELECT * FROM pitfalls WHERE category = 'nonexistent_functions'"
        ).fetchall()
    
        fake_functions: dict[str, sqlite3.Row | None] = {}
        for row in fake_rows:
            # Extract function name from bad_example if present
            if row["bad_example"]:
                m = re.match(r"(ll\w+|os\w+)", row["bad_example"])
                if m:
                    fake_functions[m.group(1)] = row
    
        # Also include static list
        for fname in _KNOWN_FAKE_FUNCTIONS:
            if fname not in fake_functions:
                fake_functions[fname] = None  # no DB row, bare detection
    
        for fname, pitfall_row in fake_functions.items():
            pattern = re.compile(rf"\b{re.escape(fname)}\s*\(")
            for lineno, line in enumerate(lines, 1):
                if pattern.search(line):
                    issues.append({
                        "pitfall_id": pitfall_row["id"] if pitfall_row else "func_unknown",
                        "category":   "nonexistent_functions",
                        "title":      f"`{fname}` does not exist in LSL",
                        "line":       lineno,
                        "match":      line.strip(),
                        "suggestion": pitfall_row["good_example"] if pitfall_row else
                                      f"Check the LSL wiki — `{fname}` has no equivalent.",
                    })
    
        # ── 2. Static syntax patterns ─────────────────────────────────────────────
    
        for pitfall_id, pattern, description in _STATIC_PATTERNS:
            # Fetch the DB row for richer output
            db_row = con.execute(
                "SELECT * FROM pitfalls WHERE id = ?", (pitfall_id,)
            ).fetchone()
    
            for lineno, line in enumerate(lines, 1):
                if pattern.search(line):
                    issues.append({
                        "pitfall_id": pitfall_id,
                        "category":   db_row["category"] if db_row else "unsupported_syntax",
                        "title":      db_row["title"] if db_row else description,
                        "line":       lineno,
                        "match":      line.strip(),
                        "suggestion": db_row["good_example"] if db_row else
                                      "Rewrite without this construct.",
                    })
    
        # ── 3. FTS scan for additional bad_example patterns ───────────────────────
        # For pitfalls that have a bad_example but aren't covered by static patterns,
        # do a simple token presence check.
    
        extra_rows = con.execute(
            """
            SELECT * FROM pitfalls
            WHERE bad_example IS NOT NULL
              AND category NOT IN ('nonexistent_functions')
              AND id NOT IN (?, ?, ?)
            """,
            ("syn_001", "syn_002", "lang_001"),
        ).fetchall()
    
        for row in extra_rows:
            bad = row["bad_example"]
            if not bad:
                continue
            # Extract a meaningful token to search for
            token = re.search(r"[\w]+", bad)
            if not token:
                continue
            tok = token.group(0)
            if len(tok) < 4:
                continue
            tok_pattern = re.compile(rf"\b{re.escape(tok)}\b")
            for lineno, line in enumerate(lines, 1):
                if tok_pattern.search(line):
                    # Avoid duplicate issues
                    already = any(
                        i["pitfall_id"] == row["id"] and i["line"] == lineno
                        for i in issues
                    )
                    if not already:
                        issues.append({
                            "pitfall_id": row["id"],
                            "category":   row["category"],
                            "title":      row["title"],
                            "line":       lineno,
                            "match":      line.strip(),
                            "suggestion": row["good_example"] or row["notes"],
                        })
    
        # Deduplicate by (pitfall_id, line)
        seen   = set()
        unique = []
        for issue in issues:
            key = (issue["pitfall_id"], issue["line"])
            if key not in seen:
                seen.add(key)
                unique.append(issue)
    
        unique.sort(key=lambda i: i["line"])
    
        return {
            "clean":  len(unique) == 0,
            "issues": unique,
        }
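The final dedup-and-sort step at the end of `lsl_check_code()` can be seen in isolation. A minimal sketch with hypothetical issue dicts (the `pitfall_id` and `line` values here are made up for illustration):

```python
# Hypothetical issues, including one duplicate (same pitfall, same line).
issues = [
    {"pitfall_id": "syn_001", "line": 7},
    {"pitfall_id": "syn_001", "line": 7},
    {"pitfall_id": "syn_002", "line": 3},
]

# Deduplicate by (pitfall_id, line), preserving first occurrence.
seen, unique = set(), []
for issue in issues:
    key = (issue["pitfall_id"], issue["line"])
    if key not in seen:
        seen.add(key)
        unique.append(issue)

# Sort by line number so issues read top-to-bottom.
unique.sort(key=lambda i: i["line"])
```

After this, `unique` holds the `syn_002` issue (line 3) followed by a single `syn_001` issue (line 7).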
  • Static regex patterns used by lsl_check_code to detect ternary operators (syn_001), switch statements (syn_002), and type names used as variable names (lang_001).
    _STATIC_PATTERNS: list[tuple[str, re.Pattern, str]] = [
        # Ternary operator
        (
            "syn_001",
            re.compile(r"\?\s*\S+\s*:", re.S),
            "Ternary operator `? :` detected — not supported in LSL",
        ),
        # Switch statement
        (
            "syn_002",
            re.compile(r"\bswitch\s*\(", re.S),
            "`switch` statement detected — not supported in portable LSL",
        ),
        # Type names used as variable names (declaration pattern)
        (
            "lang_001",
            re.compile(
                r"\b(integer|float|string|key|vector|rotation|list)\s+"
                r"(integer|float|string|key|vector|rotation|list)\s*[=;,\)]",
                re.S,
            ),
            "LSL type name used as variable name — type names are reserved identifiers",
        ),
    ]
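These regexes can be exercised standalone to see what they flag. A small sketch, re-declaring local copies of the `syn_001` and `syn_002` patterns (independent of the pitfalls database):

```python
import re

# Local copies of two static patterns above, for illustration only.
TERNARY = re.compile(r"\?\s*\S+\s*:")   # syn_001: C-style ternary
SWITCH = re.compile(r"\bswitch\s*\(")   # syn_002: switch statement

sample = [
    'string s = (n > 0) ? "pos" : "neg";',  # ternary: flagged
    "switch (channel) {",                   # switch: flagged
    "integer count = 0;",                   # clean
]

# Collect the lines that either pattern matches.
flagged = [line for line in sample if TERNARY.search(line) or SWITCH.search(line)]
```

The first two sample lines match; the plain declaration does not.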
  • Static list of known nonexistent LSL function names (e.g., 'llStringReplace') that lsl_check_code checks for; supplemented by DB entries.
    # Nonexistent functions we know about — built from DB at check time
    _KNOWN_FAKE_FUNCTIONS = [
        "llStringReplace",
        # expanded at runtime from pitfalls table
    ]
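The per-function detection in step 1 of `lsl_check_code()` builds a word-boundary call pattern for each known fake name. Sketched standalone with `llStringReplace` from the list above:

```python
import re

fname = "llStringReplace"  # known nonexistent function from the static list
# Same pattern shape as in lsl_check_code: name followed by an open paren.
pattern = re.compile(rf"\b{re.escape(fname)}\s*\(")

code = 'string out = llStringReplace(msg, "a", "b");'
# Record 1-based line numbers where the fake call appears.
hits = [i for i, line in enumerate(code.splitlines(), 1) if pattern.search(line)]
```

Here `hits` is `[1]`; a line calling a real function such as `llSay` would not match.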
  • server.py:47-47 (registration)
    Import of `lsl_check_code` from tools.pitfalls, used by the `check_code` MCP tool registration.
    from tools.pitfalls import lsl_get_pitfalls, lsl_check_code
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains what the tool checks (pitfalls) and that it returns line numbers and suggestions. It does not disclose whether the tool is read-only or has side effects, but for a scanning tool this is acceptable. The description adds value beyond the input schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (approximately 8 lines) with no wasted words. It front-loads the purpose and usage guidance, then lists checks and parameter details in a clear, structured manner.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should detail the return structure. It mentions 'line numbers and suggestions' but does not specify the format (e.g., list of objects). While the tool is simple, this omission could confuse an AI agent when interpreting results.
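Based on the implementation excerpted above, the return value is a dict with `clean` and `issues` keys; each issue carries `pitfall_id`, `category`, `title`, `line`, `match`, and `suggestion`. An illustrative sketch (the specific values here are invented):

```python
# Hypothetical result for a snippet containing a ternary on line 12.
result = {
    "clean": False,
    "issues": [
        {
            "pitfall_id": "syn_001",
            "category": "unsupported_syntax",
            "title": "Ternary operator `? :` detected",
            "line": 12,
            "match": 'string s = (n > 0) ? "pos" : "neg";',
            "suggestion": "Use an if/else statement instead.",  # invented text
        }
    ],
}
```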

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'code' has no description in the schema. The description adds 'Raw LSL source code as a string,' which clarifies the expected input. While it could specify constraints like max length, the current description is sufficient for correct usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'scan', the resource 'LSL code snippet', and the specific purpose of finding AI-generated pitfalls. It lists specific checks like nonexistent functions and unsupported syntax, which distinguishes it from sibling tools that are informational (e.g., lookup_function, get_constants).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool: 'Call this on any LSL you generate before presenting it to the user.' This provides clear guidance without ambiguity, covering the primary use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
