calculate_integrity_score
Calculate integrity scores to verify AI code edits by analyzing dependencies and preventing unsafe changes in JavaScript/TypeScript projects.
Instructions
Calculates the Integrity Score. Uses a state machine to protect against AI "cheating" (skipping the user-confirmation step).
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| target_function | Yes | Name of the function being edited | |
| dependencies | Yes | Functions that depend on the target | |
| verified_dependencies | Yes | Dependencies the caller has already verified | |
| proposed_header | No | Proposed new header for the target (used to auto-detect renaming) | `""` |
| breaking_change_description | No | Description of a declared breaking change | `""` |
| confirmation_token | No | User confirmation token; must be `ok` (case-insensitive) | `""` |
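The parameters above are used across a two-step flow: the first call triggers a block and puts the target into a PENDING state, and the second call supplies the user's token. A hypothetical argument shape for the two calls (function and dependency names are illustrative, not from the source):

```python
# Step 1: initial call. Dependencies exist, so the server will refuse
# and answer with "STRICT MODE INTERVENTION (Step 1/2)".
step1 = {
    "target_function": "parseUser",
    "dependencies": ["renderProfile", "validateUser"],
    "verified_dependencies": [],
}

# Step 2: after the user types "ok", the same call is repeated with the
# dependencies verified and the confirmation token attached.
step2 = {
    **step1,
    "verified_dependencies": ["renderProfile", "validateUser"],
    "confirmation_token": "ok",  # the literal token the user typed
}
```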
Implementation Reference
- mcp_edit_math.py:398-489 (handler) — The handler function for the `calculate_integrity_score` tool, decorated with `@mcp.tool()` for registration in the FastMCP server. It calculates an integrity score for code edits based on dependencies, verified dependencies, and user confirmation via a state machine to prevent unauthorized changes.
```python
@mcp.tool()
def calculate_integrity_score(
    target_function: str,
    dependencies: List[str],
    verified_dependencies: List[str],
    proposed_header: str = "",
    breaking_change_description: str = "",
    confirmation_token: str = ""
) -> str:
    """
    Calculates the Integrity Score.
    Uses a state machine to protect against AI "cheating".
    """
    deps_safe = dependencies if dependencies else []
    verified_safe = verified_dependencies if verified_dependencies else []

    # 1. Auto-detect renaming
    is_renaming = False
    if proposed_header:
        if target_function not in proposed_header:
            is_renaming = True

    # 2. Is protection needed?
    needs_confirmation = (len(deps_safe) > 0) or (len(breaking_change_description) > 0) or is_renaming

    # 3. STATE MACHINE
    current_state = APPROVAL_STATE.get(target_function, "NONE")

    # Scenario A: first call (or the AI is trying to slip through in one go).
    # If protection is needed but we are not yet in PENDING -> BLOCK
    if needs_confirmation and current_state != "PENDING":
        # Switch to the waiting state
        APPROVAL_STATE[target_function] = "PENDING"

        reasons = []
        if deps_safe:
            reasons.append(f"Dependencies: {len(deps_safe)}")
        if breaking_change_description:
            reasons.append("Breaking change declared")
        if is_renaming:
            reasons.append("Renaming detected")

        return f"""
✋ STRICT MODE INTERVENTION (Step 1/2)
-------------------------------------
Reason: {', '.join(reasons)}

The server FORBIDS silent edits. You must obtain user permission.

INSTRUCTION FOR AI:
1. STOP. Do not edit yet.
2. Explain the plan/conflicts to the user.
3. ASK THE USER: "Type 'ok' to confirm."
4. Wait for the user's input.
5. Call this tool again with `confirmation_token='ok'`.
"""

    # Scenario B: second call (after the user's reply)
    if current_state == "PENDING":
        # Check the "ok" token (case-insensitive)
        is_confirmed = (confirmation_token.strip().lower() == "ok")

        if not is_confirmed:
            return "⛔ ACCESS DENIED. I am waiting for the 'ok' token from the user."
        # If the token is valid, proceed to scoring

    # 4. Scoring
    if not deps_safe and not is_renaming and not breaking_change_description:
        APPROVAL_STATE[target_function] = "APPROVED"
        return f"Score: 1.0 (Safe). Edit to '{target_function}' is allowed."

    BASE_WEIGHT = 0.5
    REMAINING_WEIGHT = 0.5
    count_deps = len(deps_safe)

    if count_deps == 0:
        current_score = 1.0
    else:
        weight_per_dep = REMAINING_WEIGHT / count_deps
        current_score = BASE_WEIGHT
        for dep in deps_safe:
            if dep in verified_safe:
                current_score += weight_per_dep

    is_safe = current_score >= 0.99

    if is_safe:
        APPROVAL_STATE[target_function] = "APPROVED"
        return f"Integrity Score: {current_score:.4f} / 1.0\nSTATUS: ✅ ACCESS GRANTED (User Confirmed)"
    else:
        extra_verified = set(verified_safe) - set(deps_safe)
        hint_msg = f"\n💡 HINT: You verified items NOT in the list: {list(extra_verified)}.\nIf renaming, verify the ORIGINAL name." if extra_verified else ""
        return f"Integrity Score: {current_score:.4f} / 1.0\nSTATUS: ⛔ ACCESS DENIED\nUser confirmed, BUT you missed verifying dependencies: {[d for d in deps_safe if d not in verified_safe]}{hint_msg}"
```
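The scoring rule itself is simple: a base weight of 0.5, plus an equal share of the remaining 0.5 for every dependency that appears in `verified_dependencies`. A standalone sketch of just that formula (the state-machine and confirmation logic from the handler are omitted here):

```python
# Standalone sketch of the scoring formula from calculate_integrity_score.
# The real tool also runs the confirmation state machine around this math.
BASE_WEIGHT = 0.5
REMAINING_WEIGHT = 0.5

def integrity_score(dependencies, verified_dependencies):
    """Return 1.0 when there are no dependencies; otherwise start at 0.5
    and add an equal share of the remaining 0.5 per verified dependency."""
    if not dependencies:
        return 1.0
    weight_per_dep = REMAINING_WEIGHT / len(dependencies)
    score = BASE_WEIGHT
    for dep in dependencies:
        if dep in verified_dependencies:
            score += weight_per_dep
    return score

print(integrity_score(["a", "b"], ["a"]))        # 0.75 -> below 0.99, denied
print(integrity_score(["a", "b"], ["a", "b"]))   # 1.0  -> access granted
```

Because the pass threshold is `>= 0.99`, every listed dependency must be verified for the edit to be granted.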