# log_tool_execution
Record tool executions to help AI systems learn from user corrections and automatically update configuration files based on detected patterns.
## Instructions
Log tool execution for learning
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `args` | Yes | Arguments passed to the tool (object). | |
| `result` | Yes | Result returned by the tool (object). | |
| `tool_name` | Yes | Name of the executed tool (string). | |
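A call to this tool must supply all three required fields. The sketch below builds such a payload and checks it against the required-field list from the schema above; `validate_payload` is a hypothetical helper for illustration, not part of the codebase.

```python
# Minimal sketch: build and validate a log_tool_execution payload.
REQUIRED_FIELDS = ("tool_name", "args", "result")

def validate_payload(payload: dict) -> list:
    """Return the list of required fields missing from the payload."""
    return [f for f in REQUIRED_FIELDS if f not in payload]

payload = {
    "tool_name": "edit_file",
    "args": {"path": "config.yaml", "change": "indent fix"},
    "result": {"status": "ok"},
}

missing = validate_payload(payload)
print(missing)  # an empty list means the payload satisfies the schema
```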
## Implementation Reference
- **src/mcp_standards/server.py:396-429 (handler)** — Main handler for the `log_tool_execution` tool. It always extracts patterns via `PatternExtractor`, delegates to the autologger when the event is significant, and returns the logging status plus any detected patterns.

  ```python
  async def _log_tool_execution(self, tool_name: str, args: Dict[str, Any], result: Any) -> Dict[str, Any]:
      """Log tool execution for learning"""
      try:
          # ALWAYS extract patterns first (regardless of significance)
          # Pattern learning happens even for low-significance events if they contain corrections
          patterns = self.pattern_extractor.extract_patterns(
              tool_name, args, result,
              project_path=args.get("project_path", "") if isinstance(args, dict) else ""
          )

          # Then use autologger for high-significance events
          log_id = self.autologger.log_tool_execution(tool_name, args, result)

          if log_id is None:
              # Low significance for logging, but may have detected patterns
              return {
                  "success": True,
                  "skipped_logging": True,
                  "reason": "Low significance for full logging",
                  "patterns_detected": len(patterns),
                  "patterns": [p.get("description", p.get("pattern_key")) for p in patterns] if patterns else []
              }

          return {
              "success": True,
              "logged": True,
              "log_id": log_id,
              "patterns_detected": len(patterns),
              "patterns": [p.get("description", p.get("pattern_key")) for p in patterns] if patterns else []
          }
      except Exception as e:
          return {"success": False, "error": str(e)}
  ```
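The handler returns one of two success shapes depending on whether the autologger produced a log id. A stripped-down, self-contained sketch (with the extractor and logger results passed in directly, so `handle_log` and its parameters are stand-ins for the real classes) illustrates both branches:

```python
from typing import Any, Dict, List, Optional

def handle_log(tool_name: str, args: Dict[str, Any], result: Any,
               log_id: Optional[int], patterns: List[dict]) -> Dict[str, Any]:
    """Sketch of the handler's two success shapes: skipped vs. logged.
    log_id and patterns stand in for the autologger / PatternExtractor outputs."""
    names = [p.get("description", p.get("pattern_key")) for p in patterns] if patterns else []
    if log_id is None:
        # Low significance: no full log entry, but patterns are still reported
        return {"success": True, "skipped_logging": True,
                "reason": "Low significance for full logging",
                "patterns_detected": len(patterns), "patterns": names}
    # Significant event: a full log entry was created
    return {"success": True, "logged": True, "log_id": log_id,
            "patterns_detected": len(patterns), "patterns": names}

print(handle_log("edit_file", {}, "ok", None, [{"pattern_key": "indent-style"}]))
print(handle_log("edit_file", {}, "ok", 42, []))
```

Note that pattern extraction always runs first, so even a skipped log can carry useful `patterns` in its response.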
- **src/mcp_standards/server.py:166-178 (registration)** — Registration of `log_tool_execution` in `list_tools()`, including its name, description, and input schema.

  ```python
  Tool(
      name="log_tool_execution",
      description="Log tool execution for learning",
      inputSchema={
          "type": "object",
          "properties": {
              "tool_name": {"type": "string"},
              "args": {"type": "object"},
              "result": {"type": "object"},
          },
          "required": ["tool_name", "args", "result"],
      },
  ),
  ```
- **src/mcp_standards/server.py:169-177 (schema)** — Input schema for the tool: requires `tool_name` (string), `args` (object), and `result` (object).

  ```python
  inputSchema={
      "type": "object",
      "properties": {
          "tool_name": {"type": "string"},
          "args": {"type": "object"},
          "result": {"type": "object"},
      },
      "required": ["tool_name", "args", "result"],
  },
  ```
- **src/mcp_standards/autolog.py:108-162 (helper)** — Core `AutoLogger` helper that performs the actual database write: it checks significance, inserts a row into the `tool_logs` table, and extracts an episode for highly significant events.

  ```python
  def log_tool_execution(
      self,
      tool_name: str,
      args: Dict[str, Any],
      result: Any,
      session_id: Optional[str] = None
  ) -> Optional[int]:
      """Log tool execution if significant"""
      significance = self.should_log(tool_name, args)

      if significance < 0.3:  # Skip low significance
          return None

      with sqlite3.connect(self.db_path) as conn:
          cursor = conn.execute("""
              INSERT INTO tool_logs (tool_name, args, result, significance, session_id)
              VALUES (?, ?, ?, ?, ?)
          """, (
              tool_name,
              json.dumps(args),
              json.dumps(str(result)[:1000]),  # Truncate large results
              significance,
              session_id or "default"
          ))
          log_id = cursor.lastrowid

          # Extract episode if highly significant
          if significance > 0.6:
              episode = self._extract_episode(tool_name, args, result)
              if episode:
                  conn.execute("""
                      INSERT INTO episodes (name, content, source, tool_log_id, tags)
                      VALUES (?, ?, ?, ?, ?)
                  """, (
                      episode["name"],
                      episode["content"],
                      episode["source"],
                      log_id,
                      json.dumps(episode.get("tags", []))
                  ))

                  # Update FTS index
                  conn.execute("""
                      INSERT INTO episodes_fts (name, content, tags)
                      VALUES (?, ?, ?)
                  """, (
                      episode["name"],
                      episode["content"],
                      " ".join(episode.get("tags", []))
                  ))

          conn.commit()
          return log_id
  ```
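The two thresholds in the helper (skip below 0.3, extract an episode above 0.6) split significance scores into three tiers. A small sketch of that gating logic, with an illustrative function name not taken from the codebase:

```python
def logging_tier(significance: float) -> str:
    """Map a significance score to the helper's behavior:
    below 0.3 the execution is skipped entirely; above 0.6 it is
    logged and an episode is extracted; otherwise it is logged only."""
    if significance < 0.3:
        return "skip"
    if significance > 0.6:
        return "log+episode"
    return "log"

for s in (0.1, 0.5, 0.9):
    print(s, logging_tier(s))
```

Note the comparisons are strict, so scores of exactly 0.3 or 0.6 fall into the plain "log" tier.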