
log_tool_execution

Record tool executions to help AI systems learn from user corrections and automatically update configuration files based on detected patterns.

Instructions

Log tool execution for learning

Input Schema

Name        Required  Description                                   Default
args        Yes       Arguments the tool was called with (object)
result      Yes       Result returned by the tool (object)
tool_name   Yes       Name of the tool that was executed (string)
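
A conforming request payload might look like the following sketch. The tool name and field values are purely illustrative, not taken from the server itself:

```python
# Illustrative payload for log_tool_execution; all values are hypothetical.
payload = {
    "tool_name": "edit_file",         # name of the tool that was executed
    "args": {"path": "config.yaml"},  # arguments it was called with
    "result": {"status": "ok"},       # result it returned
}
```

All three fields are required; `args` and `result` must be JSON objects.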

Implementation Reference

  • Main handler function for the log_tool_execution tool. It extracts patterns with PatternExtractor, delegates to the autologger when the event is significant enough to log, and returns the logging status along with any detected patterns.
    async def _log_tool_execution(self, tool_name: str, args: Dict[str, Any], result: Any) -> Dict[str, Any]:
        """Log tool execution for learning"""
        try:
            # ALWAYS extract patterns first (regardless of significance)
            # Pattern learning happens even for low-significance events if they contain corrections
            patterns = self.pattern_extractor.extract_patterns(
                tool_name, args, result,
                project_path=args.get("project_path", "") if isinstance(args, dict) else ""
            )

            # Then use autologger for high-significance events
            log_id = self.autologger.log_tool_execution(tool_name, args, result)

            if log_id is None:
                # Low significance for logging, but may have detected patterns
                return {
                    "success": True,
                    "skipped_logging": True,
                    "reason": "Low significance for full logging",
                    "patterns_detected": len(patterns),
                    "patterns": [p.get("description", p.get("pattern_key")) for p in patterns] if patterns else []
                }

            return {
                "success": True,
                "logged": True,
                "log_id": log_id,
                "patterns_detected": len(patterns),
                "patterns": [p.get("description", p.get("pattern_key")) for p in patterns] if patterns else []
            }
        except Exception as e:
            return {"success": False, "error": str(e)}
  • Registration of the log_tool_execution tool in list_tools(), including name, description, and input schema definition.
    Tool(
        name="log_tool_execution",
        description="Log tool execution for learning",
        inputSchema={
            "type": "object",
            "properties": {
                "tool_name": {"type": "string"},
                "args": {"type": "object"},
                "result": {"type": "object"},
            },
            "required": ["tool_name", "args", "result"],
        },
    ),
  • Input schema for log_tool_execution tool: requires tool_name (str), args (object), result (object).
    inputSchema={
        "type": "object",
        "properties": {
            "tool_name": {"type": "string"},
            "args": {"type": "object"},
            "result": {"type": "object"},
        },
        "required": ["tool_name", "args", "result"],
    },
  • Core helper function in AutoLogger that performs the actual database logging of significant tool executions, including significance check, insertion into tool_logs table, and optional episode extraction.
    def log_tool_execution(
        self,
        tool_name: str,
        args: Dict[str, Any],
        result: Any,
        session_id: Optional[str] = None
    ) -> Optional[int]:
        """Log tool execution if significant"""
        significance = self.should_log(tool_name, args)

        if significance < 0.3:  # Skip low significance
            return None

        with sqlite3.connect(self.db_path) as conn:
            cursor = conn.execute("""
                INSERT INTO tool_logs (tool_name, args, result, significance, session_id)
                VALUES (?, ?, ?, ?, ?)
            """, (
                tool_name,
                json.dumps(args),
                json.dumps(str(result)[:1000]),  # Truncate large results
                significance,
                session_id or "default"
            ))
            log_id = cursor.lastrowid

            # Extract episode if highly significant
            if significance > 0.6:
                episode = self._extract_episode(tool_name, args, result)
                if episode:
                    conn.execute("""
                        INSERT INTO episodes (name, content, source, tool_log_id, tags)
                        VALUES (?, ?, ?, ?, ?)
                    """, (
                        episode["name"],
                        episode["content"],
                        episode["source"],
                        log_id,
                        json.dumps(episode.get("tags", []))
                    ))
                    # Update FTS index
                    conn.execute("""
                        INSERT INTO episodes_fts (name, content, tags)
                        VALUES (?, ?, ?)
                    """, (
                        episode["name"],
                        episode["content"],
                        " ".join(episode.get("tags", []))
                    ))

            conn.commit()
            return log_id
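
The logging path above can be exercised in isolation against an in-memory database. The sketch below is a simplified stand-in, not the server's code: it takes the significance score as a parameter (the real AutoLogger computes it in should_log()) and omits episode extraction and the FTS index.

```python
import json
import sqlite3
from typing import Any, Dict, Optional

def log_execution(conn: sqlite3.Connection, tool_name: str, args: Dict[str, Any],
                  result: Any, significance: float,
                  session_id: Optional[str] = None) -> Optional[int]:
    """Insert a tool execution row if significant; return its row id, else None."""
    if significance < 0.3:  # skip low-significance events, mirroring the real check
        return None
    cursor = conn.execute(
        "INSERT INTO tool_logs (tool_name, args, result, significance, session_id) "
        "VALUES (?, ?, ?, ?, ?)",
        (tool_name, json.dumps(args), json.dumps(str(result)[:1000]),
         significance, session_id or "default"),
    )
    conn.commit()
    return cursor.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tool_logs (
    id INTEGER PRIMARY KEY,
    tool_name TEXT, args TEXT, result TEXT,
    significance REAL, session_id TEXT)""")

first = log_execution(conn, "edit_file", {"path": "a.py"}, {"ok": True}, 0.8)
skipped = log_execution(conn, "read_file", {}, {}, 0.1)
```

Here `first` is the row id of the logged execution, while `skipped` is None because the score falls below the 0.3 threshold.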

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/airmcp-com/mcp-standards'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.