
read-logs

Retrieve and filter conversation variation logs from the MCP Variance Log server by specifying date range, limit, and detail level for analysis.

Instructions

Retrieve logged conversation variations from the database.

Input Schema

Name         | Required | Description                                                      | Default
-------------|----------|------------------------------------------------------------------|--------
limit        | Yes      | Maximum number of logs to retrieve (1-100)                       | 10
start_date   | No       | Filter logs after this date (ISO format YYYY-MM-DDTHH:MM:SS)     | -
end_date     | No       | Filter logs before this date (ISO format YYYY-MM-DDTHH:MM:SS)    | -
full_details | No       | If true, show all fields; if false, show only context summaries  | false
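
For context, a call to this tool only needs to satisfy the schema above. The snippet below is an illustrative sketch using the MCP Python SDK's stdio client; the launch command for the mcp-variance-log server and the argument values are placeholders, not taken from this page.

    # Illustrative sketch: calling the read-logs tool from an MCP client.
    # The server command/args below are placeholders; adjust them to however
    # the mcp-variance-log server is started in your environment.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    server_params = StdioServerParameters(command="uv", args=["run", "mcp-variance-log"])

    async def main():
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # "limit" is required; the other fields are optional per the schema above.
                result = await session.call_tool("read-logs", {
                    "limit": 20,
                    "start_date": "2024-01-01T00:00:00",
                    "full_details": False,
                })
                print(result.content[0].text)  # the formatted log table (or an error message)

    asyncio.run(main())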

Implementation Reference

  • The handler block within handle_call_tool that processes 'read-logs' tool calls. It extracts and bounds the arguments, fetches logs via db.get_logs, formats them into a fixed-width table string, catches database and general errors, and returns the result as TextContent.
    elif name == "read-logs":
        if not arguments:
            return [types.TextContent(type="text", text="No arguments provided")]

        limit = min(max(arguments.get("limit", 10), 1), 100)
        full_details = arguments.get("full_details", False)

        try:
            logs = db.get_logs(limit=limit, full_details=full_details)
            if not logs:
                return [types.TextContent(type="text", text="No logs found")]

            # Create compact table header with adjusted widths
            header = ["ID", "Time", "Prob", "Type", "Context"]
            separator = "-" * 90  # Increased overall width

            table = [separator]
            table.append(" | ".join([
                f"{h:<4}" if h == "ID" else
                f"{h:<12}" if h == "Time" else
                f"{h:<6}" if h == "Prob" or h == "Type" else
                f"{h:<45}"  # Increased context width
                for h in header
            ]))
            table.append(separator)

            # Create compact rows with adjusted widths
            for log in logs:
                time_str = str(log[1])[5:16]  # Extract MM-DD HH:MM
                context = str(log[8])[:42] + "..." if len(str(log[8])) > 42 else str(log[8])  # Increased context length
                row = [
                    str(log[0])[:4],   # ID
                    time_str,          # Time
                    str(log[5])[:6],   # Prob
                    str(log[4])[:6],   # Type
                    context            # Truncated context
                ]
                table.append(" | ".join([
                    f"{str(cell):<4}" if i == 0 else       # ID
                    f"{str(cell):<12}" if i == 1 else      # Time
                    f"{str(cell):<6}" if i in [2, 3] else  # Prob and Type
                    f"{str(cell):<45}"                     # Context
                    for i, cell in enumerate(row)
                ]))

            return [types.TextContent(type="text", text="\n".join(table))]
        except sqlite3.Error as e:
            return [types.TextContent(type="text", text=f"Database error: {str(e)}")]
        except Exception as e:
            return [types.TextContent(type="text", text=f"Error: {str(e)}")]
  • JSON Schema defining the input parameters for the 'read-logs' tool: limit (required), optional start_date, end_date, full_details.
    inputSchema={
        "type": "object",
        "properties": {
            "limit": {
                "type": "integer",
                "description": "Maximum number of logs to retrieve",
                "default": 10,
                "minimum": 1,
                "maximum": 100
            },
            "start_date": {
                "type": "string",
                "description": "Filter logs after this date (ISO format YYYY-MM-DDTHH:MM:SS)"
            },
            "end_date": {
                "type": "string",
                "description": "Filter logs before this date (ISO format YYYY-MM-DDTHH:MM:SS)"
            },
            "full_details": {
                "type": "boolean",
                "description": "If true, show all fields; if false, show only context summaries",
                "default": False
            }
        },
        "required": ["limit"]
    }
  • Registration of the 'read-logs' tool in the @server.list_tools() response, specifying name, description, and input schema.
    types.Tool(
        name="read-logs",
        description="Retrieve logged conversation variations from the database.",
        inputSchema={
            "type": "object",
            "properties": {
                "limit": {
                    "type": "integer",
                    "description": "Maximum number of logs to retrieve",
                    "default": 10,
                    "minimum": 1,
                    "maximum": 100
                },
                "start_date": {
                    "type": "string",
                    "description": "Filter logs after this date (ISO format YYYY-MM-DDTHH:MM:SS)"
                },
                "end_date": {
                    "type": "string",
                    "description": "Filter logs before this date (ISO format YYYY-MM-DDTHH:MM:SS)"
                },
                "full_details": {
                    "type": "boolean",
                    "description": "If true, show all fields; if false, show only context summaries",
                    "default": False
                }
            },
            "required": ["limit"]
        }
    ),
  • Helper method on the LogDatabase class used by the handler to query logs from the SQLite database. It supports limit and date-range filters, although the handler does not yet pass the date filters, and the full_details parameter is accepted but not used in the query (sketches of how both could be wired up follow this list).
    def get_logs(self, limit: int = 10, start_date: Optional[datetime] = None,
                 end_date: Optional[datetime] = None, full_details: bool = False) -> list:
        """
        Retrieve logs with optional filtering.

        Args:
            limit (int): Maximum number of logs to retrieve
            start_date (datetime, optional): Filter by start date
            end_date (datetime, optional): Filter by end date
            full_details (bool): If True, return all fields; if False, return only context summary

        Returns:
            list: List of log entries
        """
        query = "SELECT * FROM chat_monitoring"
        params = []
        conditions = []

        if start_date:
            conditions.append("timestamp >= ?")
            params.append(start_date)
        if end_date:
            conditions.append("timestamp <= ?")
            params.append(end_date)

        if conditions:
            query += " WHERE " + " AND ".join(conditions)

        query += " ORDER BY timestamp DESC LIMIT ?"
        params.append(limit)

        try:
            with sqlite3.connect(self.db_path) as conn:
                cursor = conn.cursor()
                cursor.execute(query, params)
                return cursor.fetchall()
        except sqlite3.Error as e:
            print(f"Database error: {str(e)}")
            return []
        except Exception as e:
            print(f"Error: {str(e)}")
            return []
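
As noted above, the handler currently calls db.get_logs without start_date or end_date even though the input schema accepts them. A minimal sketch of how the handler could parse and forward these filters, assuming ISO-format strings and datetime.fromisoformat, is shown below; the _parse_iso helper is hypothetical, and arguments, limit, full_details, and db come from the surrounding read-logs handler.

    # Sketch only: forwarding the optional date filters from the tool arguments
    # into LogDatabase.get_logs. Assumes the ISO format described in the schema.
    from datetime import datetime

    def _parse_iso(value):
        """Return a datetime for a valid ISO string, otherwise None (hypothetical helper)."""
        try:
            return datetime.fromisoformat(value) if value else None
        except ValueError:
            return None

    start_date = _parse_iso(arguments.get("start_date"))
    end_date = _parse_iso(arguments.get("end_date"))

    logs = db.get_logs(
        limit=limit,
        start_date=start_date,
        end_date=end_date,
        full_details=full_details,
    )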
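
The full_details flag has a similar gap: get_logs accepts it but always selects every column, so the handler only truncates fields for display. One purely hypothetical way to honor the flag is sketched below; the summary column names are assumptions about the chat_monitoring table, and the handler's positional indexing (log[0], log[1], log[4], log[5], log[8]) would need to change to match.

    # Hypothetical sketch only: letting full_details control the selected columns.
    # The summary column names are assumed, not confirmed by this page.
    if full_details:
        query = "SELECT * FROM chat_monitoring"
    else:
        query = "SELECT log_id, timestamp, context_summary FROM chat_monitoring"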

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/truaxki/mcp-variance-log'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.