Glama

log_tool

Query and filter logs with pagination to monitor tool execution status, duration, and timestamps for analysis and debugging.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `pageSize` | No | Logs per page (1-100) | 10 |
| `page` | No | Page number (>= 1) | 1 |
| `toolName` | No | Regex to match tool name | |
| `status` | No | Log status (`success` or `error`) | |
| `minDuration` | No | Minimum duration (ms) | |
| `maxDuration` | No | Maximum duration (ms) | |
| `startTime` | No | Start time (ISO8601) | |
| `endTime` | No | End time (ISO8601) | |
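For instance, a request for the second page of failed runs of tools whose names start with `fetch_` might pass arguments like the following (hypothetical values; the keys and constraints come from the schema above):

```typescript
// Hypothetical log_tool arguments; every key is optional per the input schema.
const args = {
  pageSize: 20,                      // clamped to 1-100 (default 10)
  page: 2,                           // 1-indexed (default 1)
  toolName: '^fetch_',               // regex matched against the tool name
  status: 'error',                   // 'success' | 'error'
  minDuration: 500,                  // milliseconds
  startTime: '2024-01-01T00:00:00Z', // ISO8601
};
console.log(JSON.stringify(args));
```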

Implementation Reference

  • Main execution logic for log_tool: reads the log file line by line, applies filters (toolName, status, duration, time range), paginates the matches, and returns JSON-formatted log entries.

```typescript
import * as fs from 'fs';
import * as readline from 'readline';
// `logFile` (path to the JSONL log) and the `LogEntry` type are defined elsewhere in the module.

export default async (request: any): Promise<{ content: any[]; isError?: boolean }> => {
  try {
    const params = request.params.arguments;
    let { pageSize, page, toolName, status, minDuration, maxDuration, startTime, endTime } = params;

    // Parameter preprocessing: clamp pageSize to at most 100 and page to at least 1
    pageSize = Math.min(pageSize || 10, 100);
    page = Math.max(page || 1, 1);
    const skip = (page - 1) * pageSize;
    const take = pageSize;

    const paginatedLogs: LogEntry[] = [];
    let matchedCount = 0;

    // If the log file does not exist, return an empty result set
    if (!fs.existsSync(logFile)) {
      return { content: [{ type: 'text', text: JSON.stringify([], null, 2) }] };
    }

    const fileStream = fs.createReadStream(logFile);
    const rl = readline.createInterface({
      input: fileStream,
      crlfDelay: Infinity,
    });

    for await (const line of rl) {
      try {
        if (line.trim() === '') continue;
        const logEntry: LogEntry = JSON.parse(line);
        logEntry.tool = logEntry.tool || 'unknown';

        // Apply filters
        if (toolName && !new RegExp(toolName).test(logEntry.tool)) continue;
        if (status && logEntry.stat !== status) continue;
        if (minDuration && logEntry.cost < minDuration) continue;
        if (maxDuration && logEntry.cost > maxDuration) continue;
        if (startTime && new Date(logEntry.ts) < new Date(startTime)) continue;
        if (endTime && new Date(logEntry.ts) > new Date(endTime)) continue;

        // If filters pass, check for pagination
        if (matchedCount >= skip) {
          paginatedLogs.push(logEntry);
        }
        matchedCount++;

        // If the page is full, stop reading
        if (paginatedLogs.length >= take) {
          rl.close();
          fileStream.destroy();
          break;
        }
      } catch (parseError) {
        // Ignore lines that are not valid JSON
      }
    }

    return { content: [{ type: 'text', text: JSON.stringify(paginatedLogs, null, 2) }] };
  } catch (error: any) {
    return {
      content: [{ type: 'text', text: JSON.stringify(`Query failed: ${error.message}`, null, 2) }],
      isError: true,
    };
  }
};
```
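The skip/take arithmetic in the handler above can be isolated into a small sketch (a simplification for illustration, not the tool's actual code): matched entries are counted as the stream is read, only those with indices in `[skip, skip + pageSize)` are collected, and reading stops as soon as the page is full.

```typescript
// Sketch of the streaming skip/take pagination (assumed simplification of the handler).
function paginate<T>(matches: Iterable<T>, page: number, pageSize: number): T[] {
  const size = Math.min(pageSize || 10, 100); // clamp as in the handler
  const p = Math.max(page || 1, 1);
  const skip = (p - 1) * size;
  const out: T[] = [];
  let matched = 0;
  for (const m of matches) {
    if (matched >= skip) out.push(m); // inside the requested page window
    matched++;
    if (out.length >= size) break;    // page full: stop consuming the stream
  }
  return out;
}

// Page 2 with 2 entries per page over 5 matches yields entries 3 and 4.
console.log(JSON.stringify(paginate([1, 2, 3, 4, 5], 2, 2)));
```

Stopping early once the page is full is what lets the tool close the read stream without scanning the rest of a large log file.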
  • Input schema defining parameters for log querying: pagination (pageSize, page) and filters (toolName regex, status, durations, time range).

```typescript
export const schema = {
  name: 'log_tool',
  description: 'Query logs with filtering and pagination',
  type: 'object',
  properties: {
    pageSize: { type: 'number', description: 'Logs per page (1-100)', minimum: 1, maximum: 100, default: 10 },
    page: { type: 'number', description: 'Page number (>= 1)', minimum: 1, default: 1 },
    toolName: { type: 'string', description: 'Regex to match tool name' },
    status: { type: 'string', description: 'Log status (success or error)', enum: ['success', 'error'] },
    minDuration: { type: 'number', description: 'Minimum duration (ms)' },
    maxDuration: { type: 'number', description: 'Maximum duration (ms)' },
    startTime: { type: 'string', description: 'Start time (ISO8601)' },
    endTime: { type: 'string', description: 'End time (ISO8601)' },
  },
  required: [],
};
```
  • Helper function to clean up resources when unloading the log_tool.

```typescript
export async function destroy() {
  // Release resources, stop timers, disconnect, etc.
  console.log("Destroy log_tool");
}
```

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/xiaoguomeiyitian/ToolBox'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.