cloudtrail_analyze

Analyze AWS CloudTrail logs to extract event timelines, identify unique users, categorize event types, and track source IPs for security auditing.

Instructions

Parse and analyze AWS CloudTrail logs.

Extracts event timeline, unique users, event types, and source IPs.

Returns: {"timeline": str, "unique_users": [str], "event_type_frequency": [str], "source_ips": [str], "error_events": [str]}.

Side effects: Read-only file analysis. Requires jq.
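
For reference, the tool returns a single text content item whose payload has the shape below (the values here are synthetic, not real CloudTrail data). Piping the payload through jq with a key check is an easy way to validate the shape:

```shell
# Illustrative payload only; jq -e exits non-zero if the keys do not match.
cat <<'EOF' | jq -e 'keys == ["error_events","event_type_frequency","source_ips","timeline","unique_users"]'
{
  "timeline": "2024-01-01T09:00:00Z\tConsoleLogin\t203.0.113.5\talice",
  "unique_users": ["alice"],
  "event_type_frequency": ["1 ConsoleLogin"],
  "source_ips": ["1 203.0.113.5"],
  "error_events": []
}
EOF
```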

Input Schema

Name     Required  Description                                      Default
log_dir  Yes       Directory containing CloudTrail JSON log files   (none)

Implementation Reference

  • The handler function for 'cloudtrail_analyze', which uses jq to parse and aggregate AWS CloudTrail logs.
    async ({ log_dir }) => {
      requireTool("jq");
    
      const logPath = resolve(log_dir);
      if (!existsSync(logPath) || !statSync(logPath).isDirectory()) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({ error: `Directory not found: ${logPath}` }),
            },
          ],
        };
      }
    
      // Event timeline (sorted by time)
      const timeline = await runShell(
        `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records | sort_by(.eventTime) | .[] | [.eventTime, .eventName, .sourceIPAddress, .userIdentity.userName // .userIdentity.principalId] | @tsv' 2>/dev/null | head -100`,
        { timeout: 60 }
      );
    
      // Unique users
      const users = await runShell(
        `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].userIdentity | (.userName // .principalId // .arn)' 2>/dev/null | sort -u`,
        { timeout: 30 }
      );
    
      // Event type frequency
      const events = await runShell(
        `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].eventName' 2>/dev/null | sort | uniq -c | sort -rn | head -30`,
        { timeout: 30 }
      );
    
      // Source IPs
      const ips = await runShell(
        `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].sourceIPAddress' 2>/dev/null | sort | uniq -c | sort -rn | head -20`,
        { timeout: 30 }
      );
    
      // Error events
      const errors = await runShell(
        `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[] | select(.errorCode != null) | [.eventTime, .eventName, .errorCode, .errorMessage] | @tsv' 2>/dev/null | head -30`,
        { timeout: 30 }
      );
    
      const result = {
        timeline: timeline.stdout.slice(0, 3000),
        unique_users: parseLines(users.stdout).slice(0, 30),
        event_type_frequency: parseLines(events.stdout).slice(0, 30),
        source_ips: parseLines(ips.stdout).slice(0, 20),
        error_events: parseLines(errors.stdout).slice(0, 30),
      };
    
      return { content: [{ type: "text", text: JSON.stringify(result) }] };
    }
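  The jq pipelines the handler shells out to can be exercised on their own. A minimal sketch, assuming jq and standard coreutils are on PATH; the log directory and records below are synthetic:

    ```shell
    # Build a synthetic CloudTrail log file (hypothetical data).
    tmp=$(mktemp -d)
    cat > "$tmp/events.json" <<'EOF'
    {"Records":[
      {"eventTime":"2024-01-01T10:00:00Z","eventName":"StopInstances","sourceIPAddress":"203.0.113.5","userIdentity":{"userName":"alice"}},
      {"eventTime":"2024-01-01T09:00:00Z","eventName":"ConsoleLogin","sourceIPAddress":"203.0.113.5","userIdentity":{"principalId":"AIDAEXAMPLE"},"errorCode":"AccessDenied","errorMessage":"Failed authentication"}
    ]}
    EOF

    # Timeline: sort records by eventTime, emit one TSV row per record.
    # The // operator falls back to principalId when userName is absent.
    cat "$tmp"/*.json | jq -r '.Records | sort_by(.eventTime) | .[] | [.eventTime, .eventName, .sourceIPAddress, .userIdentity.userName // .userIdentity.principalId] | @tsv'

    # Event-type frequency: the classic sort | uniq -c | sort -rn counting idiom.
    cat "$tmp"/*.json | jq -r '.Records[].eventName' | sort | uniq -c | sort -rn

    # Error events only: select() drops records without an errorCode.
    cat "$tmp"/*.json | jq -r '.Records[] | select(.errorCode != null) | [.eventTime, .eventName, .errorCode, .errorMessage] | @tsv'
    ```

  One caveat worth noting: jq's `sort_by` runs per input document, so when several `.json` files are concatenated the timeline is sorted within each file rather than globally; a globally ordered timeline would need the TSV rows piped through `sort` afterwards.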
  • Registration of the 'cloudtrail_analyze' tool.
    server.tool(
      "cloudtrail_analyze",
      "Parse and analyze AWS CloudTrail logs.\n\nExtracts event timeline, unique users, event types, and source IPs.\n\nReturns: {\"timeline\": str, \"unique_users\": [str], \"event_type_frequency\": [str], \"source_ips\": [str], \"error_events\": [str]}.\n\nSide effects: Read-only file analysis. Requires jq.",
      {
        log_dir: z
          .string()
          .describe("Directory containing CloudTrail JSON log files"),
      },
      async ({ log_dir }) => {
        requireTool("jq");
    
        const logPath = resolve(log_dir);
        if (!existsSync(logPath) || !statSync(logPath).isDirectory()) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({ error: `Directory not found: ${logPath}` }),
              },
            ],
          };
        }
    
        // Event timeline (sorted by time)
        const timeline = await runShell(
          `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records | sort_by(.eventTime) | .[] | [.eventTime, .eventName, .sourceIPAddress, .userIdentity.userName // .userIdentity.principalId] | @tsv' 2>/dev/null | head -100`,
          { timeout: 60 }
        );
    
        // Unique users
        const users = await runShell(
          `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].userIdentity | (.userName // .principalId // .arn)' 2>/dev/null | sort -u`,
          { timeout: 30 }
        );
    
        // Event type frequency
        const events = await runShell(
          `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].eventName' 2>/dev/null | sort | uniq -c | sort -rn | head -30`,
          { timeout: 30 }
        );
    
        // Source IPs
        const ips = await runShell(
          `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[].sourceIPAddress' 2>/dev/null | sort | uniq -c | sort -rn | head -20`,
          { timeout: 30 }
        );
    
        // Error events
        const errors = await runShell(
          `cat '${logPath}'/*.json 2>/dev/null | jq -r '.Records[] | select(.errorCode != null) | [.eventTime, .eventName, .errorCode, .errorMessage] | @tsv' 2>/dev/null | head -30`,
          { timeout: 30 }
        );
    
        const result = {
          timeline: timeline.stdout.slice(0, 3000),
          unique_users: parseLines(users.stdout).slice(0, 30),
          event_type_frequency: parseLines(events.stdout).slice(0, 30),
          source_ips: parseLines(ips.stdout).slice(0, 20),
          error_events: parseLines(errors.stdout).slice(0, 30),
        };
    
        return { content: [{ type: "text", text: JSON.stringify(result) }] };
      }
    );
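  The unique-user pipeline leans on jq's `//` alternative operator, which falls through from `userName` to `principalId` to `arn` when a field is null or missing, with `sort -u` deduplicating the result. A small illustration with synthetic identities:

    ```shell
    # Three records: two named users (one duplicate), one with only a principalId.
    echo '{"Records":[{"userIdentity":{"userName":"alice"}},{"userIdentity":{"principalId":"AIDAEXAMPLE"}},{"userIdentity":{"userName":"alice"}}]}' \
      | jq -r '.Records[].userIdentity | (.userName // .principalId // .arn)' \
      | sort -u
    ```

  The duplicate "alice" collapses to a single line, and the record with no userName contributes its principalId instead, so two identities come out.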
