
Search Firewalla Alarms

firewalla_search_alarms
Read-only · Idempotent

Search active security alarms on your Firewalla network to monitor threats, audit events by device or type, and identify patterns like rogue devices or repeated attacks.

Instructions

Search active Firewalla alarms with the MSP query grammar. This is the primary tool for "what security events are happening right now?" audits.

Use this to answer:

  • "Any alarms from devices not in a known group?"

  • "How many alarms of type X in the last 24h, grouped by device?"

  • "Which remote countries are triggering the most alarms?"

  • "Any alarms relating to a specific device (by MAC)?"

Args:

  • query (string, optional): Firewalla query grammar. Examples: type:1, device.mac:AA:BB:CC:DD:EE:FF, remote.country:CN, ts:>1700000000.

  • group_by (string, optional): e.g. device, type, remote.country.

  • sort_by (string, optional): e.g. ts:desc (default), ts:asc.

  • limit (number, 1–500, default 200).

  • cursor (string, optional): pagination cursor from a prior response.

  • response_format ('markdown' | 'json'): Output format (default: markdown).
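The query grammar above is a space-separated list of `field:value` filters. As a minimal sketch (the helper name `build_alarm_query` is hypothetical; the field names come from the examples above), a caller might compose the `query` argument like this:

```python
# Hypothetical helper for composing a firewalla_search_alarms query
# string from individual filters. Field names (type, device.mac, ts)
# follow the examples above; everything else is illustrative.
def build_alarm_query(alarm_type=None, mac=None, since_ts=None):
    """Join filters into the space-separated MSP query grammar."""
    parts = []
    if alarm_type is not None:
        parts.append(f"type:{alarm_type}")
    if mac is not None:
        parts.append(f"device.mac:{mac}")
    if since_ts is not None:
        parts.append(f"ts:>{since_ts}")
    return " ".join(parts)

# e.g. passed as: {"query": build_alarm_query(alarm_type=1,
#                  since_ts=1700000000), "limit": 50, "sort_by": "ts:desc"}
```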

Returns:

```
{
  count: number,         // items in this page
  next_cursor?: string,  // echo back to fetch the next page
  alarms: Array<{
    aid, gid, type, ts, message, status?,
    device?: { id?, name?, ip? },
    remote?: { ip?, country?, name?, region?, category? }
  }>
}
```
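Pagination works by echoing `next_cursor` back as `cursor` until it is absent. A sketch of that loop, assuming `call_tool` is a stand-in for however your MCP client invokes `firewalla_search_alarms` (it is not a real API here):

```python
# Sketch of cursor pagination against the return shape above.
def fetch_all_alarms(call_tool, query=None, limit=200):
    """Collect every page of alarms by following next_cursor."""
    alarms, cursor = [], None
    while True:
        args = {"limit": limit, "response_format": "json"}
        if query:
            args["query"] = query
        if cursor:
            args["cursor"] = cursor
        page = call_tool("firewalla_search_alarms", args)
        alarms.extend(page.get("alarms", []))
        cursor = page.get("next_cursor")
        if not cursor:  # no cursor means this was the last page
            return alarms
```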

Audit framing:

  • Alarm from an unknown MAC (device.id not in firewalla_list_devices) → rogue device.

  • Repeated alarms to the same remote.country → likely a single piece of malware; check firewalla_list_rules.

  • When counts are large, use group_by=type first for a bird's-eye view, then drill down.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | No | Firewalla query string (pass-through). See Firewalla docs for the grammar — supports filters like `device.mac:AA:BB:CC:DD:EE:FF`, `blocked:true`, `region:CN`, `ts:>1700000000`, etc. Omit to match everything. | |
| group_by | No | Group results by one or more fields (comma-separated). Examples: `device`, `device,domain`, `region`. When set, results are aggregated per group. | |
| sort_by | No | Sort expression. Format: `<field>:<asc\|desc>`. Common: `ts:desc` (default, newest first), `ts:asc` (oldest first), `download:desc` (biggest flows first). | |
| limit | No | Maximum results per page (1–500, default 200). Smaller values are recommended when auditing — easier to review. | |
| cursor | No | Pagination cursor echoed from a prior response's `next_cursor`. Omit for the first page. | |
| response_format | No | Output format. 'markdown' (default) renders human-readable audit tables. 'json' returns structured data suitable for chaining into another tool call. | markdown |
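The `group_by` parameter aggregates server-side, but the same per-type summary can be computed client-side from a raw alarm list. A sketch of that aggregation (field names follow the Returns shape above; nothing here is a real Firewalla API):

```python
# Sketch: per-type alarm counts, the kind of summary group_by=type
# would give you, computed locally from a raw alarm list.
from collections import Counter

def alarms_by_type(alarms):
    """Count alarms per numeric type, skipping malformed entries."""
    return Counter(a["type"] for a in alarms if "type" in a)
```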
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, the description adds valuable behavioral context beyond these annotations. It explains the tool's role in security audits, provides guidance on handling large result sets ('When counts get big, use group_by=type first'), and describes pagination behavior through the cursor parameter. The description doesn't contradict annotations and adds meaningful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose statement, usage examples, parameter details, return format, and audit guidance. Every sentence serves a specific purpose—no wasted words. The information is front-loaded with the core purpose, followed by progressively detailed guidance. The structure supports both quick understanding and deep reference.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, security audit focus) and the absence of an output schema, the description provides excellent contextual completeness. It fully documents the return structure in the 'Returns' section, explains pagination mechanics, provides audit-specific guidance, and references sibling tools for follow-up actions. The description compensates fully for the lack of output schema and provides comprehensive operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline would be 3, but the description adds significant value beyond the schema. The 'Args' section provides concrete query examples (`type:1`, `device.mac:AA:BB:CC:DD:EE:FF`, etc.) that illustrate the query grammar more vividly than the schema's description. It also explains the practical implications of parameters like 'group_by' for aggregation and 'response_format' for different use cases (human-readable vs. chaining).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose as 'Search active Firewalla alarms with the MSP query grammar' and positions it as 'the primary tool for "what security events are happening right now?" audits.' This clearly distinguishes it from sibling tools like firewalla_get_alarm (likely for single alarm retrieval) and firewalla_search_flows (for flow data rather than alarms), providing specific verb+resource+scope differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool through concrete example questions ('Any alarms from devices not in a known group?', 'How many alarms of type X in the last 24h, grouped by device?', etc.) and includes an 'Audit framing' section with specific scenarios (e.g., 'Alarm from an unknown MAC → rogue device'). It also implicitly suggests alternatives by referencing sibling tools like firewalla_list_devices and firewalla_list_rules for follow-up actions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

