
List Firewalla Boxes

firewalla_list_boxes
Read-only · Idempotent

List and audit Firewalla network security boxes to check online status, firmware versions, active devices, rules, and alarms for security monitoring.

Instructions

Discover the Firewalla boxes linked to this MSP account. This is the entry point for every audit — the returned gid is required by other tools.

Use this to answer:

  • "Is my box online and reporting in?"

  • "What firmware version is it running?"

  • "How many active devices, rules, alarms are there right now?"

Args:

  • group (string, optional): Filter to a specific group id.

  • response_format ('markdown' | 'json'): Output format (default: markdown).

Returns: {
  count: number,
  boxes: Array<{
    gid: string,          // box id — save this, other tools need it
    name: string,
    model: string,        // e.g. "gold_plus"
    mode: string,         // routing mode
    version: string,      // firmware
    online: boolean,
    publicIP?: string,
    lastSeen?: number,    // epoch seconds — not always populated
    license?: string,
    location?: string,
    deviceCount: number,
    ruleCount: number,
    alarmCount: number,   // currently-active alarms
    group?: { id, name }
  }>
}
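The gid handoff described above can be sketched in Python. This is a minimal illustration, assuming a 'json'-format response already parsed into a dict; the sample boxes below are invented, not real output:

```python
# Minimal sketch: pull the gid from each box in a parsed JSON response.
# The dict shape follows the Returns contract above; sample data is
# illustrative only.
sample_response = {
    "count": 2,
    "boxes": [
        {"gid": "abc-123", "name": "Office", "model": "gold_plus",
         "mode": "router", "version": "1.979", "online": True,
         "deviceCount": 42, "ruleCount": 17, "alarmCount": 0},
        {"gid": "def-456", "name": "Warehouse", "model": "purple",
         "mode": "simple", "version": "1.975", "online": False,
         "deviceCount": 8, "ruleCount": 5, "alarmCount": 3},
    ],
}

# Save every gid: the other tools in this suite require it as input.
gids = [box["gid"] for box in sample_response["boxes"]]
```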

Audit framing:

  • Offline box → can't observe current state; surface it.

  • High alarmCount → follow up with firewalla_search_alarms.

  • publicIP exposed unexpectedly → investigate with firewalla_search_flows.
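The framing above amounts to a small triage routine. A sketch, assuming a box dict matching the Returns shape; the alarm threshold is an arbitrary illustration, and whether a publicIP is "unexpected" ultimately needs operator context:

```python
def triage(box, alarm_threshold=1):
    """Map one box record to follow-up actions per the audit framing."""
    actions = []
    if not box.get("online"):
        # Offline box: current state is unobservable, so surface it.
        actions.append("surface: box offline, current state unobservable")
    if box.get("alarmCount", 0) >= alarm_threshold:
        actions.append("follow up with firewalla_search_alarms")
    if box.get("publicIP"):
        # publicIP is present; confirming it is unexpected requires
        # knowledge of the deployment, so flag it for investigation.
        actions.append("investigate with firewalla_search_flows")
    return actions
```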

Input Schema

  • group (string, optional): Filter to boxes in a specific group id. Omit to list all boxes on the account.

  • response_format ('markdown' | 'json', optional; default: markdown): Output format. 'markdown' renders human-readable audit tables; 'json' returns structured data suitable for chaining into another tool call.
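For chaining, the call arguments are a small JSON object. An illustrative sketch of building one in Python; 'json' is the format to pick when the result feeds another tool:

```python
import json

# Illustrative arguments for a firewalla_list_boxes call.
# Omitting "group" lists every box on the account.
args = {"response_format": "json"}
payload = json.dumps(args)
```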
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering the safety profile. The description adds valuable behavioral context beyond annotations: it explains the audit framing logic, clarifies that 'lastSeen' is 'not always populated', and provides guidance on interpreting results and next steps based on findings like offline boxes or high alarm counts.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with clear sections (purpose, usage questions, Args, Returns, audit framing), and efficiently conveys the necessary information. Although comprehensive, every section earns its place, though the Args section could be tightened given that the schema already covers the parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity as an audit entry point with rich return data and sibling relationships, the description provides complete context. It explains the tool's role in the ecosystem, provides detailed return structure documentation (compensating for no output schema), and includes audit framing that guides interpretation and next steps with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters well-documented in the schema. The description's Args section essentially repeats what's in the schema without adding significant semantic context beyond what's already structured. The baseline of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Discover'), resource ('Firewalla boxes linked to this MSP account'), and scope ('entry point for every audit'). It distinguishes from siblings by emphasizing this tool provides the essential 'gid' needed by other tools, unlike more specific tools like firewalla_search_alarms or firewalla_list_devices.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('entry point for every audit'), when to follow up with alternatives ('High alarmCount → follow up with firewalla_search_alarms', 'publicIP exposed unexpectedly → investigate with firewalla_search_flows'), and includes audit framing questions that guide appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

