wireshark_follow_stream

Reassemble and view complete network stream content from PCAP files with pagination, search, and multiple output formats to analyze protocols like TCP, UDP, TLS, HTTP, and HTTP2.

Instructions

[Stream] Reassemble and view complete stream content. Supports pagination to avoid token limits.

Args:
- stream_index: Stream ID from conversations/stats
- protocol: Stream protocol - 'tcp', 'udp', 'tls', 'http', 'http2'
- output_mode: Output format - 'ascii', 'hex', 'raw'
- limit_lines: Max lines to return (default: 500)
- offset_lines: Skip first N lines (for pagination)
- search_content: Optional string to grep/search within the stream

Returns: Reconstructed stream data or JSON error

Errors:
- FileNotFound: pcap_file does not exist
- InvalidParameter: Invalid protocol
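
Since the tool returns either reassembled stream text or a JSON error, a caller may want to disambiguate the two. A minimal sketch, assuming the error comes back as a JSON object with an "error" key (the exact shape of the error object is not documented):

```python
import json

def parse_result(raw):
    """Classify a wireshark_follow_stream result as stream data or an error.

    Assumes errors arrive as a JSON object like {"error": "FileNotFound"};
    anything that is not such an object is treated as stream content.
    """
    try:
        payload = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return ("data", raw)  # plain reassembled stream text
    if isinstance(payload, dict) and "error" in payload:
        return ("error", payload["error"])
    return ("data", raw)

kind, value = parse_result('{"error": "FileNotFound"}')
# kind == "error", value == "FileNotFound"
```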

Example: wireshark_follow_stream("traffic.pcap", stream_index=0, search_content="password")
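
As a sketch of how limit_lines and offset_lines combine for pagination: the `call_follow_stream` stub below stands in for the real MCP tool invocation, and the 1200-line stream it serves is invented for illustration.

```python
def call_follow_stream(pcap_file, stream_index, protocol="tcp",
                       output_mode="ascii", limit_lines=500,
                       offset_lines=0, search_content=None):
    # Stub standing in for the real tool call: pretend the reassembled
    # stream is 1200 lines long, and apply the documented parameters.
    all_lines = [f"line {i}" for i in range(1200)]
    if search_content is not None:
        all_lines = [l for l in all_lines if search_content in l]
    return all_lines[offset_lines:offset_lines + limit_lines]

def fetch_full_stream(pcap_file, stream_index, page_size=500):
    """Page through a stream until a short (or empty) page comes back."""
    lines, offset = [], 0
    while True:
        page = call_follow_stream(pcap_file, stream_index,
                                  limit_lines=page_size,
                                  offset_lines=offset)
        lines.extend(page)
        if len(page) < page_size:
            return lines
        offset += page_size

stream = fetch_full_stream("traffic.pcap", stream_index=0)
print(len(stream))  # → 1200 with the stub above
```

The same loop works against the real tool by replacing the stub with an actual MCP call; the short-page check is what terminates the loop.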

Input Schema

Name            Required  Description  Default
pcap_file       Yes
stream_index    Yes
protocol        No                     tcp
output_mode     No                     ascii
limit_lines     No                     500
offset_lines    No
search_content  No

Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: reassembly of stream content, pagination support to avoid token limits, and error conditions such as 'FileNotFound' and 'InvalidParameter'. However, it doesn't cover important aspects such as whether this is a read-only operation, performance implications for large streams, or authentication requirements. The example shows usage but doesn't fully explain behavioral nuances.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns, Errors, Example) and front-loaded with the core purpose. Most sentences earn their place by explaining functionality or parameters. However, the example passes 'traffic.pcap' positionally, which can confuse since 'pcap_file' never appears in the Args list, and the 'Returns' section is somewhat vague ('Reconstructed stream data or JSON error').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, stream reassembly functionality) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers purpose, all parameters, errors, and provides an example. The main gap is the lack of behavioral context about read/write nature and performance, but with the output schema handling returns, this is acceptable for a tool focused on data retrieval.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for 7 parameters. It provides semantic explanations for all parameters in the 'Args' section, adding meaning beyond the bare schema. For example, it explains 'stream_index' comes from 'conversations/stats', 'protocol' options, 'output_mode' formats, and how 'limit_lines' and 'offset_lines' enable pagination. This significantly enhances understanding, though some details like 'pcap_file' format expectations could be clearer.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'reassemble and view' and the resource 'complete stream content', making the purpose specific. It distinguishes itself from siblings like 'wireshark_get_packet_details' or 'wireshark_search_content' by focusing on stream reconstruction rather than packet-level analysis or general searching. However, it doesn't explicitly differentiate itself from 'wireshark_stats_conversations', which might provide stream IDs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing to view reassembled stream content with pagination support, suggesting it's for detailed stream analysis rather than high-level statistics. However, it lacks explicit guidance on when to use this versus alternatives like 'wireshark_search_content' for searching or 'wireshark_get_packet_details' for packet-level info, and doesn't mention prerequisites such as needing a stream index from conversations/stats.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
