
Charles MCP Server

analyze_recorded_traffic

Analyze saved network traffic recordings to identify patterns, filter by criteria like host or status, and generate structured summaries for debugging and monitoring.

Instructions

Analyze a saved recording snapshot with compact summaries. Returns structured TrafficSummary items with matched_fields and match_reasons. Use get_traffic_entry_detail to drill down into a specific entry_id afterwards.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| recording_path | No | | |
| preset | No | | `api_focus` |
| host_contains | No | | |
| path_contains | No | | |
| method_in | No | | |
| status_in | No | | |
| resource_class_in | No | | |
| min_priority_score | No | | |
| request_header_name | No | | |
| request_header_value_contains | No | | |
| response_header_name | No | | |
| response_header_value_contains | No | | |
| request_content_type | No | | |
| response_content_type | No | | |
| request_body_contains | No | | |
| response_body_contains | No | | |
| request_json_query | No | | |
| response_json_query | No | | |
| include_body_preview | No | | `true` |
| max_items | No | | `10` |
| max_preview_chars | No | | `128` |
| max_headers_per_side | No | | `6` |
| scan_limit | No | | `500` |
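
To illustrate how these filters compose, here is a hedged sketch of an arguments payload an agent might pass. The host, status codes, and content type below are invented values, not from the source; only the field names come from the schema above.

```python
import json

# Hypothetical arguments for analyze_recorded_traffic; every value is
# illustrative, chosen only to show the filter fields composing together.
arguments = {
    "preset": "api_focus",
    "host_contains": "api.example.com",
    "status_in": [500, 502, 503],
    "response_content_type": "application/json",
    "max_items": 5,
}
print(json.dumps(arguments, indent=2))
```

Unset filters are simply omitted; the tool falls back to the defaults shown in the table.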

Output Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| items | No | | |
| source | Yes | | |
| warnings | No | | |
| truncated | No | | |
| next_cursor | No | | |
| total_items | No | | |
| matched_count | No | | |
| scanned_count | No | | |
| filtered_out_count | No | | |
| filtered_out_by_class | No | | |
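
As a hedged sketch of how these output fields fit together (the payload values below are invented, not from the source), a caller would typically check `warnings` and `truncated` before collecting `entry_id`s for the follow-up `get_traffic_entry_detail` call:

```python
# A hypothetical result shaped like the output schema above.
result = {
    "source": "history",
    "items": [{"entry_id": "42", "host": "api.example.com", "status": 503}],
    "warnings": [],
    "truncated": False,
    "matched_count": 1,
    "scanned_count": 120,
}

if "no_saved_recordings" in (result.get("warnings") or []):
    print("nothing recorded yet")
elif result.get("truncated"):
    print("results truncated; raise max_items or narrow the filters")

# Collect entry ids to drill into with get_traffic_entry_detail afterwards.
entry_ids = [item["entry_id"] for item in result.get("items", [])]
print(entry_ids)
```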

Implementation Reference

  • The core logic for analyze_recorded_traffic, which prepares a history snapshot and builds the query result:

```python
async def analyze_recorded_traffic(
    self,
    *,
    recording_path: str | None,
    query: TrafficQuery,
) -> TrafficQueryResult:
    try:
        prepared = await self.prepare_capture(
            source="history",
            query=query,
            recording_path=recording_path,
            advance=False,
        )
    except FileNotFoundError:
        return TrafficQueryResult(
            source="history",
            items=[],
            total_items=0,
            scanned_count=0,
            matched_count=0,
            filtered_out_count=0,
            filtered_out_by_class={},
            warnings=["no_saved_recordings"],
        )
    return self.build_query_result(prepared=prepared, query=query, include_items=True)
```
  • The tool registration and function handler for analyze_recorded_traffic within the MCP tool definitions:

```python
async def analyze_recorded_traffic(
    ctx: ToolContext,
    recording_path: Optional[str] = None,
    preset: TrafficPreset = "api_focus",
    host_contains: Optional[str] = None,
    path_contains: Optional[str] = None,
    method_in: Optional[list[str]] = None,
    status_in: Optional[list[int]] = None,
    resource_class_in: Optional[list[str]] = None,
    min_priority_score: Optional[int] = None,
    request_header_name: Optional[str] = None,
    request_header_value_contains: Optional[str] = None,
    response_header_name: Optional[str] = None,
    response_header_value_contains: Optional[str] = None,
    request_content_type: Optional[str] = None,
    response_content_type: Optional[str] = None,
    request_body_contains: Optional[str] = None,
    response_body_contains: Optional[str] = None,
    request_json_query: Optional[str] = None,
    response_json_query: Optional[str] = None,
    include_body_preview: bool = True,
    max_items: int = 10,
    max_preview_chars: int = 128,
    max_headers_per_side: int = 6,
    scan_limit: int = 500,
) -> TrafficQueryResult:
    """Analyze a saved recording snapshot with compact summaries.

    Returns structured TrafficSummary items with matched_fields and match_reasons.
    Use get_traffic_entry_detail to drill down into a specific entry_id afterwards.
    """
```
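
Since this handler is exposed through the MCP tool definitions, an MCP client would invoke it with a standard `tools/call` request. Below is a hedged sketch of that request; the filter values are illustrative, not from the source.

```python
import json

# Hypothetical JSON-RPC "tools/call" request an MCP client could send to
# invoke analyze_recorded_traffic; the argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_recorded_traffic",
        "arguments": {"host_contains": "api.example.com", "status_in": [404]},
    },
}
print(json.dumps(request))
```
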
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the return structure ('matched_fields and match_reasons') and the compact nature of summaries. However, it fails to mention operational characteristics like the scan_limit behavior, read-only safety, or filtering capabilities implied by the extensive parameter set.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three well-structured sentences with zero waste: purpose declaration, return value specification, and workflow guidance. Information is front-loaded appropriately given the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (23 filter parameters, output schema, no annotations), the description is inadequate. While it mentions the output structure, it omits any explanation of the filtering capabilities (host/path/header/body filters), which constitute the primary interaction model for this tool. The 0% schema coverage makes this omission critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 23 parameters with 0% description coverage (titles only). The description completely fails to compensate for this gap, not mentioning any parameters, the filtering paradigm, or even the 'preset' enum which defines the analysis mode. Users have no guidance on how to construct valid queries from the description alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Analyze'), resource ('saved recording snapshot'), and output format ('compact summaries'/'TrafficSummary items'). It distinguishes this tool from the sibling 'get_traffic_entry_detail' by positioning this as the summary/overview tool versus the drill-down tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance: 'Use get_traffic_entry_detail to drill down into a specific entry_id afterwards.' This clearly indicates the sequence of operations. However, it does not distinguish when to use this versus the similarly named 'query_recorded_traffic' sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/heizaheiza/Charles-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.