
Charles MCP Server

query_recorded_traffic

Search and filter previously captured network traffic recordings to analyze HTTP requests and responses by host, method, or content patterns.

Instructions

Query the latest saved recording. This tool never reads the live Charles session.

Input Schema

Name | Required | Description | Default
host_contains | No | Filter by host substring (containment match), e.g. api.example.com |
http_method | No | Filter by HTTP method. Only standard HTTP methods are allowed. Must be a method name, not a regular expression and not a path. |
keyword_regex | No | Python regular expression used to search request/response content. Prefer short expressions and avoid catastrophic backtracking. |
keep_request | No | |
keep_response | No | |
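The schema's advice on keyword_regex (keep it short, avoid catastrophic backtracking) can be approximated client-side before calling the tool. A minimal sketch, assuming only a compile-and-length pre-check (real backtracking detection is harder and is not attempted here; the helper name and limit are invented for illustration):

```python
import re


def is_safe_keyword_regex(pattern: str, max_length: int = 200) -> tuple[bool, str]:
    """Best-effort pre-check for a keyword_regex value.

    Only verifies that the pattern compiles and stays short; it does NOT
    prove the pattern is free of catastrophic backtracking.
    """
    if len(pattern) > max_length:
        return False, f"pattern longer than {max_length} characters"
    try:
        re.compile(pattern)
    except re.error as exc:
        return False, f"invalid regex: {exc}"
    return True, ""


ok, _ = is_safe_keyword_regex("token|session")   # simple alternation passes
bad, msg = is_safe_keyword_regex("(")            # unbalanced paren fails to compile
```

A pattern like `(a+)+$` would pass this check yet still backtrack badly, which is why the schema note asks for short, simple expressions in the first place.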

Output Schema

Name | Required | Description | Default
path | No | |
items | Yes | |
source | Yes | |
warnings | No | |
truncated | No | |
total_items | Yes | |
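To make the output fields concrete, here is a hypothetical result shaped to match the schema above. All values are invented for illustration; only the field names and their required/optional status come from the schema:

```python
# Hypothetical query_recorded_traffic result; values are illustrative only.
example_result = {
    # Required fields
    "source": "recording",        # where the entries came from
    "total_items": 2,             # entries matched by the filters
    "items": [
        {"host": "api.example.com", "method": "GET", "path": "/v1/session"},
        {"host": "api.example.com", "method": "POST", "path": "/v1/token"},
    ],
    # Optional fields
    "path": "latest-recording",   # which recording was queried (placeholder)
    "warnings": [],
    "truncated": False,
}

required_fields = {"items", "source", "total_items"}
assert required_fields <= example_result.keys()
```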

Implementation Reference

  • The implementation of the `query_recorded_traffic` MCP tool, which queries the latest saved recording.
    async def query_recorded_traffic(
        ctx: ToolContext,
        host_contains: HostContains = None,
        http_method: HttpMethodFilter = None,
        keyword_regex: KeywordRegex = None,
        keep_request: bool = True,
        keep_response: bool = True,
    ) -> RecordedTrafficQueryResult:
        """Query the latest saved recording. This tool never reads the live Charles session."""
        deps = get_tool_dependencies(ctx)
        host_contains_normalized = normalize_text_filter(host_contains)
        method_normalized, method_error = normalize_http_method(http_method)
        if method_error:
            raise ValueError(guidance_error_message(method_error))
    
        if keyword_regex:
            valid, error_msg = deps.history_service.validate_keyword_regex(keyword_regex)
            if not valid:
                raise ValueError(
                    guidance_error_message(
                        build_tool_guidance_error(
                            parameter="keyword_regex",
                            received=keyword_regex,
                            reason=f"invalid regex: {error_msg}",
                            valid_input="Provide a valid Python regular expression.",
                            retry_example='query_recorded_traffic(keyword_regex="token|session")',
                        )
                    )
                )
    
        return await deps.history_service.query_latest_result(
            host_contains=host_contains_normalized,
            method_normalized=method_normalized,
            keyword_regex=keyword_regex,
            keep_request=keep_request,
            keep_response=keep_response,
        )
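The helpers referenced in the implementation (normalize_text_filter, normalize_http_method) are not shown on this page. A minimal sketch of what they might do, based on the schema notes ("must be a method name, not a regular expression, not a path") and the `(value, error)` return shape used above — these bodies are assumptions, not the server's actual code:

```python
# Standard HTTP methods per RFC 9110 plus CONNECT/TRACE.
STANDARD_METHODS = {
    "GET", "POST", "PUT", "DELETE", "PATCH",
    "HEAD", "OPTIONS", "TRACE", "CONNECT",
}


def normalize_text_filter(value):
    """Trim whitespace; treat empty strings as 'no filter'."""
    if value is None:
        return None
    value = value.strip()
    return value or None


def normalize_http_method(method):
    """Uppercase the method and reject anything outside the standard set.

    Returns (normalized, error), mirroring the shape consumed by the
    tool's error handling above.
    """
    if method is None:
        return None, None
    candidate = method.strip().upper()
    if candidate not in STANDARD_METHODS:
        return None, f"unsupported HTTP method: {method!r}"
    return candidate, None
```

Rejecting non-method strings here (rather than pattern-matching them) is what lets the tool raise a guidance error early instead of silently matching nothing.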
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the critical behavioral trait of auto-selecting the 'latest' recording (explaining the absence of a recording_id parameter) and the boundary constraint against reading live sessions. However, it omits details about return format, read-only safety, or performance limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. The first sentence establishes the core function, while the second sentence immediately addresses the critical distinction from live capture tools. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately omits return value details. It covers the essential scope (saved vs live) and the implicit recording selection behavior. However, gaps remain regarding the filtering capabilities and the undocumented boolean parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 60%, with keep_request and keep_response lacking descriptions. The description adds no information about the filtering parameters (host_contains, http_method, keyword_regex) or the boolean flags, failing to compensate for the schema gaps. The word 'Query' vaguely implies filtering but provides no specific semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (Query) and resource (latest saved recording), clearly distinguishing it from live capture tools. The explicit statement 'never reads the live Charles session' effectively differentiates this tool from siblings like read_live_capture and query_live_capture_entries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'never reads the live Charles session' provides clear contextual guidance for when to use this tool (saved recordings) versus live capture alternatives. However, it stops short of explicitly naming the sibling tools or stating positive conditions like 'Use this when analyzing historical traffic.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

