camoufox-reverse-mcp

verify_signer_offline

Verifies a signing function offline by comparing its output against expected values from user-provided samples, returning pass rate and first point of divergence.

Instructions

Offline verify a signing function against user-provided samples.

Typical workflow:

  1. Capture real signed requests via network_capture + list_network_requests

  2. Extract samples into a list

  3. Write candidate signing code

  4. Call this tool -> get pass_rate + first_divergence

  5. Iterate

Args:

  - signer_code: JS evaluating to a function: (sample) => {param: computed_value}. Runs in the current page context.
  - samples: list of sample dicts, each with:
    - id: user-defined identifier
    - input: dict passed to the signer function
    - expected: dict of {param_name: expected_value_str}
  - compare_params: which params to compare. If None, compare all keys in each sample's expected.

Returns: dict with total_samples, passed, failed, pass_rate, first_divergence, details.
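
For illustration, a hypothetical call might look like the sketch below. The endpoint, timestamp, expected value, and the JS helper myHash are all invented for the example; substitute values captured from real traffic:

    result = await verify_signer_offline(
        signer_code="""(sample) => {
            // candidate theory: sign = hash(path + ts); myHash is hypothetical
            return { sign: myHash(sample.path + sample.ts) };
        }""",
        samples=[
            {
                "id": "req_1",
                "input": {"path": "/api/list", "ts": "1700000000"},
                "expected": {"sign": "3f2a9c"},
            },
        ],
        compare_params=["sign"],
    )
    # result -> {"total_samples": 1, "passed": 1, "failed": 0, "pass_rate": 1.0,
    #            "first_divergence": None, "details": [{"sample_id": "req_1", "passed": True}]}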

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| signer_code | Yes | | |
| samples | Yes | | |
| compare_params | No | | |

Implementation Reference

  • Main handler for verify_signer_offline tool. Accepts signer_code, samples, and optional compare_params. Evaluates the signing function in the browser page context, runs it against each sample, compares results against expected values, and returns stats (pass_rate, first_divergence, details).
    async def verify_signer_offline(
        signer_code: str,
        samples: list[dict],
        compare_params: list[str] | None = None,
    ) -> dict:
        """Offline verify a signing function against user-provided samples.
    
        Typical workflow:
          1. Capture real signed requests via network_capture + list_network_requests
          2. Extract samples into a list
          3. Write candidate signing code
          4. Call this tool -> get pass_rate + first_divergence
          5. Iterate
    
        Args:
            signer_code: JS evaluating to a function: (sample) => {param: computed_value}.
                Runs in current page context.
            samples: List of sample dicts, each with:
                - id: user-defined identifier
                - input: dict passed to signer function
                - expected: dict of {param_name: expected_value_str}
            compare_params: Which params to compare. If None, compare all keys
                in each sample's expected.
    
        Returns:
            dict with total_samples, passed, failed, pass_rate, first_divergence, details.
        """
        try:
            if not isinstance(samples, list) or not samples:
                return {"error": "samples must be a non-empty list"}
    
            page = await browser_manager.get_active_page()
            try:
                await page.evaluate(f"window.__mcp_signer_fn = {signer_code};")
            except Exception as e:
                return {"error": f"signer_code failed to evaluate: {e}"}
    
            details = []
            passed = failed = 0
            first_divergence = None
    
            for s in samples:
                sid = s.get("id", f"sample_{len(details)}")
                sample_input = s.get("input", {})
                expected = s.get("expected", {})
    
                try:
                    computed = await page.evaluate(
                        "(sample) => window.__mcp_signer_fn(sample)", sample_input)
                except Exception as e:
                    details.append({"sample_id": sid, "passed": False, "error": f"signer threw: {e}"})
                    failed += 1
                    continue
    
                diffs = _compare_params(expected, computed, compare_params)
                if not diffs:
                    passed += 1
                    details.append({"sample_id": sid, "passed": True})
                else:
                    failed += 1
                    details.append({"sample_id": sid, "passed": False, "diffs": diffs})
                    if first_divergence is None:
                        first_divergence = {"sample_id": sid, "diffs": diffs, "input": sample_input}
    
            return {
                "total_samples": len(samples), "passed": passed, "failed": failed,
                "pass_rate": round(passed / len(samples), 3) if samples else 0,
                "first_divergence": first_divergence, "details": details,
            }
        except Exception as e:
            return {"error": str(e)}
  • Helper _compare_params function that compares expected vs. computed dictionaries, optionally filtered by a focus list. Generates diff entries with character-level detail for string mismatches (see the worked example after this list).
    def _compare_params(expected: dict, computed: dict, focus: list[str] | None) -> list[dict]:
        diffs = []
        keys = focus if focus else list(expected.keys())
        for k in keys:
            exp = expected.get(k)
            act = (computed or {}).get(k)
            if exp == act:
                continue
            if isinstance(exp, str) and isinstance(act, str):
                first_diff = -1
                for i in range(min(len(exp), len(act))):
                    if exp[i] != act[i]:
                        first_diff = i
                        break
                if first_diff == -1 and len(exp) != len(act):
                    first_diff = min(len(exp), len(act))
                diffs.append({"param": k, "expected": exp, "actual": act,
                              "first_diff_char": first_diff,
                              "expected_length": len(exp), "actual_length": len(act)})
            else:
                diffs.append({"param": k, "expected": exp, "actual": act})
        return diffs
  • Registration of the verification module (which contains verify_signer_offline) via import in server.py.
    from .tools import verification     # noqa: E402, F401  — verify_signer_offline
  • The @mcp.tool() decorator registering verify_signer_offline as an MCP tool.
    @mcp.tool()
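
As a quick sketch of the helper's behavior (values invented for illustration), a single-character mismatch produces a diff that pinpoints the divergent index, and the same diff surfaces in the tool's first_divergence field:

    >>> _compare_params({"sign": "abc123"}, {"sign": "abd123"}, None)
    [{"param": "sign", "expected": "abc123", "actual": "abd123",
      "first_diff_char": 2, "expected_length": 6, "actual_length": 6}]

The first_diff_char index tells you where the computed signature starts to diverge, which is often enough to localize the broken step in a candidate signing algorithm.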
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states that signer_code runs in the current page context (implying potential side effects) and that the tool returns pass_rate and first_divergence. However, it does not explicitly warn about possible page mutations or other safety concerns. This is adequate but not fully transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a one-sentence purpose, a numbered workflow list, and an Args section. Every sentence adds value, and the format is easy to parse. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of an output schema, the description lists the return fields (total_samples, passed, failed, pass_rate, first_divergence, details). The tool's complexity (three parameters, JS execution) is fully addressed: workflow, parameter structures, and return values are all covered. The description is complete enough for an agent to invoke the tool correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description carries the full burden. It explains signer_code as JS evaluating to a function with signature (sample) => {...}, details the structure of each sample (id, input, expected), and clarifies the default behavior of compare_params. This adds significant meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Offline verify a signing function against user-provided samples' with a specific verb and resource. This clearly distinguishes it from sibling tools such as evaluate_js or hook_function by focusing on offline verification of signing code against sample comparisons.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a typical 5-step workflow that contextualizes when to use this tool (after capturing samples and writing candidate code). It does not explicitly state when not to use it, but the workflow implies it is for testing signers post-capture. The guidance is clear and useful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/WhiteNightShadow/camoufox-reverse-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.