
Bug Bounty MCP Server

by SlanyCukr

arjun_parameter_discovery

Discovers hidden HTTP parameters in web applications using Arjun to identify potential attack surfaces for security testing and vulnerability assessment.

Instructions

Execute Arjun for HTTP parameter discovery with enhanced logging.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| additional_args | No | | |
| delay | No | | |
| method | No | | GET |
| stable | No | | |
| threads | No | | |
| url | Yes | | |
| wordlist | No | | |
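
Only `url` is required; every other field falls back to its default. A minimal example payload (values are illustrative, not from the source):

```json
{
  "url": "https://example.com/search",
  "method": "GET",
  "threads": 25,
  "stable": false
}
```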

Output Schema

The output schema defines no fields (no arguments).

Implementation Reference

  • Primary MCP handler and registration for the 'arjun_parameter_discovery' tool. This function defines the tool interface, input parameters (serving as schema), and proxies execution to the backend REST API endpoint.
    @mcp.tool()
    def arjun_parameter_discovery(
        url: str,
        method: str = "GET",
        wordlist: str = "",
        delay: int = 0,
        threads: int = 25,
        stable: bool = False,
        additional_args: str = "",
    ) -> dict[str, Any]:
        """Execute Arjun for HTTP parameter discovery with enhanced logging."""
        data = {
            "url": url,
            "method": method,
            "wordlist": wordlist,
            "delay": delay,
            "threads": threads,
            "stable": stable,
            "additional_args": additional_args,
        }
    
        logger.info(f"🔍 Starting Arjun parameter discovery on {url}")
        result = api_client.safe_post("api/arjun-parameter-discovery", data)
    
        if result.get("success"):
            logger.info(f"✅ Arjun parameter discovery completed on {url}")
        else:
            logger.error("❌ Arjun parameter discovery failed")
    
        return result
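The `api_client.safe_post` helper is not shown in the source. A minimal sketch of what such a wrapper might look like — the base URL, timeout, and error shape here are assumptions, not part of the source:

```python
import json
import urllib.request


def safe_post(path, data, base_url="http://localhost:8888"):
    """Hypothetical stand-in for api_client.safe_post: POST JSON, return a dict."""
    req = urllib.request.Request(
        f"{base_url}/{path}",
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read())
    except OSError as exc:
        # Mirror the handler's expectation: always return a dict with a
        # "success" flag instead of raising, so the MCP tool degrades cleanly.
        return {"success": False, "error": str(exc)}
```

Because the handler above only checks `result.get("success")`, returning an error dict on network failure keeps the tool call from crashing the MCP session.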
  • Backend handler function 'execute_arjun' decorated with @tool decorator, responsible for extracting parameters, building and executing the arjun CLI command, and parsing results into structured findings.
    @tool(required_fields=["url"])
    def execute_arjun():
        """Execute Arjun for HTTP parameter discovery."""
        data = request.get_json()
        params = extract_arjun_params(data)
    
        logger.info(f"Executing Arjun on {params['url']}")
    
        started_at = datetime.now()
        command_parts = build_arjun_command(params)
        execution_result = execute_command(command_parts, timeout=600)
        ended_at = datetime.now()
    
        return parse_arjun_result(
            execution_result, params, command_parts, started_at, ended_at
        )
  • Helper function to extract and provide defaults for Arjun tool parameters from incoming request data (defines input schema).
    def extract_arjun_params(data):
        """Extract and validate arjun parameters from request data."""
        base_params = {
            "url": data["url"],
            "threads": data.get("threads", 50),
            "method": data.get("method", "GET"),
            "methods": data.get("methods", "GET,POST"),
            "wordlist": data.get("wordlist", ""),
            "headers": data.get("headers", ""),
            "post_data": data.get("post_data", ""),
            "delay": data.get("delay", 0),
            "timeout": data.get("timeout", 10),
            "stable": data.get("stable", False),
            "include_status": data.get("include_status", ""),
            "exclude_status": data.get("exclude_status", ""),
            "output_file": data.get("output_file", ""),
            "additional_args": data.get("additional_args", ""),
        }
    
        return base_params
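As a worked example, a request body carrying only the required `url` resolves to the backend defaults below (a sketch that inlines the extractor's defaulting logic; note the backend default for `threads` is 50, while the MCP-side signature defaults it to 25):

```python
# Minimal request body: only the required field is supplied.
data = {"url": "https://example.com/search"}

# The same .get(key, default) pattern extract_arjun_params uses.
params = {
    "url": data["url"],
    "threads": data.get("threads", 50),      # backend default: 50
    "method": data.get("method", "GET"),
    "methods": data.get("methods", "GET,POST"),
    "delay": data.get("delay", 0),
    "timeout": data.get("timeout", 10),
    "stable": data.get("stable", False),
}
```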
  • Helper function to construct the arjun CLI argument list with conditional flags.
    def build_arjun_command(params):
        """Build the arjun command from parameters.

        Returns an argument list for execute_command, which runs the command
        without a shell, so no shell quoting is applied -- calling shlex.quote
        on individual list elements would make the quote characters part of
        the argument itself.
        """
        cmd_parts = ["arjun", "-u", params["url"]]

        threads = params.get("threads", 10)
        cmd_parts.extend(["-t", str(threads)])

        methods = params.get("methods", "")
        if methods:
            method_list = [m.strip().upper() for m in methods.split(",")]
            cmd_parts.extend(["-m", ",".join(method_list)])

        wordlist = params.get("wordlist", "")
        if wordlist:
            cmd_parts.extend(["-w", wordlist])

        headers = params.get("headers", "")
        if headers:
            cmd_parts.extend(["--headers", headers])

        post_data = params.get("post_data", "")
        if post_data:
            cmd_parts.extend(["--data", post_data])

        delay = params.get("delay", 0)
        if delay > 0:
            cmd_parts.extend(["-d", str(delay)])

        timeout = params.get("timeout", 10)
        cmd_parts.extend(["--timeout", str(timeout)])

        if params.get("stable", False):
            cmd_parts.append("--stable")

        include_status = params.get("include_status", "")
        if include_status:
            cmd_parts.extend(["--include-status", include_status])

        exclude_status = params.get("exclude_status", "")
        if exclude_status:
            cmd_parts.extend(["--exclude-status", exclude_status])

        cmd_parts.append("--json")

        output_file = params.get("output_file", "")
        if output_file:
            cmd_parts.extend(["-o", output_file])

        if params["additional_args"]:
            cmd_parts.extend(params["additional_args"].split())

        return cmd_parts
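Tracing the builder by hand for the backend defaults gives the command below (a sketch; the URL is illustrative, and for a simple URL like this `shlex.quote` would leave the string unchanged, so quoting does not appear in the result either way):

```python
# Hand-built equivalent of build_arjun_command's output for a minimal
# parameter set, using the defaults from extract_arjun_params.
params = {
    "url": "https://example.com/search",
    "threads": 50,
    "methods": "GET,POST",
    "delay": 0,        # 0 -> the -d flag is omitted
    "timeout": 10,
    "stable": False,   # False -> --stable is omitted
    "additional_args": "",
}

cmd_parts = ["arjun", "-u", params["url"]]
cmd_parts.extend(["-t", str(params["threads"])])
cmd_parts.extend(["-m", params["methods"]])
cmd_parts.extend(["--timeout", str(params["timeout"])])
cmd_parts.append("--json")
```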
  • Helper function to parse Arjun's JSON output into standardized finding structures with deduplication and tagging.
    def parse_arjun_json_output(stdout: str) -> list[dict[str, Any]]:
        """Parse arjun JSON output into structured findings."""
        findings = []
    
        if not stdout.strip():
            return findings
    
        try:
            data = json.loads(stdout)
    
            if isinstance(data, dict):
                url = data.get("url", "")
                parameters = data.get("parameters", [])
    
                if url and parameters:
                    parsed_url = urlparse(url)
                    host = parsed_url.netloc
    
                    for param_data in parameters:
                        if isinstance(param_data, dict):
                            param_name = param_data.get("name", "")
                            method = param_data.get("method", "GET")
    
                            if param_name:
                                tags = ["parameter", "discovery", method.lower()]
                                if parsed_url.scheme == "https":
                                    tags.append("https")
                                else:
                                    tags.append("http")
    
                                finding = create_finding(
                                    finding_type="param",
                                    target=host,
                                    evidence={
                                        "parameter_name": param_name,
                                        "method": method,
                                        "url": url,
                                        "path": parsed_url.path,
                                        "scheme": parsed_url.scheme,
                                        "port": parsed_url.port,
                                        "discovered_by": "arjun",
                                    },
                                    severity="info",
                                    confidence="medium",
                                    tags=tags,
                                    raw_ref=json.dumps(param_data),
                                )
                                findings.append(finding)
                        elif isinstance(param_data, str):
                            tags = ["parameter", "discovery"]
                            if parsed_url.scheme == "https":
                                tags.append("https")
                            else:
                                tags.append("http")
    
                            finding = create_finding(
                                finding_type="param",
                                target=host,
                                evidence={
                                    "parameter_name": param_data,
                                    "url": url,
                                    "path": parsed_url.path,
                                    "scheme": parsed_url.scheme,
                                    "port": parsed_url.port,
                                    "discovered_by": "arjun",
                                },
                                severity="info",
                                confidence="medium",
                                tags=tags,
                                raw_ref=param_data,
                            )
                            findings.append(finding)
    
            elif isinstance(data, list):
                for item in data:
                    if isinstance(item, str):
                        finding = create_finding(
                            finding_type="param",
                            target="unknown",
                            evidence={"parameter_name": item, "discovered_by": "arjun"},
                            severity="info",
                            confidence="low",
                            tags=["parameter", "discovery"],
                            raw_ref=item,
                        )
                        findings.append(finding)
    
        except json.JSONDecodeError:
            logger.warning("Failed to parse arjun JSON output")
            return findings
    
        if len(findings) > 100:
            findings = findings[:100]
    
        return findings
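The dict branch of the parser expects stdout shaped as an object with `url` and `parameters` keys, where each parameter is either an object with `name`/`method` or a bare string. A minimal sketch of that shape and how the names are pulled out (sample values are illustrative):

```python
import json

# Sample stdout in the shape parse_arjun_json_output's dict branch handles.
stdout = json.dumps({
    "url": "https://example.com/search",
    "parameters": [
        {"name": "q", "method": "GET"},
        "debug",  # bare strings are also accepted by the parser
    ],
})

data = json.loads(stdout)
names = [
    p["name"] if isinstance(p, dict) else p
    for p in data["parameters"]
]
```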
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'enhanced logging,' which adds some context beyond basic execution, but fails to cover critical aspects like required permissions, rate limits, output format (despite having an output schema), or potential side effects (e.g., network impact). For a tool with 7 parameters and no annotations, this is insufficient, warranting a low score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence: 'Execute Arjun for HTTP parameter discovery with enhanced logging.' It is front-loaded with the core purpose and includes an additional feature without unnecessary details. Every word contributes to understanding the tool's function, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, 1 required), no annotations, and an output schema (which reduces the need to describe return values), the description is minimally complete. It states the tool's purpose and a feature ('enhanced logging'), but lacks details on parameter usage, behavioral traits, or differentiation from siblings. This provides a basic foundation but leaves significant gaps for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning none of the 7 parameters have descriptions in the schema. The tool description does not mention any parameters, their purposes, or how they affect the discovery process. This lack of semantic information leaves the agent to infer usage from parameter titles alone, which is inadequate for effective tool invocation, resulting in a score of 2.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute Arjun for HTTP parameter discovery with enhanced logging.' It specifies the verb ('Execute'), the resource ('Arjun'), and the function ('HTTP parameter discovery'), with an additional feature ('enhanced logging'). However, it doesn't explicitly differentiate from sibling tools like 'x8_parameter_discovery' or 'paramspider_mining', which appear to serve similar parameter discovery purposes, preventing a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context about specific scenarios, prerequisites, or comparisons with sibling tools such as 'x8_parameter_discovery' or 'paramspider_mining', which are listed and likely related. This omission leaves the agent without clear usage instructions, scoring only 2 for minimal implied utility.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
