adv_mark_false_positive

Mark security findings as false positives to reduce alert noise and improve vulnerability management accuracy in the Adversary MCP Server.

Instructions

Mark a finding as a false positive

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| finding_uuid | Yes | UUID of the finding to mark as false positive | |
| reason | No | Reason for marking as false positive | `""` |
| marked_by | No | Who marked it as false positive | `user` |
| adversary_file_path | No | Path to `.adversary.json` file | `.adversary.json` |

Implementation Reference

  • The primary handler function for the 'adv_mark_false_positive' tool. Validates input arguments, initializes the FalsePositiveJsonRepository and FalsePositiveService, calls the service to mark the finding, and returns a JSON response indicating success.
    async def _handle_mark_false_positive(
        self, name: str, arguments: dict
    ) -> list[types.TextContent]:
        """Handle mark false positive requests."""
        try:
            # Comprehensive input validation
            validated_args = self._input_validator.validate_mcp_arguments(
                arguments, tool_name="adv_mark_false_positive"
            )
    
            finding_uuid = validated_args.get("finding_uuid", "")
            reason = validated_args.get("reason", "")
            marked_by = validated_args.get("marked_by", "user")
            adversary_file_path = validated_args.get(
                "adversary_file_path", ".adversary.json"
            )
    
            if not finding_uuid:
                raise CleanAdversaryToolError("finding_uuid parameter is required")
    
            # Initialize false positive repository and service
            fp_repository = FalsePositiveJsonRepository(adversary_file_path)
            fp_service = FalsePositiveService(fp_repository)
    
            # Mark as false positive
            success = await fp_service.mark_as_false_positive(
                uuid=finding_uuid, reason=reason, marked_by=marked_by
            )
    
            result = {
                "success": success,
                "finding_uuid": finding_uuid,
                "message": (
                    f"Finding {finding_uuid} marked as false positive"
                    if success
                    else f"Failed to mark finding {finding_uuid} as false positive"
                ),
            }
    
            return [types.TextContent(type="text", text=json.dumps(result, indent=2))]
    
        except Exception as e:
            logger.error(f"Mark false positive failed: {e}")
            raise CleanAdversaryToolError(f"Mark false positive failed: {str(e)}")
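For orientation, the handler's success payload can be reproduced standalone. The UUID and reason below are made-up example values, and the `result` dict mirrors the one built in `_handle_mark_false_positive` above:

```python
import json

# Example arguments a client might send; only finding_uuid is required.
# Both values here are illustrative, not real findings.
arguments = {
    "finding_uuid": "3f2a9c1e-8b47-4d2a-9e11-6c0f5a7d2b34",
    "reason": "Test fixture, not reachable in production",
}

# Shape of the JSON the handler serializes on success.
result = {
    "success": True,
    "finding_uuid": arguments["finding_uuid"],
    "message": f"Finding {arguments['finding_uuid']} marked as false positive",
}

print(json.dumps(result, indent=2))
```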
  • The input schema definition for the 'adv_mark_false_positive' tool, specifying required finding_uuid and optional reason, marked_by, adversary_file_path parameters.
    Tool(
        name="adv_mark_false_positive",
        description="Mark a finding as a false positive",
        inputSchema={
            "type": "object",
            "properties": {
                "finding_uuid": {
                    "type": "string",
                    "description": "UUID of the finding to mark as false positive",
                },
                "reason": {
                    "type": "string",
                    "description": "Reason for marking as false positive",
                    "default": "",
                },
                "marked_by": {
                    "type": "string",
                    "description": "Who marked it as false positive",
                    "default": "user",
                },
                "adversary_file_path": {
                    "type": "string",
                    "description": "Path to .adversary.json file",
                    "default": ".adversary.json",
                },
            },
            "required": ["finding_uuid"],
        },
    ),
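The schema's required-field and default behavior can be illustrated with a small standalone check. This is only a sketch of what the schema implies; the server's real validation goes through `_input_validator.validate_mcp_arguments`:

```python
# Defaults copied from the inputSchema above.
DEFAULTS = {
    "reason": "",
    "marked_by": "user",
    "adversary_file_path": ".adversary.json",
}
REQUIRED = ("finding_uuid",)

def apply_schema(arguments: dict) -> dict:
    """Check required keys and fill defaults, mimicking the schema above."""
    for key in REQUIRED:
        if not arguments.get(key):
            raise ValueError(f"{key} parameter is required")
    return {**DEFAULTS, **arguments}

args = apply_schema({"finding_uuid": "abc-123"})
print(args["marked_by"], args["adversary_file_path"])
```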
  • Registration of the tool in the central dispatcher function within _register_tools method, routing calls to the handler.
    elif name == "adv_mark_false_positive":
        return await self._handle_mark_false_positive(name, arguments)
  • The FalsePositiveService.mark_as_false_positive method, which contains the core business logic for creating FalsePositiveInfo and persisting it via the repository.
    async def mark_as_false_positive(
        self,
        uuid: str,
        reason: str = "",
        marked_by: str = "user",
    ) -> bool:
        """
        Mark a security finding as a false positive.
    
        Args:
            uuid: UUID of the finding to mark
            reason: Reason for marking as false positive
            marked_by: Who marked it as false positive
    
        Returns:
            True if marked successfully, False otherwise
        """
        try:
            # Validate inputs
            if not uuid.strip():
                raise ValueError("UUID cannot be empty")
    
            # Create false positive info
            fp_info = FalsePositiveInfo.create_false_positive(
                uuid=uuid,
                reason=reason,
                marked_by=marked_by,
            )
    
            # Save to repository
            success = await self.repository.save_false_positive_info(fp_info)
    
            if success:
                self.logger.info(
                    f"Marked finding {uuid} as false positive (reason: '{reason}', marked_by: {marked_by})"
                )
            else:
                self.logger.error(f"Failed to mark finding {uuid} as false positive")
    
            return success
    
        except Exception as e:
            self.logger.error(f"Error marking finding {uuid} as false positive: {e}")
            return False
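The service's flow can be exercised in isolation with stand-in classes. `FakeInfo` and `InMemoryRepository` below are stubs for `FalsePositiveInfo` and `FalsePositiveJsonRepository` (which persists to `.adversary.json`), and `mark` is a condensed version of `mark_as_false_positive`; none of these names exist in the server itself:

```python
import asyncio

class FakeInfo:
    """Stub for FalsePositiveInfo: just carries the marking fields."""
    def __init__(self, uuid, reason, marked_by):
        self.uuid, self.reason, self.marked_by = uuid, reason, marked_by

class InMemoryRepository:
    """Stub repository: stores records in a dict instead of .adversary.json."""
    def __init__(self):
        self.saved = {}

    async def save_false_positive_info(self, fp_info):
        self.saved[fp_info.uuid] = fp_info
        return True

async def mark(repo, uuid, reason="", marked_by="user"):
    # Condensed FalsePositiveService.mark_as_false_positive: validate,
    # build the info object, persist via the repository.
    if not uuid.strip():
        return False
    return await repo.save_false_positive_info(FakeInfo(uuid, reason, marked_by))

repo = InMemoryRepository()
ok = asyncio.run(mark(repo, "abc-123", reason="scanner misread a test fixture"))
print(ok, "abc-123" in repo.saved)  # True True
```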
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Mark') but doesn't explain what this entails—whether it's a write operation, requires permissions, affects the finding's status permanently, or has side effects. This is a significant gap for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
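One way to close this gap, assuming the server targets an MCP revision with tool annotation support, is to declare behavioral hints alongside the schema. The field names below follow the MCP specification's ToolAnnotations object; the hint values are this reviewer's reading of the handler code, not the server's actual declarations:

```python
# Sketch of behavioral annotations for adv_mark_false_positive.
annotations = {
    "readOnlyHint": False,     # the tool writes to .adversary.json
    "destructiveHint": False,  # reversible via adv_unmark_false_positive
    "idempotentHint": True,    # re-marking the same UUID likely just overwrites
    "openWorldHint": False,    # operates only on the local findings file
}
print(sorted(annotations))
```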

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste—it directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a mutation tool (marking findings) with no annotations and no output schema, the description is incomplete. It doesn't cover behavioral aspects like permissions, side effects, or what happens after marking, nor does it explain the return value. This leaves gaps that could hinder correct tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all four parameters thoroughly. The description adds nothing on top, such as parameter interactions or usage examples. A baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Mark') and resource ('a finding') with the specific action ('as a false positive'), making the purpose unambiguous. However, it doesn't explicitly differentiate from its sibling 'adv_unmark_false_positive' beyond the opposite action, missing a direct comparison that would elevate it to a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'adv_unmark_false_positive' or other scanning tools. It lacks context about prerequisites (e.g., after a scan) or exclusions, leaving the agent to infer usage based on the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
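A description that addressed the behavior and usage gaps above might read as follows. This is a hypothetical rewrite for illustration, not the server's actual text:

```python
# Hypothetical expanded tool description; the server's real description is
# the one-liner shown earlier ("Mark a finding as a false positive").
description = (
    "Mark a finding as a false positive. Writes the marking (with reason and "
    "author) to the .adversary.json file, so the finding is suppressed in "
    "later results. Use after reviewing scan output; to reverse the decision, "
    "use adv_unmark_false_positive."
)
print(description)
```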
