
Megaraptor MCP

by wagonbomb

get_hunt_results

Retrieve digital forensics data from Velociraptor hunts to analyze endpoint activity and investigate security incidents.

Instructions

Get results from a Velociraptor hunt.

Args:
  hunt_id: The hunt ID (e.g., 'H.1234567890')
  artifact: Optional specific artifact to get results for
  limit: Maximum number of result rows to return (default 1000)

Returns: Hunt results data from all clients.
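
As a concrete sketch of what an agent would send (the argument keys come from the schema below; the artifact name is illustrative, not a required value):

```python
# Arguments for the get_hunt_results tool, per the schema below.
# Only hunt_id is required; the artifact name here is hypothetical.
arguments = {
    "hunt_id": "H.1234567890",            # must start with 'H.'
    "artifact": "Windows.System.Pslist",  # optional artifact filter (illustrative)
    "limit": 500,                         # caps returned rows (default 1000)
}

# Client-side sanity checks mirroring the documented constraints.
assert arguments["hunt_id"].startswith("H.")
assert arguments.get("limit", 1000) > 0
```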

Input Schema

| Name     | Required | Description | Default |
|----------|----------|-------------|---------|
| hunt_id  | Yes      |             |         |
| artifact | No       |             |         |
| limit    | No       |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |
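
The single result field wraps the JSON text produced by the implementation below; a minimal sketch of that payload shape (the rows shown are hypothetical):

```python
import json

# Shape of the JSON text returned in the "result" field, mirroring the
# implementation's json.dumps(...) call. Rows are hypothetical examples.
payload = {
    "hunt_id": "H.1234567890",
    "artifact": None,        # None when no artifact filter was supplied
    "result_count": 2,
    "results": [{"Pid": 4}, {"Pid": 8}],
}
text = json.dumps(payload, indent=2, default=str)
parsed = json.loads(text)
```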

Implementation Reference

  • The implementation of the get_hunt_results tool, which queries hunt results from Velociraptor and formats the output. The imports below are implied by the snippet; the helpers validate_hunt_id, validate_limit, get_client, and map_grpc_error are defined elsewhere in the server module.
    import json
    from typing import Optional

    import grpc
    from mcp.types import TextContent

    async def get_hunt_results(
        hunt_id: str,
        artifact: Optional[str] = None,
        limit: int = 1000,
    ) -> list[TextContent]:
        """Get results from a Velociraptor hunt.
    
        Args:
            hunt_id: The hunt ID (e.g., 'H.1234567890')
            artifact: Optional specific artifact to get results for
            limit: Maximum number of result rows to return (default 1000)
    
        Returns:
            Hunt results data from all clients.
        """
        try:
            # Input validation
            hunt_id = validate_hunt_id(hunt_id)
            limit = validate_limit(limit)
            client = get_client()
    
            # Build the VQL query
            if artifact:
                vql = f"SELECT * FROM hunt_results(hunt_id='{hunt_id}', artifact='{artifact}') LIMIT {limit}"
            else:
                vql = f"SELECT * FROM hunt_results(hunt_id='{hunt_id}') LIMIT {limit}"
    
            results = client.query(vql)
    
            return [TextContent(
                type="text",
                text=json.dumps({
                    "hunt_id": hunt_id,
                    "artifact": artifact,
                    "result_count": len(results),
                    "results": results[:limit],
                }, indent=2, default=str)
            )]
    
        except grpc.RpcError as e:
            error_response = map_grpc_error(e, f"hunt results for {hunt_id}")
            # Check if it's a not-found error
            if "NOT_FOUND" in error_response.get("grpc_status", ""):
                error_response["hint"] = f"Hunt {hunt_id} may not exist. Use list_hunts() to see available hunts."
            return [TextContent(
                type="text",
                text=json.dumps(error_response)
            )]
    
        except ValueError as e:
            # Validation errors
            return [TextContent(
                type="text",
                text=json.dumps({
                    "error": str(e),
                    "hint": "Provide a valid hunt ID starting with 'H.'"
                })
            )]
    
        except Exception:
            # Generic errors - don't expose internals
            return [TextContent(
                type="text",
                text=json.dumps({
                    "error": "Failed to get hunt results",
                    "hint": "Check hunt ID and try again"
                })
            )]
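
The snippet references validation helpers that are not shown on this page. A hypothetical sketch of what they might look like (the regex and clamp values are assumptions, not the server's actual code):

```python
import re

def validate_hunt_id(hunt_id: str) -> str:
    """Accept only IDs like 'H.1234567890'.

    Strict validation also keeps the value safe to interpolate
    into the VQL f-string above.
    """
    if not re.fullmatch(r"H\.[A-Za-z0-9]+", hunt_id):
        raise ValueError(f"Invalid hunt ID: {hunt_id!r}")
    return hunt_id

def validate_limit(limit: int, max_limit: int = 10_000) -> int:
    """Reject non-positive limits and clamp to an upper bound."""
    if limit < 1:
        raise ValueError("limit must be a positive integer")
    return min(limit, max_limit)
```

Raising ValueError from these helpers is what routes bad input into the except ValueError branch above.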
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying results come 'from all clients', indicating scope. However, it lacks disclosure on safety (read-only vs destructive), permissions required, or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses a clean docstring format with distinct Args and Returns sections. Every sentence earns its place; the parameter descriptions are concise yet complete, and the initial purpose statement is immediately clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Because the tool declares an output schema, the brief return description is sufficient. Although the input schema itself carries no parameter descriptions, the docstring documents all three parameters. Minor gap: it does not explicitly state that this is a read-only operation, given the lack of safety annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, but the description excellently compensates by documenting all three parameters: hunt_id includes a format example ('H.1234567890'), artifact explains it filters to a specific artifact, and limit clarifies it controls result rows with the default value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'Get' and resource 'results from a Velociraptor hunt', clearly distinguishing it from siblings like create_hunt, list_hunts, and get_flow_results by specifying this retrieves hunt (not flow) results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_flow_results, nor does it mention prerequisites such as needing to create a hunt first. It only states what the tool does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
