JoeSandboxMCP

Official, by joesecurity

get_ai_summaries

Retrieve AI-generated analysis summaries for specific malware analysis runs from Joe Sandbox Cloud to understand threat behavior across different system environments.

Instructions

Retrieve the AI summaries for a specific analysis run, either from cache or by downloading the report.

Joe Sandbox analyses may run on multiple system configurations (e.g., different Windows/Linux variants).
Each run is indexed in the `runs` array of the analysis metadata. This function retrieves the report
corresponding to a specific run.

Args:
    webid: The submission ID of the analysis (unique identifier).
    run (optional, default = 0): The index of the analysis run to retrieve the report for.
                                 Use 0 for the first run, 1 for the second, etc.
                                 If not specified, defaults to 0 (the first run).

Returns:
    A dictionary containing AI reasoning summaries with fields:
    - webid: The analysis ID
    - run: The run index
    - reasonings: List of AI reasoning entries
    - count: Number of reasoning entries found

Notes:
    - Reports are cached in memory by key: "{webid}-{run}".
    - Use `run` to distinguish between different environments used during analysis.
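As an illustration, a successful result might take the following shape. All values here are hypothetical; any keys beyond `id` and `text` on a reasoning entry come from attributes on the corresponding `reasoning` XML element and so vary between reports.

```python
# Hypothetical example of a successful get_ai_summaries result.
# The "verdict" key is illustrative: every attribute on a <reasoning>
# element in the XML report is copied onto the corresponding entry.
sample_result = {
    "webid": "1234567",
    "run": 0,
    "reasonings": [
        {
            "id": 1,
            "text": "The sample drops an executable and establishes persistence.",
            "verdict": "malicious",
        },
    ],
    "count": 1,
}

# "count" always mirrors the number of extracted reasoning entries.
assert sample_result["count"] == len(sample_result["reasonings"])
```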

Input Schema

Name    Required  Description                                             Default
webid   Yes       The submission ID of the analysis (unique identifier).  -
run     No        Index of the analysis run to retrieve the report for.   0
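For concreteness, here is a minimal sketch of a JSON-RPC `tools/call` request an MCP client could send to invoke this tool. The envelope follows the Model Context Protocol; the argument names match the input schema, and the `webid` value is hypothetical.

```python
import json

# Sketch of an MCP "tools/call" request invoking get_ai_summaries.
# Per the input schema, webid is required and run is optional (default 0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_ai_summaries",
        "arguments": {"webid": "1234567", "run": 0},  # webid value is hypothetical
    },
}
print(json.dumps(request, indent=2))
```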

Implementation Reference

  • The handler function for the 'get_ai_summaries' MCP tool. It fetches the XML report for the specified analysis (webid and run), parses the LLM reasonings section, extracts each reasoning's text and attributes, and returns a structured list of AI summaries. Includes caching via get_or_fetch_report and error handling.
    @mcp.tool()
    async def get_ai_summaries(webid: str, run: int=0) -> Dict[str, Any]:
        """
        Retrieve the AI summaries for a specific analysis run, either from cache or by downloading it.
    
        Joe Sandbox analyses may run on multiple system configurations (e.g., different Windows/Linux variants).
        Each run is indexed in the `runs` array of the analysis metadata. This function retrieves the report
        corresponding to a specific run.
    
        Args:
            webid: The submission ID of the analysis (unique identifier).
            run (optional, default = 0): The index of the analysis run to retrieve the report for.
                                         Use 0 for the first run, 1 for the second, etc.
                                         If not specified, defaults to 0 (the first run).
    
        Returns:
            A dictionary containing AI reasoning summaries with fields:
            - webid: The analysis ID
            - run: The run index
            - reasonings: List of AI reasoning entries
            - count: Number of reasoning entries found
    
        Notes:
            - Reports are cached in memory by key: "{webid}-{run}".
            - Use `run` to distinguish between different environments used during analysis.
        """
    
        try:
            root = await get_or_fetch_report(webid, run)
            if root is None:
                return {"error": f"Could not retrieve report for submission ID '{webid}', run {run}"}
            
            # Find all reasoning elements
            reasoning_elements = root.findall('./llm/reasonings/reasoning')
            
            if not reasoning_elements:
                return {
                    "warning": "No AI reasoning summaries found in the report",
                    "webid": webid,
                    "run": run
                }
            
            # Extract the reasonings with their attributes
            reasonings = []
            for i, reasoning in enumerate(reasoning_elements):
                # Find the text element within this reasoning
                text_element = reasoning.find('./text')
                if text_element is not None and text_element.text:
                    reasoning_data = {
                        "id": i + 1,
                        "text": text_element.text
                    }
                    
                    # Add any attributes from the reasoning element
                    for key, value in reasoning.attrib.items():
                        reasoning_data[key] = value
                    
                    reasonings.append(reasoning_data)
            
            return {
                "webid": webid,
                "run": run,
                "reasonings": reasonings,
                "count": len(reasonings)
            }
            
        except Exception as e:
            return {
                "error": f"Failed to process AI summaries for submission ID '{webid}'. "
                         f"Reason: {str(e)}"
            }
  • jbxmcp/tools.py:2-17 (registration)
    The __all__ export list includes 'get_ai_summaries', indicating it is one of the public tools exported from this module.
    __all__ = [
        'submit_analysis_job',
        'search_analysis',
        'get_analysis_info',
        'get_ai_summaries',
        'get_dropped_info',
        'get_domain_info',
        'get_ip_info',
        'get_url_info',
        'get_signature_info',
        'get_unpacked_files',
        'get_pcap_file',
        'get_list_of_recent_analyses',
        'get_process_info',
        'get_memory_dumps'
    ]
  • jbxmcp/server.py:19-19 (registration)
    server.py imports the tools module, which registers all @mcp.tool() decorated functions including get_ai_summaries upon import.
    import jbxmcp.tools as tools

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/joesecurity/joesandboxMCP'
