
Google Threat Intelligence MCP Server

by googleSandy

get_entities_related_to_a_hunting_ruleset

Retrieve files or other entities associated with a specific hunting ruleset to analyze matches and investigate potential threats.

Instructions

Retrieve entities related to the given Hunting Ruleset.

The following table shows a summary of available relationships for Hunting ruleset objects.

| Relationship | Return object type |
| :------------------------- | :-------------------------------------- |
| hunting_notification_files | Files that matched the ruleset filters |
Args:

- `ruleset_id` (required): Hunting ruleset identifier.
- `relationship_name` (required): Relationship name.
- `limit`: Limit the number of entities to retrieve. 10 by default.

Returns: List of objects related to the Hunting ruleset.
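For concreteness, here is a hypothetical tool-call payload an MCP client might send. The ruleset ID value is invented for illustration; the field names come from the input schema.

```python
# Hypothetical arguments for a call to get_entities_related_to_a_hunting_ruleset.
# The ruleset ID is a made-up placeholder; field names match the input schema.
args = {
    "ruleset_id": "1234567890123456789",
    "relationship_name": "hunting_notification_files",
    "limit": 5,
}
print(sorted(args))
```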

Input Schema

| Name | Required | Description | Default |
| :---------------- | :--- | :---------- | :------ |
| ruleset_id | Yes | | |
| relationship_name | Yes | | |
| limit | No | | |
| api_key | No | | |

Output Schema

| Name | Required | Description | Default |
| :----- | :--- | :---------- | :------ |
| result | Yes | | |

Implementation Reference

  • Main handler function that retrieves entities related to a hunting ruleset. It validates the relationship_name parameter, uses the vt_client context manager to get a VirusTotal API client, calls fetch_object_relationships to get the data, and returns sanitized results.
    async def get_entities_related_to_a_hunting_ruleset(
        ruleset_id: str, relationship_name: str, ctx: Context, limit: int = 10, api_key: str | None = None
    ) -> list[dict[str, typing.Any]] | dict[str, str]:
      """Retrieve entities related to the given Hunting Ruleset.
    
        The following table shows a summary of available relationships for Hunting ruleset objects.
    
        | Relationship         | Return object type                                |
        | :------------------- | :------------------------------------------------ |
    | hunting_notification_files | Files that matched the ruleset filters      |
    
        Args:
          ruleset_id (required): Hunting ruleset identifier.
          relationship_name (required): Relationship name.
          limit: Limit the number of entities to retrieve. 10 by default.
        Returns:
          List of objects related to the Hunting ruleset.
      """
  if relationship_name not in HUNTING_RULESET_RELATIONSHIPS:
          return {
              "error": f"Relationship {relationship_name} does not exist. "
              f"Available relationships are: {','.join(HUNTING_RULESET_RELATIONSHIPS)}"
          }
    
      async with vt_client(ctx, api_key=api_key) as client:
        res = await utils.fetch_object_relationships(
            client,
            "intelligence/hunting_rulesets",
            ruleset_id,
            [relationship_name],
            limit=limit)
      return utils.sanitize_response(res.get(relationship_name, []))
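As a sketch of the validation branch above, this is the error payload shape an agent receives for an unknown relationship name. The constant is repeated here so the snippet runs on its own.

```python
# Standalone reproduction of the handler's error branch for an unknown
# relationship name; the constant is repeated so the snippet is self-contained.
HUNTING_RULESET_RELATIONSHIPS = ["hunting_notification_files"]

relationship_name = "contributors"  # not a valid hunting-ruleset relationship
error = {
    "error": f"Relationship {relationship_name} does not exist. "
    f"Available relationships are: {','.join(HUNTING_RULESET_RELATIONSHIPS)}"
}
print(error["error"])
```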
  • Schema constant defining valid relationship names for hunting rulesets. Used for validation in the handler.
    HUNTING_RULESET_RELATIONSHIPS = [
        "hunting_notification_files",
    ]
  • Helper function that fetches relationship descriptors from a VirusTotal object. Used by the handler to get related entities for a hunting ruleset.
    async def fetch_object_relationships(
        vt_client: vt.Client,
        resource_collection_type: str,
        resource_id: str,
        relationships: typing.List[str],
        params: dict[str, typing.Any] | None = None,
        descriptors_only: bool = True,
        limit: int = 10):
      """Fetches the given relationships descriptors from the given object."""
      rel_futures = {}
      # If true, returns descriptors instead of full objects.
      descriptors = '/relationships' if descriptors_only else ''
      async with asyncio.TaskGroup() as tg:
        for rel_name in relationships:
          rel_futures[rel_name] = tg.create_task(
              consume_vt_iterator(
                  vt_client,
                  f"/{resource_collection_type}/{resource_id}"
                  f"{descriptors}/{rel_name}", params=params, limit=limit))
    
      data = {}
      for name, items in rel_futures.items():
        data[name] = []
        for obj in items.result():
          obj_dict = obj.to_dict()
          if 'aggregations' in obj_dict.get('attributes', {}):
            del obj_dict['attributes']['aggregations']
          data[name].append(obj_dict)
    
      return data
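The request path that `fetch_object_relationships` builds can be sketched in isolation (a standalone illustration, assuming the VirusTotal v3 convention where descriptor listings live under a `/relationships` segment while full objects are fetched without it):

```python
# Standalone sketch of the path construction inside fetch_object_relationships.
# Not the shipped helper; it only shows how descriptors_only toggles the
# /relationships segment under the VirusTotal v3 URL convention.
def relationship_path(collection: str, resource_id: str,
                      rel_name: str, descriptors_only: bool = True) -> str:
    segment = "/relationships" if descriptors_only else ""
    return f"/{collection}/{resource_id}{segment}/{rel_name}"

print(relationship_path("intelligence/hunting_rulesets", "ruleset-123",
                        "hunting_notification_files"))
```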
  • Helper function that recursively removes empty dictionaries and lists from API responses. Used by the handler to sanitize the output before returning.
    def sanitize_response(data: typing.Any) -> typing.Any:
      """Removes empty dictionaries and lists recursively from a response."""
      if isinstance(data, dict):
        sanitized_dict = {}
        for key, value in data.items():
          sanitized_value = sanitize_response(value)
          if sanitized_value is not None:
            sanitized_dict[key] = sanitized_value
        return sanitized_dict
      elif isinstance(data, list):
        sanitized_list = []
        for item in data:
          sanitized_item = sanitize_response(item)
          if sanitized_item is not None:
            sanitized_list.append(sanitized_item)
        return sanitized_list
      elif isinstance(data, str):
        return data if data else None
      else:
        return data
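A quick standalone check of the helper's behavior (function body reproduced so the snippet runs alone). Note that, as written, it prunes empty strings and `None` values but lets empty containers through:

```python
import typing

# sanitize_response reproduced from the implementation reference above so
# this snippet is self-contained; behavior is unchanged.
def sanitize_response(data: typing.Any) -> typing.Any:
    """Removes empty values recursively from a response."""
    if isinstance(data, dict):
        sanitized_dict = {}
        for key, value in data.items():
            sanitized_value = sanitize_response(value)
            if sanitized_value is not None:
                sanitized_dict[key] = sanitized_value
        return sanitized_dict
    elif isinstance(data, list):
        return [s for s in (sanitize_response(i) for i in data)
                if s is not None]
    elif isinstance(data, str):
        return data if data else None
    return data

# Empty string and its key are dropped; the empty dict survives.
result = sanitize_response({"name": "r1", "empty": "", "tags": ["", "apt"], "meta": {}})
print(result)
```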
  • The @server.tool() decorator registers this function as an MCP tool. The server instance is imported from gti_mcp.server and all tools are automatically loaded via the wildcard import in gti_mcp/tools/__init__.py.
    @server.tool()
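The registration mechanism can be illustrated with a minimal decorator registry. This is a sketch of the pattern only, not the mcp library's actual implementation; the registry name and decorator are invented for the example.

```python
from typing import Callable

# Minimal stand-in for the @server.tool() registration pattern: the decorator
# records each handler in a registry by name, the way an MCP server exposes
# its tools. Illustrative only; not the mcp library's real API.
TOOLS: dict[str, Callable] = {}

def tool() -> Callable[[Callable], Callable]:
    def register(fn: Callable) -> Callable:
        TOOLS[fn.__name__] = fn
        return fn
    return register

@tool()
def get_entities_related_to_a_hunting_ruleset(ruleset_id: str) -> dict:
    return {"ruleset_id": ruleset_id}

print(sorted(TOOLS))
```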
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It mentions retrieving entities and includes a table with one relationship example, but fails to describe critical behaviors such as authentication needs (though 'api_key' is in the schema), rate limits, error handling, or what 'List of objects' entails in the return. This leaves significant gaps for a tool with multiple parameters and no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear opening sentence, a table for relationships, and labeled sections for Args and Returns. It's appropriately sized without unnecessary fluff, though the table could be more comprehensive. Every sentence adds value, making it efficient and easy to scan.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, 0% schema description coverage, no annotations, but an output schema exists, the description is moderately complete. It covers basic purpose and some parameter context, but lacks behavioral details and full parameter explanations. The output schema handles return values, so that's not a gap, but overall, it's adequate with clear room for improvement in transparency and semantics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds some meaning by explaining 'ruleset_id' as 'Hunting ruleset identifier' and 'relationship_name' with an example table, and notes 'limit' defaults to 10. However, it doesn't cover 'api_key' or provide details on relationship options beyond one example, leaving parameters partially undocumented. This meets the baseline for moderate compensation but doesn't fully address the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve entities related to the given Hunting Ruleset.' It specifies the verb 'retrieve' and resource 'entities related to Hunting Ruleset,' making it understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_entities_related_to_a_collection' or 'get_hunting_ruleset,' which would require more specific context about what makes this tool unique for rulesets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'available relationships for Hunting ruleset objects' and listing one example relationship, suggesting it's for retrieving specific related data. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_hunting_ruleset' or other entity-related tools, and doesn't specify prerequisites or exclusions, leaving usage context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
