
sigma_rule_lookup

Read-only · Idempotent

Retrieve a Sigma detection rule by UUID, including title, status, logsource, and tags. Use to investigate SIEM alerts or explore rules from search results.

Instructions

Look up a single Sigma detection rule by UUID from the SigmaHQ corpus (~3,200 rules, refreshed daily at 02:00 UTC). Returns the full rule with title, description, status (stable/test/experimental/deprecated/unsupported), level (informational/low/medium/high/critical), logsource (product/category/service), detection logic, tags (including attack.t#### ATT&CK technique refs and cve.YYYY-#### CVE refs), author, references, and modification date. Use to fetch a known rule for context (e.g., a SIEM detection that fired) or to inspect a rule discovered via REST sigma_rule_search. When a rule tags an ATT&CK technique or CVE, the response next_calls surfaces atlas_technique_lookup / cve_lookup as natural follow-ups. Free: 30/hr, Pro: 500/hr. Returns {rule, next_calls}.
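The instructions above pin down the call contract: one required UUID in, a `{rule, next_calls}` object out. A hedged sketch of what that exchange might look like — the nesting inside `rule` and the exact `next_calls` format are assumptions drawn from the field list, not a documented schema:

```python
# Illustrative request/response for sigma_rule_lookup.
# Only the rule_id argument and the {rule, next_calls} top level are
# documented; the inner structure here is an assumption.
request = {
    "tool": "sigma_rule_lookup",
    "arguments": {"rule_id": "195e1b9d-bfc2-4ffa-ab4e-35aef69815f8"},
}

example_response = {
    "rule": {
        "title": "Example Detection Title",
        "status": "stable",        # stable/test/experimental/deprecated/unsupported
        "level": "high",           # informational/low/medium/high/critical
        "logsource": {"product": "windows", "category": "process_creation"},
        "tags": ["attack.t1059"],  # attack.t#### and cve.YYYY-#### refs
    },
    "next_calls": ["atlas_technique_lookup"],
}
```

When a returned tag carries an `attack.t####` or `cve.YYYY-####` reference, the `next_calls` list is what steers an agent toward the follow-up lookup tools the description names.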

Input Schema

rule_id (required, no default): Sigma rule UUID (RFC 4122, 36 chars, hyphenated). Example: '195e1b9d-bfc2-4ffa-ab4e-35aef69815f8'. Obtained from the REST sigma_rule_search endpoint or external SIEM correlation.
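Since the parameter constraint is explicit (RFC 4122, 36 chars, hyphenated), a client can validate the UUID before spending a rate-limited call. A minimal sketch using Python's standard `uuid` module; the length check is needed because `uuid.UUID` also accepts the 32-char unhyphenated form:

```python
import uuid

def is_valid_sigma_rule_id(rule_id: str) -> bool:
    """True if rule_id is a hyphenated RFC 4122 UUID (exactly 36 chars)."""
    if len(rule_id) != 36 or rule_id.count("-") != 4:
        return False
    try:
        uuid.UUID(rule_id)
        return True
    except ValueError:
        return False
```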

Output Schema

result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds significant behavioral context: rate limits ('Free: 30/hr, Pro: 500/hr'), data freshness ('refreshed daily at 02:00 UTC'), return structure ('{rule, next_calls}'), and fields returned (title, status, level, etc.). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise yet comprehensive: each sentence adds unique value. It starts with the core purpose, lists return fields, explains usage context, mentions follow-up tools, states rate limits, and ends with return format. No redundancy or filler. Well-structured for an AI agent to quickly grasp.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single required parameter, read-only, idempotent), the description is complete. It covers purpose, return values, usage context, rate limits, data source, and even suggests follow-up tools. Output schema likely defines the rule structure, so the description's field list is sufficient. No gaps remain for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage, with 'rule_id' described in detail (UUID format, example, fixed 36-char length). The description adds value by clarifying that the UUID is 'obtained from the REST sigma_rule_search endpoint or external SIEM correlation', providing provenance beyond the schema. Since schema coverage is high, the baseline is 3, and this extra context justifies a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Look up a single Sigma detection rule by UUID from the SigmaHQ corpus', specifying verb ('look up'), resource ('Sigma detection rule'), and source ('SigmaHQ corpus, ~3,200 rules, refreshed daily'). It differentiates from siblings like sigma_rule_search (search) and bulk_sigma_rule_lookup (bulk) by emphasizing 'single rule' and providing specific use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides explicit use cases: 'fetch a known rule for context (e.g., a SIEM detection that fired) or to inspect a rule discovered via REST sigma_rule_search'. It also hints at alternatives by surfacing follow-up tools (atlas_technique_lookup, cve_lookup) when relevant. While it doesn't explicitly state when not to use the tool, the sibling tools imply the alternatives for bulk or search operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
