step-security / stepsecurity-mcp (Official)

list_anomalous_network_calls

List anomalous outbound network calls across your tenant, detecting endpoints not in baseline as supply-chain exfiltration indicators. Filter by status or org. Each result includes a dashboard link for investigation.

Instructions

List anomalous outbound network-call detections across the tenant (all orgs installed under the customer). 'Anomalous' = a destination endpoint was contacted that is NOT in the repo's Harden-Runner baseline of allowed endpoints — a common indicator of supply-chain exfiltration. Typically the most-used detection type during an investigation. Every result has a dashboard_url — when you present detections to the user you MUST include a clickable link per detection, not just the first one.
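The per-detection link requirement above can be sketched as follows. Only the `dashboard_url` field is confirmed by the tool description; the other field names (`endpoint`, `repo`) are illustrative assumptions, as no output schema is published.

```python
# Hypothetical sketch: render every detection with its own clickable
# dashboard link, not just the first one, per the instructions above.
# Field names other than dashboard_url are assumptions for illustration.

def render_detections(detections):
    """Render each detection as a markdown bullet with its own dashboard link."""
    lines = []
    for d in detections:
        lines.append(
            f"- `{d.get('endpoint', 'unknown endpoint')}` in {d.get('repo', '?')} "
            f"([investigate]({d['dashboard_url']}))"
        )
    return "\n".join(lines)

sample = [
    {"endpoint": "evil.example.com:443", "repo": "org/app",
     "dashboard_url": "https://app.stepsecurity.io/d/1"},
    {"endpoint": "exfil.example.net:443", "repo": "org/api",
     "dashboard_url": "https://app.stepsecurity.io/d/2"},
]
print(render_detections(sample))
```

An agent following the instructions would emit one such bullet per detection, so each anomalous endpoint carries its own investigation link.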

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| customer | No | StepSecurity customer/tenant identifier. If omitted, falls back to the STEP_SECURITY_CUSTOMER env var. Returns detections aggregated across ALL GitHub orgs installed under this tenant. | — |
| status | No | Detection status filter. | new |
| limit | No | Max detections to return (1-200). | 50 |
| orgScope | No | Restrict to a single GitHub org under this tenant (uses the owner-scoped endpoint instead of tenant-wide). | — |
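The defaults and fallback behavior documented above can be illustrated with a small helper. This is a sketch, not part of the MCP server: it assumes the documented defaults (status `new`, limit 50 clamped to 1-200) and the STEP_SECURITY_CUSTOMER env-var fallback.

```python
import os

# Hypothetical helper assembling arguments for list_anomalous_network_calls.
# The function itself is an illustration; only the parameter names, defaults,
# and env-var fallback come from the schema documented above.

def build_args(customer=None, status="new", limit=50, org_scope=None):
    # customer falls back to the STEP_SECURITY_CUSTOMER env var when omitted
    customer = customer or os.environ.get("STEP_SECURITY_CUSTOMER")
    args = {
        "status": status,
        "limit": max(1, min(limit, 200)),  # schema constrains limit to 1-200
    }
    if customer:
        args["customer"] = customer
    if org_scope:
        # restricts results to one org via the owner-scoped endpoint
        args["orgScope"] = org_scope
    return args

print(build_args(customer="acme", limit=500))
# → {'status': 'new', 'limit': 200, 'customer': 'acme'}
```

Note the interaction the schema implies: without `orgScope` the call is tenant-wide across all installed orgs; with it, only the named org is queried.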
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that results are tenant-wide (across all orgs) and every result has a dashboard_url. It does not mention ordering, pagination, rate limits, or whether the tool is idempotent/safe. As a read operation, the description provides adequate but not comprehensive behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no waste. The first sentence states purpose, the second defines key terms, and the third provides critical user-facing instructions. Information is front-loaded and each sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of an output schema, the description helpfully mentions that each detection carries a dashboard_url. However, it does not describe the full return structure, typical fields, pagination, or ordering. For a list tool with 4 parameters and no output schema, a score of 3 reflects adequate but not complete guidance for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 4 parameters have schema descriptions (100% coverage). The tool description adds value by explaining the customer fallback behavior, default status filter, and that orgScope uses an owner-scoped endpoint. This enriches the schema beyond simple type/enum definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List anomalous outbound network-call detections' with a precise definition of 'anomalous' (not in Harden-Runner baseline). It distinguishes from sibling tools like list_detections, list_blocked_domain_calls, and list_suspicious_process_events by focusing specifically on anomalous network calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes that this is 'typically the most-used detection type during an investigation,' implying primary usage. It also provides explicit instructions to include clickable dashboard_url links per detection. However, it does not explicitly compare to alternatives or state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/step-security/stepsecurity-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.