waveguard_scan

Detect anomalies in data by comparing test samples against normal examples using GPU-accelerated wave physics simulation. Returns per-sample scores, confidence levels, and explanatory features for flagged anomalies.

Instructions

Detect anomalies in data using GPU-accelerated wave physics simulation. Fully stateless — send training data (normal examples) and test data (samples to check) in ONE call. Returns per-sample anomaly scores, confidence levels, and the top features explaining WHY each anomaly was flagged. Works on any data type: JSON objects, numbers, text, time series, arrays. No separate training step required.

Example: to check if server metrics are anomalous, send 3-5 normal readings as training, and the suspect readings as test.
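The server-metrics example above can be sketched as a tool-call payload. The metric names (`cpu`, `mem`, `latency_ms`) and values are illustrative assumptions, not fields WaveGuard requires — the schema only constrains the `training` and `test` arrays:

```python
# Hypothetical arguments for a single waveguard_scan call. How the call is
# dispatched depends on the MCP client; only the field names follow the
# tool's input schema.
payload = {
    "training": [  # 3-5 normal readings, per the description's guidance
        {"cpu": 0.32, "mem": 0.41, "latency_ms": 12},
        {"cpu": 0.35, "mem": 0.44, "latency_ms": 11},
        {"cpu": 0.30, "mem": 0.40, "latency_ms": 13},
    ],
    "test": [  # the suspect reading(s) to score
        {"cpu": 0.97, "mem": 0.91, "latency_ms": 140},
    ],
}

# Schema constraints: 2+ training samples, 1+ test samples, same shape.
assert len(payload["training"]) >= 2
assert len(payload["test"]) >= 1
```

Because the tool is stateless, both arrays travel together in every call; there is no session to train first.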

Input Schema

JSON Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| training | Yes | 2+ examples of NORMAL/expected data. These define what 'normal' looks like. All samples should be the same type and shape. More samples = better baseline (4-10 is ideal). | — |
| test | Yes | 1+ data points to check for anomalies. Same type/shape as training data. Each sample is scored independently. | — |
| sensitivity | No | Anomaly threshold multiplier. Lower = more sensitive (flags more anomalies). Higher = less sensitive. Range: 0.5 to 5.0. | 2.0 |
| encoder_type | No | Data encoder type. Omit to auto-detect from data shape. Auto-detection works well for most data. | auto-detect |
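As a rough illustration of how a threshold multiplier like `sensitivity` typically behaves — this is not WaveGuard's internal scoring rule, only a common mean-plus-k-standard-deviations sketch — lowering the multiplier flags more samples:

```python
import statistics

def flag_anomalies(scores, sensitivity=2.0):
    # Illustrative only: flag scores above mean + sensitivity * stdev.
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    threshold = mu + sensitivity * sigma
    return [s > threshold for s in scores]

scores = [0.10, 0.12, 0.11, 0.90]
print(flag_anomalies(scores, sensitivity=1.0))  # flags the 0.90 outlier
print(flag_anomalies(scores, sensitivity=3.0))  # flags nothing
```

This matches the schema's framing: lower values within the 0.5-5.0 range make the scan more aggressive, higher values more tolerant.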
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses statelessness ('Fully stateless'), computational characteristics ('GPU-accelerated'), the output format ('Returns per-sample anomaly scores, confidence levels, and the top features'), and scope flexibility ('Works on any data type'). Nothing contradicts the missing annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six sentences, zero waste. Front-loaded with purpose and method, followed by operational constraints (stateless, one call), output contract (since no output schema exists), scope, workflow simplification, and concrete example. Every sentence provides unique information not redundant with the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description comprehensively compensates by detailing return values (scores, confidence, feature explanations), covering all data types supported, explaining the stateless architecture, and providing usage examples. Sufficient for an agent to invoke correctly without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline score of 3. The description adds value by providing concrete example values ('3-5 normal readings as training') and by emphasizing the critical workflow constraint that both arrays must be sent in 'ONE call', which adds semantic context for how the training and test parameters relate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with a specific verb ('Detect') and resource ('anomalies in data'), includes an implementation detail ('GPU-accelerated wave physics simulation') that distinguishes its methodology, and explicitly notes universality ('Works on any data type'), which differentiates it from the sibling waveguard_scan_timeseries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance ('send training data... and test data... in ONE call'), clarifies prerequisites ('No separate training step required'), includes a concrete example ('to check if server metrics are anomalous...'), and explains the stateless nature, which informs when to use this tool versus stateful alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
