Glama

check_embedding_drift

Read-only · Idempotent

Detect silent embedding provider model swaps by pinning a canary and comparing cosine distances. First call saves baseline; later calls flag drift beyond a threshold.

Instructions

Pin and re-check a 16-string canary against the active embedding provider. Catches silent provider model swaps (OpenAI/Voyage/etc.) that quietly degrade hybrid retrieval. First call (or with capture=true) saves the baseline; subsequent calls report max cosine distance vs baseline. Read-only or write-only (capture). Returns JSON: { status, message, max_distance?, mean_distance?, per_string? }.
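The mechanism the description names (pin canary strings, re-embed them later, and compare cosine distances against the saved baseline) can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the embedding provider call is assumed, so plain vectors stand in for its output:

```python
import math

def cosine_distance(a, b):
    """1 minus cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def check_drift(baseline, current, threshold=0.05):
    """Compare freshly embedded canary vectors against a pinned baseline.

    Returns a dict shaped like the tool's documented JSON result.
    """
    distances = [cosine_distance(b, c) for b, c in zip(baseline, current)]
    max_d = max(distances)
    return {
        "status": "drift" if max_d > threshold else "ok",
        "max_distance": max_d,
        "mean_distance": sum(distances) / len(distances),
        "per_string": distances,
    }
```

A real implementation would embed the same pinned strings with the active provider on each call and persist the baseline vectors (e.g. in the canary file the schema mentions).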

Input Schema

capture (optional): Force a fresh baseline capture, overwriting any existing canary file.
threshold (optional, default 0.05): Cosine-distance threshold for flagging drift.
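A call against this schema might look like the following MCP `tools/call` request; this is a sketch, and the argument values are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "check_embedding_drift",
    "arguments": {
      "capture": false,
      "threshold": 0.05
    }
  }
}
```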
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description contradicts annotations: annotations state readOnlyHint=true and idempotentHint=true, but description indicates that capture=true performs a write (overwrites baseline) and that behavior is not idempotent when capture=true. This contradiction reduces trust.
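One way to reconcile the contradiction, sketched here with hypothetical values, is for the annotations themselves to advertise the write path (MCP tool annotations define these hint fields):

```json
{
  "name": "check_embedding_drift",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": false
  }
}
```

Alternatively, the capture path could be split into a separate write-only tool so the read-only check keeps honest `readOnlyHint=true` annotations.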

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is succinct, front-loaded with purpose, and contains no redundant information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Since the tool provides no output schema, the description includes the return format and explains the canary concept. It is almost complete, though it could clarify how the 16 canary strings are defined. Overall, well-rounded for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already fully describes both parameters (100% coverage). Description adds lifecycle context (the first call saves the baseline) but does not significantly enhance understanding beyond what the schema provides, so the baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it pins and re-checks a 16-string canary against the active embedding provider, specifically catching silent provider model swaps. This is distinct from generic drift detection sibling tools, making purpose precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies usage for detecting provider swaps and explains first-call vs subsequent-call behavior, but does not explicitly provide when-to-use or when-not-to-use guidance compared to alternatives like detect_drift.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
