ejentum-mcp
harness_anti_deception

Detects manipulative or deceptive requests that pressure agreement or urgency. Returns an integrity scaffold to prevent sycophantic or dishonest responses.

Instructions

Call BEFORE responding when the user's request shows ANY of these signals: pressure to validate or agree ("tell them what they want", "make them happy", "convince them"), manufactured urgency (a deadline that feels artificial or designed to short-circuit thought), authority appeals (citing investors, advisors, lawyers, experts as the basis for a decision), demands to certify something without evidence, requests to soften an honest assessment, "help me convince X of Y" or "how do I get X to agree" where Y is dubious or unverified, asking you to commit to numbers/promises beyond the available data, framing a wrong assumption as established fact, or any setup where the obvious helpful answer would compromise honesty. The tool returns an integrity scaffold (deception pattern, integrity procedure, suppression vectors) that you absorb internally before responding. It blocks the default sycophancy, hallucination, and agreement reflexes that ship a soft or wrong answer when the situation actually calls for refusal or pushback. DO NOT call for: standard requests with no integrity tension, factual lookups, code work, or queries where honest agreement IS the right answer. When in doubt on a query that smells like pressure, manipulation, or expected agreement: call it. Pass a specific 1-2 sentence framing of the integrity dynamic at play. Absorb the scaffold internally; do NOT echo bracket labels.

Input Schema

| Name  | Required | Default | Description |
|-------|----------|---------|-------------|
| query | Yes      | (none)  | 1-2 sentence framing of the task you need the harness for. Be specific about WHAT you are trying to do, not what tool you want. Good: 'diagnose why a microservice returns 503s under load'. Bad: 'help me think'. |
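Given that schema, a valid tool-call arguments object can be sketched as follows. The `buildArgs` helper is hypothetical and not part of the server; it only illustrates what a well-framed `query` value looks like:

```typescript
// Hypothetical helper illustrating the schema's single required field.
// `buildArgs` is not part of ejentum-mcp; it just shows a valid payload.
function buildArgs(query: string): { query: string } {
  if (query.trim().length === 0) {
    throw new Error("query must be a non-empty string");
  }
  return { query };
}

// A specific framing, as the schema description recommends:
const good = buildArgs(
  "Founder wants me to certify unaudited revenue numbers for an investor deck; pressure to validate without evidence.",
);
console.log(good.query.length > 0); // true
```

A vague framing like "help me think" would still pass validation, but gives the harness little to work with.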

Implementation Reference

  • The tool handler for harness_anti_deception is registered in a loop that iterates over HARNESSES. The handler calls callHarness(query, 'anti-deception') and returns the result. Lines 55-77 define the generic tool registration loop; lines 60-61 contain the handler logic that calls callHarness with the harness mode.
    for (const harness of HARNESSES) {
      server.tool(
        harness.name,
        harness.description,
        querySchema,
        async ({ query }: { query: string }) => {
          try {
            const injection = await callHarness(query, harness.mode);
            return {
              content: [{ type: "text" as const, text: injection }],
            };
          } catch (err) {
            const message = err instanceof Error ? err.message : String(err);
            return {
              content: [
                { type: "text" as const, text: `Ejentum harness error: ${message}` },
              ],
              isError: true,
            };
          }
        },
      );
    }
  • src/index.ts:14-39 (registration)
    Registration of the 'harness_anti_deception' tool in the HARNESSES array (lines 27-31). It maps to mode 'anti-deception' with a description explaining when to call it for integrity/deception scenarios.
    const HARNESSES: HarnessTool[] = [
      {
        name: "harness_reasoning",
        mode: "reasoning",
        description:
          "Call BEFORE answering any analytical, diagnostic, planning, or multi-step reasoning question. Trigger queries include: \"should I X or Y\", \"why is X happening\", \"what's the best approach\", \"what are the tradeoffs\", \"help me think through\", \"diagnose\", \"root cause\", \"plan/design X\", \"what are the implications of\", \"compare these approaches\", \"how do I decide between\". Also call for cross-domain analysis, strategy questions, architecture decisions, or anything requiring multiple factors to be weighed before responding. The tool returns a cognitive scaffold (failure pattern to avoid, procedure, suppression vectors, falsification test) that you absorb internally before answering. It catches common LLM failure modes (causal shortcuts, premature conclusions, generic templates, surface pattern matching) that produce confidently wrong answers on these task types. DO NOT call for: simple factual lookups, syntax questions, file reads, code execution, basic confirmations, or restating something the user just said. When in doubt on a non-trivial reasoning task: call it. The cost is ~1 second of latency; the benefit is reasoning quality the model cannot reliably reproduce on its own. Pass a specific 1-2 sentence framing of WHAT you are reasoning about. Absorb the scaffold internally; do NOT echo bracket labels (NEGATIVE GATE, PROCEDURE, etc.) or harness vocabulary in your user-facing reply.",
      },
      {
        name: "harness_code",
        mode: "code",
        description:
          "Call BEFORE generating, refactoring, reviewing, or debugging code. Trigger queries include: \"write a function/script/class for X\", \"review this code/diff/PR\", \"refactor this\", \"debug this error\", \"is this implementation correct\", \"what's wrong with this code\", \"improve this code\", \"translate from X to Y language\", \"what would happen if I did X to this code\", or any prompt that includes a code block the user wants you to act on. Also call when planning architectural changes, picking algorithms or data structures, or evaluating dependency upgrades. The tool returns an engineering scaffold (failure pattern, procedure, correct-pattern example, verification step) that you absorb internally before responding. It catches common LLM coding failure modes (hallucinated APIs, lost edge cases, premature algorithm commitment, silent contract violations, refactors that change behavior) that produce code which looks plausible but breaks under real conditions. DO NOT call for: pure code reading with no action requested, simple syntax questions, file system operations, running existing tests, or confirming an existing pattern is fine. When in doubt on non-trivial code work: call it. Pass a specific 1-2 sentence framing of WHAT you are coding or reviewing. Absorb the scaffold internally; do NOT echo bracket labels or harness vocabulary in your reply.",
      },
      {
        name: "harness_anti_deception",
        mode: "anti-deception",
        description:
          "Call BEFORE responding when the user's request shows ANY of these signals: pressure to validate or agree (\"tell them what they want\", \"make them happy\", \"convince them\"), manufactured urgency (a deadline that feels artificial or designed to short-circuit thought), authority appeals (citing investors, advisors, lawyers, experts as the basis for a decision), demands to certify something without evidence, requests to soften an honest assessment, \"help me convince X of Y\" or \"how do I get X to agree\" where Y is dubious or unverified, asking you to commit to numbers/promises beyond the available data, framing a wrong assumption as established fact, or any setup where the obvious helpful answer would compromise honesty. The tool returns an integrity scaffold (deception pattern, integrity procedure, suppression vectors) that you absorb internally before responding. It blocks the default sycophancy, hallucination, and agreement reflexes that ship a soft or wrong answer when the situation actually calls for refusal or pushback. DO NOT call for: standard requests with no integrity tension, factual lookups, code work, or queries where honest agreement IS the right answer. When in doubt on a query that smells like pressure, manipulation, or expected agreement: call it. Pass a specific 1-2 sentence framing of the integrity dynamic at play. Absorb the scaffold internally; do NOT echo bracket labels.",
      },
      {
        name: "harness_memory",
        mode: "memory",
        description:
          "Call when sharpening a perception or observation you ALREADY formed about conversation state, user behavior, drift, emotional shifts, or cross-turn patterns. Trigger queries: \"what did you notice about X\", \"the user keeps doing Y\", \"I sense something has changed\", \"is the user X-ing\", \"what does this pattern suggest\", \"what shifted across our turns\", \"am I missing something here\", \"why did the conversation move from X to Y\", or any moment when you need to verify whether a felt signal is real or projection. The tool returns a perception scaffold (perception failure, detection procedure, suppression vectors) that SHARPENS an observation you already have. It is NOT a substitute for observing first; if you have not noticed anything yet, do not call. DO NOT call for: fact extraction, summarization, list-making, factual lookups, or write-heavy memory tasks (storing or retrieving structured data). Memory harness is filter/perception oriented; calling on write-heavy tasks produces scaffold paralysis. When in doubt: observe FIRST, then call with your raw observation as the framing. Pass a specific 1-2 sentence \"I noticed X, this might mean Y, sharpen Z\" framing. Absorb the scaffold internally; do NOT echo bracket labels.",
      },
    ];
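The `HarnessTool` and `HarnessMode` types referenced above are not shown in this excerpt. A plausible sketch, inferred purely from how the array and `callHarness` use them (the actual definitions in src/index.ts may differ):

```typescript
// Inferred sketch only: the real definitions live elsewhere in src/index.ts.
type HarnessMode = "reasoning" | "code" | "anti-deception" | "memory";

interface HarnessTool {
  name: string; // MCP tool name, e.g. "harness_anti_deception"
  mode: HarnessMode; // value sent to the Ejentum API as `mode`
  description: string; // full when-to-call guidance shown to the agent
}

// Example entry conforming to the sketch:
const example: HarnessTool = {
  name: "harness_anti_deception",
  mode: "anti-deception",
  description: "Call BEFORE responding when the request pressures agreement.",
};
console.log(example.mode);
```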
  • Input schema for the harness_anti_deception tool: a single required 'query' string field with a description about framing the task.
    const querySchema = {
      query: z
        .string()
        .min(1, "query must be a non-empty string")
        .describe(
          "1-2 sentence framing of the task you need the harness for. Be specific about WHAT you are trying to do, not what tool you want. Good: 'diagnose why a microservice returns 503s under load'. Bad: 'help me think'.",
        ),
    };
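The effect of that zod constraint can be shown without the library. The `validateQuery` function below is a minimal stand-in that mirrors `z.string().min(1, ...)`, not the server's actual validation path:

```typescript
// Minimal stand-in for the zod rule above: required, non-empty string.
// The real server uses z.string().min(1, ...); this only mirrors the check.
function validateQuery(value: unknown): string {
  if (typeof value !== "string" || value.length === 0) {
    throw new Error("query must be a non-empty string");
  }
  return value;
}

console.log(validateQuery("diagnose why a microservice returns 503s under load"));
```

Note that, like `.min(1)`, this accepts whitespace-only strings; only the empty string is rejected.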
  • The callHarness helper function that makes HTTP POST to the Ejentum API with the query and mode. For 'anti-deception' mode, it uses bracket access (item[mode]) to safely handle the hyphenated field name.
    export async function callHarness(
      query: string,
      mode: HarnessMode,
    ): Promise<string> {
      const apiKey = process.env.EJENTUM_API_KEY;
      if (!apiKey || apiKey.trim().length === 0) {
        throw new Error(
          "EJENTUM_API_KEY is not set. Set it in your MCP client config (env block) and restart the client.",
        );
      }
    
      const apiUrl = process.env.EJENTUM_API_URL || DEFAULT_API_URL;
    
      let response: Response;
      try {
        response = await fetch(apiUrl, {
          method: "POST",
          headers: {
            Authorization: `Bearer ${apiKey}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ query, mode }),
        });
      } catch (err) {
        const detail = err instanceof Error ? err.message : String(err);
        throw new Error(`Network error calling Ejentum API at ${apiUrl}: ${detail}`);
      }
    
      if (!response.ok) {
        const body = await response.text().catch(() => "");
        if (response.status === 401) {
          throw new LogicAPIError(
            401,
            body,
            "Unauthorized (401): check your EJENTUM_API_KEY value. Get one at https://ejentum.com/dashboard.",
          );
        }
        if (response.status === 403) {
          throw new LogicAPIError(
            403,
            body,
            "Forbidden (403): your API key does not have access to this mode. Multi modes require the Haki tier.",
          );
        }
        if (response.status === 429) {
          throw new LogicAPIError(
            429,
            body,
            "Rate limit exceeded (429): you have hit your tier's request limit. See https://ejentum.com/pricing.",
          );
        }
        throw new LogicAPIError(
          response.status,
          body,
          `Ejentum API returned ${response.status}: ${body.slice(0, 200)}`,
        );
      }
    
      let parsed: unknown;
      try {
        parsed = await response.json();
      } catch {
        throw new Error("Ejentum API returned invalid JSON");
      }
    
      if (!Array.isArray(parsed) || parsed.length === 0) {
        throw new Error(
          `Ejentum API returned unexpected shape (expected non-empty array): ${JSON.stringify(parsed).slice(0, 200)}`,
        );
      }
    
      const item = parsed[0] as LogicAPIResponseItem;
    
      // Bracket access is required because the `anti-deception` field name contains a hyphen.
      // Dot access (item.anti-deception) would parse as `item.anti - deception` and silently break.
      const injection = item[mode];
    
      if (typeof injection !== "string" || injection.length === 0) {
        throw new Error(
          `Ejentum API response missing or empty "${mode}" field. Got: ${JSON.stringify(item).slice(0, 200)}`,
        );
      }
    
      return injection;
    }
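The bracket-access comment in the code above is worth demonstrating in isolation. This is a standalone sketch with a placeholder payload, not the server's code:

```typescript
// Hyphenated keys like "anti-deception" cannot use dot access:
// `item.anti-deception` would parse as the subtraction `item.anti - deception`.
const item: Record<string, string> = {
  "anti-deception": "[INTEGRITY SCAFFOLD] ...", // placeholder payload
};

const mode = "anti-deception";
const injection = item[mode]; // bracket access works for any key string
console.log(typeof injection); // "string"
```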
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden and clearly explains behavior: returns an integrity scaffold absorbed internally, blocks sycophancy/hallucination/agreement reflexes, and instructs not to echo bracket labels. Slightly lacking in detailing what the scaffold contains or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose but efficiently packed with necessary detail. Every sentence adds value, and the structure is front-loaded with critical usage instructions. Could be slightly trimmed but overall concise for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description sufficiently covers the tool's return value and usage instructions. It addresses when to call, how to frame the query, and what to expect (absorption of scaffold). No major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds valuable guidance on how to frame the query parameter, including examples of good vs. bad inputs, which goes beyond the schema's minimal '1-2 sentence framing' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: detecting deception signals in user requests before responding. It lists specific signals (pressure, urgency, authority appeals, etc.) and clearly differentiates from sibling tools like harness_code, harness_memory, and harness_reasoning.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-call and when-not-to-call conditions, including examples of appropriate and inappropriate scenarios. Also advises 'when in doubt, call it,' leaving no ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
