Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| IRANTI_PORT | No | API port | 3001 |
| DATABASE_URL | Yes | PostgreSQL connection string (pgvector required) | — |
| LLM_PROVIDER | Yes | LLM provider: openai, claude, gemini, groq, mistral, ollama, or mock | — |
| IRANTI_API_KEY | Yes | Server authentication key | — |
| IRANTI_ARCHIVIST_WATCH | No | Watch escalation files and auto-run maintenance (true/false) | — |
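These variables can be exported in a shell profile or `.env` file before starting the server. The values below are placeholders for illustration, not defaults from this page (except `IRANTI_PORT`):

```shell
# Example environment for a local run; all values except IRANTI_PORT are placeholders.
export IRANTI_PORT=3001                      # optional; defaults to 3001
export DATABASE_URL='postgresql://user:pass@localhost:5432/iranti'  # pgvector extension required
export LLM_PROVIDER=mock                     # openai | claude | gemini | groq | mistral | ollama | mock
export IRANTI_API_KEY='change-me'
export IRANTI_ARCHIVIST_WATCH=false          # optional; true enables auto-maintenance
```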

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{ "listChanged": true }` |

Tools

Functions exposed to the LLM to take actions

iranti_handshake

Initialize or refresh an agent's working-memory brief for the current task. Call this at session start or when a new task begins, passing the task and recent messages. Returns operating rules plus prioritized relevant memory for that task. If the recent messages appear to contain durable facts that are not yet in shared memory, the result may include a backfill suggestion. If your host does not support a true session-start hook, call this on the first user turn before you start answering recall-style questions. Do not use this as a per-turn retrieval tool; use iranti_attend.
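As a sketch, a host might perform the handshake at session start like this. The argument names (`task`, `recentMessages`) are assumptions inferred from the description above, and `client.call_tool` stands in for whatever generic MCP client the host uses:

```python
# Hypothetical session-start handshake; field names are inferred from the
# tool description, not from a published schema.
def start_session(client, task, recent_messages):
    brief = client.call_tool("iranti_handshake", {
        "task": task,
        "recentMessages": recent_messages,
    })
    # The result carries operating rules plus prioritized relevant memory,
    # and may include a backfill suggestion for durable facts not yet stored.
    return brief
```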

iranti_attend

Ask Iranti whether memory should be injected before the next LLM turn. REQUIRED CALL SEQUENCE — follow this every turn, regardless of host:

  1. Call with phase='pre-response' BEFORE replying to the user.

  2. Call BEFORE any lookup tool (Read, Grep, Glob, Bash, WebSearch, WebFetch) where Iranti might already hold the answer. When you do, pass the pendingToolCall field so Iranti can derive entity hints from the tool target (file, URL, query) and preempt the lookup with stored facts.

  3. If you just ran Edit/Write/Bash/WebSearch/WebFetch since your last iranti_write, call iranti_write FIRST — then attend.

  4. Call with phase='post-response' AFTER every reply, without exception.

If the user is asking you to recall a remembered fact (preference, decision, blocker, next step, prior project detail), use this before answering instead of guessing or saying you do not know. Returns an injection decision plus any facts that should be added to context if relevant memory is missing. If no handshake has been performed yet for this agent in the current process, attend will auto-bootstrap the session first and report that in the result metadata. This is the minimum safe pre-reply call even when the host skipped handshake. Omitting currentContext falls back to the latest message only; pass the full visible context when available. For host compatibility, message is accepted as an alias for latestMessage. When phase='post-response', pass the assistant response so Iranti can persist strict continuity facts and shared checkpoint state before closing the turn.
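The per-turn sequence above can be sketched as follows. This is a minimal illustration, assuming a generic MCP client with a `call_tool(name, arguments)` method; the field name for the assistant response in the post-response call is an assumption, since the description only says to "pass the assistant response":

```python
# Sketch of the required per-turn attend sequence (steps 1 and 4 above).
def run_turn(client, user_message, make_reply):
    # 1. Pre-response attend BEFORE replying to the user.
    decision = client.call_tool("iranti_attend", {
        "phase": "pre-response",
        "latestMessage": user_message,
    })
    # Generate the reply, injecting any facts the decision returned.
    reply = make_reply(decision)
    # 4. Post-response attend AFTER every reply, passing the response so
    # Iranti can persist continuity facts before the turn closes.
    client.call_tool("iranti_attend", {
        "phase": "post-response",
        "latestMessage": user_message,
        "response": reply,
    })
    return reply
```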

iranti_checkpoint

Persist a shared progress checkpoint while you work. Use this at meaningful milestones so current step, next step, open risks, recent outputs, structured actions, and shared entity state survive across turns, sessions, and agents. This is the strongest shared-RAM tool for active work: prefer it over ad-hoc prose when you need another session or another agent to pick up where you left off. If entityTargets are supplied, Iranti also writes canonical shared state such as current_step, next_step, open_risks, recent_actions, and recent_file_changes to those entities for handoff.
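A checkpoint call might carry arguments shaped like the following. The key names mirror the state the description lists (current step, next step, open risks, entity targets) but are assumptions, not a published schema, and the values are illustrative:

```python
# Hypothetical checkpoint arguments; names mirror the description, values are made up.
checkpoint_args = {
    "currentStep": "migrating the fact table to pgvector",
    "nextStep": "backfill embeddings for archived facts",
    "openRisks": ["backfill may exceed the maintenance window"],
    "entityTargets": ["project/iranti"],  # triggers canonical shared-state writes
}
```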

iranti_observe

Recover relevant facts that have fallen out of the Claude context window.

iranti_query

Retrieve the current fact for an exact entity+key lookup. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected before exact lookup. Use this when you already know both the entity and the key. Returns the current value, summary, confidence, source, and temporal metadata when available. Prefer this over iranti_search when the target fact is already known, and do not answer from memory alone before checking Iranti.

iranti_history

Retrieve the full version history of a fact for an exact entity+key pair. Returns all archived past values plus the current value, ordered oldest-first. Each entry includes value, summary, confidence, source, validFrom, validUntil, isCurrent, archivedReason, and resolutionState. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected first. Use this to understand how a fact evolved over time — decisions that changed, blockers that were resolved, values that were contested or superseded.

iranti_search

Search shared memory with natural language when the exact entity or key is unknown. Uses hybrid lexical and vector search across stored facts. Use this for discovery and recall, not exact lookup. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected before search. If the user asks what they previously told you and you do not know the exact key, use this before saying you do not know.
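The contrast with iranti_query can be sketched as two argument shapes. The field names and the `user/primary` entity id are hypothetical, inferred from the descriptions above:

```python
# Discovery: natural-language search when entity/key are unknown (iranti_search).
search_args = {"query": "what release process did the user ask for?"}

# Exact lookup: entity and key already known (iranti_query).
query_args = {"entity": "user/primary", "key": "release_process"}  # hypothetical ids
```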

iranti_write

Write one durable fact to shared memory for a specific entity. TIMING: Call IMMEDIATELY when a fact is confirmed — do not batch or defer to end of turn. One call per finding. If you edited a file, write before the next action. If you ran a command and got a result, write before the next action. If you got a search result, write before moving on. Use this when you learned something concrete that future turns, agents, or sessions should retain. Requires: entity ("type/id"), key, value JSON, and summary. Confidence is optional and defaults to 85. Conflicts on the same entity+key are detected automatically and may be resolved or escalated. Personal-memory keys honor the configured canonical personal entity for this project/session. Use properties JSON when you need structured issue or workflow metadata such as issueStatus=open|resolved, severity, or resolution notes.
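On the wire, an MCP host issues this as a JSON-RPC `tools/call` request. A minimal sketch, assuming the standard MCP request shape; the entity, key, and value below are illustrative, not facts from this page:

```python
import json

def tool_call(name, arguments, req_id=1):
    """Wrap tool arguments in a JSON-RPC tools/call request (MCP wire shape)."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

# entity ("type/id"), key, value JSON, and summary are required per the
# description; confidence is optional and defaults to 85.
payload = tool_call("iranti_write", {
    "entity": "project/iranti",          # illustrative entity
    "key": "deploy_target",              # illustrative key
    "value": {"platform": "docker"},
    "summary": "The project is deployed via Docker.",
    "confidence": 90,
})
print(json.dumps(payload, indent=2))
```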

iranti_write_rule

Write a task-scoped user operating rule with trigger keywords. Rules surface during iranti_attend only when the current context matches one or more trigger keywords. Use this for recurring guidelines that should be applied to specific task types (e.g. "always use GitHub Releases, not npm publish" triggered by "release", "publish", "npm"). Rules are stored as rule/<rule_id> entities and persist across sessions.
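A rule write might look like the following. The example rule and trigger keywords come from the description above; the argument names themselves are assumptions:

```python
# Hypothetical rule arguments; the rule text and triggers are the example
# given in the description, the field names are not from a published schema.
rule_args = {
    "rule": "Always use GitHub Releases, not npm publish.",
    "triggers": ["release", "publish", "npm"],
}
```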

iranti_write_issue

Write a canonical open or resolved issue fact on a stable key. Use this when you want defects, bugs, or chores to remain first-class shared memory instead of loose prose. The same issueId always maps to the same issue_ key, so changing status from open to resolved archives the prior state automatically while preserving history. Prefer this over hand-rolling issueStatus properties through iranti_write when the fact is specifically a trackable issue lifecycle entry.
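The lifecycle behavior can be sketched with two argument payloads. The field names (`issueId`, `issueStatus`, `severity`) follow the terms used in the descriptions above but are assumptions, and the issue itself is invented for illustration:

```python
# Hypothetical issue lifecycle: the same issueId always maps to the same
# issue_ key, so flipping the status archives the prior state automatically.
open_issue = {
    "issueId": "stale-vector-rows",
    "issueStatus": "open",
    "summary": "Vector search occasionally returns stale rows.",
    "severity": "medium",
}
# A later write with the same issueId resolves it while preserving history.
resolved_issue = {**open_issue, "issueStatus": "resolved"}
```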

iranti_remember_response

Persist a strict durable summary from your own response. Use this after you decide to say something like "the next step is ...", "the blocker is ...", "we decided ...", or "the current owner is ...". This uses the same narrow summary extractor as the Claude Stop hook, but it is explicit and works for Codex or any MCP client. Do not use this for arbitrary prose or every turn.

iranti_ingest

Ingest a raw text block and let the Librarian chunk it into atomic facts.

iranti_relate

Create a relationship edge between two entities.

iranti_related

Read directly related entities (1 hop) for a given entity. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected before graph traversal.

iranti_related_deep

Read related entities up to N hops deep for a given entity. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected before graph traversal.

iranti_who_knows

List which agents have written facts about an entity. REQUIRED: call iranti_attend before this discovery tool so Iranti can decide whether memory should be injected before provenance discovery.

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nfemmanuel/iranti'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.