omega_ingest

Stores a knowledge fragment with source and evidence tier metadata for future retrieval via semantic RAG queries.

Instructions

Stores a new knowledge fragment in the provenance RAG store with source and evidence tier metadata. Use this to persist decisions, patterns, or findings for future retrieval via omega_rag_query. Returns JSON with fields: fragment_id, stored (boolean), timestamp.

Input Schema

| Name | Required | Description | Default |
|---------|----------|-------------|---------|
| content | Yes | Text content to store, e.g. 'Switched from Poetry to setuptools for pyproject.toml compatibility'. | |
| source | No | Origin identifier for provenance tracking, e.g. 'code-review', 'user-session', 'documentation'. | |
| tier | No | Evidence confidence tier: A (verified/reproducible), B (reliable), C (single source), D (unverified). | B |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description discloses it stores and returns JSON with specific fields, but does not cover side effects like overwrite behavior, idempotency, or authorization requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences front-load the core action and return format. No extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is simple, with three parameters and no output schema. The description covers purpose, usage, and return fields. It could mention behavioral details such as idempotency, but it is sufficient overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds no extra meaning beyond the schema's parameter descriptions, which are already clear for content, source, and tier.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'stores', the resource 'knowledge fragment in provenance RAG store', and the purpose 'persist for future retrieval via omega_rag_query', distinguishing it from retrieval tools like omega_rag_query.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises when to use the tool: 'persist decisions, patterns, or findings for future retrieval'. It does not include when-not-to-use guidance or alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/VrtxOmega/omega-brain-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.