Fegis
Fegis is a semantic programming framework and tool compiler that transforms YAML specifications—called Archetypes—into structured, reusable tools for large language models (LLMs). Built on the Model Context Protocol (MCP), Fegis compiles each Archetype into schema-validated interfaces, where field names and parameters act as semantic directives that guide content generation.
Every tool invocation is preserved in a hybrid memory system combining vector embeddings with structured metadata—forming an emergent knowledge graph that enables persistent memory, semantic retrieval, and exploration of interconnected ideas.
Core Components
1. MCP Server Implementation
Fegis implements the Model Context Protocol (MCP), but unlike typical MCP servers that focus on bridging LLMs to external systems, Fegis creates semantically rich, internally defined tools using YAML archetypes. It extends the MCP framework by introducing parameters and frames that shape how language models understand and interact with these tools.
2. Semantic Programming Framework
Fegis introduces a practical form of semantic programming, where YAML structure acts as a scaffold for language model behavior. Instead of writing detailed prompts or procedural instructions, you define intent using meaningful field names, frames, and parameters.
This approach treats structure as code: field names aren't just labels — they guide and constrain what the LLM generates. Parameters don't merely pass values — they shape the model's expressive space through the scaffolding they provide.
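To make "structure as code" concrete: when an Archetype is compiled, each tool becomes a schema-validated MCP interface. The fragment below is an illustrative sketch of what a compiled tool schema might look like for the Summary tool discussed later in this README — the exact shape of Fegis's generated schemas may differ; only the MCP tool envelope (name, description, inputSchema) is standard:

```json
{
  "name": "Summary",
  "description": "Create a concise summary of the provided content.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "Length": {
        "type": "string",
        "description": "How detailed the output should be (terse ... exhaustive)"
      },
      "key_points": {
        "type": "array",
        "items": { "type": "string" },
        "description": "Important elements identified in the source"
      },
      "conclusion": { "type": "string" }
    },
    "required": ["key_points"]
  }
}
```

Notice that the field names themselves carry the semantic payload: "key_points" and "conclusion" tell the model what to produce, not just where to put it.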
3. Hybrid Memory System
Fegis features a hybrid memory system that combines vector embeddings with structured metadata, creating a powerful, searchable history of all tool invocations. This memory functions as an emergent knowledge graph, enabling the discovery and traversal of interconnected information pathways. All embedding and memory data remains local by default, unless explicitly configured otherwise.
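The core idea — rank by vector similarity, constrain by structured metadata — can be illustrated with a self-contained toy. This is not Fegis's implementation (which uses Qdrant and real embeddings); the `HybridMemory` class, the two-dimensional "embeddings", and the tool names are all illustrative:

```python
# Toy sketch of hybrid memory: vector similarity search combined with
# structured-metadata filtering. Illustrative only -- Fegis itself
# stores real embeddings in Qdrant.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class HybridMemory:
    def __init__(self):
        self.records = []  # each record: (embedding, metadata, content)

    def store(self, embedding, metadata, content):
        self.records.append((embedding, metadata, content))

    def search(self, query_vec, metadata_filter=None, top_k=3):
        # Keep only records whose metadata matches every filter key,
        # then rank the survivors by cosine similarity to the query.
        candidates = [
            r for r in self.records
            if not metadata_filter
            or all(r[1].get(k) == v for k, v in metadata_filter.items())
        ]
        candidates.sort(key=lambda r: cosine(query_vec, r[0]), reverse=True)
        return [r[2] for r in candidates[:top_k]]

memory = HybridMemory()
memory.store([1.0, 0.0], {"tool": "Thought"}, "note on compilers")
memory.store([0.9, 0.1], {"tool": "Reflection"}, "follow-up on compilers")
memory.store([0.0, 1.0], {"tool": "Thought"}, "note on gardening")

# Semantic search restricted to invocations of the "Thought" tool.
hits = memory.search([1.0, 0.0], metadata_filter={"tool": "Thought"})
```

Traversing such records by shared metadata (tool name, timestamps, referenced ideas) is what turns a flat invocation log into the emergent knowledge graph described above.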
How LLMs Process Archetypes
To understand how this works, let's look at what happens when an LLM processes the scaffolding of an Archetype:
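The sketch below is a plausible Archetype assembled from the walkthrough that follows — the field names (`archetype_context`, `parameters`, `Length`, `Summary`, `key_points`, `conclusion`) match that walkthrough, but the overall layout is a reconstruction, not the project's actual example file:

```yaml
# Hypothetical Archetype sketch, reconstructed from the element-by-element
# walkthrough below. Layout details are illustrative.
archetype_context: >
  Tools for condensing information into clear, compact summaries.

parameters:
  Length:
    description: How detailed the generated output should be.
    example_values: [terse, brief, moderate, detailed, exhaustive]

tools:
  Summary:
    description: Create a concise summary of the provided content.
    parameters:
      Length: brief
    frames:
      key_points:
        type: List
        required: true
      conclusion:
        type: str
```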
Each element in this YAML definition serves a specific purpose:
- The archetype_context defines the conceptual space and purpose of these tools. This text can be injected into the model's context or kept as documentation for how the tools should be used.
- The parameters section defines semantic dimensions that shape output:
  - The parameter name ("Length") identifies which aspect is being configured
  - The description gives a clear definition of the parameter's purpose
  - example_values establishes a spectrum of possible values ([terse...exhaustive])
  - When a tool is used, a specific value ("brief") triggers the associated language patterns
- The tool name "Summary" signals the tool's role, activating the model's associated patterns for condensing information.
- The tool description - "Create a concise summary..." sets the specific objective and purpose.
- The frame fields define what content to generate:
  - The field name "key_points" guides the model to identify important elements
  - The type constraint "List" formats output as discrete items
  - "required: true" ensures this field is always populated
  - The field name "conclusion" prompts a closing summary statement
Together, these elements create a structured flow from schema definition to generated output.
Example Interaction: Cognitive Tools
To see Fegis in action, check out this example interaction with cognitive tools that demonstrates how Thought and Reflection tools work with the memory system.
What Can You Build With Fegis?
Fegis has been used to create:
- Thinking frameworks that guide LLMs through complex reasoning processes
- Web exploration interfaces with tools for curating and connecting content
- Optimization systems inspired by biological networks
- Symbolic reasoning tools using emoji as a visual language
Quick Start
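Fegis persists memories in a Qdrant vector database, typically run locally via Docker. Assuming the default Qdrant image and port, starting it looks like:

```shell
docker run -p 6333:6333 qdrant/qdrant
```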
Configure Claude Desktop
Update claude_desktop_config.json:
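A typical MCP server entry is sketched below — the server name, launch command, args, and path are placeholders and depend on how you installed Fegis:

```json
{
  "mcpServers": {
    "fegis": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/fegis", "fegis"]
    }
  }
}
```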
Learn More
- Examples - Sample archetypes to get you started
more docs coming soon...
Support Development
☕ Buy me a coffee
💖 Sponsor on GitHub
License
This project is licensed under the MIT License — see the LICENSE file for full details.
The MIT License is permissive and simple: Do anything you want with the code, as long as you give proper attribution and don't hold the authors liable.