Programmatically stages and commits changes to repositories with context-rich commit messages synthesized from agent justifications.
Includes a GitHub Actions workflow to validate programmatic commits and ensure auditability within CI/CD pipelines.
Provides pre-commit hooks to validate programmatic commits and ensure that codebase changes meet mandatory audit and justification requirements.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@ followed by the MCP server name and your instructions, e.g., "@domin8 Refactor auth.py and explain the changes for the audit log"
That's it! The server will respond to your query, and you can continue using it as needed.
domin8
High-level goal: Provide a deterministic, auditable tool that constrains AI co-developing agents and requires them to record meaningful intent and justification before making destructive or intrusive changes to a codebase.
🚀 Quick Start
Installation
Using with Continue (VS Code)
domin8 integrates seamlessly with the Continue AI assistant:
Run the MCP server locally using the repository entrypoint:
📖 Full Continue Integration Guide →
Interactions and approvals are handled via MCP elicitation and chat-based flows. There is no CLI or Web UI in this distribution.
Project overview 🔍
domin8 is a Model Context Protocol (MCP) server that provides highly-structured pipelines for AI co-developing agents to request, justify, and execute potentially destructive actions (create, edit, delete, rename, move files, etc.) while ensuring auditability and capturing intent data for future training and optimization.
Key design goals:
Force agents to provide the WHAT and WHY of what they want to do before any destructive action is performed.
Persist intent, context, and the decision alongside a full trace of the action for later review and training.
Be deterministic, meticulously organized and transparent so that human reviewers can reproduce, review, and validate agent decisions.
Provide chat-based human-in-the-loop approval workflows (no CLI or Web UI).
Features ✨
🔐 Cryptographic Signatures
HMAC-SHA256 signatures for non-repudiation
Secure key management
Verifiable approvals
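A minimal sketch of how verifiable approvals could work with HMAC-SHA256, using only the standard-library hmac, hashlib, and json modules; the key source, record shape, and function names are illustrative assumptions, not this project's actual implementation:

```python
import hashlib
import hmac
import json

def sign_approval(approval: dict, key: bytes) -> str:
    """Sign a canonical JSON encoding of the approval record (illustrative)."""
    canonical = json.dumps(approval, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_approval(approval: dict, signature: str, key: bytes) -> bool:
    """Constant-time comparison so signature checks do not leak timing information."""
    return hmac.compare_digest(sign_approval(approval, key), signature)

# Hypothetical usage: the key would come from a secure keystore, never from source code.
key = b"loaded-from-a-secure-keystore"
record = {"request_id": "example-id", "approved_by": "human", "decision": "approved"}
signature = sign_approval(record, key)
assert verify_approval(record, signature, key)
```

Because the signature covers the whole approval record, any later tampering with the stored decision invalidates it, which is what makes the approval non-repudiable.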
🗄️ SQLite Indexing
Fast artifact searches
Automatic indexing on operations
No filesystem scanning needed
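A sketch of the kind of index this implies, using the standard-library sqlite3 module; the database path, table name, and columns are assumptions for illustration only:

```python
import sqlite3

conn = sqlite3.connect("artifact_index.db")  # hypothetical index location
conn.execute(
    """CREATE TABLE IF NOT EXISTS artifacts (
           id         TEXT PRIMARY KEY,   -- artifact UUID
           repo_path  TEXT NOT NULL,      -- file the artifact describes
           tool       TEXT NOT NULL,      -- e.g. request_file_edit
           created_at TEXT NOT NULL       -- ISO-8601 local timestamp
       )"""
)

def record_artifact(artifact_id: str, repo_path: str, tool: str, created_at: str) -> None:
    # Indexing happens as part of every operation; `with conn` commits automatically.
    with conn:
        conn.execute("INSERT INTO artifacts VALUES (?, ?, ?, ?)",
                     (artifact_id, repo_path, tool, created_at))

def artifacts_for(repo_path: str) -> list[tuple]:
    # Fast lookup by file path instead of scanning the mirror directory on disk.
    return conn.execute(
        "SELECT id, tool, created_at FROM artifacts WHERE repo_path = ? ORDER BY created_at",
        (repo_path,),
    ).fetchall()
```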
🪝 Pre-commit Hooks & CI/CD
Validates programmatic commits
GitHub Actions workflow included
Prevents invalid artifact commits
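As a hedged illustration of what validating a programmatic commit could mean, here is a commit-msg-style check that fails when a commit message lacks audit trailers; the trailer names, and the choice to gate on commit messages rather than artifact files, are assumptions rather than this repo's actual hook:

```python
"""Sketch of a commit-msg hook body; git passes the path of the message file as argv[1]."""
import sys

REQUIRED_TRAILERS = ("Request-Id:", "Justification:", "Approved-By:")  # hypothetical names

def main(msg_path: str) -> int:
    with open(msg_path, encoding="utf-8") as fh:
        message = fh.read()
    missing = [t for t in REQUIRED_TRAILERS if t not in message]
    if missing:
        print(f"commit rejected: missing audit trailer(s) {missing}", file=sys.stderr)
        return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

In practice such a check would presumably only gate commits that identify themselves as agent-generated, so that ordinary human commits are unaffected.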
🛠️ 5 MCP Tools
request_file_edit - Edit files with validation
request_file_create - Create new files
request_file_delete - Delete with strong justification
request_file_move - Move files with tracking
request_file_rename - Rename files safely
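To make the tool surface concrete, here is a minimal sketch of how tools like these could be exposed over MCP. It assumes the official MCP Python SDK's FastMCP helper and invents a deliberately simple payload; the real server's schemas and validation pipeline are far stricter:

```python
from typing import Annotated
from mcp.server.fastmcp import FastMCP
from pydantic import Field

mcp = FastMCP("domin8")  # server name is illustrative

@mcp.tool()
def request_file_delete(
    path: Annotated[str, Field(description="Repo-relative path of the file to delete")],
    what: Annotated[str, Field(min_length=30, description="Precise change being requested")],
    why: Annotated[str, Field(min_length=50, description="Justification for the deletion")],
) -> str:
    """Request deletion of a file; nothing is deleted until validation and human approval pass."""
    # Real pipeline: validate -> human approval -> execute -> persist artifacts -> commit.
    return f"Deletion of {path} queued pending human approval."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```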
Usage & core concepts 🔧
The conceptual workflow this project aims to provide is:
Agent calls a tool provided by this MCP server to make a potentially destructive change of some kind to the codebase, the local file system, or a remote file system. These tools are the ONLY methods by which the agent can perform potentially destructive tasks (by removing all other options), and they all MUST treat the agent as an unwilling participant.
Agent must provide a structured payload along with the tool call which contains information such as:
the precise change(s) requested;
the reason(s) for the requested change(s);
any other information that is relevant to the task at hand AND cannot be generated or obtained deterministically with code.
This payload will be in the form of a Pydantic-validated schema which is heavily constrained in order to force the agent's compliance with the objectives of this tool. It MUST be assumed that the agent is NOT willing to provide the information being sought from it. The schemas MUST take every measure possible to extract meaningful responses from the agent, whether the agent wants to provide that data or not. This should be thought of as a HOSTILE INTERROGATION of the agent by a meticulous, by-the-book bureaucrat.
A deterministic server-side pipeline validates the payload with Pydantic and rejects invalid submissions, returning them to the agent for immediate correction before the request can be executed.
A mandatory human-in-the-loop call presents the agent's response for human approval. Again - this pipeline assumes that the agent is going to try to cheat its way out, and the human should be the final arbiter of whether or not the agent provided meaningful responses. The human should be able to provide feedback to the agent if the attempt is not approved so that the agent can try again, taking the user's feedback into consideration. Subsequent attempts will still require human approval.
Once approved, the pipeline releases the agent, executes the requested action(s), and persists all documents generated throughout the process to a meticulously-organized, tamper-evident, .gitignored store in the local repo, then stages and commits the changes with a data- and context-rich commit message synthesized programmatically from the data generated and collected throughout the process, as sketched below.
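The tail of this workflow (validation, approval, execution, persistence, and the synthesized commit) condenses into a sketch like the one below; the model fields, length floors, store layout, and ask_human callback are all illustrative assumptions rather than the project's real code:

```python
import json
import subprocess
from datetime import datetime
from pathlib import Path
from uuid import uuid4
from pydantic import BaseModel, Field, ValidationError

class EditPayload(BaseModel):
    path: str
    what: str = Field(min_length=30)  # precise change requested (length floors are illustrative)
    why: str = Field(min_length=50)   # justification for the change

def run_pipeline(raw: dict, ask_human, store: Path) -> str:
    # 1. Deterministic validation: reject and bounce back to the agent on failure.
    try:
        payload = EditPayload.model_validate(raw)
    except ValidationError as err:
        return f"REJECTED, correct and resubmit: {err}"

    # 2. Mandatory human-in-the-loop gate (chat/elicitation in the real server).
    decision = ask_human(payload)  # e.g. {"approved": False, "feedback": "..."}
    if not decision["approved"]:
        return f"DENIED: {decision['feedback']}"

    # 3. Execute the change (placeholder), then persist the artifact trail to the store.
    artifact = {"id": str(uuid4()),
                "at": datetime.now().astimezone().isoformat(),
                "payload": payload.model_dump(),
                "decision": decision}
    record = store / payload.path / f"{artifact['id']}.json"
    record.parent.mkdir(parents=True, exist_ok=True)
    record.write_text(json.dumps(artifact, indent=2))

    # 4. Stage and commit the repo change with a synthesized, context-rich message.
    message = f"domin8: edit {payload.path}\n\nWhat: {payload.what}\nWhy: {payload.why}"
    subprocess.run(["git", "add", payload.path], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    return f"EXECUTED and committed ({artifact['id']})"
```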
All artifacts are:
dynamically/automatically assigned semantically-meaningful UUIDs
timestamped automatically using the local system's timezone
populated with any and all other relevant metadata that can be generated or collected programmatically
stored in per-file sub-directories of a repo "mirror directory" (~/.domin8/agent_data/) that has the same directory structure as the repo root, except that every repo file is represented by its own sub-directory where all data about the corresponding file is kept. For example, ~/.domin8/agent_data/README.md/ would contain information about every change ever made to README.md, in a logically-structured, meticulously-organized manner that is populated automatically by deterministic code. (Obviously, this directory MUST itself be excluded from mirroring, to avoid infinite looping.)
versioned and retained for training, QA, post-hoc analysis, optimization, etc.
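A small sketch of the mirror-directory mapping described above; only the ~/.domin8/agent_data/ layout comes from the text, while the helper name and behavior are assumptions:

```python
from pathlib import Path

AGENT_DATA = Path.home() / ".domin8" / "agent_data"  # mirror-directory root described above

def mirror_dir(repo_root: Path, file_path: Path) -> Path:
    """Map a repo file to its per-file sub-directory in the mirror (hypothetical helper),
    e.g. <repo>/README.md -> ~/.domin8/agent_data/README.md/."""
    relative = file_path.resolve().relative_to(repo_root.resolve())
    target = AGENT_DATA / relative  # the file itself becomes a directory of artifacts
    target.mkdir(parents=True, exist_ok=True)
    return target

# Example: everything ever recorded about README.md lives under its own directory.
# mirror_dir(Path("."), Path("README.md")) -> ~/.domin8/agent_data/README.md/
```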