# Nucleus MCP

## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| NUCLEAR_BRAIN_PATH | Yes | The path to your project's .brain folder, where knowledge and state are stored. | (none) |
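As a sketch, an MCP client configuration might register the server and pass this variable as follows; the `nucleus-mcp` command name and the path are placeholders, not confirmed by this page:

```json
{
  "mcpServers": {
    "nucleus": {
      "command": "nucleus-mcp",
      "env": {
        "NUCLEAR_BRAIN_PATH": "/path/to/project/.brain"
      }
    }
  }
}
```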
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": true}` |
| prompts | `{}` |
| resources | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| nucleus_governance | Enforce file integrity, security posture, and automated verification loops for the Nucleus Agent OS. Use this tool when you need to lock files against modification, switch security modes, or run auto-fix cycles. Do NOT use for task management (use nucleus_tasks), session state (use nucleus_sessions), or memory storage (use nucleus_engrams). Actions: 'lock' sets an immutable flag on a file preventing modification. 'unlock' removes that flag (destructive: re-enables writes). 'set_mode' switches between 'red' (restricted, blocks dangerous ops) and 'blue' (permissive) security modes. 'auto_fix_loop' runs a verify-diagnose-fix-retry cycle: it executes your verification_command, and if it fails, attempts to fix the file, then retries until the command passes or max retries exceeded. 'delete_file' permanently removes a file (destructive, irreversible). 'watch' monitors a file path and returns changes detected within the duration window. 'curl' proxies HTTP requests through Nucleus egress controls. 'pip_install' installs Python packages with governance audit logging. 'status' returns current security mode and lock state. 'list_directory' returns directory contents. Side effects: lock/unlock modify filesystem extended attributes. delete_file removes data permanently. Prerequisites: .brain directory must exist. Returns JSON with {success: boolean, data: object}. Example: {action: 'auto_fix_loop', params: {file_path: 'src/app.py', verification_command: 'python -m py_compile src/app.py'}} returns {success: true, data: {iterations: 2, fixed: true}}. |
| nucleus_engrams | Store, query, and search persistent memory (engrams) that survives across AI sessions, plus health monitoring and context graph visualization. Use this tool when you need to remember something for future sessions, recall past decisions, search the knowledge base, or check system health. Do NOT use for task tracking (use nucleus_tasks), session lifecycle (use nucleus_sessions), or agent coordination (use nucleus_agents). Engrams are the fundamental memory unit — each has content, optional tags for categorization, source attribution, and arbitrary metadata. Actions: 'write_engram' persists new knowledge to .brain/engrams/ (side effect: creates a JSONL entry). 'query_engrams' retrieves engrams filtered by tag, context, or intensity. 'search_engrams' performs full-text search across all stored knowledge. 'health' checks brain directory integrity and returns file counts and sizes. 'version' returns Nucleus version, Python version, and platform info. 'audit_log' shows the decision audit trail with timestamps. 'morning_brief' generates a daily status report with task summaries, session history, and recommendations. 'governance_status' shows current security mode and lock state. 'context_graph' builds a relationship map between related engrams. 'engram_neighbors' traverses the graph from a specific engram. 'pulse_and_polish' analyzes engram quality and suggests improvements. 'fusion_reactor' cross-references multiple engrams to generate insights. 'billing_summary' shows resource usage. All read operations are non-destructive. Prerequisites: .brain directory must exist. Returns JSON with {success: boolean, data: object}. Example: {action: 'write_engram', params: {content: 'Auth uses JWT with 24h expiry', tags: ['architecture', 'auth']}} returns {success: true, data: {key: 'engram_a1b2c3', stored: true}}. |
| nucleus_tasks | Manage a priority task queue with escalation, human-in-the-loop (HITL) gates, and cognitive depth tracking to prevent context-switch overhead and rabbit-holing. Use this tool when you need to create, assign, update, or track work items. Do NOT use for persistent knowledge storage (use nucleus_engrams), session management (use nucleus_sessions), or multi-agent coordination (use nucleus_agents). Actions: 'add' creates a new task with a priority level (critical/high/medium/low) and optional tags. 'list' shows tasks filtered by status — returns an array of task objects. 'get_next' returns the highest-priority unclaimed task. 'claim' assigns a task to the current agent (side effect: sets status to in_progress). 'update' changes task status (pending/in_progress/done/blocked) with optional notes. 'escalate' flags a task for human review with a reason. 'import_jsonl' bulk-imports tasks from a JSONL file. 'depth_push' increments cognitive nesting depth (tracks how deep into subtasks you've gone). 'depth_pop' decrements it. 'depth_show' returns current depth and max. 'depth_reset' clears depth to zero. 'depth_set_max' sets the maximum allowed depth — system warns when exceeded. 'depth_map' visualizes the full depth tree. 'context_switch' saves current task state and loads another task's context. All mutations write to .brain/tasks/. Prerequisites: .brain directory. Returns JSON with {success: boolean, data: object}. Example: {action: 'add', params: {title: 'Fix auth bug', priority: 'high', tags: ['backend']}} returns {success: true, data: {task_id: 'task_x1y2', created: true}}. |
| nucleus_sessions | Manage session lifecycles with save/resume, structured event logging, key-value state persistence, and named checkpoints for rollback. Use this tool to maintain continuity across AI conversations, track what happened during a work session, and hand off context between sessions or agents. Do NOT use for persistent knowledge (use nucleus_engrams), task tracking (use nucleus_tasks), or multi-agent sync (use nucleus_sync). Actions: 'start' begins a new session with a stated goal and optional tags. 'save' persists current session state to .brain/sessions/. 'resume' restores a previous session with full context including events, state, and active tasks. 'end' closes the active session and records duration. 'emit_event' appends a structured event to the session log (side effect: writes to events.jsonl). 'read_events' retrieves event history with optional filters. 'get_state' reads the session's key-value state. 'update_state' sets a key-value pair. 'checkpoint' creates a named snapshot of current state for later rollback. 'resume_checkpoint' restores state from a checkpoint. 'handoff_summary' generates context for transitioning to a new session or agent. 'archive_resolved' removes completed sessions (destructive: deletes session files). 'garbage_collect' removes stale sessions older than threshold (destructive). Prerequisites: .brain directory. Returns JSON with {success: boolean, data: object}. Example: {action: 'start', params: {goal: 'Fix authentication bug', tags: ['backend', 'auth']}} returns {success: true, data: {session_id: 'sess_abc123', started: true}}. |
| nucleus_sync | Coordinate state across multiple AI agents, store and retrieve named artifacts, manage trigger-based automation, and orchestrate deployments. Use this tool when multiple agents need to share data, when you need to persist artifacts for cross-session use, or when managing deployment workflows. Do NOT use for persistent memory (use nucleus_engrams), session state (use nucleus_sessions), or task assignment (use nucleus_tasks). Actions: 'identify_agent' registers the current agent's identity in the brain. 'sync_status' shows sync state. 'sync_now' forces immediate state replication between brains (may overwrite remote data). 'write_artifact' stores a named data blob in .brain/artifacts/ for cross-session sharing (side effect: creates file). 'read_artifact' retrieves a stored artifact. 'list_artifacts' shows all stored artifacts. 'trigger_agent' dispatches an event to another registered agent. 'get_triggers'/'evaluate_triggers' manage automated trigger rules. 'start_deploy_poll' begins monitoring a deployment service for readiness. 'check_deploy' queries deployment status. 'complete_deploy' marks deployment as finished. 'smoke_test' validates a deployed service endpoint by hitting its URL. 'shared_read'/'shared_write'/'shared_list' manage a shared key-value store visible to all agents. Prerequisites: .brain directory. Sync operations require at least two configured brains. Deploy actions require network access. Returns JSON with {success: boolean, data: object}. Example: {action: 'write_artifact', params: {name: 'api_schema', content: '{...}', mime_type: 'application/json'}} returns {success: true, data: {stored: true, path: '.brain/artifacts/api_schema'}}. |
| nucleus_features | Track features through their lifecycle, generate cryptographic execution proofs for audit compliance, and mount external MCP servers as composable sub-tools. Use this tool when you need to register a feature, verify code execution, or integrate another MCP server. Do NOT use for task tracking (use nucleus_tasks), memory storage (use nucleus_engrams), or agent spawning (use nucleus_agents). Actions: 'add' creates a feature record with name, description, and initial status. 'update' changes feature status through its lifecycle (proposed/in_progress/done/cancelled). 'validate' marks a feature as verified with evidence. 'list' shows all features. 'get' retrieves one feature by ID. 'search' finds features by keyword. 'generate_proof' creates a cryptographic Ed25519-signed receipt of a code execution for audit compliance (side effect: writes to .brain/proofs/). 'get_proof'/'list_proofs' retrieve stored proofs. 'mount_server' connects an external MCP server as a sub-tool (side effect: spawns a child process). 'discover_tools' lists tools available on a mounted server. 'invoke_tool' calls a tool on a mounted server and returns its result. 'traverse_mount' navigates the mount hierarchy. 'thanos_snap'/'unmount_server' disconnect mounted servers (destructive: kills child process, removes mount config). Prerequisites: .brain directory. Mounting requires the external server command to be installed locally. Returns JSON with {success: boolean, data: object}. Example: {action: 'add', params: {name: 'JWT Auth', description: 'Token-based authentication', status: 'in_progress'}} returns {success: true, data: {feature_id: 'feat_xyz', created: true}}. |
| nucleus_federation | Coordinate multiple Nucleus brain instances across distributed environments by joining federations, syncing state between peers, and routing requests to the appropriate brain. Use this tool when multiple AI agents on different machines or projects need to share memory, synchronize decisions, or coordinate work across separate .brain directories. Do NOT use for single-brain agent coordination (use nucleus_agents), artifact sharing within one brain (use nucleus_sync), or session handoffs (use nucleus_sessions). Actions: 'status' returns current federation membership and connection state (read-only). 'join' connects the current brain to a named federation (side effect: writes federation config to .brain/federation/). 'leave' disconnects from a federation. 'peers' lists all connected brains with their last-sync timestamps. 'sync' replicates state between brains — 'delta' mode merges only changes (safe), 'full' mode overwrites the target entirely (destructive). 'route' forwards a tool request to a specific peer brain and returns its response. 'health' checks connectivity and latency to all peers. Prerequisites: .brain directory. Federation requires filesystem access for local peers or network access for remote peers. Returns JSON with {success: boolean, data: object}. Example: {action: 'join', params: {federation_id: 'team-alpha', brain_path: '/shared/project/.brain'}} returns {success: true, data: {joined: true, peer_count: 3}}. |
| nucleus_orchestration | Get strategic awareness of all active work through satellite overviews, commitment tracking, open loop management, pattern detection, and data export. Use this tool when you need a high-level view of project state, want to track promises made during sessions, identify recurring patterns, or export data. Do NOT use for individual task CRUD (use nucleus_tasks), session management (use nucleus_sessions), or slot-based sprint execution (use nucleus_slots). Actions: 'satellite' returns a comprehensive bird's-eye view of tasks, sessions, commitments, health scores, and frontier status — the best starting point for understanding current state. 'scan_commitments' extracts promises and action items from session transcripts. 'list_commitments' shows all tracked commitments. 'close_commitment' marks a commitment as fulfilled with a resolution note. 'commitment_health' scores how well commitments are being met. 'open_loops' shows unfinished work items that need closure. 'add_loop' registers something that needs follow-up. 'patterns' detects recurring themes across sessions and tasks. 'metrics' shows system-wide statistics (tool usage, event counts, memory growth). 'export' dumps data in json/csv/markdown format. 'weekly_challenge' generates a focused challenge based on recent activity. 'archive_stale' removes commitments older than N days (destructive: deletes records). Prerequisites: .brain directory with session history for best results. Returns JSON with {success: boolean, data: object}. Example: {action: 'satellite'} returns {success: true, data: {tasks: {total: 12, in_progress: 3}, sessions: {active: 'sess_abc'}, commitments: {open: 5, overdue: 1}}}. |
| nucleus_telemetry | Configure LLM model tiers, record interaction telemetry for training signal generation, track costs, and manage safety controls including kill switches and notification pausing. Use this tool when you need to set which AI models are used for different task types, log usage data, check cost dashboards, or control emergency stops. Do NOT use for persistent memory (use nucleus_engrams), task management (use nucleus_tasks), or agent lifecycle (use nucleus_agents). Actions: 'set_llm_tier' configures which model (opus/sonnet/haiku) to use for specific task contexts. 'get_llm_status' returns current tier configuration. 'record_interaction' logs a tool invocation with token counts and latency for training signal generation (side effect: appends to telemetry log). 'value_ratio' calculates cost-effectiveness metrics across recent interactions. 'check_kill_switch' queries whether all operations should halt — returns boolean. 'pause_notifications' temporarily stops PEFS alert delivery. 'resume_notifications' re-enables alerts. 'record_feedback' captures human ratings (1-5 scale) on AI outputs for DPO training pairs. 'mark_high_impact' flags an interaction for human review. 'agent_cost_dashboard' shows per-agent token spending and cost breakdown. 'request_handoff' initiates a work transfer between agents. 'dispatch_metrics' shows tool dispatch statistics. Prerequisites: .brain directory. Kill switch state persists in .brain/governance/kill_switch.json. Returns JSON with {success: boolean, data: object}. Example: {action: 'record_feedback', params: {interaction_id: 'int_abc', rating: 5, comment: 'Perfect fix'}} returns {success: true, data: {recorded: true, dpo_pair_created: true}}. |
| nucleus_slots | Structure focused work into time-boxed slots, run automated sprints that claim and execute tasks, and manage multi-sprint missions with automatic sequencing. Use this tool when you want to organize execution into focused work periods, automate task execution cycles, or track progress toward multi-sprint goals. Do NOT use for individual task CRUD (use nucleus_tasks), session lifecycle (use nucleus_sessions), or strategic overview (use nucleus_orchestration). Actions: 'orchestrate' assigns tasks to time-boxed slots based on a strategy (fifo/priority/balanced). 'autopilot_sprint' runs an automated 25-minute pomodoro-style work cycle — it claims the next task, executes it, records results, and moves to the next until time expires. 'start_mission' creates a multi-sprint goal with automatic sprint sequencing. 'status_dashboard' shows all active slots, their assigned tasks, and progress. 'mission_status' shows progress toward a mission goal. 'slot_complete' marks a slot as finished with a result summary. 'slot_exhaust' marks a slot as time-expired without completion. 'force_assign' overrides automatic slot assignment (destructive: replaces current slot occupant). 'halt_sprint' pauses an active autopilot sprint. 'resume_sprint' continues a halted sprint. Prerequisites: .brain directory with tasks in the queue. Sprints require claimable tasks to be available. Returns JSON with {success: boolean, data: object}. Example: {action: 'autopilot_sprint', params: {duration_minutes: 25, focus_tags: ['backend']}} returns {success: true, data: {sprint_id: 'sprint_001', tasks_completed: 3, duration: '24m'}}. |
| nucleus_infra | Monitor infrastructure health, manage Google Cloud Platform services, track file changes across your project, and generate strategic planning reports. Use this tool when you need operational awareness of your development environment, GCP service status, or strategic recommendations. Do NOT use for code-level tasks (use nucleus_tasks), memory (use nucleus_engrams), or deployment orchestration (use nucleus_sync with deploy actions). Actions: 'file_changes' lists recently modified files in the project directory with timestamps and sizes (read-only, useful for detecting unexpected modifications). 'gcloud_status' checks Google Cloud Platform availability and incident status. 'gcloud_services' lists all enabled GCP services for a project (requires project_id). 'list_services' shows locally running services detected on common ports. 'status_report' generates a formatted markdown or JSON summary of brain health, task status, session state, and frontier metrics. 'synthesize_strategy' analyzes accumulated data (engrams, patterns, metrics) and recommends strategic actions. 'optimize_workflow' suggests process improvements for a named area. 'manage_strategy' reads and writes strategy documents to .brain/strategy/ (side effect: creates or modifies files). 'update_roadmap' modifies roadmap items in .brain/roadmap.json (side effect: modifies file). 'scan_marketing_log' analyzes marketing-related log entries. Prerequisites: .brain directory. GCloud actions require 'gcloud' CLI installed and authenticated via 'gcloud auth login'. Returns JSON with {success: boolean, data: object}. Example: {action: 'file_changes', params: {since: '24h', path: 'src/'}} returns {success: true, data: {changes: [{path: 'src/app.py', modified: '2026-04-04T10:00:00Z', size: 1234}]}}. |
| nucleus_agents | Manage multi-agent lifecycles including spawning specialized sub-agents, running automated code review and repair, orchestrating agent swarms for complex tasks, searching persistent memory, ingesting tasks from external sources, and viewing real-time dashboards. Use this tool when you need to create new agents, review or fix code, coordinate parallel work, or query the knowledge base. Do NOT use for individual task CRUD (use nucleus_tasks), session management (use nucleus_sessions), or cross-brain sync (use nucleus_federation). Actions: 'spawn_agent' creates a sub-agent with a specific role (reviewer/implementer/researcher) and goal (side effect: may start a new process). 'critique_code' runs automated code review on a file, returning issues and suggestions. 'fix_code' attempts automated repair of a described issue in a file. 'apply_critique' applies review feedback. 'orchestrate_swarm' coordinates multiple agents working on a complex task in parallel. 'search_memory' queries the persistent engram store by keyword (read-only). 'read_memory' retrieves a specific engram by key. 'ingest_tasks' imports tasks from external sources like GitHub issues, CSV, or JSONL files (side effect: creates tasks). 'rollback_ingestion' undoes a previous import (destructive: deletes imported tasks). 'ingestion_stats' shows import history. 'dashboard' shows live system metrics including agent count, task throughput, and memory usage. 'snapshot_dashboard'/'list_dashboard_snapshots' manage dashboard snapshots. 'get_alerts'/'set_alert_threshold' configure monitoring alerts. 'respond_to_consent'/'list_pending_consents' handle human-in-the-loop approval flows for sensitive operations. Prerequisites: .brain directory. Returns JSON with {success: boolean, data: object}. Example: {action: 'search_memory', params: {query: 'authentication', limit: 5}} returns {success: true, data: {results: [{key: 'engram_x', content: 'Auth uses JWT...', score: 0.95}]}}. |
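Every tool above shares one calling convention: an `action` string plus an optional `params` object, with results wrapped in a `{success, data}` envelope. A minimal Python sketch of that convention (the helper names here are hypothetical; in practice an MCP client sends the arguments object through its tool-call API):

```python
from typing import Any, Optional


def tool_args(action: str, params: Optional[dict] = None) -> dict:
    """Build the arguments object for a Nucleus tool call (hypothetical helper)."""
    return {"action": action, "params": params or {}}


def unwrap(envelope: dict) -> Any:
    """Return the data payload from the documented {success, data} envelope."""
    if not envelope.get("success"):
        raise RuntimeError(f"tool call failed: {envelope}")
    return envelope["data"]


# Example from the nucleus_tasks description: add a high-priority task.
args = tool_args("add", {"title": "Fix auth bug", "priority": "high", "tags": ["backend"]})

# A successful response unwraps to the data object.
data = unwrap({"success": True, "data": {"task_id": "task_x1y2", "created": True}})
```

Checking `success` before touching `data` matters because destructive actions (`delete_file`, `thanos_snap`, `archive_resolved`) report failure through the same envelope rather than raising.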
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| activate_synthesizer | Activate Synthesizer agent to orchestrate the current sprint. |
| start_sprint | Initialize a new sprint with the given goal. |
| cold_start | Get instant context when starting a new session. Call this first. |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| Brain State | Live state.json content — current session, active tasks, config |
| Brain Events | Recent events from the event ledger with timestamps |
| Trigger Definitions | Automation trigger rules and their evaluation state |
| Depth Tracking | Current cognitive depth state — shows nesting level in task tree |
| Cold Start Context | Full context for new sessions — read this first in any new conversation |
| Change Ledger | Monotonic version tracker — poll to detect staleness across all resources |
| Decision Traces | Recent DecisionMade traces from the DSoR decision ledger |
| Three Frontiers Health | GROUND/ALIGN/COMPOUND status — verification pass rates, alignment verdicts, delta counts |
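The Change Ledger resource enables cheap staleness polling: a client caches its resource reads and refetches only when the ledger's monotonic version advances. A client-side sketch of that pattern, assuming only what the table states (the cache class and the `fetch_all` callable are illustrative, not part of the server):

```python
from typing import Callable, Dict


class ResourceCache:
    """Client-side cache keyed on the Change Ledger's monotonic version.

    Illustrative only: `fetch_all` stands in for re-reading the resources
    above through an MCP client, and the ledger version is assumed to be an
    integer that only increases.
    """

    def __init__(self) -> None:
        self.version = -1
        self.data: Dict[str, object] = {}

    def refresh(self, ledger_version: int,
                fetch_all: Callable[[], Dict[str, object]]) -> bool:
        """Refetch when the ledger has advanced; return True if a refetch ran."""
        if ledger_version > self.version:
            self.data = fetch_all()
            self.version = ledger_version
            return True
        return False
```

Polling the small Change Ledger value instead of re-reading every resource keeps a client fresh without repeatedly pulling the full state, events, and context payloads.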
## MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/eidetic-works/nucleus-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.