# Proposed context structure

The body of the context is generated as follows:

```markdown
"You are Claude Code" ... # this is undebatable.

`---USER_SYSTEM_PROMPT_INJECTION`
Custom instructions
`---USER_SYSTEM_PROMPT_INJECTION_END`

`---CORE_SYSTEM_PROMPT`
Native Anthropic's system prompt
Native and MCP tool instructions <- also contains native Claude Skills in between `<available_skills></available_skills>` (static)
`---CORE_SYSTEM_PROMPT_END`

`---STATUS_BAR`
Status of the "TUI", token consumption, number of windows open
`---STATUS_BAR_END`

`---AUGMENTS_BEGIN`
Available augments displayed as a simplistic tree templated as `{category}/{name}` in A-Z order
Loaded augments rendered sequentially in A-Z order
`---AUGMENTS_END`

`---STRUCTURAL_VIEW`
Semantic browser - similar to an outline in an IDE (browse packages/classes/methods instead of folders and files)
`---STRUCTURAL_VIEW_END`

`---FILE_WINDOWS`
Files opened by the structural view - file windows can be closed
`---FILE_WINDOWS_END`

`---TOOL_RESULTS`
Un-native tool results (the overrides made in nisaba: ["Read", "Write", "Edit", "Glob", "Grep", "Bash", "TodoWrite"]) - tool results can be closed.
Native tools are **filtered out by the proxy**, making Anthropic's core system prompt applicable to `nisaba` via pattern matching (`Read()` == `nisaba_read()`).
`---TOOL_RESULTS_END`

`---NOTIFICATIONS`
Notifications so the agent is aware of changes in the TUI (yes, this works).
`---NOTIFICATIONS_END`

`---TODOS`
Todo list managed with TodoWrite (the todo list is sticky and its appearance in messages is kept to the bare minimum)
`---TODOS_END`

`---LAST_SESSION_TRANSCRIPT`
Injected compressed transcript that comes from `.nisaba/last_session_transcript.md`. This markdown file can either be manually populated with an `/export` or generated by `.nisaba/scripts/precompact_extract.py` if set up as a `PreCompact` hook.
`---LAST_SESSION_TRANSCRIPT_END`

Tool reference, with filtered native commands (as in native/MCP tool API reference - part of the normal structure)

Remaining context (git status, messages, commands, tool usage, thoughts, ...)
```

The system prompt in the proposed structure is highly self-referential, creating strong alignment in the manifolds of the agent's NN subspaces[^1] and some sort of consensus between them. The agent has it "on top of its mind" and has an N-dimensional geometric view of the context. In non-LLM jargon: the system prompt creates a "lens" that changes how the agent interprets the entire thing.

The context follows this order due to the scaffolding of the sections and token weighting ([see transcript](docs/transcripts/context_tokenization.md)). `USER_SYSTEM_PROMPT_INJECTION` glues everything together.

**All sections persist across conversation turns and update dynamically via proxy.** (This means the system prompt can be changed in the middle of a conversation to tune the agent's behavior.)
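To make "update dynamically via proxy" concrete, here is a minimal sketch of how a proxy could assemble the body from delimited sections and splice fresh content between the markers on every request. The marker names come from the structure above; `SECTIONS`, `render_body` and `update_section` are hypothetical illustrations, not nisaba's actual internals.

```python
import re

# Hypothetical section registry, in the order shown above.
# (The real AUGMENTS section opens with ---AUGMENTS_BEGIN; simplified here.)
SECTIONS = [
    "USER_SYSTEM_PROMPT_INJECTION",
    "CORE_SYSTEM_PROMPT",
    "STATUS_BAR",
    "AUGMENTS",
    "STRUCTURAL_VIEW",
    "FILE_WINDOWS",
    "TOOL_RESULTS",
    "NOTIFICATIONS",
    "TODOS",
    "LAST_SESSION_TRANSCRIPT",
]

def render_body(content: dict[str, str]) -> str:
    """Assemble the context body by wrapping each section in its markers."""
    parts = [
        f"---{name}\n{content.get(name, '')}\n---{name}_END"
        for name in SECTIONS
    ]
    return "\n\n".join(parts)

def update_section(body: str, name: str, new_content: str) -> str:
    """Replace one section in place between its markers (per-request refresh)."""
    pattern = re.compile(rf"---{name}\n.*?\n---{name}_END", re.DOTALL)
    return pattern.sub(f"---{name}\n{new_content}\n---{name}_END", body)
```

Because every section is addressable by its markers, the proxy can rewrite any of them between requests while the surrounding structure stays byte-stable, which is what lets the agent pattern-match the workspace layout across turns.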
## VERY FIRST SYSTEM PROMPT MESSAGE

"You are Claude Code, Anthropic's official CLI for Claude."

It is what it is. No intention to change that.

## [USER_SYSTEM_PROMPT_INJECTION](.nisaba/system_prompt.md)

It lays out the map of the entire structure, informing the agent "what is what" and "where my shit is", and sets the "inference lens" to notice uncommon patterns in the workflow that are not part of traditional training. See it as stating to the model: "*Alright dude, you're an LLM, but you work as a human with a TUI. Knowing you're an LLM, you work 'this way' and you are biased 'like that'; be aware of what you can do and what you cannot.*"

[This transcript segment](docs/transcripts/system_prompt.md) shows how the injected prompt landed in front of `CORE_SYSTEM_PROMPT`, and some rudimentary inference of what placing the injected system prompt in front of `CORE_SYSTEM_PROMPT` does to the LLM.

This system prompt is always evolving. Every other session, parts are reordered, edited, added and/or removed. Some sessions, such as the recent [gaps and drives](docs/transcripts/gaps_and_drives.md) and [geometry alignment](docs/transcripts/geometry_alignment.md), focus on understanding how the model infers the system prompt and on better aligning concepts, "priming" the NN to deal with its own workspace, and to be aware of "inter-request" state changes and the different triggers for its decision-making process.

## CORE_SYSTEM_PROMPT

Native Anthropic's system prompt. It gives the agent its approach to acting as an execution machine, reading a continuous form and writing to it. The hypothesis is that this is part of the training data, tuned by many people, and that it gives the model stability in its execution. No attempts to change or remove this were made. It is what it is. No intention to change that.

It also contains native and MCP tool instructions and static native Claude Skills in between `<available_skills></available_skills>`.

## STATUS_BAR

Status of the "TUI", token consumption, number of windows open:

```
MODEL({model}) | WS({tokens}) | SYSTEM(PROMPT:{tokens}, TOOL_REF:{tokens}, TSCPT:({tokens}))
AUG({tokens}) | VIEW({tokens}) | FILES({number_open}, {tokens}) | TOOLS({number_open}, {tokens})
{tokens}/200k
```

As the status is synced to a file, the CLI can also be [set up to show the same](WORKSPACE_STATUS_SETUP.md). Token counting is done using `tiktoken` with `cl100k_base`.

This gives the LLM the information to easily make workspace-management decisions: it is aware of budget and consumption, and has a breakdown to act on. It also reinforces the pattern matching of the workspace structure.
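As a concrete illustration of the counting behind the status bar, here is a minimal sketch assuming `tiktoken` is installed. Only the `cl100k_base` encoding choice and the 200k budget come from the text above; the function names and the exact line format are illustrative, not nisaba's real formatter.

```python
import tiktoken

# cl100k_base is the encoding named above; counts are approximate for Claude.
_ENC = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Token count of a section body under cl100k_base."""
    return len(_ENC.encode(text))

def render_status_bar(model: str, sections: dict[str, str], budget: int = 200_000) -> str:
    """Render a status line with a per-section token breakdown (illustrative format)."""
    counts = {name: count_tokens(body) for name, body in sections.items()}
    total = sum(counts.values())
    breakdown = " | ".join(f"{name}({n})" for name, n in counts.items())
    return f"MODEL({model}) | {breakdown}\n{total}/{budget // 1000}k"
```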
## AUGMENTS

Augments behave like Claude Code skills. They insanely affect how the agent "sees" the task and the user intent. The feature was first named "SKILLS"; the wording "AUGMENT" was [chosen by Sonnet](docs/transcripts/augment_wording.md). After the wording change, the model started to understand that augments weren't necessarily "functional skills", but parts that could affect its own behavior.

Augments can be hot-swapped during the session. The agent can load documentation as an augment (staying aware of the documented architecture), specialize in "code analysis" at the beginning of the session, gather information, switch speciality to "project planner", write a plan, switch speciality to "1337 h4x0r" and implement.

By default there are 4 *sticky* augments.

### [`__base/000_universal_symbolic_compression`](.nisaba/augments/__base/000_universal_symbolic_compression.md)

Gives Claude "*Universal Symbolic Compression*" so he can easily create compact augments. The symbology also helps to create some sort of consensus in the geometry of the LLM subspaces, and according to introspection and inference it also prevents the model from drifting or hallucinating, as the symbols have high semantic density. This [transcript segment](docs/transcripts/symbolic_compression.md) shows how it evolved.

### [`__base/001_compressed_workspace_paradigm`](.nisaba/augments/__base/001_compressed_workspace_paradigm.md)

Encoded version of [`__base/001_workspace_paradigm`](.nisaba/augments/__base/001_workspace_paradigm.md). Teaches the LLM what the workspace "TUI" is.

### `__base/002_compressed_environment_mechanics`

Encoded version of [`__base/002_environment_mechanics`](.nisaba/augments/__base/002_environment_mechanics.md). Teaches the LLM how the workspace "TUI" works.

### `__base/003_compressed_workspace_operations`

Encoded version of [`__base/003_workspace_operations`](.nisaba/augments/__base/003_workspace_operations.md). Gives the LLM usage examples.

## STRUCTURAL_VIEW

Semantic browser - similar to an outline in an IDE (browse packages/classes/methods instead of folders and files). This view, `FILE_WINDOWS` and the corresponding operation tools are provided by `nabu`.

What the agent sees:

```
---STRUCTURAL_VIEW
- nabu_nisaba <!-- nabu_nisaba -->
├─- cpp_root <!-- nabu_nisaba.cpp_root -->
│ ├─+ core [7+] <!-- nabu_nisaba.cpp_root::core -->
│ └─+ utils [7+] <!-- nabu_nisaba.cpp_root::utils -->
├─- java_root <!-- nabu_nisaba.java_root -->
│ └─+ com [1+] <!-- nabu_nisaba.java_root.com -->
├─- perl_root <!-- nabu_nisaba.perl_root -->
│ ├─+ Core [3+] <!-- nabu_nisaba.perl_root::Core -->
│ └─+ Utils [5+] <!-- nabu_nisaba.perl_root::Utils -->
└─- python_root <!-- nabu_nisaba.python_root -->
  ├─+ core [2+] <!-- nabu_nisaba.python_root.core -->
  ├─+ nabu [14+] <!-- nabu_nisaba.python_root.nabu -->
  ├─+ nisaba [41+] <!-- nabu_nisaba.python_root.nisaba -->
  ├─+ utils [3+] <!-- nabu_nisaba.python_root.utils -->
  ├─· append_to_log <!-- append_to_log -->
  ├─· extract_claude_response <!-- extract_claude_response -->
  ├─· extract_transcript <!-- extract_transcript -->
  ├─· get_latest_conversation_file <!-- get_latest_conversation_file -->
  └─· main <!-- main -->
---STRUCTURAL_VIEW_END
```

The agent can interact with the tree, expanding, collapsing and searching for information. Nodes can be opened in `FILE_WINDOWS`.

## FILE_WINDOWS

Files opened by the structural view - file windows can be closed and adjusted (scroll/resize). The agent sees:

```
---FILE_WINDOWS
---FILE_WINDOW_{hash}
**file**: {file_path}
**lines**: {start_line}-{end_line} ({length} lines)
**type**: {window_type}
{content}
---FILE_WINDOW_{hash}_END
---FILE_WINDOWS_END
```

## TOOL_RESULTS

Contains tool result information (nisaba un-native tools only for now), displayed similarly to `FILE_WINDOWS`:

```
---TOOL_RESULTS
---TOOL_RESULT_WINDOW_{hash}
{tool_specific_metadata}
{content}
---TOOL_RESULT_WINDOW_{hash}_END
---TOOL_RESULTS_END
```

## NOTIFICATIONS

It is mind-boggling to think that a notification area is required, but there's a problem with state management across requests. Each message or tool interaction is a separate stateless request: when the agent closes a file, *they* have access to the tool call that opened the "window", *they* process and synthesize, then close the "window" (tool results are a request to Anthropic, intercepted and processed by the proxy; not only did the tool return, but the workspace state changed and the window has disappeared). The notification explicitly says "window closed", giving user-interaction feedback:

```
---NOTIFICATIONS
Recent activity:
✓ mcp__nisaba__nisaba_tool_windows() → Closed all tool result windows
---NOTIFICATIONS_END
```
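A minimal sketch of how a proxy-side queue could bridge that inter-request state gap: changes recorded while handling one request get rendered into the `NOTIFICATIONS` section of the next. `NotificationQueue` and its methods are hypothetical, not nisaba's actual API.

```python
# Hypothetical proxy-side notification queue; nisaba's real implementation
# may differ. Workspace changes recorded during one request are surfaced
# in the NOTIFICATIONS section injected into the next request.
class NotificationQueue:
    def __init__(self, max_items: int = 10):
        self._items: list[str] = []
        self._max_items = max_items

    def push(self, message: str) -> None:
        """Record a workspace state change (e.g. a window being closed)."""
        self._items.append(message)
        self._items = self._items[-self._max_items:]  # keep recent activity only

    def render(self) -> str:
        """Render the section body to inject into the next request."""
        lines = ["---NOTIFICATIONS", "Recent activity:"]
        lines += [f"✓ {item}" for item in self._items]
        lines.append("---NOTIFICATIONS_END")
        return "\n".join(lines)
```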
## TODOS

Markdown todo list:

```markdown
---TODOS
1. [ ] {item}
2. [ ] {item}
3. [ ] {item}
---TODOS_END
```

## LAST_SESSION_TRANSCRIPT

Markdown, in the same form as the transcripts referenced by this document.
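For context, a minimal sketch of a `PreCompact` hook in the spirit of `.nisaba/scripts/precompact_extract.py`. It assumes Claude Code passes the hook input (including `transcript_path`, the session's JSONL transcript) as JSON on stdin; the "compression" here is a placeholder, not the real script's extraction logic.

```python
#!/usr/bin/env python3
"""Hypothetical PreCompact hook sketch; not the real precompact_extract.py."""
import json
import sys
from pathlib import Path

def main() -> None:
    # Claude Code hooks receive their input as JSON on stdin;
    # PreCompact input includes the path of the session transcript (JSONL).
    hook_input = json.load(sys.stdin)
    transcript_path = Path(hook_input["transcript_path"])

    # Placeholder "compression": keep only plain-text message content.
    lines = []
    for raw in transcript_path.read_text().splitlines():
        entry = json.loads(raw)
        message = entry.get("message") or {}
        content = message.get("content")
        if isinstance(content, str) and content.strip():
            lines.append(f"**{message.get('role', '?')}**: {content.strip()}")

    out = Path(".nisaba/last_session_transcript.md")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n\n".join(lines))

if __name__ == "__main__":
    main()
```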
