
Server Configuration

Environment variables used to configure the server (all optional).

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| PYTHONUTF8 | No | Set to `1` on Windows to avoid encoding errors | 0 |
| STENO_TOP_K | No | Number of briefs returned by stenographer_compact_guard | 5 |
| STENO_TURN_LIMIT | No | Number of unprocessed turns before compression triggers | 8 |
| OMEGA_STENOGRAPHER_DIR | No | Override the data directory for the SQLite database | ~/.omega-stenographer/ |
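These variables can be set in the shell before launching the server. The sketch below shows non-default values for each; the launch command at the end is a placeholder assumption, not the documented entry point:

```shell
# Windows only: opt Python into UTF-8 mode to avoid encoding errors
export PYTHONUTF8=1

# Return 10 briefs from stenographer_compact_guard instead of the default 5
export STENO_TOP_K=10

# Compress after 12 unprocessed turns instead of the default 8
export STENO_TURN_LIMIT=12

# Keep the SQLite database in a project-local directory
export OMEGA_STENOGRAPHER_DIR="$PWD/.steno-data"

# Placeholder launch command; use your install's actual entry point
# python -m omega_stenographer
```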

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tools | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |

Tools

Functions the LLM can invoke to take actions

| Name | Tier | Description |
|------|------|-------------|
| stenographer_ingest_exchange | A | Ingest a conversation turn. Call after every significant user or assistant message. |
| stenographer_get_brief | B | Get the live running notes document. |
| stenographer_compact_guard | B | Get a compressed briefing plus the top-k relevant fragments. Call when context-window pressure hits. |
| stenographer_mark_milestone | C | Flag a critical decision for tier-A priority. |
| stenographer_query_history | B | FTS5 keyword search over all ingested exchanges. |
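As a sketch of how an MCP client invokes one of these tools, the JSON-RPC `tools/call` request below targets stenographer_ingest_exchange. The argument names (`role`, `content`) are assumptions for illustration, not a documented schema:

```shell
# Hypothetical tools/call payload; "role"/"content" argument names are assumed
REQUEST='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"stenographer_ingest_exchange","arguments":{"role":"user","content":"Switched storage to SQLite FTS5."}}}'

# Sanity-check that the payload is well-formed JSON before piping it to the
# server's stdin (MCP stdio transport)
echo "$REQUEST" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid")'
```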

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

| Name | Description |
|------|-------------|
| Running Notes | Live running notes document from the current session |
| Compaction Guard | Auto-compressed briefing + relevant RAG fragments |
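Resources are fetched the same way with a JSON-RPC `resources/read` request. The `steno://` URI below is invented for illustration; a client would first enumerate real URIs with a `resources/list` call:

```shell
# Hypothetical resources/read payload; the steno:// URI is an assumption
READ_REQUEST='{"jsonrpc":"2.0","id":2,"method":"resources/read","params":{"uri":"steno://running-notes"}}'

# Confirm the payload parses as JSON before sending it over stdio
echo "$READ_REQUEST" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid")'
```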


MCP directory API

We provide all information about listed MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/VrtxOmega/omega-stenographer-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.