Server Configuration

The environment variables recognized by the server. All of them are optional; when unset, the server falls back to the defaults shown below.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `LIVECHAT_DEBUG` | No | Set to `1` for VAD/segmentation debug logs to stderr. | (unset) |
| `LIVECHAT_END_PHRASE` | No | Spoken phrase to end the voice session. | `terminate voice session now` |
| `LIVECHAT_SILENCE_SEC` | No | Silence duration after speech to end an utterance (seconds). | `1.5` |
| `LIVECHAT_LONG_POLL_SEC` | No | How long `get_voice_input` blocks before returning `__NO_INPUT__` (seconds). | `300` |
| `LIVECHAT_VAD_THRESHOLD` | No | Silero VAD speech probability threshold. | `0.5` |
| `LIVECHAT_WHISPER_MODEL` | No | Whisper model size: `tiny.en`, `base.en`, `small.en`, `medium.en`, or `tiny`, `base`, `small`, `medium` (multilingual). | `base.en` |
| `LIVECHAT_WHISPER_DEVICE` | No | Device for Whisper: `cpu`, `cuda`, or `auto`. | `auto` |
| `LIVECHAT_WHISPER_COMPUTE` | No | Compute type for Whisper: `int8` (CPU) or `float16` (GPU). | `int8` |
| `LIVECHAT_WHISPER_LANGUAGE` | No | Language code for Whisper (`en`, `pt`, `es`, ...) or `auto` to detect per utterance. | `en` |
| `LIVECHAT_MAX_UTTERANCE_SEC` | No | Maximum utterance length in seconds (force-cuts runaway utterances). | `120` |
| `LIVECHAT_MIN_UTTERANCE_SEC` | No | Minimum utterance length in seconds (filters out coughs). | `0.4` |
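As a minimal sketch, the defaults above can be overridden by exporting the variables in the environment that launches the server. The variable names come from the table; the specific values below are illustrative only, and how the server process is actually started depends on your MCP client.

```shell
# Override a few defaults before the MCP client spawns the server.
# Variable names are from the configuration table; values are examples.
export LIVECHAT_WHISPER_MODEL=small.en        # trade speed for accuracy
export LIVECHAT_WHISPER_DEVICE=cpu            # force CPU even if CUDA is available
export LIVECHAT_WHISPER_COMPUTE=int8          # int8 is the CPU compute type
export LIVECHAT_SILENCE_SEC=2.0               # tolerate longer pauses mid-utterance
export LIVECHAT_END_PHRASE="goodbye assistant"
```

Many MCP clients also accept an `env` map in their per-server configuration, which achieves the same effect without a wrapper script.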

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions the server exposes so the LLM can take actions.

No tools

Prompts

Interactive templates invoked by user choice.

No prompts

Resources

Contextual data attached and managed by the client.

No resources

MCP directory API

We provide all the information about MCP servers, including this one, via our MCP directory API:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/brunocramos/livechat-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.