
Server Configuration

Describes the environment variables used to configure the server. All are optional; defaults apply where listed.

Name                         Required  Description                                      Default
SENTINEL_LLM_MODEL           No        LLM model to use                                 gpt-4o
SENTINEL_LOG_LEVEL           No        Logging level (DEBUG, INFO, WARN, ERROR)         INFO
SENTINEL_GRYPE_IMAGE         No        Custom Docker image for Grype                    anchore/grype
SENTINEL_LLM_API_KEY         No        API key for AI threat modeling (e.g., OpenAI)    (none)
SENTINEL_TRIVY_IMAGE         No        Custom Docker image for Trivy                    aquasec/trivy
SENTINEL_SEMGREP_IMAGE       No        Custom Docker image for Semgrep                  returntocorp/semgrep
SENTINEL_TESTSSL_IMAGE       No        Custom Docker image for testssl.sh               drwetter/testssl.sh
SENTINEL_DOCKER_TIMEOUT      No        Timeout for Docker commands, in seconds          600
SENTINEL_SCHEMATHESIS_IMAGE  No        Custom Docker image for Schemathesis             schemathesis/schemathesis:stable
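
As a rough illustration of how these settings fit together, the sketch below resolves the SENTINEL_* variables with the defaults from the table. It is not taken from the server's source code; the SentinelConfig class and load_config helper are hypothetical names used only for this example.

import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class SentinelConfig:
    # Defaults mirror the table above.
    llm_model: str = "gpt-4o"
    log_level: str = "INFO"
    llm_api_key: Optional[str] = None  # no default; presumably only needed for AI threat modeling
    grype_image: str = "anchore/grype"
    trivy_image: str = "aquasec/trivy"
    semgrep_image: str = "returntocorp/semgrep"
    testssl_image: str = "drwetter/testssl.sh"
    schemathesis_image: str = "schemathesis/schemathesis:stable"
    docker_timeout: int = 600  # seconds


def load_config() -> SentinelConfig:
    """Resolve SENTINEL_* environment variables, falling back to the documented defaults."""
    return SentinelConfig(
        llm_model=os.getenv("SENTINEL_LLM_MODEL", "gpt-4o"),
        log_level=os.getenv("SENTINEL_LOG_LEVEL", "INFO"),
        llm_api_key=os.getenv("SENTINEL_LLM_API_KEY"),
        grype_image=os.getenv("SENTINEL_GRYPE_IMAGE", "anchore/grype"),
        trivy_image=os.getenv("SENTINEL_TRIVY_IMAGE", "aquasec/trivy"),
        semgrep_image=os.getenv("SENTINEL_SEMGREP_IMAGE", "returntocorp/semgrep"),
        testssl_image=os.getenv("SENTINEL_TESTSSL_IMAGE", "drwetter/testssl.sh"),
        schemathesis_image=os.getenv("SENTINEL_SCHEMATHESIS_IMAGE", "schemathesis/schemathesis:stable"),
        docker_timeout=int(os.getenv("SENTINEL_DOCKER_TIMEOUT", "600")),
    )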

Tools

Functions exposed to the LLM to take actions


No tools

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all of the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pranjal-lnct/Scurity-MCP-Server'
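
The same lookup can be made from any HTTP client. For example, a minimal Python sketch using the requests library, assuming the endpoint returns JSON (the response schema is not documented here):

import requests

# Query the Glama MCP directory API for this server's entry.
url = "https://glama.ai/api/mcp/v1/servers/pranjal-lnct/Scurity-MCP-Server"
response = requests.get(url, timeout=30)
response.raise_for_status()
print(response.json())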

If you have feedback or need assistance with the MCP directory API, please join our Discord server.