
Server Configuration

Describes the environment variables used to configure the server. All of them are optional.

ADVERSARY_LLM_MODEL (optional): LLM model to use.
ADVERSARY_LOG_LEVEL (optional, default: INFO): Log level for the application.
ADVERSARY_LLM_PROVIDER (optional): LLM provider (e.g., openai, anthropic).
ADVERSARY_CACHE_SIZE_MB (optional, default: 200): Cache size in megabytes.
ADVERSARY_WORKSPACE_ROOT (optional): Path to the project workspace root.
ADVERSARY_MAX_CONCURRENT_SCANS (optional, default: 8): Maximum number of concurrent scans.
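
A minimal sketch of passing these variables to the server from an MCP client over stdio, using the MCP Python SDK. The launch command adversary-mcp-server and the example values are assumptions, not documented on this page; only the variable names and defaults come from the table above.

from mcp import StdioServerParameters

# Hypothetical launch configuration: the command name is an assumption;
# only the environment variable names and defaults come from the table above.
server_params = StdioServerParameters(
    command="adversary-mcp-server",  # assumed console entry point
    args=[],
    env={
        "ADVERSARY_LOG_LEVEL": "DEBUG",           # default: INFO
        "ADVERSARY_CACHE_SIZE_MB": "500",         # default: 200
        "ADVERSARY_MAX_CONCURRENT_SCANS": "4",    # default: 8
        "ADVERSARY_WORKSPACE_ROOT": "/path/to/project",
        "ADVERSARY_LLM_PROVIDER": "anthropic",    # e.g., openai, anthropic
        "ADVERSARY_LLM_MODEL": "your-model-id",   # placeholder model name
    },
)

A client would hand server_params to its stdio transport, as in the tool-calling sketch after the tool list below.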

Tools

Functions exposed to the LLM to take actions

adv_scan_file: Scan a file for security vulnerabilities using Clean Architecture. Automatically uses session-aware analysis when an LLM is configured.

adv_scan_folder: Scan a directory for security vulnerabilities using Clean Architecture. Automatically uses session-aware project analysis when an LLM is configured.

adv_scan_code: Scan code content for security vulnerabilities using Clean Architecture. Automatically uses session-aware analysis with project context when available.

adv_get_status: Get comprehensive server status, including session management capabilities, active sessions, and cache statistics.

adv_get_version: Get server version information.

adv_mark_false_positive: Mark a finding as a false positive.

adv_unmark_false_positive: Remove the false positive marking from a finding.
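
A rough sketch of how an MCP client could invoke these tools over stdio with the MCP Python SDK. The launch command is the same assumption as in the configuration sketch above, and because the scan tools' argument schemas are not documented on this page, the example only lists the tools and calls adv_get_status, which needs no arguments.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumed launch command; see the configuration sketch above.
    server_params = StdioServerParameters(command="adversary-mcp-server", args=[])

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Enumerate the exposed tools (adv_scan_file, adv_scan_folder, ...).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call an argument-free tool to verify the server is responding.
            status = await session.call_tool("adv_get_status", arguments={})
            print(status.content)

asyncio.run(main())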

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/brettbergin/adversary-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.