## Server Configuration

Environment variables used to configure the server. All are optional; defaults are listed where they exist.
| Name | Required | Description | Default |
|---|---|---|---|
| LLM_PROVIDER | No | The LLM provider to use (`ollama` or `openai`). | ollama |
| OLLAMA_MODEL | No | The model to use with Ollama. | llama2 |
| OLLAMA_BASE_URL | No | The base URL of the local Ollama instance. | http://localhost:11434 |
| OPENAI_MODEL | No | The OpenAI model to use. | |
| OPENAI_API_KEY | No | The API key for OpenAI. | |
| CHROMA_PERSIST_DIR | No | Absolute or relative path to the directory where Chroma DB data is persisted. | ./data/chroma |
| UPLOAD_MAX_FILES | No | Maximum number of files allowed for upload. | 10 |
| UPLOAD_MAX_FILE_SIZE_MB | No | Maximum allowed file size for uploads, in MB. | 50 |
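As a minimal sketch of how these variables resolve to the defaults above, the server's configuration loading might look like the following. The `load_config` helper and its key names are illustrative, not the server's actual code:

```python
import os

def load_config() -> dict:
    """Read the environment variables from the table above, applying
    the documented defaults when a variable is unset. Illustrative only."""
    return {
        "llm_provider": os.environ.get("LLM_PROVIDER", "ollama"),
        "ollama_model": os.environ.get("OLLAMA_MODEL", "llama2"),
        "ollama_base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "openai_model": os.environ.get("OPENAI_MODEL", ""),        # no default documented
        "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),    # no default documented
        "chroma_persist_dir": os.environ.get("CHROMA_PERSIST_DIR", "./data/chroma"),
        "upload_max_files": int(os.environ.get("UPLOAD_MAX_FILES", "10")),
        "upload_max_file_size_mb": int(os.environ.get("UPLOAD_MAX_FILE_SIZE_MB", "50")),
    }

config = load_config()
```

Note that the two size/count limits are numeric, so a deployment that sets them must use integer-parsable strings.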
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| assess_document | Assess a security document (PDF, Word, etc.) and return a risk report.<br>**Args:** `file_path` – absolute path to the file to be assessed; `scenario_id` – the assessment scenario ID (default: `"default"`).<br>**Returns:** JSON string containing the assessment report (risks, gaps, remediations). |
| query_knowledge_base | Query the internal security knowledge base (policies, standards).<br>**Args:** `query` – the search query (e.g., `"password complexity requirements"`); `top_k` – number of results to return.<br>**Returns:** JSON string with retrieved document chunks. |
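Per the MCP specification, a client invokes one of these tools with a JSON-RPC `tools/call` request. The sketch below builds such a request for `assess_document`; the file path and request `id` are hypothetical placeholders:

```python
import json

# Hypothetical tools/call request for the assess_document tool.
# "/tmp/security-policy.pdf" is a placeholder, not a real document.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_document",
        "arguments": {
            "file_path": "/tmp/security-policy.pdf",
            "scenario_id": "default",
        },
    },
}

print(json.dumps(request, indent=2))
```

The server's response carries the tool result (here, the JSON assessment report) in the `result.content` field of the matching JSON-RPC response.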
## Prompts

Interactive templates invoked by user choice.
_No prompts are defined._
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| get_kb_stats | Get statistics about the knowledge base (document count, etc.). |