# PM Counter Monitoring System
A system for collecting telecom performance monitoring (PM) counters from remote SFTP locations, storing them in a time-series database (PostgreSQL), and providing access through REST API endpoints and a Streamlit chat interface.
## Architecture
```
Remote SFTP Location → Job Server (periodic fetch) → PostgreSQL Database
                                                              ↓
                                           MCP Server ← API Endpoints ← Streamlit Frontend
```
## Components
1. **SFTP Client** (`sftp_client.py`) - Handles file downloads from remote SFTP server
2. **Job Server** (`job_server.py`) - Periodically fetches and processes XML files
3. **XML Parser** (`xml_parser.py`) - Parses PM counter XML files
4. **Database** (`database.py`) - PostgreSQL schema and models
5. **Data Storage** (`data_storage.py`) - Saves parsed data to database
6. **API Server** (`api_server.py`) - FastAPI REST endpoints
7. **MCP Server** (`mcp_server.py`) - Model Context Protocol server
8. **Streamlit Frontend** (`streamlit_app.py`) - Chatbot interface
## Quick Start with Docker (Recommended)
The easiest way to run the entire system is using Docker Compose:
### Step 1: Create Environment File
Create a `.env` file in the project root with your configuration:
```bash
# Copy the example file
cp .env.example .env
# Edit .env and add your Groq API key (required for RAG system)
# Get your API key from: https://console.groq.com/
```
The `.env` file should include at minimum:
```env
GROQ_API_KEY=your_groq_api_key_here
```
**Note:** The Groq API key is required for the RAG (Retrieval Augmented Generation) system to work. Without it, the system will fall back to simple pattern matching.
### Step 2: Build and Start Services
```bash
# Build and start all services
make build
make up
# Or using docker-compose directly
docker-compose up -d
# Initialize database schema
make init-db
# View logs
make logs
# Access the application
# - Streamlit: http://localhost:8501
# - API: http://localhost:8000
# - MCP Server: http://localhost:8001
```
The Docker setup includes:
- PostgreSQL database
- SFTP server (for testing, with example XML files)
- Job server (fetches files every hour)
- API server
- MCP server
- Streamlit frontend
All services are automatically configured to work together.
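Once the containers are up, you can sanity-check the stack with a small script like the one below. It assumes the default ports shown above and the `/stats/summary` endpoint listed under API Endpoints:

```python
# smoke_test.py - quick check that the API and Streamlit containers respond.
# Assumes the default ports (8000/8501) from the Docker setup above.
import requests

def check(url: str) -> None:
    try:
        resp = requests.get(url, timeout=5)
        print(f"{url} -> HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{url} -> unreachable ({exc})")

if __name__ == "__main__":
    check("http://localhost:8000/")               # FastAPI root
    check("http://localhost:8000/stats/summary")  # summary statistics endpoint
    check("http://localhost:8501/")               # Streamlit UI
```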
## Manual Setup (Without Docker)
### 1. Install Dependencies
```bash
pip install -r requirements.txt
```
### 2. Configure Environment
Copy `.env.example` to `.env` and update with your settings:
```bash
cp .env.example .env
```
Edit `.env` with your database, SFTP credentials, and **Groq API key**:
```env
# Required for RAG system
GROQ_API_KEY=your_groq_api_key_here
# Database settings
DB_NAME=pm_counters_db
DB_USER=postgres
DB_PASSWORD=postgres
# SFTP settings
SFTP_HOST=localhost
SFTP_USERNAME=sftp_user
SFTP_PASSWORD=sftp_password
```
**Get your Groq API key:** Visit https://console.groq.com/ to create an account and generate an API key.
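These values are read through the application's configuration layer (`config.py`). As a rough illustration of how such a loader can work with `python-dotenv` (the class and defaults below are a sketch, not the repo's exact code):

```python
# Illustrative environment-backed config loader; the actual config.py may differ.
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read .env from the project root

class Config:
    GROQ_API_KEY = os.getenv("GROQ_API_KEY", "")
    DB_NAME = os.getenv("DB_NAME", "pm_counters_db")
    DB_USER = os.getenv("DB_USER", "postgres")
    DB_PASSWORD = os.getenv("DB_PASSWORD", "postgres")
    SFTP_HOST = os.getenv("SFTP_HOST", "localhost")
    SFTP_USERNAME = os.getenv("SFTP_USERNAME", "sftp_user")
    SFTP_PASSWORD = os.getenv("SFTP_PASSWORD", "sftp_password")
    FETCH_INTERVAL_HOURS = float(os.getenv("FETCH_INTERVAL_HOURS", "1.0"))
```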
### 3. Setup Remote Location (SFTP Server)
The remote location is where your XML files are stored. You have several options:
**Option A: Use Local Files for Testing (Easiest)**
```bash
# Process local XML files directly (no SFTP needed)
python test_local_files.py
```
**Option B: Set Up Local SFTP Server**
See `SETUP_REMOTE.md` for detailed instructions on setting up a local SFTP server.
**Option C: Use Existing Remote SFTP Server**
Update `.env` with your remote SFTP server credentials:
```env
SFTP_HOST=your-sftp-server.com
SFTP_USERNAME=your_username
SFTP_PASSWORD=your_password
SFTP_REMOTE_PATH=/path/to/xml/files
```
For more details, see `SETUP_REMOTE.md`.
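For reference, downloading files over SFTP with `paramiko` looks roughly like the sketch below. It mirrors what `sftp_client.py` does conceptually, but the function and parameter names here are illustrative:

```python
# Sketch: download all .xml files from a remote SFTP directory with paramiko.
# Host, credentials, and paths come from the .env values shown above.
import os
import paramiko

def fetch_xml_files(host, username, password, remote_path, local_dir="downloads", port=22):
    os.makedirs(local_dir, exist_ok=True)
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, port=port, username=username, password=password)
    try:
        sftp = ssh.open_sftp()
        for name in sftp.listdir(remote_path):
            if name.endswith(".xml"):
                sftp.get(f"{remote_path}/{name}", os.path.join(local_dir, name))
                print(f"Downloaded {name}")
        sftp.close()
    finally:
        ssh.close()
```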
### 4. Setup PostgreSQL Database
```bash
# Create database
createdb pm_counters_db
# Or using psql
psql -U postgres -c "CREATE DATABASE pm_counters_db;"
```
### 5. Initialize Database Schema
```python
from database import init_db
init_db()
```
Or run:
```bash
python -c "from database import init_db; init_db()"
```
## Running the System
### With Docker (Recommended)
```bash
# Start all services
make up
# Or
docker-compose up -d
# View logs
make logs
# Stop all services
make down
```
### Without Docker
#### 1. Start Job Server
The job server fetches files from the SFTP server at the configured interval:
```bash
python job_server.py
```
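Conceptually, the job server is a loop that runs a fetch-parse-store cycle and then sleeps for `FETCH_INTERVAL_HOURS`. A simplified sketch follows; the real `job_server.py` performs the actual SFTP download, XML parsing, and database writes:

```python
# Simplified sketch of the periodic fetch loop in job_server.py.
import os
import time

FETCH_INTERVAL_HOURS = float(os.getenv("FETCH_INTERVAL_HOURS", "1.0"))

def run_once():
    # placeholder for: download via SFTP -> parse XML -> store in PostgreSQL
    print("Fetching and processing PM counter files...")

if __name__ == "__main__":
    while True:
        run_once()
        time.sleep(FETCH_INTERVAL_HOURS * 3600)
```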
#### 2. Start API Server
```bash
python api_server.py
```
Or using uvicorn:
```bash
uvicorn api_server:app --host 0.0.0.0 --port 8000
```
#### 3. Start MCP Server
```bash
python mcp_server.py
```
Or using uvicorn:
```bash
uvicorn mcp_server:app --host 0.0.0.0 --port 8001
```
#### 4. Start Streamlit Frontend
```bash
streamlit run streamlit_app.py
```
## Docker Commands
Use the Makefile for convenient commands:
```bash
make build # Build Docker images
make up # Start all services
make down # Stop all services
make restart # Restart all services
make logs # View logs from all services
make logs-job # View logs from job server only
make logs-api # View logs from API server only
make logs-streamlit # View logs from Streamlit only
make clean # Stop and remove everything (including volumes)
make init-db # Initialize database schema
make ps # Show running containers
make shell-api # Open shell in API server container
make shell-job # Open shell in job server container
```
Or use docker-compose directly:
```bash
docker-compose up -d # Start services
docker-compose down # Stop services
docker-compose logs -f # View logs
docker-compose exec api_server bash # Open shell
```
## Configuration
### Changing Fetch Interval
The fetch interval can be configured in two ways:
1. **Environment variable**: Set `FETCH_INTERVAL_HOURS` in your shell environment (non-Docker) or in the `.env` file
2. **Docker Compose**: Set `FETCH_INTERVAL_HOURS` for the job server in `docker-compose.yml` (or in the `.env` file it reads)
For Docker, update the environment variable and restart the job server:
```bash
# Edit .env file
FETCH_INTERVAL_HOURS=2.0
# Restart job server
docker-compose restart job_server
```
For non-Docker setups, update `Config.FETCH_INTERVAL_HOURS` in `config.py` or set the environment variable before starting the job server.
## API Endpoints
### Main API (Port 8000)
- `GET /` - API information
- `GET /network-elements` - List all network elements
- `GET /interfaces/{interface_name}/counters` - Get interface counters
- `GET /system/counters` - Get system counters
- `GET /cpu/utilization` - Get CPU utilization
- `GET /memory/utilization` - Get memory utilization
- `GET /bgp/peers` - List BGP peers
- `GET /bgp/peers/{peer_address}/counters` - Get BGP peer counters
- `GET /files/processed` - List processed files
- `GET /stats/summary` - Get summary statistics
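The endpoints above can be queried with any HTTP client. A minimal Python example (response fields depend on the data the job server has stored):

```python
# Example: query the main API with the requests library.
# Endpoint paths match the list above.
import requests

BASE_URL = "http://localhost:8000"

elements = requests.get(f"{BASE_URL}/network-elements", timeout=10).json()
cpu = requests.get(f"{BASE_URL}/cpu/utilization", timeout=10).json()

print("Network elements:", elements)
print("CPU utilization:", cpu)
```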
### MCP Server (Port 8001)
- `POST /mcp` - MCP protocol endpoint
- `GET /mcp/methods` - List available MCP methods
MCP Methods:
- `get_interface_counters` - Get interface counters
- `get_system_counters` - Get system counters
- `get_cpu_utilization` - Get CPU utilization
- `get_memory_utilization` - Get memory utilization
- `get_latest_metrics` - Get latest metrics summary
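A call to the MCP endpoint might look like the sketch below; the exact request payload is defined in `mcp_server.py`, so treat the field names here as assumptions:

```python
# Assumed request shape for the MCP endpoint; check mcp_server.py for the
# actual field names, since the payload format below is illustrative only.
import requests

MCP_URL = "http://localhost:8001/mcp"

payload = {
    "method": "get_cpu_utilization",  # one of the methods listed above
    "params": {"hours": 12},          # illustrative parameter
}
resp = requests.post(MCP_URL, json=payload, timeout=10)
print(resp.json())
```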
## Streamlit Chat Interface
The Streamlit frontend provides a chatbot interface where you can ask questions like:
- "What is the current CPU utilization?"
- "Show me memory usage for the last 12 hours"
- "Get interface counters for GigabitEthernet1/0/1"
- "What are the latest metrics?"
- "Show me system statistics"
## Database Schema
The system stores data in the following tables:
- `file_records` - Track downloaded XML files
- `network_elements` - Network element information
- `measurement_intervals` - Time intervals for measurements
- `interface_counters` - Interface performance counters
- `ip_counters` - IP layer counters
- `tcp_counters` - TCP layer counters
- `system_counters` - System performance counters
- `bgp_counters` - BGP peer counters
- `threshold_alerts` - Threshold alerts from XML files
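For orientation, an SQLAlchemy-style model for one of these tables might look like the sketch below; the column names are assumptions, and `database.py` defines the authoritative schema:

```python
# Illustrative SQLAlchemy model for the interface_counters table.
# Column names are assumptions; database.py defines the real schema.
from sqlalchemy import Column, DateTime, Float, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class InterfaceCounter(Base):
    __tablename__ = "interface_counters"

    id = Column(Integer, primary_key=True)
    interface_name = Column(String, index=True)  # e.g. GigabitEthernet1/0/1
    timestamp = Column(DateTime, index=True)     # end of the measurement interval
    in_octets = Column(Float)                    # received bytes
    out_octets = Column(Float)                   # transmitted bytes
    in_errors = Column(Integer)
    out_errors = Column(Integer)
```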
## Testing with Local Files
For testing without a real SFTP server, you can:
1. Use the existing `example_1.xml` and `example_2.xml` files
2. Modify the job server to process local files directly
3. Use a local SFTP server like `openssh-server` for testing
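To eyeball what the example files contain before running the full pipeline, a quick standard-library inspection is enough (this does not use the repo's `xml_parser.py`):

```python
# Quick inspection of an example PM counter XML file using the standard library.
# This is only for viewing the structure; xml_parser.py does the real parsing.
import xml.etree.ElementTree as ET
from collections import Counter

tree = ET.parse("example_1.xml")
root = tree.getroot()

print("Root element:", root.tag)
tag_counts = Counter(elem.tag for elem in root.iter())
for tag, count in tag_counts.most_common(10):
    print(f"{tag}: {count}")
```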
## Troubleshooting
1. **Database Connection Issues**: Ensure PostgreSQL is running and credentials are correct
2. **SFTP Connection Issues**: Verify SFTP server is accessible and credentials are correct
3. **API Not Responding**: Check if services are running on correct ports
4. **No Data**: Ensure job server has processed files and data is in the database
## License
MIT License