# FastMCP_RecSys
A CLIP-based fashion recommender with MCP.
### Sample Components for UI
1. Image upload
2. Submit button
3. Display clothing tags + recommendations
# Mockup
A user uploads a clothing image → garment detection (AWS Rekognition) → CLIP encodes → recommend similar items
<img width="463" alt="Screenshot 2025-04-26 at 10 26 13โฏAM" src="https://github.com/user-attachments/assets/93c0a75b-4ed1-4fa1-b25d-5137b8eb6af0" />
# Folder Structure
```
/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py      # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py              # Bounding box crop utils
│   │   ├── /controllers
│   │   │   ├── clothing_detector.py        # Coordinates Rekognition + cropping
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py            # Pending: define core CLIP functionality
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py              # Tag mapping (tool: https://jsoncrack.com/editor)
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   ├── server.py                       # FastAPI app code
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                          # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
│
├── docker-compose.yml
└── README.md
```
## Quick Start Guide
### Step 1: Clone the GitHub Project
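For example, assuming the repository is hosted under your own GitHub account (substitute the actual URL):
```
git clone https://github.com/<your-username>/FastMCP_RecSys.git
cd FastMCP_RecSys
```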
### Step 2: Set Up the Python Environment
```
python -m venv venv
source venv/bin/activate # On macOS or Linux
venv\Scripts\activate # On Windows
```
### Step 3: Install Dependencies
```
pip install -r requirements.txt
```
### Step 4: Start the FastAPI Server (Backend)
```
uvicorn backend.app.server:app --reload
```
Once the server is running and the database is connected, you should see the following message in the console:
```
Database connected
INFO: Application startup complete.
```
<img width="750" alt="Screenshot 2025-04-25 at 1 15 45โฏAM" src="https://github.com/user-attachments/assets/7f3fc403-fb33-4107-a00c-61796a48ecec" />
### Step 5: Install Frontend Dependencies
From the `/frontend` directory:
```
npm install
```
### Step 6: Start the Development Server (Frontend)
```
npm start
```
Once running, the server logs a confirmation and opens the app in your browser: [http://localhost:3000/](http://localhost:3000/)
<img width="372" alt="Screenshot 2025-04-25 at 9 08 50โฏPM" src="https://github.com/user-attachments/assets/794a6dba-9fbb-40f1-9e57-c5c2e2af1013" />
# What's completed so far:
1. FastAPI server is up and running (24 Apr)
2. Database connection is set up (24 Apr)
3. Backend architecture is functional (24 Apr)
4. Basic front-end UI for uploading a picture (25 Apr)
## 5. Mock Testing for AWS Rekognition → Bounding Box (15 May)
```
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
```
<img width="1067" alt="Screenshot 2025-05-20 at 4 58 14โฏPM" src="https://github.com/user-attachments/assets/7a25a92d-2aca-42a8-abdd-194dd9d2e8a5" />
- Tested the Rekognition integration logic independently using a mock, verifying that it extracts bounding boxes only when labels match the garment set (see the sketch below)
- Confirmed the folder structure and `PYTHONPATH=.` work smoothly with pytest from the project root
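A minimal sketch of such a mock test, assuming `rekognition_wrapper` exposes a module-level `rekognition` client and a `detect_garments` function (both assumed; adjust to the actual code):
```
from unittest.mock import patch

from backend.app.aws import rekognition_wrapper  # assumed import path

FAKE_RESPONSE = {
    "Labels": [
        {"Name": "Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}}]},
        {"Name": "Tree", "Instances": []},  # non-garment label: must be ignored
    ]
}

@patch("backend.app.aws.rekognition_wrapper.rekognition")
def test_extracts_boxes_only_for_garment_labels(mock_client):
    mock_client.detect_labels.return_value = FAKE_RESPONSE
    boxes = rekognition_wrapper.detect_garments(b"fake-image-bytes")
    assert boxes == [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]
```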
## 6. Mock Testing for AWS Rekognition → CLIP (20 May)
```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```
<img width="1062" alt="Screenshot 2025-05-21 at 9 25 33โฏAM" src="https://github.com/user-attachments/assets/6c64b658-3414-4115-9e20-520132605cab" />
The test mocks each stage of the chain (a sketch follows the list):
- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
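A sketch of how the chain can be exercised with mocks, using the function names listed in section 7 (the import path and `tag_image` as the pipeline entry point are assumptions):
```
from unittest.mock import patch

from backend.app.controllers import clothing_tagging  # assumed import path

BOX = {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}

@patch("backend.app.controllers.clothing_tagging.get_tags_from_clip",
       return_value=["T-shirt", "Cotton", "Casual"])
@patch("backend.app.controllers.clothing_tagging.crop_by_bounding_box",
       return_value="cropped_image")
@patch("backend.app.controllers.clothing_tagging.detect_garments",
       return_value=[BOX])
def test_pipeline_returns_clip_tags(mock_detect, mock_crop, mock_clip):
    # tag_image as the pipeline entry point is an assumption
    tags = clothing_tagging.tag_image(b"fake-image-bytes")
    assert tags == ["T-shirt", "Cotton", "Casual"]
```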
## 7. Mock Testing for the Full Image Tagging Pipeline: Image bytes → AWS Rekognition (detect garments) → Crop images → CLIP (predict tags), plus Error Handling (25 May)
| **Negative Test Case** | **Description** |
| ------------------------------ | -------------------------------------------------------------------------------- |
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate `rekognition.detect_labels` throwing an error → check the `try`/`except`. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |
```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```
<img width="1072" alt="Screenshot 2025-05-21 at 11 19 47โฏAM" src="https://github.com/user-attachments/assets/b41f07f4-7926-44a3-8b64-34fe3c6ef049" />
- `detect_garments`: simulates AWS Rekognition returning one bounding box: `{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}`
- `crop_by_bounding_box`: simulates the cropping step returning a dummy `"cropped_image"` object
- `get_tags_from_clip`: simulates CLIP returning a list of tags: `["T-shirt", "Cotton", "Casual"]`
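A sketch of the first and third negative cases from the table, under the same naming assumptions (`tag_image` entry point, import path) as above; the empty-list fallback on exception is also an assumption:
```
from unittest.mock import patch

from backend.app.controllers import clothing_tagging  # assumed import path

@patch("backend.app.controllers.clothing_tagging.detect_garments", return_value=[])
def test_no_detection_returns_empty_list(mock_detect):
    # Table row 1: no garments detected -> pipeline should return an empty list
    assert clothing_tagging.tag_image(b"fake-image-bytes") == []

@patch("backend.app.controllers.clothing_tagging.detect_garments",
       side_effect=RuntimeError("Rekognition unavailable"))
def test_aws_exception_is_caught(mock_detect):
    # Table row 3: the pipeline's try/except should catch the error instead of
    # propagating it; returning an empty list is one plausible fallback (assumption)
    assert clothing_tagging.tag_image(b"fake-image-bytes") == []
```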
## 8. Test Run for CLIP Output (30 May)
```
python3 -m venv venv
source venv/bin/activate   # On macOS or Linux
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
python -m backend.app.tests.test_tag_extractor
```
<img width="1111" alt="Screenshot 2025-06-06 at 5 12 13โฏPM" src="https://github.com/user-attachments/assets/d0b3b288-20f8-482f-9d39-dcccf9a775ee" />
# Next Steps:
1. Evaluate CLIP's tagging accuracy on sample clothing images
2. Fine-tune the tagging system for better recommendations
3. Test the backend integration with real-time user data
4. Set up monitoring for model performance
5. Front-end demo