Offers containerization through Docker for both backend and frontend components, allowing for consistent deployment environments.
Uses .env files for environment variable management in both the backend and frontend components.
Uses FastAPI as the backend framework for serving the CLIP-based fashion recommendation system, handling image uploads and providing recommendation endpoints.
Leverages npm for frontend dependency management and application startup.
Incorporates PostCSS for CSS processing in the frontend, as evidenced by the postcss.config.js file.
Provides a React-based frontend for the fashion recommendation system, allowing users to upload clothing images and view recommendations.
Integrates YOLO for clothing detection in user-uploaded images, which are then processed by CLIP for encoding and recommendation.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@FastMCP_RecSys recommend similar outfits based on this dress image".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
AWS_RecSys
This is a CLIP-Based Fashion Recommender with AWS.
Sample Components for UI
Image upload
Submit button
Display clothing tags + recommendations
Mockup
A user uploads a clothing image → YOLO detects clothing → CLIP encodes → Recommend similar
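The mockup flow above can be sketched as a plain function with the detector, cropper, and encoder injected as callables. All names here are illustrative: in the real project the detector is YOLO/Rekognition and the encoder is CLIP.

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # left, top, width, height (relative)

def recommend_similar(
    image_bytes: bytes,
    detect: Callable[[bytes], List[Box]],     # e.g. YOLO / Rekognition
    crop: Callable[[bytes, Box], bytes],      # crop around a bounding box
    encode: Callable[[bytes], List[float]],   # e.g. CLIP image encoder
    catalog: List[Tuple[str, List[float]]],   # (item_id, embedding) pairs
    top_k: int = 3,
) -> List[str]:
    """Upload -> detect clothing -> encode -> recommend similar items."""
    boxes = detect(image_bytes)
    if not boxes:
        return []  # nothing detected: no recommendations
    query = encode(crop(image_bytes, boxes[0]))

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    ranked = sorted(catalog, key=lambda item: cosine(query, item[1]), reverse=True)
    return [item_id for item_id, _ in ranked[:top_k]]
```

With stub callables this runs end to end; swapping in the real Rekognition and CLIP calls keeps the same shape.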
Folder Structure
```
/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py      # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py              # Bounding box crop utils
│   │   ├── /controllers
│   │   │   └── clothing_detector.py        # Coordinates Rekognition + cropping
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── server.py                       # FastAPI app code
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /controllers
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py            # Pending: define core CLIP functionality
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py              # Tool for mapping: https://jsoncrack.com/editor
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   └── requirements.txt
│   └── .env
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                          # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
├── docker-compose.yml
└── README.md
```

Quick Start Guide
Step 1: Clone the GitHub Project
Step 2: Set Up the Python Environment
```
python -m venv venv
source venv/bin/activate   # On macOS or Linux
venv\Scripts\activate      # On Windows
```

Step 3: Install Dependencies

```
pip install -r requirements.txt
```

Step 4: Start the FastAPI Server (Backend)

```
uvicorn backend.app.server:app --reload
```

Once the server is running and the database is connected, you should see the following message in the console:

```
Database connected
INFO:     Application startup complete.
```

Step 5: Install Dependencies (Frontend)

```
npm install
```

Step 6: Start the Development Server (Frontend)

```
npm start
```

Once running, the server logs a confirmation and opens the app in your browser at http://localhost:3000/.
What's completed so far:
FastAPI server is up and running (24 Apr)
Database connection is set up (24 Apr)
Backend architecture is functional (24 Apr)
Basic front-end UI for uploading picture (25 Apr)
Related MCP server: FastMCP_RecSys
5. Mock Testing for AWS Rekognition -> bounding box (15 May)
```
PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
```

Tested the Rekognition integration logic independently using a mock and verified that it correctly extracts bounding boxes only when labels match the garment set.
Confirmed that the folder structure and PYTHONPATH=. work smoothly with pytest from the project root.
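A hedged sketch of what such a mock test might look like: it stubs the boto3 Rekognition client with unittest.mock and checks that only garment labels yield bounding boxes. The wrapper function, garment set, and label names here are illustrative, not the actual rekognition_wrapper.py code.

```python
from unittest.mock import Mock

GARMENT_LABELS = {"Shirt", "Dress", "Pants"}  # illustrative garment set

def extract_garment_boxes(client, image_bytes):
    """Return bounding boxes only for labels in the garment set.

    Mirrors the shape of a Rekognition detect_labels response; a sketch,
    not the project's actual wrapper.
    """
    response = client.detect_labels(Image={"Bytes": image_bytes})
    boxes = []
    for label in response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            for inst in label.get("Instances", []):
                boxes.append(inst["BoundingBox"])
    return boxes

def test_extract_garment_boxes_filters_non_garments():
    fake_client = Mock()
    fake_client.detect_labels.return_value = {
        "Labels": [
            {"Name": "Dress",
             "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1,
                                            "Width": 0.5, "Height": 0.5}}]},
            {"Name": "Tree",
             "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0,
                                            "Width": 1.0, "Height": 1.0}}]},
        ]
    }
    boxes = extract_garment_boxes(fake_client, b"fake-jpeg-bytes")
    # Only the Dress box survives; the Tree box is filtered out.
    assert boxes == [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]
```

Because the client is a Mock, the test runs without AWS credentials or network access.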
6. Mock Testing for AWS Rekognition -> CLIP (20 May)
```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```

The mocked pipeline covers:
- Detecting garments using AWS Rekognition
- Cropping the image around detected bounding boxes
- Tagging the cropped image using CLIP
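The cropping step converts Rekognition's relative bounding box (Left, Top, Width, Height as fractions of the image size) into pixel coordinates. A minimal sketch of that conversion follows; the helper name is an assumption for illustration, while the real logic lives in image_utils.py.

```python
def bbox_to_pixels(box, image_width, image_height):
    """Convert a relative Rekognition BoundingBox into a (left, top, right,
    bottom) pixel rectangle, the format PIL's Image.crop() expects."""
    left = int(box["Left"] * image_width)
    top = int(box["Top"] * image_height)
    right = int((box["Left"] + box["Width"]) * image_width)
    bottom = int((box["Top"] + box["Height"]) * image_height)
    return (left, top, right, bottom)

# Example: the sample box from the mocks, applied to a 200x100 image
print(bbox_to_pixels({"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}, 200, 100))
# -> (20, 10, 120, 60)
```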
7. Mock Testing for full image tagging pipeline (Image bytes -> AWS Rekognition (detect garments) -> Crop images -> CLIP (predict tags)) + Error Handling (25 May)
| Negative Test Case | Description |
| --- | --- |
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate an exception from AWS → verify it is handled gracefully. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |
```
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
```

The mocks used in this test:
- detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
- crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
- get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
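One way to sketch the error-handling side of that pipeline is shown below. The JPEG magic-byte check and all function names are assumptions for illustration, not the project's actual code; the point is how the negative cases map to return values and exceptions.

```python
JPEG_MAGIC = b"\xff\xd8\xff"  # standard JPEG file signature

def tag_image(image_bytes, detect, crop, tag):
    """Image bytes -> detect garments -> crop -> CLIP tags, covering the
    negative paths: empty list on no detection, explicit error on a corrupted
    (non-JPEG) file, and AWS exceptions surfaced with context."""
    if not image_bytes.startswith(JPEG_MAGIC):
        raise ValueError("Corrupted or non-JPEG image file")
    try:
        boxes = detect(image_bytes)
    except Exception as exc:  # e.g. a simulated AWS exception
        raise RuntimeError(f"Rekognition call failed: {exc}") from exc
    if not boxes:
        return []  # no garments detected
    return tag(crop(image_bytes, boxes[0]))
```

Each row of the negative-test table then becomes one pytest case: stub detect/crop/tag to produce that failure mode and assert on the result or the raised exception.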
8. Run Testing for CLIP Output (30 May)
```
python3 -m venv venv
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
python -m backend.app.tests.test_tag_extractor
```

Next Steps:
Evaluate CLIP's tagging accuracy on sample clothing images
Fine-tune the tagging system for better recommendations
Test the backend integration with real-time user data
Set up monitoring for model performance
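Conceptually, CLIP tagging ranks candidate tag prompts by similarity to the image embedding. The torch-free sketch below uses stub embeddings to show that ranking step; the real tag_extractor.py would obtain the vectors from the actual CLIP model rather than hard-coded lists.

```python
def cosine(a, b):
    """Cosine similarity between two plain-Python vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def top_k_tags(image_vec, tag_vectors, k=3):
    """Rank candidate tags (e.g. from config/tag_list_en.py) by cosine
    similarity to the image embedding and keep the top k."""
    ranked = sorted(tag_vectors.items(),
                    key=lambda kv: cosine(image_vec, kv[1]),
                    reverse=True)
    return [tag for tag, _ in ranked[:k]]

# Stub embeddings standing in for CLIP outputs
tags = {"T-shirt": [0.9, 0.1], "Dress": [0.1, 0.9], "Jeans": [0.5, 0.5]}
print(top_k_tags([1.0, 0.0], tags, k=2))
# -> ['T-shirt', 'Jeans']
```

Evaluating tagging accuracy then reduces to comparing these top-k tags against ground-truth labels on sample clothing images.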
Front-end demo