Containerized deployment of the fashion recommendation system, with separate containers for the frontend, backend, and database services.
Environment variable management for both frontend and backend configurations.
Backend API handling image processing and recommendation requests.
Version control for the codebase, with ignore patterns for sensitive files.
Storage and management of clothing tags and recommendation data.
Frontend application runtime environment.
CSS processing for the frontend, enabling advanced styling of the recommendation UI.
User interface enabling image uploads and displaying clothing recommendations.
Styling for the frontend interface components.
FastMCP_RecSys
This is a CLIP-Based Fashion Recommender with MCP.
Sample Components for UI
Image upload
Submit button
Display clothing tags + recommendations
Mockup
A user uploads a clothing image → YOLO detects clothing → CLIP encodes → Recommend similar
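The mockup flow above can be sketched end to end in plain Python. Everything here is an illustrative placeholder (the function names, the dummy embeddings, and the toy catalog are not the project's actual API); the real system would call YOLO and CLIP models instead.

```python
import math

# Illustrative sketch of: upload -> detect -> encode -> recommend.
# All names and vectors are placeholders, not the project's real API.

def detect_clothing(image_bytes):
    """Stand-in for YOLO: pretend one garment was found (normalized box)."""
    return [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]

def encode_with_clip(cropped_region):
    """Stand-in for CLIP: return a fixed embedding vector."""
    return [0.2, 0.9, 0.1]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(query_vec, catalog):
    """Rank catalog items by cosine similarity to the query embedding."""
    return sorted(catalog, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)

catalog = [
    {"name": "denim jacket", "vec": [0.1, 0.8, 0.2]},
    {"name": "red dress", "vec": [0.9, 0.1, 0.0]},
]
boxes = detect_clothing(b"fake-image-bytes")
query = encode_with_clip("cropped-region")
ranked = recommend(query, catalog)
print(ranked[0]["name"])  # denim jacket
```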
Folder Structure
Quick Start Guide
Step 1: Clone the GitHub Project
Step 2: Set Up the Python Environment
Step 3: Install Dependencies
Step 4: Start the FastAPI Server (Backend)
Once the server is running and the database is connected, you should see the following message in the console:

Database connected
INFO: Application startup complete.

Step 5: Install Frontend Dependencies
Step 6: Start the Development Server (Frontend)
Once running, the server logs a confirmation and opens the app in your browser: http://localhost:3000/
What's completed so far:
FastAPI server is up and running (24 Apr)
Database connection is set up (24 Apr)
Backend architecture is functional (24 Apr)
Basic front-end UI for uploading pictures (25 Apr)
Related MCP server: FastMCP_RecSys
5. Mock Testing for AWS Rekognition → bounding box (15 May)
Tested the Rekognition integration logic independently using a mock and verified that it extracts bounding boxes only when labels match the garment set
Confirmed that the folder structure and PYTHONPATH=. work smoothly when running pytest from the project root
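A test along these lines can mock the Rekognition client so the filtering logic runs without AWS credentials. The helper below is an illustrative re-implementation, not the project's actual function; the garment label set is assumed, but the response shape (Labels → Instances → BoundingBox) matches the real DetectLabels API.

```python
from unittest.mock import MagicMock

GARMENT_LABELS = {"Shirt", "Dress", "Pants", "Jacket"}  # assumed garment set

def extract_garment_boxes(rekognition_client, image_bytes):
    """Return bounding boxes only for labels in the garment set.

    Illustrative stand-in for the project's Rekognition wrapper."""
    response = rekognition_client.detect_labels(Image={"Bytes": image_bytes})
    boxes = []
    for label in response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            for instance in label.get("Instances", []):
                boxes.append(instance["BoundingBox"])
    return boxes

# Mock client: one garment label, one non-garment label.
mock_client = MagicMock()
mock_client.detect_labels.return_value = {
    "Labels": [
        {"Name": "Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}}]},
        {"Name": "Cat",
         "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0, "Width": 0.2, "Height": 0.2}}]},
    ]
}
boxes = extract_garment_boxes(mock_client, b"fake-bytes")
print(boxes)  # only the Shirt box survives the filter
```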
6. Mock Testing for AWS Rekognition → CLIP (20 May)
Detecting garments using AWS Rekognition
Cropping the image around detected bounding boxes
Tagging the cropped image using CLIP
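The cropping step in the middle mostly comes down to converting Rekognition's normalized bounding box into pixel coordinates. A minimal sketch (the function name is ours; the output tuple matches the (left, top, right, bottom) form that PIL's Image.crop expects):

```python
def bbox_to_pixels(bbox, img_width, img_height):
    """Convert a Rekognition-style normalized bounding box
    to a (left, top, right, bottom) pixel tuple."""
    left = int(bbox["Left"] * img_width)
    top = int(bbox["Top"] * img_height)
    right = int((bbox["Left"] + bbox["Width"]) * img_width)
    bottom = int((bbox["Top"] + bbox["Height"]) * img_height)
    return (left, top, right, bottom)

box = {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
print(bbox_to_pixels(box, 640, 480))  # (64, 48, 384, 288)
```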
7. Mock Testing for the full image tagging pipeline: Image bytes → AWS Rekognition (detect garments) → Crop images → CLIP (predict tags), plus error handling (25 May)
| Negative Test Case | Description |
| --- | --- |
| No Detection Result | AWS doesn't detect any garments → should return an empty list. |
| Image Not Clothing | CLIP returns vague or empty tags → verify fallback behavior. |
| AWS Returns Exception | Simulate the AWS call throwing an error → check that the exception is handled. |
| Corrupted Image File | Simulate a broken (non-JPEG) image → verify it raises an error or gives a hint. |
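Two of these negative cases can be exercised with a mocked client, without any real pipeline code. The wrapper below is illustrative only (its name and behavior are assumptions, not the project's function): it returns an empty list when nothing is detected and lets an AWS exception propagate to the caller.

```python
from unittest.mock import MagicMock

def tag_image(rekognition_client, image_bytes):
    """Illustrative pipeline wrapper: [] when nothing is detected;
    AWS exceptions propagate to the caller."""
    response = rekognition_client.detect_labels(Image={"Bytes": image_bytes})
    labels = response.get("Labels", [])
    return [label["Name"] for label in labels]

# Case: no detection result -> empty list
empty_client = MagicMock()
empty_client.detect_labels.return_value = {"Labels": []}
no_tags = tag_image(empty_client, b"img")
print(no_tags)  # []

# Case: AWS raises -> the exception reaches the caller
failing_client = MagicMock()
failing_client.detect_labels.side_effect = RuntimeError("Rekognition unavailable")
try:
    tag_image(failing_client, b"img")
    raised = False
except RuntimeError:
    raised = True
print(raised)  # True
```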
detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
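Those three mocks compose into a runnable stand-in for the whole pipeline. The stub names and return values come from the list above; the signatures and the run_pipeline wrapper are assumptions for illustration.

```python
def detect_garments(image_bytes):
    # Mock of AWS Rekognition: one normalized bounding box.
    return [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]

def crop_by_bounding_box(image_bytes, box):
    # Mock of the cropping step: a dummy placeholder object.
    return "cropped_image"

def get_tags_from_clip(cropped_image):
    # Mock of CLIP tagging.
    return ["T-shirt", "Cotton", "Casual"]

def run_pipeline(image_bytes):
    # Assumed wrapper: detect -> crop each box -> tag each crop.
    tags = []
    for box in detect_garments(image_bytes):
        cropped = crop_by_bounding_box(image_bytes, box)
        tags.extend(get_tags_from_clip(cropped))
    return tags

print(run_pipeline(b"fake-image"))  # ['T-shirt', 'Cotton', 'Casual']
```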
8. Run Testing for CLIP Output (30 May)
Next Steps:
Evaluate CLIP's tagging accuracy on sample clothing images
Fine-tune the tagging system for better recommendations
Test the backend integration with real-time user data
Set up monitoring for model performance
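For the accuracy evaluation, one simple option is set-based precision and recall of the predicted tags against human labels. This is only a sketch of one possible metric, not the project's chosen evaluation method.

```python
def tag_precision_recall(predicted, ground_truth):
    """Set-based precision/recall of predicted tags vs. human labels."""
    pred, truth = set(predicted), set(ground_truth)
    true_positives = len(pred & truth)
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

p, r = tag_precision_recall(
    ["T-shirt", "Cotton", "Casual"],      # CLIP output
    ["T-shirt", "Casual", "Summer"],      # human labels
)
print(p, r)  # precision and recall are each 2/3 here
```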
Front-end demo