
FastMCP_RecSys

by attarmau

This is a CLIP-based fashion recommender with MCP: users upload clothing images and receive tags and recommendations based on visual analysis.

📌 Sample Components for UI

  1. Image upload
  2. Submit button
  3. Display clothing tags + recommendations

Mockup

A user uploads a clothing image → YOLO detects clothing → CLIP encodes → Recommend similar
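The flow above can be sketched end to end in Python. This is a hedged stand-in, not the project's actual code: the function names, dummy embeddings, and catalogue below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: detect_clothing stands in for YOLO/Rekognition,
# encode_crop for CLIP, and recommend is a cosine-similarity lookup over
# a catalogue of embeddings. None of these names come from the repo.

def detect_clothing(image_bytes):
    """Return bounding boxes as (left, top, width, height) fractions."""
    return [(0.1, 0.1, 0.5, 0.5)]  # dummy box

def encode_crop(image_bytes, box):
    """Return a unit-norm embedding for the crop."""
    vec = np.array([1.0, 1.0, 1.0, 1.0])
    return vec / np.linalg.norm(vec)

def recommend(query, catalogue, k=2):
    """Indices of the k catalogue items most cosine-similar to the query."""
    scores = catalogue @ query  # rows are unit-norm embeddings
    return list(np.argsort(scores)[::-1][:k])

catalogue = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.5, 0.5, 0.5],  # same direction as the query embedding
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
boxes = detect_clothing(b"fake-image")
query = encode_crop(b"fake-image", boxes[0])
print(recommend(query, catalogue))  # catalogue item 1 ranks first
```

Because the query embedding is unit-norm, a plain dot product against unit-norm catalogue rows is exactly cosine similarity, which is why item 1 (the identical direction) wins.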

Folder Structure

/project-root
│
├── /backend
│   ├── Dockerfile
│   ├── /app
│   │   ├── /aws
│   │   │   └── rekognition_wrapper.py   # AWS Rekognition logic
│   │   ├── /utils
│   │   │   └── image_utils.py           # Bounding box crop utils
│   │   ├── /controllers
│   │   │   └── clothing_detector.py     # Coordinates Rekognition + cropping
│   │   ├── /tests
│   │   │   ├── test_rekognition_wrapper.py
│   │   │   └── test_clothing_tagging.py
│   │   ├── server.py                    # FastAPI app code
│   │   ├── /routes
│   │   │   └── clothing_routes.py
│   │   ├── /controllers
│   │   │   ├── clothing_controller.py
│   │   │   ├── clothing_tagging.py
│   │   │   └── tag_extractor.py         # Pending: define core CLIP functionality
│   │   ├── /schemas
│   │   │   └── clothing_schemas.py
│   │   ├── /config
│   │   │   ├── tag_list_en.py           # Tool for mapping: https://jsoncrack.com/editor
│   │   │   ├── database.py
│   │   │   ├── settings.py
│   │   │   └── api_keys.py
│   │   └── requirements.txt
│   └── .env
│
├── /frontend
│   ├── Dockerfile
│   ├── package.json
│   ├── package-lock.json
│   ├── /public
│   │   └── index.html
│   ├── /src
│   │   ├── /components
│   │   │   ├── ImageUpload.jsx
│   │   │   ├── DetectedTags.jsx
│   │   │   └── Recommendations.jsx
│   │   ├── /utils
│   │   │   └── api.js
│   │   ├── App.js                       # Main React component
│   │   ├── index.js
│   │   ├── index.css
│   │   ├── tailwind.config.js
│   │   └── postcss.config.js
│   └── .env
│
├── docker-compose.yml
└── README.md

Quick Start Guide

Step 1: Clone the GitHub Project

Step 2: Set Up the Python Environment

python -m venv venv
source venv/bin/activate   # On macOS or Linux
venv\Scripts\activate      # On Windows

Step 3: Install Dependencies

pip install -r requirements.txt

Step 4: Start the FastAPI Server (Backend)

uvicorn backend.app.server:app --reload

Once the server is running and the database is connected, you should see the following message in the console:

Database connected
INFO:     Application startup complete.

Step 5: Install Frontend Dependencies

From the frontend directory, run:

npm install

Step 6: Start the Development Server (Frontend)

npm start

Once running, the server logs a confirmation and opens the app in your browser: http://localhost:3000/

What’s completed so far:

  1. FastAPI server is up and running (24 Apr)
  2. Database connection is set up (24 Apr)
  3. Backend architecture is functional (24 Apr)
  4. Basic front-end UI for uploading pictures (25 Apr)

5. Mock testing for AWS Rekognition -> bounding box (15 May)

PYTHONPATH=. pytest backend/app/tests/test_rekognition_wrapper.py
  • Tested Rekognition integration logic independently using a mock → verified it correctly extracts bounding boxes only when labels match the garment set
  • Confirmed that the folder structure and PYTHONPATH=. work smoothly with pytest from the project root
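A mocked Rekognition test of this kind can be sketched with unittest.mock, so no AWS call is made. The helper name, garment label set, and response shape below are assumptions for illustration; the response structure mirrors Rekognition's DetectLabels output.

```python
from unittest.mock import MagicMock

GARMENT_LABELS = {"Shirt", "Dress", "Pants"}  # assumed garment set

def extract_garment_boxes(rekognition_client, image_bytes):
    """Keep bounding boxes only when the label is in the garment set."""
    response = rekognition_client.detect_labels(Image={"Bytes": image_bytes})
    boxes = []
    for label in response["Labels"]:
        if label["Name"] in GARMENT_LABELS:
            boxes.extend(inst["BoundingBox"] for inst in label.get("Instances", []))
    return boxes

# Mock the boto3 client: one garment label and one non-garment label.
mock_client = MagicMock()
mock_client.detect_labels.return_value = {
    "Labels": [
        {"Name": "Shirt",
         "Instances": [{"BoundingBox": {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}}]},
        {"Name": "Person",
         "Instances": [{"BoundingBox": {"Left": 0.0, "Top": 0.0, "Width": 1.0, "Height": 1.0}}]},
    ]
}
print(extract_garment_boxes(mock_client, b"fake"))  # only the Shirt box survives
```

The same mock drops straight into a pytest function: assert on the returned list and, if useful, on `mock_client.detect_labels.call_count`.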

6. Mock Testing for AWS Rekognition -> CLIP (20 May)

PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
  • Detecting garments using AWS Rekognition
  • Cropping the image around detected bounding boxes
  • Tagging the cropped image using CLIP
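The cropping step deserves a note: Rekognition bounding boxes use normalized coordinates (fractions of the image width and height), so they must be scaled to pixels before cropping. A minimal sketch with Pillow, assuming a `crop_by_bounding_box` helper like the one in image_utils.py:

```python
from PIL import Image

def crop_by_bounding_box(img, box):
    """Crop using Rekognition-style normalized coordinates."""
    w, h = img.size
    left = int(box["Left"] * w)
    top = int(box["Top"] * h)
    right = int((box["Left"] + box["Width"]) * w)
    bottom = int((box["Top"] + box["Height"]) * h)
    return img.crop((left, top, right, bottom))

# A 200x100 dummy image cropped with the sample box from the tests.
img = Image.new("RGB", (200, 100))
crop = crop_by_bounding_box(img, {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5})
print(crop.size)  # (100, 50)
```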

7. Mock testing for the full image tagging pipeline (image bytes → AWS Rekognition detects garments → crop images → CLIP predicts tags) + error handling (25 May)

Negative Test Case     | Description
-----------------------|--------------------------------------------------------------------------
No Detection Result    | AWS doesn't detect any garments — should return an empty list.
Image Not Clothing     | CLIP returns vague or empty tags — verify fallback behavior.
AWS Returns Exception  | Simulate rekognition.detect_labels throwing an error — check try-except.
Corrupted Image File   | Simulate a broken (non-JPEG) image — verify it raises an error or gives a hint.
PYTHONPATH=. pytest backend/app/tests/test_clothing_tagging.py
  • detect_garments: simulates AWS Rekognition returning one bounding box: {"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}
  • crop_by_bounding_box: simulates the cropping step returning a dummy "cropped_image" object
  • get_tags_from_clip: simulates CLIP returning a list of tags: ["T-shirt", "Cotton", "Casual"]
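The three stubs above compose into a pipeline test without touching AWS or CLIP. This sketch injects the stage functions as parameters (an assumed structure; the real pipeline may patch module-level functions instead), which also makes the "no detection" negative case easy to cover:

```python
# Hypothetical pipeline: each stage is injected so tests can replace it.
def tag_image(image_bytes, detect, crop, tag):
    boxes = detect(image_bytes)
    if not boxes:  # negative case: no garments detected -> empty list
        return []
    tags = []
    for box in boxes:
        tags.extend(tag(crop(image_bytes, box)))
    return tags

# Stubs mirroring the mocked stages described above.
detect = lambda img: [{"Left": 0.1, "Top": 0.1, "Width": 0.5, "Height": 0.5}]
crop = lambda img, box: "cropped_image"
tag = lambda cropped: ["T-shirt", "Cotton", "Casual"]

print(tag_image(b"fake", detect, crop, tag))          # ['T-shirt', 'Cotton', 'Casual']
print(tag_image(b"fake", lambda img: [], crop, tag))  # [] when nothing is detected
```

The exception and corrupted-image cases follow the same pattern: pass a `detect` stub that raises, and assert the pipeline's try-except converts it into the expected error response.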

8. Run Testing for CLIP Output (30 May)

python3 -m venv venv
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
python -m backend.app.tests.test_tag_extractor

Next Steps:

  1. Evaluate CLIP’s tagging accuracy on sample clothing images
  2. Fine-tune the tagging system for better recommendations
  3. Test the backend integration with real-time user data
  4. Set up monitoring for model performance
  5. Front-end demo
