Enables an automated MLOps pipeline for Stable Diffusion model fine-tuning using multiple Google Cloud services, including Vertex AI, Cloud Storage, Cloud Build, PubSub, Firestore, Cloud Run, and Cloud Functions.
Handles image storage for training data, maintains predefined bucket paths for uploads, and stores compiled pipeline artifacts for Stable Diffusion fine-tuning jobs.
Used to create notebooks that outline pipeline workflows and components for Stable Diffusion model fine-tuning on Vertex AI.
Used in cloud functions to subscribe to topics and trigger Vertex AI pipeline jobs for Stable Diffusion model fine-tuning.
Provides the frontend portal interface for uploading training images, deployed as a Cloud Run service to interact with the backend processing pipeline.
Handles compiled pipeline definitions that are stored in Google Cloud Storage and used to trigger Vertex AI training jobs.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@ followed by the MCP server name and your instructions, e.g., "@sd-for-designers finetune a stable diffusion model with my uploaded images"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
sd-for-designers
A fully automated workflow for triggering, running & managing fine-tuning, training & deploying custom Stable Diffusion models using Vertex AI
Description
Sd-aa-S is a fully automated MLOps pipeline for triggering, managing & tracking Stable Diffusion fine-tuning jobs on GCP, built from components such as Google Cloud Storage, Cloud Build, Cloud Pub/Sub, Firestore, Cloud Run, Cloud Functions and Vertex AI. It aims to simplify the ML workflows for tuning Stable Diffusion using different techniques, starting with DreamBooth; support for LoRA, ControlNet etc. is coming soon. The project is targeted at ML/Data Engineers, Data Scientists & anybody else interested in, or on the road towards, building a platform for fine-tuning Stable Diffusion at scale.
Three Parts
1. The App part
1. Set up your Cloud Environment
2. Create a backend service for handling uploads to a GCS bucket
- Receive images from clients and store them under a predefined GCS bucket path
- Track the status of individual uploads in a Firestore collection
- Track the status of the overall upload job in a separate Firestore collection
- Once the job is completed, publish the jobID as the message on a predefined Pub/Sub topic
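The backend behaviour described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the project's actual service: the bucket path template, the status values and the injected store/publish callables are assumptions standing in for the real GCS, Firestore and Pub/Sub clients.

```python
# Hypothetical sketch of the upload backend. BUCKET_PATH and the status
# strings are assumed names, not the project's actual values.
from dataclasses import dataclass, field

BUCKET_PATH = "training-images/{job_id}/{filename}"  # predefined GCS path

@dataclass
class UploadJob:
    job_id: str
    expected: int
    statuses: dict = field(default_factory=dict)  # filename -> status

    def record(self, filename: str, ok: bool) -> None:
        # Per-image status, mirrored to a Firestore uploads collection.
        self.statuses[filename] = "done" if ok else "failed"

    @property
    def complete(self) -> bool:
        # Overall job status, mirrored to a separate Firestore collection.
        return (len(self.statuses) == self.expected
                and all(s == "done" for s in self.statuses.values()))

def handle_upload(job: UploadJob, filename: str, data: bytes,
                  store, publish) -> None:
    """store(path, data) writes to GCS; publish(msg) posts to Pub/Sub.
    Both are injected so the sketch runs without GCP credentials."""
    path = BUCKET_PATH.format(job_id=job.job_id, filename=filename)
    try:
        store(path, data)
        job.record(filename, ok=True)
    except Exception:
        job.record(filename, ok=False)
    if job.complete:
        publish(job.job_id)  # the jobID is the Pub/Sub message payload
```

In the real service, store/publish would wrap the google-cloud-storage and google-cloud-pubsub clients; keeping them injectable makes the completion logic easy to unit-test.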
3. Deploy this backend service as a Cloud Run endpoint using Cloud Build
4. Create a frontend portal to upload images using ReactJs
5. Deploy the frontend service on Cloud Run
2. The Vertex AI part
1. Set up your Cloud Environment
2. Create a new custom container artifact for running the pipeline components
3. Create a new custom container artifact for running the training job itself
4. Create a Jupyter notebook outlining the Pipeline flow & components
5. Compile a YAML file from a Vertex AI Workbench and store the precompiled YAML file under a GCS bucket path
3. The Plumbing part
1. Set up your Cloud Environment
2. Create a Cloud Function that gets triggered every time the jobID is published on a predefined topic (from the 1st part)
3. Within the Cloud Function, the Python code subscribes to the topic and triggers a Vertex AI pipeline job using the precompiled YAML file (from the 2nd part)
4. The pipeline job fine-tunes the Stable Diffusion model using DreamBooth, uploads the new custom model to the Model Registry & deploys an endpoint
5. The job also updates Firestore with the status of the pipeline job from start to end
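The plumbing steps above can be sketched as a Pub/Sub-triggered handler. This is a hedged illustration, not the project's actual function: the template path is an assumed placeholder, and the submit callable stands in for `aiplatform.PipelineJob(**request).submit()` from the google-cloud-aiplatform SDK so the sketch runs without GCP credentials.

```python
# Hypothetical Cloud Function sketch. TEMPLATE_PATH and the parameter
# names are assumptions, not the project's actual values.
import base64

TEMPLATE_PATH = "gs://my-sd-pipelines/dreambooth_pipeline.yaml"  # precompiled YAML (assumed path)

def decode_job_id(event: dict) -> str:
    # Pub/Sub-triggered Cloud Functions receive the message base64-encoded.
    return base64.b64decode(event["data"]).decode("utf-8")

def build_pipeline_request(job_id: str) -> dict:
    # Keyword arguments that would be handed to aiplatform.PipelineJob(...)
    # in the real function.
    return {
        "display_name": f"sd-dreambooth-{job_id}",
        "template_path": TEMPLATE_PATH,
        "parameter_values": {"job_id": job_id},
        "enable_caching": False,
    }

def on_message(event: dict, submit) -> str:
    """submit(request) wraps the Vertex AI pipeline submission; injected
    here so the decode/build logic stays testable on its own."""
    job_id = decode_job_id(event)
    submit(build_pipeline_request(job_id))
    return job_id
```

In deployment, submit would create and launch the PipelineJob, and the pipeline's own components would write their start/finish status back to Firestore as described in step 5.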