Description
Hello, I have reviewed the content you wrote in the README for text-guided image generation. In the parameters of the second step, 'Finetune Diffusion Model with Context-Aware Adapter', there seems to be no option to load the Context-Aware Adapter pretrained in the first step.
All the parameters are here:

```shell
CUDA_VISIBLE_DEVICES=0 finetune_diffusion.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" \
  --train_data_dir=./train2017 \
  --use_ema \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --train_batch_size=32 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --max_train_steps=50000 \
  --checkpointing_steps=10000 \
  --learning_rate=2e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="./output"
```
So I would like to know: how does the Context-Aware Adapter model work in this step? Or which of the pretrained models mentioned in the parameters should be replaced by the Context-Aware Adapter? Thank you for your help!
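For context, here is a minimal sketch of what I would expect the missing step to look like. The class name `ContextAwareAdapter`, the projection layout, and the checkpoint path are my own guesses for illustration, not anything from this repository:

```python
import torch

class ContextAwareAdapter(torch.nn.Module):
    """Hypothetical adapter: refines the conditioning embedding
    before it is fed to the diffusion UNet (my assumption)."""

    def __init__(self, dim=1024):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, context):
        # Residual refinement of the context embedding.
        return context + self.proj(context)

def load_pretrained_adapter(path, dim=1024):
    """Load adapter weights saved after step 1.
    The checkpoint path/format here is hypothetical."""
    adapter = ContextAwareAdapter(dim)
    adapter.load_state_dict(torch.load(path, map_location="cpu"))
    adapter.eval()
    return adapter
```

In other words, I expected `finetune_diffusion.py` to expose something like an adapter-checkpoint argument that performs a load of this kind before training starts, but I cannot find one in the parameter list above.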