This repository was archived by the owner on Sep 26, 2025. It is now read-only.

Commit 5aef140

⭐ Add example code for Stable Diffusion(1.5) (#409)
1 parent cf247a1 commit 5aef140

File tree

1 file changed: +35 −0 lines changed
  • src/refiners/foundationals/latent_diffusion/stable_diffusion_1


src/refiners/foundationals/latent_diffusion/stable_diffusion_1/model.py

Lines changed: 35 additions & 0 deletions
````diff
@@ -29,6 +29,41 @@ class StableDiffusion_1(LatentDiffusionModel):
         unet: The U-Net model.
         clip_text_encoder: The text encoder.
         lda: The image autoencoder.
+
+    Example:
+        ```py
+        import torch
+
+        from refiners.fluxion.utils import manual_seed, no_grad
+        from refiners.foundationals.latent_diffusion.stable_diffusion_1 import StableDiffusion_1
+
+        # Load SD
+        sd15 = StableDiffusion_1(device="cuda", dtype=torch.float16)
+
+        sd15.clip_text_encoder.load_from_safetensors("sd1_5.text_encoder.safetensors")
+        sd15.unet.load_from_safetensors("sd1_5.unet.safetensors")
+        sd15.lda.load_from_safetensors("sd1_5.autoencoder.safetensors")
+
+        # Hyperparameters
+        prompt = "a cute cat, best quality, high quality"
+        negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"
+        seed = 42
+
+        sd15.set_inference_steps(50)
+
+        with no_grad():  # Disable gradient calculation for memory-efficient inference
+            clip_text_embedding = sd15.compute_clip_text_embedding(text=prompt, negative_text=negative_prompt)
+            manual_seed(seed)
+
+            x = sd15.init_latents((512, 512)).to(sd15.device, sd15.dtype)
+
+            # Diffusion process
+            for step in sd15.steps:
+                x = sd15(x, step=step, clip_text_embedding=clip_text_embedding)
+
+            predicted_image = sd15.lda.decode_latents(x)
+            predicted_image.save("output.png")
+        ```
     """

     unet: SD1UNet
````
0 commit comments
