🤗 Generate images with diffusion models:

```sh
diffused <model> <prompt>
```

For example:

```sh
pipx run diffused segmind/tiny-sd "red apple"
pipx run diffused OFA-Sys/small-stable-diffusion-v0 "cat wizard" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png
pipx run diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png --mask-image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png
```

## CLI

Install the CLI:
```sh
pipx install diffused
```

### model

Required (str): The diffusion model.

```sh
diffused segmind/SSD-1B "An astronaut riding a green horse"
```

See [segmind/SSD-1B](https://huggingface.co/segmind/SSD-1B).

### prompt

Required (str): The text prompt.
```sh
diffused dreamlike-art/dreamlike-photoreal-2.0 "cinematic photo of Godzilla eating sushi with a cat in a izakaya, 35mm photograph, film, professional, 4k, highly detailed"
```

### --negative-prompt

Optional (str): What to exclude from the output image.

```sh
diffused stabilityai/stable-diffusion-2 "photo of an apple" --negative-prompt="blurry, bright photo, red"
```

With the short option:

```sh
diffused stabilityai/stable-diffusion-2 "photo of an apple" -np="blurry, bright photo, red"
```

### --image

Optional (str): The input image path or URL. The initial image is used as a starting point for an image-to-image diffusion process.

```sh
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in a desert" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png
```

With the short option:
```sh
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in a desert" -i=./local/image.png
```

### --mask-image

Optional (str): The mask image path or URL. Inpainting replaces or edits specific areas of an image; the mask image marks the areas to repaint.

```sh
diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png --mask-image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png
```

With the short option:

```sh
diffused kandinsky-community/kandinsky-2-2-decoder-inpaint "black cat" -i=inpaint.png -mi=inpaint_mask.png
```
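If you need to make a mask yourself, any image editor will do; the sketch below is one hypothetical way to generate a mask with Pillow, assuming the common convention that white pixels mark the region to repaint and black pixels are kept as-is.

```py
# make_mask.py -- illustrative helper, not part of diffused.
# White = area to repaint, black = area to keep (the usual inpainting convention).
from PIL import Image, ImageDraw

width, height = 512, 512
mask = Image.new("L", (width, height), 0)     # start fully black (keep everything)
draw = ImageDraw.Draw(mask)
draw.ellipse((128, 128, 384, 384), fill=255)  # white ellipse: repaint this region
mask.save("inpaint_mask.png")
```

Pass the result with `--mask-image=inpaint_mask.png` alongside `--image`.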
### --output

Optional (str): The output image filename.

```sh
diffused dreamlike-art/dreamlike-photoreal-2.0 "cat eating sushi" --output=cat.jpg
```

With the short option:

```sh
diffused dreamlike-art/dreamlike-photoreal-2.0 "cat eating sushi" -o=cat.jpg
```

### --width

Optional (int): The output image width in pixels.

```sh
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" --width=1024
```

With the short option:

```sh
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" -W=1024
```

### --height

Optional (int): The output image height in pixels.

```sh
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" --height=1024
```

With the short option:

```sh
diffused stabilityai/stable-diffusion-xl-base-1.0 "dog in space" -H=1024
```

### --number

Optional (int): The number of output images. Defaults to 1.

```sh
diffused segmind/tiny-sd apple --number=2
```

With the short option:

```sh
diffused segmind/tiny-sd apple -n=2
```

### --guidance-scale

Optional (float): How much the prompt influences the output image. A lower value leads to more deviation and creativity, whereas a higher value follows the prompt more closely.
```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut in a jungle" --guidance-scale=7.5
```

With the short option:

```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut in a jungle" -gs=7.5
```

### --inference-steps

Optional (int): The number of diffusion steps used during image generation. More steps generally yield higher quality at the cost of longer generation time.

```sh
diffused CompVis/stable-diffusion-v1-4 "astronaut rides horse" --inference-steps=50
```

With the short option:

```sh
diffused CompVis/stable-diffusion-v1-4 "astronaut rides horse" -is=50
```

### --strength

Optional (float): The amount of noise added to the input image, which determines how much the output image deviates from the original. Strength applies to image-to-image and inpainting tasks and acts as a multiplier on the number of denoising steps (`--inference-steps`).

```sh
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in swamp" --image=https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png --strength=0.5
```

With the short option:

```sh
diffused stabilityai/stable-diffusion-xl-refiner-1.0 "astronaut in swamp" -i=image.png -s=0.5
```
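Because strength multiplies the number of inference steps, you can estimate how many denoising steps actually run; a rough, hypothetical illustration (exact rounding may vary by pipeline):

```py
# Back-of-the-envelope: strength scales --inference-steps.
inference_steps = 50
strength = 0.5
effective_steps = int(inference_steps * strength)
print(effective_steps)  # 25 denoising steps actually run
```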
### --seed

Optional (int): The seed for generating random numbers, ensuring reproducibility in image generation pipelines.

```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "Labrador in the style of Vermeer" --seed=0
```

With the short option:

```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "Labrador in the style of Vermeer" -S=1337
```

### --device

Optional (str): The device to accelerate the computation (`cpu`, `cuda`, `mps`, `xpu`, `xla`, or `meta`).

```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut on earth, 8k" --device=cuda
```

With the short option:

```sh
diffused stable-diffusion-v1-5/stable-diffusion-v1-5 "astronaut on earth, 8k" -d=cuda
```
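If you are unsure which `--device` value your machine supports, a quick check with PyTorch (a dependency of diffusion pipelines) can help. This is an illustrative sketch, not something diffused ships:

```py
# pick_device.py -- hypothetical helper for choosing a --device value.
import torch

if torch.cuda.is_available():
    device = "cuda"  # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = "mps"   # Apple Silicon GPU
else:
    device = "cpu"
print(device)
```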
### --no-safetensors

Optional (bool): Whether to disable safetensors.

```sh
diffused runwayml/stable-diffusion-v1-5 "astronaut on mars" --no-safetensors
```

### --version

Show the program's version number and exit:

```sh
diffused --version # diffused -v
```

### --help

Show the help message and exit:

```sh
diffused --help # diffused -h
```

## API

Create a virtual environment:
```sh
python3 -m venv .venv
```

Activate the virtual environment:

```sh
source .venv/bin/activate
```

Install the package:

```sh
pip install diffused
```

Generate an image with a model and a prompt:

```py
# script.py
from diffused import generate

images = generate(model="segmind/tiny-sd", prompt="apple")
images[0].save("apple.png")
```

Run the script:
```sh
python script.py
```

Open the image:

```sh
open apple.png
```

See the API documentation.
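`generate()` returns a list of images, so you can save every image in the list rather than just the first; a minimal sketch, assuming only the `model` and `prompt` arguments shown above:

```py
# save_all.py -- minimal sketch using only the arguments documented above.
from diffused import generate

images = generate(model="segmind/tiny-sd", prompt="red apple")
for index, image in enumerate(images):
    image.save(f"apple-{index}.png")
```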