Stable Diffusion is an innovative generative AI technique that allows us to generate and manipulate images in interesting ways, including generating images from text and restoring missing parts of pictures (inpainting)!
Stable Diffusion v2 provides major improvements over previous versions, including training on more data, longer training, and less restrictive filtering of the dataset. All of these features give us promising results for a wide range of input text prompts!
Note: This is a shorter version of the stable-diffusion-v2-text-to-image notebook, intended for demo purposes and for getting started quickly. This version does not contain the full implementation of the helper utilities needed to convert the models from PyTorch to OpenVINO, nor of the OpenVINO OVStableDiffusionPipeline class, directly within the notebook. If you would like to see the full implementation of Stable Diffusion for text to image, please visit stable-diffusion-v2-text-to-image.
This is a self-contained example that relies solely on its own code.
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start. For details, please refer to Installation Guide.
To work with Stable Diffusion v2, we will use Hugging Face's Diffusers library. To experiment with Stable Diffusion models, Diffusers exposes the StableDiffusionPipeline and StableDiffusionInpaintPipeline, similar to the other Diffusers pipelines.
%pip install -q "diffusers>=0.14.0" "openvino>=2023.1.0" "transformers>=4.31" accelerate "torch>=2.1" Pillow opencv-python --extra-index-url https://download.pytorch.org/whl/cpu
Stable Diffusion pipelines for both Text to Image and Inpainting consist of three important parts: a text encoder to create conditions from a text prompt, a U-Net for step-by-step denoising of the latent image representation, and an autoencoder (VAE) for decoding the latent space back to an image.
Depending on the pipeline, the parameters for these parts can differ, which we'll explore in this demo!
Let's start by retrieving these components from HuggingFace!
The code below demonstrates how to create a StableDiffusionPipeline using the stabilityai/stable-diffusion-2-1-base model.
# Retrieve the Text to Image Stable Diffusion pipeline components
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base").to("cpu")
# To reduce memory consumption, get all components from the pipeline independently
text_encoder = pipe.text_encoder
text_encoder.eval()
unet = pipe.unet
unet.eval()
vae = pipe.vae
vae.eval()
conf = pipe.scheduler.config
del pipe
Now that we've retrieved the three parts for both of these pipelines, we need to convert each original PyTorch model to OpenVINO format and save it:
ov_model_part = ov.convert_model(model_part, example_input=input_data)
ov.save_model(ov_model_part, xml_file_path)
We can then run our Stable Diffusion v2 text to image and inpainting pipelines in OpenVINO on our own data!
from pathlib import Path
# Define a dir to save text-to-image models
txt2img_model_dir = Path("sd2.1")
txt2img_model_dir.mkdir(exist_ok=True)
from implementation.conversion_helper_utils import (
convert_encoder,
convert_unet,
convert_vae_decoder,
convert_vae_encoder,
)
# Convert the Text-to-Image models from PyTorch -> OpenVINO
# 1. Convert the Text Encoder
txt_encoder_ov_path = txt2img_model_dir / "text_encoder.xml"
convert_encoder(text_encoder, txt_encoder_ov_path)
# 2. Convert the U-Net
unet_ov_path = txt2img_model_dir / "unet.xml"
convert_unet(unet, unet_ov_path, num_channels=4, width=96, height=96)
# 3. Convert the VAE encoder
vae_encoder_ov_path = txt2img_model_dir / "vae_encoder.xml"
convert_vae_encoder(vae, vae_encoder_ov_path, width=768, height=768)
# 4. Convert the VAE decoder
vae_decoder_ov_path = txt2img_model_dir / "vae_decoder.xml"
convert_vae_decoder(vae, vae_decoder_ov_path, width=96, height=96)
Select the device from the dropdown list for running inference using OpenVINO.
import requests
r = requests.get(
url="https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py",
)
open("notebook_utils.py", "w").write(r.text)
from notebook_utils import device_widget
device = device_widget()
device
Dropdown(description='Device:', index=2, options=('CPU', 'GPU', 'AUTO'), value='AUTO')
Let's create instances of our OpenVINO Model for Text to Image.
import openvino as ov
core = ov.Core()
text_enc = core.compile_model(txt_encoder_ov_path, device.value)
unet_model = core.compile_model(unet_ov_path, device.value)
vae_encoder = core.compile_model(vae_encoder_ov_path, device.value)
vae_decoder = core.compile_model(vae_decoder_ov_path, device.value)
Next, we will define a few key elements to create the inference pipeline, as depicted in the diagram below:
As part of the OVStableDiffusionPipeline() class:
The stable diffusion pipeline takes both a latent seed and a text prompt as input. The latent seed is used to generate random latent image representations, and the text prompt is transformed to text embeddings via OpenAI's CLIP.
Next, the U-Net model iteratively denoises the random latent image representations while being conditioned on the text embeddings. The output of the U-Net, the noise residual, is used to compute a denoised latent image representation via a scheduler algorithm. In this case, we use the LMSDiscreteScheduler.
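To make this loop concrete, here is a rough sketch of a guidance-scaled denoising loop. Treat the variable names, the input handling of the compiled U-Net, and the guidance_scale value as assumptions for illustration; the actual logic lives in implementation/ov_stable_diffusion_pipeline.py.
import numpy as np
import torch
# Illustrative sketch of the denoising loop (not the actual pipeline code).
def denoise_sketch(unet, scheduler, text_embeddings, latents, num_steps=25, guidance_scale=7.5):
    scheduler.set_timesteps(num_steps)
    latents = latents * float(scheduler.init_noise_sigma)
    for t in scheduler.timesteps:
        # One batched U-Net pass covers both the unconditional and the
        # text-conditioned branches used for classifier-free guidance.
        latent_input = np.concatenate([latents] * 2)
        latent_input = scheduler.scale_model_input(torch.from_numpy(latent_input), t).numpy()
        noise_pred = unet([latent_input, float(t), text_embeddings])[0]
        noise_uncond, noise_text = np.split(noise_pred, 2)
        noise_pred = noise_uncond + guidance_scale * (noise_text - noise_uncond)
        # The scheduler turns the noise residual into the next, less noisy latents.
        latents = scheduler.step(torch.from_numpy(noise_pred), t, torch.from_numpy(latents)).prev_sample.numpy()
    return latents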
from diffusers.schedulers import LMSDiscreteScheduler
from transformers import CLIPTokenizer
from implementation.ov_stable_diffusion_pipeline import OVStableDiffusionPipeline
scheduler = LMSDiscreteScheduler.from_config(conf)
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
ov_pipe = OVStableDiffusionPipeline(
tokenizer=tokenizer,
text_encoder=text_enc,
unet=unet_model,
vae_encoder=vae_encoder,
vae_decoder=vae_decoder,
scheduler=scheduler,
)
/home/ea/work/openvino_notebooks/notebooks/stable-diffusion-v2/implementation/ov_stable_diffusion_pipeline.py:10: FutureWarning: Importing `DiffusionPipeline` or `ImagePipelineOutput` from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead. from diffusers.pipeline_utils import DiffusionPipeline
Now, let's define some text prompts for image generation and run our inference pipeline.
We can also change our random generator seed for latent state initialization and number of steps (higher steps = more precise results).
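For instance, the seed might be used to initialize the latent state roughly as follows (a sketch based on an assumption about the pipeline internals; a 768x768 image corresponds to 4-channel, 96x96 latents):
import numpy as np
# Illustrative: a fixed seed yields reproducible initial latents.
def init_latents_sketch(seed, height=768, width=768):
    rng = np.random.RandomState(seed)
    # Latents are 8x smaller spatially than the output image and have 4 channels.
    return rng.standard_normal((1, 4, height // 8, width // 8)).astype(np.float32)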
An example prompt, such as "valley in the Alps at sunset, epic vista, beautiful landscape, 4k, 8k", is pre-filled in the widget below.
To improve image generation quality, we can use negative prompting. While a positive prompt steers diffusion toward the images associated with it, a negative prompt declares undesired concepts for the generated image. For example, if we want colorful and bright images, a grayscale result would be undesirable; in this case, grayscale can be treated as a negative prompt. Positive and negative prompts are on equal footing: you can always use one with or without the other. More explanation of how it works can be found in this article.
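Under the hood, the positive and negative prompts could be encoded and stacked roughly like this (a sketch; the tokenizer and encoder call details are assumptions, and the actual logic lives inside OVStableDiffusionPipeline):
import numpy as np
# Illustrative sketch of prompt encoding for classifier-free guidance.
def encode_prompts_sketch(tokenizer, text_encoder, prompt, negative_prompt=""):
    def encode(text):
        tokens = tokenizer(
            text,
            padding="max_length",
            max_length=tokenizer.model_max_length,
            truncation=True,
            return_tensors="np",
        )
        # The compiled text encoder returns the hidden states as its first output.
        return text_encoder(tokens.input_ids)[0]
    # The negative prompt takes the place of the usual empty unconditional prompt;
    # guidance then steers generation away from these concepts.
    uncond_embeddings = encode(negative_prompt)
    text_embeddings = encode(prompt)
    return np.concatenate([uncond_embeddings, text_embeddings])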
import ipywidgets as widgets
text_prompt = widgets.Textarea(
value="valley in the Alps at sunset, epic vista, beautiful landscape, 4k, 8k",
description="positive prompt",
layout=widgets.Layout(width="auto"),
)
negative_prompt = widgets.Textarea(
value="frames, borderline, text, charachter, duplicate, error, out of frame, watermark, low quality, ugly, deformed, blur",
description="negative prompt",
layout=widgets.Layout(width="auto"),
)
num_steps = widgets.IntSlider(min=1, max=50, value=25, description="steps:")
seed = widgets.IntSlider(min=0, max=10000000, description="seed: ", value=42)
widgets.VBox([text_prompt, negative_prompt, seed, num_steps])
VBox(children=(Textarea(value='valley in the Alps at sunset, epic vista, beautiful landscape, 4k, 8k', descrip…
# Run inference pipeline
result = ov_pipe(
text_prompt.value,
negative_prompt=negative_prompt.value,
num_inference_steps=num_steps.value,
seed=seed.value,
)
0%| | 0/25 [00:00<?, ?it/s]
final_image = result["sample"][0]
final_image.save("result.png")
final_image