>>> from torchvision import transforms

>>> preprocess = transforms.Compose(
...     [
...         transforms.Resize((config.image_size, config.image_size)),
...         transforms.RandomHorizontalFlip(),
...         transforms.ToTensor(),
...         transforms.Normalize([0.5], [0.5]),
...     ]
... )
Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training:
>>> def transform(examples):
...     images = [preprocess(image.convert("RGB")) for image in examples["image"]]
...     return {"images": images}

>>> dataset.set_transform(transform)
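If you want to take a quick look at a few transformed samples, a minimal plotting sketch might look like the following. The matplotlib usage and the 1x4 grid here are assumptions for illustration, not part of the original guide:

>>> import matplotlib.pyplot as plt

>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4))
>>> for i, ax in enumerate(axs):
...     image = dataset[i]["images"]  # transformed tensor in [-1, 1] with shape (3, 128, 128)
...     ax.imshow((image.permute(1, 2, 0) * 0.5 + 0.5).numpy())  # undo the normalization for display
...     ax.set_axis_off()
>>> plt.show()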
Feel free to visualize the images again, for example with the sketch above, to confirm that they've been resized. Now you're ready to wrap the dataset in a DataLoader for training!
>>> import torch

>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)
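If you'd like to sanity-check the loader, you can peek at a single batch. This snippet is purely illustrative and not part of the original guide:

>>> batch = next(iter(train_dataloader))
>>> print(batch["images"].shape)  # expected: torch.Size([config.train_batch_size, 3, 128, 128])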
Create a UNet2DModel
Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel:
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel(
...     sample_size=config.image_size,  # the target image resolution
...     in_channels=3,  # the number of input channels, 3 for RGB images
...     out_channels=3,  # the number of output channels
...     layers_per_block=2,  # how many ResNet layers to use per UNet block
...     block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
...     down_block_types=(
...         "DownBlock2D",  # a regular ResNet downsampling block
...         "DownBlock2D",
...         "DownBlock2D",
...         "DownBlock2D",
...         "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
...         "DownBlock2D",
...     ),
...     up_block_types=(
...         "UpBlock2D",  # a regular ResNet upsampling block
...         "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
...         "UpBlock2D",
...         "UpBlock2D",
...         "UpBlock2D",
...         "UpBlock2D",
...     ),
... )
It is often a good idea to quickly check that the sample image shape matches the model output shape:
>>> sample_image = dataset[0]["images"].unsqueeze(0)
>>> print("Input shape:", sample_image.shape)
Input shape: torch.Size([1, 3, 128, 128])

>>> print("Output shape:", model(sample_image, timestep=0).sample.shape)
Output shape: torch.Size([1, 3, 128, 128])
Great! Next, you’ll need a scheduler to add some noise to the image.
Create a scheduler
The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates images from the noise. During training, the scheduler takes a model output, or a sample, from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule.
Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before:
>>> import torch
>>> from PIL import Image
>>> from diffusers import DDPMScheduler

>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
>>> noise = torch.randn(sample_image.shape)
>>> timesteps = torch.LongTensor([50])
>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])
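For intuition, add_noise applies the standard closed-form DDPM forward process. The sketch below recomputes the same noisy sample by hand from the scheduler's alphas_cumprod; it is purely illustrative and not something you need in your training code:

>>> # ᾱ_t (cumulative product of the alphas) at the chosen timestep
>>> alpha_bar = noise_scheduler.alphas_cumprod[timesteps]
>>> # x_t = sqrt(ᾱ_t) * x_0 + sqrt(1 - ᾱ_t) * ε
>>> manual_noisy = alpha_bar.sqrt() * sample_image + (1 - alpha_bar).sqrt() * noise
>>> torch.allclose(manual_noisy, noisy_image)  # expected to be True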
The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by:
>>> import torch.nn.functional as F

>>> noise_pred = model(noisy_image, timesteps).sample
>>> loss = F.mse_loss(noise_pred, noise)
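To see how these pieces fit together, here is a rough sketch of a single epoch of training. It assumes an AdamW optimizer with an arbitrary learning rate, and it is only an outline; it omits details such as learning-rate scheduling, mixed precision, and evaluation:

>>> optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed optimizer and learning rate

>>> for batch in train_dataloader:
...     clean_images = batch["images"]
...     # sample random noise and a random timestep for each image in the batch
...     noise = torch.randn(clean_images.shape)
...     timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],)).long()
...     # add noise to the clean images according to the schedule, then predict that noise
...     noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
...     noise_pred = model(noisy_images, timesteps).sample
...     loss = F.mse_loss(noise_pred, noise)
...     loss.backward()
...     optimizer.step()
...     optimizer.zero_grad()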