---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---

# ddpm-apes-128

![example image](example.png)

## Model description

This diffusion model was trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on a local `imagefolder` dataset.

## Intended uses & limitations

#### How to use

```python
from diffusers import DDPMPipeline
import torch

model_id = "dn-gh/ddpm-apes-128"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# load the pretrained pipeline (model weights and scheduler)
ddpm = DDPMPipeline.from_pretrained(model_id).to(device)

# run the pipeline in inference (full denoising loop)
image = ddpm().images[0]

# save the generated image
image.save("generated_image.png")
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

This model was trained for 30 epochs on 4866 images generated with [ykilcher/apes](https://huggingface.co/ykilcher/apes).

### Training hyperparameters

The following hyperparameters were used during training (a rough reproduction sketch is given after the training results below):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/dn-gh/ddpm-apes-128/tensorboard?#scalars)
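
The exact training script is not included in this repository. The snippet below is a minimal sketch of a DDPM training loop using the hyperparameters listed above; the local data directory `./apes`, the UNet configuration defaults, and the cosine-with-warmup learning-rate schedule are assumptions for illustration, and the fp16 mixed precision used in the actual run (e.g. via 🤗 Accelerate) is omitted for brevity.

```python
# Minimal DDPM training sketch (assumptions: ./apes image folder, default UNet2DModel
# config, cosine LR schedule with 500 warmup steps; fp16/Accelerate handling omitted).
import torch
import torch.nn.functional as F
from datasets import load_dataset
from torchvision import transforms
from diffusers import DDPMScheduler, UNet2DModel
from diffusers.optimization import get_cosine_schedule_with_warmup

dataset = load_dataset("imagefolder", data_dir="./apes", split="train")

preprocess = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

def transform(examples):
    return {"images": [preprocess(img.convert("RGB")) for img in examples["image"]]}

dataset.set_transform(transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = UNet2DModel(sample_size=128, in_channels=3, out_channels=3)
noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=len(loader) * 30
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

for epoch in range(30):
    for batch in loader:
        clean_images = batch["images"].to(device)

        # sample noise and a random timestep for each image, then add noise
        noise = torch.randn_like(clean_images)
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps,
            (clean_images.shape[0],), device=device,
        )
        noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

        # predict the noise and regress against the true noise
        noise_pred = model(noisy_images, timesteps).sample
        loss = F.mse_loss(noise_pred, noise)

        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```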