pony-diffusion-v2 - more anthro edition
Pony Diffusion V4 is now live!
pony-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality pony and furry SFW and NSFW images through fine-tuning.
WARNING: Compared to v1 of this model, v2 is much more capable of producing NSFW content, so it is recommended to include the 'safe' tag in your prompt, combined with a negative prompt for image features you want to suppress. The v2 model also has a slight 3D bias, so negative prompts such as '3d' or 'sfm' should be used to better match v1 outputs.
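The advice above can be sketched as a small prompt-building helper. This is illustrative only: the function name and tag lists are assumptions, not part of the model card, and the `negative_prompt` argument is only available in newer diffusers releases.

```python
def build_prompt(tags, safe=True):
    """Join tags into a prompt, prepending the 'safe' rating tag
    to steer the model away from NSFW output (per the warning above)."""
    parts = (["safe"] if safe else []) + list(tags)
    return ", ".join(parts)

# Negative prompt counteracting the slight 3d bias noted above.
NEGATIVE_PROMPT = "3d, sfm"

prompt = build_prompt(["pinkie pie", "portrait", "digital painting"])
# With a recent diffusers pipeline this would be passed as, e.g.:
# image = pipe(prompt, negative_prompt=NEGATIVE_PROMPT).images[0]
print(prompt)  # safe, pinkie pie, portrait, digital painting
```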
With special thanks to Waifu-Diffusion for providing finetuning expertise and Novel AI for providing necessary compute.
Original PyTorch Model Download Link
Real-ESRGAN Model finetuned on pony faces
Model Description
The model originally used for fine-tuning is an early finetuned checkpoint of waifu-diffusion on top of Stable Diffusion V1-4, which is a latent image diffusion model trained on LAION2B-en.
This particular checkpoint was fine-tuned with a learning rate of 5.0e-6 for 4 epochs on approximately 450k pony and furry text-image pairs (using tags from derpibooru and e621), all of which have a score greater than 250.
License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here
Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
Example Code
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline, DDIMScheduler

model_id = "AstraliteHeart/pony-diffusion-v2"
device = "cuda"

# Load the fp16 weights with the DDIM scheduler settings used for this model.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    revision="fp16",
    scheduler=DDIMScheduler(
        beta_start=0.00085,
        beta_end=0.012,
        beta_schedule="scaled_linear",
        clip_sample=False,
        set_alpha_to_one=False,
    ),
)
pipe = pipe.to(device)

prompt = "pinkie pie anthro portrait wedding dress veil intricate highly detailed digital painting artstation concept art smooth sharp focus illustration Unreal Engine 5 8K"

with autocast("cuda"):
    # Older diffusers releases return a dict keyed by "sample";
    # newer releases use pipe(prompt).images[0] instead.
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("cute_poner.png")
```
Team Members and Acknowledgements
This project would not have been possible without the incredible work by the CompVis Researchers.
- Waifu-Diffusion for helping with finetuning and providing the starting checkpoint
- Novel AI for providing compute
In order to reach us, you can join our Discord server.