---
tags:
- text-to-image
- stable-diffusion
---
A Stable Diffusion model fine-tuned to generate images of people giving a thumbs-up.
How to use it:
```py
from functools import partial

import torch
from diffusers import StableDiffusionPipeline
from torchmetrics.functional.multimodal import clip_score

# Load the fine-tuned checkpoint
model_ckpt = "raghav-gaggar/stable-diffusion-thumbs-up"
sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda")

# Generate ten images from the same prompt
prompts = ["thumbs up"] * 10
images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="numpy").images
print(images.shape)  # e.g. (10, 512, 512, 3) at the default resolution

# CLIP score measures how well the generated images match the prompt
clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16")

def calculate_clip_score(images, prompts):
    # Metric expects uint8 pixels in channels-first (NCHW) layout
    images_int = (images * 255).astype("uint8")
    score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach()
    return round(float(score), 4)

sd_clip_score = calculate_clip_score(images, prompts)
print(f"CLIP score: {sd_clip_score}")
```
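A note on the conversion inside `calculate_clip_score`: the pipeline returns floats in `[0, 1]` in channels-last (NHWC) layout, while `clip_score` expects `uint8` pixels in channels-first (NCHW) layout. A minimal NumPy-only sketch of that conversion, using random dummy data in place of real pipeline output:

```python
import numpy as np

# Dummy batch standing in for pipeline output: 2 RGB images, 64x64, floats in [0, 1]
images = np.random.rand(2, 64, 64, 3).astype("float32")

# Scale to [0, 255] and cast to uint8, as the CLIP score metric expects
images_int = (images * 255).astype("uint8")

# NHWC -> NCHW: move the channel axis in front of height and width
batch = np.transpose(images_int, (0, 3, 1, 2))
print(batch.shape)  # (2, 3, 64, 64)
```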
Sample pictures of this concept:


