---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# KranokThaiV1 🇹🇭
#### Model by CHAITron
This is a Stable Diffusion model fine-tuned on the KranokThaiV1 concept with DreamBooth.
It can be used by adding the `instance_prompt` `<knt-style> style` to your prompts.

#### Example Results
![example1](./example1.png)
**Prompt used**: "a colourful masterpiece thai cat in \<knt-style> style, high-quality, professional art, realistic, thai style"

![example2](./example2.png)
**Prompt used**: "a colourful masterpiece thai dog in \<knt-style> style, high-quality, professional art, realistic, thai style"

### 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), run it on Apple Silicon via [MPS](https://huggingface.co/docs/diffusers/optimization/mps), and/or convert it to FLAX/JAX (a sketch for MPS follows the example below).

```shell
pip install torch diffusers transformers accelerate pillow
```
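
The example below moves the pipeline to a CUDA GPU with `pipe.to("cuda")`. As a quick sanity check before running it (a minimal sketch; adapt the device if it prints `False`):

```python
import torch

# Should print True on a machine with a working CUDA setup;
# otherwise switch the device in the example below (e.g. "cpu" or "mps").
print(torch.cuda.is_available())
```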

```python
from diffusers import StableDiffusionPipeline
import torch
from PIL import Image

def image_grid(imgs, rows, cols):
    """Arrange a list of PIL images into a single rows x cols grid image."""
    assert len(imgs) == rows * cols

    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

model_id = "CHAITron/KranokThaiV1"

# Load the fine-tuned pipeline in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt "<knt-style> style" triggers the DreamBooth concept.
prompt = "a colourful masterpiece thai dog in <knt-style> style, high-quality, professional art, realistic, thai style"

num_samples = 4  # images per row
num_rows = 1

all_images = []
for _ in range(num_rows):
    images = pipe(prompt, num_images_per_prompt=num_samples, num_inference_steps=25, guidance_scale=9).images
    all_images.extend(images)

grid = image_grid(all_images, num_rows, num_samples)
grid  # displays the grid in a notebook; use grid.save("knt_grid.png") to write it to disk
```
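
On Apple Silicon, the MPS backend mentioned above can be used instead of CUDA. A minimal sketch, assuming a PyTorch build with MPS support (the prompt and output filename are only illustrations):

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "CHAITron/KranokThaiV1"

# float32 is the safer default on MPS; float16 support varies by PyTorch version.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
pipe = pipe.to("mps")

# Reduce memory pressure on machines with limited unified memory.
pipe.enable_attention_slicing()

image = pipe(
    "a colourful masterpiece thai temple in <knt-style> style, high-quality, professional art, realistic, thai style",
    num_inference_steps=25,
    guidance_scale=9,
).images[0]
image.save("knt_mps_sample.png")  # hypothetical filename
```

For ONNX or FLAX/JAX, follow the linked optimization guides; the pipeline call itself stays the same.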

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies: 

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)