kirisame committed
Commit 4cd6cd4
1 Parent(s): 19d6914
README.md CHANGED
@@ -13,7 +13,6 @@ inference: false
 
 waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
 
-<<<<<<< HEAD
 <img src="https://i.imgur.com/Y5Tmw1S.png" width="75%" height="75%">
 
 [Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
@@ -23,15 +22,11 @@ waifu-diffusion is a latent text-to-image diffusion model that has been conditio
 We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion:
 [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo)
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O)
-=======
-<img src=https://cdn.discordapp.com/attachments/930499731451428926/1017258164439220254/unknown.png width=20% height=20%>
->>>>>>> b45bafccd9d0e0757b70a54c7ebc32ff56ca9ee1
 
 ## Model Description
 
 [See here for a full model overview.](https://gist.github.com/harubaru/f727cedacae336d1f7877c4bbe2196e1)
 
-<<<<<<< HEAD
 ## License
 
 This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
@@ -41,15 +36,6 @@ The CreativeML OpenRAIL License specifies:
 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
 [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
-=======
-The current model has been fine-tuned with a learning rate of 5.0e-6 for 4 epochs on 56k Danbooru text-image pairs which all have an aesthetic rating greater than `6.0`.
-
-## Training Data & Annotative Prompting
-
-The data used for fine-tuning has come from a random sample of 56k Danbooru images, which were filtered based on [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) where only images with an aesthetic score greater than `6.0` were used.
-
-Captions are Danbooru-style captions.
->>>>>>> b45bafccd9d0e0757b70a54c7ebc32ff56ca9ee1
 
 ## Downstream Uses
 
@@ -67,15 +53,7 @@ pipe = StableDiffusionPipeline.from_pretrained(
     torch_dtype=torch.float32
 ).to('cuda')
 
-<<<<<<< HEAD
 prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
-=======
-
-pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')
-pipe = pipe.to(device)
-
-prompt = "touhou hakurei_reimu 1girl solo portrait"
->>>>>>> b45bafccd9d0e0757b70a54c7ebc32ff56ca9ee1
 with autocast("cuda"):
     image = pipe(prompt, guidance_scale=6)["sample"][0]
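For convenience, here is the post-merge snippet from the hunk above, reassembled into runnable form. This is a sketch: the hunk truncates the `from_pretrained()` call, so the `'hakurei/waifu-diffusion'` model id and the final `image.save()` line are assumptions, and the commented fp16 line mirrors the removed merge branch.

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

# Model id is an assumption; the hunk truncates the from_pretrained() arguments.
pipe = StableDiffusionPipeline.from_pretrained(
    'hakurei/waifu-diffusion',
    torch_dtype=torch.float32
).to('cuda')

# The removed merge branch loaded half-precision weights instead:
# pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision='fp16')

prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=6)["sample"][0]

image.save("output.png")  # assumed final step; not shown in the hunk
```

Note that `pipe(...)["sample"]` is the diffusers 0.4-era output access this README targets; current diffusers releases expose the result as `.images[0]`.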
 
feature_extractor/preprocessor_config.json DELETED
@@ -1,20 +0,0 @@
-{
-  "crop_size": 224,
-  "do_center_crop": true,
-  "do_convert_rgb": true,
-  "do_normalize": true,
-  "do_resize": true,
-  "feature_extractor_type": "CLIPFeatureExtractor",
-  "image_mean": [
-    0.48145466,
-    0.4578275,
-    0.40821073
-  ],
-  "image_std": [
-    0.26862954,
-    0.26130258,
-    0.27577711
-  ],
-  "resample": 3,
-  "size": 224
-}
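The deleted values are the stock CLIP ViT-L/14 preprocessing statistics. As a minimal sketch of what this config drives, the extractor can be rebuilt directly from those values; the blank test image is only for illustration.

```python
from PIL import Image
from transformers import CLIPFeatureExtractor

# Rebuilt from the deleted preprocessor_config.json values.
extractor = CLIPFeatureExtractor(
    crop_size=224,
    do_center_crop=True,
    do_normalize=True,
    do_resize=True,
    image_mean=[0.48145466, 0.4578275, 0.40821073],
    image_std=[0.26862954, 0.26130258, 0.27577711],
    resample=3,  # PIL bicubic
    size=224,
)
# Resize -> center-crop -> normalize, yielding a (1, 3, 224, 224) tensor.
pixel_values = extractor(images=Image.new("RGB", (512, 512)), return_tensors="pt").pixel_values
```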
model_index.json DELETED
@@ -1,32 +0,0 @@
-{
-  "_class_name": "StableDiffusionPipeline",
-  "_diffusers_version": "0.4.1",
-  "feature_extractor": [
-    "transformers",
-    "CLIPFeatureExtractor"
-  ],
-  "safety_checker": [
-    "stable_diffusion",
-    "StableDiffusionSafetyChecker"
-  ],
-  "scheduler": [
-    "diffusers",
-    "LMSDiscreteScheduler"
-  ],
-  "text_encoder": [
-    "transformers",
-    "CLIPTextModel"
-  ],
-  "tokenizer": [
-    "transformers",
-    "CLIPTokenizer"
-  ],
-  "unet": [
-    "diffusers",
-    "UNet2DConditionModel"
-  ],
-  "vae": [
-    "diffusers",
-    "AutoencoderKL"
-  ]
-}
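`model_index.json` is the manifest Diffusers reads to assemble a pipeline: `_class_name` selects the pipeline class, and each remaining key maps a subfolder to a `(library, class)` pair. A minimal sketch of the load it implies, assuming the repo ships the Diffusers layout:

```python
from diffusers import DiffusionPipeline

# from_pretrained() reads model_index.json, builds the named StableDiffusionPipeline,
# and loads each subfolder with its listed class, e.g.
# "unet": ["diffusers", "UNet2DConditionModel"] -> unet/ loaded as UNet2DConditionModel.
pipe = DiffusionPipeline.from_pretrained('hakurei/waifu-diffusion')
```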
safety_checker/config.json DELETED
@@ -1,175 +0,0 @@
-{
-  "_commit_hash": null,
-  "_name_or_path": "CompVis/stable-diffusion-safety-checker",
-  "architectures": [
-    "StableDiffusionSafetyChecker"
-  ],
-  "initializer_factor": 1.0,
-  "logit_scale_init_value": 2.6592,
-  "model_type": "clip",
-  "projection_dim": 768,
-  "text_config": {
-    "_name_or_path": "",
-    "add_cross_attention": false,
-    "architectures": null,
-    "attention_dropout": 0.0,
-    "bad_words_ids": null,
-    "bos_token_id": 0,
-    "chunk_size_feed_forward": 0,
-    "cross_attention_hidden_size": null,
-    "decoder_start_token_id": null,
-    "diversity_penalty": 0.0,
-    "do_sample": false,
-    "dropout": 0.0,
-    "early_stopping": false,
-    "encoder_no_repeat_ngram_size": 0,
-    "eos_token_id": 2,
-    "exponential_decay_length_penalty": null,
-    "finetuning_task": null,
-    "forced_bos_token_id": null,
-    "forced_eos_token_id": null,
-    "hidden_act": "quick_gelu",
-    "hidden_size": 768,
-    "id2label": {
-      "0": "LABEL_0",
-      "1": "LABEL_1"
-    },
-    "initializer_factor": 1.0,
-    "initializer_range": 0.02,
-    "intermediate_size": 3072,
-    "is_decoder": false,
-    "is_encoder_decoder": false,
-    "label2id": {
-      "LABEL_0": 0,
-      "LABEL_1": 1
-    },
-    "layer_norm_eps": 1e-05,
-    "length_penalty": 1.0,
-    "max_length": 20,
-    "max_position_embeddings": 77,
-    "min_length": 0,
-    "model_type": "clip_text_model",
-    "no_repeat_ngram_size": 0,
-    "num_attention_heads": 12,
-    "num_beam_groups": 1,
-    "num_beams": 1,
-    "num_hidden_layers": 12,
-    "num_return_sequences": 1,
-    "output_attentions": false,
-    "output_hidden_states": false,
-    "output_scores": false,
-    "pad_token_id": 1,
-    "prefix": null,
-    "problem_type": null,
-    "pruned_heads": {},
-    "remove_invalid_values": false,
-    "repetition_penalty": 1.0,
-    "return_dict": true,
-    "return_dict_in_generate": false,
-    "sep_token_id": null,
-    "task_specific_params": null,
-    "temperature": 1.0,
-    "tf_legacy_loss": false,
-    "tie_encoder_decoder": false,
-    "tie_word_embeddings": true,
-    "tokenizer_class": null,
-    "top_k": 50,
-    "top_p": 1.0,
-    "torch_dtype": null,
-    "torchscript": false,
-    "transformers_version": "4.22.2",
-    "typical_p": 1.0,
-    "use_bfloat16": false,
-    "vocab_size": 49408
-  },
-  "text_config_dict": {
-    "hidden_size": 768,
-    "intermediate_size": 3072,
-    "num_attention_heads": 12,
-    "num_hidden_layers": 12
-  },
-  "torch_dtype": "float16",
-  "transformers_version": null,
-  "vision_config": {
-    "_name_or_path": "",
-    "add_cross_attention": false,
-    "architectures": null,
-    "attention_dropout": 0.0,
-    "bad_words_ids": null,
-    "bos_token_id": null,
-    "chunk_size_feed_forward": 0,
-    "cross_attention_hidden_size": null,
-    "decoder_start_token_id": null,
-    "diversity_penalty": 0.0,
-    "do_sample": false,
-    "dropout": 0.0,
-    "early_stopping": false,
-    "encoder_no_repeat_ngram_size": 0,
-    "eos_token_id": null,
-    "exponential_decay_length_penalty": null,
-    "finetuning_task": null,
-    "forced_bos_token_id": null,
-    "forced_eos_token_id": null,
-    "hidden_act": "quick_gelu",
-    "hidden_size": 1024,
-    "id2label": {
-      "0": "LABEL_0",
-      "1": "LABEL_1"
-    },
-    "image_size": 224,
-    "initializer_factor": 1.0,
-    "initializer_range": 0.02,
-    "intermediate_size": 4096,
-    "is_decoder": false,
-    "is_encoder_decoder": false,
-    "label2id": {
-      "LABEL_0": 0,
-      "LABEL_1": 1
-    },
-    "layer_norm_eps": 1e-05,
-    "length_penalty": 1.0,
-    "max_length": 20,
-    "min_length": 0,
-    "model_type": "clip_vision_model",
-    "no_repeat_ngram_size": 0,
-    "num_attention_heads": 16,
-    "num_beam_groups": 1,
-    "num_beams": 1,
-    "num_channels": 3,
-    "num_hidden_layers": 24,
-    "num_return_sequences": 1,
-    "output_attentions": false,
-    "output_hidden_states": false,
-    "output_scores": false,
-    "pad_token_id": null,
-    "patch_size": 14,
-    "prefix": null,
-    "problem_type": null,
-    "pruned_heads": {},
-    "remove_invalid_values": false,
-    "repetition_penalty": 1.0,
-    "return_dict": true,
-    "return_dict_in_generate": false,
-    "sep_token_id": null,
-    "task_specific_params": null,
-    "temperature": 1.0,
-    "tf_legacy_loss": false,
-    "tie_encoder_decoder": false,
-    "tie_word_embeddings": true,
-    "tokenizer_class": null,
-    "top_k": 50,
-    "top_p": 1.0,
-    "torch_dtype": null,
-    "torchscript": false,
-    "transformers_version": "4.22.2",
-    "typical_p": 1.0,
-    "use_bfloat16": false
-  },
-  "vision_config_dict": {
-    "hidden_size": 1024,
-    "intermediate_size": 4096,
-    "num_attention_heads": 16,
-    "num_hidden_layers": 24,
-    "patch_size": 14
-  }
-}
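Per `_name_or_path`, this is the stock CompVis checker: a CLIP vision tower whose image embeddings are compared against fixed concept embeddings to screen generated images. A minimal sketch of loading it standalone, assuming the upstream checkpoint is available:

```python
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

# _name_or_path in the config above points at the upstream checkpoint.
safety_checker = StableDiffusionSafetyChecker.from_pretrained(
    'CompVis/stable-diffusion-safety-checker'
)
```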
safety_checker/pytorch_model.bin DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6b1dc15150c06764bb60249c8a68b3e31319c66293d800c252b4d400e3e7ea17
-size 295
scheduler/scheduler_config.json DELETED
@@ -1,9 +0,0 @@
-{
-  "_class_name": "LMSDiscreteScheduler",
-  "_diffusers_version": "0.4.1",
-  "beta_end": 0.012,
-  "beta_schedule": "scaled_linear",
-  "beta_start": 0.00085,
-  "num_train_timesteps": 1000,
-  "trained_betas": null
-}
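These are the Stable Diffusion v1 betas. A minimal sketch of rebuilding the scheduler from the deleted values:

```python
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    num_train_timesteps=1000,
)
```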
text_encoder/config.json DELETED
@@ -1,25 +0,0 @@
-{
-  "_name_or_path": "waifu-diffusion/text_encoder",
-  "architectures": [
-    "CLIPTextModel"
-  ],
-  "attention_dropout": 0.0,
-  "bos_token_id": 0,
-  "dropout": 0.0,
-  "eos_token_id": 2,
-  "hidden_act": "quick_gelu",
-  "hidden_size": 768,
-  "initializer_factor": 1.0,
-  "initializer_range": 0.02,
-  "intermediate_size": 3072,
-  "layer_norm_eps": 1e-05,
-  "max_position_embeddings": 77,
-  "model_type": "clip_text_model",
-  "num_attention_heads": 12,
-  "num_hidden_layers": 12,
-  "pad_token_id": 1,
-  "projection_dim": 768,
-  "torch_dtype": "float16",
-  "transformers_version": "4.22.2",
-  "vocab_size": 49408
-}
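The config describes the standard CLIP ViT-L/14 text tower: 12 layers, hidden size 768, 77-token context. A sketch of what those numbers mean downstream; the repo id and the dummy token ids are assumptions:

```python
import torch
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained('hakurei/waifu-diffusion', subfolder='text_encoder')
input_ids = torch.tensor([[49406] + [320] * 75 + [49407]])  # dummy bos + 75 tokens + eos
emb = text_encoder(input_ids).last_hidden_state
print(emb.shape)  # torch.Size([1, 77, 768]): max_position_embeddings x hidden_size
```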
text_encoder/pytorch_model.bin DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:439fc72b1855e991f9c6525d746fa8a7590e2763a9fc7ac077cea4b2e4a1ce93
-size 295
tokenizer/merges.txt DELETED
The diff for this file is too large to render.
 
tokenizer/special_tokens_map.json DELETED
@@ -1,24 +0,0 @@
-{
-  "bos_token": {
-    "content": "<|startoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "eos_token": {
-    "content": "<|endoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "pad_token": "<|endoftext|>",
-  "unk_token": {
-    "content": "<|endoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  }
-}
tokenizer/tokenizer_config.json DELETED
@@ -1,34 +0,0 @@
-{
-  "add_prefix_space": false,
-  "bos_token": {
-    "__type": "AddedToken",
-    "content": "<|startoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "do_lower_case": true,
-  "eos_token": {
-    "__type": "AddedToken",
-    "content": "<|endoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "errors": "replace",
-  "model_max_length": 77,
-  "name_or_path": "waifu-diffusion/tokenizer",
-  "pad_token": "<|endoftext|>",
-  "special_tokens_map_file": "./special_tokens_map.json",
-  "tokenizer_class": "CLIPTokenizer",
-  "unk_token": {
-    "__type": "AddedToken",
-    "content": "<|endoftext|>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  }
-}
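Per this config, the tokenizer lower-cases input, caps sequences at a `model_max_length` of 77, and reuses `<|endoftext|>` as both pad and unk token. A quick sketch, with the repo id again an assumption:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained('hakurei/waifu-diffusion', subfolder='tokenizer')
ids = tokenizer("1girl, solo").input_ids
# ids[0] == 49406 (<|startoftext|>), ids[-1] == 49407 (<|endoftext|>);
# padding to length 77 also uses 49407, per special_tokens_map.json.
```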
tokenizer/vocab.json DELETED
The diff for this file is too large to render.
 
unet/config.json DELETED
@@ -1,36 +0,0 @@
-{
-  "_class_name": "UNet2DConditionModel",
-  "_diffusers_version": "0.4.1",
-  "act_fn": "silu",
-  "attention_head_dim": 8,
-  "block_out_channels": [
-    320,
-    640,
-    1280,
-    1280
-  ],
-  "center_input_sample": false,
-  "cross_attention_dim": 768,
-  "down_block_types": [
-    "CrossAttnDownBlock2D",
-    "CrossAttnDownBlock2D",
-    "CrossAttnDownBlock2D",
-    "DownBlock2D"
-  ],
-  "downsample_padding": 1,
-  "flip_sin_to_cos": true,
-  "freq_shift": 0,
-  "in_channels": 4,
-  "layers_per_block": 2,
-  "mid_block_scale_factor": 1,
-  "norm_eps": 1e-05,
-  "norm_num_groups": 32,
-  "out_channels": 4,
-  "sample_size": 32,
-  "up_block_types": [
-    "UpBlock2D",
-    "CrossAttnUpBlock2D",
-    "CrossAttnUpBlock2D",
-    "CrossAttnUpBlock2D"
-  ]
-}
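A sketch of instantiating the UNet skeleton (randomly initialized) from the architecture above. Note that `sample_size: 32` pairs with the VAE's 256 px `sample_size` at 8x spatial compression:

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    layers_per_block=2,
    block_out_channels=(320, 640, 1280, 1280),
    down_block_types=("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=768,
    attention_head_dim=8,
)
latents = torch.randn(1, 4, 32, 32)  # one latent at the configured sample_size
text_emb = torch.randn(1, 77, 768)   # stand-in CLIP text embeddings
noise_pred = unet(latents, timestep=10, encoder_hidden_states=text_emb).sample
```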
unet/diffusion_pytorch_model.bin DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8a9d5548268e0013c7d21a0f451ba016098a7069bbfba4b6744312b58fe86707
-size 297
vae/config.json DELETED
@@ -1,29 +0,0 @@
-{
-  "_class_name": "AutoencoderKL",
-  "_diffusers_version": "0.4.1",
-  "act_fn": "silu",
-  "block_out_channels": [
-    128,
-    256,
-    512,
-    512
-  ],
-  "down_block_types": [
-    "DownEncoderBlock2D",
-    "DownEncoderBlock2D",
-    "DownEncoderBlock2D",
-    "DownEncoderBlock2D"
-  ],
-  "in_channels": 3,
-  "latent_channels": 4,
-  "layers_per_block": 2,
-  "norm_num_groups": 32,
-  "out_channels": 3,
-  "sample_size": 256,
-  "up_block_types": [
-    "UpDecoderBlock2D",
-    "UpDecoderBlock2D",
-    "UpDecoderBlock2D",
-    "UpDecoderBlock2D"
-  ]
-}
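The autoencoder's geometry explains the latent sizes: four encoder blocks with three 2x downsamples give 8x spatial compression, linking the VAE's 256 px `sample_size` to the UNet's 32 px latents. A sketch with random weights:

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL(
    in_channels=3,
    out_channels=3,
    latent_channels=4,
    sample_size=256,
    layers_per_block=2,
    block_out_channels=(128, 256, 512, 512),
    down_block_types=("DownEncoderBlock2D",) * 4,
    up_block_types=("UpDecoderBlock2D",) * 4,
)
image = torch.randn(1, 3, 256, 256)
latent = vae.encode(image).latent_dist.sample()  # (1, 4, 32, 32): 256 / 8 = 32
decoded = vae.decode(latent).sample              # (1, 3, 256, 256)
```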
vae/diffusion_pytorch_model.bin DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9980e77e11dfc7e18e14aefc2604f2bba3db7737e722cd11610f0cf6a1e96271
-size 295