helenai committed
Commit 28aab9f
1 Parent(s): 6450534

commit files to HF hub

README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ license: creativeml-openrail-m
+ tags:
+ - stable-diffusion
+ - text-to-image
+ - openvino
+ extra_gated_prompt: |-
+   This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
+   The CreativeML OpenRAIL License specifies:
+ 
+   1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
+   2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
+   3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
+   Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
+ 
+ 
+ extra_gated_heading: Please read the LICENSE to access this model
+ ---
+ 
+ # OpenVINO Stable Diffusion
+ 
+ ## runwayml/stable-diffusion-v1-5
+ 
+ This repository contains the models from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) converted to
+ OpenVINO for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum:
+ [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored in FP16
+ precision, which roughly halves the model size compared to the original FP32 weights.
+ 
+ Please check out the [source model repository](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more information about the model and its license.
+ 
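+ For reference, the conversion itself can be reproduced with optimum-intel's export support. The snippet below is a minimal
+ sketch that assumes `from_pretrained(..., export=True)` is available in your optimum-intel version; the additional FP16
+ weight compression applied to the files in this repository is not shown here.
+ 
+ ```python
+ from optimum.intel.openvino import OVStableDiffusionPipeline
+ 
+ # Export the original PyTorch pipeline to OpenVINO IR and save it locally
+ # (sketch only; the FP16 compression used for this repository is a separate step)
+ pipeline = OVStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)
+ pipeline.save_pretrained("runwayml-stable-diffusion-v1-5-ov")
+ ```
+ 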
+ To install the requirements for this demo, run `pip install optimum[openvino]`. This installs all the necessary dependencies,
+ including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide).
+ 
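+ As an optional sanity check, you can verify from Python that OpenVINO and Transformers were installed correctly:
+ 
+ ```python
+ # Print the installed OpenVINO and Transformers versions to confirm the installation
+ import openvino.runtime as ov
+ import transformers
+ 
+ print("OpenVINO:", ov.get_version())
+ print("Transformers:", transformers.__version__)
+ ```
+ 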
+ The simplest way to generate an image with Stable Diffusion takes only two lines of code (besides the import), as shown below. The first line downloads the
+ model from the Hugging Face Hub (if it has not been downloaded before) and loads it; the second line generates an image.
+ 
+ ```python
+ from optimum.intel.openvino import OVStableDiffusionPipeline
+ 
+ stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/runwayml-stable-diffusion-v1-5-ov")
+ images = stable_diffusion("a random image").images
+ ```
+ 
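+ The pipeline call mirrors the usual diffusers Stable Diffusion arguments. The sketch below (the prompt and values are
+ arbitrary examples) trades speed for quality and saves the result:
+ 
+ ```python
+ # num_inference_steps and guidance_scale follow the standard diffusers pipeline arguments
+ image = stable_diffusion(
+     "sailing ship in a storm, oil painting",
+     num_inference_steps=25,   # fewer steps is faster, more steps usually improves quality
+     guidance_scale=7.5,       # how closely the image follows the prompt
+ ).images[0]
+ image.save("result.png")
+ ```
+ 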
+ The following example code uses static shapes for even faster inference. Larger image sizes require more memory and take
+ longer to generate.
+ 
+ If you have an 11th-generation or later Intel Core processor, you can run inference on the integrated GPU; if you have an Intel
+ discrete GPU, you can use that instead. To do so, add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the
+ example below (a complete GPU variant is shown after it). Loading the model takes some time the first time, but is faster on later
+ runs because the compiled model is cached. On GPU, only static shapes are currently supported for Stable Diffusion.
+ 
+ ```python
+ from optimum.intel.openvino import OVStableDiffusionPipeline
+ 
+ batch_size = 1
+ num_images_per_prompt = 1
+ height = 256
+ width = 256
+ 
+ # load the model and reshape it to static shapes for faster inference
+ model_id = "helenai/runwayml-stable-diffusion-v1-5-ov"
+ stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
+ stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
+ stable_diffusion.compile()
+ 
+ # generate an image and save it
+ prompt = "a random image"
+ images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
+ images[0].save("result.png")
+ ```
+ 
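+ For completeness, this is the GPU variant of the static-shape example above; the only change is the added `stable_diffusion.to("GPU")`
+ call before compiling (a sketch; "GPU" is OpenVINO's device name for the default Intel GPU):
+ 
+ ```python
+ from optimum.intel.openvino import OVStableDiffusionPipeline
+ 
+ model_id = "helenai/runwayml-stable-diffusion-v1-5-ov"
+ stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
+ stable_diffusion.reshape(batch_size=1, height=256, width=256, num_images_per_prompt=1)
+ stable_diffusion.to("GPU")    # run inference on the integrated or discrete Intel GPU
+ stable_diffusion.compile()    # the first compile takes a while; the compiled model is cached
+ 
+ images = stable_diffusion("a random image", height=256, width=256).images
+ images[0].save("result_gpu.png")
+ ```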
feature_extractor/preprocessor_config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "crop_size": {
+     "height": 224,
+     "width": 224
+   },
+   "do_center_crop": true,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "feature_extractor_type": "CLIPFeatureExtractor",
+   "image_mean": [
+     0.48145466,
+     0.4578275,
+     0.40821073
+   ],
+   "image_processor_type": "CLIPFeatureExtractor",
+   "image_std": [
+     0.26862954,
+     0.26130258,
+     0.27577711
+   ],
+   "resample": 3,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "shortest_edge": 224
+   }
+ }
inference.py ADDED
@@ -0,0 +1,18 @@
+ from optimum.intel.openvino import OVStableDiffusionPipeline
+ 
+ batch_size = 1
+ num_images_per_prompt = 1
+ height = 256
+ width = 256
+ 
+ # load the model and reshape it to static shapes for faster inference
+ model_id = "helenai/runwayml-stable-diffusion-v1-5-ov"
+ stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
+ stable_diffusion.reshape(batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt)
+ stable_diffusion.compile()
+ 
+ # generate an image and save it
+ prompt = "a random image"
+ images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images
+ images[0].save("result.png")
+ 
model_index.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_class_name": "OVStableDiffusionPipeline",
+   "_diffusers_version": "0.13.1",
+   "feature_extractor": [
+     "transformers",
+     "CLIPFeatureExtractor"
+   ],
+   "safety_checker": [
+     "stable_diffusion",
+     "StableDiffusionSafetyChecker"
+   ],
+   "scheduler": [
+     "diffusers",
+     "PNDMScheduler"
+   ],
+   "text_encoder": [
+     "optimum",
+     "OVModelTextEncoder"
+   ],
+   "tokenizer": [
+     "transformers",
+     "CLIPTokenizer"
+   ],
+   "unet": [
+     "optimum",
+     "OVModelUnet"
+   ],
+   "vae_decoder": [
+     "optimum",
+     "OVModelVaeDecoder"
+   ]
+ }
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "_class_name": "PNDMScheduler",
+   "_diffusers_version": "0.13.1",
+   "beta_end": 0.012,
+   "beta_schedule": "scaled_linear",
+   "beta_start": 0.00085,
+   "clip_sample": false,
+   "num_train_timesteps": 1000,
+   "prediction_type": "epsilon",
+   "set_alpha_to_one": false,
+   "skip_prk_steps": true,
+   "steps_offset": 1,
+   "trained_betas": null
+ }
text_encoder/openvino_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8dedf0d36126919dd5b018997cbe0b3e8705166ebd06321114184242b73c98a4
+ size 246121704
text_encoder/openvino_model.xml ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>",
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "do_lower_case": true,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "model_max_length": 77,
+   "name_or_path": "models/runwayml-stable-diffusion-v1-5-ov/tokenizer",
+   "pad_token": "<|endoftext|>",
+   "special_tokens_map_file": "./special_tokens_map.json",
+   "tokenizer_class": "CLIPTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
unet/openvino_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e64d711505d43018a9f129b9d824c6bae4ee50881c0192175f8f544e5adcff57
+ size 1719042636
unet/openvino_model.xml ADDED
The diff for this file is too large to render. See raw diff
 
vae_decoder/openvino_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc523c30697d2cbf197960eeb4e449e1cfaa94abdbb4f3b9f5930c4a47ea9c3a
+ size 98980700
vae_decoder/openvino_model.xml ADDED
The diff for this file is too large to render. See raw diff