kuprel committed
Commit 09bce69
1 parent: 35e6dbb

Update README.md

Files changed (1): README.md (+30, −19)
README.md CHANGED
@@ -8,15 +8,23 @@ license: mit

[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)
[![Discord](https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white)](https://discord.com/channels/823813159592001537/912729332311556136)
- [GitHub](https://github.com/kuprel/min-dalle)
+ **[GitHub](https://github.com/kuprel/min-dalle)**
+ **[❤️ Sponsor](https://github.com/sponsors/kuprel)**

- This is a fast, minimal port of [DALL·E Mega](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training-Journal--VmlldzoxODMxMDI2). It has been stripped down for inference and converted to PyTorch. The only third party dependencies are numpy, requests, pillow and torch.
+ This is a fast, minimal port of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini) (with mega weights). It has been stripped down for inference and converted to PyTorch. The only third party dependencies are numpy, requests, pillow and torch.

To generate a 4x4 grid of DALL·E Mega images it takes:
- 89 sec with a T4 in Colab
- 48 sec with a P100 in Colab
- 13 sec with an A100 on Replicate

+ Here's a more detailed breakdown of performance on an A100. Credit to [@technobird22](https://github.com/technobird22) and his [NeoGen](https://github.com/technobird22/NeoGen) discord bot for the graph.
+ <br />
+ <img src="https://github.com/kuprel/min-dalle/raw/main/performance.png" alt="min-dalle" width="450"/>
+ <br />
+
+ The flax model and code for converting it to torch can be found [here](https://github.com/kuprel/min-dalle-flax).
+
## Install

```bash
@@ -33,20 +41,23 @@ from min_dalle import MinDalle
model = MinDalle(
    models_root='./pretrained',
    dtype=torch.float32,
+     device='cuda',
    is_mega=True,
    is_reusable=True
)
```

- The required models will be downloaded to `models_root` if they are not already there. Set the `dtype` to `torch.float16` to save GPU memory. If you have an Ampere architecture GPU you can use `torch.bfloat16`. Once everything has finished initializing, call `generate_image` with some text as many times as you want. Use a positive `seed` for reproducible results. Higher values for `log2_supercondition_factor` result in better agreement with the text but a narrower variety of generated images. Every image token is sampled from the top-$k$ most probable tokens.
+ The required models will be downloaded to `models_root` if they are not already there. Set the `dtype` to `torch.float16` to save GPU memory. If you have an Ampere architecture GPU you can use `torch.bfloat16`. Set the `device` to either "cuda" or "cpu". Once everything has finished initializing, call `generate_image` with some text as many times as you want. Use a positive `seed` for reproducible results. Higher values for `supercondition_factor` result in better agreement with the text but a narrower variety of generated images. Every image token is sampled from the `top_k` most probable tokens. The largest logit is subtracted from the logits to avoid infs. The logits are then divided by the `temperature`. If `is_seamless` is true, the image grid will be tiled in token space not pixel space.

```python
image = model.generate_image(
    text='Nuclear explosion broccoli',
    seed=-1,
    grid_size=4,
-     log2_k=6,
-     log2_supercondition_factor=5,
+     is_seamless=False,
+     temperature=1,
+     top_k=256,
+     supercondition_factor=32,
    is_verbose=False
)
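For intuition, here is a minimal sketch of the sampling step the new paragraph above describes. This is illustrative only, not code from the repo: the helper name is hypothetical, and the classifier-free-guidance-style mixing of conditioned and unconditioned logits is an assumption about how `supercondition_factor` is applied.

```python
import torch

def sample_image_token(cond_logits, uncond_logits,
                       top_k=256, temperature=1.0, supercondition_factor=32):
    # Hypothetical helper; expects 1-D logit tensors from a text-conditioned
    # and an unconditioned forward pass over the image-token vocabulary.
    a = supercondition_factor
    logits = (1 - a) * uncond_logits + a * cond_logits  # assumed CFG-style mix
    logits = logits - logits.max()  # subtract the largest logit to avoid infs
    logits = logits / temperature   # divide by the temperature
    values, indices = logits.topk(top_k)  # keep the top_k most probable tokens
    probs = torch.softmax(values, dim=-1)
    return indices[torch.multinomial(probs, 1)]  # sample one image token

# Example with random logits over a hypothetical 16384-token image vocabulary
token = sample_image_token(torch.randn(16384), torch.randn(16384))
```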
 
@@ -54,7 +65,7 @@ display(image)
```
<img src="https://github.com/kuprel/min-dalle/raw/main/examples/nuclear_broccoli.jpg" alt="min-dalle" width="400"/>

- credit: [https://twitter.com/hardmaru/status/1544354119527596034](https://twitter.com/hardmaru/status/1544354119527596034)
+ Credit to [@hardmaru](https://twitter.com/hardmaru) for the [example](https://twitter.com/hardmaru/status/1544354119527596034)


### Saving Individual Images
@@ -64,9 +75,11 @@ The images can also be generated as a `FloatTensor` in case you want to process
images = model.generate_images(
    text='Nuclear explosion broccoli',
    seed=-1,
-     image_count=7,
-     log2_k=6,
-     log2_supercondition_factor=5,
+     grid_size=3,
+     is_seamless=False,
+     temperature=1,
+     top_k=256,
+     supercondition_factor=16,
    is_verbose=False
)
```
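To go from the returned `FloatTensor` to saved files, something like the following sketch should work; it continues from the snippet above, and the `(N, H, W, 3)` layout with 0-255 values is an assumption inferred from the `Image.fromarray(images[i])` call visible in the next hunk, not confirmed by this diff.

```python
import numpy
from PIL import Image

# Sketch: move the generated FloatTensor to the CPU, cast to uint8,
# and save each image. Layout and value range are assumptions.
images = images.to('cpu').numpy()
for i in range(images.shape[0]):
    image = Image.fromarray(images[i].astype(numpy.uint8))
    image.save('image_{}.png'.format(i))
```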
@@ -81,18 +94,20 @@ image = Image.fromarray(images[i])
image.save('image_{}.png'.format(i))
```

- ### Interactive
+ ### Progressive Outputs

- If the model is being used interactively (e.g. in a notebook) `generate_image_stream` can be used to generate a stream of images as the model is decoding. The detokenizer adds a slight delay for each image. Setting `log2_mid_count` to 3 results in a total of `2 ** 3 = 8` generated images. The only valid values for `log2_mid_count` are 0, 1, 2, 3, and 4. This is implemented in the colab.
+ If the model is being used interactively (e.g. in a notebook) `generate_image_stream` can be used to generate a stream of images as the model is decoding. The detokenizer adds a slight delay for each image. Set `progressive_outputs` to `True` to enable this. An example is implemented in the colab.

```python
image_stream = model.generate_image_stream(
    text='Dali painting of WALL·E',
    seed=-1,
    grid_size=3,
-     log2_mid_count=3,
-     log2_k=6,
-     log2_supercondition_factor=3,
+     progressive_outputs=True,
+     is_seamless=False,
+     temperature=1,
+     top_k=256,
+     supercondition_factor=16,
    is_verbose=False
)
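Consuming the stream could look like the sketch below; that `generate_image_stream` yields PIL images one at a time is an assumption, since the diff does not show its return type.

```python
# Sketch: iterate over the stream and save each progressively refined grid.
# Assumes the stream yields PIL images; not confirmed by this diff.
for i, image in enumerate(image_stream):
    image.save('stream_{}.png'.format(i))
```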
 
@@ -108,8 +123,4 @@ Use `image_from_text.py` to generate images from the command line.
```bash
$ python image_from_text.py --text='artificial intelligence' --no-mega
```
- <img src="https://github.com/kuprel/min-dalle/raw/main/examples/artificial_intelligence.jpg" alt="min-dalle" width="200"/>
-
- <br />
-
- [Sponsor this work](https://github.com/sponsors/kuprel)
+ <img src="https://github.com/kuprel/min-dalle/raw/main/examples/artificial_intelligence.jpg" alt="min-dalle" width="200"/>
 