MultiMatrix committed Commit bbcbe3c · verified · 1 Parent(s): e6b8212

Upload README.md

Files changed (1): README.md (+194 -182)

README.md CHANGED
@@ -1,128 +1,116 @@
- <p align="center">
- <img src="assets/logo.png" width="400">
- </p>
-
  ## DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

  [Paper](https://arxiv.org/abs/2308.15070) | [Project Page](https://0x3f3f3f3fun.github.io/projects/diffbir/)

  ![visitors](https://visitor-badge.laobi.icu/badge?page_id=XPixelGroup/DiffBIR) [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb)

- [Xinqi Lin](https://0x3f3f3f3fun.github.io/)<sup>1,\*</sup>, [Jingwen He](https://github.com/hejingwenhejingwen)<sup>2,3,\*</sup>, [Ziyan Chen](https://orcid.org/0000-0001-6277-5635)<sup>1</sup>, [Zhaoyang Lyu](https://scholar.google.com.tw/citations?user=gkXFhbwAAAAJ&hl=en)<sup>2</sup>, [Bo Dai](http://daibo.info/)<sup>2</sup>, [Fanghua Yu](https://github.com/Fanghua-Yu)<sup>1</sup>, [Wanli Ouyang](https://wlouyang.github.io/)<sup>2</sup>, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao)<sup>2</sup>, [Chao Dong](http://xpixel.group/2010/01/20/chaodong.html)<sup>1,2</sup>
-
- <sup>1</sup>Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences<br><sup>2</sup>Shanghai AI Laboratory<br><sup>3</sup>The Chinese University of Hong Kong
-
- <p align="center">
- <img src="assets/teaser.png">
- </p>

- ---

- <p align="center">
- <img src="assets/pipeline.png">
- </p>

  :star:If DiffBIR is helpful for you, please help star this repo. Thanks!:hugs:

- ## :book:Table Of Contents

- - [Update](#update)
  - [Visual Results On Real-world Images](#visual_results)
  - [TODO](#todo)
  - [Installation](#installation)
  - [Pretrained Models](#pretrained_models)
  - [Inference](#inference)
  - [Train](#train)

- ## <a name="update"></a>:new:Update
-
- - **2024.04.08**: ✅ Release everything about our [updated manuscript](https://arxiv.org/abs/2308.15070), including (1) a **new model** trained on a subset of laion2b-en and (2) a **more readable code base**, etc. DiffBIR is now a general restoration pipeline that can handle different blind image restoration tasks with a unified generation module.
- - **2023.09.19**: ✅ Add support for Apple Silicon! Check [installation_xOS.md](assets/docs/installation_xOS.md) to work with **CPU/CUDA/MPS** devices!
- - **2023.09.14**: ✅ Integrate a patch-based sampling strategy ([mixture-of-diffusers](https://github.com/albarji/mixture-of-diffusers)). [**Try it!**](#patch_based_sampling) Here is an [example](https://imgsli.com/MjA2MDA1) with a resolution of 2396 x 1596. GPU memory usage will continue to be optimized in the future and we are looking forward to your pull requests!
- - **2023.09.14**: ✅ Add support for a background upsampler (DiffBIR/[RealESRGAN](https://github.com/xinntao/Real-ESRGAN)) in face enhancement! :rocket: [**Try it!**](#inference_fr)
- - **2023.09.13**: :rocket: Provide an online demo (DiffBIR-official) on [OpenXLab](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official), which integrates both the general model and the face model. Please have a try! [camenduru](https://github.com/camenduru) has also implemented an online demo; thanks for his work.:hugs:
- - **2023.09.12**: ✅ Upload inference code of latent image guidance and release the [real47](inputs/real47) testset.
- - **2023.09.08**: ✅ Add support for restoring unaligned faces.
- - **2023.09.06**: :rocket: Update the [colab demo](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb). Thanks to [camenduru](https://github.com/camenduru)!:hugs:
- - **2023.08.30**: This repo is released.
-
- ## <a name="visual_results"></a>:eyes:Visual Results On Real-world Images
-
- ### Blind Image Super-Resolution
-
- [<img src="assets/visual_results/bsr6.png" height="223px"/>](https://imgsli.com/MTk5ODI3) [<img src="assets/visual_results/bsr7.png" height="223px"/>](https://imgsli.com/MTk5ODI4) [<img src="assets/visual_results/bsr4.png" height="223px"/>](https://imgsli.com/MTk5ODI1)
-
- <!-- [<img src="assets/visual_results/bsr1.png" height="223px"/>](https://imgsli.com/MTk5ODIy) [<img src="assets/visual_results/bsr2.png" height="223px"/>](https://imgsli.com/MTk5ODIz)
-
- [<img src="assets/visual_results/bsr3.png" height="223px"/>](https://imgsli.com/MTk5ODI0) [<img src="assets/visual_results/bsr5.png" height="223px"/>](https://imgsli.com/MjAxMjM0) -->

- <!-- [<img src="assets/visual_results/bsr1.png" height="223px"/>](https://imgsli.com/MTk5ODIy) [<img src="assets/visual_results/bsr5.png" height="223px"/>](https://imgsli.com/MjAxMjM0) -->

- ### Blind Face Restoration

- <!-- [<img src="assets/visual_results/bfr1.png" height="223px"/>](https://imgsli.com/MTk5ODI5) [<img src="assets/visual_results/bfr2.png" height="223px"/>](https://imgsli.com/MTk5ODMw) [<img src="assets/visual_results/bfr4.png" height="223px"/>](https://imgsli.com/MTk5ODM0) -->

- [<img src="assets/visual_results/whole_image1.png" height="370"/>](https://imgsli.com/MjA2MTU0)
- [<img src="assets/visual_results/whole_image2.png" height="370"/>](https://imgsli.com/MjA2MTQ4)

- :star: Face and background enhanced by DiffBIR.

- ### Blind Image Denoising

- [<img src="assets/visual_results/bid1.png" height="215px"/>](https://imgsli.com/MjUzNzkz) [<img src="assets/visual_results/bid3.png" height="215px"/>](https://imgsli.com/MjUzNzky)
- [<img src="assets/visual_results/bid2.png" height="215px"/>](https://imgsli.com/MjUzNzkx)

- ### 8x Blind Super-Resolution With Patch-based Sampling
-
- > I often think of Bag End. I miss my books, and my armchair, and my garden. See, that's where I belong. That's home. --- Bilbo Baggins

- [<img src="assets/visual_results/tiled_sampling.png" height="480px"/>](https://imgsli.com/MjUzODE4)

- ## <a name="todo"></a>:climbing:TODO

- - [x] Release code and pretrained models :computer:.
- - [x] Update links to paper and project page :link:.
- - [x] Release real47 testset :minidisc:.
- - [ ] Provide webui.
- - [ ] Reduce the VRAM usage of DiffBIR :fire::fire::fire:.
- - [ ] Provide HuggingFace demo :notebook:.
- - [x] Add a patch-based sampling schedule :mag:.
- - [x] Upload inference code of latent image guidance :page_facing_up:.
- - [ ] Improve the performance :superhero:.
  - [x] Support MPS acceleration for macOS users.
- - [ ] DiffBIR-turbo :fire::fire::fire:.
- - [ ] Speed up inference, e.g. using fp16/bf16 or torch.compile :fire::fire::fire:.

- ## <a name="installation"></a>:gear:Installation

  ```shell
  # clone this repo
  git clone https://github.com/XPixelGroup/DiffBIR.git
  cd DiffBIR

- # create environment
- conda create -n diffbir python=3.10
  conda activate diffbir
  pip install -r requirements.txt
  ```

- Our new code base targets PyTorch 2.2.2 for its built-in support of memory-efficient attention. If you are working on a GPU that is not compatible with the latest PyTorch, downgrade PyTorch to 1.13.1+cu116 and install xformers 0.0.16 as an alternative.
- <!-- Note the installation is only compatible with **Linux** users. If you are working on different platforms, please check [xOS Installation](assets/docs/installation_xOS.md). -->
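-
- For example, one possible downgrade path (a sketch: the torchvision pairing and the cu116 index URL are our assumptions, not part of the original instructions):
-
- ```shell
- # cu116 builds of pytorch 1.13.1; torchvision 0.14.1 is the matching release
- pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
- # xformers 0.0.16 provides memory-efficient attention for this pytorch version
- pip install xformers==0.0.16
- ```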

- ## <a name="pretrained_models"></a>:dna:Pretrained Models

- Here we list the pretrained weights of the stage 2 model (IRControlNet) and our trained SwinIR, which was used for degradation removal during the training of the stage 2 model.

- | Model Name | Description | HuggingFace | BaiduNetdisk | OpenXLab |
- | :---------: | :----------: | :----------: | :----------: | :----------: |
- | v2.pth | IRControlNet trained on filtered laion2b-en | [download](https://huggingface.co/lxq007/DiffBIR-v2/resolve/main/v2.pth) | [download](https://pan.baidu.com/s/1uTAFl13xgGAzrnznAApyng?pwd=xiu3)<br>(pwd: xiu3) | [download](https://openxlab.org.cn/models/detail/linxinqi/DiffBIR/tree/main) |
- | v1_general.pth | IRControlNet trained on ImageNet-1k | [download](https://huggingface.co/lxq007/DiffBIR-v2/resolve/main/v1_general.pth) | [download](https://pan.baidu.com/s/1PhXHAQSTOUX4Gy3MOc2t2Q?pwd=79n9)<br>(pwd: 79n9) | [download](https://openxlab.org.cn/models/detail/linxinqi/DiffBIR/tree/main) |
- | v1_face.pth | IRControlNet trained on FFHQ | [download](https://huggingface.co/lxq007/DiffBIR-v2/resolve/main/v1_face.pth) | [download](https://pan.baidu.com/s/1kvM_SB1VbXjbipLxdzlI3Q?pwd=n7dx)<br>(pwd: n7dx) | [download](https://openxlab.org.cn/models/detail/linxinqi/DiffBIR/tree/main) |
- | codeformer_swinir.ckpt | SwinIR trained on ImageNet-1k | [download](https://huggingface.co/lxq007/DiffBIR-v2/resolve/main/codeformer_swinir.ckpt) | [download](https://pan.baidu.com/s/176fARg2ySYtDgX2vQOeRbA?pwd=vfif)<br>(pwd: vfif) | [download](https://openxlab.org.cn/models/detail/linxinqi/DiffBIR/tree/main) |

- During inference, we use off-the-shelf models from other papers as the stage 1 model: [BSRNet](https://github.com/cszn/BSRGAN) for BSR, the [SwinIR-Face](https://github.com/zsyOAOA/DifFace) used in DifFace for BFR, and [SCUNet-PSNR](https://github.com/cszn/SCUNet) for BID, while the trained IRControlNet remains **unchanged** for all tasks. Please check the [code](utils/inference.py) for more details. Thanks for their work!
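-
- Weights are fetched automatically by the inference script (see below), but you can also download them manually. A sketch (the HuggingFace URL comes from the table above; the `weights/` target directory is our assumption):
-
- ```shell
- mkdir -p weights
- # fetch the IRControlNet v2 checkpoint
- wget https://huggingface.co/lxq007/DiffBIR-v2/resolve/main/v2.pth -P weights/
- ```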
 
 
 
 
 

- <!-- ## <a name="quick_start"></a>:flight_departure:Quick Start

  Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/`, then run the following command to interact with the gradio website.
@@ -135,163 +123,189 @@ python gradio_diffbir.py \
  --device cuda
  ```

- <div align="center">
- <kbd><img src="assets/gradio.png"></kbd>
- </div> -->

- ## <a name="inference"></a>:crossed_swords:Inference

- We provide some examples for inference; check [inference.py](inference.py) for more arguments. Pretrained weights will be **automatically downloaded**.

- ### Blind Image Super-Resolution

  ```shell
- python -u inference.py \
- --version v2 \
- --task sr \
- --upscale 4 \
- --cfg_scale 4.0 \
- --input inputs/demo/bsr \
- --output results/demo_bsr \
- --device cuda
  ```

- ### Blind Face Restoration
- <a name="inference_fr"></a>
 
 
 

  ```shell
  # for aligned face inputs
- python -u inference.py \
- --version v2 \
- --task fr \
- --upscale 1 \
- --cfg_scale 4.0 \
- --input inputs/demo/bfr/aligned \
- --output results/demo_bfr_aligned \
  --device cuda
  ```

  ```shell
  # for unaligned face inputs
- python -u inference.py \
- --version v2 \
- --task fr_bg \
- --upscale 2 \
- --cfg_scale 4.0 \
- --input inputs/demo/bfr/whole_img \
- --output results/demo_bfr_unaligned \
  --device cuda
  ```

- ### Blind Image Denoising

  ```shell
- python -u inference.py \
- --version v2 \
- --task dn \
- --upscale 1 \
- --cfg_scale 4.0 \
- --input inputs/demo/bid \
- --output results/demo_bid \
- --device cuda
  ```

- ### Other options

- #### Patch-based sampling
- <a name="patch_based_sampling"></a>

- Add the following arguments to enable patch-based sampling:

  ```shell
- [command...] --tiled --tile_size 512 --tile_stride 256
  ```

- Patch-based sampling makes super-resolution with a large scale factor feasible; see the sketch below. Our patch-based sampling is built upon [mixture-of-diffusers](https://github.com/albarji/mixture-of-diffusers). Thanks for their work!
-
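- For instance, an 8x super-resolution run with tiled sampling could look like the following (a sketch: the `--upscale 8` value and the output folder name are illustrative; all other flags come from the examples above):
-
- ```shell
- python -u inference.py \
- --version v2 \
- --task sr \
- --upscale 8 \
- --cfg_scale 4.0 \
- --input inputs/demo/bsr \
- --output results/demo_bsr_8x \
- --tiled --tile_size 512 --tile_stride 256 \
- --device cuda
- ```
-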
- #### Restoration Guidance

- Restoration guidance is used to achieve a trade-off between quality and fidelity. It is disabled by default, since we prefer quality over fidelity. Here is an example:

  ```shell
- python -u inference.py \
- --version v2 \
- --task sr \
- --upscale 4 \
- --cfg_scale 4.0 \
- --input inputs/demo/bsr \
- --guidance --g_loss w_mse --g_scale 0.5 --g_space rgb \
- --output results/demo_bsr_wg \
  --device cuda
  ```

- You will see that the results become smoother.

- #### Better Start Point For Sampling

- Add the following argument to provide a better start point for reverse sampling:

- ```shell
- [command...] --better_start
- ```

- This option prevents our model from generating noise in the image background. For example:
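-
- ```shell
- # a sketch: the blind SR command from above with --better_start added; the output folder name is illustrative
- python -u inference.py \
- --version v2 \
- --task sr \
- --upscale 4 \
- --cfg_scale 4.0 \
- --input inputs/demo/bsr \
- --output results/demo_bsr_bs \
- --better_start \
- --device cuda
- ```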

- ## <a name="train"></a>:stars:Train

- ### Stage 1

- First, we train a SwinIR, which will be used for degradation removal during the training of stage 2.

- <a name="gen_file_list"></a>
- 1. Generate file lists for the training set and validation set. A file list looks like this:

- ```txt
- /path/to/image_1
- /path/to/image_2
- /path/to/image_3
- ...
- ```

- You can write a simple Python script or use shell commands directly to produce file lists. Here is an example:
-
- ```shell
- # collect all image files in img_dir
- find [img_dir] -type f > files.list
- # shuffle the collected files
- shuf files.list > files_shuf.list
- # take the first train_size files as the training set
- head -n [train_size] files_shuf.list > files_shuf_train.list
- # use the remaining files as the validation set
- tail -n +[train_size + 1] files_shuf.list > files_shuf_val.list
- ```

- 2. Fill in the [training configuration file](configs/train/train_stage1.yaml) with appropriate values.

- 3. Start training!

  ```shell
- accelerate launch train_stage1.py --config configs/train/train_stage1.yaml
  ```

- ### Stage 2

- 1. Download the pretrained [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) to provide generative capabilities. :bulb: If you have run the [inference script](inference.py), the SD v2.1 checkpoint can already be found in [weights](weights).

  ```shell
  wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
  ```

- 2. Generate a file list as described [above](#gen_file_list). Currently, the stage 2 training script does not support a validation set, so you only need to create the training file list, for example:
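-
- ```shell
- # a sketch reusing the commands above; no validation split is needed for stage 2
- find [img_dir] -type f > files.list
- shuf files.list > files_shuf_train.list
- ```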
 
 
 
 
 
 
 
 
 
 
 
 

- 3. Fill in the [training configuration file](configs/train/train_stage2.yaml) with appropriate values.

- 4. Start training!

  ```shell
- accelerate launch train_stage2.py --config configs/train/train_stage2.yaml
  ```

  ## Citation
@@ -299,13 +313,11 @@ First, we train a SwinIR, which will be used for degradation removal during the
  Please cite us if our work is useful for your research.

  ```
- @misc{lin2024diffbir,
- title={DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior},
- author={Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Bo Dai and Fanghua Yu and Wanli Ouyang and Yu Qiao and Chao Dong},
- year={2024},
- eprint={2308.15070},
- archivePrefix={arXiv},
- primaryClass={cs.CV}
  }
  ```
@@ -319,4 +331,4 @@ This project is based on [ControlNet](https://github.com/lllyasviel/ControlNet)

  ## Contact

- If you have any questions, please feel free to contact me at linxinqi23@mails.ucas.ac.cn.
+ ![1](https://github.com/open-mmlab/mmdetection/assets/95841578/1298bd73-ac5e-4275-a2f6-0a42f2430d79)

  ## DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

  [Paper](https://arxiv.org/abs/2308.15070) | [Project Page](https://0x3f3f3f3fun.github.io/projects/diffbir/)

  ![visitors](https://visitor-badge.laobi.icu/badge?page_id=XPixelGroup/DiffBIR) [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_app.svg)](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb)

+ [Xinqi Lin](https://0x3f3f3f3fun.github.io/)<sup>1,\*</sup>, [Jingwen He](https://github.com/hejingwenhejingwen)<sup>2,\*</sup>, [Ziyan Chen](https://orcid.org/0000-0001-6277-5635)<sup>2</sup>, [Zhaoyang Lyu](https://scholar.google.com.tw/citations?user=gkXFhbwAAAAJ&hl=en)<sup>2</sup>, [Ben Fei](https://scholar.google.com/citations?user=skQROj8AAAAJ&hl=zh-CN&oi=ao)<sup>2</sup>, [Bo Dai](http://daibo.info/)<sup>2</sup>, [Wanli Ouyang](https://wlouyang.github.io/)<sup>2</sup>, [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao)<sup>2</sup>, [Chao Dong](http://xpixel.group/2010/01/20/chaodong.html)<sup>1,2</sup>

+ <sup>1</sup>Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences<br><sup>2</sup>Shanghai AI Laboratory

+ ![2](https://github.com/open-mmlab/mmdetection/assets/95841578/63b4899a-4c92-4a08-a2bc-e932320e4166)

  :star:If DiffBIR is helpful for you, please help star this repo. Thanks!:hugs:

+ ## Table Of Contents

  - [Visual Results On Real-world Images](#visual_results)
+ - [Update](#update)
  - [TODO](#todo)
  - [Installation](#installation)
  - [Pretrained Models](#pretrained_models)
+ - [Quick Start (gradio demo)](#quick_start)
  - [Inference](#inference)
  - [Train](#train)
 

+ ## <a name="visual_results"></a>Visual Results On Real-world Images

+ <!-- <details close>
+ <summary>General Image Restoration</summary> -->
+ ### General Image Restoration

+ ![3](https://github.com/open-mmlab/mmdetection/assets/95841578/0887c64d-ba44-4124-b001-f2576e543226)

+ <!-- <summary>Face Image Restoration</summary> -->
+ ### Face Image Restoration

+ ![4](https://github.com/open-mmlab/mmdetection/assets/95841578/8e054d66-050a-4eff-8837-acfa4cffc904)

+ Face and background enhanced by DiffBIR.

+ <!-- </details> -->
 

+ ## <a name="update"></a>Update

+ - **2023.09.19**: ✅ Add support for Apple Silicon! Check [installation_xOS.md](assets/docs/installation_xOS.md) to work with **CPU/CUDA/MPS** devices!
+ - **2023.09.14**: ✅ Integrate a patch-based sampling strategy ([mixture-of-diffusers](https://github.com/albarji/mixture-of-diffusers)). [**Try it!**](#general_image_inference) Here is an [example](https://imgsli.com/MjA2MDA1) with a resolution of 2396 x 1596. GPU memory usage will continue to be optimized in the future and we are looking forward to your pull requests!
+ - **2023.09.14**: ✅ Add support for a background upsampler (DiffBIR/[RealESRGAN](https://github.com/xinntao/Real-ESRGAN)) in face enhancement! :rocket: [**Try it!**](#unaligned_face_inference)
+ - **2023.09.13**: :rocket: Provide an online demo (DiffBIR-official) on [OpenXLab](https://openxlab.org.cn/apps/detail/linxinqi/DiffBIR-official), which integrates both the general model and the face model. Please have a try! [camenduru](https://github.com/camenduru) has also implemented an online demo; thanks for his work.:hugs:
+ - **2023.09.12**: ✅ Upload inference code of latent image guidance and release the [real47](inputs/real47) testset.
+ - **2023.09.08**: ✅ Add support for restoring unaligned faces.
+ - **2023.09.06**: :rocket: Update the [colab demo](https://colab.research.google.com/github/camenduru/DiffBIR-colab/blob/main/DiffBIR_colab.ipynb). Thanks to [camenduru](https://github.com/camenduru)!:hugs:
+ - **2023.08.30**: This repo is released.
+ <!-- - [**History Updates** >]() -->
 

+ ## <a name="todo"></a>TODO

+ - [x] Release code and pretrained models :computer:.
+ - [x] Update links to paper and project page :link:.
+ - [x] Release real47 testset :minidisc:.
+ - [ ] Provide webui and reduce the memory usage of DiffBIR :fire::fire::fire:.
+ - [ ] Provide HuggingFace demo :notebook::fire::fire::fire:.
+ - [x] Add a patch-based sampling schedule :mag:.
+ - [x] Upload inference code of latent image guidance :page_facing_up:.
+ - [ ] Improve the performance :superhero:.
  - [x] Support MPS acceleration for macOS users.
 

+ ## <a name="installation"></a>Installation
+ <!-- - **Python** >= 3.9
+ - **CUDA** >= 11.3
+ - **PyTorch** >= 1.12.1
+ - **xformers** == 0.0.16 -->

  ```shell
  # clone this repo
  git clone https://github.com/XPixelGroup/DiffBIR.git
  cd DiffBIR

+ # create an environment with python >= 3.9
+ conda create -n diffbir python=3.9
  conda activate diffbir
  pip install -r requirements.txt
  ```

+ Note that this installation is only compatible with **Linux**. If you are working on a different platform, please check [xOS Installation](assets/docs/installation_xOS.md).
 

+ <!-- ```shell
+ # clone this repo
+ git clone https://github.com/XPixelGroup/DiffBIR.git
+ cd DiffBIR
+
+ # create a conda environment with python >= 3.9
+ conda create -n diffbir python=3.9
+ conda activate diffbir

+ conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch
+ conda install xformers==0.0.16 -c xformers

+ # other dependencies
+ pip install -r requirements.txt
+ ``` -->
+
+ ## <a name="pretrained_models"></a>Pretrained Models

+ | Model Name | Description | HuggingFace | BaiduNetdisk | OpenXLab |
+ | :--------- | :---------- | :---------- | :---------- | :---------- |
+ | general_swinir_v1.ckpt | Stage1 model (SwinIR) for general image restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) | [download](https://pan.baidu.com/s/1uvSvJgcoL_Knj0h22-9TvA?pwd=v3v6) (pwd: v3v6) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_general_swinir_v1) |
+ | general_full_v1.ckpt | Full model for general image restoration. "Full" means it contains both the stage1 and stage2 models. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) | [download](https://pan.baidu.com/s/1gLvW1nvkJStdVAKROqaYaA?pwd=86zi) (pwd: 86zi) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_general_full_v1) |
+ | face_swinir_v1.ckpt | Stage1 model (SwinIR) for face restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_swinir_v1.ckpt) | [download](https://pan.baidu.com/s/1cnBBC8437BJiM3q6suaK8g?pwd=xk5u) (pwd: xk5u) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_face_swinir_v1) |
+ | face_full_v1.ckpt | Full model for face restoration. | [download](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) | [download](https://pan.baidu.com/s/1pc04xvQybkynRfzK5Y8K0Q?pwd=ov8i) (pwd: ov8i) | [download](https://download.openxlab.org.cn/models/linxinqi/DiffBIR/weight//diffbir_face_full_v1) |
 

+ ## <a name="quick_start"></a>Quick Start

  Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/`, then run the following command to interact with the gradio website.

  --device cuda
  ```

+ <img width="887" alt="5" src="https://github.com/open-mmlab/mmdetection/assets/95841578/36afc84f-61d9-4514-88c8-40eaec557e44">
+
+ ## <a name="inference"></a>Inference

+ ### Full Pipeline (Remove Degradations & Refine Details)

+ <a name="general_image_inference"></a>
+ #### General Image

+ Download [general_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_full_v1.ckpt) and [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) to `weights/` and run the following command.

  ```shell
+ python inference.py \
+ --input inputs/demo/general \
+ --config configs/model/cldm.yaml \
+ --ckpt weights/general_full_v1.ckpt \
+ --reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
+ --steps 50 \
+ --sr_scale 4 \
+ --color_fix_type wavelet \
+ --output results/demo/general \
+ --device cuda [--tiled --tile_size 512 --tile_stride 256]
  ```

+ Remove the brackets to enable tiled sampling, as shown below. If you are wondering where the `reload_swinir` option comes from, please refer to the [degradation details](#degradation-details).
+
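+ For example, the same command with the brackets removed (tiled sampling enabled):
+
+ ```shell
+ # identical to the command above, with tiled sampling switched on
+ python inference.py \
+ --input inputs/demo/general \
+ --config configs/model/cldm.yaml \
+ --ckpt weights/general_full_v1.ckpt \
+ --reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
+ --steps 50 \
+ --sr_scale 4 \
+ --color_fix_type wavelet \
+ --output results/demo/general \
+ --device cuda --tiled --tile_size 512 --tile_stride 256
+ ```
+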
+ #### Face Image
+ <!-- Download [face_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) to `weights/` and run the following command. -->
+ The [face_full_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_full_v1.ckpt) will be downloaded from HuggingFace automatically.

  ```shell
  # for aligned face inputs
+ python inference_face.py \
+ --input inputs/demo/face/aligned \
+ --sr_scale 1 \
+ --output results/demo/face/aligned \
+ --has_aligned \
  --device cuda
  ```

+ <a name="unaligned_face_inference"></a>
+
  ```shell
  # for unaligned face inputs
+ python inference_face.py \
+ --input inputs/demo/face/whole_img \
+ --sr_scale 2 \
+ --output results/demo/face/whole_img \
+ --bg_upsampler DiffBIR \
  --device cuda
  ```

+ ### Latent Image Guidance (Quality-fidelity trade-off)
+
+ Latent image guidance is used to achieve a trade-off between quality and fidelity. It is disabled by default, since we prefer quality over fidelity. Here is an example:

  ```shell
+ python inference.py \
+ --input inputs/demo/general \
+ --config configs/model/cldm.yaml \
+ --ckpt weights/general_full_v1.ckpt \
+ --reload_swinir --swinir_ckpt weights/general_swinir_v1.ckpt \
+ --steps 50 \
+ --sr_scale 4 \
+ --color_fix_type wavelet \
+ --output results/demo/general \
+ --device cuda \
+ --use_guidance --g_scale 400 --g_t_start 200
  ```

+ You will see that the results become smoother.
 

+ ### Only Stage1 Model (Remove Degradations)

+ Download [general_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/general_swinir_v1.ckpt) and [face_swinir_v1.ckpt](https://huggingface.co/lxq007/DiffBIR/resolve/main/face_swinir_v1.ckpt) for general and face images respectively, then run the following command.

  ```shell
+ python scripts/inference_stage1.py \
+ --config configs/model/swinir.yaml \
+ --ckpt [swinir_ckpt_path] \
+ --input [lq_dir] \
+ --sr_scale 1 --image_size 512 \
+ --output [output_dir_path]
  ```
 

+ ### Only Stage2 Model (Refine Details)

+ Since the proposed two-stage pipeline is very flexible, you can utilize other awesome models to remove degradations instead of SwinIR and then leverage Stable Diffusion to refine details.

  ```shell
+ # step 1: use other models to remove degradations and save the results in [img_dir_path]
+
+ # step 2: refine the details of the step 1 outputs
+ python inference.py \
+ --config configs/model/cldm.yaml \
+ --ckpt [full_ckpt_path] \
+ --steps 50 --sr_scale 1 \
+ --input [img_dir_path] \
+ --color_fix_type wavelet \
+ --output [output_dir_path] \
+ --disable_preprocess_model \
  --device cuda
  ```
 

+ ## <a name="train"></a>Train

+ ### Degradation Details

+ For general image restoration, we first train both the stage1 and stage2 models under the CodeFormer degradation to enhance the generative capacity of the stage2 model. To improve degradation removal, we train another stage1 model under the Real-ESRGAN degradation and utilize it during inference.

+ For face image restoration, we adopt the degradation model used in [DifFace](https://github.com/zsyOAOA/DifFace/blob/master/configs/training/swinir_ffhq512.yaml) for training and directly utilize the SwinIR model released by them as our stage1 model.

+ ### Data Preparation
+ 1. Generate file lists for the training set and validation set.

+ ```shell
+ python scripts/make_file_list.py \
+ --img_folder [hq_dir_path] \
+ --val_size [validation_set_size] \
+ --save_folder [save_dir_path] \
+ --follow_links
+ ```
+
+ This script will collect all image files in `img_folder` and automatically split them into a training set and a validation set. You will get two file lists in `save_folder`; each line of a file list contains the absolute path of an image file:
+
+ ```
+ save_folder
+ ├── train.list # training file list
+ └── val.list # validation file list
+ ```

+ 2. Configure the training set and validation set.

+ For general image restoration, fill in the following configuration files with appropriate values.

+ - [training set](configs/dataset/general_deg_codeformer_train.yaml) and [validation set](configs/dataset/general_deg_codeformer_val.yaml) for the **CodeFormer** degradation.
+ - [training set](configs/dataset/general_deg_realesrgan_train.yaml) and [validation set](configs/dataset/general_deg_realesrgan_val.yaml) for the **Real-ESRGAN** degradation.

+ For face image restoration, fill in the face [training set](configs/dataset/face_train.yaml) and [validation set](configs/dataset/face_val.yaml) configuration files with appropriate values.
 

+ ### Train Stage1 Model
+
+ 1. Configure training-related information.

+ Fill in the [training configuration file](configs/train_swinir.yaml) with appropriate values.

+ 2. Start training.

  ```shell
+ python train.py --config [training_config_path]
  ```

+ :bulb: Checkpoints of SwinIR will be used when training the stage2 model.
 

+ ### Train Stage2 Model
+
+ 1. Download the pretrained [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) to provide generative capabilities.

  ```shell
  wget https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt --no-check-certificate
  ```

+ 2. Create the initial model weights.
+
+ ```shell
+ python scripts/make_stage2_init_weight.py \
+ --cldm_config configs/model/cldm.yaml \
+ --sd_weight [sd_v2.1_ckpt_path] \
+ --swinir_weight [swinir_ckpt_path] \
+ --output [init_weight_output_path]
+ ```
+
+ You will see some [outputs](assets/init_weight_outputs.txt) showing the weight initialization.
+
+ 3. Configure training-related information.

+ Fill in the [training configuration file](configs/train_cldm.yaml) with appropriate values.

+ 4. Start training.

  ```shell
+ python train.py --config [training_config_path]
  ```
 

  ## Citation

  Please cite us if our work is useful for your research.

  ```
+ @article{2023diffbir,
+ author = {Xinqi Lin and Jingwen He and Ziyan Chen and Zhaoyang Lyu and Ben Fei and Bo Dai and Wanli Ouyang and Yu Qiao and Chao Dong},
+ title = {DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior},
+ journal = {arXiv preprint arXiv:2308.15070},
+ year = {2023},
  }
  ```

  ## Contact

+ If you have any questions, please feel free to contact me at linxinqi@tju.edu.cn.