gudada committed · verified
Commit 9c65063 · 1 Parent(s): 8a0d194

Update README.md

Files changed (1):
  1. README.md +8 -199
README.md CHANGED
@@ -1,199 +1,8 @@
- # 🎨 DDColor
-
- Official PyTorch implementation of the ICCV 2023 paper "DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders".
-
-
- [![arXiv](https://img.shields.io/badge/arXiv-2212.11613-b31b1b.svg)](https://arxiv.org/abs/2212.11613)
- [![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FF8000)](https://huggingface.co/piddnad/DDColor-models)
- [![ModelScope demo](https://img.shields.io/badge/%F0%9F%91%BE%20ModelScope-Demo-8A2BE2)](https://www.modelscope.cn/models/damo/cv_ddcolor_image-colorization/summary)
- [![Replicate](https://replicate.com/piddnad/ddcolor/badge)](https://replicate.com/piddnad/ddcolor)
- ![visitors](https://visitor-badge.laobi.icu/badge?page_id=piddnad/DDColor)
-
-
- > Xiaoyang Kang, Tao Yang, Wenqi Ouyang, Peiran Ren, Lingzhi Li, Xuansong Xie
- >
- > *DAMO Academy, Alibaba Group*
-
- 🪄 DDColor provides vivid and natural colorization for historical black-and-white photos.
-
- <p align="center">
- <img src="assets/teaser.png" width="100%">
- </p>
-
- 🎲 It can even colorize/recolor landscapes from anime games, transforming your animated scenery into a realistic real-life style! (Image source: Genshin Impact)
-
- <p align="center">
- <img src="assets/anime_landscapes.png" width="100%">
- </p>
-
-
- ## 🔥 News
-
- * [2024-01-28] Support inference via Hugging Face! Thanks @[Niels](https://github.com/NielsRogge) for the suggestion and example code, and @[Skwara](https://github.com/Skwarson96) for fixing a bug.
-
- * [2024-01-18] Add Replicate demo and API! Thanks @[Chenxi](https://github.com/chenxwh).
-
- * [2023-12-13] Release the DDColor-tiny pretrained model!
-
- * [2023-09-07] Add the Model Zoo and release three pretrained models!
-
- * [2023-05-15] Code release for training and inference!
-
- * [2023-05-05] The online demo is available!
-
- ## Online Demo
-
- We provide online demos through ModelScope at [![ModelScope demo](https://img.shields.io/badge/%F0%9F%91%BE%20ModelScope-Demo-8A2BE2)](https://www.modelscope.cn/models/damo/cv_ddcolor_image-colorization/summary) and Replicate at [![Replicate](https://replicate.com/piddnad/ddcolor/badge)](https://replicate.com/piddnad/ddcolor).
-
- Feel free to try them out!
-
- ## Methods
-
- *In short:* DDColor uses multi-scale visual features to optimize **learnable color tokens** (i.e., color queries) and achieves state-of-the-art performance on automatic image colorization.
-
- <p align="center">
- <img src="assets/network_arch.jpg" width="100%">
- </p>
-
-
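To make the mechanism concrete, here is a minimal, *unofficial* sketch of the color-query idea in PyTorch. Module names, sizes, and the chroma readout are illustrative assumptions, not the paper's actual dual-decoder implementation:

```
import torch
import torch.nn as nn

# Illustrative sketch only: a fixed set of learnable "color tokens" cross-attends
# to flattened image features, and each pixel's chroma is read out from its
# affinity to the refined tokens. All names and sizes are assumptions.
class ColorQuerySketch(nn.Module):
    def __init__(self, num_queries=100, dim=256):
        super().__init__()
        self.color_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_ab = nn.Linear(dim, 2)  # each refined token proposes an (a, b) chroma pair

    def forward(self, feats):
        # feats: (B, H*W, dim) image features flattened over spatial positions
        q = self.color_queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        q, _ = self.cross_attn(q, feats, feats)  # refine color tokens against the image
        affinity = torch.softmax(feats @ q.transpose(1, 2), dim=-1)  # (B, H*W, num_queries)
        return affinity @ self.to_ab(q)  # per-pixel (a, b) chroma, shape (B, H*W, 2)

feats = torch.randn(1, 64 * 64, 256)
ab = ColorQuerySketch()(feats)  # (1, 4096, 2)
```

In this sketch each pixel's chroma is a soft combination of the query proposals; the real model optimizes the queries against multi-scale features across several decoder layers, as shown in the figure above.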
- ## Installation
-
- ### Requirements
-
- - Python >= 3.7
- - PyTorch >= 1.7
-
- ### Install with conda (Recommended)
-
- ```
- conda create -n ddcolor python=3.8
- conda activate ddcolor
- pip install -r requirements.txt
-
- python3 setup.py develop  # install basicsr
- ```
-
- ## Quick Start
-
- ### Inference with the ModelScope library
-
- 1. Install ModelScope:
-
- ```
- pip install "modelscope[cv]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
- ```
-
- 2. Run the following code:
-
- ```
- import cv2
- from modelscope.outputs import OutputKeys
- from modelscope.pipelines import pipeline
- from modelscope.utils.constant import Tasks
-
- img_colorization = pipeline(Tasks.image_colorization, model='damo/cv_ddcolor_image-colorization')
- result = img_colorization('https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/audrey_hepburn.jpg')
- cv2.imwrite('result.png', result[OutputKeys.OUTPUT_IMG])
- ```
-
- This will automatically download the DDColor models.
-
- You can find the model file `pytorch_model.pt` under the local path `~/.cache/modelscope/hub/damo`.
-
- ### Inference from a local script
-
- 1. Download the pretrained model file by running:
-
- ```
- from modelscope.hub.snapshot_download import snapshot_download
-
- model_dir = snapshot_download('damo/cv_ddcolor_image-colorization', cache_dir='./modelscope')
- print('model assets saved to %s' % model_dir)
- ```
-
- Then the weights will be at `modelscope/damo/cv_ddcolor_image-colorization/pytorch_model.pt`.
-
- Or, download the model from [Hugging Face](https://huggingface.co/piddnad/DDColor-models), as sketched below.
-
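A minimal sketch of that Hugging Face route, assuming the `huggingface_hub` package is installed (`local_dir` below is an arbitrary example path, not a project convention):

```
from huggingface_hub import snapshot_download

# Fetch the DDColor weights from the Hugging Face model repo.
# local_dir is an illustrative choice; omit it to use the default cache.
model_dir = snapshot_download(repo_id="piddnad/DDColor-models", local_dir="./ddcolor_weights")
print('model assets saved to %s' % model_dir)
```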
- 2. Run:
-
- ```
- sh scripts/inference.sh
- ```
-
- ### Inference with Hugging Face
-
- Now we can load the model via the Hugging Face Hub like this:
-
- ```
- from inference.colorization_pipeline_hf import DDColorHF
-
- ddcolor_paper_tiny = DDColorHF.from_pretrained("piddnad/ddcolor_paper_tiny")
- ddcolor_paper = DDColorHF.from_pretrained("piddnad/ddcolor_paper")
- ddcolor_modelscope = DDColorHF.from_pretrained("piddnad/ddcolor_modelscope")
- ddcolor_artistic = DDColorHF.from_pretrained("piddnad/ddcolor_artistic")
- ```
-
- Check `inference/colorization_pipeline_hf.py` for the details of inference, or perform inference directly by running:
-
- ```
- python inference/colorization_pipeline_hf.py --model_name ddcolor_modelscope --input ./assets/test_images
- # model_name: [ddcolor_paper | ddcolor_modelscope | ddcolor_artistic | ddcolor_paper_tiny]
- ```
-
- ### Gradio Demo
-
- 1. Install Gradio and the other required libraries:
-
- ```
- pip install gradio gradio_imageslider timm -q
- ```
-
- 2. Run the demo:
-
- ```
- python gradio_app.py
- ```
-
- ## Model Zoo
-
- We provide several versions of pretrained models; please check out the [Model Zoo](MODEL_ZOO.md).
-
-
- ## Train
-
- 1. Dataset preparation: download the [ImageNet](https://www.image-net.org/) dataset, or prepare any custom dataset of your own. Use the following script to get the dataset list file:
-
- ```
- python data_list/get_meta_file.py
- ```
-
- 2. Download the pretrained weights for [ConvNeXt](https://dl.fbaipublicfiles.com/convnext/convnext_large_22k_224.pth) and [InceptionV3](https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth) and put them into the `pretrain` folder, as sketched below.
-
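A download sketch for this step using only the Python standard library (the URLs are exactly the ones linked above; the `pretrain` destination matches the step):

```
import os
import urllib.request

# Fetch the two backbone checkpoints linked above into ./pretrain.
os.makedirs("pretrain", exist_ok=True)
for url in [
    "https://dl.fbaipublicfiles.com/convnext/convnext_large_22k_224.pth",
    "https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth",
]:
    dest = os.path.join("pretrain", url.rsplit("/", 1)[-1])
    urllib.request.urlretrieve(url, dest)
    print("saved %s" % dest)
```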
- 3. Specify `meta_info_file` and other options in `options/train/train_ddcolor.yml`.
-
- 4. Run:
-
- ```
- sh scripts/train.sh
- ```
-
- ## Citation
-
- If our work is helpful for your research, please consider citing:
-
- ```
- @inproceedings{kang2023ddcolor,
-   title={DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders},
-   author={Kang, Xiaoyang and Yang, Tao and Ouyang, Wenqi and Ren, Peiran and Li, Lingzhi and Xie, Xuansong},
-   booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
-   pages={328--338},
-   year={2023}
- }
- ```
-
- ## Acknowledgments
-
- We thank the authors of BasicSR for the awesome training pipeline.
-
- > Xintao Wang, Ke Yu, Kelvin C.K. Chan, Chao Dong and Chen Change Loy. BasicSR: Open Source Image and Video Restoration Toolbox. https://github.com/xinntao/BasicSR, 2020.
-
- Some code is adapted from [ColorFormer](https://github.com/jixiaozhong/ColorFormer), [BigColor](https://github.com/KIMGEONUNG/BigColor), [ConvNeXt](https://github.com/facebookresearch/ConvNeXt), [Mask2Former](https://github.com/facebookresearch/Mask2Former), and [DETR](https://github.com/facebookresearch/detr). Thanks for their excellent work!
 
+ ---
+ license: apache-2.0
+ title: DDColor
+ sdk: gradio
+ emoji: 😻
+ colorFrom: pink
+ colorTo: gray
+ ---