balthou committed
Commit 1089c19 · 1 Parent(s): b8d7d34

udpate image links

Files changed (1)
  1. description.md +5 -5
description.md CHANGED
@@ -16,23 +16,23 @@ We propose to explore several tracks:
 - Try to see if the generalization property to natural images observed in denoising holds for deblurring.
 
 
-![](https://huggingface.co/spaces/balthou/interactive-pipe-tutorial/resolve/main/illustrations/blind_deblur_teaser_figure.png)
+![](https://huggingface.co/spaces/balthou/image-deblurring/resolve/main/illustrations/blind_deblur_teaser_figure.png)
 
 
 We first validated that NAFNet trained on deadleaves performed well on the blind denoising task. Below you can see that it also performs correctly on natural images, although the performance is not as good as that of a network trained purely on natural images.
 
 | Qualitative results at SNR in = 20dB | Quantitative results |
 | :---: | :---: |
-| ![](https://huggingface.co/spaces/balthou/interactive-pipe-tutorial/resolve/main/illustrationsdeadleaves_vs_natural_with_psnr.png) | ![](
-https://huggingface.co/spaces/balthou/interactive-pipe-tutorial/resolve/main/illustrations/cross_val_natural.png)
+| ![](https://huggingface.co/spaces/balthou/image-deblurring/resolve/main/illustrationsdeadleaves_vs_natural_with_psnr.png) | ![](
+https://huggingface.co/spaces/balthou/image-deblurring/resolve/main/illustrations/cross_val_natural.png)
 
 
 Finally, when applying the deadleaves training to the blind deblurring problem, one advantage we noticed is that the network always tries to deblur, even when the level of blur is high. In contrast, when trained on natural images, NAFNet does not perform as well when the blur level is too high.
 |Blind deblurring results|
 |:----:|
-| ![](illustrations/deblur_results.png) |
+| ![](https://huggingface.co/spaces/balthou/image-deblurring/resolve/main/illustrations/deblur_results.png) |
 |Deblurring results for different amounts of blur, using NAFNet trained on Div2K or deadleaves. From left to right column: ”small”, ”mild” and ”big” blur kernels used to degrade the input. **Top row**: input image. **Middle row**: output of NAFNet trained on deadleaves. **Bottom row**: output of NAFNet trained on Div2K.|
-|![](https://huggingface.co/spaces/balthou/interactive-pipe-tutorial/resolve/main/illustrations/deblur_table.png) |
+|![](https://huggingface.co/spaces/balthou/image-deblurring/resolve/main/illustrations/deblur_table.png) |
 
 **Conclusion**:
 - Adding extra primitives to pure deadleaves seems like a good idea but did not bring as much improvement as we expected. A rework adding anisotropy and extra geometric shapes could lead to significantly better results.
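
For context, the dead-leaves model referenced throughout the description (opaque shapes with power-law-distributed sizes occluding one another, a classic synthetic prior for natural-image statistics) can be sketched as below. This is an illustrative reconstruction, not the project's actual data-generation code; the disk count, radius range, and exponent `alpha` are all assumed values.

```python
import numpy as np

def dead_leaves(size=256, n_disks=2000, rmin=3.0, rmax=60.0, alpha=3.0, seed=0):
    """Render a grayscale dead-leaves image: opaque disks with
    power-law-distributed radii, each drawn on top of the previous ones.

    Note: parameters are illustrative assumptions, not the values used
    to train the NAFNet models discussed above.
    """
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5, dtype=np.float32)  # neutral background
    yy, xx = np.mgrid[0:size, 0:size]

    # Inverse-CDF sampling of r ~ r^(-alpha) restricted to [rmin, rmax]
    u = rng.random(n_disks)
    k = 1.0 - alpha
    radii = (rmin**k + u * (rmax**k - rmin**k)) ** (1.0 / k)

    for r in radii:
        cx, cy = rng.uniform(0, size, 2)          # random disk center
        gray = rng.random()                        # random uniform gray level
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r * r
        img[mask] = gray                           # newer disks occlude older ones
    return img

img = dead_leaves(size=128, n_disks=500)
print(img.shape)
```

Drawing disks sequentially and letting later ones overwrite earlier ones is what produces the occlusion edges and scale-invariant statistics that make this synthetic prior a plausible stand-in for natural images in denoising/deblurring training.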