Apply for community grant: Academic project (GPU and storage)

#5
by haodongli - opened

Dear Hugging Face team, I am writing to apply for a GPU grant for this Space: Lotus_Depth.

Based on Stable Diffusion, Lotus delivers SoTA performance on monocular depth & normal estimation with a simple yet effective fine-tuning protocol that better fits the pre-trained visual prior for dense prediction.

Project page: https://lotus3d.github.io/
Paper: https://arxiv.org/abs/2409.18124
Code: https://github.com/EnVision-Research/Lotus

Abstract:
Leveraging the visual priors of pre-trained text-to-image diffusion models offers a promising solution to enhance zero-shot generalization in dense prediction tasks. However, existing methods often uncritically use the original diffusion formulation, which may not be optimal due to the fundamental differences between dense prediction and image generation.
In this paper, we provide a systematic analysis of the diffusion formulation for dense prediction, focusing on both quality and efficiency. We find that the original parameterization used for image generation, which learns to predict noise, is harmful for dense prediction, and that the multi-step noising/denoising diffusion process is unnecessary and challenging to optimize.
Based on these insights, we introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We also reformulate the diffusion process into a single-step procedure, simplifying optimization and significantly boosting inference speed. Additionally, we introduce a novel tuning strategy called detail preserver, which achieves more accurate and fine-grained predictions.
Without scaling up the training data or model capacity, Lotus achieves SoTA performance in zero-shot depth and normal estimation across various datasets. It also enhances efficiency, being significantly faster than most existing diffusion-based methods. Lotus' superior quality and efficiency also enable a wide range of practical applications, such as joint estimation, single/multi-view 3D reconstruction, etc.
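For readers skimming the thread, here is a rough sketch of what the single-step, direct-annotation-prediction idea described above could look like at inference time. This is not the released Lotus code: the Stable Diffusion checkpoint name, the fixed timestep, and the conditioning shapes are illustrative assumptions based on a diffusers-style pipeline.

```python
# Minimal sketch (assumed, not the authors' implementation) of single-step
# dense prediction with a Stable-Diffusion-style backbone: the UNet is
# fine-tuned to output the annotation latent directly instead of noise.
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel

device = "cuda" if torch.cuda.is_available() else "cpu"
repo = "stabilityai/stable-diffusion-2"  # placeholder backbone
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet").to(device)

@torch.no_grad()
def predict_depth_single_step(image: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) in [-1, 1]; text_embeds: (B, 77, 1024) conditioning."""
    # Encode the RGB input into the latent space of the pre-trained VAE.
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # One UNet forward pass at a single fixed timestep; the network is assumed
    # to have been fine-tuned to predict the annotation (x0) rather than noise.
    t = torch.full((latents.shape[0],), 999, device=device, dtype=torch.long)
    pred_latents = unet(latents, t, encoder_hidden_states=text_embeds).sample
    # Decode the predicted latent back to image space as the depth map.
    return vae.decode(pred_latents / vae.config.scaling_factor).sample
```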

Best,
Authors of Lotus

Hi @haodongli , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.
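In case it helps other readers, adapting a Gradio Space to ZeroGPU usually amounts to wrapping the GPU-heavy function with the @spaces.GPU decorator. The sketch below is illustrative only; the model and function names are placeholders, not this Space's actual code.

```python
import gradio as gr
import spaces
import torch

# Placeholder model; in a real Space this would be the Lotus pipeline,
# loaded once at startup (kept on CPU until a request arrives).
model = torch.nn.Identity()

@spaces.GPU  # a ZeroGPU device is attached only while this function runs
def infer(image):
    model.to("cuda")
    # ... actual depth/normal prediction would go here ...
    return image

demo = gr.Interface(fn=infer, inputs=gr.Image(), outputs=gr.Image())
demo.launch()
```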

By the way, it looks like your Space is using a somewhat old version of gradio-imageslider; it would be nice if you could upgrade to the latest version, which has a better UI.

Also, it's not recommended to set cache_examples=True on ZeroGPU because it might cause an error at startup time due to the GPU quota. In ZeroGPU Spaces, the GRADIO_CACHE_EXAMPLES=lazy environment variable is set by default and examples are cached when users run them, so you can simply remove cache_examples=True.
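As a concrete, purely illustrative example of that suggestion, examples can simply be listed without cache_examples; the file paths and the infer function below are placeholders, not this Space's actual code.

```python
import gradio as gr

def infer(image):
    # placeholder for the actual depth/normal prediction
    return image

demo = gr.Interface(
    fn=infer,
    inputs=gr.Image(type="filepath"),
    outputs=gr.Image(),
    # No cache_examples=True here: on ZeroGPU Spaces the
    # GRADIO_CACHE_EXAMPLES=lazy environment variable is set by default,
    # so each example is cached the first time a user runs it.
    examples=["examples/sample_1.jpg", "examples/sample_2.jpg"],
)
demo.launch()
```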

Got it, thanks!

Dear sir or madam,

Thanks so much for such detailed feedback! I've upgraded gradio-imageslider to 0.0.20 and set cache_examples=False.

However, I switched from ZeroGPU to L40S by mistake; could you please help me re-assign the ZeroGPU hardware?

Thank you and have a good day!

Best,

Hi @haodongli, I have assigned the ZeroGPU grant again.

Thanks @akhaliq !

However, I just found some build errors related to bitsandbytes; I'm fixing them now.

By the way, there is also another demo for normal prediction (https://huggingface.co/spaces/haodongli/Lotus_Normal); could you also assign it a ZeroGPU grant? Thanks so much!

Best,

Hi @akhaliq ,

The error has been fixed! Please also consider assigning ZeroGPU for LOTUS - Normal (https://huggingface.co/spaces/haodongli/Lotus_Normal); that demo has also been updated for ZeroGPU.

Thanks!

Thanks for adapting the code to ZeroGPU!

Please also consider assigning ZeroGPU for LOTUS - Normal (https://huggingface.co/spaces/haodongli/Lotus_Normal); that demo has also been updated for ZeroGPU.

Done!