Runtime error
00:42<00:00, 50.7MB/s]
Downloading shards: 100%|██████████| 4/4 [09:28<00:00, 129.78s/it]
Downloading shards: 100%|██████████| 4/4 [09:28<00:00, 142.20s/it]

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/user/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so
CUDA SETUP: Loading binary /home/user/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
/home/user/.local/lib/python3.8/site-packages/bitsandbytes/cextension.py:33: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Traceback (most recent call last):
  File "app.py", line 31, in <module>
    blip2_model_8_bit = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b", device_map="auto", load_in_8bit=True)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2740, in from_pretrained
    raise ValueError(
ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
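The ValueError names its own workaround: when the model does not fully fit in GPU RAM, keep the modules that spill to CPU or disk in 32-bit while quantizing the rest, by enabling fp32 CPU offload and passing an explicit `device_map`. Below is a minimal sketch, assuming a transformers version where this is spelled via `BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)` as in the quantization docs the error links to (some releases instead accept `load_in_8bit_fp32_cpu_offload=True` directly on `from_pretrained`, as the error text suggests). The module split shown is hypothetical and must be adapted to Blip2's actual submodule names and the GPU memory available:

```python
from transformers import Blip2ForConditionalGeneration, BitsAndBytesConfig

# Quantize to 8-bit, but keep any module dispatched to the CPU in fp32
# instead of raising the ValueError above.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Hypothetical split: vision tower and Q-Former on GPU 0, the large OPT
# language model offloaded to CPU. Adjust to what your GPU can hold.
device_map = {
    "query_tokens": 0,
    "vision_model": 0,
    "qformer": 0,
    "language_projection": 0,
    "language_model": "cpu",
}

blip2_model_8_bit = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b",
    device_map=device_map,
    quantization_config=quantization_config,
)
```

Note, though, that the UserWarning higher up shows bitsandbytes loaded its CPU-only binary (`libbitsandbytes_cpu.so`). 8-bit quantization requires a CUDA GPU, so on CPU-only hardware the offload flags will not help; the model would have to be loaded without `load_in_8bit=True`, or the Space moved to GPU hardware.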