runtime error

Exit code: 1. Reason:
Building wheel for flash-attn (setup.py): started
Building wheel for flash-attn (setup.py): finished with status 'done'
Created wheel for flash-attn: filename=flash_attn-2.7.2.post1-py3-none-any.whl size=190148610 sha256=aaca54f8ee67507c92683e7b71e524f31d370d50ec110811e4b7492976ebe89f
Stored in directory: /home/user/.cache/pip/wheels/da/ec/5b/b2c37a8e4f755ad82492a822463bca0817f0e0e11de874b550
Successfully built flash-attn
Installing collected packages: einops, flash-attn
Successfully installed einops-0.8.0 flash-attn-2.7.2.post1
[notice] A new release of pip is available: 24.2 -> 24.3.1
[notice] To update, run: /usr/local/bin/python3.10 -m pip install --upgrade pip
Loading CLIP
Loading VLM's custom vision model
Loading tokenizer
Loading LLM: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
Downloading shards: 100%|██████████| 4/4 [00:32<00:00, 8.07s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 6.36it/s]
Loading VLM's custom text model
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Loading image adapter
pixtral_model: <class 'NoneType'>
pixtral_processor: <class 'NoneType'>
Traceback (most recent call last):
  File "/home/user/app/app.py", line 3, in <module>
    from joycaption import stream_chat_mod, get_text_model, change_text_model, get_repo_gguf
  File "/home/user/app/joycaption.py", line 237, in <module>
    @spaces.GPU()
TypeError: spaces.GPU() missing 1 required positional argument: 'func'
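The final TypeError suggests that the installed version of the `spaces` package defines `GPU` as a plain decorator, i.e. it expects the decorated function as its positional argument, so calling it as a factory with `@spaces.GPU()` passes no arguments and fails. A likely fix is to write `@spaces.GPU` without parentheses, or to upgrade the `spaces` package to a release whose `GPU` supports both usages. The sketch below uses a hypothetical stand-in decorator named `gpu` (not the real `spaces.GPU`) to reproduce the failure mode:

```python
# Minimal sketch of a plain decorator, mimicking how an older
# spaces.GPU could behave. `gpu` is a hypothetical stand-in.
def gpu(func):
    # Plain decorator: expects the function itself as the only
    # positional argument, so `gpu()` with no args raises TypeError.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@gpu                 # works: the function is passed directly
def ok():
    return "ok"

try:
    @gpu()           # fails: gpu() is called with no 'func' argument
    def broken():
        pass
except TypeError as e:
    print(e)         # gpu() missing 1 required positional argument: 'func'
```

In joycaption.py line 237 the analogous change would be replacing `@spaces.GPU()` with `@spaces.GPU` (or pinning a newer `spaces` version where the factory form is supported).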
