Report: Ethical issue(s)
Hi peter886,
This issue is related to the BasicSR library. When `fused_act_ext` is not defined on the system, calling `fused_leaky_relu` raises the error: "name 'fused_act_ext' is not defined."
I have implemented a fix in my forked version of BasicSR. Please update your pip packages (e.g., `pip install -r requirements.txt -U`) or rebuild the Docker image from scratch.
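For context, the usual way to avoid this kind of NameError is to guard the import of the compiled CUDA extension and fall back to plain PyTorch ops. The sketch below is illustrative only, not the actual patch in the fork; the function name `fused_leaky_relu_fallback` and its default arguments are assumptions modeled on the common bias-add + leaky ReLU + scale pattern.

```python
import torch
import torch.nn.functional as F

try:
    import fused_act_ext  # compiled CUDA extension; may be absent on some systems
except ImportError:
    fused_act_ext = None  # extension not built -> use the pure-PyTorch path

def fused_leaky_relu_fallback(x, bias=None, negative_slope=0.2, scale=2 ** 0.5):
    """Pure-PyTorch equivalent used when the compiled op is unavailable."""
    if bias is not None:
        # Broadcast the per-channel bias over all non-channel dimensions.
        x = x + bias.reshape(1, bias.shape[0], *([1] * (x.ndim - 2)))
    return F.leaky_relu(x, negative_slope=negative_slope) * scale
```

With a guard like this, the code degrades gracefully on machines where the extension was never compiled, at the cost of the fused kernel's speed.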
Thank you for reporting this issue!
I searched Stack Overflow and found some Tesla T4 users hitting the same issue, where `torch.cuda.is_available()` returns False, but there doesn't seem to be a resolved answer. It might be related to the PyTorch version, but I'm not sure.
Currently, the versions specified in the requirements are as follows:
torch==2.5.0+cu124; sys_platform != 'darwin'
torchvision==0.20.0+cu124; sys_platform != 'darwin'
You might want to try changing the PyTorch version and testing again.
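Before swapping versions, it may help to confirm which wheel is actually installed and whether it was built with CUDA support at all (a CPU-only wheel reports `None` for the CUDA build). A quick diagnostic:

```python
import torch

# Report the installed PyTorch build and GPU visibility.
print("torch version:", torch.__version__)            # e.g. 2.5.0+cu124
print("built for CUDA:", torch.version.cuda)          # None on CPU-only wheels
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```

If the version string lacks a `+cu124`-style suffix, pip likely installed the CPU-only wheel, which would explain `is_available()` returning False even on a working T4.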
Thank you!
`nvidia-smi` is not working properly inside the container. The NVIDIA driver on the host machine is not being recognized within the container.
I start the container with `docker run --restart=always -d -e gpus=all -name multi ......`
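One thing worth checking in that command: `-e gpus=all` only sets an environment variable named `gpus` inside the container; with the NVIDIA Container Toolkit, GPU access is requested with the `--gpus` flag, and `--name` takes two dashes. A corrected invocation might look like this (`<image>` is a placeholder for the elided image name, and the toolkit is assumed to be installed on the host):

```shell
# "--gpus all" exposes the host GPUs to the container via the
# NVIDIA Container Toolkit; "-e gpus=all" would not.
docker run --restart=always -d --gpus all --name multi <image>

# Verify from inside the running container:
docker exec multi nvidia-smi
```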
Can you share your Dockerfile?
I used a CUDA 12.4 image, and `nvidia-smi` works fine, which means my driver supports CUDA 12.4. However, `nvidia-smi` fails in the container you provided. Is your container image based on CUDA 12.4 or higher?
I have found the reason. It may be caused by my OS, which is WSL. Thanks for your help!