cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

#19
by d3vnu77

Running this on a Tesla P40 24GB card. Any idea why I might be getting the following error?
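
For reference, main.py is essentially the following (a minimal sketch pieced together from the traceback below; the model name and the LLM(...) call appear there verbatim, the rest is assumed):

from vllm import LLM

# Model name as it appears in the engine-init log line below
model_name = "mistralai/Pixtral-12B-2409"

# The call on main.py line 10 in the traceback
llm = LLM(model=model_name, tokenizer_mode="mistral")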

rafflevision@rafflevision:~/python/rafflevision$ bin/python main.py
INFO 09-21 13:19:08 config.py:1653] Downcasting torch.float32 to torch.float16.
WARNING 09-21 13:19:08 arg_utils.py:910] The model has a long context length (128000). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
INFO 09-21 13:19:08 llm_engine.py:223] Initializing an LLM engine (v0.6.1.post2) with config: model='mistralai/Pixtral-12B-2409', speculative_config=None, tokenizer='mistralai/Pixtral-12B-2409', skip_tokenizer_init=False, tokenizer_mode=mistral, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=128000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=mistralai/Pixtral-12B-2409, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-21 13:19:10 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-21 13:19:10 selector.py:116] Using XFormers backend.
/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: torch.library.impl_abstract was renamed to torch.library.register_fake. Please use that instead; we will remove torch.library.impl_abstract in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_fwd")
/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: torch.library.impl_abstract was renamed to torch.library.register_fake. Please use that instead; we will remove torch.library.impl_abstract in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_bwd")
INFO 09-21 13:19:11 model_runner.py:997] Starting to load model mistralai/Pixtral-12B-2409...
INFO 09-21 13:19:11 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-21 13:19:11 selector.py:116] Using XFormers backend.
INFO 09-21 13:19:11 weight_utils.py:242] Using model weights format ['*.safetensors']
INFO 09-21 13:19:11 weight_utils.py:287] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:20<00:00, 20.83s/it]

INFO 09-21 13:19:33 model_runner.py:1008] Loading model weights took 23.6552 GB
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/worker/model_runner_base.py", line 112, in _wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1546, in execute_model
[rank0]: hidden_or_intermediate_states = model_executable(
[rank0]: ^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/model_executor/models/pixtral.py", line 179, in forward
[rank0]: vision_embeddings = self._process_image_input(image_input)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/model_executor/models/pixtral.py", line 229, in _process_image_input
[rank0]: return self.vision_language_adapter(self.vision_encoder(image_input))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/model_executor/models/pixtral.py", line 530, in forward
[rank0]: patch_embeds_list = [
[rank0]: ^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/model_executor/models/pixtral.py", line 531, in <listcomp>
[rank0]: self.patch_conv(img.unsqueeze(0).to(self.dtype)) for img in images
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 458, in forward
[rank0]: return self._conv_forward(input, self.weight, self.bias)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
[rank0]: return F.conv2d(input, weight, bias, self.stride,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

[rank0]: The above exception was the direct cause of the following exception:

[rank0]: Traceback (most recent call last):
[rank0]: File "/home/rafflevision/python/rafflevision/main.py", line 10, in <module>
[rank0]: llm = LLM(model=model_name, tokenizer_mode="mistral")
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 178, in __init__
[rank0]: self.llm_engine = LLMEngine.from_engine_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 550, in from_engine_args
[rank0]: engine = cls(
[rank0]: ^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 331, in __init__
[rank0]: self._initialize_kv_caches()
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 460, in _initialize_kv_caches
[rank0]: self.model_executor.determine_num_available_blocks())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
[rank0]: return self.driver_worker.determine_num_available_blocks()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/worker/worker.py", line 223, in determine_num_available_blocks
[rank0]: self.model_runner.profile_run()
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 1218, in profile_run
[rank0]: self.execute_model(model_input, kv_caches, intermediate_tensors)
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rafflevision/python/rafflevision/lib/python3.11/site-packages/vllm/worker/model_runner_base.py", line 126, in _wrapper
[rank0]: raise type(err)(
[rank0]: RuntimeError: Error in model execution (input dumped to /tmp/err_execute_model_input_20240921-131934.pkl): cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
rafflevision@rafflevision:~/python/rafflevision$

Did you already try installing the torch build that matches your actual CUDA version?

Run the command below to see the version:

nvcc --version

On this page you can get the correct install command for your current version:
https://pytorch.org/get-started/locally/
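
To check which CUDA and cuDNN builds the installed torch actually uses, and whether the GPU is visible, something like the snippet below works (standard torch APIs only; nothing here is specific to vLLM):

import torch

# CUDA version this torch build was compiled against
print(torch.version.cuda)
# cuDNN version bundled with this torch build
print(torch.backends.cudnn.version())
# Device name and compute capability (a Tesla P40 reports (6, 1))
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))

If those versions don't line up with what nvcc --version reports, reinstalling torch with the index URL from the page above (for example, pip install torch --index-url https://download.pytorch.org/whl/cu121 for CUDA 12.1) usually resolves this class of cuDNN error.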
