runtime error
Exit code: 1. Reason:
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 22.1MB/s]
nar_pretrain.yaml: 100%|██████████| 1.37k/1.37k [00:00<00:00, 8.19MB/s]
vocab.txt: 100%|██████████| 599/599 [00:00<00:00, 4.18MB/s]
Loading models...
Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    model_list = nar.load_model(device, "CapTTS")
  File "/home/user/app/capspeech/nar/generate.py", line 191, in load_model
    checkpoint = torch.load(model_path)['model']
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 1516, in load
    return _load(
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 2114, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.10/site-packages/torch/_weights_only_unpickler.py", line 532, in load
    self.append(self.persistent_load(pid))
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 2078, in persistent_load
    typed_storage = load_tensor(
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 2044, in load_tensor
    wrap_storage = restore_location(storage, location)
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 698, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 636, in _deserialize
    device = _validate_device(location, backend_name)
  File "/usr/local/lib/python3.10/site-packages/torch/serialization.py", line 605, in _validate_device
    raise RuntimeError(
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
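The traceback points at the torch.load call in load_model (capspeech/nar/generate.py, line 191): the checkpoint was saved from a GPU, and the Space is running on CPU-only hardware. Below is a minimal sketch of the fix the error message itself suggests, assuming model_path is the checkpoint path already resolved inside load_model; the helper name load_checkpoint is illustrative, not part of the repository.

```python
import torch

def load_checkpoint(model_path: str) -> dict:
    # map_location remaps tensors that were saved from a CUDA device onto
    # whatever device is actually available; on a CPU-only machine this
    # avoids the "Attempting to deserialize object on a CUDA device" error.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    checkpoint = torch.load(model_path, map_location=device)
    return checkpoint["model"]
```

If the Space will never have a GPU, passing map_location=torch.device('cpu') directly to the existing torch.load call is enough.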