runtime error
Exit code: 1. Reason:

Downloading tokenizer_config.json: 100%|██████████| 698/698 [00:00<00:00, 1.37MB/s]
Downloading vocab.json: 100%|██████████| 779k/779k [00:00<00:00, 12.1MB/s]
Downloading merges.txt: 100%|██████████| 446k/446k [00:00<00:00, 44.6MB/s]
Downloading tokenizer.json: 100%|██████████| 3.39M/3.39M [00:00<00:00, 18.7MB/s]
Downloading special_tokens_map.json: 100%|██████████| 583/583 [00:00<00:00, 3.17MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 5, in <module>
    tokenizer = AutoTokenizer.from_pretrained("ahmed-7124/dgptAW")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 591, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1805, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1950, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 138, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 110, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum ModelWrapper at line 250319 column 3
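The `data did not match any variant of untagged enum ModelWrapper` exception is raised by the Rust deserializer inside the `tokenizers` library: the repo's `tokenizer.json` was most likely saved by a newer `tokenizers` release than the one installed in this runtime, so the older parser cannot match its schema. A minimal sketch of the usual remedy, assuming the environment is controlled by the Space's `requirements.txt`:

```shell
# Upgrade the libraries that parse tokenizer.json so they understand the
# newer serialization schema. In a Hugging Face Space, pin these versions
# in requirements.txt rather than running pip by hand.
pip install --upgrade transformers tokenizers
```

If upgrading is not an option, a commonly used workaround is to pass `use_fast=False` to `AutoTokenizer.from_pretrained(...)`: `transformers` then skips `tokenizer.json` entirely and rebuilds a pure-Python GPT-2 tokenizer from `vocab.json` and `merges.txt`. This sidesteps the parse error but does not fix the underlying version mismatch.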