Runtime error
Exit code: 1. Reason:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 944, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1736, in __getattribute__
    requires_backends(cls, cls._backends)
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1724, in requires_backends
    raise ImportError("".join(failed))
ImportError: LlamaTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment. Please note that you may need to restart your runtime after installation.