runtime error

Exit code: 1. Reason: s.

making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "app.py", line 2, in <module>
    from pair_diff_demo import ImageComp
  File "/code/pair_diff_demo.py", line 49, in <module>
    model = create_model('configs/pair_diff.yaml').cpu()
  File "/code/cldm/model.py", line 27, in create_model
    model = instantiate_from_config(config.model).cpu()
  File "/code/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/home/user/.pyenv/versions/3.8.15/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/code/cldm/cldm.py", line 446, in __init__
    super().__init__(control_stage_config=control_stage_config,
  File "/code/cldm/cldm.py", line 313, in __init__
    super().__init__(*args, **kwargs)
  File "/code/ldm/models/diffusion/ddpm.py", line 565, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/code/ldm/models/diffusion/ddpm.py", line 632, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/code/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/code/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/home/user/.pyenv/versions/3.8.15/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1785, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
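The failing call is `CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14')`, which downloads the tokenizer files from the Hugging Face Hub when they are not already cached; the OSError suggests the container could not fetch them. A minimal stdlib-only sketch to check whether the files are present in the local Hub cache before launching the app (`tokenizer_cached` is a hypothetical helper; the `models--{org}--{name}` directory name is an assumption based on the standard huggingface_hub cache layout, with the cache root taken from `HF_HOME` or defaulting to `~/.cache/huggingface`):

```python
import os
from pathlib import Path
from typing import Optional


def tokenizer_cached(repo_id: str, cache_dir: Optional[str] = None) -> bool:
    """Return True if a local Hub cache directory exists for repo_id.

    Checks only for the cached repo directory; it does not validate that
    every tokenizer file inside it is complete.
    """
    root = Path(cache_dir or os.environ.get(
        "HF_HOME", str(Path.home() / ".cache" / "huggingface")))
    # Hub caches repos under hub/models--{org}--{name}.
    repo_dir = root / "hub" / ("models--" + repo_id.replace("/", "--"))
    return repo_dir.is_dir()


if __name__ == "__main__":
    if not tokenizer_cached("openai/clip-vit-large-patch14"):
        print("Tokenizer not cached; the container needs network access "
              "or the files must be bundled into the image.")
```

If the check fails inside the container, pre-downloading the tokenizer at build time (or pointing `from_pretrained` at a local directory containing the tokenizer files) avoids the runtime fetch entirely.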
