OSError: gpt2 does not appear to have a file named config.json. Checkout 'https://huggingface.co/gpt2/None' for available files.

#59
by MorphzZ - opened

tokenizer = AutoTokenizer.from_pretrained("gpt2")
Traceback (most recent call last):
File "", line 1, in
File "/home/ubuntu/morpheus/.env/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 659, in from_pretrained
config = AutoConfig.from_pretrained(
File "/home/ubuntu/morpheus/.env/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 953, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/morpheus/.env/lib/python3.8/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/morpheus/.env/lib/python3.8/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
resolved_config_file = cached_file(
File "/home/ubuntu/morpheus/.env/lib/python3.8/site-packages/transformers/utils/hub.py", line 388, in cached_file
raise EnvironmentError(
OSError: gpt2 does not appear to have a file named config.json. Checkout 'https://huggingface.co/gpt2/None' for available files.
import transformers
transformers.__version__
'4.31.0.dev0'

pip install --upgrade transformers

I had the exact same issue. I had to remove the ~/.cache/huggingface directory and then it worked for me.
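
For anyone wondering how to do that: a minimal sketch in Python, assuming the default cache location under ~/.cache/huggingface (the equivalent shell command is rm -rf ~/.cache/huggingface).

import os
import shutil

# Default Hugging Face cache location; adjust if HF_HOME or HF_HUB_CACHE is set.
cache_dir = os.path.expanduser("~/.cache/huggingface")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # forces a fresh download on the next from_pretrained call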

I'm still facing a similar issue, running on an HPC cluster where the compute nodes are offline. The error pops up when I submit the job via a bash script. If I launch a Jupyter notebook and run the code, it works perfectly fine, but the Jupyter server will not utilize all of the available compute nodes, so training will take too long.

OpenAI community org

@thaboe01 you might be interested in the offline mode for transformers if you work in an airtight network: https://huggingface.co/docs/transformers/v4.38.2/en/installation#offline-mode
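
For anyone else in this situation, here is a rough sketch of that offline workflow, assuming the model is downloaded once on a node with internet access and that /shared/models/gpt2 stands in for a path on your cluster's shared storage:

# Step 1 (on a machine with internet access): download the snapshot once.
# from huggingface_hub import snapshot_download
# snapshot_download("openai-community/gpt2", local_dir="/shared/models/gpt2")

# Step 2 (inside the batch job on the offline compute node): enable offline
# mode before importing transformers, then load from the local copy.
import os
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("/shared/models/gpt2")
model = GPT2LMHeadModel.from_pretrained("/shared/models/gpt2")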

@lysandre I have followed the instructions from the Hugging Face docs but I keep getting the error:
OSError: Models/gpt2 does not appear to have a file named config.json. Checkout 'https://huggingface.co/Models/gpt2/main' for available files.
But the config.json file is definitely there in the mentioned directory.
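
One thing worth ruling out (just a guess, given the batch-job setup): a relative path like "Models/gpt2" is resolved against the job's working directory, which is often not the directory the script lives in when it is submitted through a scheduler. A quick check plus an absolute-path workaround:

import os
from transformers import AutoTokenizer

print(os.getcwd())  # where the batch job actually runs

model_dir = os.path.abspath("Models/gpt2")  # or hard-code the full path to the model folder
print(os.path.isfile(os.path.join(model_dir, "config.json")))  # should print True

tokenizer = AutoTokenizer.from_pretrained(model_dir)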

Hi @pitehu, how did you remove the ~/.cache/huggingface directory?

OpenAI community org

@thaboe01 do you mind sharing your code? The presence of "Models/" is weird; it should be looking for "gpt2".

model_hf = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

For me, the issue was that I was running my script inside the 'gpt2' folder, something like this: /gpt2/.
As other people suggested, I tried rm ~/.cache/huggingface but it didn't work for me.

Solution: I just moved my script out of the 'gpt2' folder and it worked.
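
A likely explanation (hedging, since the exact folder layout isn't shown): from_pretrained treats the string as a local path when a matching directory exists relative to the current working directory, so a nearby 'gpt2' folder without a config.json can shadow the Hub model. A quick way to check for the clash, plus the namespaced repo id that avoids it:

import os
from transformers import AutoTokenizer

print(os.getcwd())
print(os.path.isdir("gpt2"))  # True means a local 'gpt2' folder shadows the Hub id

# The fully-qualified repo id is less likely to collide with a local folder name.
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")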
