Tokenizer

#2
by papaponcho - opened

Hi, where can I find the tokenizer model?

I get this error when trying to load the model.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        self.model_path = "local_models/dolphin-2.8-mistral-7b-v02-bnb-4bit"
        self.model = AutoModelForCausalLM.from_pretrained(self.model_path,
                                                          device_map="cuda",
                                                          trust_remote_code=False)
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_path, use_fast=True)
OSError: Can't load tokenizer for 'local_models/dolphin-2.8-mistral-7b-v02-bnb-4bit'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'local_models/dolphin-2.8-mistral-7b-v02-bnb-4bit' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
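This error usually means the local directory contains the model weights but not the tokenizer files that a `LlamaTokenizerFast` needs (typically `tokenizer.json`, `tokenizer_config.json`, and `special_tokens_map.json`). A minimal sketch to check which of those files are missing — the file list is an assumption based on what `AutoTokenizer` commonly looks for, not a confirmed fix:

```python
# Sketch: report which common tokenizer files are absent from a local
# model directory. The expected-file list is an assumption; the exact
# set can vary between repositories.
from pathlib import Path

def missing_tokenizer_files(model_dir: str) -> list[str]:
    """Return the tokenizer-related files not present in model_dir."""
    expected = [
        "tokenizer.json",
        "tokenizer_config.json",
        "special_tokens_map.json",
    ]
    path = Path(model_dir)
    return [name for name in expected if not (path / name).is_file()]
```

If the returned list is non-empty, copying those files from the original (non-quantized) model repository into the local directory is one way to resolve the error.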
