gpt4-alpaca-lora-7b-llm_tuner / tokenizer_config.json
kallaballa
config for llm-tuner
{ "model_type": "lama-for-causal-lm", "tokenizer_class": "LlamaTokenizerFast", "vocab_size": 50257, "bos_token_id": 50256, "eos_token_id": 50256, "unk_token_id": 0, "pad_token_id": 50256, "do_lower_case": false }