bmarie4i committed on
Commit dffbc04 · 1 Parent(s): b6cbdd2

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED

@@ -51,7 +51,7 @@ The following code sample uses 4-bit quantization, you may load the model withou
 
 ```py
 from transformers import AutoTokenizer, AutoModelForCausalLM TrainingArguments, GenerationConfig
-model_name = "4i-ai/Spanish-Llama-2-7b"
+model_name = "4i-ai/Llama-2-7b-alpaca-es"
 
 
 #Tokenizer
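The hunk context mentions that the README's sample uses 4-bit quantization, but the quantization setup itself is outside the diff (and the context import line is missing a comma). A minimal loading sketch with the renamed checkpoint might look like the following; the specific `BitsAndBytesConfig` settings are assumptions, not values taken from the README:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_name = "4i-ai/Llama-2-7b-alpaca-es"

# Assumed 4-bit quantization settings; the README's exact config is not shown in this diff.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on available devices automatically
)
```

Loading the full 7B checkpoint requires the `bitsandbytes` package and a GPU with enough memory for the 4-bit weights; dropping `quantization_config` loads the model unquantized, as the hunk context notes is also possible.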