SebastianBodza committed
Commit 6831e11 · 1 Parent(s): 3d4e8d1

Update README.md


Typing error: low_cup_mem_usage instead of low_cpu_mem_usage

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ The first trick would be to load the model with the specific argument below to l
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")
-model = AutoModelForCausalLM.from_pretrained("Cedille/de-anna", low_cup_mem_usage=True)
+model = AutoModelForCausalLM.from_pretrained("Cedille/de-anna", low_cpu_mem_usage=True)
 ```
 
 We are planning on adding an fp16 branch soon. Combined with the lower memory loading above, loading could be done on 12.1GB of RAM.
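For context, the fixed argument is the Transformers `low_cpu_mem_usage` flag, which avoids materializing a second full copy of the weights while the checkpoint is loaded. A minimal sketch of how the README's loading step could be wrapped is below; the helper names (`build_load_kwargs`, `load_anna`) and the fp16 toggle are illustrative assumptions for this sketch, not part of the Cedille/de-anna repository.

```python
# Illustrative wrapper around the README's loading snippet.
# The helper names and the half-precision toggle are assumptions
# made for this sketch, not code from the Cedille/de-anna repo.

def build_load_kwargs(half_precision: bool = False) -> dict:
    """Keyword arguments for AutoModelForCausalLM.from_pretrained.

    low_cpu_mem_usage=True keeps peak RAM close to one copy of the
    weights instead of two during state-dict loading.
    """
    kwargs = {"low_cpu_mem_usage": True}
    if half_precision:
        # In practice you would pass torch.float16 here; a plain string
        # stands in so this sketch stays runnable without torch installed.
        kwargs["torch_dtype"] = "float16"
    return kwargs


def load_anna(half_precision: bool = False):
    """Load tokenizer and model as in the README (requires transformers)."""
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("Cedille/de-anna")
    model = AutoModelForCausalLM.from_pretrained(
        "Cedille/de-anna", **build_load_kwargs(half_precision)
    )
    return tokenizer, model
```

With the planned fp16 branch, combining both options is what brings the loading footprint down to the ~12.1 GB of RAM mentioned above.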