baruga committed
Commit 44172c8 · 1 Parent(s): 9a17fb3

Update README.md

Files changed (1)
  1. README.md +19 -2
README.md CHANGED
@@ -6,6 +6,23 @@ datasets:
  This repo contains a low-rank adapter for LLaMA-13b fit on the Stanford Alpaca dataset.

- It doesn't contain the foundation model itself, so it's MIT licensed!

- Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
+ ### How to use (8-bit)
+
+ This model can be easily loaded using `LlamaForCausalLM` together with `PeftModel`:
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
+
+ tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
+
+ # Load the base LLaMA-13b weights in 8-bit.
+ model = LlamaForCausalLM.from_pretrained(
+     "decapoda-research/llama-13b-hf",
+     load_in_8bit=True,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Apply the Alpaca LoRA adapter on top of the base model.
+ model = PeftModel.from_pretrained(
+     model,
+     "baruga/alpaca-lora-13b",
+     torch_dtype=torch.float16,
+ )
+ ```
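
Once loaded, the model can be prompted in the Alpaca instruction format. The sketch below follows the generation code in the tloen/alpaca-lora repository linked above; the prompt wording, sampling parameters, and `max_new_tokens` value are illustrative assumptions, not values specified by this README.

```python
# Sketch: run generation with the 8-bit model and LoRA adapter loaded above.
# The prompt template is the standard Alpaca instruction format; the
# generation settings here are assumptions, not specified by this repo.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    num_beams=4,
)

with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=generation_config,
        max_new_tokens=256,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```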