Create README.md
---
license: apache-2.0
datasets:
- shibing624/alpaca-zh
language:
- zh
tags:
- LoRA
- LLaMA
- Alpaca
- PEFT
- int8
---

# Model Card for llama-7b-alpaca-zh-20k

<!-- Provide a quick summary of what the model is/does. -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

```python
import torch
from peft import PeftModel
from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer

# base_model: path or Hub ID of the base LLaMA-7B checkpoint
# lora_weights: path or Hub ID of this LoRA adapter
# Cap per-GPU memory so the 8-bit weights fit when sharded across devices.
max_memory = {i: "15GiB" for i in range(torch.cuda.device_count())}

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory=max_memory,
)
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
    max_memory=max_memory,
)
```
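
Once the adapter is loaded, generation works as with any other causal LM. The snippet below is a minimal inference sketch: the Alpaca-style prompt template, the example instruction, and the generation settings are assumptions rather than values taken from this repo's training code, so adjust them to match the prompt format actually used during fine-tuning.

```python
# Alpaca-style prompt template (an assumption; adapt to the template used in training).
instruction = "用一句话介绍一下北京。"  # "Introduce Beijing in one sentence."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Illustrative generation settings; tune temperature/top_p/num_beams as needed.
generation_config = GenerationConfig(temperature=0.1, top_p=0.75, num_beams=4)

model.eval()
with torch.no_grad():
    output = model.generate(
        **inputs,
        generation_config=generation_config,
        max_new_tokens=256,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```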