Commit 94d319d
1 Parent(s): 34983b9
Update README.md

README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: gpl-3.0
 tags:
 - text2text-generation
 pipeline_tag: text2text-generation
@@ -51,12 +51,37 @@ ff291fcfa4e0048ca4ff262312faad83 ./tokenizer_config.json.ef7ef410b9b909949e96f1
 39ec1b33fbf9a0934a8ae0f9a24c7163 ./tokenizer.model.9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347.enc
 ```
 
-2. Decrypt the files using https://github.com/LianjiaTech/BELLE/tree/main/models
+2. Decrypt the files using the scripts in https://github.com/LianjiaTech/BELLE/tree/main/models
+
+You can use the following command in Bash.
+Please replace "/path/to_encrypted" with the path where you stored your encrypted files,
+replace "/path/to_original_llama_7B" with the path where you stored your original LLaMA 7B files,
+and replace "/path/to_finetuned_model" with the path where you want to save your final fine-tuned model.
+
+```bash
+mkdir /path/to_finetuned_model
+for f in "/path/to_encrypted"/*; \
+do if [ -f "$f" ]; then \
+python3 decrypt.py "$f" "/path/to_original_llama_7B/consolidated.00.pth" "/path/to_finetuned_model/"; \
+fi; \
+done
+```
+
+After executing the command above, you will obtain the following files.
+
 ```
-
+./config.json
+./generation_config.json
+./pytorch_model.bin
+./special_tokens_map.json
+./tokenizer_config.json
+./tokenizer.model
 ```
 
 3. Check md5sum
+
+You can verify the integrity of these files by checking their MD5 checksums to ensure they were recovered completely.
+Here are the MD5 checksums for the relevant files:
 ```
 md5sum ./*
 32490e7229fb82c643e3a7b8d04a6c4b ./config.json
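If `md5sum` is not available on your system, the same verification can be done in Python. This is a minimal sketch, not part of the repository: it assumes the decrypted files live in `/path/to_finetuned_model` as above, and only the `config.json` hash visible in this diff is filled in; add the remaining entries from the README's checksum list.

```python
import hashlib
from pathlib import Path

# Expected MD5 checksums; only config.json appears in this diff excerpt.
# Fill in the rest from the README's checksum list.
EXPECTED = {
    "config.json": "32490e7229fb82c643e3a7b8d04a6c4b",
}

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

model_dir = Path("/path/to_finetuned_model")  # path used in the README
for name, expected in EXPECTED.items():
    actual = md5_of(model_dir / name)
    print(f"{name}: {actual} {'OK' if actual == expected else 'MISMATCH'}")
```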
@@ -83,7 +108,7 @@ After you decrypt the files, BELLE-LLAMA-7B-0.6M can be easily loaded with Llama
 from transformers import LlamaForCausalLM, AutoTokenizer
 import torch
 
-ckpt = '
+ckpt = '/path/to_finetuned_model/'
 device = torch.device('cuda')
 model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
 tokenizer = AutoTokenizer.from_pretrained(ckpt)
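Once the checkpoint loads, inference follows the standard `transformers` generate API. Here is a minimal usage sketch, assuming the checkpoint path from the diff above; the "Human:/Assistant:" prompt template is an assumption for illustration, not something this README specifies (consult the BELLE repository for the exact format).

```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

ckpt = '/path/to_finetuned_model/'  # directory produced by the decryption step
model = LlamaForCausalLM.from_pretrained(ckpt, device_map='auto', low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

# The prompt template below is an assumption, not confirmed by this README;
# consult https://github.com/LianjiaTech/BELLE for the exact format.
prompt = "Human: Write a short poem about the sea.\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```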