add 70B single turn inference script
README.md
CHANGED
@@ -32,6 +32,16 @@ torchrun --nproc-per-node=1 demos/single_turn_mm.py \
 # --llama_type : Choose from [llama, llama_adapter, llama_peft, llama_qformerv2, llama_qformerv2_peft]
 ```
 
+For the checkpoints located in the `finetune/sg/` directory, use the following commands:
+```bash
+# 70B single turn platypus
+torchrun --nproc-per-node=1 --master-port 29500 demos/single_turn.py \
+--llama_config <path-to-Llama-2-70b/params.json> \
+--tokenizer_path <path-to-tokenizer.model> \
+--pretrained_path <path-to-Llama-2-70b> <path-to-platypus_normBias_QF_70B/epoch3> \
+--quant --llama_type llama_peft
+```
+
 Make sure to replace placeholders like `<path-to-params.json>`, `<path-to-tokenizer.model>`, and `<stage1-of-lamaQformerv2_13b>` with the actual paths.
 
 Follow these steps to successfully run the checkpoints using the provided commands and flags. For more details, refer to the documentation in the LLaMA2-Accessory repository.
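
Since the torchrun invocation takes several path placeholders, it can help to collect them into shell variables in a small wrapper script. A minimal sketch follows; every path below is a hypothetical example, not a real checkpoint location, so substitute your own before use:

```shell
#!/bin/sh
# Hypothetical checkpoint locations -- replace with your actual paths.
LLAMA_DIR=/data/checkpoints/Llama-2-70b
TOKENIZER=/data/checkpoints/tokenizer.model
FINETUNED=/data/checkpoints/platypus_normBias_QF_70B/epoch3

# Assemble the single-turn inference command from the variables above.
CMD="torchrun --nproc-per-node=1 --master-port 29500 demos/single_turn.py \
--llama_config ${LLAMA_DIR}/params.json \
--tokenizer_path ${TOKENIZER} \
--pretrained_path ${LLAMA_DIR} ${FINETUNED} \
--quant --llama_type llama_peft"

# Print the command for inspection; run it with: eval "$CMD"
echo "$CMD"
```

Keeping the paths in variables makes it easy to switch between checkpoints (for example, a different fine-tuned epoch) without retyping the full flag list.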
|