## Model

llava-v1.5-7b-xtuner is a LLaVA model fine-tuned from [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [XTuner](https://github.com/InternLM/xtuner).

## Quickstart
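The diff context preserves only the first line of the chat command that precedes the training steps (`xtuner chat lmsys/vicuna-7b-v1.5 \`). As a sketch, the continuation below is an assumption based on XTuner's chat interface, not recovered from this README; the flag names and the placeholder image path are hypothetical and should be checked against `xtuner chat --help`:

```shell
# Sketch of an image-chat invocation with the fine-tuned model.
# Flag names (--visual-encoder, --llava, --prompt-template, --image) and the
# image path are assumptions; verify with `xtuner chat --help`.
xtuner chat lmsys/vicuna-7b-v1.5 \
  --visual-encoder openai/clip-vit-large-patch14-336 \
  --llava xtuner/llava-v1.5-7b-xtuner \
  --prompt-template vicuna \
  --image example_image.jpg
```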
1. Alignment module pretraining (saved by default in `./work_dirs/`)

   ```shell
   NPROC_PER_NODE=8 xtuner train llava_vicuna_7b_v15_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2
   ```
2. Instruction following fine-tuning (saved by default in `./work_dirs/`)

   ```shell
   NPROC_PER_NODE=8 xtuner train llava_vicuna_7b_v15_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune --deepspeed deepspeed_zero2
   ```
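The two stages above save raw `.pth` checkpoints under `./work_dirs/`. As a sketch, assuming XTuner's `xtuner convert pth_to_hf` subcommand is available (this step is not shown in the README excerpt), the fine-tuning checkpoint could be converted to a HuggingFace-format directory for downstream use; the checkpoint filename and output path below are hypothetical:

```shell
# Hypothetical conversion step (subcommand and argument order are assumptions;
# check `xtuner convert --help`). Takes the fine-tuning config, the saved
# checkpoint, and an output directory.
xtuner convert pth_to_hf \
  llava_vicuna_7b_v15_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune \
  ./work_dirs/llava_vicuna_7b_v15_qlora_clip_vit_large_p14_336_lora_e1_gpu8_finetune/epoch_1.pth \
  ./llava-v1.5-7b-xtuner-hf
```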