void0721 committed
Commit a66ff9f · 1 Parent(s): b1ad2db

Update README.md

Files changed (1)
  1. README.md +11 -12
README.md CHANGED
@@ -50,19 +50,8 @@ ckpt_path/
  ```
 
  ### Host Local Demo
- Execute the following command for demo hosting:
- ``` bash
- cd LLaMA2-Accessory/accessory
- python demos/multi_turn_mm.py --n_gpus=2 \
- --tokenizer_path=/path/to/tokenizer.model --llama_type=llama_ens \
- --pretrained_path ckpt_path/
- ```
- Explanation of each argument:
+ Please follow the instructions [here](https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX#host-local-demo) to host the local demo and complete the model setup.
 
- + `--n_gpus`: Number of GPUs to use. Utilizing more GPUs will alleviate memory usage on each GPU through model parallelism. Currently, this argument should be set to either 1 or 2, as support for *consolidated ckpt num < gpu num* is not yet available.
- + `--tokenizer_path`: Path to the official LLaMA2 tokenizer. Note that the tokenizer file is the same for both LLaMA and LLaMA2. You may download it from [here](https://huggingface.co/Alpha-VLLM/LLaMA2-Accessory/blob/main/config/tokenizer.model).
- + `--llama_type`: The model architecture of SPHINX is defined in [accessory/model/LLM/llama_ens.py](../accessory/model/LLM/llama_ens.py), and specifying `--llama_type=llama_ens` tells the demo program to use this architecture.
- + `--pretrained_path`: The path to the pre-trained checkpoint.
 
 
  ## Result
@@ -105,3 +94,13 @@ Our evaluation encompasses both **quantitative metrics** and **qualitative assessments**
  * Results of the SPHINX model and baseline models on REC benchmarks are reported in Table 4.
  * SPHINX exhibits robust performance in visual grounding tasks such as RefCOCO, RefCOCO+, and RefCOCOg, **surpassing other vision-language generalist models**.
  * Notably, SPHINX outperforms the specialist model G-DINO-L by **more than 1.54%** in accuracy across all tasks within RefCOCO/RefCOCO+/RefCOCOg.
+
+
+ ## Frequently Asked Questions (FAQ)
+
+ ❓ Encountering issues or have further questions? Find answers to common inquiries [here](https://llama2-accessory.readthedocs.io/en/latest/faq.html). We're here to assist you!
+
+ ## License
+
+ Llama 2 is licensed under the [LLAMA 2 Community License](LICENSE_llama2), Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+