Update README.md
README.md
You can use the MentaLLaMA-33B-lora model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:

Since our model is based on the Vicuna-33B foundation model, you need to first download the Vicuna-33B model [here](https://huggingface.co/lmsys/vicuna-33b-v1.3) and put it under the `./vicuna-33B` dir. Then download the MentaLLaMA-33B-lora weights and put them under the `./MentaLLaMA-33B-lora` dir.
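If you prefer to script these downloads, the sketch below uses the `huggingface_hub` library. The adapter repo id shown is an assumption; replace it with the actual location of the MentaLLaMA-33B-lora weights.

```python
from huggingface_hub import snapshot_download

# Fetch the Vicuna-33B base model into ./vicuna-33B
snapshot_download(repo_id="lmsys/vicuna-33b-v1.3", local_dir="./vicuna-33B")

# Fetch the LoRA weights into ./MentaLLaMA-33B-lora
# (this repo id is an assumption; substitute the actual adapter repo)
snapshot_download(repo_id="klyang/MentaLLaMA-33B-lora", local_dir="./MentaLLaMA-33B-lora")
```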
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM reads the adapter config, loads the Vicuna-33B
# base model it points to, and attaches the LoRA weights on top of it
peft_model = AutoPeftModelForCausalLM.from_pretrained("./MentaLLaMA-33B-lora")
tokenizer = AutoTokenizer.from_pretrained("./MentaLLaMA-33B-lora")
```
In this example, `AutoPeftModelForCausalLM` automatically loads the base model and the LoRA weights from the downloaded dir, and `AutoTokenizer` loads the tokenizer.
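Once loaded, the model can be used like any other Transformers causal LM. The following is a minimal inference sketch that uses the GPU if it's available; the prompt is only illustrative, not a prescribed MentaLLaMA template.

```python
import torch

# Use the GPU if it's available
device = "cuda" if torch.cuda.is_available() else "cpu"
peft_model = peft_model.to(device)

# Illustrative prompt only; see the project docs for the actual instruction templates
prompt = "Consider this post: \"I can't sleep and nothing feels worth doing.\" Question: What is the poster's mental state?"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = peft_model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, a 33B model is too large for full-precision single-GPU use, so you will likely want to pass `torch_dtype=torch.float16` and/or `device_map="auto"` to `from_pretrained`. If you plan to serve the model, `peft_model.merge_and_unload()` folds the LoRA weights into the base model so inference no longer pays the adapter overhead.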
## License