Update README.md

README.md

I also included a model-agnostic `export_hf_checkpoint.py` script, which you can use to merge your LoRA back into a new full model. Once you do this, you no longer need the patched version of the model code. That said, if you want to be able to load the model in 8-bit, you will still need it. The usage is `python export_hf_checkpoint.py <source> <lora> <dest>`.
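
For reference, here is a minimal sketch of what such a merge can look like, assuming the LoRA was trained with PEFT (an illustration, not the actual contents of `export_hf_checkpoint.py`):

```
# Hypothetical sketch: load the base model, attach the LoRA, fold it in, save.
# Assumes a PEFT-format adapter; the real script may differ.
import sys

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

source, lora, dest = sys.argv[1], sys.argv[2], sys.argv[3]

base = AutoModelForCausalLM.from_pretrained(
    source, torch_dtype=torch.float16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, lora)
merged = model.merge_and_unload()  # bakes the LoRA deltas into the base weights
merged.save_pretrained(dest)
AutoTokenizer.from_pretrained(source, trust_remote_code=True).save_pretrained(dest)
```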

If you would like to use this with text-generation-webui, apply the following patch:

```
--- a/modules/training.py
+++ b/modules/training.py
@@ -28,12 +28,13 @@ try:

}

WANT_INTERRUPT = False
```
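
To apply it, save the diff to a file (hypothetical name `training.patch` here) and run, from the root of the text-generation-webui checkout:

```
git apply training.patch
```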

You will need to run the webui with these options:

```
python server.py --model mosaicml_mpt-7b-instruct --trust-remote-code --load-in-8bit
```

You may also need to patch `bitsandbytes/nn/modules.py` to prevent running out of VRAM when saving the LoRA:

```
--- a/modules.py
+++ b/modules.py
@@ -259,13 +259,13 @@

+ self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

super()._save_to_state_dict(destination, prefix, keep_vars)
```
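
For context, a simplified sketch of the save path this patch touches; the control flow here is an assumption based on bitsandbytes around 0.37, not a verbatim copy:

```
# Illustrative sketch of Linear8bitLt._save_to_state_dict with the patched line.
# undo_layout materializes a full-size de-tiled weight tensor; calling it on
# CPU copies of CxB and tile_indices allocates that tensor in system RAM
# instead of VRAM, which is what avoids the out-of-memory on save.
from bitsandbytes.autograd._functions import undo_layout  # assumed import path

def _save_to_state_dict(self, destination, prefix, keep_vars):
    reorder_layout = self.state.CxB is not None  # weights in GPU tile layout (simplified check)
    if reorder_layout:
        weight_clone = self.weight.data.clone()  # keep the packed copy around
        # patched line: undo the tiled layout on CPU rather than on the GPU
        self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())
    super()._save_to_state_dict(destination, prefix, keep_vars)
    if reorder_layout:
        self.weight.data = weight_clone  # restore the packed weights afterwards
```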

(It resides in `miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/nn/modules.py` for me.)
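
If your environment differs, you can locate the file with:

```
python -c "import bitsandbytes.nn.modules as m; print(m.__file__)"
```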

The alterations are based on the source code for the llama model from HF Transformers.