Commit a677387 by cekal (parent: eba794e)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -14,11 +14,11 @@ datasets:
 inference: false
 ---
 
-This is the Python model code for MPT-7B patched so that it can be used with a LoRA. Note that while I tested that it works and I get reasonable results out, it is very possible that the model isn't being trained correctly. The model code specifically says that left padding is not supported, but I forcibly did so and got decent results.
+This is MPT-7B patched so that it can be used with a LoRA. Note that while I tested that it works and I get reasonable results out, it is very possible that the model isn't being trained correctly. The model code specifically says that left padding is not supported, but I forcibly did so and got decent results.
 
 Note that when using LoRA, there is a strange quirk that prevents me from causing generation with an empty prompt.
 
-I also included a model-agnostic export_hf_checkpoint.py script, which you can use to merge your lora back into a new full model. Once you do this, you do not need to use the patched version of the model code anymore. That being said, if you want to be able to load the model in 8bit you will still need it. The usage is python export_hf_checkpoint.py <source> <lora> <dest>.
+I also included a model-agnostic export_hf_checkpoint.py script, which you can use to merge your lora back into a new full model. Once you do this, you do not need to use the patched version of the model code anymore. That being said, if you want to be able to load the model in 8bit you will still need it. The usage is `python export_hf_checkpoint.py <source> <lora> <dest>`.
 
 If you would like to use this with text-generation-webui, apply the following patch:
 ```
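The README change above mentions merging a LoRA back into a new full model with export_hf_checkpoint.py. As a rough sketch of what such a merge does mathematically (this is not the actual script; the dimensions, rank, and alpha below are hypothetical, and NumPy stands in for the real model weights), folding the low-rank update into the base weight gives the same outputs without the adapter attached:

```python
# Illustrative only: "merging" a LoRA means folding the low-rank update
# into the frozen base weight, W_merged = W + (alpha / r) * B @ A.
# After merging, a plain forward pass reproduces the adapted model.
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16              # hidden size, LoRA rank, LoRA alpha (hypothetical)
W = rng.standard_normal((d, d))     # frozen base weight
A = rng.standard_normal((r, d)) * 0.01  # LoRA "down" projection
B = rng.standard_normal((d, r)) * 0.01  # LoRA "up" projection
scale = alpha / r

x = rng.standard_normal(d)
y_adapter = W @ x + scale * (B @ (A @ x))  # forward pass with adapter attached
W_merged = W + scale * (B @ A)             # fold the update into the weight
y_merged = W_merged @ x                    # plain forward pass, no adapter

assert np.allclose(y_adapter, y_merged)
```

This is why, once merged, the patched model code is no longer required for plain inference: the saved checkpoint is just ordinary full-rank weights.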