Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).

Used QLoRA for fine-tuning. Trained for one epoch on a 24GB GPU (NVIDIA A10G) instance; training took ~19 hours.
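
For reference, a minimal QLoRA setup along these lines can be sketched with `transformers`, `peft`, and `bitsandbytes`. The LoRA rank, target modules, and other hyperparameters below are illustrative assumptions, not the exact configuration used for this model.

```python
# Sketch of a QLoRA setup (illustrative only; rank, target modules, and other
# hyperparameters are assumptions, not the exact values used for this model).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "TheBloke/Llama-2-7B-fp16"

# Load the 7B base model in 4-bit NF4 so it fits on a single 24GB GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are updated during training.
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# The ehartford/wizard_vicuna_70k_unfiltered conversations would then be
# formatted into the prompt style shown below and passed to a standard
# causal-LM trainer for one epoch (~19 hours on an NVIDIA A10G).
```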

The version here is the fp16 HuggingFace model.
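
For example, the fp16 weights can be loaded with `transformers` roughly as follows (the model id is assumed to be this repo, `georgesung/llama2_7b_chat_uncensored`):

```python
# Load the fp16 HuggingFace weights (model id assumed from this repo's name).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/llama2_7b_chat_uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Write a short poem about the ocean."  # wrap in the prompt style shown below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```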

## GGML & GPTQ versions

Thanks to [TheBloke](https://huggingface.co/TheBloke) for creating the GGML and GPTQ versions (a brief loading example follows the links):
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ
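
As a rough usage sketch, the GGML files can be run on CPU with `llama-cpp-python`. The quantization filename below is an assumed example; use whichever file you download from the GGML repo, and note that older `llama-cpp-python` releases are needed for GGML, since newer ones expect GGUF.

```python
# Run a GGML quantization on CPU via llama-cpp-python (filename is an assumed
# example; use the actual file downloaded from the -GGML repo above).
from llama_cpp import Llama

llm = Llama(model_path="llama2_7b_chat_uncensored.ggmlv3.q4_K_M.bin", n_ctx=2048)
output = llm(
    "Write a short poem about the ocean.",  # wrap in the prompt style shown below
    max_tokens=200,
)
print(output["choices"][0]["text"])
```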

# Prompt style

The model was trained with the following prompt style:
```