datasets:
- grimulkan/LimaRP-augmented
- KaraKaraWitch/PIPPA-ShareGPT-formatted
---

This is an ExLlamaV2 quantized model in 4.7bpw of [mpasila/Llama-3-Instruct-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-Instruct-LiPPA-8B), using the default calibration dataset with 8192 context length.
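
If you want to run this quant from Python instead of a front end, a minimal loading-and-generation sketch with the exllamav2 library is below. The exact classes vary between exllamav2 releases, and the local model path and sampler settings are placeholder assumptions, so treat it as an outline rather than the canonical loader.

```python
# Minimal exllamav2 inference sketch (API varies by version; path is a placeholder).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Llama-3-Instruct-LiPPA-8B-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # stream weights in, splitting across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampler values
settings.top_p = 0.9

print(generator.generate_simple("Hello!", settings, num_tokens=128))
```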

# Original Model card:

This is a merge of [mpasila/Llama-3-LiPPA-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-LoRA-8B) into its base model.
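
Merging a LoRA into its base is a standard PEFT operation; a rough sketch of how such a merge can be reproduced follows (the dtype and output directory are illustrative, not the author's exact procedure):

```python
# Sketch: merge the LoRA adapter into the base model with PEFT.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)
merged = PeftModel.from_pretrained(base, "mpasila/Llama-3-LiPPA-LoRA-8B").merge_and_unload()
merged.save_pretrained("Llama-3-Instruct-LiPPA-8B-merged")  # placeholder output dir
```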
The LoRA was trained in 4-bit with 8k context for 1 epoch, using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/) as the base model.

Dataset used is [mpasila/LimaRP-PIPPA-Mix-8K-Context](https://huggingface.co/datasets/mpasila/LimaRP-PIPPA-Mix-8K-Context).
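
For context, a 4-bit (QLoRA) Unsloth run along these lines might look like the sketch below. The LoRA rank, target modules, and trainer hyperparameters are illustrative assumptions, not the actual training config.

```python
# Illustrative Unsloth QLoRA setup (hyperparameters are guesses, not the real config).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    max_seq_length=8192,   # 8k context, as stated above
    load_in_4bit=True,     # 4-bit base weights (QLoRA)
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,         # assumed
)

dataset = load_dataset("mpasila/LimaRP-PIPPA-Mix-8K-Context", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a preformatted text column
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # illustrative
        num_train_epochs=1,             # 1 epoch, as stated above
        learning_rate=2e-4,             # illustrative
        output_dir="outputs",
    ),
)
trainer.train()
```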
This has been trained on the instruct model, not the base model. The variant trained on the base model with the same dataset is here: [mpasila/Llama-3-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-8B).

This also seems to work fairly well for chatting.

### Prompt format: Llama 3 Instruct

Unsloth changed "assistant" to "gpt" and "user" to "human".
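
For reference, the standard Llama 3 Instruct format looks like this (the `{...}` fields are placeholders; the system turn is optional):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```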