---
datasets:
- grimulkan/LimaRP-augmented
- KaraKaraWitch/PIPPA-ShareGPT-formatted
---

This is an ExLlamaV2 quantized model in 4.7bpw of [mpasila/Llama-3-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-8B) using the default calibration dataset with 8192 context length.
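Loading an ExLlamaV2 quant generally follows the library's standard loading pattern. A minimal sketch, assuming a recent `exllamav2` release, a GPU, and a local download of the weights (the `model_dir` path is a placeholder, and class/method names can differ between library versions):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "Llama-3-LiPPA-8B-exl2-4.7bpw"  # assumed local path to this repo's files

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 8192  # matches the context length the quant was made with

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the model across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=64))
```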
# Original Model card:

This is a merge of [mpasila/Llama-3-LiPPA-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-LoRA-8B).

The LoRA was trained in 4-bit with 8k context using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B/) as the base model for 1 epoch.

The dataset used is [mpasila/LimaRP-PIPPA-Mix-8K-Context](https://huggingface.co/datasets/mpasila/LimaRP-PIPPA-Mix-8K-Context), which was made using [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented) and [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).
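The actual mixing script is not published; as a rough illustration, combining two ShareGPT-style datasets and dropping conversations that would overflow an 8k context might look like the sketch below. The length cutoff and the 4-characters-per-token heuristic are assumptions for illustration, not the dataset's real recipe:

```python
def mix_sharegpt(a, b, max_tokens=8192, chars_per_token=4):
    """Concatenate two ShareGPT-style datasets (lists of
    {"conversations": [{"from": ..., "value": ...}, ...]}) and keep only
    conversations that roughly fit within the context window."""
    mixed = []
    for sample in a + b:
        n_chars = sum(len(turn["value"]) for turn in sample["conversations"])
        if n_chars // chars_per_token <= max_tokens:
            mixed.append(sample)
    return mixed

# Toy stand-ins for the two source datasets (hypothetical contents)
limarp = [{"conversations": [{"from": "human", "value": "Hi"}]}]
pippa = [{"conversations": [{"from": "gpt", "value": "x" * 100_000}]}]
print(len(mix_sharegpt(limarp, pippa)))  # → 1, the 100k-char sample is filtered out
```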
This was trained on the base model rather than the instruct model. The version trained on the instruct model with the same dataset is here: [mpasila/Llama-3-Instruct-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-Instruct-LiPPA-8B)

From quick testing it appears to work fairly well for chatting.

### Prompt format: Llama 3 Instruct
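The Llama 3 Instruct format wraps each turn in header and end-of-turn tokens. A minimal sketch of assembling a single-turn prompt in this format (the helper function is illustrative, not from the model repo):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 Instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Hi!")
print(prompt)
```

The prompt ends with an open assistant header, so generation continues as the assistant's reply until it emits `<|eot_id|>`.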