Update README.md
README.md CHANGED
@@ -3,6 +3,7 @@ library_name: peft
 license: llama2
 datasets:
 - ehartford/dolphin
+- garage-bAInd/Open-Platypus
 tags:
 - llama-2
 inference: false
@@ -11,7 +12,8 @@ pipeline_tag: text-generation
 
 # llama-2-7b-dolphin 🦙🐬
 
-This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first …
+This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on a single 1x A100 (40 GB SXM) for roughly 1.3 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
+
 
 * Model license: Llama 2 Community License Agreement
 * Basic usage: [notebook](assets/basic_inference_llama_2_7b_dolphin.ipynb)
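For context on what the "parameter-efficient QLoRA finetuning" mentioned in the updated description typically involves, here is a minimal sketch using `peft` and `bitsandbytes`. The LoRA rank, alpha, dropout, and target modules below are illustrative assumptions; the card does not state the actual hyperparameters used.

```python
# Illustrative QLoRA setup; hyperparameters are assumptions, not the card's actual values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Low-rank adapters on the attention projections (the "LoRA" part)
lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,                       # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training only the low-rank adapters over a 4-bit base is what makes a 7B finetune fit on a single 40 GB A100 in the runtime the card reports.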
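The basic-usage notebook referenced above is not part of this diff; the following is a hedged sketch of how a PEFT adapter like this one can be loaded for inference with `AutoPeftModelForCausalLM`. The adapter repo id and the prompt format are placeholders, not taken from the card.

```python
# Hypothetical inference example; "<user>/llama-2-7b-dolphin-peft" is a placeholder repo id.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "<user>/llama-2-7b-dolphin-peft"  # replace with the actual adapter repo id

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "You are a helpful assistant.\n\nUser: Explain what a dolphin is in one sentence.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```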