---
license: other
datasets:
  - ehartford/wizard_vicuna_70k_unfiltered
---

# Overview

Fine-tuned Llama-2 7B with the uncensored/unfiltered Wizard-Vicuna conversation dataset ehartford/wizard_vicuna_70k_unfiltered, using QLoRA. Training ran for one epoch on a single 24 GB GPU (NVIDIA A10G) and took ~19 hours.
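For context, QLoRA loads the base model quantized to 4-bit and trains only small low-rank adapter matrices on top of it. Below is a minimal sketch using the Hugging Face transformers/peft/bitsandbytes stack; the hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the values used for this model.

```python
# Minimal QLoRA setup sketch (transformers + peft + bitsandbytes).
# Hyperparameters here (r, lora_alpha, target_modules) are illustrative
# assumptions, not necessarily what this model was trained with.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"

# Load the base model quantized to 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable
```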

# Prompt style

The model was trained with the following prompt style:

```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
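At inference time, prompts should follow the same template and end with an open `### RESPONSE:` turn so the model completes the reply. A minimal generation sketch with transformers (the model ID is assumed from this repo, and the generation settings are illustrative):

```python
# Sketch: query the model using the trained prompt template.
# The model ID is an assumption based on this repo; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/llama2_7b_chat_uncensored"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in the trained style; end with an open RESPONSE turn.
prompt = "### HUMAN:\nWhat is QLoRA?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```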

# Training code

Code used to train the model is available at https://github.com/georgesung/llm_qlora.

To reproduce the results:

```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_chat_uncensored.yaml
```
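A QLoRA run produces a LoRA adapter rather than a full set of model weights; a common follow-up step is merging the adapter into the base model before publishing it. A sketch using peft (the adapter path below is hypothetical; check the training script for its actual output location):

```python
# Sketch: merge a trained LoRA adapter into the base model so it can be
# saved and served as a standalone checkpoint. The adapter path is a
# hypothetical placeholder, not the training script's documented output.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "output/adapter")  # hypothetical path
merged = model.merge_and_unload()  # bake adapter weights into the base weights
merged.save_pretrained("llama2_7b_chat_uncensored_merged")
```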