---
license: apache-2.0
datasets:
  - georgesung/wizard_vicuna_70k_unfiltered
base_model: OpenLLaMA-7B
---

# Overview

This model is OpenLLaMA-7B fine-tuned with QLoRA on an uncensored/unfiltered Wizard-Vicuna conversation dataset (originally from ehartford/wizard_vicuna_70k_unfiltered). Training ran for one epoch on a single 24 GB NVIDIA A10G GPU and took roughly 18 hours.
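
As a rough illustration of the QLoRA recipe described above, here is a minimal sketch of loading a base model in 4-bit and attaching LoRA adapters with `peft`. The base model repo id and all hyperparameters (rank, alpha, target modules, dropout) are illustrative assumptions, not the exact values used to train this model.

```python
# Minimal QLoRA setup sketch -- hyperparameters are illustrative assumptions,
# not the exact values used to train this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "openlm-research/open_llama_7b"  # assumed HF id for OpenLLaMA-7B

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                   # LoRA rank (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters are trainable
```

Keeping the base weights frozen in 4-bit while training small LoRA adapters is what makes a 7B fine-tune fit on a single 24 GB GPU.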

# Prompt style

The model was trained with the following prompt style:

```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```
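
To query the model with this prompt style, something like the following sketch should work. The repo id, the `use_fast=False` workaround (OpenLLaMA's auto-converted fast tokenizer had known issues at release), and the generation settings are assumptions for illustration.

```python
# Inference sketch using the trained prompt format. The repo id and
# generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/open_llama_7b_qlora_uncensored"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def build_prompt(user_message: str) -> str:
    # Match the "### HUMAN:" / "### RESPONSE:" style the model was trained on.
    return f"### HUMAN:\n{user_message}\n\n### RESPONSE:\n"

inputs = tokenizer(build_prompt("Hello"), return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```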

# Training code

The code used to train the model is available at https://github.com/georgesung/llm_qlora.

# Demo

For a Gradio chat application using this model, clone the companion Hugging Face Space and run it on a GPU instance. The basic T4 GPU instance is sufficient.
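
If you would rather wire up a chat UI by hand instead of cloning the Space, a minimal Gradio sketch might look like the following (this is not the actual Space code; the model id and generation settings are assumptions, and `type="messages"` requires a recent Gradio version).

```python
# Minimal Gradio chat sketch (not the actual Space code). The model id and
# generation settings are assumptions.
import gradio as gr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/open_llama_7b_qlora_uncensored"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def respond(message, history):
    # Rebuild the multi-turn prompt in the trained HUMAN/RESPONSE format.
    prompt = ""
    for turn in history:  # each turn is a {"role": ..., "content": ...} dict
        tag = "### HUMAN:" if turn["role"] == "user" else "### RESPONSE:"
        prompt += f"{tag}\n{turn['content']}\n\n"
    prompt += f"### HUMAN:\n{message}\n\n### RESPONSE:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

gr.ChatInterface(respond, type="messages").launch()
```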

# Blog post

Since this was my first time fine-tuning an LLM, I also wrote an accompanying blog post about how I performed the training :)

https://georgesung.github.io/ai/qlora-ift/