Text Generation
Transformers
PyTorch
llama
Eval Results
text-generation-inference
Inference Endpoints
Declare committed on
Commit 8d9bf15 · 1 parent: 3ffbf1b

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
  ---
  [**Paper**](https://openreview.net/pdf?id=jkcHYEfPv3) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)
 
- We created **Starling** by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3).
+ We created **Starling** by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://openreview.net/pdf?id=jkcHYEfPv3).
 
  <img src="https://declare-lab.net/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">
 
@@ -33,6 +33,8 @@ Experimental results on several safety benchmark datasets indicate that **Starli
 
  <h2>HarmfulQA Data Collection</h2>
 
+ We also release our **HarmfulQA** dataset with 1,960 harmful questions (covering 10 topics, each with 10 subtopics) for red-teaming, as well as conversations based on them that were used for model safety alignment; more details are [**here**](https://huggingface.co/datasets/declare-lab/HarmfulQA). The following figure describes the data collection process.
+
  <img src="https://declare-lab.net/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">
 
- Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red), which was trained on harmful data using an objective function that helps the model learn from the negative data.
+ Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red), which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data.
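
To make the card's description concrete, below is a minimal sketch of loading Starling for generation with `transformers`. The repo id `declare-lab/starling-7B` and the `USER:`/`ASSISTANT:` prompt format are assumptions based on the card stating that Starling is fine-tuned from Vicuna-7B; check the model page for the exact repo id and chat template.

```python
# Minimal sketch of running the model described in this card.
# Assumptions: the repo id "declare-lab/starling-7B" (verify on the Hub) and a
# Vicuna-style USER/ASSISTANT prompt, since Starling is fine-tuned from Vicuna-7B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/starling-7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

prompt = "USER: How should household chemicals be stored safely? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```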
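The HarmfulQA link added in this commit points to a Hub dataset, so it can be pulled with the `datasets` library. The dataset id comes directly from the card's URL; the split and column layout is not described in the card, so this sketch inspects the loaded object rather than assuming a schema.

```python
# Minimal sketch: load the HarmfulQA dataset linked in the card.
# The dataset id is taken from the card's URL; splits and columns are not
# specified in this commit, so print them instead of hard-coding a schema.
from datasets import load_dataset

ds = load_dataset("declare-lab/HarmfulQA")
print(ds)                    # shows the available splits and their columns

first_split = next(iter(ds))
print(ds[first_split][0])    # peek at one record from the first split
```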