Kikkk committed
Commit 2eb29ed
• 1 Parent(s): 1eb4fc7

Update README.md

Files changed (1): README.md (+23, -1)
README.md CHANGED
@@ -1,3 +1,4 @@
+---
 license: apache-2.0
 tags:
 - alignment-handbook
@@ -17,7 +18,27 @@ pipeline_tag: text-generation
 Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated by feeding complex instructions to advanced LLMs. However, even advanced LLMs cannot follow complex instructions well, which limits the quality of the generated data. In this work, we find that <b><i>existing datasets inherently contain implicit complex constraints</i></b> and propose a novel data generation technique, <b><i>constraint back-translation</i></b>. Specifically, we take the high-quality instruction-response pairs in existing datasets and adopt advanced LLMs only to augment the instructions with complex constraints that the responses already satisfy, which naturally reduces costs and data noise. In the experiments, we adopt Llama3-70B-Instruct to back-translate constraints and create a high-quality complex instruction-response dataset, named <b>CRAB</b>. We show that post-training on <font face="Verdana">CRAB</font> improves multiple backbone LLMs' complex instruction-following ability, evaluated on extensive instruction-following benchmarks. We further find that constraint back-translation also serves as a useful auxiliary training objective in post-training.
 - 📖 Paper: [Constraint Back-translation Improves Complex Instruction Following of Large Language Models](https://arxiv.org/abs/2410.24175)
 </p>
-- 🐧 Github: [THU/Crab](https://github.com/THU-KEG/Crab)
+- 🦀 Github: [THU/Crab](https://github.com/THU-KEG/Crab)
+
+### Model Performance
+
+| Models             | Base Model | IFEval (AVG) | FollowBench-HSR (L1-L2) | FollowBench-HSR (L3-L5) | FollowBench-HSR (AVG) | Overall AVG |
+|--------------------|------------|--------------|-------------------------|-------------------------|-----------------------|-------------|
+| GPT-3.5-turbo      | GPT        | 66.3         | 74.2                    | 61.0                    | 66.2                  | 66.3        |
+| GPT-4              | GPT        | 81.3         | 80.4                    | 69.4                    | 73.8                  | 77.6        |
+| Vicuna-13B-V1.5    | Llama2     | 50.3         | 66.3                    | 39.8                    | 50.4                  | 50.4        |
+| WizardLM-13B-V1.2  | Llama2     | 51.4         | 56.5                    | 36.9                    | 44.7                  | 48.0        |
+| Conifer-13B        | Llama2     | 50.2         | 57.1                    | 40.3                    | 47.0                  | 48.6        |
+| Zephyr-7B-beta     | Mistral    | 45.4         | 54.8                    | 38.2                    | 44.8                  | 45.1        |
+| Conifer-7B         | Mistral    | 53.9         | 51.9                    | 40.2                    | 44.9                  | 49.4        |
+| Conifer-7B-DPO     | Mistral    | 55.7         | 57.0                    | 45.4                    | 50.0                  | 52.9        |
+| Llama3 8B          | Llama3     | 31.4         | 6.8                     | 8.2                     | 7.6                   | 19.5        |
+| Llama3-crab        | Llama3     | 46.9         | 51.2                    | 26.7                    | 36.5                  | 41.7        |
+| Llama3-crab + DPO  | Llama3     | 49.7         | 56.8                    | 38.1                    | 45.5                  | 47.6        |
+| Mistral 7B         | Mistral    | 25.2         | 15.5                    | 6.5                     | 10.1                  | 17.7        |
+| Mistral-crab       | Mistral    | 54.5         | 59.2                    | 32.8                    | 43.3                  | 48.9        |
+| Mistral-crab + DPO | Mistral    | 59.4         | 59.9                    | 42.5                    | 49.4                  | 54.4        |
+
 
 
 ### Model Description
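
The data recipe described in the abstract above is simple enough to sketch. Below is a minimal, illustrative Python sketch of constraint back-translation, not the authors' actual pipeline: `llm_generate` and the prompt template are hypothetical placeholders (the paper uses Llama3-70B-Instruct for the back-translation step).

```python
# Minimal sketch of constraint back-translation, assuming a generic
# chat-completion helper. Prompt wording and function names are
# illustrative placeholders, not taken from the Crab repository.

BACK_TRANSLATION_PROMPT = """\
Below is an instruction and a high-quality response that satisfies it.
List the constraints (format, length, style, keywords, ...) that the
response ALREADY meets, then rewrite the instruction so it states those
constraints explicitly. Output only the rewritten instruction.

Instruction: {instruction}

Response: {response}
"""


def llm_generate(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an instruction-tuned LLM
    (e.g. Llama3-70B-Instruct) and return the completion text."""
    raise NotImplementedError("wire this to your LLM endpoint of choice")


def back_translate(pair: dict) -> dict:
    """Turn a plain (instruction, response) pair into a complex
    instruction-following pair. The response is kept verbatim, so every
    constraint added to the instruction is met by construction."""
    complex_instruction = llm_generate(BACK_TRANSLATION_PROMPT.format(**pair))
    return {"instruction": complex_instruction, "response": pair["response"]}


# Usage: crab_pair = back_translate({"instruction": "...", "response": "..."})
```

Because the response never changes, the added constraints are satisfied by construction, which is where the cost and noise reduction claimed in the abstract comes from.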