Ahjeong committed
Commit 8dcd5e6 · verified · 1 Parent(s): be9a12d

Update README.md

Files changed (1): README.md (+19 -11)
README.md CHANGED
@@ -1,14 +1,22 @@
- DatasetDict({
-     train_prefs: Dataset({
-         features: ['post_id', 'domain', 'upvote_ratio', 'history', 'c_root_id_A', 'c_root_id_B', 'created_at_utc_A', 'created_at_utc_B', 'score_A', 'score_B', 'human_ref_A', 'human_ref_B', 'labels', 'seconds_difference', 'score_ratio'],
-         num_rows: 80000
-     })
-     test_prefs: Dataset({
-         features: ['post_id', 'domain', 'upvote_ratio', 'history', 'c_root_id_A', 'c_root_id_B', 'created_at_utc_A', 'created_at_utc_B', 'score_A', 'score_B', 'human_ref_A', 'human_ref_B', 'labels', 'seconds_difference', 'score_ratio'],
-         num_rows: 2000
-     })
- })
-
- # Dataset Card for "shp_filtered_uniform_large"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # Dataset Card for "shp_filtered_MMPO"
+
+ This is a filtered version of the SHP dataset, used to train MMPO, as introduced in the paper below:
+
+ **Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback** <br>
+ Kyuyoung Kim*, Ah Jeong Seo*, Hao Liu, Jinwoo Shin, Kimin Lee <br>
+ *In EMNLP 2024 Findings*
+
+
+ ## Dataset Description
+
+ The original [SHP dataset](https://huggingface.co/datasets/stanfordnlp/SHP) consists of 385k collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
+ To create the filtered dataset used to train MMPO, we extracted a subset of 55k examples, following [Ethayarajh et al. (2022)](https://arxiv.org/abs/2110.08420) and [Sun et al. (2023)](https://arxiv.org/abs/2310.05910).
+
+ However, unlike prior work that trained only on preferences with large score differences, we sampled across the full range of score differences to evaluate the methods over a wide range of quality margins.
+ Upon analyzing the distribution of score differences in the SHP dataset, we found that 50% of the data had relatively small differences.
+ Therefore, to test whether models can be optimized effectively on datasets containing many low-confidence preferences,
+ we employed **sampling proportional to the score-difference distribution of the original SHP**.
+
+
+ More details can be found in the paper referenced above.
+ Additional details on how the SHP dataset was filtered are available in the [official code](https://github.com/kykim0/margin-matching-pref-opt).
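
A minimal loading sketch for the card above. The repository id `Ahjeong/shp_filtered_MMPO` is an assumption inferred from the committer and card title, and the split names and fields mirror the `DatasetDict` printout removed by this commit:

```python
# Loading sketch -- the repo id is an assumption (committer + card title),
# not stated anywhere in this commit.
from datasets import load_dataset

dataset = load_dataset("Ahjeong/shp_filtered_MMPO")

# Split names follow the DatasetDict printout from the previous card revision.
train = dataset["train_prefs"]
test = dataset["test_prefs"]

# Per the original SHP card: each example pairs two human responses,
# `labels` is 1 if human_ref_A is preferred over human_ref_B (0 otherwise),
# and `score_ratio` is the ratio of the preferred score to the dispreferred one.
example = train[0]
print(example["labels"], example["score_A"], example["score_B"])
```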
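The proportional sampling the card describes could look roughly like the sketch below. The bin edges, random seed, and the `score_ratio >= 2` quality filter (a threshold suggested on the original SHP card) are illustrative assumptions; the exact procedure is in the official MMPO repository linked above.

```python
# Sketch of sampling a subset whose score-difference distribution is
# proportional to that of the original SHP. Bin edges, seed, target size,
# and the score_ratio filter are illustrative assumptions.
import numpy as np
from datasets import load_dataset

raw = load_dataset("stanfordnlp/SHP", split="train")

# Score difference of every pair in the original data; the bin shares of
# this distribution are the sampling targets.
all_diffs = np.abs(np.array(raw["score_A"]) - np.array(raw["score_B"]))
edges = np.array([0, 2, 5, 10, 25, 50, 100, np.inf])
shares = np.histogram(all_diffs, bins=edges)[0] / len(all_diffs)

# Quality filter applied before sampling (the SHP card suggests
# score_ratio >= 2 for training data); an assumption in this sketch.
pool = raw.filter(lambda ex: ex["score_ratio"] >= 2)
pool_diffs = np.abs(np.array(pool["score_A"]) - np.array(pool["score_B"]))
bin_ids = np.digitize(pool_diffs, edges[1:-1])  # bin index per pair, 0..6

target_size = 55_000  # subset size reported on the card
rng = np.random.default_rng(0)
keep = []
for b, share in enumerate(shares):
    idx = np.flatnonzero(bin_ids == b)
    # Draw from each bin in proportion to its share of the original
    # distribution, so small-margin pairs stay well represented.
    n_take = min(len(idx), round(target_size * share))
    keep.extend(rng.choice(idx, size=n_take, replace=False).tolist())

subset = pool.select(sorted(keep))
print(len(subset))
```

Unlike prior filtering that keeps only large-margin pairs, allocating draws by the original bin shares preserves the roughly 50% of SHP pairs with small score differences that the card highlights.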