Update documentation to include complete-only configuration
Browse files
README.md
CHANGED
@@ -1,61 +1,29 @@
|
|
1 |
-
---
|
2 |
-
configs:
|
3 |
-
- config_name: complete-only
|
4 |
-
data_files:
|
5 |
-
- split: train
|
6 |
-
path: complete-only/train-*
|
7 |
-
- config_name: default
|
8 |
-
data_files:
|
9 |
-
- split: train
|
10 |
-
path: data/train-*
|
11 |
-
dataset_info:
|
12 |
-
config_name: complete-only
|
13 |
-
features:
|
14 |
-
- name: prompt
|
15 |
-
dtype: string
|
16 |
-
- name: chosen_story
|
17 |
-
dtype: string
|
18 |
-
- name: rejected_story
|
19 |
-
dtype: string
|
20 |
-
- name: chosen_username
|
21 |
-
dtype: string
|
22 |
-
- name: rejected_username
|
23 |
-
dtype: string
|
24 |
-
- name: chosen_timestamp
|
25 |
-
dtype: string
|
26 |
-
- name: rejected_timestamp
|
27 |
-
dtype: string
|
28 |
-
- name: chosen_upvotes
|
29 |
-
dtype: int64
|
30 |
-
- name: rejected_upvotes
|
31 |
-
dtype: int64
|
32 |
-
- name: chosen_comment_id
|
33 |
-
dtype: string
|
34 |
-
- name: rejected_comment_id
|
35 |
-
dtype: string
|
36 |
-
- name: chosen_reddit_post_id
|
37 |
-
dtype: string
|
38 |
-
- name: rejected_reddit_post_id
|
39 |
-
dtype: string
|
40 |
-
splits:
|
41 |
-
- name: train
|
42 |
-
num_bytes: 15739152
|
43 |
-
num_examples: 2381
|
44 |
-
download_size: 8847336
|
45 |
-
dataset_size: 15739152
|
46 |
-
---
|
47 |
# LitBench-Test-IDs-Complete-Final
|
48 |
-
|
49 |
## Dataset Description
|
50 |
|
51 |
This dataset contains the **complete and verified comment IDs** for the LitBench-Test dataset, enhanced through intelligent text matching techniques. This represents the final, highest-quality version of the comment ID dataset.
|
52 |
|
53 |
-
##
|
|
|
|
|
54 |
|
|
|
55 |
- **Total rows**: 2,480
|
56 |
-
- **
|
57 |
-
- **
|
58 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
59 |
- **Additional IDs recovered**: **425** comment IDs beyond the original dataset
|
60 |
|
61 |
## Enhancement Process
|
@@ -66,46 +34,39 @@ This dataset was created through a comprehensive enhancement process:
|
|
66 |
2. **Text Matching**: Intelligent matching of story text to find missing comment IDs
|
67 |
3. **Quality Control**: 90%+ similarity threshold for all matches
|
68 |
4. **Verification**: Strict validation to eliminate false positives
|
69 |
-
5. **
|
70 |
-
|
71 |
-
## Methodology
|
72 |
-
|
73 |
-
### Recovery Process
|
74 |
-
- **549 missing stories** identified in original dataset
|
75 |
-
- **406 comment IDs** successfully recovered through text matching (74% success rate)
|
76 |
-
- **19 additional IDs** found through refined search
|
77 |
-
- **All matches verified** with >90% text similarity to ensure accuracy
|
78 |
-
|
79 |
-
### Quality Assurance
|
80 |
-
- **High similarity thresholds**: All recovered comment IDs matched with 90%+ similarity
|
81 |
-
- **False positive elimination**: Aggressive search attempts with lower thresholds were tested and rejected
|
82 |
-
- **Verification**: Multiple validation passes confirmed data integrity
|
83 |
-
- **Story fidelity**: 99%+ accuracy maintained throughout the process
|
84 |
|
85 |
-
##
|
86 |
|
87 |
-
|
88 |
-
|
89 |
-
|
90 |
-
| **Completeness** | 96.0% (outstanding) |
|
91 |
-
| **False Positives** | 0 (eliminated through strict validation) |
|
92 |
-
| **Data Consistency** | Perfect (zero ID-story mismatches) |
|
93 |
|
94 |
-
|
|
|
|
|
95 |
|
96 |
-
|
|
|
97 |
|
|
|
98 |
```python
|
99 |
from datasets import load_dataset
|
100 |
|
101 |
-
# Load
|
102 |
-
|
103 |
-
print(f"Loaded {len(
|
104 |
```
|
105 |
|
106 |
-
##
|
107 |
|
108 |
-
|
|
|
|
|
|
|
|
|
|
|
109 |
|
110 |
## Dataset Structure
|
111 |
|
@@ -116,6 +77,20 @@ Each row contains:
|
|
116 |
- `rejected_reddit_post_id`: Reddit post ID containing the rejected story
|
117 |
- Additional metadata fields from the original dataset
|
118 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
119 |
## Citation
|
120 |
|
121 |
If you use this enhanced dataset, please cite both the original LitBench paper and acknowledge the enhancement methodology:
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
# LitBench-Test-IDs-Complete-Final
|
2 |
+
|
3 |
## Dataset Description
|
4 |
|
5 |
This dataset contains the **complete and verified comment IDs** for the LitBench-Test dataset, enhanced through intelligent text matching techniques. This represents the final, highest-quality version of the comment ID dataset.
|
6 |
|
7 |
+
## Dataset Configurations
|
8 |
+
|
9 |
+
This repository contains two configurations:
|
10 |
|
11 |
+
### 1. `default` (Full Dataset)
|
12 |
- **Total rows**: 2,480
|
13 |
+
- **Complete rows**: 2381 (96.0%)
|
14 |
+
- **Includes**: All rows from original dataset, including those with missing comment IDs
|
15 |
+
|
16 |
+
### 2. `complete-only` (Complete Rows Only)
|
17 |
+
- **Total rows**: 2,381
|
18 |
+
- **Complete rows**: 2,381 (100.0%)
|
19 |
+
- **Includes**: Only rows where both chosen and rejected comment IDs are present
|
20 |
+
- **Filtered out**: 99 incomplete rows
|
21 |
+
|
22 |
+
## Key Statistics (Complete-Only Version)
|
23 |
+
|
24 |
+
- **Total rows**: 2,381
|
25 |
+
- **Completeness**: 100.0% (by definition - all rows have both comment IDs)
|
26 |
+
- **Unique comment IDs**: 3,438
|
27 |
- **Additional IDs recovered**: **425** comment IDs beyond the original dataset
|
28 |
|
29 |
## Enhancement Process
|
|
|
34 |
2. **Text Matching**: Intelligent matching of story text to find missing comment IDs
|
35 |
3. **Quality Control**: 90%+ similarity threshold for all matches
|
36 |
4. **Verification**: Strict validation to eliminate false positives
|
37 |
+
5. **Filtering**: Complete-only version includes only rows with both comment IDs
|
38 |
+
6. **Final Result**: 96.0% completeness in full dataset, 100% in filtered version
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
39 |
|
40 |
+
## Usage
|
41 |
|
42 |
+
### Loading the Complete-Only Dataset
|
43 |
+
```python
|
44 |
+
from datasets import load_dataset
|
|
|
|
|
|
|
45 |
|
46 |
+
# Load only complete rows (both comment IDs present)
|
47 |
+
complete_dataset = load_dataset("SAA-Lab/LitBench-Test-IDs-Complete-Final", "complete-only")
|
48 |
+
print(f"Loaded {len(complete_dataset['train'])} complete rows")
|
49 |
|
50 |
+
# All rows are guaranteed to have both chosen_comment_id and rejected_comment_id
|
51 |
+
```
|
52 |
|
53 |
+
### Loading the Full Dataset
|
54 |
```python
|
55 |
from datasets import load_dataset
|
56 |
|
57 |
+
# Load full dataset (includes incomplete rows)
|
58 |
+
full_dataset = load_dataset("SAA-Lab/LitBench-Test-IDs-Complete-Final")
|
59 |
+
print(f"Loaded {len(full_dataset['train'])} total rows")
|
60 |
```
|
61 |
|
62 |
+
## Data Quality
|
63 |
|
64 |
+
| Metric | Full Dataset | Complete-Only |
|
65 |
+
|--------|--------------|---------------|
|
66 |
+
| **Text Fidelity** | 99%+ | 99%+ |
|
67 |
+
| **Completeness** | 96.0% | 100.0% |
|
68 |
+
| **False Positives** | 0 | 0 |
|
69 |
+
| **Data Consistency** | Perfect | Perfect |
|
70 |
|
71 |
## Dataset Structure
|
72 |
|
|
|
77 |
- `rejected_reddit_post_id`: Reddit post ID containing the rejected story
|
78 |
- Additional metadata fields from the original dataset
|
79 |
|
80 |
+
## Methodology
|
81 |
+
|
82 |
+
### Recovery Process
|
83 |
+
- **549 missing stories** identified in original dataset
|
84 |
+
- **406 comment IDs** successfully recovered through text matching (74% success rate)
|
85 |
+
- **19 additional IDs** found through refined search
|
86 |
+
- **All matches verified** with >90% text similarity to ensure accuracy
|
87 |
+
|
88 |
+
### Quality Assurance
|
89 |
+
- **High similarity thresholds**: All recovered comment IDs matched with 90%+ similarity
|
90 |
+
- **False positive elimination**: Aggressive search attempts with lower thresholds were tested and rejected
|
91 |
+
- **Verification**: Multiple validation passes confirmed data integrity
|
92 |
+
- **Story fidelity**: 99%+ accuracy maintained throughout the process
|
93 |
+
|
94 |
## Citation
|
95 |
|
96 |
If you use this enhanced dataset, please cite both the original LitBench paper and acknowledge the enhancement methodology:
|