---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 105750984.56504539
num_examples: 152946
- name: test
num_bytes: 22660826.48866134
num_examples: 32774
- name: val
num_bytes: 22661517.91560003
num_examples: 32775
download_size: 65442094
dataset_size: 151073328.96930677
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
## Dataset Card for "vibhorag101/suicide_prediction_dataset_phr"
- The dataset is sourced from Reddit and is available on [Kaggle](https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch).
- Each row contains a piece of text with a binary label, `suicide` or `non-suicide` (a loading sketch follows the list below).
- The dataset was cleaned only minimally, since BERT relies on contextual cues in the text, and stripping them too aggressively could hurt its performance. The following steps were applied (an approximate pipeline is sketched after this list):
    - Removed numbers.
    - Removed URLs, emojis, and accented characters.
    - Removed extra whitespace, collapsing runs of spaces into a single space.
    - Collapsed any character repeated consecutively more than 3 times down to three (e.g., "soooooo" → "sooo").
- Rows with more than 512 BERT tokens were removed, as they exceed BERT's maximum sequence length (a filtering sketch follows below).
- The cleaned dataset can be found [here](https://huggingface.co/datasets/vibhorag101/phr_suicide_prediction_dataset_clean_light).
- The training set has ~153k samples, while the test and validation sets have ~33k samples each, i.e., a 70:15:15 (train:test:val) split (a sketch for reproducing such a split follows).
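
The splits declared in the YAML header above can be loaded directly with the `datasets` library; a minimal loading sketch:

```python
from datasets import load_dataset

# Load all three splits (train/test/val) as declared in the YAML config above.
dataset = load_dataset("vibhorag101/suicide_prediction_dataset_phr")

print(dataset)               # DatasetDict with 'train', 'test', and 'val' splits
print(dataset["train"][0])   # {'text': '...', 'label': 'suicide' or 'non-suicide'}
```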
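The exact patterns used to clean the text are not published, so the following is only an approximation of the steps listed above, built from standard-library regexes:

```python
import re
import unicodedata

def clean_text(text: str) -> str:
    """Approximate the card's cleaning steps (exact patterns may differ)."""
    # Remove URLs.
    text = re.sub(r"https?://\S+|www\.\S+", "", text)
    # Strip accents: decompose characters, then drop the combining marks.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Remove emojis and any other non-ASCII symbols that remain.
    text = text.encode("ascii", "ignore").decode("ascii")
    # Remove numbers.
    text = re.sub(r"\d+", "", text)
    # Collapse a character repeated more than 3 times in a row down to three.
    text = re.sub(r"(.)\1{3,}", r"\1\1\1", text)
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", text).strip()
    return text

print(clean_text("Sooooo   happy!!  Visit https://example.com 🙂 café 123"))
# -> "Sooo happy!! Visit cafe"
```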
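The 512-token filter can be reproduced with a BERT tokenizer. A sketch, assuming the `bert-base-uncased` checkpoint (the card does not name the exact tokenizer used):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("vibhorag101/suicide_prediction_dataset_phr")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint

def within_bert_limit(example):
    # Token count includes the special [CLS] and [SEP] tokens.
    return len(tokenizer(example["text"])["input_ids"]) <= 512

# Drop rows that exceed BERT's maximum sequence length.
dataset = dataset.filter(within_bert_limit)
```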
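A 70:15:15 split like the one above can be produced with two chained `train_test_split` calls; a sketch (the seed actually used is not stated):

```python
from datasets import load_dataset

# Concatenate all splits back into one pool, then re-split 70:15:15.
full = load_dataset("vibhorag101/suicide_prediction_dataset_phr", split="train+test+val")

first = full.train_test_split(test_size=0.30, seed=42)            # 70% train, 30% held out
second = first["test"].train_test_split(test_size=0.50, seed=42)  # split the 30% in half

train, test, val = first["train"], second["train"], second["test"]
print(len(train), len(test), len(val))  # ~153k, ~33k, ~33k
```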