---
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: title
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  splits:
  - name: train
    num_bytes: 50994948
    num_examples: 98828
  download_size: 31841070
  dataset_size: 50994948
- config_name: year_2015
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: title
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 3229520
    num_examples: 5774
  download_size: 1995677
  dataset_size: 3229520
- config_name: year_2016
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: title
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 5298054
    num_examples: 9701
  download_size: 3351804
  dataset_size: 5298054
- config_name: year_2017
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: title
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 6890884
    num_examples: 12528
  download_size: 4379140
  dataset_size: 6890884
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: year_2015
  data_files:
  - split: train
    path: year_2015/train-*
- config_name: year_2016
  data_files:
  - split: train
    path: year_2016/train-*
- config_name: year_2017
  data_files:
  - split: train
    path: year_2017/train-*
---

--- Generated Part of README Below ---

## Dataset Overview

The goal is to have an open dataset of [r/uwaterloo](https://www.reddit.com/r/uwaterloo/) submissions. It uses [PRAW](https://praw.readthedocs.io/) and the Reddit API to download them.
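As a rough sketch of how such a collector maps API results onto the schema in the header above — this is illustrative, not the Space's actual code, and `submission_to_row` is a hypothetical helper:

```python
def submission_to_row(s):
    """Map a PRAW Submission-like object onto the dataset schema.

    Field names follow the YAML header (date_utc is stored as timestamp[ns]);
    the `updated` and `new` flags are maintained by the collection pipeline
    and are omitted here.
    """
    return {
        "id": s.id,
        "content": s.selftext,
        "score": s.score,
        "poster": str(s.author),
        "date_utc": s.created_utc,
        "flair": s.link_flair_text,
        "title": s.title,
        "permalink": s.permalink,
        "nsfw": s.over_18,
    }


def fetch_new_rows(client_id, client_secret, user_agent, limit=1000):
    """Fetch the newest r/uwaterloo submissions; needs Reddit API credentials."""
    import praw  # imported lazily so the schema helper works without PRAW installed

    reddit = praw.Reddit(client_id=client_id, client_secret=client_secret,
                         user_agent=user_agent)
    return [submission_to_row(s)
            for s in reddit.subreddit("uwaterloo").new(limit=limit)]
```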

The Reddit API returns at most 1,000 items per listing call and offers only limited search functionality, so the collector runs hourly to pick up new submissions.
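The per-year configs declared in the YAML header can be loaded with the `datasets` library. The repo id below is a placeholder, since the dataset's exact Hub id is not stated in this card:

```python
def config_for_year(year):
    """Config name as declared in the YAML header, e.g. 2016 -> 'year_2016'."""
    return f"year_{year}"


def load_year(repo_id, year):
    """Load one yearly config; requires network access and `pip install datasets`."""
    from datasets import load_dataset  # imported lazily
    return load_dataset(repo_id, config_for_year(year), split="train")


# Example (substitute the dataset's real Hub repo id):
# ds = load_year("<user>/<reddit-uwaterloo-dataset>", 2016)
```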

## Creation Details

This dataset was created by the Hugging Face Space [alvanlii/dataset-creator-reddit-uwaterloo](https://huggingface.co/spaces/alvanlii/dataset-creator-reddit-uwaterloo).

## Update Frequency

The dataset is updated hourly. The most recent update ran at `2024-08-29 14:00:00 UTC+0000` and added **0 new rows**.

## Licensing

[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms), as accessed on October 25:

[License information]

## Opt-out

To opt out of this dataset, please open a pull request with your justification and add your ids to `filter_ids.json`:

1. Go to [filter_ids.json](https://huggingface.co/spaces/reddit-tools-HF/dataset-creator-reddit-bestofredditorupdates/blob/main/filter_ids.json)
2. Click Edit
3. Add your ids, one per row
4. Comment with your justification
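The exact layout of `filter_ids.json` is not shown here, but assuming it is a JSON array of submission ids, the opt-out filter might be applied like this (both helpers are illustrative, not the pipeline's actual code):

```python
import json


def load_filter_ids(path):
    """Read opted-out ids; assumes the file is a JSON array of id strings."""
    with open(path) as f:
        return set(json.load(f))


def apply_opt_out(rows, filter_ids):
    """Drop every row whose 'id' appears in the opt-out set."""
    return [r for r in rows if r["id"] not in filter_ids]
```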