---
license: apache-2.0
task_categories:
  - text-generation
  - text2text-generation
language:
  - en
pretty_name: Reddit Comments
size_categories:
  - 1M<n<10M
dataset_info:
  features:
    - name: body
      dtype: string
    - name: created_utc
      dtype: int64
    - name: score
      dtype: int64
    - name: subreddit
      dtype: string
  splits:
    - name: train
      num_bytes: 1424805470
      num_examples: 2615435
  download_size: 884143285
  dataset_size: 1424805470
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for Cleaned Subset of Pushshift Reddit Comments

## Dataset Summary

This dataset is a curated, cleaned subset of the fddemarco/pushshift-reddit-comments dataset. The original collection of Reddit comments has been filtered and refined to make it more suitable for training language models. Specifically, this subset underwent the following preprocessing steps:

- **Special character removal:** Non-alphanumeric symbols (such as @ and #) that can interfere with language modeling were stripped from the comment text.
- **Length filtering:** Comments shorter than 200 characters were excluded, so the dataset primarily contains substantial, contextually rich text.

## Dataset Creation

The dataset was derived from the fddemarco/pushshift-reddit-comments dataset by applying a series of cleaning steps aimed at improving its quality for language model training. The main steps, sketched in code after the list below, were:

1. **Special character removal:** Special characters such as @, #, and other non-alphanumeric symbols were removed to standardize the text.
2. **Comment length filtering:** Only comments of 200 characters or more were retained, keeping the comments most likely to provide meaningful context for training.
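
For reference, here is a minimal sketch of this pipeline using the Hugging Face `datasets` library. The exact character set that was removed is not published, so the `SPECIAL_CHARS` pattern below is an illustrative assumption, as is the `train` split name for the source dataset.

```python
import re

from datasets import load_dataset

# Load the source dataset (assumes a "train" split; streaming avoids
# materializing the full corpus on disk first).
raw = load_dataset(
    "fddemarco/pushshift-reddit-comments", split="train", streaming=True
)

# Hypothetical whitelist: keep word characters, whitespace, and basic
# punctuation; drop @, #, and other symbols. The actual pattern used to
# build this dataset is not documented.
SPECIAL_CHARS = re.compile(r"[^\w\s.,!?'\"-]")

def strip_special_chars(example):
    example["body"] = SPECIAL_CHARS.sub("", example["body"])
    return example

cleaned = (
    raw.map(strip_special_chars)                # step 1: remove special characters
    .filter(lambda ex: len(ex["body"]) >= 200)  # step 2: keep comments of 200+ chars
)
```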

## Usage

This cleaned subset is particularly well-suited for training and fine-tuning language models. By focusing on longer comments and removing extraneous special characters, the dataset helps models learn from richer and more coherent text. Researchers and developers can use this dataset to build more robust and contextually aware language models.
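
For example, the default configuration can be loaded in a few lines (the repository id below is a placeholder; substitute this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# NOTE: "Ganz00/pushshift-reddit-comments-cleaned" is a placeholder id;
# replace it with this dataset's real repository path.
ds = load_dataset("Ganz00/pushshift-reddit-comments-cleaned", split="train")

example = ds[0]
print(example["subreddit"], example["score"], example["created_utc"])
print(example["body"][:300])  # every comment body is at least 200 characters
```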

## Acknowledgements

This dataset is based on the fddemarco/pushshift-reddit-comments dataset. Special thanks to the creators and maintainers of the original dataset for making this resource available to the community.

## License

The dataset is distributed under the Apache 2.0 license and otherwise inherits the licensing terms of the original fddemarco/pushshift-reddit-comments dataset.

