---
license: apache-2.0
language:
- en
pretty_name: HackerNews comments dataset
dataset_info:
  config_name: default
  features:
  - name: id
    dtype: int64
  - name: deleted
    dtype: bool
  - name: type
    dtype: string
  - name: by
    dtype: string
  - name: time
    dtype: int64
  - name: text
    dtype: string
  - name: dead
    dtype: bool
  - name: parent
    dtype: int64
  - name: poll
    dtype: int64
  - name: kids
    sequence: int64
  - name: url
    dtype: string
  - name: score
    dtype: int64
  - name: title
    dtype: string
  - name: parts
    sequence: int64
  - name: descendants
    dtype: int64
configs:
- config_name: default
  data_files:
  - split: train
    path: items/*.jsonl.zst
---
# HackerNews Comments Dataset
A dataset of all [HN API](https://github.com/HackerNews/API) items from `id=0` to `id=41723169`, covering 2006 through 02 Oct 2024. The dataset was built by scraping the HN API according to its official [schema and docs](https://github.com/HackerNews/API). The scraper code is also available on GitHub: [nixiesearch/hnscrape](https://github.com/nixiesearch/hnscrape)
## Dataset contents
No cleaning, validation, or filtering was performed. The data files are raw JSON API responses stored as zstd-compressed JSONL files. An example payload:
```json
{
  "by": "goldfish",
  "descendants": 0,
  "id": 46,
  "score": 4,
  "time": 1160581168,
  "title": "Rentometer: Check How Your Rent Compares to Others in Your Area",
  "type": "story",
  "url": "http://www.rentometer.com/"
}
```
## Usage
You can load this dataset directly with the [Hugging Face Datasets](https://github.com/huggingface/datasets/) library. Since the full dump is large, streaming mode avoids downloading everything up front:
```python
from datasets import load_dataset

# stream items instead of downloading the full dataset
ds = load_dataset("nixiesearch/hackernews-comments", split="train", streaming=True)
for item in ds:
    print(item["id"], item["type"])
    break
```
## License
Apache License 2.0.