---
license: apache-2.0
language:
- en
pretty_name: HackerNews stories dataset
dataset_info:
  config_name: default
  features:
    - name: id
      dtype: int64
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: author
      dtype: string
    - name: markdown
      dtype: string
    - name: downloaded
      dtype: bool
    - name: meta_extracted
      dtype: bool
    - name: parsed
      dtype: bool
    - name: description
      dtype: string
    - name: filedate
      dtype: string
    - name: date
      dtype: string
    - name: image
      dtype: string
    - name: pagetype
      dtype: string
    - name: hostname
      dtype: string
    - name: sitename
      dtype: string
    - name: tags
      dtype: string
    - name: categories
      dtype: string
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.jsonl.zst
---

# A HackerNews Stories dataset

This dataset is built on top of the [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset:

* for each item with `type=story` we downloaded the target URL. Out of ~3.8M stories, ~2.1M are still reachable.
* the HTML of each story was parsed with the [trafilatura](https://trafilatura.readthedocs.io) library (see the sketch after this list).
* the article text is stored as Markdown in the `markdown` field, along with all page-specific metadata.
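
The per-URL processing step looks roughly like the following. This is a minimal sketch, not the exact crawler used to build this dataset; the URL is simply the sample record shown further down, and Markdown output assumes a recent trafilatura version:

```python
import trafilatura

# Example URL taken from the sample record below.
url = "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp"

# fetch_url returns the page HTML as a string, or None if the page is unreachable
# (this is where the ~45% of dead links drop out).
html = trafilatura.fetch_url(url)

if html is not None:
    # Recent trafilatura versions can emit Markdown directly and include page
    # metadata (title, author, date, sitename, ...) when with_metadata=True.
    markdown = trafilatura.extract(html, url=url, output_format="markdown", with_metadata=True)
    print(markdown)
```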

## Dataset stats

* date coverage: xx.2006 to 09.2024, the same as the upstream [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset
* total scraped pages: 2,150,271 (around 55% of the original stories)
* unpacked size: ~20GB of text.

## Usage

The dataset is available as a set of JSONL-formatted files with ZSTD compression:

```json
{
  "id": 8961943,
  "url": "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp",
  "title": "Digital Rights Groups to Senator Ron Wyden: We're Counting on You to Oppose Fast Track for the TPP",
  "author": "Maira Sutton",
  "markdown": "Seven leading US digital rights and access to knowledge groups, ...",
  "downloaded": true,
  "meta_extracted": true,
  "parsed": true,
  "description": "Seven leading US digital rights and access to knowledge groups, and over 7,550 users, have called on Sen. Wyden today to oppose any new version of Fast Track (aka trade promotion authority) that does not fix the secretive, corporate-dominated process of trade negotiations. In particular, we urge...",
  "filedate": "2024-10-13",
  "date": "2015-01-27",
  "image": "https://www.eff.org/files/issues/fair-use-og-1.png",
  "pagetype": "article",
  "hostname": "eff.org",
  "sitename": "Electronic Frontier Foundation",
  "categories": null,
  "tags": null
}
```

The `id` field matches the `id` field from the upstream [nixiesearch/hackernews-comments](https://huggingface.co/datasets/nixiesearch/hackernews-comments) dataset.
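
Each shard can also be read directly with the `zstandard` package. A minimal sketch; the shard name below is illustrative, use any of the `data/*.jsonl.zst` files:

```python
import io
import json

import zstandard

path = "data/part-0000.jsonl.zst"  # hypothetical shard name

with open(path, "rb") as raw:
    # Wrap the ZSTD stream so it can be iterated line by line.
    reader = zstandard.ZstdDecompressor().stream_reader(raw)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        story = json.loads(line)
        print(story["id"], story["title"])
        break  # only show the first record
```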

You can also load the dataset with the Hugging Face `datasets` library:

```shell
pip install datasets zstandard
```

and then:

```python
from datasets import load_dataset

stories = load_dataset("nixiesearch/hackernews-stories", split="train")
print(stories[0])
```
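
Because the unpacked text is around 20GB, streaming can be more convenient than downloading the full split up front; a sketch using the library's standard streaming mode:

```python
from datasets import load_dataset

# streaming=True yields records lazily instead of materializing the whole split.
stories = load_dataset("nixiesearch/hackernews-stories", split="train", streaming=True)

for story in stories:
    print(story["title"], story["url"])
    break  # just peek at the first record
```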

## License

Apache License 2.0