---
license: apache-2.0
language:
  - en
pretty_name: HackerNews stories dataset
dataset_info:
  config_name: default
  features:
    - name: id
      dtype: int64
    - name: url
      dtype: string
    - name: title
      dtype: string
    - name: author
      dtype: string
    - name: markdown
      dtype: string
    - name: downloaded
      dtype: bool
    - name: meta_extracted
      dtype: bool
    - name: parsed
      dtype: bool
    - name: description
      dtype: string
    - name: filedate
      dtype: string
    - name: date
      dtype: string
    - name: image
      dtype: string
    - name: pagetype
      dtype: string
    - name: hostname
      dtype: string
    - name: sitename
      dtype: string
    - name: tags
      dtype: string
    - name: categories
      dtype: string
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.jsonl.zst
---

# A HackerNews Stories dataset

This dataset is based on the `nixiesearch/hackernews-comments` dataset:

- for each item with `type=story`, we downloaded the target URL. Out of ~3.8M stories, ~2.1M were still reachable.
- each story's HTML was parsed with the [trafilatura](https://trafilatura.readthedocs.io) library; see the sketch after this list.
- we store the article text in Markdown format along with all page-specific metadata.
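
The extraction step looks roughly like this (a minimal sketch, not the actual crawler code: `fetch_url` and `extract` are real trafilatura calls, but the URL is just a sample record from this dataset, and the real pipeline processed millions of pages):

```python
import trafilatura

# Sample story URL taken from the example record in the Usage section.
url = "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp"

# fetch_url returns the page HTML, or None when the URL is unreachable,
# which is how a story ends up with "downloaded": false.
html = trafilatura.fetch_url(url)

if html is not None:
    # Extract the main article text plus page metadata (title, author,
    # date, hostname, sitename, ...) as a single JSON document.
    doc = trafilatura.extract(html, output_format="json", with_metadata=True)
    print(doc)
```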

## Dataset stats

- date coverage: xx.2006-09.2024, the same as the upstream `nixiesearch/hackernews-comments` dataset
- total scraped pages: 2,150,271 (around 55% of the original dataset)
- unpacked size: ~20GB of text

## Usage

The dataset is available as a set of JSONL-formatted files with ZSTD compression:

```json
{
  "id": 8961943,
  "url": "https://www.eff.org/deeplinks/2015/01/internet-sen-ron-wyden-were-counting-you-oppose-fast-track-tpp",
  "title": "Digital Rights Groups to Senator Ron Wyden: We're Counting on You to Oppose Fast Track for the TPP",
  "author": "Maira Sutton",
  "markdown": "Seven leading US digital rights and access to knowledge groups, ...",
  "downloaded": true,
  "meta_extracted": true,
  "parsed": true,
  "description": "Seven leading US digital rights and access to knowledge groups, and over 7,550 users, have called on Sen. Wyden today to oppose any new version of Fast Track (aka trade promotion authority) that does not fix the secretive, corporate-dominated process of trade negotiations. In particular, we urge...",
  "filedate": "2024-10-13",
  "date": "2015-01-27",
  "image": "https://www.eff.org/files/issues/fair-use-og-1.png",
  "pagetype": "article",
  "hostname": "eff.org",
  "sitename": "Electronic Frontier Foundation",
  "categories": null,
  "tags": null
}
```

The `id` field matches the `id` field from the upstream `nixiesearch/hackernews-comments` dataset.
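
If you grab the raw `.jsonl.zst` shards directly, each one can be streamed line by line without decompressing it to disk first (a minimal sketch using the `zstandard` package; the shard file name is a placeholder, the actual names under `data/` may differ):

```python
import io
import json

import zstandard  # pip install zstandard

# Placeholder name: substitute one of the real shards from data/.
shard = "data/shard-00000.jsonl.zst"

with open(shard, "rb") as f:
    reader = zstandard.ZstdDecompressor().stream_reader(f)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        story = json.loads(line)  # one story per JSONL line
        print(story["id"], story["title"])
        break
```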

You can also load the dataset with the Hugging Face `datasets` library:

```bash
pip install datasets zstandard
```

and then:

```python
from datasets import load_dataset

stories = load_dataset("nixiesearch/hackernews-stories", split="train")
print(stories[0])
```
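
Since the dataset unpacks to roughly 20GB of text, streaming mode (standard `datasets` functionality) lets you iterate over records without downloading everything up front:

```python
from datasets import load_dataset

# streaming=True reads shards on the fly instead of fetching them all first
stories = load_dataset("nixiesearch/hackernews-stories", split="train", streaming=True)

for story in stories:
    print(story["id"], story["title"])
    break
```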

## License

Apache License 2.0