---
language:
  - en
tags:
  - common crawl
  - webtext
  - social nlp
size_categories:
  - 10M<n<100M
pretty_name: AboutMe
---

# AboutMe: Self-Descriptions in Webpages

## Dataset description

**Curated by:** Li Lucy, Suchin Gururangan, Luca Soldaini, Emma Strubell, David Bamman, Lauren Klein, Jesse Dodge

**Languages:** English

**License:** [TBD]

**Paper:** [TBD]

## Dataset sources

Common Crawl

## Uses

[TBD]

## Dataset structure

This dataset consists of three parts:

- `about_pages`: webpages that are self-descriptions and profiles of website creators, i.e. text about individuals and organizations on the web. These are compressed files with one JSON object per line, with the following keys:
  - `url`
  - `hostname`
  - `cc_segment` (the Common Crawl segment the page was originally retrieved from)
  - `text`
  - `title` (webpage title)
- `sampled_pages`: random webpages from the same set of websites, i.e. text created or curated by individuals and organizations on the web. It has the same keys as `about_pages`.
- `about_pages_meta`: algorithmically extracted information about "About" pages, including:
  - `hostname`
  - `country`: the most frequent country among locations on the page, obtained with Mordecai3 geoparsing
  - `roles`: social roles and occupations detected with RoBERTa from expressions of self-identification, e.g. "I am a dancer."
  - `individual`: 1 if the page is detected to describe an individual, 0 if it likely describes an organization
  - `topic`: one of fifty labels obtained via tf-idf clustering of "about" pages

Note that the entries in each file are not in random order; they reflect the ordering output by CCNet (e.g., neighboring pages may have similar Wikipedia-based perplexity).
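The layout above can be read with a short helper. The following is a minimal sketch that assumes the files are gzip-compressed JSON lines and joins pages to their `about_pages_meta` records by `hostname`; the exact compression format and filenames on the hub may differ:

```python
import gzip
import json

def read_jsonl_gz(path):
    """Yield one record per non-empty line of a gzip-compressed JSON-lines file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def join_pages_with_meta(pages, meta_records):
    """Attach each hostname's metadata record (or None) to its page dict."""
    meta_by_host = {m["hostname"]: m for m in meta_records}
    for page in pages:
        yield {**page, "meta": meta_by_host.get(page["hostname"])}
```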

## Dataset creation

AboutMe is derived from twenty-four snapshots of Common Crawl collected between 2020-05 and 2023-06. We extract text from raw Common Crawl using CCNet and deduplicate URLs across all snapshots. We keep only text with a fastText English score above 0.5. "About" pages are identified using keywords in URLs (about, about-me, about-us, and bio), where the URL ends in /keyword/ or keyword.*, e.g. about.html. We only include websites with exactly one candidate URL, to avoid ambiguity about which page is actually about the main website creator. If a webpage has both https and http versions in Common Crawl, we take the https version. Each "sampled" page is a single webpage randomly sampled from a website that has an "about" page.
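The URL heuristic described above can be sketched as follows. This is an illustrative reconstruction of the keyword rule, not the authors' exact implementation:

```python
from urllib.parse import urlparse

# Keywords named in the dataset creation description above.
ABOUT_KEYWORDS = {"about", "about-me", "about-us", "bio"}

def is_about_url(url):
    """True if the URL path ends in /keyword/ or keyword.*, e.g. about.html."""
    path = urlparse(url).path.lower().rstrip("/")
    last_segment = path.rsplit("/", 1)[-1]
    stem = last_segment.split(".", 1)[0]  # strip any extension, e.g. ".html"
    return stem in ABOUT_KEYWORDS
```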

More details on metadata creation can be found in our paper, linked above.

## Bias, Risks, and Limitations

[TBD]

## Citation

[TBD]

## Dataset contact

[email protected]