## Dataset Description

* **Repository**: [https://github.com/ken-ando/WikiSQE](https://github.com/ken-ando/WikiSQE)
* **Paper**: [https://arxiv.org/abs/2305.05928](https://arxiv.org/abs/2305.05928)

### Dataset Summary

[WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia](https://arxiv.org/abs/2305.05928) by [Kenichiro Ando](https://ken-ando.github.io/kenichiro_ando/index.html), Satoshi Sekine and Mamoru Komachi (AAAI 2024).

WikiSQE is an English‑language dataset containing **over 3.4 million sentences** extracted from the complete edit history of English Wikipedia. Every sentence in the corpus was judged by Wikipedia editors to have **some quality issue**, and the type of issue is annotated with one of **153 fine‑grained quality labels**.

The train/validation/test split used in the original paper is hosted separately at [https://huggingface.co/datasets/ando55/WikiSQE\_experiment](https://huggingface.co/datasets/ando55/WikiSQE_experiment).

A complete list of labels:

```
["a fact or an opinion", "according to whom", "additional citation needed", "ambiguous", "anachronism", "as of", "attribution needed", "author missing", "bare url", "better source needed", "broken citation", "broken footnote", "buzzword", "by how much", "by whom", "check issn", "check quotation syntax", "chronology citation needed", "circular definition", "circular reference", "citation needed", "citation not found", "clarification needed", "colloquialism", "compared to", "conflicted source", "contentious label", "context needed", "contradictory", "coordinates", "copyright violation", "date mismatch", "date missing", "dead link", "definition needed", "disambiguation needed", "discuss", "disputed", "dubious", "editorializing", "emphasis added", "episode needed", "example needed", "example's importance", "excessive citations", "excessive detail", "expand acronym", "failed verification", "from whom", "full citation needed", "further explanation needed", "generally unreliable", "globalize", "how", "how often", "image reference needed", "importance", "improper synthesis", "incomplete short citation", "incomprehensible", "inconsistent", "infringing link", "irrelevant citation", "jargon", "like whom", "link currently leads to a wrong person", "list entry too long", "marketing material", "may be outdated", "medical citation needed", "more detail needed", "need quotation on talk to verify", "need quotation to verify", "needs copy edit", "needs ipa", "needs update", "neologism", "neutrality disputed", "non sequitur", "non-primary source needed", "non-tertiary source needed", "not in citation given", "not specific enough to verify", "not verified in body", "old info", "opinion", "original research", "over-explained", "page needed", "page range too broad", "page will play audio when loaded", "password-protected", "peacock term", "predatory publisher", "promotion", "pronunciation", "qualify evidence", "quantify", "registration required", "relevant", "relevant to this paragraph", "relevant to this section", "repetition", "romanization needed", "says who", "scientific citation needed", "self-published source", "sentence fragment", "sia disambiguation needed", "sic", "spam link", "specify", "speculation", "spelling", "stress", "subscription may be required or content may be available in libraries", "subscription or uk public library membership required", "subscription required", "template problem", "text-source integrity", "third-party source needed", "this quote needs a citation", "this tertiary source reuses information from other sources but does not name them", "timeframe", "to be determined", "to whom", "tone", "unbalanced opinion", "under discussion", "unreliable fringe source", "unreliable medical source", "unreliable scientific source", "unreliable source", "until when", "user-generated source", "vague", "verification needed", "verify", "volume & issue needed", "weasel words", "when", "when defined as", "where", "which", "which calendar", "who", "who said this", "whose", "whose translation", "why", "with whom", "year missing", "year needed"]
```

---

## Label Details and Statistics

Detailed frequency statistics, per‑label examples, and data‑collection scripts are available in the project repository: [https://github.com/ken-ando/WikiSQE](https://github.com/ken-ando/WikiSQE).

---

## How to Download & Use the Dataset

### 1 — Quick load with the 🤗 `datasets` library

Use this route when you want to start analysing or training right away without storing the whole snapshot on disk.

```bash
# Install (if you haven't already)
pip install --upgrade datasets huggingface_hub
```
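
As a minimal quick‑load sketch, you can then stream the sentences directly with `datasets.load_dataset`. This assumes the default configuration of `ando55/WikiSQE` resolves to the parquet files; depending on the repository layout you may need to pass a config name or `data_files`.

```python
from datasets import load_dataset

# Stream the corpus without materialising it on disk (assumption: the default
# configuration resolves; pass a config name or data_files= if it does not)
wikisqe = load_dataset("ando55/WikiSQE", split="train", streaming=True)

# Peek at the first few sentences
for i, example in enumerate(wikisqe):
    print(example["text"])
    if i == 2:
        break
```

If you would rather keep a full local copy of every parquet file, download the snapshot instead: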

```python
from huggingface_hub import snapshot_download

# Download the full dataset as parquet
repo_dir = snapshot_download(
    repo_id="ando55/WikiSQE",
    repo_type="dataset",
    local_dir="WikiSQE_parquet",
    local_dir_use_symlinks=False,
)
print("Saved at:", repo_dir)
```
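
To check what was fetched, you can list the parquet files in the snapshot; the exact paths depend on how the repository organises its files, so treat this as illustrative:

```python
import pathlib

# Show the first few parquet files inside the downloaded snapshot
for p in sorted(pathlib.Path("WikiSQE_parquet").rglob("*.parquet"))[:5]:
    print(p)
```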

**Tips**

* All splits are named `"train"` because the Wikipedia dump has no natural train/dev/test split.
  Re‑use the official split ([https://huggingface.co/datasets/ando55/WikiSQE\_experiment](https://huggingface.co/datasets/ando55/WikiSQE_experiment)) or create your own (see the sketch below).

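A minimal sketch of creating your own split with `Dataset.train_test_split`; the 90/10 ratio and the seed are arbitrary illustrative choices, and the same config‑name caveat as above applies:

```python
from datasets import load_dataset

# Load one non-streaming copy so it can be shuffled and split
full = load_dataset("ando55/WikiSQE", split="train")

# Arbitrary 90/10 split with a fixed seed for reproducibility
splits = full.train_test_split(test_size=0.1, seed=42)
train_set, test_set = splits["train"], splits["test"]
print(len(train_set), len(test_set))
```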

### 2 — Convert parquet to CSV

```python
import pyarrow.dataset as ds, pyarrow.csv as pv, pyarrow as pa, pathlib

src = pathlib.Path("WikiSQE_parquet")
dst = pathlib.Path("WikiSQE_csv"); dst.mkdir(exist_ok=True)

for pq in src.rglob("*.parquet"):
    # The parent folder name is used as the label / output file name
    label = pq.parent.name
    out = dst / f"{label}.csv"
    # Write the CSV header only the first time this label file is created
    first = not out.exists()
    dset = ds.dataset(str(pq))
    # Append ("ab") so several parquet files for the same label end up in one CSV
    with out.open("ab") as f, pv.CSVWriter(f, dset.schema,
            write_options=pv.WriteOptions(include_header=first)) as w:
        for batch in dset.to_batches():
            w.write_table(pa.Table.from_batches([batch]))
```
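
As a quick sanity check, you can read one of the converted files back and count its rows; `citation needed.csv` is only an example file name, so substitute any file produced by the conversion step:

```python
import pyarrow.csv as pv

# Example file name only; pick any CSV created in WikiSQE_csv/
table = pv.read_csv("WikiSQE_csv/citation needed.csv")
print(table.num_rows, table.column_names)
```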

---

## Data Fields

| Field  | Type     | Description                                                                                         |
| ------ | -------- | --------------------------------------------------------------------------------------------------- |
| `text` | *string* | Raw sentence from a Wikipedia revision that Wikipedia editors judged to contain a quality problem.   |

The quality label is not stored as a column; each sentence's label is given by the per‑label file it belongs to (see the conversion snippet above).

## Citation

If you use the dataset, please cite us:

```bibtex
@inproceedings{ando-etal-2024-wikisqe,
    title     = {{WikiSQE}: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia},
    author    = {Ando, Kenichiro and Sekine, Satoshi and Komachi, Mamoru},
    booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
    year      = {2024},
    volume    = {38},
    number    = {16},
    pages     = {17656--17663},
    address   = {Vancouver, Canada},
    publisher = {Association for the Advancement of Artificial Intelligence},
}
```