---
license: cc-by-4.0
task_categories:
- text-classification
- zero-shot-classification
task_ids:
- multi-label-classification
language:
- en
tags:
- Human Values
- Value Detection
- Multi-Label
pretty_name: Human Value Detection Dataset
size_categories:
- 1K<n<10K
---
# The Touché23-ValueEval Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Usage](#dataset-usage)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Argument Instances](#argument-instances)
- [Metadata Instances](#metadata-instances)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://webis.de/data/touche23-valueeval.html](https://webis.de/data/touche23-valueeval.html)
- **Repository:** [Zenodo](https://doi.org/10.5281/zenodo.6814563)
- **Paper:** [The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments.](https://webis.de/downloads/publications/papers/mirzakhmedova_2023a.pdf)
- **Leaderboard:** [https://touche.webis.de/](https://touche.webis.de/semeval23/touche23-web/index.html#results)
- **Point of Contact:** [Webis Group](https://webis.de/people.html)
### Dataset Summary
The Touché23-ValueEval Dataset comprises 9324 arguments from six different sources. An argument's source is indicated by the first letter of its `Argument ID`:
- `A`: [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `C`: The Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `D`: [Group Discussion Ideas (GD IDEAS)](https://www.groupdiscussionideas.com)
- `E`: [The Conference for the Future of Europe](https://futureu.europa.eu)
- `F`: Contribution by the language.ml lab (Doratossadat, Omid, Mohammad, Ehsaneddin) [1]:
  arguments from the "Nahj al-Balagha" [2] and "Ghurar al-Hikam wa Durar al-Kalim" [3]
- `G`: [The New York Times](https://www.nytimes.com)
The annotated labels are based on the value taxonomy published in
[Identifying the Human Values behind Arguments](https://webis.de/publications.html#kiesel_2022b) (Kiesel et al. 2022) at ACL'22.
[1] https://language.ml
[2] https://en.wikipedia.org/wiki/Nahj_al-Balagha
[3] https://en.wikipedia.org/wiki/Ghurar_al-Hikam_wa_Durar_al-Kalim
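Since the source is encoded in the first letter of the `Argument ID`, it can be recovered with a simple lookup. A minimal sketch (the `source_of` helper and the example ID are hypothetical, not part of the dataset's API):

```python
# Hypothetical helper: map the first letter of an Argument ID to its source.
SOURCES = {
    "A": "IBM-ArgQ-Rank-30kArgs",
    "C": "Zhihu",
    "D": "Group Discussion Ideas (GD IDEAS)",
    "E": "The Conference for the Future of Europe",
    "F": "language.ml lab (Nahj al-Balagha)",
    "G": "The New York Times",
}

def source_of(argument_id: str) -> str:
    """Return the source name encoded in the ID's first letter."""
    return SOURCES[argument_id[0]]

print(source_of("A01002"))  # IBM-ArgQ-Rank-30kArgs
```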
### Dataset Usage
The default configuration name is `main`.
```python
from datasets import load_dataset
dataset = load_dataset("webis/Touche23-ValueEval")
print(dataset['train'].info.description)
for argument in dataset['train']:
    print(f"{argument['Argument ID']}: {argument['Stance']} '{argument['Conclusion']}': {argument['Premise']}")
```
### Supported Tasks and Leaderboards
Human Value Detection
### Languages
The [Argument Instances](#argument-instances) are monolingual: all arguments are in English (mostly en-US).
The [Metadata Instances](#metadata-instances) for some dataset parts additionally state the arguments in their original language and phrasing.
## Dataset Structure
### Argument Instances
Each argument instance has the following attributes:
- `Argument ID`: The unique identifier for the argument within the dataset
- `Conclusion`: Conclusion text of the argument
- `Stance`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise`: Premise text of the argument
- `Labels`: An array of 1s (argument resorts to the value) and 0s (argument does not resort to the value), in the same order as in the original files
Additionally, the labels are separated into *value-categories* (the level 2 labels of the value taxonomy; Kiesel et al. 2022) and *human values* (the level 1 labels of the value taxonomy).
This distinction is also reflected in the configuration names:
- `<config>`: As the [Task](https://touche.webis.de/semeval23/touche23-web/) is focused mainly on the detection of value-categories,
each base configuration ([listed below](#p-list-base-configs)) has the 20 value-categories as labels:
```python
labels = ["Self-direction: thought", "Self-direction: action", "Stimulation", "Hedonism", "Achievement", "Power: dominance", "Power: resources", "Face", "Security: personal", "Security: societal", "Tradition", "Conformity: rules", "Conformity: interpersonal", "Humility", "Benevolence: caring", "Benevolence: dependability", "Universalism: concern", "Universalism: nature", "Universalism: tolerance", "Universalism: objectivity"]
```
- `<config>-level1`: The 54 human values from level 1 of the value taxonomy are not used for the 2023 task
  (except during annotation), but are listed here as they may help in understanding the value
  categories. Their order is the same as in the original files. For more details, see the [value-categories](#metadata-instances) configuration.
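A multi-hot `Labels` row can be decoded back into value-category names with a small helper. A sketch using the `labels` list above (the `decode_labels` helper and the example row are illustrative, not part of the dataset's API):

```python
# Decode a multi-hot label vector into value-category names (illustrative sketch).
labels = ["Self-direction: thought", "Self-direction: action", "Stimulation",
          "Hedonism", "Achievement", "Power: dominance", "Power: resources",
          "Face", "Security: personal", "Security: societal", "Tradition",
          "Conformity: rules", "Conformity: interpersonal", "Humility",
          "Benevolence: caring", "Benevolence: dependability",
          "Universalism: concern", "Universalism: nature",
          "Universalism: tolerance", "Universalism: objectivity"]

def decode_labels(row, names=labels):
    """Return the names at positions where the multi-hot row is 1."""
    return [name for name, flag in zip(names, row) if flag == 1]

example_row = [0] * 20
example_row[4] = 1   # Achievement
example_row[9] = 1   # Security: societal
print(decode_labels(example_row))  # ['Achievement', 'Security: societal']
```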
<p id="p-list-base-configs">The configuration names (as replacements for <code><config></code>) in this dataset are:</p>
- `main`: 8865 arguments (sources: `A`, `D`, `E`) with splits `train`, `validation`, and `test` (default configuration name)
```python
dataset_main_train = load_dataset("webis/Touche23-ValueEval", split="train")
dataset_main_validation = load_dataset("webis/Touche23-ValueEval", split="validation")
dataset_main_test = load_dataset("webis/Touche23-ValueEval", split="test")
```
- `nahjalbalagha`: 279 arguments (source: `F`) with split `test`
```python
dataset_nahjalbalagha_test = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha", split="test")
```
- `nyt`: 80 arguments (source: `G`) with split `test`
```python
dataset_nyt_test = load_dataset("webis/Touche23-ValueEval", name="nyt", split="test")
```
- `zhihu`: 100 arguments (source: `C`) with split `validation`
```python
dataset_zhihu_validation = load_dataset("webis/Touche23-ValueEval", name="zhihu", split="validation")
```
Please note that, for copyright reasons, there is currently no direct download link to the arguments from The New
York Times. Accessing the `nyt` or `nyt-level1` configurations will therefore use the specifically created
[nyt-downloader program](https://github.com/touche-webis-de/touche-code/tree/main/semeval23/human-value-detection/nyt-downloader)
to create and access the arguments locally. See the program's
[README](https://github.com/touche-webis-de/touche-code/blob/main/semeval23/human-value-detection/nyt-downloader/README.md)
for further details.
### Metadata Instances
The following lists all configuration names for metadata. Each configuration only has a single split named `meta`.
- `ibm-meta`: Each row corresponds to one argument (IDs starting with `A`) from the [IBM-ArgQ-Rank-30kArgs](https://research.ibm.com/haifa/dept/vst/debating_data.shtml#Argument%20Quality)
- `Argument ID`: The unique identifier for the argument
- `WA`: the quality label according to the weighted-average scoring function
- `MACE-P`: the quality label according to the MACE-P scoring function
- `stance_WA`: the stance label according to the weighted-average scoring function
- `stance_WA_conf`: the confidence in the stance label according to the weighted-average scoring function
```python
dataset_ibm_metadata = load_dataset("webis/Touche23-ValueEval", name="ibm-meta", split="meta")
```
- `zhihu-meta`: Each row corresponds to one argument (IDs starting with `C`) from the Chinese question-answering website [Zhihu](https://www.zhihu.com)
- `Argument ID`: The unique identifier for the argument
  - `Conclusion Chinese`: The original Chinese conclusion statement
  - `Premise Chinese`: The original Chinese premise statement
- `URL`: Link to the original statement the argument was taken from
```python
dataset_zhihu_metadata = load_dataset("webis/Touche23-ValueEval", name="zhihu-meta", split="meta")
```
- `gdi-meta`: Each row corresponds to one argument (IDs starting with `D`) from [GD IDEAS](https://www.groupdiscussionideas.com/)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the topic the argument was taken from
```python
dataset_gdi_metadata = load_dataset("webis/Touche23-ValueEval", name="gdi-meta", split="meta")
```
- `cofe-meta`: Each row corresponds to one argument (IDs starting with `E`) from [the Conference for the Future of Europe](https://futureu.europa.eu)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the comment the argument was taken from
```python
dataset_cofe_metadata = load_dataset("webis/Touche23-ValueEval", name="cofe-meta", split="meta")
```
- `nahjalbalagha-meta`: Each row corresponds to one argument (IDs starting with `F`). This file contains information on the 279 arguments in `nahjalbalagha` (or `nahjalbalagha-level1`)
  and 1047 additional arguments that have not been labeled so far. This data was contributed by the language.ml lab.
- `Argument ID`: The unique identifier for the argument
- `Conclusion Farsi`: Conclusion text of the argument in Farsi
- `Stance Farsi`: Stance of the `Premise` towards the `Conclusion`, in Farsi
- `Premise Farsi`: Premise text of the argument in Farsi
- `Conclusion English`: Conclusion text of the argument in English (translated from Farsi)
- `Stance English`: Stance of the `Premise` towards the `Conclusion`; one of "in favor of", "against"
- `Premise English`: Premise text of the argument in English (translated from Farsi)
  - `Source`: Source text of the argument; one of "Nahj al-Balagha", "Ghurar al-Hikam wa Durar al-Kalim"; their Farsi translations were used
  - `Method`: How the premise was extracted from the source; one of "extracted" (directly taken), "deduced"; the conclusions are always deduced
```python
dataset_nahjalbalagha_metadata = load_dataset("webis/Touche23-ValueEval", name="nahjalbalagha-meta", split="meta")
```
- `nyt-meta`: Each row corresponds to one argument (IDs starting with `G`) from [The New York Times](https://www.nytimes.com)
- `Argument ID`: The unique identifier for the argument
- `URL`: Link to the article the argument was taken from
- `Internet Archive timestamp`: Timestamp of the article's version in the Internet Archive that was used
```python
dataset_nyt_metadata = load_dataset("webis/Touche23-ValueEval", name="nyt-meta", split="meta")
```
- `value-categories`: Contains a single JSON entry with the structure of the level 2 and level 1 values of the value taxonomy:
```
{
"<value category>": {
"<level 1 value>": [
"<exemplary effect a corresponding argument might target>",
...
], ...
}, ...
}
```
As this configuration contains just a single entry, an example usage could be:
```python
value_categories = load_dataset("webis/Touche23-ValueEval", name="value-categories", split="meta")[0]
```
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@Article{mirzakhmedova:2023a,
  author =    {Nailia Mirzakhmedova and Johannes Kiesel and Milad Alshomary and Maximilian Heinrich and Nicolas Handke
               and Xiaoni Cai and Valentin Barriere and Doratossadat Dastgheib and Omid Ghahroodi and {Mohammad Ali} Sadraei
               and Ehsaneddin Asgari and Lea Kawaletz and Henning Wachsmuth and Benno Stein},
doi = {10.48550/arXiv.2301.13771},
journal = {CoRR},
month = jan,
publisher = {arXiv},
title = {{The Touch{\'e}23-ValueEval Dataset for Identifying Human Values behind Arguments}},
volume = {abs/2301.13771},
year = 2023
}
``` |