Update README.md
### Datasets Summary
Misinformation is a challenging societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this problem, we have surveyed (mis)information datasets in the literature, collected those that are accessible, and made them available here in a unified repository. We also harmonized all original factuality labels into a single variable named veracity, which includes three categories: true, false, and unknown. We further analyzed the quality of these datasets with the results presented in the table below. This is a live repository, and we add more datasets as they become available. If you would like to contribute a novel dataset or report any issues, please email us or visit our GitHub.
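
For example, the unified data can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the `train`/`test`/`validation` splits described under Data pre-processing below and that `load_dataset` resolves the hosted Parquet files directly:

```python
# Minimal sketch: load the unified repository and inspect the harmonized
# veracity labels. The exact label strings ("True", "False", "unknown", "na")
# follow the pre-processing notes below and are otherwise assumptions.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("ComplexDataLab/Misinfo_Datasets")
print(ds)                                # available splits and record counts
print(Counter(ds["train"]["veracity"]))  # distribution of veracity labels
```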
Currently, the repository contains 36 claims datasets and 9 paragraph datasets, listed in the table below. Three of the columns report quality checks: the keyword and temporal columns flag spurious correlations in the data, and the feasibility column indicates whether the majority of data points contain enough information to be verified in the first place. Please refer to our [paper](https://arxiv.org/abs/2411.05060) for further details.

*(Figure: summary table of the claims and paragraph datasets with their keyword, temporal, and feasibility quality checks.)*

We also present the subject counts and the language coverage of these datasets.

*(Figure: subject counts across the datasets.)*

*(Figure: language coverage across the datasets.)*
### Note for Users
The large number of NA values is expected. Because we merged multiple datasets with different variables, a record carries missing values (NA) for every variable that is not relevant to its source dataset. Variables referring to the same type of information but labelled differently (e.g. claim, text, tweet) were standardized under a common name (e.g. claim). In contrast, variables unique to a specific dataset retained their original names and therefore appear as missing (NA) in records from all other datasets.
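
A minimal sketch of one way to handle this, assuming pandas, a locally downloaded `train.parquet`, and a hypothetical source-dataset name; note that after the pre-processing described below, missing values are stored as the string "na" rather than as true nulls:

```python
# Hedged sketch: keep only the columns that one source dataset actually uses.
# "train.parquet" and the source name "some_dataset" are illustrative.
import pandas as pd

df = pd.read_parquet("train.parquet")
subset = df[df["dataset"] == "some_dataset"]

# Missing values are stored as the string "na", not as nulls, so drop the
# columns that are "na" for every record in this subset.
all_na = (subset == "na").all(axis=0)
subset = subset.loc[:, ~all_na]
```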
As a result of merging datasets with different codings, some variables may contain equivalent observations expressed in different forms. For instance, the country variable may include values such as US, usa, and United States. Further data cleaning is recommended for non-standardized variables.
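
As an illustration, this extra cleaning for a variable like country might look as follows; the mapping is an example, not an exhaustive list of the values present in the data:

```python
# Illustrative sketch: collapse equivalent country codings into one canonical
# value. "train.parquet" and the mapping below are examples only.
import pandas as pd

df = pd.read_parquet("train.parquet")
country_map = {"US": "United States", "usa": "United States"}
df["country"] = df["country"].replace(country_map)
```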
### Data pre-processing
[These scripts](https://github.com/ComplexData-MILA/misinfo-dataset-preprocessing) transform [the original CSV file](https://huggingface.co/datasets/ComplexDataLab/Misinfo_Datasets/blob/main/claims_data.csv.gz) into the Parquet files.

The pre-processing can be summarized in the following steps (a condensed sketch follows the list):

1. Mapping the veracity codes from numeric/null values to string labels: 1 --> True, 2 --> False, 3 --> unknown, null --> na.
2. Reordering the columns so that the "veracity" column is 2nd and the "dataset" column is 3rd; no other columns are moved.
3. Converting all remaining "null" values to "na" and casting all data to string type.
4. Rebalancing the "split" column so that records are evenly distributed across splits.
5. Converting from CSV format to Parquet format.
6. Splitting the Parquet file into three splits (train, test, validation).
7. Splitting the train, test, and validation files into per-dataset versions.
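
A condensed sketch of these steps under stated assumptions: the column names ("veracity", "dataset", "split") come from the notes above, the file names and numeric veracity codes are illustrative rather than the scripts' actual values, and step 4 (rebalancing the "split" column) is omitted because its exact scheme is not specified here:

```python
# Condensed, hedged sketch of the pre-processing pipeline (steps 1-3 and 5-7).
import pandas as pd

df = pd.read_csv("claims_data.csv.gz")  # the original CSV file

# 1. Map numeric veracity codes to string labels (codes are illustrative);
#    unmapped/null values fall through to "na" in step 3 below.
df["veracity"] = df["veracity"].map({1: "True", 2: "False", 3: "unknown"})

# 2. Move "veracity" to the 2nd and "dataset" to the 3rd position,
#    leaving the relative order of all other columns unchanged.
others = [c for c in df.columns if c not in ("veracity", "dataset")]
df = df[[others[0], "veracity", "dataset"] + others[1:]]

# 3. Replace remaining nulls with "na" and cast all data to string type.
df = df.fillna("na").astype(str)

# 5-6. Convert to Parquet, writing one file per split.
for split in ("train", "test", "validation"):
    df[df["split"] == split].to_parquet(f"{split}.parquet", index=False)

# 7. Further split each file into per-dataset versions.
for (split, name), part in df.groupby(["split", "dataset"]):
    part.to_parquet(f"{split}_{name}.parquet", index=False)
```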
### For More Information
C. Thibault et al., "A Guide to Misinformation Detection Data and Evaluation", arXiv [cs.SI], 2025. [Link here](https://arxiv.org/abs/2411.05060v1)