Tasks: Text Classification
Languages: English
Size: 10K<n<100K
Tags: fake-news-detection
Sasha Luccioni committed
Commit: 07a31b8
Parent(s): c1068b3
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment (#4336)
* Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, PiQA, Poem Sentiment, QAsper

* Update README.md
  fixing header

* Update datasets/piqa/README.md
  Co-authored-by: Quentin Lhoest <[email protected]>

* Update README.md
  changing MSRA NER metric to `seqeval`

* Update README.md
  removing ROUGE args

* Update README.md
  removing duplicate information

* Update README.md
  removing eval for now

* Update README.md
  removing eval for now
Co-authored-by: sashavor <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Commit from https://github.com/huggingface/datasets/commit/095d12ff7414df118f60e00cd6494299a881743a
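The change below adds a `train-eval-index` block to the YAML front-matter of the LIAR dataset card. As a minimal sketch of how that block can be read back from the Hub, assuming the `huggingface_hub` library (which is not part of this commit; the `train-eval-index` key name comes from the diff below):

```python
# Sketch: read the eval metadata back from the LIAR dataset card.
# Assumes `huggingface_hub` is installed and that the card exposes the
# block under the "train-eval-index" key, as in the diff below.
from huggingface_hub import DatasetCard

card = DatasetCard.load("liar")                      # fetches README.md from the Hub
meta = card.data.to_dict().get("train-eval-index")   # list of eval configs, or None

if meta:
    cfg = meta[0]
    print(cfg["task"])                     # e.g. text-classification
    print(cfg["col_mapping"])              # e.g. {'statement': 'text', 'label': 'target'}
    print([m["type"] for m in cfg["metrics"]])
```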
README.md CHANGED
@@ -19,6 +19,55 @@ task_ids:
 - text-classification-other-fake-news-detection
 paperswithcode_id: liar
 pretty_name: LIAR
+train-eval-index:
+- config: default
+  task: text-classification
+  task_id: multi_class_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    statement: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 macro
+    args:
+      average: macro
+  - type: f1
+    name: F1 micro
+    args:
+      average: micro
+  - type: f1
+    name: F1 weighted
+    args:
+      average: weighted
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
 ---

 # Dataset Card for [Dataset Name]
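The `col_mapping` and metric list above translate directly into an evaluation loop. A minimal sketch, assuming the `datasets` and `scikit-learn` libraries (neither is specified by this commit) and using a majority-class baseline in place of a real model; recent `datasets` releases may require `trust_remote_code=True` to load this dataset:

```python
# Sketch: evaluate a trivial baseline on LIAR with the metrics declared in
# train-eval-index. `datasets` and scikit-learn are assumptions for illustration.
from collections import Counter

from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

ds = load_dataset("liar")  # may need trust_remote_code=True on recent versions

# Apply the col_mapping from the metadata: statement -> text, label -> target.
train = ds["train"].rename_columns({"statement": "text", "label": "target"})
test = ds["test"].rename_columns({"statement": "text", "label": "target"})

# Majority-class baseline: always predict the most frequent training label.
majority = Counter(train["target"]).most_common(1)[0][0]
y_true = test["target"]
y_pred = [majority] * len(y_true)

print("accuracy:", accuracy_score(y_true, y_pred))
for avg in ("macro", "micro", "weighted"):
    print(f"f1_{avg}:", f1_score(y_true, y_pred, average=avg))
    print(f"precision_{avg}:", precision_score(y_true, y_pred, average=avg, zero_division=0))
    print(f"recall_{avg}:", recall_score(y_true, y_pred, average=avg, zero_division=0))
```

The three averaging modes (macro, micro, weighted) mirror the `args: average:` entries declared for the f1, precision, and recall metrics in the metadata block.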