Sasha Luccioni committed
Commit e298cb5 · 1 Parent(s): c7e84e4

Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full (#4338)


* Eval metadata for Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full

* Update README.md

oops, duplicate

* Update README.md

adding header

* Update README.md

adding header

* Update datasets/xsum/README.md

Co-authored-by: Quentin Lhoest <[email protected]>

* Update README.md

removing ROUGE args

* Update README.md

removing F1 params

* Update README.md

* Update README.md

updating to binary F1

* Update README.md

oops typo

* Update README.md

F1 binary

Co-authored-by: sashavor <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>

Commit from https://github.com/huggingface/datasets/commit/bce8e6541af98cb0e06b24f9c96a6431cf813a38
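
Several of the commit notes above ("removing F1 params", "updating to binary F1", "F1 binary") refer to switching the metric for the binary-classification datasets in this batch (e.g. Yelp Polarity, Tweets Hate Speech Detection) to binary-averaged F1. A minimal sketch of what that metric computes, using the `evaluate` library; the label values below are illustrative, not drawn from any of these datasets.

```python
# Sketch of the "binary F1" the commit messages refer to, via `evaluate`.
import evaluate

f1 = evaluate.load("f1")

predictions = [1, 0, 1, 1, 0]  # illustrative labels, not real dataset output
references = [1, 0, 0, 1, 1]

# average="binary" scores only the positive class, which is what
# "updating to binary F1" switches these dataset cards to.
result = f1.compute(predictions=predictions, references=references, average="binary")
print(result)  # {'f1': 0.666...}
```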

Files changed (1):
  1. README.md (+14 −1)

README.md
@@ -19,9 +19,22 @@ task_ids:
 - named-entity-recognition
 paperswithcode_id: weibo-ner
 pretty_name: Weibo NER
+train-eval-index:
+- config: default
+  task: token-classification
+  task_id: entity_extraction
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    tokens: tokens
+    ner_tags: tags
+  metrics:
+    - type: seqeval
+      name: seqeval
 ---
 
-# Dataset Card Creation Guide
+# Dataset Card for "Weibo NER"
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
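
The new train-eval-index block describes how an automated evaluation should wire this dataset to a token-classification model: inputs come from the `tokens` column, labels from `ner_tags` (exposed as `tags`), training uses the `train` split, evaluation the `test` split, and scoring is done with seqeval. A minimal sketch of that wiring, assuming the Hub id `weibo_ner` and that `ner_tags` is a sequence of class labels (typical for NER datasets on the Hub); the identity "model" is a placeholder.

```python
# Sketch of the evaluation loop the train-eval-index metadata describes:
# token-classification on the `test` split, scored with seqeval.
from datasets import load_dataset
import evaluate

ds = load_dataset("weibo_ner", split="test")        # assumed Hub id
label_names = ds.features["ner_tags"].feature.names  # assumes Sequence(ClassLabel)

seqeval = evaluate.load("seqeval")

# References: integer tag ids mapped back to their string labels,
# one list of tags per sentence, mirroring the ner_tags -> tags col_mapping.
references = [[label_names[i] for i in ex["ner_tags"]] for ex in ds]

# A real evaluation would put per-token model predictions here; echoing
# the references just exercises the metric plumbing end to end.
predictions = references

print(seqeval.compute(predictions=predictions, references=references))
# e.g. {'overall_precision': 1.0, 'overall_recall': 1.0, 'overall_f1': 1.0, ...}
```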