mrovera committed · Commit a3d0e94 · verified · 1 Parent(s): 57fe28d

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -19,7 +19,7 @@ ModaFact is a textual dataset annotated with Event Factuality and Modality in It

 ### Textual data source

- Original texts (sentences) have been sampled from EventNet-ITA, a dataset for Frame Parsing, which was created using texts from Wikipedia.
+ Original texts (sentences) have been sampled from [EventNet-ITA](https://huggingface.co/mrovera/eventnet-ita), a dataset for Frame Parsing, consisting of annotated sentences from Wikipedia.

 ### Statistics

@@ -40,7 +40,7 @@ ModaFact has been originally annotated at token level, adopting the IOB2 style.
 Whereas there is a single schema for Modality, for Factuality we provide two representations: a fine-grained representation (FG), which specifies values over three axes (CERTAINTY, POLARITY, TIME), and a coarse-grained representation (CG), which only provides the final factuality value.


- Example of **fine-grained representation (CG)**:
+ Example of **fine-grained representation (FG)**:
 ```
 Per O
 chiarire B-POSSIBLE-POS-FUTURE-FINAL
@@ -120,7 +120,7 @@ Modality:
 According to the experimental setup presented in the paper (see below, Citation Information) we provide different data formats:
 - **token-level BIO sequence labelling**: the dataset is formatted as a two-column `tsv`. The first column contains the token, the second column contains all corresponding labels (factuality and modality), concatenated with `-`. This format makes the dataset ready to train with the MaChAmp [seq_bio](https://github.com/machamp-nlp/machamp/blob/master/docs/seq_bio.md) task type.
 - **token-level multi-task sequence labelling**: the dataset is formatted as a three-column `tsv`. The first column contains the token, the second column contains all factuality labels, and the third column contains the modality label. This format makes the dataset ready to train with the MaChAmp seq_bio **multitask** setting.
- - **generative and sequence-to-sequence**: the dataset is formatted as a `jsonl` file containing a list of dictionaries. Each dictionary has an *Input* field (the sentence) and an *Output* field, a string composed of *token=labels* pairs. This format makes the dataset ready to train with sequence-to-sequence and causal/generative models.
+ - **generative and sequence-to-sequence**: the dataset is formatted as a `jsonl` file containing a list of dictionaries. Each dictionary has an *Input* field (the sentence) and an *Output* field, a string composed of *token=labels* pairs, separated by `|`. This format makes the dataset ready to train with sequence-to-sequence and causal/generative models.

 ### Data Split

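As a side note on the fine-grained (FG) tags shown in the second hunk: each tag joins the IOB2 prefix, the three factuality axes (CERTAINTY, POLARITY, TIME) and a modality value with `-`. The sketch below decomposes such a tag; the axis order is inferred from the example tag `B-POSSIBLE-POS-FUTURE-FINAL` and should be treated as an assumption, not something the README states explicitly.

```python
# Hedged sketch: split a combined IOB2 tag such as "B-POSSIBLE-POS-FUTURE-FINAL"
# into its parts. The order CERTAINTY-POLARITY-TIME-MODALITY is an assumption
# inferred from the example in the diff; tags with a different number of fields
# are returned with the raw string for inspection.
def split_fg_tag(tag: str):
    if tag == "O":  # token outside any annotated span
        return None
    parts = tag.split("-")
    if len(parts) != 5:  # unexpected shape: keep the raw tag
        return {"bio": parts[0], "raw": tag}
    bio, certainty, polarity, time, modality = parts
    return {
        "bio": bio,              # B or I (IOB2 prefix)
        "certainty": certainty,  # e.g. POSSIBLE
        "polarity": polarity,    # e.g. POS
        "time": time,            # e.g. FUTURE
        "modality": modality,    # e.g. FINAL
    }

print(split_fg_tag("B-POSSIBLE-POS-FUTURE-FINAL"))
```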
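For the generative / sequence-to-sequence format touched by the last hunk, a minimal reading sketch follows. The *Input*/*Output* field names, the *token=labels* pairs and the `|` separator come from the README; the file name `train.jsonl` and the whitespace handling around the separator are assumptions.

```python
# Hedged sketch: load the seq2seq JSONL split and recover (token, labels) pairs
# from each Output string, which the README describes as "token=labels" pairs
# separated by "|". "train.jsonl" is a hypothetical file name for illustration.
import json

def parse_output(output_str: str):
    pairs = []
    for chunk in output_str.split("|"):
        token, _, labels = chunk.strip().partition("=")
        pairs.append((token, labels))
    return pairs

with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        sentence = example["Input"]                      # the original sentence
        token_labels = parse_output(example["Output"])   # [(token, labels), ...]
        # use (sentence, token_labels) to build model inputs/targets
```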