Update README.md

README.md (changed)
It is split into train, dev, and test sets with the following information:
2. Dev set: 1,106 comments
3. Test set: 1,106 comments
## Data Structure

Here is our data folder structure!

```
.
└── data/
    ├── train_sequence_labeling/
    │   ├── syllable/
    │   │   ├── dev_BIO_syllable.csv
    │   │   ├── test_BIO_syllable.csv
    │   │   └── train_BIO_syllable.csv
    │   └── word/
    │       ├── dev_BIO_Word.csv
    │       ├── test_BIO_Word.csv
    │       └── train_BIO_Word.csv
    ├── train_span_extraction/
    │   ├── dev.csv
    │   └── train.csv
    └── test/
        └── test.csv
```
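
To make this layout concrete, here is a minimal loading sketch. It assumes the repository root as the working directory, the paths shown in the tree above, and pandas; it is only an illustration, not official dataset tooling.

```python
import pandas as pd

# Paths taken from the folder tree above (assumed relative to the repository root).
train_syllable = pd.read_csv("data/train_sequence_labeling/syllable/train_BIO_syllable.csv")
train_word = pd.read_csv("data/train_sequence_labeling/word/train_BIO_Word.csv")
train_spans = pd.read_csv("data/train_span_extraction/train.csv")
test = pd.read_csv("data/test/test.csv")

# Quick look at what each file contains.
for name, df in [("syllable BIO", train_syllable), ("word BIO", train_word),
                 ("span extraction", train_spans), ("test", test)]:
    print(name, df.shape, list(df.columns))
```
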
### Sequence labeling-based version

#### Syllable

Description:

- This folder contains the data for the sequence labeling-based version of the task. The data is divided into train, dev, and test files. Each file contains the following columns:
  - **index**: The id of the word.
  - **word**: Words in the sentence, tokenized with the [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer and then split on underscores.
    The reason for this is that some words are badly formatted:
    e.g. "điện.thoại của tôi" is split into ["điện.thoại", "của", "tôi"] instead of ["điện", "thoại", "của", "tôi"] if we use space tokenization, which is not the correct syllable format.
    Because of that, we used VnCoreNLP to tokenize first and then split the words into syllable tokens (see the sketch after this list):
    e.g. "điện.thoại của tôi" ---(VnCoreNLP)---> ["điện_thoại", "của", "tôi"] ---(split by "_")---> ["điện", "thoại", "của", "tôi"].
  - **tag**: The tag of the word: B-T (beginning of a hate/offensive span), I-T (inside a hate/offensive span), or O (outside any span).
- The train_BIO_syllable and dev_BIO_syllable files are used for training and validation of the XLM-R model, respectively.
- The test_BIO_syllable file is for reference only and is not used for testing the model. **Please use the test.csv file in the test folder for testing the model.**
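
As a concrete illustration of the underscore-splitting step above, here is a minimal Python sketch. The VnCoreNLP word-segmented tokens are hardcoded to the README's own example rather than produced by a live VnCoreNLP call, so no external dependency is required.

```python
# Hardcoded VnCoreNLP-style word segmentation of "điện.thoại của tôi"
# (taken from the example above; a real pipeline would call the VnCoreNLP tokenizer).
segmented = ["điện_thoại", "của", "tôi"]

def to_syllables(tokens):
    """Split word-segmented tokens (syllables joined by "_") back into syllables."""
    syllables = []
    for token in tokens:
        syllables.extend(token.split("_"))
    return syllables

print(to_syllables(segmented))  # ['điện', 'thoại', 'của', 'tôi']
```
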
#### Word

Description:

- This folder contains the data for the sequence labeling-based version of the task. The data is divided into train, dev, and test files. Each file contains the following columns:
  - **index**: The id of the word.
  - **word**: Words in the sentence, tokenized with the [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP) tokenizer.
  - **tag**: The tag of the word: B-T (beginning of a hate/offensive span), I-T (inside a hate/offensive span), or O (outside any span).
- The train_BIO_Word and dev_BIO_Word files are used for training and validation of the PhoBERT model, respectively.
- The test_BIO_Word file is for reference only and is not used for testing the model. **Please use the test.csv file in the test folder for testing the model.** A sketch of decoding BIO tags back into spans follows this list.
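
For either the syllable or word files, the tag column can be decoded back into contiguous spans. Below is a generic BIO decoding sketch assuming aligned lists of tokens and B-T/I-T/O tags; it is an illustration, not the authors' own code.

```python
def bio_to_spans(tokens, tags):
    """Group B-T/I-T tagged tokens into contiguous spans; O tokens are skipped."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B-T":
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I-T" and current:
            current.append(token)
        else:  # "O", or an I-T with no preceding B-T
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return spans

# Toy example with placeholder tokens (not taken from the dataset).
print(bio_to_spans(["tok1", "tok2", "tok3"], ["O", "B-T", "I-T"]))  # ['tok2 tok3']
```
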
### Span Extraction-based version

Description:

- This folder contains the data for the span extraction-based version of the task. The data is divided into two files: train and dev. Each file contains the following columns:
  - **content**: The content of the sentence.
  - **index_spans**: The indices of the hate and offensive spans in the sentence, in the format [start, end], where start is the index of the first character of the span and end is the index of the last character of the span.
- The train and dev files are used for training and validation of the BiLSTM-CRF model, respectively. A sketch of reading these spans follows this list.
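
As an illustration of the index_spans format described above, here is a hedged sketch that recovers span text from character indices. It assumes index_spans is serialized as a Python-style list of [start, end] pairs with inclusive ends; the exact serialization in the CSV is an assumption, not confirmed by this README.

```python
import ast

def extract_spans(content, index_spans):
    """Return the substrings of `content` covered by inclusive [start, end] character indices."""
    return [content[start:end + 1] for start, end in index_spans]

# Toy row (made-up values, not taken from the dataset files).
content = "an example comment"
index_spans = ast.literal_eval("[[3, 9]]")  # parsed from the CSV cell, assuming a list of pairs
print(extract_spans(content, index_spans))  # ['example']
```
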
### Citation Information

```
@inproceedings{hoang-etal-2023-vihos,