Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization
License: cc-by-nc-sa-3.0
Commit f5baa18
Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +183 -0
- best2009.py +140 -0
- dataset_infos.json +1 -0
- dummy/best2009/1.0.0/dummy_data.zip +3 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,183 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc-by-nc-sa-3-0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- structure-prediction
task_ids:
- structure-prediction-other-word-tokenization
---

# Dataset Card for `best2009`

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://aiforthai.in.th/
- **Repository:** https://aiforthai.in.th/corpus.php
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://aiforthai.in.th/

### Dataset Summary

`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly.

### Supported Tasks and Leaderboards

word tokenization

### Languages

Thai

## Dataset Structure

### Data Instances

```
{'char': ['?', 'ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', '\n'], 'char_type': [4, 1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]}
{'char': ['ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ห', 'ม', 'า', 'ย', 'ถ', 'ึ', 'ง', ' ', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ข', 'อ', 'ง', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ซ', 'ึ', '่', 'ง', 'เ', 'ร', 'ี', 'ย', 'น', 'ร', 'ู', '้', 'ม', 'า', 'จ', 'า', 'ก', 'พ', '่', 'อ', 'แ', 'ม', '่', ' ', 'ป', 'ู', '่', 'ย', '่', 'า', 'ต', 'า', 'ย', 'า', 'ย', ' ', 'ญ', 'า', 'ต', 'ิ', 'พ', 'ี', '่', 'น', '้', 'อ', 'ง', ' ', 'ห', 'ร', 'ื', 'อ', 'ผ', 'ู', '้', 'ม', 'ี', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ใ', 'น', 'ห', 'ม', 'ู', '่', 'บ', '้', 'า', 'น', 'ใ', 'น', 'ท', '้', 'อ', 'ง', 'ถ', 'ิ', '่', 'น', 'ต', '่', 'า', 'ง', 'ๆ', '\n'], 'char_type': [1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 5, 3, 1, 10, 1, 1, 10, 1, 5, 1, 1, 10, 1, 1, 10, 9, 1, 1, 1, 1, 10, 1, 1, 9, 10, 1, 5, 1, 10, 9, 1, 11, 1, 10, 1, 1, 1, 10, 9, 1, 10, 1, 10, 1, 1, 9, 1, 11, 1, 9, 5, 1, 10, 9, 1, 9, 10, 1, 10, 1, 10, 1, 5, 1, 10, 1, 10, 1, 10, 9, 1, 9, 1, 1, 5, 3, 1, 10, 1, 3, 10, 9, 1, 10, 1, 1, 10, 1, 1, 10, 9, 11, 1, 3, 1, 10, 9, 1, 9, 10, 1, 11, 1, 1, 9, 1, 1, 1, 10, 9, 1, 1, 9, 10, 1, 7, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]}
```
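
The `char_type` and `is_beginning` values above are ClassLabel integer ids. A minimal decoding sketch, assuming the dataset is loaded via the `datasets` library under the ID `best2009`:

```
from datasets import load_dataset

ds = load_dataset("best2009", split="train")
# `char_type` is a Sequence of ClassLabel; decode ids back to names
char_type = ds.features["char_type"].feature
print([char_type.int2str(i) for i in ds[0]["char_type"]])
```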

### Data Fields

- `fname`: file name; also indicates whether the line comes from an article, news, encyclopedia or novel file
- `char`: characters
- `char_type`: character types as adopted by [deepcut](https://github.com/rkcosmos/deepcut); see the character-type-features citation below
- `is_beginning`: whether the character is the beginning of a word

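For orientation, words can be rebuilt from the character-level labels; a minimal sketch under the same loading assumption as above:

```
from datasets import load_dataset

ds = load_dataset("best2009", split="train")
ex = ds[0]

# `is_beginning` is 1 at the first character of each word, so words can be
# rebuilt by splitting the character stream at every positive label
words, current = [], ""
for ch, begin in zip(ex["char"], ex["is_beginning"]):
    if begin == 1 and current:
        words.append(current)
        current = ""
    current += ch
if current:
    words.append(current)
print(words)
```
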
### Data Splits

|                         | train      | test    |
|-------------------------|------------|---------|
| # lines                 | 148,995    | 2,252   |
| avg words per line      | 39.05      | NA      |
| total words             | 5,818,521  | NA      |
| avg characters per line | 140.39     | 202.79  |
| total characters        | 20,918,132 | 456,684 |
| # lines articles        | 16,990     | NA      |
| # lines encyclopedia    | 50,631     | NA      |
| # lines novels          | 50,140     | NA      |
| # lines news            | 31,234     | NA      |

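The train-side figures above can be spot-checked after loading; a rough sketch (a full pass over the split, so it takes a while):

```
from datasets import load_dataset

ds = load_dataset("best2009", split="train")
num_lines = len(ds)
total_chars = sum(len(ex["char"]) for ex in ds)
total_words = sum(sum(ex["is_beginning"]) for ex in ds)
print(num_lines, total_words, total_chars)
```
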
## Dataset Creation

### Curation Rationale

The dataset was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10) by [NECTEC](https://www.nectec.or.th/).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Respective authors of the articles, news, encyclopedia and novels

### Annotations

#### Annotation process

Detailed annotation guidelines can be found in `BEST_Guideline_Release1.pdf` as part of the uncompressed files. The word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf).

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

All data are curated from public sources. No personal or sensitive information is expected to be included.

## Considerations for Using the Data

### Social Impact of Dataset

- word tokenization dataset from articles, news, encyclopedia and novels

### Discussion of Biases

- The texts are relatively formal, drawn from articles, news, encyclopedia and novels.
- The word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf).

### Other Known Limitations

- Some tags unrelated to word tokenization (`<NE>` and `<AB>`) are cleaned out.
- No word boundaries are provided for the test set.

## Additional Information

### Dataset Curators

[NECTEC](https://www.nectec.or.th/)

### Licensing Information

CC-BY-NC-SA 3.0

### Citation Information

Dataset:
```
@inproceedings{kosawat2009best,
  title={BEST 2009: Thai word segmentation software contest},
  author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others},
  booktitle={2009 Eighth International Symposium on Natural Language Processing},
  pages={83--88},
  year={2009},
  organization={IEEE}
}
@inproceedings{boriboon2009best,
  title={Best corpus development and analysis},
  author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit},
  booktitle={2009 International Conference on Asian Language Processing},
  pages={322--327},
  year={2009},
  organization={IEEE}
}
```

Character type features:
```
@inproceedings{haruechaiyasak2009tlex,
  title={TLex: Thai lexeme analyser based on the conditional random fields},
  author={Haruechaiyasak, Choochart and Kongyoung, Sarawoot},
  booktitle={Proceedings of 8th International Symposium on Natural Language Processing},
  year={2009}
}
```
best2009.py
ADDED
@@ -0,0 +1,140 @@
from __future__ import absolute_import, division, print_function

import os
from functools import reduce
from pathlib import Path

import datasets


_CITATION = """\
@inproceedings{kosawat2009best,
  title={BEST 2009: Thai word segmentation software contest},
  author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others},
  booktitle={2009 Eighth International Symposium on Natural Language Processing},
  pages={83--88},
  year={2009},
  organization={IEEE}
}
@inproceedings{boriboon2009best,
  title={Best corpus development and analysis},
  author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit},
  booktitle={2009 International Conference on Asian Language Processing},
  pages={322--327},
  year={2009},
  organization={IEEE}
}
"""

_LICENSE = "CC-BY-NC-SA 3.0"

_DESCRIPTION = """\
`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by
[NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for
[BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10).
The test set answers are not provided publicly.
"""


class Best2009Config(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        """BuilderConfig

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(Best2009Config, self).__init__(**kwargs)


class Best2009(datasets.GeneratorBasedBuilder):

    _DOWNLOAD_URL = "https://archive.org/download/best_dataset/data.zip"
    _TRAIN_FOLDER = "train"
    _TEST_FOLDER = "test"

    # markup tags irrelevant to word tokenization, stripped before processing
    _USELESS_TAGS = {"<NE>": "", "</NE>": "", "<AB>": "", "</AB>": ""}
    # character type mapping from https://github.com/rkcosmos/deepcut/blob/master/deepcut/utils.py
    _CHAR_TYPES_DICT = {
        "กขฃคฆงจชซญฎฏฐฑฒณดตถทธนบปพฟภมยรลวศษสฬอ": "c",
        "ฅฉผฟฌหฮ": "n",
        "ะาำิีืึุู": "v",  # า ะ ำ ิ ี ึ ื ั ู ุ
        "เแโใไ": "w",
        "่้๊๋": "t",  # tone marks (วรรณยุกต์) ่ ้ ๊ ๋
        "์ๆฯ.": "s",  # ์ ๆ ฯ .
        "0123456789๑๒๓๔๕๖๗๘๙": "d",
        '"': "q",
        "‘": "q",
        "’": "q",
        "'": "q",
        " ": "p",
        "abcdefghijklmnopqrstuvwxyz": "s_e",
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ": "b_e",
    }
    _CHAR_TYPE_FLATTEN = {}
    for ks, v in _CHAR_TYPES_DICT.items():
        for k in ks:
            _CHAR_TYPE_FLATTEN[k] = v
    _CHAR_TYPES = ["b_e", "c", "d", "n", "o", "p", "q", "s", "s_e", "t", "v", "w"]

    BUILDER_CONFIGS = [
        Best2009Config(
            name="best2009",
            version=datasets.Version("1.0.0"),
            description=_DESCRIPTION,
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "fname": datasets.Value("string"),
                    "char": datasets.Sequence(datasets.Value("string")),
                    "char_type": datasets.Sequence(datasets.features.ClassLabel(names=self._CHAR_TYPES)),
                    "is_beginning": datasets.Sequence(datasets.features.ClassLabel(names=["neg", "pos"])),
                }
            ),
            supervised_keys=None,
            homepage="https://aiforthai.in.th/",
            citation=_CITATION,
            license=_LICENSE,
        )

    def _split_generators(self, dl_manager):
        arch_path = dl_manager.download_and_extract(self._DOWNLOAD_URL)
        data_dir = os.path.join(arch_path, "data")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(data_dir, self._TRAIN_FOLDER), "split": "train"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # passing "test" here (rather than "train") lets _generate_examples
                # zero out the labels, since the test set ships without word boundaries
                gen_kwargs={"filepath": os.path.join(data_dir, self._TEST_FOLDER), "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        for fname in sorted(Path(filepath).rglob("*.txt")):
            with open(fname, encoding="utf-8") as f:
                for _id, line in enumerate(f):
                    chars = []
                    char_types = []
                    is_beginnings = []
                    # replace useless tokens
                    line = reduce(lambda a, kv: a.replace(*kv), self._USELESS_TAGS.items(), line)
                    # tokens are pipe separated
                    splits = line.split("|")
                    for token in splits:
                        for i in range(len(token)):
                            chars.append(token[i])
                            char_types.append(self._CHAR_TYPE_FLATTEN.get(token[i], "o"))
                            is_beginning = 1 if i == 0 else 0
                            is_beginnings.append(is_beginning)
                    yield _id, {
                        "fname": fname.name,
                        "char": chars,
                        "char_type": char_types,
                        "is_beginning": is_beginnings if split == "train" else [0 for i in range(len(chars))],
                    }
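To make the per-character labeling in `_generate_examples` concrete, here is a standalone sketch of the same transformation on one pipe-delimited line (the character-type map is abbreviated for illustration; the full mapping lives in `best2009.py` above):

```
# abbreviated stand-in for _CHAR_TYPE_FLATTEN in best2009.py
char_type_flatten = {}
for ks, v in {"กขคชญบปภมวนย": "c", "ะาำิั": "v", " ": "p"}.items():
    for k in ks:
        char_type_flatten[k] = v

line = "ภูมิปัญญา|ชาว|บ้าน"  # tokens are pipe-separated in the train files
chars, char_types, is_beginnings = [], [], []
for token in line.split("|"):
    for i, ch in enumerate(token):
        chars.append(ch)
        char_types.append(char_type_flatten.get(ch, "o"))  # "o" = other
        is_beginnings.append(1 if i == 0 else 0)

# each word contributes a single 1 followed by 0s
print(list(zip(chars, is_beginnings)))
```
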
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"best2009": {"description": "`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by\n[NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for\n[BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10).\nThe test set answers are not provided publicly.\n", "citation": "@inproceedings{kosawat2009best,\n title={BEST 2009: Thai word segmentation software contest},\n author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others},\n booktitle={2009 Eighth International Symposium on Natural Language Processing},\n pages={83--88},\n year={2009},\n organization={IEEE}\n}\n@inproceedings{boriboon2009best,\n title={Best corpus development and analysis},\n author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit},\n booktitle={2009 International Conference on Asian Language Processing},\n pages={322--327},\n year={2009},\n organization={IEEE}\n}\n", "homepage": "https://aiforthai.in.th/", "license": "CC-BY-NC-SA 3.0", "features": {"fname": {"dtype": "string", "id": null, "_type": "Value"}, "char": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "char_type": {"feature": {"num_classes": 12, "names": ["b_e", "c", "d", "n", "o", "p", "q", "s", "s_e", "t", "v", "w"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "is_beginning": {"feature": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "best2009", "config_name": "best2009", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 483129998, "num_examples": 148995, "dataset_name": "best2009"}, "test": {"name": "test", "num_bytes": 10498726, "num_examples": 2252, "dataset_name": "best2009"}}, "download_checksums": {"https://archive.org/download/best_dataset/data.zip": {"num_bytes": 13891260, "checksum": "009386ea03aab2abd194bcb3b86c01b81038f460296c447ce2c0e561d3eca64f"}}, "download_size": 13891260, "post_processing_size": null, "dataset_size": 493628724, "size_in_bytes": 507519984}}
dummy/best2009/1.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76493fafb238c2d7cf264354a31380029d69f66710c6385e246397d7b90688e1
size 18586