Datasets: best2009
Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization
License:
Update files from the datasets library (from 1.6.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0
- README.md +1 -1
- best2009.py +0 -2
README.md
CHANGED
@@ -10,7 +10,7 @@ licenses:
 multilinguality:
 - monolingual
 size_categories:
--
+- 100K<n<1M
 source_datasets:
 - original
 task_categories:
best2009.py
CHANGED
@@ -1,5 +1,3 @@
-from __future__ import absolute_import, division, print_function
-
 import os
 from functools import reduce
 from pathlib import Path
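
The updated files are picked up automatically when the dataset is loaded through the datasets library. A minimal usage sketch, assuming the Hub dataset id is best2009 and a recent datasets release is installed:

from datasets import load_dataset

# Minimal sketch: load the best2009 Thai word-tokenization dataset from the Hub.
# The "train" split and the exact feature columns are assumptions here; they are
# whatever the dataset's loading script defines.
ds = load_dataset("best2009")
print(ds)              # available splits and their sizes
print(ds["train"][0])  # inspect a single example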