---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: q
    dtype: string
  - name: a
    dtype: string
  splits:
  - name: th2en
    num_bytes: 261404758
    num_examples: 438008
  - name: en2th
    num_bytes: 261020179
    num_examples: 437191
  - name: mtinstruct_th2en
    num_bytes: 194991586
    num_examples: 438008
  - name: mtinstruct_en2th
    num_bytes: 194712580
    num_examples: 437191
  download_size: 349631371
  dataset_size: 912129103
configs:
- config_name: default
  data_files:
  - split: th2en
    path: data/th2en-*
  - split: en2th
    path: data/en2th-*
  - split: mtinstruct_th2en
    path: data/mtinstruct_th2en-*
  - split: mtinstruct_en2th
    path: data/mtinstruct_en2th-*
license: cc-by-3.0
language:
- th
- en
---
# Thai AlignInstruct Dataset

This project builds an English-Thai AlignInstruct dataset.

Word alignments are produced with SimAlign, which provides high-quality word alignments without parallel training data using static and contextualized embeddings. Thai text is segmented into words with [deepcut](https://github.com/rkcosmos/deepcut) (via [LEKCut](https://github.com/PyThaiNLP/LEKCut)); a sketch of this step is shown below.
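A minimal sketch of the alignment step, assuming the `simalign` and `deepcut` packages are installed. The example sentence pair, aligner settings, and matching method are illustrative, not the exact configuration used to build this dataset:

```python
import deepcut
from simalign import SentenceAligner

# SimAlign aligner backed by multilingual BERT embeddings.
aligner = SentenceAligner(model="bert", token_type="bpe", matching_methods="mai")

en_tokens = "I like Thai food".split()
th_tokens = deepcut.tokenize("ฉันชอบอาหารไทย")  # Thai word segmentation

# Returns one alignment list per matching method,
# e.g. {"mwmf": [...], "inter": [(0, 0), ...], "itermax": [...]}
alignments = aligner.get_word_aligns(en_tokens, th_tokens)
print(alignments["itermax"])  # list of (English index, Thai index) pairs
```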
The parallel sentence pairs come from [scb-mt-en-th-2020](https://huggingface.co/datasets/scb_mt_enth_2020).
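The resulting splits (`th2en`, `en2th`, `mtinstruct_th2en`, `mtinstruct_en2th`) can be loaded with the `datasets` library; the repository id below is a placeholder, so substitute this card's actual path:

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's Hugging Face path.
ds = load_dataset("your-username/thai-aligninstruct", split="th2en")

print(ds.features)   # {'id': int64, 'q': string, 'a': string}
print(ds[0]["q"])    # first example's "q" field
print(ds[0]["a"])    # first example's "a" field
```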
## References
- AlignInstruct: [https://arxiv.org/abs/2401.05811v1](https://arxiv.org/abs/2401.05811v1)
- SimAlign: High Quality Word Alignments Without Parallel Training Data Using Static and Contextualized Embeddings: [https://aclanthology.org/2020.findings-emnlp.147/](https://aclanthology.org/2020.findings-emnlp.147/)
- scb-mt-en-th-2020: A Large English-Thai Parallel Corpus: [https://arxiv.org/abs/2007.03541](https://arxiv.org/abs/2007.03541)