|
--- |
|
license: apache-2.0 |
|
dataset_info: |
|
features: |
|
- name: text |
|
dtype: string |
|
splits: |
|
- name: validation |
|
num_bytes: 642375 |
|
num_examples: 535 |
|
- name: train |
|
num_bytes: 15585375 |
|
num_examples: 12703 |
|
download_size: 7315916 |
|
dataset_size: 16227750 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: validation |
|
path: data/validation-* |
|
- split: train |
|
path: data/train-* |
|
--- |
|
|
|
# Open Assistant Conversations Dataset Release 2 (OASST2) in the Uzbek language
|
|
|
This dataset is an Uzbek-translated version of the [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2) dataset, presented in a thread format using the Llama 3 chat template.
|
|
|
Refer to this [translated version](https://huggingface.co/datasets/MLDataScientist/oasst2_uzbek) if you need the original tree format. Otherwise, use this thread format for fine-tuning Llama 3 models.
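As a minimal sketch of how the thread format can be consumed, the snippet below loads this dataset with the 🤗 `datasets` library and prints one record. The repository id shown is a placeholder assumption and should be replaced with this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hub id.
REPO_ID = "MLDataScientist/oasst2_uzbek_chat_format"

# The card metadata declares "train" and "validation" splits with a single "text" column.
train = load_dataset(REPO_ID, split="train")
validation = load_dataset(REPO_ID, split="validation")

print(len(train), len(validation))   # expected: 12703 and 535 examples
print(train[0]["text"][:500])        # one thread rendered with the Llama 3 chat template
```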
|
|
|
--- |
|
|
|
The Uzbek translation was completed in 45 hours on a single T4 GPU using the [nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) model.
|
|
|
Based on NLLB translation metrics, you might want to keep only the records that were originally in English or Russian: the English-Uzbek and Russian-Uzbek pairs have acceptable metrics, and their translation quality is noticeably better based on my brief reviews.
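As a hedged sketch of how such a filter could be applied, the snippet below selects only the English- and Russian-origin messages from the source OASST2 dataset, which carries a `lang` field. The thread-format records here expose only `text`, so the language filter has to be applied upstream (before translation) or by cross-referencing the tree-format release.

```python
from datasets import load_dataset

# Minimal sketch: keep only messages originally written in English or Russian.
# The source OASST2 records include a "lang" field; this translated thread-format
# dataset exposes only "text", so the language filter must happen on the source side.
oasst2 = load_dataset("OpenAssistant/oasst2", split="train")
en_ru = oasst2.filter(lambda row: row["lang"] in {"en", "ru"})

print(f"kept {len(en_ru)} of {len(oasst2)} messages")
```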
|
|
|
I am sharing the entire Uzbek-translated dataset for future research.
|
|
|
The following repository and command were used to produce the Uzbek translation.
|
|
|
Repo: https://github.com/UnderstandLingBV/LLaMa2lang |
|
|
|
Command used: |
|
|
|
```bash
!python3 translate.py nllb --model_size 3.3B uzn_Latn output_uzbek --quant8 --base_dataset OpenAssistant/oasst2 --max_length 512 --checkpoint_n 400 --batch_size 40
```
|
|
|
I will fine-tune a Llama 3 8B Uzbek chat model and release it on Hugging Face soon.