---
tags:
- audio
- khmer
- english
- speech-to-text
- translation
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: kh
    dtype: string
  - name: en
    dtype: string
  splits:
  - name: train
    num_bytes: 2387180819
    num_examples: 18000
  - name: test
    num_bytes: 239838689.75
    num_examples: 1850
  download_size: 2381427254
  dataset_size: 2627019508.75
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
Dataset Card for khmer-speech-large-english-google-translations
Audio recordings of Khmer speech with varying speakers and background noise. The English text was produced by translating the Khmer labels with Google Translate. Based on seanghay/khmer-speech-large.
Dataset Details
Dataset Description
- Language(s) (NLP): Khmer, English
- License: [More Information Needed]
Dataset Sources
- Hugging Face: seanghay/khmer-speech-large
Usage
from datasets import load_dataset
ds = load_dataset("djsamseng/khmer-speech-large-english-google-translations")
ds["train"] # First 18,000 records
ds["test"] # Remaining 1,900 records
ds["train"][0]["audio"] # { "array": [0.01, 0.02, ...], "sampling_rate": 16000 } }
ds["train"][0]["kh"] # "ααα αα
αααα»α ααααα ααα½αααααΆαα αα·α ααααα ααααα ααααααΆαα ααΆα α₯αα·ααΆαα ααΆαα ααααα α’ααααα"
ds["train"][0]["en"] # "Live in a society that recognizes and values ββas well as behaves in a way that pleases you"
Data Cleaning
- If desired, remove \u200b characters from both "en" and "kh"
- If desired, replace &#39; with ' in "en" (both steps are sketched below)