---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en
- zh-CN
licenses:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: 'ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in
  Multi-turn Conversation'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- code-switching
- speech-recognition
---

# Dataset Card for ASCEND

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2112.06223
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality corpus of spontaneous, multi-turn, conversational Chinese-English code-switching speech collected in Hong Kong. ASCEND consists of 10.62 hours of spontaneous speech with a total of ~12.3K utterances. The corpus is split into three sets: training, validation, and test, with a ratio of 8:1:1, while maintaining a balanced gender proportion in each set.

### Supported Tasks and Leaderboards

Automatic speech recognition of Chinese-English code-switched speech.

### Languages

Chinese and English.

## Usage

```
import datasets

# Load a single split; valid values are "train", "validation", and "test".
dataset = datasets.load_dataset("CAiRE/ASCEND", split="train")
```
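Once loaded, the split behaves like any other `datasets.Dataset`, and the audio column is decoded when a row is accessed. A minimal sketch (assuming the audio extras are installed, e.g. `pip install datasets[audio]`):

```
import datasets

dataset = datasets.load_dataset("CAiRE/ASCEND", split="validation")

# Inspect the first utterance; the "audio" column is decoded lazily on access.
sample = dataset[0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"].shape)    # 1-D float32 waveform
```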

## Dataset Structure

A typical data point comprises the path to the audio file, the loaded audio array, and its transcription. Additional fields include the datapoint id, duration, language, speaker id, session id, and topic.

```
{
	'id': '00000',
	'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses1_spk17_L3818_9.3200_0.6400.wav',
	'audio': {
		'path': '.cache/huggingface/datasets/downloads/extracted/f0b33b5266cd9452ee310eef3577cf7adb7f29aa54dbff74b9a8ee406a55d614/waves/ses1_spk17_L3818_9.3200_0.6400.wav',
		'array': array([0.00057983, 0.00073242, 0.00125122, ..., 0.00204468, 0.00250244,
			0.00201416
		], dtype = float32),
		'sampling_rate': 16000
	},
	'transcription': '好的',
	'duration': 0.6399999856948853,
	'language': 'zh',
	'original_speaker_id': 17,
	'session_id': 1,
	'topic': 'persona'
}
```
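Because these fields are plain columns, standard `datasets` operations apply to them. A short sketch, assuming the `language` column uses codes such as `"en"` and `"zh"` as in the example above, that keeps only the English-labeled utterances and sums their durations:

```
import datasets

dataset = datasets.load_dataset("CAiRE/ASCEND", split="train")

# Keep only utterances labeled as English and compute their total duration in hours.
english = dataset.filter(lambda row: row["language"] == "en")
total_hours = sum(english["duration"]) / 3600
print(f"{len(english)} English utterances, {total_hours:.2f} hours")
```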

### Data Splits

Number of utterances: 9,869 train, 1,129 validation, and 1,315 test.
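These counts can be checked directly from the loaded `DatasetDict`; a minimal sketch:

```
import datasets

# Loading without a split returns a DatasetDict containing all three splits.
ascend = datasets.load_dataset("CAiRE/ASCEND")
for split_name, split in ascend.items():
    print(split_name, len(split))
# Expected: train 9869, validation 1129, test 1315
```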

## Additional Information

For a comprehensive description of the dataset, please refer to [our paper](https://arxiv.org/pdf/2112.06223.pdf).

### Licensing Information

Creative Common Attribution Share-Alike 4.0 International (CC-BY-SA 4.0)

### Citation Information

If you use our dataset, please cite us:

```
@inproceedings{lovenia2022ascend,
  title={ASCEND: A Spontaneous Chinese-English Dataset for Code-switching in Multi-turn Conversation},
  author={Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta Indra and Xu, Peng and Yan, Xu and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J and others},
  booktitle={Proceedings of the 13th Language Resources and Evaluation Conference (LREC)},
  year={2022}
}
```