To ensure ease of use, the dataset is partitioned into 10 parts. Each part can be downloaded separately.

1. **id**: The unique identifier for the sample.
2. **audio_file_path**: The file path for the audio in the dataset.
3. **category**: The category of the sample's text.
4. **text**: The corresponding text of the audio file.
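
Taken together, a single metadata record should carry these four fields. The sketch below shows what such a record might look like and how to check it for completeness; the concrete values (the id format, the directory layout in the path, the category name) are illustrative assumptions, not actual entries from the dataset.

```python
# Hypothetical shape of one metadata record with the four fields above.
# All values here are illustrative assumptions, not real dataset entries.
sample = {
    "id": "sample_00001",                                 # assumed id format
    "audio_file_path": "dataset_part1/sample_00001.wav",  # assumed layout
    "category": "news",                                   # assumed category name
    "text": "The Fulton County Grand Jury said Friday ...",
}

# A record is usable only if all four documented fields are present
required_fields = {"id", "audio_file_path", "category", "text"}
missing = required_fields - sample.keys()
assert not missing, f"record is missing fields: {missing}"
```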

## Usage Instructions

To use this dataset, download the parts and metadata files as follows:

### Option 1: Manual Download

Visit the [dataset repository](https://huggingface.co/datasets/llm-lab/SpeechBrown/tree/main) and download all `dataset_partX.zip` files and the `global_metadata.json` file.

### Option 2: Programmatic Download

Use the `huggingface_hub` library to download the files programmatically:

```python
from huggingface_hub import hf_hub_download
from zipfile import ZipFile
import os
import json

# Download the ten dataset parts into the current directory.
# Without local_dir, hf_hub_download stores files in the Hugging Face
# cache and returns that cached path, so the extraction loop below
# would not find the archives in the working directory.
for i in range(1, 11):
    hf_hub_download(repo_id="llm-lab/SpeechBrown", filename=f"dataset_part{i}.zip",
                    repo_type="dataset", local_dir=".")

# Download the metadata file
metadata_file_path = hf_hub_download(repo_id="llm-lab/SpeechBrown",
                                     filename="global_metadata.json",
                                     repo_type="dataset", local_dir=".")

# Extract each part, then delete the archive to save disk space
for i in range(1, 11):
    with ZipFile(f'dataset_part{i}.zip', 'r') as zip_ref:
        zip_ref.extractall(f'dataset_part{i}')
    os.remove(f'dataset_part{i}.zip')

# Load the metadata and inspect its top-level keys
with open(metadata_file_path, 'r') as f:
    metadata = json.load(f)
print(metadata.keys())
```
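
After extraction, it can be useful to confirm that all ten part directories are in place before working with the audio. A minimal helper for that check, assuming the `dataset_part1` … `dataset_part10` directory names produced by the snippet above:

```python
import os

def find_extracted_parts(root="."):
    """Return the sorted names of extracted dataset_part directories under root."""
    return sorted(
        entry for entry in os.listdir(root)
        if entry.startswith("dataset_part") and os.path.isdir(os.path.join(root, entry))
    )

# Example: report how many of the ten parts were extracted
parts = find_extracted_parts()
print(f"{len(parts)} of 10 parts extracted: {parts}")
```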

## Citations

If you find our paper, code, data, or models useful, please cite the paper: