---
license: cc-by-sa-4.0
language:
- en
tags:
- music
- spectrogram
size_categories:
- 10K<n<100K
---

# Google/MusicCaps converted to spectrograms

* <font color="red">The dataset viewer of this repository is truncated, so you may want to look at <a href="https://huggingface.co/datasets/mb23/GraySpectrotram_example">this one</a> instead.</font>

## Dataset information
<table>
  <thead>
    <tr>
      <th>image</th>
      <th>caption</th>
      <th>data_idx</th>
      <th>number</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1025px × 216px</td>
      <td>description of the music</td>
      <td>which source clip the data was generated from</td>
      <td>index of the 5-second segment within that clip</td>
    </tr>
  </tbody>
</table>
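Since each clip is split into 5-second segments, the `number` column maps to a time window in the source audio. A minimal sketch, assuming `number` is 0-indexed (`segment_window` is a hypothetical helper, not part of the dataset):

```python
def segment_window(number, segment_sec=5):
    """Return the (start, end) time window in seconds of the
    `number`-th fixed-length segment of the source clip."""
    return number * segment_sec, (number + 1) * segment_sec
```
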

## How this dataset was made

* Code: https://colab.research.google.com/drive/13m792FEoXszj72viZuBtusYRUL1z6Cu2?usp=sharing
* Reference Kaggle Notebook: https://www.kaggle.com/code/osanseviero/musiccaps-explorer

```python
import librosa
import numpy as np
from PIL import Image

# 1. Load the wav file
y, sr = librosa.load("path/to/file.wav")

# 2. Apply an STFT to get the frequency components, then convert to dB
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

# 3. Save the dB matrix as an 8-bit image ('L' = single-channel grayscale)
image = Image.fromarray(np.uint8(D), mode='L')
image.save('spectrogram_{}.png'.format(0))  # one file per 5-second segment
```
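For reference, `librosa.amplitude_to_db` / `librosa.db_to_amplitude` implement the standard amplitude–decibel relation. A minimal sketch of the underlying math, ignoring the `ref=np.max` normalization and clipping that librosa also applies:

```python
import math

def amplitude_to_db(amplitude, ref=1.0):
    """Amplitude to decibels: 20 * log10(amplitude / ref)."""
    return 20.0 * math.log10(amplitude / ref)

def db_to_amplitude(db):
    """Inverse mapping: 10 ** (db / 20)."""
    return 10.0 ** (db / 20.0)
```
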

## Recover music (waveform) from spectrogram
```python
import librosa
import numpy as np
import IPython.display
from PIL import Image

sr = 22050  # librosa.load's default sampling rate

im = Image.open("path/to/spectrogram.png")
db_ud = np.uint8(np.array(im))
amp = librosa.db_to_amplitude(db_ud)
print(amp.shape)
# (1025, 861): spectrogram of a 20-second wav file
# (1025, 431): spectrogram of a 10-second wav file
# (1025, 216): spectrogram of a 5-second wav file

# Reconstruct the waveform with the Griffin-Lim algorithm
y_inv = librosa.griffinlim(amp * 200)  # 200: empirical amplitude scaling
display(IPython.display.Audio(y_inv, rate=sr))
```
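The widths quoted above follow from librosa's STFT defaults (`sr=22050`, `hop_length=512`, and `n_fft=2048`, which gives 1025 frequency bins). A sketch of the frame-count arithmetic, assuming those defaults:

```python
def stft_frames(duration_sec, sr=22050, hop_length=512):
    """Number of STFT frames for a clip of the given duration.
    With center=True (librosa's default), an STFT produces
    1 + floor(n_samples / hop_length) frames."""
    n_samples = int(duration_sec * sr)
    return 1 + n_samples // hop_length
```

For example, a 5-second clip gives 216 frames, matching the 1025px × 216px images in this dataset.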

## Example : How to use this
* <font color="red">Subsets <b>data 1300-1600</b> and <b>data 3400-3600</b> are currently broken, so use a <code>subset_name_list</code> from which they have already been removed, as below</font>.
### 1 : get information about this dataset:
* Copy the code below.

```python
'''
  If you use Google Colab, remove the leading # to install the packages below.
'''
#!pip install datasets
#!pip install huggingface-hub
#!huggingface-cli login
import datasets
from datasets import load_dataset

# Subset names; the broken ranges (data 1300-1600, data 3400-3600) are omitted.
subset_name_list = [
  'data 0-200',
  'data 200-600',
  'data 600-1000',
  'data 1000-1300',
  'data 1600-2000',
  'data 2000-2200',
  'data 2200-2400',
  'data 2400-2600',
  'data 2600-2800',
  'data 3000-3200',
  'data 3200-3400',
  'data 3600-3800',
  'data 3800-4000',
  'data 4000-4200',
  'data 4200-4400',
  'data 4400-4600',
  'data 4600-4800',
  'data 4800-5000',
  'data 5000-5200',
  'data 5200-5520'
]

# Load and concatenate all subsets
data = load_dataset("mb23/GraySpectrogram", subset_name_list[0])
for subset in subset_name_list[1:]:  # [0] is already loaded above
    print(subset)
    new_ds = load_dataset("mb23/GraySpectrogram", subset)
    data["train"] = datasets.concatenate_datasets([data["train"], new_ds["train"]])
    data["test"] = datasets.concatenate_datasets([data["test"], new_ds["test"]])

data
```
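If you build your own subset list, it can be sanity-checked against the broken ranges before loading. A hypothetical helper (the names `parse_subset_name`, `is_available`, and `BROKEN_RANGES` are illustrative, not part of the dataset):

```python
# Ranges known to be broken in this repository (see the note above)
BROKEN_RANGES = [(1300, 1600), (3400, 3600)]

def parse_subset_name(name):
    """Parse a subset name like 'data 200-600' into (start, end)."""
    start, end = name.split()[1].split("-")
    return int(start), int(end)

def is_available(name):
    """True if the subset does not overlap any broken range."""
    start, end = parse_subset_name(name)
    return not any(start < b_end and b_start < end
                   for b_start, b_end in BROKEN_RANGES)
```
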



### 2 : load dataset and change to dataloader:
* You can use the code below:
* <font color="red">...but (;・∀・) I don't know whether this code works correctly, because I haven't tried it yet</font>
```python
import datasets
from datasets import load_dataset, Dataset
from torchvision import transforms
from torch.utils.data import DataLoader
# BATCH_SIZE = ???
# IMAGE_SIZE = ???
# TRAIN_SIZE = ??? # the number of training data
# TEST_SIZE = ???  # the number of test data

def load_datasets():

    # Define data transforms
    data_transform = transforms.Compose([
        transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),
        transforms.ToTensor(),                     # scales data into [0, 1]
        transforms.Lambda(lambda t: (t * 2) - 1),  # scales data into [-1, 1]
    ])

    # Load and concatenate all subsets (subset_name_list as defined above)
    data = load_dataset("mb23/GraySpectrogram", subset_name_list[0])
    for subset in subset_name_list[1:]:  # [0] is already loaded above
        print(subset)
        new_ds = load_dataset("mb23/GraySpectrogram", subset)
        data["train"] = datasets.concatenate_datasets([data["train"], new_ds["train"]])
        data["test"] = datasets.concatenate_datasets([data["test"], new_ds["test"]])

    # memo: I don't know a clean way to select only the columns we need,
    # so this is brute force. Ideally it would happen inside load_dataset(),
    # but that seems impossible; rebuilding the repository and using
    # push_to_hub() might be better.
    train = Dataset.from_dict({
        "image": [data_transform(img) for img in data["train"]["image"]],
        "caption": data["train"]["caption"],
    }).with_format("torch")  # avoid plain Python lists
    test = Dataset.from_dict({
        "image": [data_transform(img) for img in data["test"]["image"]],
        "caption": data["test"]["caption"],
    }).with_format("torch")

    train_loader = DataLoader(train, batch_size=BATCH_SIZE, shuffle=True, drop_last=True)
    test_loader = DataLoader(test, batch_size=BATCH_SIZE, shuffle=True, drop_last=True)
    return train_loader, test_loader
```
* Then try this:
```python
train_loader, test_loader = load_datasets()
```
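The `Lambda` transform in the dataloader code rescales pixel values from `[0, 1]` to `[-1, 1]`, which is the usual input range for diffusion-style models. A minimal sketch of the mapping and its inverse (`from_model_range` is a hypothetical helper you would need before converting a generated sample back to an image):

```python
def to_model_range(t):
    # mirrors transforms.Lambda(lambda t: (t * 2) - 1): [0, 1] -> [-1, 1]
    return t * 2 - 1

def from_model_range(t):
    # inverse mapping: [-1, 1] -> [0, 1]
    return (t + 1) / 2
```
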