st182700
I’m not sure which dimension refers to the “frame dimension”, but you could probably either use torch.stack or torch.cat to create the new stacked/concatenated tensor.
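For example (a minimal sketch, assuming each frame comes out as a (1, 13) tensor):
import torch

frames = [torch.randn(1, 13) for _ in range(7)]   # a hypothetical sequence of 7 frames
stacked = torch.stack(frames)                     # (7, 1, 13): stack adds a new leading dim
concatenated = torch.cat(frames, dim=0)           # (7, 13): cat joins along an existing dim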
st182701
Hi @ptrblck. The x_train dataset is 3082092 frames. Each frame has 13 numbers (features). The y_train is 3082092 digits (labels). That is, for each frame (1, 13) there’s one label… Now, feeding one frame to the DNN is not going to work because it carries too little information. Instead, I would like to stack a sequence of 7 frames (of those 3082092). I hope that makes sense.
st182702
Thanks for the explanation. The 7 frames would thus correspond to the batch size and you could set it in the DataLoader.
st182703
In that case, the DNN would look at them as 7 individual units, but I wanted to stack 7 frames as one unit.
st182704
Could you explain how one “unit” would be processed in the model and what the expected input shape would thus be?
st182705
I’m having issue with input/output sizes. This is my dataloader: train_data = torch.hstack((train_feat, train_labels)) train_loader = torch.utils.data.DataLoader(train_data, batch_size= 128, shuffle=True) This is my dataset: print(len(train_loader)) 24079 print((train_feat.shape)) torch.Size([3082092, 13]) print((train_labels.shape)) torch.Size([3082092, 1]) This is my model: class TDNN(nn.Module): def __init__(self, feat_dim=13, embedding_size=512, num_classes=51, config_str='batchnorm-relu'): super(TDNN, self).__init__() self.network = nn.Sequential(OrderedDict([ ('tdnn1', TDNNLayer(feat_dim, 512, 5, dilation=1, padding=0, config_str=config_str)), ('tdnn2', TDNNLayer(512, 512, 3, dilation=2, padding=0, config_str=config_str)), ('tdnn3', TDNNLayer(512, 512, 3, dilation=3, padding=0, config_str=config_str)), ('tdnn4', DenseLayer(512, 512, config_str=config_str)), ('tdnn5', DenseLayer(512, 1500, config_str=config_str)), ('stats', StatsPool()), ('affine', nn.Linear(3000, embedding_size)) ])) self.nonlinear = get_nonlinear(config_str, embedding_size) self.dense = DenseLayer(embedding_size, embedding_size, config_str=config_str) self.classifier = nn.Linear(embedding_size, num_classes) for m in self.modules(): if isinstance(m, nn.Linear): nn.init.kaiming_normal_(m.weight.data) nn.init.zeros_(m.bias) def forward(self, x): x = self.network(x) if self.training: x = self.dense(self.nonlinear(x)) x = self.classifier(x) return x The input should be batches of 128. I’m probably making an error with the data here: class IterMeter(object): """keeps track of total iterations""" def __init__(self): self.val = 0 def step(self): self.val += 1 def get(self): return self.val def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment): model.train() data_len = len(train_loader.dataset) with experiment.train(): for batch_idx, _data in enumerate(train_loader): features, labels = _data[:, :-1], _data[:, -1] features, labels = features.to(device), labels.to(device) features = features.unsqueeze(dim=1) optimizer.zero_grad() output = model(features) # (batch, n_class) loss = criterion(output, labels) loss.backward() experiment.log_metric('loss', loss.item(), step=iter_meter.get()) experiment.log_metric('learning_rate', scheduler.get_lr(), step=iter_meter.get()) optimizer.step() scheduler.step() iter_meter.step() if batch_idx % 100 == 0 or batch_idx == data_len: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(features), data_len, 100. * batch_idx / len(train_loader), loss.item())) iter_meter = IterMeter() for epoch in range(1, epochs + 1): train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment) If I remore the features.unsqueeze(dim=1), I get: RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 13, 5], but got 2-dimensional input of size [128, 13] instead
st182706
Depending on what makes sense, you should be able to permute() the dimensions or reshape() so that the number of input channels is what you want.
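For instance, with the shapes discussed in this thread (just a sketch of the two possible layouts, not a recommendation of either):
import torch

x = torch.randn(128, 13)          # (batch, features)
as_channels = x.unsqueeze(2)      # (128, 13, 1): 13 channels, sequence length 1
as_sequence = x.unsqueeze(1)      # (128, 1, 13): 1 channel, sequence length 13
# as_channels.permute(0, 2, 1) would also give the (128, 1, 13) layout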
st182707
@eqy Thank you so much for your reply. I’m a noob, so I could use some more explanation. The data is audio frames. Each frame has 13 features. The goal is to input 128 of those frames each time and end up with a linear classifier that classifies each frame into one of the 51 classes. I hope that makes sense.
st182708
Looking more closely, what happens if you simply torch.unsqueeze(dim=2) to make the input data shape [128, 13, 1]?
st182709
It gave a different type of error: def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment): model.train() data_len = len(train_loader.dataset) with experiment.train(): for batch_idx, _data in enumerate(train_loader): features, labels = _data[:, :-1], _data[:, -1] features, labels = features.to(device), labels.to(device) features = features.unsqueeze(dim=2) # ---> reshape optimizer.zero_grad() output = model(features) # (batch, n_class) loss = criterion(output, labels) loss.backward() experiment.log_metric('loss', loss.item(), step=iter_meter.get()) experiment.log_metric('learning_rate', scheduler.get_lr(), step=iter_meter.get()) optimizer.step() scheduler.step() iter_meter.step() if batch_idx % 100 == 0 or batch_idx == data_len: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(features), data_len, 100. * batch_idx / len(train_loader), loss.item())) RuntimeError: Calculated padded input size per channel: (1). Kernel size: (5). Kernel size can't be greater than actual input size
st182710
Right, you might need to consider what the meaning of your training data is. At the moment, you are passing in a batch of 128 examples, where each example has 13 channels but only a length of 1. I assume the TDNNLayer is something like a 1D convolution, so in this case it cannot compute an output when the filter size length (5) is greater than the input size (1). The fundamental issue is that the sequence length must be increased to use the layer.
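As a sketch of one way to build such longer sequences (using the 7-frame grouping mentioned earlier; Tensor.unfold with step=7 gives non-overlapping windows, step=1 overlapping ones):
import torch

x_train = torch.randn(3082092, 13)   # (frames, features), stand-in data
windows = x_train.unfold(0, 7, 7)    # (n_windows, 13, 7): chunks of 7 consecutive frames
# each window now has 13 channels and a length of 7, which a Conv1d with
# in_channels=13 and kernel_size <= 7 can consume directly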
st182711
The data is audio frames. Each frame has 13 features. The goal is to input 128 of those frames each time and end up with a linear classifier to classify each frame with one of the 51 classes. From this I understand that your sequence length is 13. It’s also clear that the TDNN layer has a Conv1d block to which you are currently passing in_channels as 13, which I believe is incorrect. Could you try changing feat_dim to 1 and keeping features.unsqueeze(dim=1)? It shouldn’t throw an error now, but I’m not sure how the output will react.
st182712
I don’t understand why the input channels need to be 1. My understanding is that the input is 13, since the data size is (N * 13), which means that the input should be 128 * 1 * 13. Correct? After changing feat_dim to 1, with features.unsqueeze(dim=1), I get this error:
RuntimeError: Calculated padded input size per channel: (5). Kernel size: (7). Kernel size can't be greater than actual input size
st182713
Hello, I have two deep neural networks, one for speech enhancement to remove noise from the speech signals and the other network is for intent classification. I am trying to train them jointly i.e. the output of the first network is passed directly to the second network as follows: def _train_epoch(self, epoch=100): for i, (mixture, clean, name) in enumerate(self.train_data_loader): mixture = mixture.to(self.device, dtype=torch.float) clean = clean.to(self.device, dtype=torch.float) self.optimizer.zero_grad() enhanced = self.model(mixture).to(self.device) loss = self.loss_function(clean, enhanced) loss.backward() self.optimizer.step() for i, d in enumerate(train_set_generator): enhanced, l = d model_back.train() y = model_back(enhanced.float().to(device2)) loss_back = loss_func_back(y[0], l[0].to(device2)) print("Iteration %d in epoch%d--> loss = %f" % (i, epoch, loss_back.item()), end='\r') loss_back.backward() optimizer_back.step() optimizer_back.zero_grad() if i % 100 == 0: model_back.eval() correct = [] for j, ev in enumerate(valid_set_generator): enhanced, label = ev y_eval = model_back(enhanced.float().to(device2)) pred = torch.argmax(y_eval[0].detach().cpu(), dim=1) intent_pred = pred correct.append((intent_pred == label[0]).float()) if j > 100: break acc = np.mean(np.hstack(correct)) intent_acc = acc iter_acc = '\n iteration %d epoch %d -->' %(i, epoch) print(iter_acc, acc, best_accuracy) if intent_acc > best_accuracy: improved_accuracy = 'Current accuracy {}, {}'.format(intent_acc, best_accuracy) print(improved_accuracy) torch.save(model_back.state_dict(), 'bestmodel.pkl') Where the loss of the first DNN is the MSE loss and the loss of the second DNN is the Cross entropy loss. The problem is that after many iterations in the first epoch the second DNN starts to iterate on the first loop again not goes to the second loop. Any help will be appreciated.
st182714
So I have an input tensor with shape [16, 1, 125, 256] and a selector tensor with shape [124, 2]. Is there any PyTorch equivalent of tf.gather(input, selector, axis=2)? To begin with, how does TensorFlow handle both tensors despite them not having the same number of dimensions (unlike torch.gather, wherein they must be equal)? Furthermore, what must dim in torch.gather be to be similar to axis=2 in tf.gather? For more context, I’m trying to make a PyTorch version of this, which defines a frame function that expands signal's axis dimension into frames of frame_length.
st182716
Could you explain what tf.gather(input, selector, axis=2) is doing when there is no matching dimension between input and selector? Maybe with a small example of what you are doing?
st182717
It gathers slices at the selected indices (selector) from input with respect to the specified axis. A workaround I found is input[:, :, selector, :], which returns the same output as tf.gather(input, selector, axis=2). However, how do I do the same thing if the shape of input is not known? Here are some more examples:
tf.gather(input, selector, axis=0) is the same as input[selector, ...]
tf.gather(input, selector, axis=1) is the same as input[:, selector, ...]
tf.gather(input, selector, axis=2) is the same as input[..., selector, :]
tf.gather(input, selector, axis=3) is the same as input[..., selector]
st182718
It’s still not 100% clear to me what this does, but I guess that would work:
dim = 2
new_size = inp.size()[:dim] + selector.size() + inp.size()[dim + 1:]
out = inp.index_select(dim, selector.view(-1))
out = out.view(new_size)
st182719
It works. Thanks for this! One last question: is there a way to simultaneously slice along all dimensions of a tensor? Say, for example, I have a tensor inp with shape [16, 1, 125, 256]. I then have two lists (this is a simple example)
begin = [0, 0, 0, 0]
end = [16, 1, 125, 256]
that specify the beginning and end indices to slice, respectively. From this, I perform slicing as
out = inp[begin[0]:end[0], begin[1]:end[1], begin[2]:end[2], begin[3]:end[3]]
How do I do this with any tensor with a variable number of dimensions?
st182720
The way I would do this is with a series of narrows (keep in mind that narrow never copies the memory, so it is very fast):
out = inp
for dim, (b, e) in enumerate(zip(begin, end)):
    out = out.narrow(dim, b, e - b)
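A quick runnable check of that loop, with made-up begin/end values:
import torch

inp = torch.randn(16, 1, 125, 256)
begin = [0, 0, 10, 32]
end = [16, 1, 110, 224]

out = inp
for dim, (b, e) in enumerate(zip(begin, end)):
    out = out.narrow(dim, b, e - b)   # narrow(dim, start, length) returns a view
print(out.shape)                      # torch.Size([16, 1, 100, 192])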
st182721
How do I put x_train and y_train into a model for training? x_train is a tensor of size (3000, 13) and y_train is of size (3000, 1). That is, for each element of x_train (1, 13), the respective y label is one digit from y_train. If I do:
train_data = torch.hstack((train_feat, train_labels))
print(train_data[0].shape)
print(train_data[1].shape)
torch.Size([3082092, 13])
torch.Size([3082092, 1])
train_loader = data.DataLoader(dataset=train_data, batch_size=7, shuffle=True)
the DataLoader does not return the batch size, but returns the whole dataset instead.
st182723
Are you sure about the sizes of train_feat and train_labels? Your code should work. As an example, the following works:
dummy_features = torch.randn(3000, 13)
dummy_labels = torch.randint(2, (3000, 1))  # integers in [0, 1]
train_data = torch.hstack((dummy_features, dummy_labels))
print(train_data.shape)  # torch.Size([3000, 14])
train_loader = torch.utils.data.DataLoader(train_data, batch_size=7, shuffle=True)
print(len(train_loader) == np.ceil(3000 / 7))  # True
batch = next(iter(train_loader))
print(batch.size())  # torch.Size([7, 14])
features, labels = batch[:, :-1], batch[:, -1]
st182724
Thank you for your reply. Maybe I’m doing something wrong while iterating or in the model parameters. because using the model below, I get an error about the size. The input should be 7 frames, each of size (1, 13) (or (1, 14) considering the label. and the output channels is 1024 as an example. model = nn.Sequential(torch.nn.Conv1d(13, 1024, kernel_size=7, stride=1, padding=0, dilation=1, groups=1, bias=True), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(inplace=True), torch.nn.Conv1d(1024, 1024, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(inplace=True), torch.nn.Conv1d(1024, 50, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True)) classifier = nn.Linear(1024, 50) This is the training script: class IterMeter(object): """keeps track of total iterations""" def __init__(self): self.val = 0 def step(self): self.val += 1 def get(self): return self.val def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment): model.train() data_len = len(train_loader.dataset) with experiment.train(): for batch_idx, _data in enumerate(train_loader): optimizer.zero_grad() output = model(x_train) loss = criterion(output, y_train) loss.backward() experiment.log_metric('loss', loss.item(), step=iter_meter.get()) experiment.log_metric('learning_rate', scheduler.get_lr(), step=iter_meter.get()) optimizer.step() scheduler.step() iter_meter.step() if batch_idx % 100 == 0 or batch_idx == data_len: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(x_train), data_len, 100. * batch_idx / len(train_loader), loss.item())) iter_meter = IterMeter() for epoch in range(1, epochs + 1): train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment) I’m getting an error RuntimeError: Expected 3-dimensional input for 3-dimensional weight [1024, 13, 7], but got 2-dimensional input of size [3082092, 13] instead
st182725
I’m not sure what you’re trying to do. You want to apply a convolution with a kernel size of 7 to a 1d vector of size 13, is that correct ? In that case your input should be a tensor with 1 channel containing a 1D vector of size 13. So your input size should be [7, 1, 13] (batch_size, channels, dim). And in your model, the first conv must have an one input channel instead of 13. Given your error, you’re giving to your model an input of size [3082092, 13] so it seems you didn’t followed my first comment. Starting from what I wrote, we have: features, labels = batch[:, :-1], batch[:, -1] # features size is [7, 13] In order to obtain the needed dimension you simply need to create the channel dim: features = features.unsqueeze(dim=1) # feature size is now [7, 1, 13] Then you can apply your model (with the first conv corrected to have 1 input channel). Then after this first convolution your tensor will be of shape [7, 1024, 7] (batch_size, output_dim of the fist conv, output_size in function of padding, dilation, and stride) As you seem to apply two convolutions with a kernel size of 1, the output dim won’t change. So at the end of your model, the size is [7, 50, 7]. If you wanna feed that to a linear classifier, you can flatten the last two dims and feed the result to your classifier, and correct your classifier input size which should be 50 * 7. Here is a complete example: model = torch.nn.Sequential( torch.nn.Conv1d(1, 1024, kernel_size=7, stride=1, padding=0, dilation=1, groups=1, bias=True), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(inplace=True), torch.nn.Conv1d(1024, 1024, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True), torch.nn.BatchNorm1d(1024), torch.nn.ReLU(inplace=True), torch.nn.Conv1d(1024, 50, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=True) ) classifier = torch.nn.Linear(50 * 7, 50) dummy_features = torch.randn(3000, 13) dummy_labels = torch.randint(2, (3000, 1)) # integers in [0, 1] train_data = torch.hstack((dummy_features, dummy_labels)) train_loader = torch.utils.data.DataLoader(train_data, batch_size= 7, shuffle=True) batch = next(iter(train_loader)) features, labels = batch[:, :-1], batch[:, -1] features = features.unsqueeze(dim=1) outputs = model(features) outputs = outputs.view(outputs.size(0), -1) scores = classifier(outputs) # size (7, 50)
st182726
Hi Luc… First let me tell you that if the internet is a good place, it’s just because of people like you… so thank you. Actually, I have a dataset as follows:
x_train: of size (3082092, 13) (3082092 audio frames, each frame contains 13 features)
y_train: of size (3082092, 1), where each item is a label for the respective audio frame.
The number of classes is 50 (the number of different possible labels). The goal is to do a classification for each frame. So for each single frame, there should be 50 probabilities, and the correct label would be the highest probability. That said, the input should be of batch_size = 7, i.e. 7 frames each time; each frame has 13 features. Also the kernel_size = 7, and that’s why the next layer has a kernel_size = 1.
st182727
You’re welcome! OK, then I think the code I wrote above should fit your needs just fine. Did you manage to make it work? If I may suggest another approach, you can try to transform your 1D audio signals into 2D signals (for instance by computing their spectrograms) and then apply a convolutional neural network to those 2D signals. This allows you to use powerful CNN architectures, which, from my experience, lead to better results for audio classification.
st182728
Thank you very much for your help! I’ll have to try the code first thing tomorrow morning. As for the 2D spectrogram transformation, it’s a different approach that I’m not currently applying. In fact, those 13 features above are the computed MFCCs (Mel-Frequency Cepstral Coefficients), which I have to use for my project.
st182729
Hi, I have a question. I have a dataset of audio files that I’d like to convert into mel spectrograms, and I want to use the torchaudio library to convert the audio into tensors directly. I’ve seen some people do this by saving the spectrograms as images, and I’d like to bypass that step and train directly on tensors. My question is: how should I go about creating a DataLoader so that I can do this computationally expensive operation while taking advantage of the fact that each folder is the label that contains all the training data for that class? The end goal is to create a classification algorithm. Thanks so much!
st182730
You could create a custom Dataset as explained in this tutorial and apply the transformation on each sample in the __getitem__ of this class. To create the class indices based on the folder structure you could reuse the logic from DatasetFolder.
st182731
Similar to the previous answer, you can also check out the audio classification tutorial and update the line tensors += [waveform] in collate_fn to tensors += [transform(waveform)], where transform is whatever transform you want. If your goal is to apply the transform, save the transformed waveform to disk to avoid recomputing it later, and then create a new dataset with this, then you could also try to dynamically cache your dataset using something like diskcache_iterator.
st182732
You can use something simple like this:
from torch.utils.data import Dataset as TorchDataset

class SpectrogramDataset(TorchDataset):
    def __init__(self, file_label_ds, process_func, audio_path=""):
        self.ds = file_label_ds
        self.process_func = process_func
        self.audio_path = audio_path

    def __getitem__(self, index):
        file, label = self.ds[index]
        x = self.process_func(self.audio_path + file)
        return x, file, label

    def __len__(self):
        return len(self.ds)

file_label_ds is a dataset that gives you the file name and label. process_func is a function that takes the full audio path and returns the spectrogram. PS: this lets you extract the spectrograms in parallel on the CPU (if you have num_workers > 0); this won’t work if you use the GPU to extract the spectrograms.
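For instance, one way this could be wired up with torchaudio transforms and a DataLoader (the file names, path, and equal-length assumption below are made up for illustration):
import torchaudio
from torch.utils.data import DataLoader

def process_func(path):
    waveform, sample_rate = torchaudio.load(path)
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate)(waveform)
    return torchaudio.transforms.AmplitudeToDB()(mel)

# hypothetical (file name, label) pairs; clips are assumed equally long so the
# default collate can stack them (otherwise pad them in a custom collate_fn)
file_label_ds = [("dog_bark_01.wav", 0), ("siren_07.wav", 1)]
dataset = SpectrogramDataset(file_label_ds, process_func, audio_path="data/audio/")
loader = DataLoader(dataset, batch_size=2, num_workers=2)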
st182733
Great stuff, I truly appreciate the pointers. I did a rough implementation based on your suggestions! I have a small issue, and I’d like to confirm whether I’m thinking about this in the right way. I saw some examples where the pre-processing step converts the audio into images, but I’d like to convert straight into a tensor. I’d also like to keep the dataset at a fixed size, but I have audio files with different durations. How can I accomplish this in the context of spectrograms? If I understood correctly, there are two different ways I can create this dataloader: one is by creating a class and overriding the __getitem__ method, and the other is passing a collate_fn method. Is this a fair assessment?
st182734
I use my custom dataset class to convert audio files to mel- Spectrogram images. the shape will be padded to (128,1024). I have 10 classes. after a while in the first epoch, my network will be crashed due to this error: Current run is terminating due to exception: Expected hidden size (1, 7, 32), got [1, 16, 32] Engine run is terminating due to exception: Expected hidden size (1, 7, 32), got [1, 16, 32] Traceback (most recent call last): File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3418, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-b8f3a45f8e35>", line 1, in <module> runfile('/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py', wdir='/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools') File "/home/omid/OMID/program/pycharm-professional-2020.2.4/pycharm-2020.2.4/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/home/omid/OMID/program/pycharm-professional-2020.2.4/pycharm-2020.2.4/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 60, in <module> main() File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 56, in main train(cfg) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/tools/train_net.py", line 35, in train do_train( File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/engine/trainer.py", line 79, in do_train trainer.run(train_loader, max_epochs=epochs) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 702, in run return self._internal_run() File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 775, in _internal_run self._handle_exception(e) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception raise e File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 745, in _internal_run time_taken = self._run_once_on_dataset() File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 850, in _run_once_on_dataset self._handle_exception(e) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception raise e File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/engine.py", line 833, in _run_once_on_dataset self.state.output = self._process_function(self, self.state.batch) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/ignite/engine/__init__.py", line 103, in _update y_pred = model(x) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/modeling/model.py", line 113, in forward x, h1 = self.gru1(x, h0) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", 
line 819, in forward self.check_forward_args(input, hx, batch_sizes) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 229, in check_forward_args self.check_hidden_size(hidden, expected_hidden_size) File "/home/omid/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 223, in check_hidden_size raise RuntimeError(msg.format(expected_hidden_size, list(hx.size()))) RuntimeError: Expected hidden size (1, 7, 32), got [1, 16, 32] my network is : import torch import torch.nn as nn import torch.nn.functional as F print('cuda', torch.cuda.is_available()) class MusicClassification(nn.Module): def __init__(self, cfg): super(MusicClassification, self).__init__() device = cfg.MODEL.DEVICE num_class = cfg.MODEL.NUM_CLASSES self.np_layers = 4 self.np_filters = [64, 128, 128, 128] self.kernel_size = (3, 3) self.pool_size = [(2, 2), (4, 2)] self.channel_axis = 1 self.frequency_axis = 2 self.time_axis = 3 # self.h0 = torch.zeros((1, 16, 32)).to(device) self.bn0 = nn.BatchNorm2d(num_features=self.channel_axis) self.bn1 = nn.BatchNorm2d(num_features=self.np_filters[0]) self.bn2 = nn.BatchNorm2d(num_features=self.np_filters[1]) self.bn3 = nn.BatchNorm2d(num_features=self.np_filters[2]) self.bn4 = nn.BatchNorm2d(num_features=self.np_filters[3]) self.conv1 = nn.Conv2d(1, self.np_filters[0], kernel_size=self.kernel_size) self.conv2 = nn.Conv2d(self.np_filters[0], self.np_filters[1], kernel_size=self.kernel_size) self.conv3 = nn.Conv2d(self.np_filters[1], self.np_filters[2], kernel_size=self.kernel_size) self.conv4 = nn.Conv2d(self.np_filters[2], self.np_filters[3], kernel_size=self.kernel_size) self.max_pool_2_2 = nn.MaxPool2d(self.pool_size[0]) self.max_pool_4_2 = nn.MaxPool2d(self.pool_size[1]) self.drop_01 = nn.Dropout(0.1) self.drop_03 = nn.Dropout(0.3) self.gru1 = nn.GRU(input_size=128, hidden_size=32, batch_first=True) self.gru2 = nn.GRU(input_size=32, hidden_size=32, batch_first=True) self.activation = nn.ELU() self.dense = nn.Linear(32, num_class) self.softmax = nn.LogSoftmax(dim=1) def forward(self, x): # x [16, 1, 128,938] x = self.bn0(x) # x [16, 1, 128,938] x = F.pad(x, (0, 0, 2, 1)) # x [16, 1, 131,938] x = self.conv1(x) # x [16, 64, 129,936] x = self.activation(x) # x [16, 64, 129,936] x = self.bn1(x) # x [16, 64, 129,936] x = self.max_pool_2_2(x) # x [16, 64, 64,468] x = self.drop_01(x) # x [16, 64, 64,468] x = F.pad(x, (0, 0, 2, 1)) # x [16, 64, 67,468] x = self.conv2(x) # x [16, 128, 65,466] x = self.activation(x) # x [16, 128, 65,466] x = self.bn2(x) # x [16, 128, 65,455] x = self.max_pool_4_2(x) # x [16, 128, 16,233] x = self.drop_01(x) # x [16, 128, 16,233] x = F.pad(x, (0, 0, 2, 1)) # x [16, 128, 19,233] x = self.conv3(x) # x [16, 128, 17,231] x = self.activation(x) # x [16, 128, 17,231] x = self.bn3(x) # x [16, 128, 17,231] x = self.max_pool_4_2(x) # x [16, 128, 4,115] x = self.drop_01(x) # x [16, 128, 4,115] x = F.pad(x, (0, 0, 2, 1)) # x [16, 128, 7,115] x = self.conv4(x) # x [16, 128, 5,113] x = self.activation(x) # x [16, 128, 5,113] x = self.bn4(x) # x [16, 128, 5,113] x = self.max_pool_4_2(x) # x [16, 128, 1,56] x = self.drop_01(x) # x [16, 128, 1,56] x = x.permute(0, 3, 1, 2) # x [16, 56, 128,1] resize_shape = list(x.shape)[2] * list(x.shape)[3] # x [16, 128, 56,1], reshape size is 128 x = torch.reshape(x, (list(x.shape)[0], list(x.shape)[1], resize_shape)) # x [16, 56, 128] device = torch.device("cuda" if torch.cuda.is_available() else "cpu") h0 = torch.zeros((1, 16, 32)).to(device) x, h1 = self.gru1(x, h0) 
# x [16, 56, 32] x, _ = self.gru2(x, h1) # x [16, 56, 32] x = x[:, -1, :] x = self.dense(x) # x [16,10] x = self.softmax(x) # x [16, 10] # x = torch.argmax(x, 1) return x my dataset is : from __future__ import print_function, division import os import librosa import matplotlib.pyplot as plt import numpy as np import torch import torchaudio from sklearn.preprocessing import OneHotEncoder, LabelEncoder from torch.utils.data import Dataset from utils.util import pad_along_axis print(torch.__version__) print(torchaudio.__version__) # Ignore warnings import warnings warnings.filterwarnings("ignore") plt.ion() import pathlib print(pathlib.Path().absolute()) class GTZANDataset(Dataset): def __init__(self, genre_folder='/home/omid/OMID/projects/python/mldl/NeuralMusicClassification/data/dataset/genres_original', one_hot_encoding=False, sr=16000, n_mels=128, n_fft=2048, hop_length=512, transform=None): self.genre_folder = genre_folder self.one_hot_encoding = one_hot_encoding self.audio_address, self.labels = self.extract_address() self.sr = sr self.n_mels = n_mels self.n_fft = n_fft self.transform = transform self.le = LabelEncoder() self.hop_length = hop_length def __len__(self): return len(self.labels) def __getitem__(self, index): address = self.audio_address[index] y, sr = librosa.load(address, sr=self.sr) S = librosa.feature.melspectrogram(y, sr=sr, n_mels=self.n_mels, n_fft=self.n_fft, hop_length=self.hop_length) sample = librosa.amplitude_to_db(S, ref=1.0) sample = np.expand_dims(sample, axis=0) sample = pad_along_axis(sample, 1024, axis=2) # print(sample.shape) sample = torch.from_numpy(sample) label = self.labels[index] # label = torch.from_numpy(label) print(sample.shape,label) if self.transform: sample = self.transform(sample) return sample, label def extract_address(self): label_map = { 'blues': 0, 'classical': 1, 'country': 2, 'disco': 3, 'hiphop': 4, 'jazz': 5, 'metal': 6, 'pop': 7, 'reggae': 8, 'rock': 9 } labels = [] address = [] # extract all genres' folders genres = [path for path in os.listdir(self.genre_folder)] for genre in genres: # e.g. 
./data/generes_original/country genre_path = os.path.join(self.genre_folder, genre) # extract all sounds from genre_path songs = os.listdir(genre_path) for song in songs: song_path = os.path.join(genre_path, song) genre_id = label_map[genre] # one_hot_targets = torch.eye(10)[genre_id] labels.append(genre_id) address.append(song_path) samples = np.array(address) labels = np.array(labels) # convert labels to one-hot encoding # if self.one_hot_encoding: # labels = OneHotEncoder(sparse=False).fit_transform(labels) # else: # labels = LabelEncoder().fit_transform(labels) return samples, labels and trainer : # encoding: utf-8 import logging from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator from ignite.handlers import ModelCheckpoint, Timer from ignite.metrics import Accuracy, Loss, RunningAverage def do_train( cfg, model, train_loader, val_loader, optimizer, scheduler, loss_fn, ): log_period = cfg.SOLVER.LOG_PERIOD checkpoint_period = cfg.SOLVER.CHECKPOINT_PERIOD output_dir = cfg.OUTPUT_DIR device = cfg.MODEL.DEVICE epochs = cfg.SOLVER.MAX_EPOCHS model = model.to(device) logger = logging.getLogger("template_model.train") logger.info("Start training") trainer = create_supervised_trainer(model, optimizer, loss_fn, device=device) evaluator = create_supervised_evaluator(model, metrics={'accuracy': Accuracy(), 'ce_loss': Loss(loss_fn)}, device=device) checkpointer = ModelCheckpoint(output_dir, 'mnist', None, n_saved=10, require_empty=False) timer = Timer(average=True) trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpointer, {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}) timer.attach(trainer, start=Events.EPOCH_STARTED, resume=Events.ITERATION_STARTED, pause=Events.ITERATION_COMPLETED, step=Events.ITERATION_COMPLETED) RunningAverage(output_transform=lambda x: x).attach(trainer, 'avg_loss') @trainer.on(Events.ITERATION_COMPLETED) def log_training_loss(engine): iter = (engine.state.iteration - 1) % len(train_loader) + 1 if iter % log_period == 0: logger.info("Epoch[{}] Iteration[{}/{}] Loss: {:.2f}" .format(engine.state.epoch, iter, len(train_loader), engine.state.metrics['avg_loss'])) @trainer.on(Events.EPOCH_COMPLETED) def log_training_results(engine): evaluator.run(train_loader) metrics = evaluator.state.metrics avg_accuracy = metrics['accuracy'] avg_loss = metrics['ce_loss'] logger.info("Training Results - Epoch: {} Avg accuracy: {:.3f} Avg Loss: {:.3f}" .format(engine.state.epoch, avg_accuracy, avg_loss)) if val_loader is not None: @trainer.on(Events.EPOCH_COMPLETED) def log_validation_results(engine): evaluator.run(val_loader) metrics = evaluator.state.metrics avg_accuracy = metrics['accuracy'] avg_loss = metrics['ce_loss'] logger.info("Validation Results - Epoch: {} Avg accuracy: {:.3f} Avg Loss: {:.3f}" .format(engine.state.epoch, avg_accuracy, avg_loss) ) # adding handlers using `trainer.on` decorator API @trainer.on(Events.EPOCH_COMPLETED) def print_times(engine): logger.info('Epoch {} done. Time per batch: {:.3f}[s] Speed: {:.1f}[samples/s]' .format(engine.state.epoch, timer.value() * timer.step_count, train_loader.batch_size / timer.value())) timer.reset() trainer.run(train_loader, max_epochs=epochs)
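One likely culprit (a guess, not a confirmed diagnosis): h0 is hard-coded to a batch size of 16, while the failing batch apparently contains only 7 samples, e.g. the last, smaller batch of an epoch. A minimal sketch of the mismatch and of deriving the hidden state from the actual input instead:
import torch

gru = torch.nn.GRU(input_size=128, hidden_size=32, batch_first=True)
x = torch.randn(7, 56, 128)              # a final batch that is smaller than 16

# h0 = torch.zeros(1, 16, 32)            # hard-coded batch size of 16
# gru(x, h0)                             # -> Expected hidden size (1, 7, 32), got [1, 16, 32]

h0 = torch.zeros(1, x.size(0), 32)       # batch size taken from the input itself
out, h1 = gru(x, h0)                     # works for any batch size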
st182735
Hey guys, I need to install the torchaudio library from source because I had to install PyTorch from source too (NPACK problem). So my question is how to install torchaudio from source, because when I try to install this library via conda or pip my PyTorch source build is deleted and the normal PyTorch is downloaded. In order to avoid the uninstallation of my PyTorch build, I have to install torchaudio from source. Thanks for every suggestion and help in advance :)
st182736
The install instructions are found here. Let us know if you encounter any issues.
st182737
Hey @ptrblck, firstly thank you very very much for your help:) I am running in the first seconds of the installation process in this error: (after typing the command for the linux installation: BUILD_SOX=1 python setup.py install) -- Building version 0.9.0a0+b5d8027 running install running bdist_egg running egg_info writing torchaudio.egg-info/PKG-INFO writing dependency_links to torchaudio.egg-info/dependency_links.txt writing requirements to torchaudio.egg-info/requires.txt writing top-level names to torchaudio.egg-info/top_level.txt reading manifest file 'torchaudio.egg-info/SOURCES.txt' writing manifest file 'torchaudio.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py copying torchaudio/version.py -> build/lib.linux-x86_64-3.7/torchaudio running build_ext CMake Error: CMake was unable to find a build program corresponding to "Ninja". CMAKE_MAKE_PROGRAM is not set. You probably need to select a different build tool. CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage -- Configuring incomplete, errors occurred! See also "/home/pytorch-audio/audio/build/temp.linux-x86_64-3.7/CMakeFiles/CMakeOutput.log". Traceback (most recent call last): File "setup.py", line 91, in <module> zip_safe=False, File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/jan/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run self.do_egg_install() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run cmd = self.call_command('install_lib', warn_dir=0) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command self.run_command(cmdname) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File 
"/home/pytorch-audio/audio/build_tools/setup_helpers/extension.py", line 55, in run super().run() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run _build_ext.run(self) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions self._build_extensions_serial() File "/home/anaconda3/envs/pytorch-build/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial self.build_extension(ext) File "/home/pytorch-audio/audio/build_tools/setup_helpers/extension.py", line 109, in build_extension ["cmake", str(_ROOT_DIR)] + cmake_args, cwd=self.build_temp) File "/home/anaconda3/envs/pytorch-build/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '/home/pytorch-audio/audio', '-DCMAKE_BUILD_TYPE=Release', '-DCMAKE_PREFIX_PATH=/home/anaconda3/envs/pytorch-build/lib/python3.7/site-packages/torch/share/cmake', '-DCMAKE_INSTALL_PREFIX=/home/pytorch-audio/audio/build/lib.linux-x86_64-3.7/torchaudio/', '-DCMAKE_VERBOSE_MAKEFILE=ON', '- It would be very nice if you or someone else could help me out with this:) Thanks in advance:)
st182738
Do you have the Ninja CMake generator installed on your Linux machine? See for instance here. If this is indeed needed, please feel free to open an issue on torchaudio’s GitHub suggesting to add this to the requirements in the README.
st182739
Hey @vincentqb, my machine is freezing at the end of the installation. But thank you very very much:) That fixed it for me:)
st182740
Hey @vincentqb, it’s an absolute pain to install torchaudio using BUILD_SOX=1. It is nearly impossible to install torchaudio on systems that don’t perform that well :/ (my installation breaks down at step 2 of 20).
st182741
Based on what you are saying, I recommend that you submit an issue directly on github with steps to reproduce and error output. We’ll be in a better position to investigate the issue. Do you have a link to the NPACK issue you mentioned in the description that made you install pytorch from source?
st182742
Is there any difference in training on the spectrogram of a waveform as compared to training on the waveform itself? Does training on the waveform generate better results in general?
st182744
I think the difference would be quite large, as the sampling rate could be high in the time domain and make the training quite challenging. If I remember correctly, one way to make some models work on waveforms directly was to use a stack of conv layers with a specific dilation, so that the input size was feasible to process in the end.
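A rough, generic sketch of that idea (not any specific published model): with kernel size 2 and dilations doubling each layer, eight layers already cover 256 raw samples per output step.
import torch
import torch.nn as nn

layers, in_ch = [], 1
for i in range(8):                               # dilations 1, 2, 4, ..., 128
    layers += [nn.Conv1d(in_ch, 32, kernel_size=2, dilation=2 ** i), nn.ReLU()]
    in_ch = 32
net = nn.Sequential(*layers)

x = torch.randn(1, 1, 16000)                     # one second of 16 kHz audio
print(net(x).shape)                              # torch.Size([1, 32, 15745])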
st182745
When getting the spectrogram of the waveform, is there any difference between torchaudio.transforms.Spectrogram and torch.stft? Also, is there an equivalent of torch.istft for torchaudio.transforms.Spectrogram?
st182746
@ptrblck - Would you happen to know any audio models that use this technique? Also, is there any advantage to training on time domain data using dilated convolution over just training on time/frequency domain data such as STFT?
st182747
I was thinking about WaveNet in my previous post. This paper is a bit older by now, so I guess there are new insights today about advantages and shortcomings of time- vs. frequency-domain approaches.
st182748
@Mole_Turner A lot of the more recent models I’ve read about don’t resort to using a time-frequency (T-F) representation of the input wave signal (e.g. an STFT representation). Also, I read in Supervised Speech Separation Based on Deep Learning: An Overview that end-to-end speech separation methods like temporal mapping (which don’t require resorting to a T-F representation) have the following advantage:
A potential advantage of this approach is to circumvent the need to use the phase of noisy speech in reconstructing enhanced speech, which can be a drag for speech quality, particularly when input SNR is low.
and that
As a convolution operator is the same as a filter or a feature extractor, CNNs appear to be a natural choice for temporal mapping.
st182749
Hi. I am working with audio data. The code that I am running takes the X and label data from the dataloader, sends them to the GPU with X.cuda() and label.cuda(), and then passes them through the forward graph. The dataset size in .npy files is around 8 GB. My machine has an RTX 2060, which has 6 GB of memory. So if I run it on my GPU, it processes some batches and runs out of memory, although the code runs fine on Colab, which has a Tesla T4 with 15 GB of memory. The reason for this behavior is, I am guessing, that by the first iteration the code tries to force all the tensors onto the GPU, but the size of the tensors is greater than the GPU RAM; that’s why CUDA runs out of memory. Here is what I want to do: I want to load a tensor to the GPU, compute the forward and backward graph, and then send it back to the CPU again. But I know that operations like data.cuda() and data.cpu() are expensive. So I want them to run in parallel with the forward and backward pass (model(input) to be specific). Is that possible? If so, how? Thanks!
st182750
Write a custom data loader to do lazy loading, that is, at __init__ just load the list of data (this could be path information or row/column/line information for each data point). The actual .npy data is not loaded yet, and with the help of that list you load the data batch-wise in __getitem__. Check the link below, they do something very similar!
Loading huge data functionality
Just define a Dataset object that only loads a list of files in __init__, and loads them every time __getitem__ is called. Then, wrap it in a torch.utils.data.DataLoader with multiple workers, and you’ll have your files loaded lazily in parallel.
class MyDataset(torch.utils.data.Dataset):
    def __init__(self):
        self.data_files = os.listdir('data_dir')
        self.data_files.sort()

    def __getitem__(self, idx):
        return load_file(self.data_files[idx])

    def __len__(self):
        r…
st182751
Yes, but lazy loading is for the case where the dataset is huge. My dataset is not exactly huge, though; it just doesn’t fit on the GPU. So what I need is asynchronous CPU-to-GPU data transfer and vice versa. I don’t think that is covered in the link you mentioned.
st182752
Even if you don’t have a large dataset but still can’t fit it on a GPU, you can use lazy loading to mitigate this issue. One thing I didn’t notice properly from your question earlier is this line: “…it processes some batches and runs out of memory…” This won’t happen unless you have a variable batch size or data points that have different memory sizes. One plausible reason could be that variables that you might potentially not be using are getting accumulated on the GPU over steps/epochs. See if the following helps: https://pytorch.org/docs/stable/notes/faq.html
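Regarding the asynchronous-transfer part of the question, a common pattern is to keep the dataset on the CPU and move only the current batch, using pinned host memory so the copy can overlap with computation (the tensors below are placeholders):
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(10000, 13), torch.randint(0, 51, (10000,)))
loader = DataLoader(dataset, batch_size=128, shuffle=True,
                    num_workers=2, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for X, label in loader:
    X = X.to(device, non_blocking=True)          # asynchronous copy from pinned memory
    label = label.to(device, non_blocking=True)
    # forward/backward here; per-batch tensors are freed once they go out of scope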
st182753
Hi all, I’m trying to implement a filter that runs over a spectrogram’s frequencies for each time frame and filters out inharmonic frequencies. What I need is, given a time frame, for each frequency k in this time frame, to access the frequencies that are multiples of k (the harmonics), then perform some operation over them. So for frequency k, k = 1, …, F, I need to access indexes k, 2k, 3k, …, N*k (N in total), perform some operation, and set the result to position k (the original frequency bin). Same calculation for all time frames. In summary, it would be something like this:
spec[k, :] = operation(spec[multiple_indexes, :])
Basically it’s like running an operation in a sliding window starting at each frequency, but for each frequency this sliding window is spaced differently (spaced by the frequency value). I can do this iteratively, but I’m wondering if there is a vectorized way of doing this. I was thinking about torch.unfold, but it can only work on sequential patches, not with an increasing stride. Any suggestions? Thanks!
st182754
I’m gonna assume you want to run 1D convolutions just on those specific bins rather than 2D convolutions around the harmonics. If that is the case, you can preset a slice operator which retrieves those harmonics, then run a 2D convolution with kernel size 1 for the frequency. With proper padding you can keep the shape and then use the preset slice to “paste” back the values.
st182755
Kind of like this:
import torch

N = 1
T = 5
pad = (0, T // 2)
convolution = torch.nn.Conv2d(in_channels=1, out_channels=N, kernel_size=(1, T), padding=pad)
sp = torch.rand(256, 128)
preset_slice = [2 * i for i in range(128)]  # Choose your own
harmonics = sp[preset_slice]
processed_harmonics = convolution(harmonics[None, None, ...])
new_sp = sp.clone()
new_sp[preset_slice] = processed_harmonics[0, 0]
In case of not using convolutions, you can handle something similar with fold and unfold, I would say.
st182756
I referred to the TDNN, TDNN-LSTM, and TDNN-Attention models provided by Kaldi. I wanted to use these to implement the model with PyTorch, but it was difficult to implement the following:
delay: the delay to be used in the recurrence of LSTMs
decay-time: an approximate maximum on how many frames can be remembered via summation into the cell contents
projection-dim: the output dimension of the recurrent-projection-matrix
Does anyone have a GitHub repository for reference, or has anyone implemented such a model? I’d appreciate it if you could show me the implementation code.
st182757
Currently, we are a group doing a project about implementing WaveNet in a Tacotron2 → WaveNet → ASR (Given by firm) for midterm project. We are all novices to PyTorch, but recommended to try this library for constructing our WaveNet. We have a problem with the padding and the F.cross_entropy problem for a given .wav-file. The main issue is when we compute the loss function. Our output (from WaveNet) is a tensor of shape: output = tensor([1, 256, 225332]) # [batch_size, sample_size, audio_length] input = tensor([1, 256, 225360]) There is a problem here, and from what I can see and talk to my supervisor about it is padding the input of the WaveNet. (Cross_entropy wants (N, C) as input and (N) as target, from what I gather, and the dimensions are wrong) He said “use ‘same’ padding”, but that is currently only operable in TF/Keras as far as I know. We’ve tried to read across multiple posts, but since we’re novices, we can’t seem to figure it out. Any help is appreciated. This is our WaveNet, which probably has some issues (particularly padding and perhaps causal convolution seems iffy?). """ Wavenet model """ from torch import nn import torch #TODO: Add local and global conditioning def initialize(m): """ Initialize CNN with Xavier_uniform weight and 0 bias. """ if isinstance(m, torch.nn.Conv1d): nn.init.xavier_uniform_(m.weight) nn.init.constant_(m.bias, 0.0) class CausalConv1d(torch.nn.Module): """ Causal Convolution for WaveNet - Jakob """ def __init__(self, in_channels, out_channels, kernel_size, dilation = 1, bias = True): super(CausalConv1d, self).__init__() # padding=1 for same size(length) between input and output for causal convolution self.dilation = dilation self.kernel_size = kernel_size self.in_channels = in_channels self.out_channels = out_channels self.padding = padding = (kernel_size-1) * dilation # kernelsize = 2, -1 * dilation = 1, = 1. - Jakob. 
self.conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, padding=padding, dilation=dilation, bias=bias) # Fixed for WaveNet but not sure def forward(self, x): output = self.conv(x) if self.padding != 0: output = output[:, :, :-self.padding] return output class Wavenet(nn.Module): def __init__(self, layers=3, blocks=2, dilation_channels=32, residual_block_channels=512, skip_connection_channels=512, output_channels=256, output_size=32, kernel_size=3 ): super(Wavenet, self).__init__() self.layers = layers self.blocks = blocks self.dilation_channels = dilation_channels self.residual_block_channels = residual_block_channels self.skip_connection_channels = skip_connection_channels self.output_channels = output_channels self.kernel_size = kernel_size self.output_size = output_size # initialize dilation variables receptive_field = 1 init_dilation = 1 # List of layers and connections self.dilations = [] self.residual_convs = nn.ModuleList() self.filter_conv_layers = nn.ModuleList() self.gate_conv_layers = nn.ModuleList() self.skip_convs = nn.ModuleList() # First convolutional layer self.first_conv = CausalConv1d(in_channels=self.output_channels, out_channels=residual_block_channels, kernel_size = 2) # Building the Modulelists for the residual blocks for b in range(blocks): additional_scope = kernel_size - 1 new_dilation = 1 for i in range(layers): # dilations of this layer self.dilations.append((new_dilation, init_dilation)) # dilated convolutions self.filter_conv_layers.append(nn.Conv1d(in_channels=residual_block_channels, out_channels=dilation_channels, kernel_size=kernel_size, dilation=new_dilation)) self.gate_conv_layers.append(nn.Conv1d(in_channels=residual_block_channels, out_channels=dilation_channels, kernel_size=kernel_size, dilation=new_dilation)) # 1x1 convolution for residual connection self.residual_convs.append(nn.Conv1d(in_channels=dilation_channels, out_channels=residual_block_channels, kernel_size=1)) # 1x1 convolution for skip connection self.skip_convs.append(nn.Conv1d(in_channels=dilation_channels, out_channels=skip_connection_channels, kernel_size=1)) # Update receptive field and dilation receptive_field += additional_scope additional_scope *= 2 init_dilation = new_dilation new_dilation *= 2 # Last two convolutional layers self.last_conv_1 = nn.Conv1d(in_channels=skip_connection_channels, out_channels=skip_connection_channels, kernel_size=1) self.last_conv_2 = nn.Conv1d(in_channels=skip_connection_channels, out_channels=output_channels, kernel_size=1) #Calculate model receptive field and the required input size for the given output size self.receptive_field = receptive_field self.input_size = receptive_field + output_size - 1 def forward(self, input): # Feed first convolutional layer with input x = self.first_conv(input) # Initialize skip connection skip = 0 # Residual block for i in range(self.blocks * self.layers): (dilation, init_dilation) = self.dilations[i] # Residual connection bypassing dilated convolution block residual = x # input to dilated convolution block filter = self.filter_conv_layers[i](x) filter = torch.tanh(filter) gate = self.gate_conv_layers[i](x) gate = torch.sigmoid(gate) x = filter * gate # Feed into 1x1 convolution for skip connection s = self.skip_convs[i](x) #Adding skip & Match size with decreasing dimensionality of x if skip is not 0: skip = skip[:, :, -s.size(2):] skip = s + skip # Sum all skip connections # Feed into 1x1 convolution for residual connection x = self.residual_convs[i](x) #Adding Residual & Match size with decreasing 
dimensionality of x x = x + residual[:, :, dilation * (self.kernel_size - 1):] # print(x.shape) x = torch.relu(skip) #Last conv layers x = torch.relu(self.last_conv_1(x)) x = self.last_conv_2(x) soft = torch.nn.Softmax(dim=1) x = soft(x) return x The training file: model = Wavenet(layers=3,blocks=2,output_size=32).to(device) model.apply(initialize) # xavier_uniform_ : Does this work? model.train() optimizer = optim.Adam(model.parameters(), lr=0.0003) for i, batch in tqdm(enumerate(train_loader)): mu_enc_my_x = encode_mu_law(x=batch, mu=256) input_tensor = one_hot_encoding(mu_enc_my_x) input_tensor = input_tensor.to(device) output = model(input_tensor) # TODO: Inspect input/output formats, maybe something wrong.... loss = F.cross_entropy(output.T.reshape(-1, 256), input_tensor[:,:,model.input_size - model.output_size:].long().to(device)) # subtract receptive field instead of pad it, workaround for quick debugging of loss-issue. print("\nLoss:", loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() if i % 1000 == 0: print("\nSaving model") torch.save(model.state_dict(), "wavenet.pt")
st182758
Adding the padding to the conv layers is the right approach, but I’m unsure why you are slicing it in the forward:
def forward(self, x):
    output = self.conv(x)
    if self.padding != 0:
        output = output[:, :, :-self.padding]
    return output
This would remove the “right hand side” of the padded output, so is this intended? Also, nn.CrossEntropyLoss expects raw logits, so remove the nn.Softmax in your model and just pass the output of the last layer to the criterion.
st182759
Yeah, the slicing definitely seems fishy. We will look into that, thanks a lot. Regarding the softmax, it is part of the WaveNet structure as the output passes through it. The architecture is below:
(WaveNet architecture figure, 1013×538)
Where should the softmax then be implemented, if the output cannot be passed through said function for our loss? Appreciate the help!!! (Been stuck here for a week)
st182760
Yeah, be a bit careful about the architectures presented in papers, as they might not reflect the corresponding framework implementation. nn.CrossEntropyLoss will apply F.log_softmax and nn.NLLLoss internally, so you should not apply an nn.Softmax layer at the end, since the workflow would then be: (model -> out -> softmax) -> (log_softmax -> nll_loss) where the first part is inside the model and the second in the criterion. If the mentioned Softmax in the Figure is only used for the output of the model (not internally between layers), then just remove it.
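A minimal sketch of the shapes nn.CrossEntropyLoss expects, with raw logits and no explicit softmax:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                 # applies log_softmax + NLLLoss internally
logits = torch.randn(8, 256, requires_grad=True)  # raw model output: (N, num_classes)
target = torch.randint(0, 256, (8,))              # class indices: (N,)
loss = criterion(logits, target)
loss.backward()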
st182761
Alright, that makes sense. We’re essentially double-softmaxing. We’ll look into that. Turning back to the padding problem, I can see what you mean with the slicing as mentioned earlier. However, I’ve looked a bit at the output when I don’t slice as before and just pass the input through the convolutional layer in the forward. After running it through the model, I still have the dimensionality issue:
input = tensor([1, 256, 225360])
output = tensor([1, 256, 225331])  # This is -29, which is exactly our receptive field.
It still evades us when and how to exactly pad a tensor explicitly. Our intuition says it is before we pass it to the convolutional layer, i.e. in my forward(self, x) function:
class CausalConv1d(torch.nn.Module):
    """
    Causal Convolution for WaveNet
    Causality can be introduced with padding as (kernel_size - 1) * dilation
    (see Keras documentation) or it can be introduced as follows according to Golbin.
    """
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1, bias=True):
        super(CausalConv1d, self).__init__()
        # padding=1 for same size (length) between input and output for causal convolution
        self.dilation = dilation
        self.kernel_size = kernel_size
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.padding = (kernel_size - 1) * dilation
        self.conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size,
                                    dilation=dilation, bias=bias)  # Fixed for WaveNet but not sure

    def forward(self, x):
        output = self.conv(x)
        return output
256 is our “class” parameter, and we know that we can transpose and reshape the tensor to get it into the format torch.nn.functional.cross_entropy needs. That would be the last brick in the wall. Yet again, thank you for your patience and help, sir. =)
st182762
Usually you would add the padding directly to the initialization of the conv layer. Unfortunately, there is no option to use the 'same' padding argument (there should be some methods to calculate it automatically online for convenience) and you would thus have to calculate the padding manually (as seems to be the case in your CausalConv1d). However, since you are currently not passing the self.padding to nn.Conv1d, the spatial size of the output might be smaller than the input.
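A minimal sketch of that fix (this mirrors the snippet above, with the padding actually forwarded to nn.Conv1d and the extra right-hand samples trimmed afterwards):

import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1, bias=True):
        super().__init__()
        self.padding = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=self.padding, dilation=dilation, bias=bias)

    def forward(self, x):
        out = self.conv(x)
        # The symmetric padding also adds self.padding samples on the right;
        # removing them keeps the output length equal to the input length (causal).
        if self.padding != 0:
            out = out[:, :, :-self.padding]
        return out

x = torch.randn(1, 256, 1000)
print(CausalConv1d(256, 32, kernel_size=2, dilation=4)(x).shape)  # torch.Size([1, 32, 1000])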
st182763
Hi, I am training an attention-based model. Multiple extractors are applied to my time-series input, resulting in multichannel signals with different numbers of timesteps. Apparently, I have to align/merge those channels before I can feed them to the model. I’d like to know what the common practice is for this kind of task. I can think of a few ways to do this:

a) padding with 0s

[a0 a1 a2 a3 a4]
[b0 b1]
[c0 c1 c2]

will become

[a0 a1 a2 a3 a4]
[b0 b1 0  0  0 ]
[c0 c1 c2 0  0 ]

The most obvious problem with this approach is that the model has to align the timesteps of the different channels itself, especially when positional encoding is applied evenly across channels.

b) align signals with preprocessing

[a0 a1 a2 a3 a4]
[b0 b1]
[c0 c1 c2]

will become

[a0 a1 a2 a3 a4]
[b0 0  0  0  b1]
[c0 0  c1 0  c2]

This makes more sense because the relative positions are preserved, but I wonder whether I will have to account for the higher-frequency components introduced by bluntly inserting 0s. Are there documents targeting these questions?
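For concreteness, a small sketch of option (a), plus interpolation as one possible alternative to zero insertion for option (b) (the channel values below are placeholders, not from the post):

import torch
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence

# Three channels with different numbers of timesteps.
a = torch.randn(5)
b = torch.randn(2)
c = torch.randn(3)

# Option (a): right-pad everything to the longest channel.
padded = pad_sequence([a, b, c], batch_first=True, padding_value=0.0)
print(padded.shape)  # torch.Size([3, 5])

# Option (b)-like alignment: stretch the shorter channel onto the common time grid
# via interpolation instead of inserting zeros, which avoids the high-frequency artifacts.
stretched = F.interpolate(b.view(1, 1, -1), size=5, mode="linear", align_corners=True)
print(stretched.shape)  # torch.Size([1, 1, 5])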
st182764
I am getting an issue with my model using an unexpectedly large amount of RAM (around 15 GB per sample when using 1 transformer block and 1 attention head). I have developed a UNet + Transformer architecture where the bottom of the UNet contains the transformer blocks. I want to use 6 transformer blocks with 6-headed multihead attention. The attention and encoding sizes are both 16 and there are 1152 words. At the bottom of the UNet, the dimensions of the image have been downsampled to 64x18 with 128 channels and a batch size of 2. Before feeding into the transformer, I join the batch and channel dimensions and I also flatten the image before embedding into a 16-size vector for each word. This means that the dimensions of the inputs to the embedding layer are 256 (batches) x 1 (used to be channels) x 1152 (flattened image) x 1 and the outputs are 256 x 1 x 1152 x 16 (embedding size). This is then passed to the transformer. So far, none of these numbers stand out to me as being problematically large, though I feel like I am missing something. Is there anything that stands out as wasting or just generally consuming a lot of memory? Also, is there any way to conveniently monitor peak GPU VRAM usage by module?
For the convolutional portions, I am using 16, 32, 64, and 128 for the number of channels and I am starting with relatively large images as inputs (1024 x 302). The image is complex, so I am using complex versions of many of the layers found here: GitHub - wavefrontshaping/complexPyTorch: A high-level toolbox for using complex valued neural networks in PyTorch.
These are my convolutional encoder and decoder blocks for downsampling and upsampling in the UNet:

class ConvEncoderBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(ConvEncoderBlock, self).__init__()
        self.conv1 = nn.Sequential(ComplexConv2d(
            in_channels, out_channels, kernel_size=config.encoder_kernel_size,
            padding=config.encoder_padding), ComplexReLU())
        self.conv2 = nn.Sequential(ComplexConv2d(
            out_channels, out_channels, kernel_size=config.encoder_kernel_size,
            padding=config.encoder_padding), ComplexReLU())
        self.downsample = ComplexMaxPool2d(2)

    def forward(self, x):
        x = self.conv1(x)
        skip = self.conv2(x)
        x = self.downsample(skip)
        return x, skip


class DecoderBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(DecoderBlock, self).__init__()
        upsample_channels = in_channels * 2 // 3
        skip_channels = in_channels * 2 // 3
        self.upsample = ComplexConvTranspose2d(
            upsample_channels, upsample_channels // 2, kernel_size=2, stride=2)
        upsample_channels = upsample_channels // 2
        self.conv1 = nn.Sequential(ComplexConv2d(upsample_channels + skip_channels, out_channels,
                                                 kernel_size=config.decoder_kernel_size, stride=1,
                                                 padding=config.decoder_padding), ComplexReLU())
        self.conv2 = nn.Sequential(ComplexConv2d(out_channels, out_channels,
                                                 kernel_size=config.decoder_kernel_size, stride=1,
                                                 padding=config.decoder_padding), ComplexReLU())

    def forward(self, x, skip):
        # print('in decoder block', x.shape, skip.shape)
        # input: [batch, channel, freq, time]
        x = self.upsample(x)  # [batch, channel//2, freq*2, time*2]
        # print('catting', skip.shape, x.shape)
        x = cat(skip, x, dimension=1)
        # print('feeding into conv1', x.shape)
        x = self.conv1(x)  # [batch, channel, freq*2, time*2]
        x = self.conv2(x)  # [batch, channel, freq*2, time*2]
        # print('out of convolutionals', x.shape)
        return x

These are my transformer components:

class Embedding(nn.Module):
    def __init__(self, params, in_channels=1):
        super(Embedding, self).__init__()
        self.patch_embeddings = ComplexConv2d(
            in_channels, config.encoding_size,
            kernel_size=config.patch_size, stride=config.patch_size)
        self.positional_embeddings = nn.Parameter(torch.zeros(
            1, in_channels, config.num_patches, config.encoding_size, 2))

    # shape of x: [batch, channel, f, w] :=> type(torch.complex64)
    def forward(self, x):
        # print('embedding input shape', x.shape)
        # [batch * channels, 1, f, w] :=> type(torch.complex64)
        x = self.patch_embeddings(x)
        x = x.permute(0, 3, 2, 1)
        # [batch * channels, encoding_size, patches_per_column * w, 1] :=> type(torch.complex64)
        # [batch * channels, 1, patches_per_column * w, encoding_size] :=> type(torch.complex64)
        # x = x.permute(0, 2, 3, 1)
        # print('after permute', x.shape)
        x = torch.view_as_real(x)
        x = x + self.positional_embeddings
        x = torch.view_as_complex(x)
        # [batch * channels, 1, patches_per_column * w, encoding_size] :=> type(torch.complex64)
        return x


class AttentionHead(nn.Module):
    def __init__(self, in_channels=2, out_channels=1):
        super(AttentionHead, self).__init__()
        # self.num_heads = config.num_heads
        self.keys = ComplexLinear(
            config.encoding_size, config.attention_size, bias=config.attention_bias)
        self.queries = ComplexLinear(
            config.encoding_size, config.attention_size, bias=config.attention_bias)
        self.values = ComplexLinear(
            config.encoding_size, config.attention_size, bias=config.attention_bias)
        self.complex_map = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.dropout = nn.Dropout(config.dropout_rate)

    def forward(self, x):
        keys = self.keys(x)
        queries = self.queries(x)
        values = self.values(x)
        scores = complex_matmul(queries, keys.transpose(-1, -2))
        scores /= config.attention_size ** 0.5
        scores = torch.view_as_real(scores)
        scores = scores[:, 0, :, :]
        scores = scores.permute(0, 3, 1, 2)
        scores = self.complex_map(scores)
        scores = nn.Softmax(dim=-1)(scores)
        scores = self.dropout(scores)
        scores = torch.complex(scores, torch.zeros_like(scores))
        # print('after attention', complex_matmul(scores, values))
        return complex_matmul(scores, values)


class MSA(nn.Module):
    def __init__(self, params):
        super(MSA, self).__init__()
        self.device = params["device"]
        self.heads = nn.ModuleList([AttentionHead(2, 1) for _ in range(config.num_heads)])
        self.w = ComplexLinear(config.attention_size * config.num_heads,
                               config.encoding_size, bias=config.attention_bias)
        self.dropout = ComplexDropout(config.dropout_rate)

    def forward(self, x):
        all_heads = self.heads[0](x)
        for i, head in enumerate(self.heads[1:]):
            all_heads = torch.cat((all_heads, head(x)), dim=-1)
        x = self.w(all_heads)
        x = self.dropout(x)
        # print('after MSA', x)
        return x


class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = ComplexLinear(config.encoding_size, config.mlp_size)
        self.fc2 = ComplexLinear(config.mlp_size, config.encoding_size)
        self.activation = ComplexReLU()
        self.dropout = ComplexDropout(config.dropout_rate)

    def forward(self, x):
        x = self.fc1(x)
        x = self.activation(x)
        x = self.dropout(x)
        x = self.fc2(x)
        x = self.dropout(x)
        # print('after mlp', x)
        return x


class TransformerBlock(nn.Module):
    def __init__(self, params):
        super(TransformerBlock, self).__init__()
        self.attn_norm = NaiveComplexLayerNorm(
            (params["num_patches"], config.encoding_size), eps=config.norm_eps)
        self.attn = MSA(params)
        self.ffn_norm = NaiveComplexLayerNorm(
            (params["num_patches"], config.encoding_size), eps=config.norm_eps)
        self.ffn = MLP()

    def forward(self, x):
        # print('transformer input', x.shape)
        h = x
        x = self.attn_norm(x)
        x = self.attn(x)
        x = x + h
        h = x
        x = self.ffn_norm(x)
        x = self.ffn(x)
        x = x + h
        # print('after transformer', x)
        return x

This is how I define my convolutional layers and transformer layers together in an overarching encoder class:

class Encoder(nn.Module):
    def __init__(self, params):
        super(Encoder, self).__init__()
        self.convEncoderBlock1 = ConvEncoderBlock(1, 16)
        self.convEncoderBlock2 = ConvEncoderBlock(16, 32)
        self.convEncoderBlock3 = ConvEncoderBlock(32, 64)
        self.convEncoderBlock4 = ConvEncoderBlock(64, 128)
        self.embedding = Embedding(params=params)
        self.transformers = nn.Sequential(OrderedDict(
            [("Block " + str(i), TransformerBlock(params)) for i in range(config.num_transformers)]))
        self.unembedding = nn.Sequential(
            ComplexConv2d(config.encoding_size, 1, kernel_size=1)
        )

    def forward(self, x):
        # print('encoder input shape', x.shape)
        # Convolutional Layers
        x, skip1 = self.convEncoderBlock1(x)
        x, skip2 = self.convEncoderBlock2(x)
        x, skip3 = self.convEncoderBlock3(x)
        x, skip4 = self.convEncoderBlock4(x)

        batch_size = x.shape[0]
        freq_size = x.shape[2]
        time_size = x.shape[3]

        # print("transformer input: ", x.shape)
        x = x.reshape((batch_size * x.shape[1], 1, x.shape[2], x.shape[3]))
        # print("channels to batches: ", x.shape)
        batch_channels = x.shape[0]
        x = x.permute(0, 1, 3, 2).reshape(batch_channels, 1, freq_size * time_size, 1)
        # print("transformer words as rows: ", x.shape)
        x = self.embedding(x)
        # print("embedding output: ", x.shape)
        x = self.transformers(x)
        # print("transformer output: ", x.shape)

        # Unembedding
        x = x.permute(0, 3, 2, 1)
        # print("unembed input: ", x.shape)
        x = self.unembedding(x)
        # print("unembed output: ", x.shape)
        x = x.reshape(batch_channels, 1, time_size, freq_size).permute(0, 1, 3, 2)
        # print("reshape as spec: ", x.shape)
        x = x.reshape(
            (batch_size, x.shape[0] // batch_size, x.shape[2], x.shape[3]))
        # print("separate batch and channels: ", x.shape)
        return x, [skip1, skip2, skip3, skip4]
st182765
Hello, I was passing some mel arguments into the torchaudio.transforms.MFCC function, but it doesn’t seem to recognize the argument “center”. center is a parameter of MelSpectrogram, so I don’t understand why it’s not working.
Code:

melkwargs = {
    "center": False,
    "n_fft": n_fft,
    "hop_length": hop_length,
    "power": 2,
    "f_min": f_min,
    "f_max": f_max
}
mfcc_module = torchaudio.transforms.MFCC(sample_rate=sample_rate, n_mfcc=20, melkwargs=melkwargs)
torch_mfcc = mfcc_module(torch.tensor(waveform))

Error:
TypeError: __init__() got an unexpected keyword argument ‘center’
Any help would be appreciated.
st182766
st182767
You might need to update torchaudio to 0.8.0, as the center argument doesn’t seem to be available in previous versions.
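For reference, a quick sketch of what the call should look like once torchaudio 0.8.0 or newer is installed (the parameter values below are placeholders, not from the thread):

import torch
import torchaudio

print(torchaudio.__version__)  # should print 0.8.0 or newer for the center argument

melkwargs = {
    "center": False,
    "n_fft": 400,
    "hop_length": 160,
    "power": 2,
    "f_min": 0.0,
    "f_max": 8000.0,
}
mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=20, melkwargs=melkwargs)
out = mfcc(torch.randn(1, 16000))
print(out.shape)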
st182768
I’m making a genre classifier using the GTZAN dataset, where I’m generating overlapping patches of Melspectrograms for each input. The final output of the model is the sum (or average) output for each patch. Jupyter notebook with outputs 2 The problem is that after the second batch or so the network is not deviating from 1 class prediction (even with ‘average’ combination for the final prediction). The output from the model is also weird, because the predictions are identical for each input of the batch. Same with no activation on the output. The network is 2 conv2D layers with maxpooling and a fully connected one with 10 outputs. Optimizer: Adam with weight decay (5e-4) and lr (0.01) Loss: CrossEntropy Is the network built wrong? All the help would be much appreciated! EDIT: Fixed the issue, remember to activate your fully connected layers kids
st182769
Hi guys, I am following this project on GitHub: GitHub - LearnedVector/A-Hackers-AI-Voice-Assistant: A hackers AI voice assistant, built using Python and PyTorch. 4
It is based on PyTorch and everything has gone OK so far. To train the neural network, I use this command:

python3 train.py --sample_rate 8000 --epochs 100 --batch_size 32 --eval_batch_size 32 --lr 0.0001 --model_name ./new_wakeword_v0 --save_checkpoint_path neuralnet/checkpoints --train_data_json data/json/train.json --test_data_json data/json/test.json --num_workers 4 --hidden_size 64

I have an AMD GPU. Is there a way to use an AMD video card for training? I am installing the CUDA toolkit now.
Regards
st182770
I’ve read in Attention is All You Need that Transformers perform better than RNNs (Dual-Path RNN) in speech separation but had ten times the number of parameters. I’ve also read that it could better retain information from early inputs in the input sequence. However, how well does a Transformer network perform in real-time speech separation? Does the number of parameters and the way it deals with the inputs affect its ability to perform real-time speech separation?
st182771
Hi everyone! I am trying to build torchaudio from source. In my environment, CUDA and its related libraries (e.g. cuDNN) are installed in non-default paths. How can I pass custom build arguments to cmake via setup.py? Thanks a lot. Kind regards, GT
st182772
We do not have CUDA-specific code yet, so the torchaudio build doesn’t depend on the NVIDIA compiler. What error are you encountering? If you do run into an error with this, I’d recommend opening an issue directly on github.com/pytorch/audio 1
st182773
When creating a custom dataset loader like the one shown here 6, is it advisable to do something like

class CustomDataset(Dataset):
    def __init__(self, csv_file, root_dir):
        self.annotations = pd.read_csv(csv_file)
        self.root_dir = root_dir

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, index):
        audio_path = os.path.join(self.root_dir, self.annotations.iloc[index, 0])
        target_path = os.path.join(self.root_dir, self.annotations.iloc[index, 1])
        audio, _ = torchaudio.load(audio_path)
        target, _ = torchaudio.load(target_path)
        return audio, target

when both the input and the expected output are waveforms? In my case, I created a csv file that contains the filenames of both audios and I just load the path-like object into torchaudio.load(). Also, how do I ensure that all the audio and target tensors in a batch are of the same length when being loaded?
st182774
Well, there are two main possibilities. One is that you use a fixed length of audio: you make chunks of audio from each sample. The other option is that you use variable lengths (typically when using transformers); in that case you should rewrite the dataloader’s collate function.
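For the fixed-length option, a minimal sketch of how the chunking could look inside the dataset (the chunk length and path lists are placeholders):

import torch
import torchaudio
from torch.utils.data import Dataset

class FixedLengthAudioDataset(Dataset):
    def __init__(self, audio_paths, target_paths, chunk_len=16000):
        self.audio_paths = audio_paths
        self.target_paths = target_paths
        self.chunk_len = chunk_len

    def __len__(self):
        return len(self.audio_paths)

    def __getitem__(self, index):
        audio, _ = torchaudio.load(self.audio_paths[index])
        target, _ = torchaudio.load(self.target_paths[index])
        # Pad short files, then take a random chunk of fixed length.
        if audio.size(1) < self.chunk_len:
            pad = self.chunk_len - audio.size(1)
            audio = torch.nn.functional.pad(audio, (0, pad))
            target = torch.nn.functional.pad(target, (0, pad))
        start = torch.randint(0, audio.size(1) - self.chunk_len + 1, (1,)).item()
        return audio[:, start:start + self.chunk_len], target[:, start:start + self.chunk_len]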
st182775
What I implemented was

def batch_pad(batch):
    batch = [item.t() for item in batch]
    batch = torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=0.)
    return batch.permute(0, 2, 1)

def collate_fn(batch):
    tensors = []
    targets = []
    for waveform, target in batch:
        tensors += [waveform]
        targets += [target]
    tensors = batch_pad(tensors)
    targets = batch_pad(targets)
    return tensors, targets

Or is there some way I can use batch_pad on both tensors and targets together?
st182776
Hi I’m trying to make an autoencoder for speech data. The network’s input and output are Mel spectrograms. How can I obtain the audio waveform from the generated mel spectrogram?
st182777
https://librosa.org/doc/latest/generated/librosa.feature.inverse.mel_to_audio.html?highlight=mel%20spectrogram 91
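For example, something along these lines (the mel spectrogram here is generated from a test tone; the FFT parameters are assumptions, not from the thread):

import librosa

# mel: a (n_mels, frames) power mel spectrogram, e.g. produced by the decoder
mel = librosa.feature.melspectrogram(y=librosa.tone(440, duration=1.0), sr=22050)
audio = librosa.feature.inverse.mel_to_audio(mel, sr=22050, n_fft=2048, hop_length=512)
print(audio.shape)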
st182778
neovand: How can I obtain the audio waveform from the generated mel spectrogram? librosa.istft() 20 should also work, I think. (from Wave-U-Net utils: Wave-U-Net/Utils.py at master · f90/Wave-U-Net · GitHub 9)
st182779
As JuanFMontesinos said Librosa 1 is a great and easy way of converting Mel-spectrograms. I’ve also used WaveGlow in the past to do the same task, you can use the Nvidia implementation here 13
st182780
neovand: Hi I’m trying to make an autoencoder for speech data. The network’s input and output are Mel spectrograms. How can I obtain the audio waveform from the generated mel spectrogram?

Here’s a small example using librosa.istft from this FactorGAN 3 implementation:

def spectrogramToAudioFile(magnitude, fftWindowSize, hopSize, phaseIterations=10, phase=None, length=None):
    '''
    Computes an audio signal from the given magnitude spectrogram, and optionally an initial phase.
    Griffin-Lim is executed to recover/refine the given phase from the magnitude spectrogram.
    :param magnitude: Magnitudes to be converted to audio
    :param fftWindowSize: Size of FFT window used to create magnitudes
    :param hopSize: Hop size in frames used to create magnitudes
    :param phaseIterations: Number of Griffin-Lim iterations to recover phase
    :param phase: If given, starts ISTFT with this particular phase matrix
    :param length: If given, audio signal is clipped/padded to this number of frames
    :return:
    '''
    if phase is not None:
        if phaseIterations > 0:
            # Refine audio given initial phase with a number of iterations
            return reconPhase(magnitude, fftWindowSize, hopSize, phaseIterations, phase, length)
        # reconstructing the new complex matrix
        stftMatrix = magnitude * np.exp(phase * 1j)  # magnitude * e^(j*phase)
        audio = librosa.istft(stftMatrix, hop_length=hopSize, length=length)
    else:
        audio = reconPhase(magnitude, fftWindowSize, hopSize, phaseIterations)
    return audio

Link: FactorGAN/Utils.py at ae57301195984092ee40742273e1034f3ae27e32 · f90/FactorGAN · GitHub 7
st182781
Hi! I was wondering if there’s a proper way to learn the parameters of a sinusoid by gradient descent? Here’s an example setup:

import numpy as np
import torch
import torch.nn as nn

sr = 16000  # sample rate (not specified in the original post)
x = torch.arange(0, 5, 1 / sr)              # = 5 s of audio
y = torch.sin(2 * np.pi * 2.85 * x)         # want to fit a sinusoid with frequency 2.85
f = nn.Parameter(torch.tensor(2.0))         # frequency parameter initialized at 2.0
opt = torch.optim.AdamW([f], lr=0.001)
loss = nn.MSELoss()

for i in range(100):
    out = torch.sin(2 * np.pi * f * x)
    l = loss(out, y)
    l.backward()
    opt.step()
    opt.zero_grad()

This won’t learn the correct value for f (2.85), but rather stays around 2. Is there a way to set this up so it can learn through a periodic function? Thanks for any input!
edit: simplified code somewhat
st182782
I think evaluating the loss in the frequency domain via an FFT may be easier, because in my experience, training directly in the time domain is generally very hard.
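A rough sketch of that idea for the example above (this compares magnitude spectra instead of raw waveforms; it is one possible formulation, not the only one):

import torch

def spectral_loss(pred, target):
    # Compare magnitudes of the real FFT; phase is ignored here.
    pred_mag = torch.fft.rfft(pred).abs()
    target_mag = torch.fft.rfft(target).abs()
    return torch.nn.functional.mse_loss(pred_mag, target_mag)

# Usage inside the training loop from the post:
# l = spectral_loss(torch.sin(2 * np.pi * f * x), y)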
st182783
Hello! So I’m planning on working on a project that deals with speech enhancement and noise cancellation and I just had a few questions in mind: What are the recommended resources that I may read related to this project? Are RNNs used when dealing with audio data? What are the common loss functions when dealing with audio data?
st182784
I’m creating an incremental TTS model. The model is based on Incremental Machine Speech Chain Towards Enabling Listening while Speaking in Real-time. In this paper, they describe passing a small section of audio (windowing) for training, as shown in the diagram below:

[Figure: windowing diagram from the paper, 718×555]

What is the best way to load my data for training? (The dataset consists of Pulse-Code Modulation (PCM) and labels.)
What I’ve tried so far: for each tensor in a batch, I create a new tensor using the method shown Here, and this new tensor is used for training.

for i, batch in enumerate(train_loader):
    x, y = ttsModel.parse_batch(batch)  # send data to GPU
    for tensor in x:
        windowed_datas = window_split(tensor)
        for windowed_data in windowed_datas:
            tts_output, hd0, he0 = ttsModel(windowed_data, hd0, he0)

I already see an issue: the model is no longer training on batches but on single tensors.
I’ve also tried this approach in the torch.utils.data.DataLoader using the collate_fn from Nvidia: Tacotron 2, but I’m not sure if this is correct for the model.

class TextAudioCollate():
    """ Zero-pads model inputs and targets based on number of frames per step
    """
    def __init__(self, n_frames_per_step):
        self.n_frames_per_step = n_frames_per_step

    def __call__(self, batch):
        """Collates training batch from normalized text and mel-spectrogram
        PARAMS
        ------
        batch: [text_normalized, mel_normalized]
        """
        # Right zero-pad all one-hot text sequences to max input length
        input_lengths, ids_sorted_decreasing = torch.sort(
            torch.LongTensor([len(batch[i][1]) for i in range(len(batch))]),
            dim=0, descending=True)
        max_input_len = input_lengths[0]
        # max_input_len = 512

        text_padded = torch.LongTensor(len(batch), max_input_len)
        text_padded.zero_()
        for i in range(len(ids_sorted_decreasing)):
            text = batch[ids_sorted_decreasing[i].numpy()][1]
            text_padded[i, :text.size(0)] = text

        # Right zero-pad mel-spec
        num_mels = batch[0][0].size(0)
        max_target_len = max([batch[i][0].size(1) for i in range(len(batch))])
        if max_target_len % self.n_frames_per_step != 0:
            max_target_len += self.n_frames_per_step - max_target_len % self.n_frames_per_step
            assert max_target_len % self.n_frames_per_step == 0

        # include mel padded and gate padded
        mel_padded = torch.FloatTensor(len(batch), num_mels, max_target_len)
        mel_padded.zero_()
        gate_padded = torch.FloatTensor(len(batch), max_target_len)
        gate_padded.zero_()
        output_lengths = torch.LongTensor(len(batch))
        for i in range(len(ids_sorted_decreasing)):
            mel = batch[ids_sorted_decreasing[i].numpy()][0]
            mel_padded[i, :, :mel.size(1)] = mel
            gate_padded[i, mel.size(1)-1:] = 1
            output_lengths[i] = mel.size(1)

        # Windowing
        text_windowed = []
        for text in text_padded:
            text_windowed.append(window_split(text))
        audio_windowed = []
        for audio in mel_padded:
            audio_windowed.append(window_split(audio))

        return (text_windowed, audio_windowed)
st182785
I am researching how to use the pretrained VGGish 86 model for audio classification tasks; ideally I could have a model classifying any of the classes defined in the Google AudioSet. I came across a nice pytorch port 68 for generating audio features. The original model generates only audio features as well. The original team suggests the following way to proceed:

As a feature extractor: VGGish converts audio input features into a semantically meaningful, high-level 128-D embedding which can be fed as input to a downstream classification model. The downstream model can be shallower than usual because the VGGish embedding is more semantically compact than raw audio features.
So, for example, you could train a classifier for 10 of the AudioSet classes by using the released embeddings as features. Then, you could use that trained classifier with any arbitrary audio input by running the audio through the audio feature extractor and VGGish model provided here, passing the resulting embedding features as input to your trained model. vggish_inference_demo.py shows how to produce VGGish embeddings from arbitrary audio.

I’m not sure how to go about getting the released embeddings and using them for training in PyTorch. I’m also not sure how to translate the embeddings into a classification. Could anyone kindly share some pointers? Thanks!
st182786
Based on the description you’ve posted, it seems the authors call the output features embeddings. This might be a bit confusing, as there are nn.Embedding layers, which are apparently not meant here.
If I understand the use case correctly, you could store each output feature of the VGGish model with its corresponding target, create a new classification model, and use these output features + targets to train this new classifier. According to their claim, this classifier can be “shallower”, as the “embeddings” are so great.
Let me know if that makes sense.
st182787
you could store each output feature of the VGGish model with its corresponding target, create a new classification model, and use these output features + targets to train this new classifier.

Cool. Thanks for the tip! Would these seem like reasonable steps to train a new model?

1. Download audio wav samples from audioset 32 as training data
2. Send each audio wav sample through VGGish to get a corresponding 128-dimension vector (output feature)
3. Define a Dataset comprising the VGGish output feature as input (x) and the corresponding target (y)
4. Using nn.Module, define a “shallow” model with a single layer, say Linear()
5. Train
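A minimal sketch of steps 3 to 5, assuming the 128-d embeddings and integer targets have already been extracted and stored as tensors (the names and sizes below are placeholders):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

embeddings = torch.randn(1000, 128)       # stand-in for precomputed VGGish features
targets = torch.randint(0, 10, (1000,))   # stand-in for 10 AudioSet classes

loader = DataLoader(TensorDataset(embeddings, targets), batch_size=64, shuffle=True)

classifier = nn.Linear(128, 10)           # the "shallow" downstream model
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(classifier(x), y)
        loss.backward()
        optimizer.step()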
st182788
Hi, I want to get a 128-dimension feature of my own video data, what should I do?
st182789
For audio classification of video data, perhaps you could first extract the audio into a wav file and slice it into short audio clips using a tool like FFmpeg, with filenames corresponding to the time segments of the video. The audio clips can then be sent into the network to extract features.
st182790
Hi @kepler62f. How did the model turn out? I am working with VGGish on a similar topic but I’m having some issues. Could you please share the code? I’m new to PyTorch and an example would make me more confident. Thanks
st182791
How can we do this in PyTorch? After training the model, I want to get the learned representations from the 2nd-to-last FC layer for t-SNE visualization. The model is defined below:

class VisualNet(nn.Module):
    def __init__(self):
        super(VisualNet, self).__init__()
        self.conv1 = nn.Conv2d()  # Ignore the argument values
        self.conv2 = nn.Conv1d()
        self.conv3 = nn.Conv1d()
        self.conv4 = nn.Conv1d(512, 512)
        self.fc1 = nn.Linear(512, 256)
        self.fc2 = nn.Linear(256, 8)

    def forward(self, x):
        x = self.conv1(self.conv2(self.conv3(self.conv4(x))))
        x_fc1 = self.fc1(x)
        x = self.fc2(x_fc1)
        return x

model = VisualNet()

After finishing training, how can I get the x_fc1 embeddings for t-SNE?
st182792
Assuming you are looking for the output activations of self.fc1 (unsure what embeddings would mean in this context), you could use forward hooks as described here 16.
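A minimal sketch of that hook-based approach (the layer sizes are placeholders, not from the original model):

import torch
import torch.nn as nn

activations = {}

def save_activation(name):
    def hook(module, inp, out):
        # Store a detached copy of the layer output on every forward pass.
        activations[name] = out.detach()
    return hook

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 8))
model[0].register_forward_hook(save_activation("fc1"))

_ = model(torch.randn(32, 512))
print(activations["fc1"].shape)  # torch.Size([32, 256]); feed this to t-SNE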
st182793
Hello, I am trying to train this on a Raspberry Pi 2, which has 512 MB of RAM. I reduced the original Google Commands Dataset to 12 classes and 34000 training samples and I am using a ResNet model with 8 layers.

class ResNet(BaseModel):
    def __init__(self, config):
        super().__init__()
        self.n_layers = config["n_layers"]
        n_maps = config["n_feature_maps"]
        self.layers = nn.ModuleDict()
        self.layers["conv_0"] = nn.Conv2d(1, n_maps, (3, 3), padding=1, bias=False)
        for i in range(1, self.n_layers + 1):
            if config["use_dilation"]:
                padding_size = int(2**((i-1) // 3))
                dilation_size = int(2**((i-1) // 3))
                self.layers[f"conv_{i}"] = nn.Conv2d(n_maps, n_maps, (3, 3), padding=padding_size,
                                                     dilation=dilation_size, bias=False)
            else:
                self.layers[f"conv_{i}"] = nn.Conv2d(n_maps, n_maps, (3, 3), padding=1, bias=False)
            self.layers[f"bn_{i}"] = nn.BatchNorm2d(n_maps, affine=False)
        if "pool" in config:
            self.layers["pool"] = nn.AvgPool2d(config["pool"])
        self.layers["output"] = nn.Linear(n_maps, config["n_labels"])
        self.activations = nn.ModuleDict({
            "relu": nn.ReLU()
        })

    def forward(self, x):
        x = x.unsqueeze(1)
        x = self.layers["conv_0"](x)
        x = self.activations["relu"](x)
        if "pool" in self.layers:
            x = self.layers["pool"](x)
        prev_x = x
        for i in range(1, self.n_layers + 1):
            x = self.layers[f"conv_{i}"](x)
            x = self.activations["relu"](x)
            if i % 2 == 0:
                x = x + prev_x
                prev_x = x
            x = self.layers[f"bn_{i}"](x)
        x = x.view(x.size(0), x.size(1), -1)  # shape: (batch, features, o3)
        x = x.mean(2)
        x = self.layers["output"](x)
        return x

It can be trained successfully on my laptop; however, I get a MemoryError on the rpi2.
I split the wav list so that the files won’t all be loaded into memory at once. (There are almost 1.5 GB of wav files; the rpi2 has only 512 MB of RAM.)
I set the --num_workers argument to 0. (CPU memory gradually leaks when num_workers > 0 in the DataLoader.)
However, I can’t solve the problem.
Environment:
PyTorch Version: 1.3.0
Python version: 3.7.3
OS: Raspbian GNU/Linux 10 (buster)
CPU: armv7l
What could be the problem? Thank you so much!
st182794
Could you reduce the batch size to a single sample and rerun the code, in case your current batch size is larger? If that doesn’t help, could you remove the data loading as a debugging step and feed a single sample as x = torch.randn(...) to the model to check the memory usage?
st182795
Thank you so much for the reply! I’ve tried with batch_size=1 and got the same error. With a random tensor, the memory usage is 0.004444MB.
st182796
This would most likely point to the data loading pipeline using a lot of memory. If I understand this loader 1 correctly, the wav files are all preloaded and just indexed in the __getitem__ method? If so, try to use lazy loading and move the actual data loading into the __getitem__ method, while the __init__ method only gets the paths etc.
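A minimal sketch of the lazy-loading pattern (the path and label lists are placeholders):

import torchaudio
from torch.utils.data import Dataset

class LazyAudioDataset(Dataset):
    def __init__(self, file_paths, labels):
        # Only store lightweight metadata here; no audio is read yet.
        self.file_paths = file_paths
        self.labels = labels

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, index):
        # The actual wav file is read only when the sample is requested.
        waveform, sample_rate = torchaudio.load(self.file_paths[index])
        return waveform, self.labels[index]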
st182797
I’m currently trying to use torchaudio.transforms.Spectrogram with power = None to return the complex spectrogram. The forward method of this class returns a tensor with a shape of (...,2). I am aware that the last two dimensions represent the real and imaginary parts, but it is not clear from the documentation which one is which and how they represent a complex tensor. For example, if torchaudio.transforms.Spectrogram returns x with shape (256,256,2), is x[...,0] the real part or the imaginary part? Also, if x was a complex tensor, then would it be expressed as x[...,0] + x[...,1]j or x[...,1] + x[...,0]j?
st182798
Solved by Mahmoud_Abdelkhalek in post #3 So it’s x[...,0] + x[...,1]j?
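For anyone finding this later: PyTorch’s convention for such (…, 2) real tensors is that index 0 along the trailing dimension is the real part and index 1 the imaginary part, which torch.view_as_complex makes explicit. A quick check (the waveform here is random noise, just for illustration):

import torch
import torchaudio

spec = torchaudio.transforms.Spectrogram(power=None)(torch.randn(1, 4000))
print(spec.shape)  # (..., freq, time, 2)

complex_spec = torch.view_as_complex(spec.contiguous())
print(torch.allclose(complex_spec.real, spec[..., 0]))  # True
print(torch.allclose(complex_spec.imag, spec[..., 1]))  # True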
st182799
I’m a college student trying to implement WaveNet using PyTorch. This is my first time writing custom modules for a model in PyTorch and I’m having a problem with my model in that it won’t train. Essentially the project is this: I have a set of wav files that I am reading in, processing and quantizing as in the WaveNet paper, and am arranging into series of 1024 data points (the model takes a series of 1024 amplitudes from the wav as input and should output a tensor of 256 values describing the probability that the next item in the series is one of those 256 values). I’m currently trying to train the model on a single music file, hoping to get it to overfit so that I can be sure that it actually learns from the data.
Here lies the problem: the loss won’t decrease as I train. I’ve tried making the model smaller, changing the loss function, changing what kind of layer the output layer is, and making the learning rate larger, but nothing seems to work. I suspect that the problem is somewhere in the model itself; the way that it is constructed may be wrong. It’s possible that I could have linked the custom modules together with the model in a way that interferes with backpropagation, at least that is my best guess. My code for the model and my training code is below. I would really appreciate some help!

#model https://github.com/Dankrushen/Wavenet-PyTorch/blob/master/wavenet/models.py
#https://github.com/ryujaehun/wavenet/blob/master/wavenet/networks.py
#https://medium.com/@satyam.kumar.iiitv/understanding-wavenet-architecture-361cc4c2d623
#https://discuss.pytorch.org/t/causal-convolution/3456/4

import torch
import torch.optim as optim
from torch import nn
from functools import reduce

#causal convolution (citation above)
class CausalConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1, **kwargs):
        super(CausalConv1d, self).__init__()
        self.pad = (kernel_size - 1) * dilation
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.dilation = dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=self.pad,
                              dilation=dilation, **kwargs)

    def forward(self, x):
        return self.conv(x)


class ResidualBlock(nn.Module):
    def __init__(self, input_channels, output_channels, kernel_size, skip_size, skip_channels, dilation=1):
        super(ResidualBlock, self).__init__()
        self.dilation = dilation
        self.skip_size = skip_size
        self.conv_s = CausalConv1d(input_channels, output_channels, kernel_size, dilation)  # dim
        self.sig = nn.Sigmoid()
        self.conv_t = CausalConv1d(input_channels, output_channels, kernel_size, dilation)  # dim
        self.tanh = nn.Tanh()
        self.conv_1 = nn.Conv1d(output_channels, output_channels, 1)  # dim -> k = 1
        self.skip_conv = nn.Conv1d(output_channels, skip_channels, 1)

    def forward(self, x):
        o = self.sig(self.conv_s(x)) * self.tanh(self.conv_t(x))
        skip = self.skip_conv(o)
        skip = skip[:, :, -self.skip_size:]  # dim control for adding skips
        residual = self.conv_1(o)
        return residual, skip


class WaveNet(nn.Module):  # SET SKIP SIZE default
    def __init__(self, skip_size=256, num_blocks=2, num_layers=10, num_hidden=128, kernel_size=2):
        super(WaveNet, self).__init__()
        self.layer1 = CausalConv1d(1, num_hidden, kernel_size)  # dim
        self.res_stack = nn.ModuleList()
        for b in range(num_blocks):
            for i in range(num_layers):
                self.res_stack.append(ResidualBlock(num_hidden, num_hidden, kernel_size,
                                                    skip_size=skip_size, skip_channels=1,
                                                    dilation=2**i))  # dim
        # self.hidden = nn.ModuleList(self.hidden)
        self.relu1 = nn.ReLU()
        self.conv1 = nn.Conv1d(1, 1, 1)  # dim
        self.relu2 = nn.ReLU()
        self.conv2 = nn.Conv1d(1, 1, 1)  # dim
        self.output = nn.Softmax()

    def forward(self, x):
        skip_vals = []
        # initial causal conv
        o = self.layer1(x)
        # run res blocks
        for i, layer in enumerate(self.res_stack):
            o, s = layer(o)
            skip_vals.append(s)
        # sum skip values and pass to last portion of network
        o = reduce((lambda a, b: a + b), skip_vals)
        o = self.relu1(o)
        o = self.conv1(o)
        o = self.relu2(o)
        o = self.conv2(o)
        return self.output(o)


# overfit model to test if it will train
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
net = WaveNet(num_layers=1)
# send to gpu
net.to(device)
criterion = nn.CrossEntropyLoss()

# preproc data, stream, train, remember to reformat y label as onehot vector for classification
optimizer = optim.Adam(net.parameters(), lr=0.001)
num_epochs = 20
losses = []
_, inp = wav_to_data(data_path + '/' + wav_files[0])
data = encode(inp)
batch_size = 32
i = 0
for epoch in range(num_epochs):
    i += 1
    for s in range(0, len(data) - 1024, batch_size):
        (x, y) = create_singular_input_stream(data, s, batch_size)
        optimizer.zero_grad()
        output = net(torch.reshape(torch.FloatTensor(x).to(device), (batch_size, 1, 1024)))
        print(output.shape)
        print(torch.Tensor([y]).shape)
        # find loss between distributions of amplitudes
        # https://discuss.pytorch.org/t/indexerror-target-1-is-out-of-bounds-nlloss/68656
        loss = criterion(torch.squeeze(output),
                         torch.squeeze(torch.Tensor(y)).type(torch.LongTensor).to(device))
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
        print('Epoch {}/{}, Loss: {:.6f}'.format(i, num_epochs, loss.item()))

A helpful note: create_singular_input_stream returns x (a (1, 1024) tensor with the series) and y (the next series value in [0, 255]).
Again, any help is appreciated - I’d love to know what I’m doing wrong.