st182800
@Arham_Khan Just some mistakes I have noticed:

The padding in CausalConv1d is not causal. You are going to pad (kernel_size - 1) * dilation on both sides of the input's last dimension. The correct way is to pad the input before the Conv1d layer in the forward call and set padding to 0 in Conv1d:

...
self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=0, dilation=dilation, **kwargs)

def forward(self, x):
    x = F.pad(x, (self.pad, 0))  # only pad on the left side
    return self.conv(x)

The skip_size param doesn't make any sense to me, especially in this line:

skip = skip[:,:,-self.skip_size:]  # dim control for adding skips

You set skip_size to 256 to match the one-hot vector of y, which represents the value of the sample right after the 1024 input samples. But you take 256 values from the last axis, which represents time, so you are somehow mapping 256 different time positions to 256 different amplitude values, which looks weird to me. Those 256 hidden values should be taken from the second axis, which represents the hidden channels.

You directly cast x to float type, which makes the quantization completely useless. In my opinion, the shape of x should be (batch, 256, 1024) with the one-hot vector along the second axis, and the shape of y should be (batch, 256, 1) with the one-hot vector along the second axis.

I think you might have some misunderstanding of WaveNet, but this is normal, because the original paper doesn't explain everything very clearly (to me), so I recommend taking a look at others' implementations to get a wider view of WaveNet. The WaveNet model I have implemented:
github.com yoyololicon/wavenet-like-vocoder/blob/3591786b6507173323b7b657a98b9bdffabbee46/model/model.py#L23
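For concreteness, the shapes I mean could be produced roughly like this (a sketch with made-up tensors, not your actual data pipeline):

import torch
import torch.nn.functional as F

x_q = torch.randint(0, 256, (4, 1024))   # mu-law quantized input, (batch, 1024)
y_q = torch.randint(0, 256, (4,))        # the sample right after each input window

x = F.one_hot(x_q, num_classes=256).permute(0, 2, 1).float()   # (batch, 256, 1024)
y = F.one_hot(y_q, num_classes=256).unsqueeze(-1).float()      # (batch, 256, 1)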
st182801
Thanks for the thorough explanation! Sorry for the late reply but I've been busy with finals.

The skip size parameter is something that has confused me as well, but I saw it pop up in various implementations, including some which may be linked in my original post. It seems that they are pulling skip values for the last set of fully connected layers to act on, but I'm confused as to why we would truncate those vectors. If I remember correctly, I had cast X to float because some of the layers in the model would not process X without it being a float.

Is what you recommended essentially that for each sequence element I should pass a one-hot encoded vector to the model? So the model would be fed B batches, and each sequence of 1024 that I wished to pass to the model would be represented by 1024 one-hot encoded vectors of length 256, to represent the encoding space? Could you comment on how the above change affects training? The way it currently works is that the input is quantized, but even in float form the elements of the vector will still be of the form (120., 243., …).

In addition, how would we then process the one-hot encoded vectors using convolutions? I see in your implementation you include an Embedding element - what is the use of this element in the context of WaveNet? I see that it is a lookup table from the class to the embedding, but what is it trained to embed each class into? Basically, how is the Embedding layer used in your implementation?

Also, I now see why the padding on the causal convolution is incorrect - admittedly I had pulled that code from another post on these forums.

Are there any resources or books you'd recommend to get a good understanding of machine learning theory and model development? Particularly from the point of view of someone teaching how to design novel model architectures?
st182802
Arham_Khan:

If I remember correctly, I had cast X to float because some of the layers in the model would not process X without it being a float. Is what you recommended essentially that for each sequence element I should pass a one-hot encoded vector to the model? So the model would be fed B batches, and each sequence of 1024 that I wished to pass to the model would be represented by 1024 one-hot encoded vectors of length 256, to represent the encoding space? Could you comment on how the above change affects training? The way it currently works is that the input is quantized but even in float form the elements of the vector will still be of the form (120., 243., …). In addition, how would we then process the one-hot encoded vectors using convolutions? I see in your implementation you include an Embedding element, what is the use of this element in the context of WaveNet? I see that it is a lookup table from the class to the embedding but what is it trained to embed each class into? Basically, how is the Embedding layer used in your implementation?

I use the Embedding layer just to map the 256 classes to different latent vectors whose size equals the residual channels, no other meaning. If you feed one-hot encoded vectors into the first layer, it actually does the same thing as the embedding. Directly feeding the quantized value will lose some complexity in the input representation, but may not affect the performance too much, I guess.

Arham_Khan:

Thanks for the thorough explanation! Sorry for the late reply but I've been busy with finals. The skip size parameter is something that has confused me as well but I saw it pop up in various implementations, including some which may be linked in my original post. It seems that they are pulling skip values for the last set of fully connected layers to act on, but I'm confused as to why we would truncate those vectors.

The truncation you see in other implementations, I guess, is there to drop the time steps that are not present in the output sequence. Take your code for example: your truncation should be

skip = skip[:,:,-1:]

so the output shape is (batch, 256, 1), the same as your target.

Also, your current training method is quite inefficient. WaveNet is a fully convolutional model, and a fully convolutional model can be parallelized during training. My suggestion is that you try predicting a whole sequence of samples as output instead of just one time sample. For example, if I want to use 5000-sample time sequences named x during training, my input and target will be:

input = x[:, :, :4999]
target = x[:, :, 1:]  # right-shifted by one sample

The truncation step can then be removed because the model output and the target have the same shape. Hope these will help you.
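To make the parallel training idea concrete, here is a rough, self-contained sketch (the ToyModel is just a stand-in for your WaveNet; the point is the shifted target and the (batch, classes, time) logits):

import torch
import torch.nn as nn

# stand-in for the WaveNet: maps (batch, T) class indices to (batch, 256, T) logits
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 64)
        self.conv = nn.Conv1d(64, 256, kernel_size=1)

    def forward(self, x):
        return self.conv(self.embed(x).transpose(1, 2))

model = ToyModel()
criterion = nn.CrossEntropyLoss()

x_q = torch.randint(0, 256, (4, 5000))   # quantized waveform, (batch, seq_len)
inp, target = x_q[:, :-1], x_q[:, 1:]    # target is the input shifted right by one sample

logits = model(inp)                      # (batch, 256, seq_len - 1)
loss = criterion(logits, target)         # every time step contributes in one forward pass
loss.backward()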
st182803
I now understand how the embedding layers function in your implementation, but in what sense would feeding the quantized values directly "lose complexity on the input presentation"? What sort of considerations for model performance lead you to that conclusion?

Also, in the more efficient training method you proposed, are the dimensions you are assuming for x, (batch, 256, seq_len)? Is it normal to create a one-hot-encoding by having the dimensions be (num_classes, 1) for each encoded vector? I ask because in PyTorch I thought that this would mean that we have 256 channels, each with a value representing the 0,1 class value - or in the case of the output the probability that the next item belongs to each class.

The reason I ask the above question is that I am currently using CrossEntropyLoss which was described in the Wavenet paper. According to the PyTorch documentation, this expects input with dimensions (minibatch, num_classes) and a target describing a class index (in my case: [0,255]). Given this, I would think I should make the one-hot-encoded vectors such that the input has dimensions (batch, seq_len, 256) and the output should be (batch, 1, 256) or (batch, 256).

How would using the above dimensions affect the model, given that it is using convolutional layers? Would they still work? Am I misunderstanding CrossEntropyLoss?

Thank you for the help! I've been able to make more progress in the last week conceptually and in code than I have over the past few months with your help. I really appreciate it!
st182804
I now understand how the embedding layers function in your implementation, but in what sense would feeding the quantized values directly "lose complexity on the input presentation"? What sort of considerations for model performance lead you to that conclusion?

Let me give you a simple example. If we feed the value directly, the input channel size is 1 and the first layer maps it to the size of the residual channels, so the parameter size of the first layer is (res_channels, 1, *); it's actually just one vector. If we use one-hot encoding (or an embedding layer), the parameter size of the first layer is (res_channels, 256, *). You can see that the first method just scales the amplitude of a single hidden vector based on the input value, while with the second method the model will try to learn 256 different hidden vectors, one for each quantized input value, which has much more freedom than the first method. I also want to point out why the first method might still work in this case: our input data is a waveform, and although it is mu-law-quantized, its raw value still represents some kind of information. If our input were some kind of discrete data, like discrete tokens in language processing, the category indices could be permuted and would carry no extra information like amplitude.

Also, in the more efficient training method you proposed, are the dimensions you are assuming for x, (batch, 256, seq_len)? Is it normal to create a one-hot-encoding by having the dimensions be (num_classes, 1) for each encoded vector? I ask because in PyTorch I thought that this would mean that we have 256 channels, each with a value representing the 0,1 class value - or in the case of the output the probability that the next item belongs to each class.

Exactly, or you can just feed (batch, seq_len) with integer type, and map it to size (batch, seq_len, 256) using an embedding layer.

The reason I ask the above question is that I am currently using CrossEntropyLoss which was described in the Wavenet paper. According to the PyTorch documentation, this expects input with dimensions (minibatch, num_classes) and a target describing a class index (in my case: [0,255]). Given this, I would think I should make the one-hot-encoded vectors such that the input has dimensions (batch, seq_len, 256) and the output should be (batch, 1, 256) or (batch, 256).

Yes, or if you use the parallel training method I suggested, your output can also be (batch, seq_len, 256) with the target as (batch, seq_len).
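A small sketch of the two equivalent input options (shapes are just for illustration):

import torch
import torch.nn as nn

res_channels = 256
embed = nn.Embedding(num_embeddings=256, embedding_dim=res_channels)

x_q = torch.randint(0, 256, (8, 1024))   # (batch, seq_len) integer classes
h = embed(x_q)                           # (batch, seq_len, res_channels)
h = h.transpose(1, 2)                    # (batch, res_channels, seq_len) for Conv1d

# the one-hot route learns the same 256 vectors, just stored inside a 1x1 conv:
one_hot = torch.nn.functional.one_hot(x_q, 256).float().transpose(1, 2)
conv_in = nn.Conv1d(256, res_channels, kernel_size=1, bias=False)
h2 = conv_in(one_hot)                    # same shape as h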
st182805
I see, that explanation about the encoding makes a lot of sense. Are there any resources describing the parallelization of CNNs in PyTorch? I couldn’t find anything in the documentation other than resources talking about general parallelization using some specific PyTorch modules. Thanks again for the help, my model loss is decreasing steadily now - just have to beat some loss plateaus. I also now understand the implementation of the architecture much better!
st182806
I'm running into another problem with my model during training and generation now and I was wondering whether you could provide any insight on that:

#model https://github.com/Dankrushen/Wavenet-PyTorch/blob/master/wavenet/models.py
#https://github.com/ryujaehun/wavenet/blob/master/wavenet/networks.py
#https://medium.com/@satyam.kumar.iiitv/understanding-wavenet-architecture-361cc4c2d623
#https://discuss.pytorch.org/t/causal-convolution/3456/4
import os
import numpy as np
import torch
import torch.optim as optim
import torch.nn.functional as F
from torch import nn
from functools import reduce

# load_model/save_model/wav_to_data/encode/timeseries_to_wav below are helper
# functions defined elsewhere in my notebook

#causal convolution (citation above)
class CausalConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1, **kwargs):
        super(CausalConv1d, self).__init__()
        self.pad = (kernel_size - 1) * dilation
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.dilation = dilation
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, dilation=dilation, **kwargs)

    def forward(self, x):
        #pad here to only add to the left side
        x = F.pad(x, (self.pad, 0))
        return self.conv(x)

class ResidualBlock(nn.Module):
    def __init__(self, input_channels, output_channels, kernel_size, skip_channels, dilation=1):
        super(ResidualBlock, self).__init__()
        self.dilation = dilation
        self.conv_sig = CausalConv1d(input_channels, output_channels, kernel_size, dilation)  #dim
        self.sig = nn.Sigmoid()
        self.conv_tan = CausalConv1d(input_channels, output_channels, kernel_size, dilation)  #dim
        self.tanh = nn.Tanh()

        #separate weights for residual and skip channels
        self.conv_r = nn.Conv1d(output_channels, output_channels, 1)  #dim -> k = 1
        self.conv_s = nn.Conv1d(output_channels, skip_channels, 1)

    def forward(self, x):
        o = self.sig(self.conv_sig(x)) * self.tanh(self.conv_tan(x))
        skip = self.conv_s(o)
        #print("SKIP: " + str(skip.shape))
        #skip = skip[:,-self.skip_size:,:] #dim control for adding skips
        #skip = skip[:,:,-1:]
        #print("SK: " + str(skip.shape))
        residual = self.conv_r(o)
        #print("RES: " + str(residual.shape))
        return residual, skip

class WaveNet(nn.Module):
    def __init__(self, skip_channels=256, num_blocks=3, num_layers=10, num_hidden=256, kernel_size=2):
        super(WaveNet, self).__init__()
        self.embed = nn.Embedding(skip_channels, skip_channels)
        self.layer1 = CausalConv1d(skip_channels, num_hidden, kernel_size)

        self.res_stack = nn.ModuleList()
        for b in range(num_blocks):
            for i in range(num_layers):
                self.res_stack.append(ResidualBlock(num_hidden, num_hidden, kernel_size, skip_channels=skip_channels, dilation=2**i))

        self.relu1 = nn.ReLU()
        self.conv1 = nn.Conv1d(skip_channels, skip_channels, 1)
        self.relu2 = nn.ReLU()
        self.conv2 = nn.Conv1d(skip_channels, skip_channels, 1)
        #self.output = nn.Softmax()

    def forward(self, x):
        o = self.embed(x)
        print('EX: ' + str(o.size()))
        dims = o.size()
        o = o.reshape(dims[0], dims[2], dims[3])
        o = o.transpose(1,2)
        print('ETX: ' + str(o.size()))

        skip_vals = []

        #initial causal conv
        o = self.layer1(o)

        #run res blocks
        for i, layer in enumerate(self.res_stack):
            o, s = layer(o)
            skip_vals.append(s)

        #sum skip values and pass to last portion of network
        o = reduce((lambda a,b: a+b), skip_vals)
        o = self.relu1(o)
        o = self.conv1(o)
        o = self.relu2(o)
        o = self.conv2(o)

        return o  #self.output(o)

data_path = '../input/musicwav/80643 Delon Dalcan - Panik (Original Mix).wav'
model_path = 'model_3B_6L.pt'

#num iterations after which we should save the model
R = 500

#overfit model to test if it will train
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)

net = WaveNet(num_blocks=3, num_layers=6)

#send to gpu
net.to(device)

criterion = nn.CrossEntropyLoss()
#preproc data, stream, train, remember to reformat y label as onehot vector for classification
optimizer = optim.Adam(net.parameters(), lr=0.001)

num_epochs = 1#00
losses = []
total_steps = 0

if os.path.exists(model_path):
    state = load_model(model_path)
    net.load_state_dict(state['state_dict'])
    optimizer.load_state_dict(state['optimizer'])
    total_steps = state['total_steps']

_, inp = wav_to_data(data_path)
data = encode(inp)
print(data.shape)

batch_size = 64
seq_len = 1024

net.train()

from math import floor
n_steps = floor(len(data) / batch_size)  #seq_len)

i = 0
for epoch in range(num_epochs):
    i += 1
    t = 0
    for s in range(0, len(data) - seq_len - batch_size, batch_size):
        t += 1
        #(x, y) = create_singular_input_stream(data, s, batch_size)
        x = []
        y = []
        for g in range(batch_size):
            x.append(data[s+g:s+g+seq_len])
            y.append(data[s+g+1:s+g+seq_len+1])
        #print("X: " + str(torch.Tensor(x).shape))

        optimizer.zero_grad()

        output = net(torch.reshape(torch.LongTensor(x).to(device), (batch_size,1,seq_len)))
        #print(output.shape)
        #print(torch.Tensor(y).shape)
        loss = criterion(output, torch.Tensor(y).type(torch.LongTensor).to(device))
        loss.backward()
        optimizer.step()

        losses.append(loss.item())

        #save model for future training after set amount of iterations
        if t % R == 0:
            save_model(net, optimizer, total_steps+t, model_path)

        print('Epoch {}/{}, Timestep: {}/{}, Loss: {:.6f}'.format(i, num_epochs, t, n_steps, loss.item()))

import torch.nn.functional as F

#generate unguided audio
#take first sequence as input and continuously generate a sequence using it
model_path = 'model_3B_6L.pt'
seq_len = 1024
num_samples = 1323000 - seq_len
out_path = 'gen.wav'

#grab first sequence to start with
curr_seq = data[:1024]
generated_seq = curr_seq

net = WaveNet(num_blocks=3, num_layers=6)
state = load_model(model_path)
net.load_state_dict(state['state_dict'])
net.eval()

for i in range(num_samples):
    print('Sample: ' + str(i+1) + ' / ' + str(num_samples))

    output = net(torch.reshape(torch.LongTensor(curr_seq).to(device), (1,1,seq_len)))
    timestep_next = output[0,:,-1:]
    timestep_next = torch.squeeze(timestep_next)
    timestep_next = F.softmax(timestep_next)
    print(timestep_next.size())
    print(timestep_next.cpu().detach())
    t_next = np.argmax(timestep_next.cpu().detach().numpy(), axis=0)
    print(t_next)

    #append to sequence
    generated_seq = generated_seq + [t_next]

#write out file
timeseries_to_wav(np.array(generated_seq), out_path)

When I'm training the model, the loss fluctuates a good deal (though this could be because of the small batch size of 64 I'm using due to memory constraints) but generally does trend downwards. At inference time, however, I find that the model produces a constant output of 0. I've read around that this could be caused by incorrect use of the quantized inputs; I've used the embedding layer rather than passing the input straight to the first convolutional layer as you suggested, but I'm unsure if somewhere I'm making a mistake. One candidate could be the reshaping I am doing in forward() to correct the dimensions of the tensor. Another thing I noticed was that you used a Tanh activation after your embedding layer which I did not understand - is there a purpose to using an activation function on an embedding vector?
st182807
Arham_Khan:

output = net(torch.reshape(torch.LongTensor(curr_seq).to(device), (1,1,seq_len)))

You are feeding the same input curr_seq in each step. The Tanh() function I added is a design choice adopted from Deep Voice 3.
st182808
Wow, I was thinking about way too complicated a set of problems. Thank you again for all the help, the model is functioning much better now - hopefully it will work with enough training!
st182809
I'm currently trying to reconstruct speech signals that are 3,000 samples long using an autoencoder. I currently have 90,000 examples of these speech signals to train on. Here is a summary of the autoencoder architecture:

Input shape: (8, 1, 3000)
Normalized shape: (8, 1, 3000)
Normalized mean: 6.45319602199379e-08
Normalized variance: 0.9997082948684692
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv1d-1             [-1, 16, 2998]              64
             PTanh-2             [-1, 16, 2998]               0
            Conv1d-3             [-1, 16, 1499]             528
       BatchNorm1d-4             [-1, 16, 1499]              32
            Conv1d-5             [-1, 32, 1495]           2,592
             PTanh-6             [-1, 32, 1495]               0
            Conv1d-7              [-1, 32, 747]           2,080
       BatchNorm1d-8              [-1, 32, 747]              64
            Conv1d-9              [-1, 64, 741]          14,400
            PTanh-10              [-1, 64, 741]               0
           Conv1d-11              [-1, 64, 370]           8,256
      BatchNorm1d-12              [-1, 64, 370]             128
         Upsample-13              [-1, 64, 740]               0
           Conv1d-14              [-1, 32, 734]          14,368
            PTanh-15              [-1, 32, 734]               0
         Upsample-16             [-1, 32, 1468]               0
           Conv1d-17             [-1, 16, 1464]           2,576
            PTanh-18             [-1, 16, 1464]               0
         Upsample-19             [-1, 16, 3006]               0
            PTanh-20             [-1, 16, 3006]               0
           Conv1d-21              [-1, 1, 3000]             113
================================================================
Total params: 45,201
Trainable params: 45,201
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 5.47
Params size (MB): 0.17
Estimated Total Size (MB): 5.65
----------------------------------------------------------------

And here is the autoencoder architecture itself:

class PTanh(torch.nn.Module):
    # PTanh(x) = a * tanh(b * x)

    def __init__(self, in_channels):
        super().__init__()
        # need this for __repr__ method
        self.in_channels = in_channels
        # initialize a
        a = torch.full((in_channels, 1), 1.7159)
        self.a = torch.nn.Parameter(a, requires_grad=True)
        # initialize b
        b = torch.full((in_channels, 1), 2/3)
        self.b = torch.nn.Parameter(b, requires_grad=True)

    def __repr__(self):
        return 'PTanh(' + str(self.in_channels) + ')'

    def forward(self, x):
        x = torch.multiply(x, self.a)
        x = torch.multiply(torch.nn.Tanh()(x), self.b)
        return x

class Autoencoder(torch.nn.Module):
    def __init__(self, batch_norm=False):
        super().__init__()

        # encoder --------------------------------------------------------
        encoder = []

        # first layer
        layer = [torch.nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, stride=1, bias=True),
                 PTanh(16),
                 torch.nn.Conv1d(in_channels=16, out_channels=16, kernel_size=2, stride=2, bias=True)]
        encoder.extend(layer)
        if batch_norm:
            encoder.append(torch.nn.BatchNorm1d(num_features=16, eps=1e-08, momentum=0.1, affine=True, track_running_stats=True))

        # second layer
        layer = [torch.nn.Conv1d(in_channels=16, out_channels=32, kernel_size=5, stride=1, bias=True),
                 PTanh(32),
                 torch.nn.Conv1d(in_channels=32, out_channels=32, kernel_size=2, stride=2, bias=True)]
        encoder.extend(layer)
        if batch_norm:
            encoder.append(torch.nn.BatchNorm1d(num_features=32, eps=1e-08, momentum=0.1, affine=True, track_running_stats=True))

        # third layer
        layer = [torch.nn.Conv1d(in_channels=32, out_channels=64, kernel_size=7, stride=1, bias=True),
                 PTanh(64),
                 torch.nn.Conv1d(in_channels=64, out_channels=64, kernel_size=2, stride=2, bias=True)]
        encoder.extend(layer)
        if batch_norm:
            encoder.append(torch.nn.BatchNorm1d(num_features=64, eps=1e-08, momentum=0.1, affine=True, track_running_stats=True))

        self.encoder = torch.nn.Sequential(*encoder)

        # decoder --------------------------------------------------------
        decoder = []

        # fourth layer
        layer = [torch.nn.Upsample(scale_factor=2, mode='nearest'),
                 torch.nn.Conv1d(in_channels=64, out_channels=32, kernel_size=7, stride=1, bias=True),
                 PTanh(32)]
        decoder.extend(layer)
        if batch_norm:
            encoder.append(torch.nn.BatchNorm1d(num_features=32, eps=1e-08, momentum=0.1, affine=True, track_running_stats=True))

        # fifth layer
        layer = [torch.nn.Upsample(scale_factor=2, mode='nearest'),
                 torch.nn.Conv1d(in_channels=32, out_channels=16, kernel_size=5, stride=1, bias=True),
                 PTanh(16)]
        decoder.extend(layer)
        if batch_norm:
            encoder.append(torch.nn.BatchNorm1d(num_features=16, eps=1e-08, momentum=0.1, affine=True, track_running_stats=True))

        # sixth layer
        layer = [torch.nn.Upsample(size=3006, mode='nearest'),
                 PTanh(16),
                 torch.nn.Conv1d(in_channels=16, out_channels=1, kernel_size=7, stride=1, bias=True)]
        decoder.extend(layer)

        self.decoder = torch.nn.Sequential(*decoder)

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

I have trained this autoencoder for 30 epochs using a batch size of 8, the SmoothL1Loss loss function, and the Adam optimizer as follows:

loss_func = torch.nn.SmoothL1Loss(reduction='mean', beta=1.0)
optimizer = torch.optim.Adam(params=net.parameters(), lr=0.0003)

However, when I test the autoencoder on new examples, it simply tries to reconstruct the mean - in other words, a low-pass filtered version of the input. For example:

[figure omitted: original speech signal (top) and reconstructed signal (bottom)]

The original speech signal is at the top and the reconstructed signal is at the bottom. Any suggestions on how I can solve this problem?
st182810
Hi, I am trying to implement a VAE on audio and want to listen to the reconstructed audio via tensorboard. The input and output of my network is a spectrogram (computed with torchaudio.transforms.Spectrogram) so I assume I should use torchaudio.transforms.GriffinLim to get a listenable waveform. However, when I send the output of torchaudio.transforms.GriffinLim to TensorBoard, I get warning: audio amplitude out of range, auto clipped. and there is no “audio” tab in TensorBoard. It seems that GriffinLim outputs a waveform with values between -2 and 2, but I do not know what kind of audio format is supported by TensorBoard so I do not know what kind of transform I should perform after applying GriffinLim. Does anyone have an idea?
st182811
The docs suggest using log probabilities as the input to CTCLoss:
https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html#torch.nn.CTCLoss

However, there is an example in the PyTorch repo (the DeepSpeech model) where the softmax function (instead of log_softmax) is used only for evaluation, not for training:

github.com pytorch/pytorch/blob/22a34bcf4e5eaa348f0117c414c3dd760ec64b13/benchmarks/functional_autograd_benchmark/torchaudio_models.py#L135-L140

class InferenceBatchSoftmax(nn.Module):
    def forward(self, input_):
        if not self.training:
            return F.softmax(input_, dim=-1)
        else:
            return input_

So I want to clarify what I should use for training and evaluation with CTCLoss:

- softmax/log_softmax for train/eval?
- identity for training and softmax/log_softmax for eval, like in the example I shared above?
st182812
As far as I know, for training you need log_softmax. For inference you can just do argmax. But using argmax might only give you Top-1 accuracy. If you use softmax and get top 5 scores you can get Top-5 accuracy. I think DeepSpeech model does something similar.
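In code, the usual pattern looks roughly like this (a sketch with random tensors standing in for real network outputs and transcripts):

import torch
import torch.nn.functional as F

T, N, C = 50, 4, 29                                  # time steps, batch, characters (incl. blank)
logits = torch.randn(T, N, C, requires_grad=True)
targets = torch.randint(1, C, (N, 10), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc_loss = torch.nn.CTCLoss(blank=0)

# training: CTCLoss expects log-probabilities of shape (T, N, C)
log_probs = F.log_softmax(logits, dim=-1)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()

# inference: greedy (top-1) decoding only needs the argmax; apply softmax
# only if you want actual probabilities, e.g. for top-k scores
pred_ids = log_probs.argmax(dim=-1)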
st182813
Thank you for the reply. So for training I need to use log_softmax - that's clear now. For inference I can use softmax to get top-k scores. What isn't clear is why the DeepSpeech implementation is not using log_softmax in the repo. I suppose there should be an explicit call to log_softmax in the model definition or where the model is called, right? Or did I miss something?

model definition:
github.com pytorch/pytorch/blob/22a34bcf4e5eaa348f0117c414c3dd760ec64b13/benchmarks/functional_autograd_benchmark/torchaudio_models.py#L193-L245

model calling:
github.com pytorch/pytorch/blob/22a34bcf4e5eaa348f0117c414c3dd760ec64b13/benchmarks/functional_autograd_benchmark/audio_text_models.py#L51-L63

model = models.DeepSpeech(rnn_type=nn.LSTM, labels=labels, rnn_hidden_size=1024, nb_layers=5,
                          audio_conf=audio_conf, bidirectional=True)
model = model.to(device)
criterion = nn.CTCLoss()
params, names = extract_weights(model)

def forward(*new_params: Tensor) -> Tensor:
    load_weights(model, names, new_params)
    out, out_sizes = model(inputs, inputs_sizes)
    out = out.transpose(0, 1)  # For ctc loss
    loss = criterion(out, targets, out_sizes, targets_sizes)
    return loss
st182814
I am curious what the best practices are for loading training instances that are 220K floats each from disk, over 500 GB of instances in total. I am training on audio. I am maxing out at a minibatch size of 2048; otherwise GPU memory is exhausted. I am using pytorch-lightning but am fine with converting to vanilla torch. I am concerned that if I switch to multiple GPUs, the training speed will nonetheless be gated by the speed of the hard disk, not by how many GPUs I have.
st182815
I am trying to train an LSTM on audio signal data. I have used pad_sequence() and pack_padded_sequence() to get the resulting PackedSequence data. However, this data is 1-dimensional. I have checked it using x.data.shape in the forward() function and it is a 1-d tensor (x here is PackedSequence data). I've also used the batch_first=True parameter when defining the LSTM. I'm getting this error:

~/.conda/envs/ml/lib/python3.8/site-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
    172         expected_input_dim = 2 if batch_sizes is not None else 3
    173         if input.dim() != expected_input_dim:
--> 174             raise RuntimeError(
    175                 'input must have {} dimensions, got {}'.format(
    176                     expected_input_dim, input.dim()))

RuntimeError: input must have 2 dimensions, got 1

Would adding a dummy dimension to this data work? How can I do this for PackedSequence data?
st182816
Solved by torch_bearer in post #2 I solved this by adding a newaxis to sequence data before padding and packing in collate_fn() of dataloader.
st182817
I solved this by adding a newaxis to sequence data before padding and packing in collate_fn() of dataloader.
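For anyone hitting the same error, the kind of collate_fn I mean looks roughly like this (a sketch; it assumes each item is a 1-d signal tensor plus a scalar label):

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

def collate_fn(batch):
    signals, labels = zip(*batch)                      # each signal: 1-d tensor (seq_len,)
    signals = [s.unsqueeze(-1) for s in signals]       # add a feature dim -> (seq_len, 1)
    lengths = torch.tensor([s.size(0) for s in signals])
    padded = pad_sequence(signals, batch_first=True)   # (batch, max_len, 1)
    packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
    return packed, torch.tensor(labels)                # assumes scalar labels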
st182818
Here is my code:

import sys
import torch
import torchaudio

def train(net, dataloader, loss_func, optimizer, device):
    # put in training mode
    net.train()

    # to compute training accuracy
    num_true_pred = 0

    # to compute epoch training loss
    total_loss = 0

    for i, (signals, labels) in enumerate(dataloader):
        # track progress
        sys.stdout.write('\rProgress: {:.2f}%'.format(i*dataloader.batch_size/len(dataloader)))
        sys.stdout.flush()

        # load onto GPU
        signals = signals.to(device).unsqueeze(dim=1)
        labels = labels.to(device).type_as(signals)  # needed for BCE loss

        # compute log Mel spectrogram
        mel_spec = torchaudio.transforms.MelSpectrogram(sample_rate = 16000,
                                                        n_fft = 1024,
                                                        n_mels = 256,
                                                        hop_length = 63).to(device)
        to_dB = torchaudio.transforms.AmplitudeToDB().to(device)
        images = to_dB(mel_spec(signals))

        # zero the accumulated parameter gradients
        optimizer.zero_grad()

        # outputs of net for batch input
        outputs = net(images).squeeze()

        # compute (mean) loss
        loss = loss_func(outputs, labels)

        # compute loss gradients with respect to parameters
        loss.backward()

        # update parameters according to optimizer
        optimizer.step()

        # record running statistics
        # since sigmoid(0) = 0.5, then negative values correspond to class 0
        # and positive values correspond to class 1
        class_preds = outputs > 0
        num_true_pred = num_true_pred + torch.sum(class_preds == labels)

        # loss is not mean-reduced
        total_loss = total_loss + loss

    train_loss = total_loss.item() / len(dataloader.dataset)
    train_acc = num_true_pred.item() / len(dataloader.dataset)

    return net, train_loss, train_acc

I keep getting the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-20-b175e934d6f9> in <module>()
     18                               loss_func,
     19                               optimizer,
---> 20                               device)
     21
     22 print('Training Loss: {:.4f}'.format(train_loss))

6 frames
/content/drive/My Drive/Colab Notebooks/disc_baseline/net_train.py in train(net, dataloader, loss_func, optimizer, device)
     34                                                     n_mels = 256,
     35                                                     hop_length = 63).to(device)
---> 36         to_dB = torchaudio.transforms.AmplitudeToDB().to(device)
     37         images = to_dB(mel_spec(signals))
     38

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/torchaudio/transforms.py in forward(self, waveform)
    424             Tensor: Mel frequency spectrogram of size (..., ``n_mels``, time).
    425         """
--> 426         specgram = self.spectrogram(waveform)
    427         mel_specgram = self.mel_scale(specgram)
    428         return mel_specgram

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/torchaudio/transforms.py in forward(self, waveform)
     82         """
     83         return F.spectrogram(waveform, self.pad, self.window, self.n_fft, self.hop_length,
---> 84                              self.win_length, self.power, self.normalized)
     85
     86

/usr/local/lib/python3.6/dist-packages/torchaudio/functional.py in spectrogram(waveform, pad, window, n_fft, hop_length, win_length, power, normalized)
    160     # default values are consistent with librosa.core.spectrum._spectrogram
    161     spec_f = torch.stft(
--> 162         waveform, n_fft, hop_length, win_length, window, True, "reflect", False, True
    163     )
    164

/usr/local/lib/python3.6/dist-packages/torch/functional.py in stft(input, n_fft, hop_length, win_length, window, center, pad_mode, normalized, onesided)
    463         input = F.pad(input.view(extended_shape), (pad, pad), pad_mode)
    464         input = input.view(input.shape[-signal_dim:])
--> 465     return _VF.stft(input, n_fft, hop_length, win_length, window, normalized, onesided)
    466
    467

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

I am using Google Colab. The device variable is device(type='cuda'). I have already moved torchaudio.transforms.MelSpectrogram and torchaudio.transforms.AmplitudeToDB to the GPU but I am not sure what else to do to fix this. Any suggestions?
st182819
I cannot reproduce. Can you provide a minimal example?

import torchaudio

device = "cuda"

waveform, sample_rate = torchaudio.load("sinewave.wav")  # from audio/test/torchaudio_unittest/assets
waveform = waveform.to(device)

mel_spec = torchaudio.transforms.MelSpectrogram(sample_rate = sample_rate,
                                                n_fft = 1024,
                                                n_mels = 256,
                                                hop_length = 63).to(device)
to_dB = torchaudio.transforms.AmplitudeToDB().to(device)
images = to_dB(mel_spec(waveform))
st182820
Hey @vincentqb, thanks for the quick reply. Before I create the minimal example: is it necessary to move both torchaudio.transforms.MelSpectrogram and torchaudio.transforms.AmplitudeToDB to the GPU using the to method even though waveform is on the GPU? Can't I just do this instead:

mel_spec = torchaudio.transforms.MelSpectrogram(sample_rate = sample_rate,
                                                n_fft = 1024,
                                                n_mels = 256,
                                                hop_length = 63)
to_dB = torchaudio.transforms.AmplitudeToDB()
images = to_dB(mel_spec(waveform))
st182821
Hi, we're using the sox_io backend for torchaudio. The new version of the AudioMetaData class returned by the info method doesn't have all the data I need for analysis. The info method in the deprecated sox backend used to return sox_signalinfo_t and sox_encodinginfo_t, which had much more detailed information about encoding, bit depth, etc. What should I use to get information about an audio file's encoding, bit depth, and other details?
st182822
Thanks for the feedback! Could you create an issue on torchaudio's GitHub and describe your use case? More context about this BC-breaking change is also given here.
st182823
Hi everyone, I'm trying to apply torchaudio.functional.vad() but I'm having problems with it. First, it takes too much time to run. Is there a way to filter all the data once and then save it, so that every time I want to train my model I can just use the already-filtered version? Also, the docs say:

The effect can trim only from the front of the audio, so in order to trim from the back, the reverse effect must also be used.

I don't understand how to trim from the back. Can someone please help me?
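What I understand the docs to mean is something like the following (a sketch; the file name is just a placeholder) - is this the right approach?

import torch
import torchaudio
import torchaudio.functional as F

waveform, sample_rate = torchaudio.load("example.wav")   # (channels, time)

trimmed = F.vad(waveform, sample_rate)                   # trims leading silence
trimmed = torch.flip(trimmed, [-1])                      # reverse in time
trimmed = F.vad(trimmed, sample_rate)                    # trims the (former) trailing silence
trimmed = torch.flip(trimmed, [-1])                      # restore the original order

# save once so the filtering only has to run a single time
torchaudio.save("example_trimmed.wav", trimmed, sample_rate)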
st182824
In an Audio Classification problem, I am firstly loading a pretrained model, then running my own data through the model. In audio problems, I am searching for optimum parameters (hop length, window size, etc.) for transforming features into Mel spectrograms. However, this changes the size of the original inputs. Thus, there is an error when attempting to load_state_dict of the model:

RuntimeError: Error(s) in loading state_dict for Cnn14Extractor:
    size mismatch for spectrogram_extractor.stft.conv_real.weight: copying a param with shape torch.Size([513, 1, 1024]) from checkpoint, the shape in current model is torch.Size([552, 1, 1102]).
    size mismatch for spectrogram_extractor.stft.conv_imag.weight: copying a param with shape torch.Size([513, 1, 1024]) from checkpoint, the shape in current model is torch.Size([552, 1, 1102]).
    size mismatch for logmel_extractor.melW: copying a param with shape torch.Size([513, 64]) from checkpoint, the shape in current model is torch.Size([552, 64]).

All other discussions I have found talk about image problems (where the size doesn't change each time you change a parameter). My main question is: how can I change these 3 layers' weights? Can I make the model accept the new weight shapes? What are my options besides:

- Training from scratch (led to poorer results)
- Resizing the feature to fit the expected shape (loss of data?)
st182825
shaneR: However, this changes the size of the original inputs. Thus, there is an error when attemping to load_state_dict of the model. Changing the input shape shouldn’t change the parameter shapes and the inputs would also not be stored in the model.state_dict(). It seems you’ve manipulated the model parameters instead. You would have to apply these manipulations to the original model before loading the state_dict.
st182826
Is it possible to expand on this a bit? It seems the problem is:

- I loaded a pretrained model & weights with set parameters
- I then created a new model, and attempted to load weights into it, but the parameters don't match

So the answer would seem to be to change the parameters of the state_dict()? Per the following code example, I attempted to overwrite the first layers of the state_dict() from scratch. However, the problem persisted.

backbone = FeatureExtractor(sample_rate, window_size, hop_size, mel_bins)

if pretrained:
    state_dict = load_state_dict_from_url(model_urls['cnn_url'])
    conv1 = nn.Conv1d(1, 552, kernel_size=(1102,) ~)
    weight = torch.rand(552,1,1102)
    state_dict["model"]["spectrogram_extractor.stft.conv_real.weight"] = weight
st182827
The approach should work and you can manipulate the parameters in the state_dict so that they match the new architecture. Here is a small example:

modelA = models.resnet18()
modelB = models.resnet18()

# Change modelB
modelB.fc = nn.Linear(512, 10)

# Try to load the pretrained state_dict
state_dict = modelA.state_dict()
modelB.load_state_dict(state_dict)  # error

# Manipulate state_dict to match parameter shape
state_dict['fc.weight'] = state_dict['fc.weight'][:10]
state_dict['fc.bias'] = state_dict['fc.bias'][:10]

# Load again
modelB.load_state_dict(state_dict)  # works
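A variation on the same idea, continuing the snippet above: if the mismatched layers should simply keep their fresh initialization, you can delete the offending keys and load the rest with strict=False (missing keys are then tolerated):

state_dict = modelA.state_dict()

# drop the entries whose shapes no longer match modelB
for key in ['fc.weight', 'fc.bias']:
    del state_dict[key]

# strict=False tolerates the now-missing keys; modelB.fc keeps its random init
modelB.load_state_dict(state_dict, strict=False)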
st182828
Hello, I hope you're all doing fine. I'm trying to preprocess .wav files with torchaudio. When I run

waveform, sample_rate = torchaudio.load(filename)

the waveform tensor has a shape of [number_of_channels, some_number], and sometimes the number of channels is 1 and sometimes it's 2. I want to know whether there is a way to force the number of channels to always be one. I tried to look on the web to understand the meaning of the number of channels, but I didn't find anything useful.
st182829
If the two channels are identical, and even if they aren’t, you can always just use one of them. If you have truly stereo audio, then you have to decide how you want to monoize it. Usually just averaging the channels works OK, but it could be problematic especially if the two channels might be out of phase, which can happen due to processing or just because the microphones were far away from each other. There are probably more sophisticated techniques than averaging that could be useful in some cases, but I never tried them.
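In torchaudio terms, averaging down to mono is a one-liner (a sketch; the file name is a placeholder and waveform has shape (channels, time)):

import torchaudio

waveform, sample_rate = torchaudio.load("audio.wav")   # (channels, time), channels is 1 or 2
mono = waveform.mean(dim=0, keepdim=True)              # always (1, time)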
st182830
That’s very helpful, thank you very much. Do you have any good reference so that I can read about waveforms?
st182831
Hi. I was preprocessing my audio file. After scaling my mel spectrogram with a standard scaler or min-max scaler (I've tried both), I get a runtime error, even though the input without scaling was also float type:

RuntimeError                              Traceback (most recent call last)
in ()
      5 criterion_multioutput = nn.CrossEntropyLoss()
      6 optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
----> 7 model = train_model(model, criterion_multioutput, optimizer)
      8
      9 SAVE_PATH = os.path.join(os.getcwd(), 'models')

11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
    418                             _pair(0), self.dilation, self.groups)
    419         return F.conv2d(input, weight, self.bias, self.stride,
--> 420                         self.padding, self.dilation, self.groups)
    421
    422     def forward(self, input: Tensor) -> Tensor:

RuntimeError: expected scalar type Double but found Float
st182832
The input tensor or the model parameters seem to be DoubleTensors while the other is a FloatTensor. You could transform the data type via model.float() or tensor.float(), depending on which of these two objects is using the double type.
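A tiny self-contained illustration of the two options:

import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3)             # float32 parameters by default
x = torch.randn(1, 3, 32, 32, dtype=torch.float64)

# model(x)                  # would raise: expected scalar type Double but found Float
out = model(x.float())      # cast the input to match the model
# out = model.double()(x)   # or cast the model to match the input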
st182833
I am using PyTorch to process audio data through conv1d layers, however, I am running out RAM (only 8GB available). I have an input .wav and a target .wav for the network to learn, and each file is 40MB (about 4 minutes of audio). In this model, one sample of audio is learned from the previous 200 samples. (Input 200 samples and output 1 sample). In order to accomplish this, I am taking (for example) the input 8000000 samples and “unfolding” it into (8000000, 200, 1), where each audio sample becomes an array of the previous 200 samples. I then train on “unfolded_input_samples” as the training data “target_samples” as the validation data. The problem is I quickly run out of RAM when unfolding the input data. Is there a way around creating this massive array while still telling PyTorch to use the previous 200 samples for each output data point? Can I break up the unfolded input array into chunks and train on each part without starting a new epoch? Or is there an easier way to accomplish this using some kind of built in method in Pytorch. Thanks!
st182834
Solved by superunification in post #2 I’m not sure of the motivation for this ‘unfolded’ tensor. The data is 99.5% redundant, and the 200-sample window can be created at train-time from the audio-data tensor. Just take a slice audio[i:i+200] of the 8m audio tensor, and audio[i+201] as your target. The only extra ingredient, then, is th…
st182835
I’m not sure of the motivation for this ‘unfolded’ tensor. The data is 99.5% redundant, and the 200-sample window can be created at train-time from the audio-data tensor. Just take a slice audio[i:i+200] of the 8m audio tensor, and audio[i+201] as your target. The only extra ingredient, then, is the set of offset indices into your audio data, which you could sample randomly or sequentially.
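In Dataset form that could look roughly like this (a sketch; the 200-sample window is the one from your post):

import torch
from torch.utils.data import Dataset

class WindowedAudio(Dataset):
    def __init__(self, audio, window=200):
        self.audio = audio              # 1-d tensor holding the full signal
        self.window = window

    def __len__(self):
        return self.audio.size(0) - self.window

    def __getitem__(self, i):
        x = self.audio[i:i + self.window]    # the previous 200 samples
        y = self.audio[i + self.window]      # the sample to predict
        return x, y

ds = WindowedAudio(torch.randn(8_000_000))
x, y = ds[0]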
st182836
That’s exactly what I’m looking for, thanks! I should have been more specific, I’m actually using PyTorch lightning, so I may need make a lower level PyTorch training loop to index the data in that way during training. Thanks for the quick response!
st182837
I converted Conv-TasNet from PyTorch to ONNX, following the steps from the website (PyTorch ONNX tutorial). Then I used "testing.assert_almost_equal" to verify the output. The original pre-trained TasNet model worked; however, the ONNX results are all zeros. Its weights seem to be all zero, and the output dimensions are also different. I can't figure out which step went wrong. Here is my project. Run demo_tasnet.py and the ONNX model will be saved in checkpoints. I also wrote a test.py to compare the results.
st182838
Hi, I tried to figure out the problem, so I checked the code again:

torch.onnx.export(model, input, ".path/to/conv_tasnet.onnx", export_params=True)

I set export_params=True and tested the .onnx file with onnxruntime, but the output is still all zeros. It seems that the weights are not being saved.
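For reference, the kind of export/comparison round-trip I mean is roughly this (a sketch; the tiny Sequential is only a stand-in for the real Conv-TasNet, and the file name is arbitrary):

import numpy as np
import onnxruntime as ort
import torch

# stand-in for the real separator; the actual Conv-TasNet would go here
model = torch.nn.Sequential(torch.nn.Conv1d(1, 1, kernel_size=3, padding=1))
model.eval()

dummy = torch.randn(1, 1, 32000)                 # example mixture input

torch.onnx.export(model, dummy, "check.onnx", export_params=True)

with torch.no_grad():
    torch_out = model(dummy).numpy()

sess = ort.InferenceSession("check.onnx")
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: dummy.numpy()})[0]

np.testing.assert_almost_equal(torch_out, onnx_out, decimal=4)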
st182839
I am fine-tuning wav2vec2 on my own data. I am running this on a K80, which doesn't support fp16. When I was training with the fp16 flag, the loss scale reached 0.0001:

FloatingPointError: Minimum loss scale reached (0.0001). Your loss is probably exploding. Try lowering the learning rate, using gradient clipping or increasing the batch size.

Then I switched to FP32, but the loss became NaN this time. Log:

/data/fairseq/fairseq/utils.py:306: UserWarning: amp_C fused kernels unavailable, disabling multi_tensor_l2norm; you may get better performance by installing NVIDIA's apex library
  "amp_C fused kernels unavailable, disabling multi_tensor_l2norm; "
/data/fairseq/fairseq/utils.py:306: UserWarning: amp_C fused kernels unavailable, disabling multi_tensor_l2norm; you may get better performance by installing NVIDIA's apex library
2020-10-01 10:25:50 | WARNING | root | NaN or Inf found in input tensor.
2020-10-01 10:25:50 | WARNING | root | NaN or Inf found in input tensor.
2020-10-01 10:25:50 | WARNING | root | NaN or Inf found in input tensor.
2020-10-01 10:25:50 | INFO | train | {"epoch": 73, "train_loss": "nan", "train_ntokens": "15558.8", "train_nsentences": "427.098", "train_nll_loss": "nan", "train_wps": "2443", "train_ups": "0.16", "train_wpb": "15558.8", "train_bsz": "427.1", "train_num_updates": "9291", "train_lr": "1e-08", "train_gnorm": "nan", "train_loss_scale": null, "train_train_wall": "867", "train_wall": "0"}
2020-10-01 10:25:50 | INFO | fairseq.trainer | begin training epoch 74
2020-10-01 10:40:41 | INFO | fairseq_cli.train | begin validation on "valid" subset
2020-10-01 10:42:40 | WARNING | root | NaN or Inf found in input tensor.
2020-10-01 10:42:40 | WARNING | root | NaN or Inf found in input tensor.
2020-10-01 10:42:40 | INFO | valid | {"epoch": 74, "valid_loss": "nan", "valid_ntokens": "2570.42", "valid_nsentences": "71.4286", "valid_nll_loss": "nan", "valid_uer": "100", "valid_wer": "100", "valid_raw_wer": "100", "valid_wps": "2780.3", "valid_wpb": "2570.4", "valid_bsz": "71.4", "valid_num_updates": "9454", "valid_best_wer": "100"}

Any suggestions on how to overcome this?
st182840
Are you using opt_level='O0' in apex.amp? If so, could you remove apex and run the model in “pure” float32 and check if you are still seeing invalid outputs? If so, could you check, if the inputs are containing valid values via torch.isfinite(input).all()?
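For example, a small helper along those lines (sketch):

import torch

def assert_finite(batch, step):
    # catch NaN/Inf in the inputs before they ever reach the model
    if not torch.isfinite(batch).all():
        raise ValueError(f"non-finite values in the input at step {step}")

assert_finite(torch.randn(4, 16000), step=0)   # passes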
st182841
Hi pytorch audio community. I've ported bss_eval_sources from the mir_eval library to PyTorch in a tiny effort to speed everything up:

GitHub: JuanFMontesinos/torch_mir_eval - a PyTorch implementation of https://craffel.github.io/mir_eval/

It's not backpropagable, but it's roughly 5 times faster if you use a GPU.
Regards
st182842
Hello! I just read the Wav2Letter paper (https://arxiv.org/abs/1609.03193), and would like to try reproducing it in PyTorch. I noticed that there's a model available in torchaudio: https://pytorch.org/audio/models.html#wav2letter But it doesn't seem to be pretrained. My attempts to run it appear to produce nonsense output: https://colab.research.google.com/drive/1KpceNl5eT08kIpiX-TLjD5PkKZR7pK7u?usp=sharing Is there a way to use the weights that the paper used or some other set of pretrained weights? (Apologies if this is a silly question - I'm new to pytorch and ML frameworks, so I'm not sure if I'm missing some standard way to load pretrained weights) Thanks!
st182843
Hi, I'm trying to train a simple audio classification model on Colab, but my GPU memory (running on a 16GB instance) use keeps expanding and getting out of control every few epochs. Here is the model definition and a minimal snippet of my training code:

class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding),
            nn.BatchNorm2d(out_channels),
            nn.ELU(),
        )

    def forward(self, x):
        x = self.layers(x)
        return x

class AudioClassifier(nn.Module):
    def __init__(self, stereo=True, dropout=0.1):
        super().__init__()
        in_channels = 2 if stereo else 1

        self.spec = MelspectrogramStretch(hop_length=None,
                                          num_mels=128,
                                          fft_length=2048,
                                          norm='whiten',
                                          stretch_param=[0.4, 0.4])

        self.features = nn.Sequential(*[
            ConvBlock(in_channels=2, out_channels=32, kernel_size=3, stride=1),
            nn.MaxPool2d(3,3),
            nn.Dropout(p=dropout),
            ConvBlock(in_channels=32, out_channels=64, kernel_size=3, stride=1),
            nn.MaxPool2d(4,4),
            nn.Dropout(p=dropout),
            ConvBlock(in_channels=64, out_channels=64, kernel_size=3, stride=1),
            nn.MaxPool2d(4,4),
            nn.Dropout(p=dropout),
        ])

        self.min_len = 80896
        self.gru_hidden_size = 64
        self.gru_layers = 2
        self.rnn = nn.GRU(128, self.gru_hidden_size, num_layers=self.gru_layers)

        self.ret = nn.Sequential(*[nn.Linear(self.gru_hidden_size,1), nn.Sigmoid()])

    def modify_lengths(self, lengths):
        def safe_param(elem):
            return elem if isinstance(elem, int) else elem[0]

        for name, layer in self.features.named_children():
            if isinstance(layer, (nn.Conv2d, nn.MaxPool2d)):
                p, k, s = map(safe_param, [layer.padding, layer.kernel_size, layer.stride])
                lengths = ((lengths + 2*p - k)//s + 1).long()

        return torch.where(lengths > 0, lengths, torch.tensor(1, device=lengths.device))

    def _many_to_one(self, t, lengths):
        return t[torch.arange(t.size(0)), lengths - 1]

    def init_hidden(self, batch_size, device):
        return torch.zeros(self.gru_layers, batch_size, self.gru_hidden_size, device=device)

    def forward(self, wave, lengths):
        x = wave
        raw_lengths = lengths

        xt = x.float().transpose(1,2)
        xt, lengths = self.spec(xt, raw_lengths)

        xt = self.features(xt)
        lengths = self.modify_lengths(lengths)

        x = xt.transpose(1, -1)
        batch, time = x.size()[:2]
        x = x.reshape(batch, time, -1)
        lengths = lengths.clamp(max=x.shape[1])

        # Handle variable input size
        x_pack = torch.nn.utils.rnn.pack_padded_sequence(x, lengths.clamp(max=x.shape[1]), batch_first=True)
        x_pack, self.hidden = self.rnn(x_pack)
        x, _ = torch.nn.utils.rnn.pad_packed_sequence(x_pack, batch_first=True)

        x = self._many_to_one(x, lengths)
        x = self.ret(x)
        return x

def train():
    for epoch in range(1,epochs+1):
        model.train()
        batch_losses=[]

        for batch_idx, batch in enumerate(pbar):
            optimizer.zero_grad()
            wave, lengths, lbl = batch
            model.hidden = model.init_hidden(BATCH_SIZE, device)
            pred = model(wave.to(device), lengths.to(device)).squeeze()
            loss = loss_fn(pred, lbl.to(device))
            loss.backward()
            batch_losses.append(loss.detach().item())
            optimizer.step()
            del loss, pred, wave, lengths, lbl

I have tried deleting the prediction and input variables every loop, made sure I was detaching every variable I keep, and added an init_hidden() function to refresh the hidden states. These have helped me get to around 3-4 epochs before crashing, but it still happens. I'm running a batch size of just 8, and the input audio files are at most 15-20 seconds long. Is there anything I can do without reducing the batch size even further?

Sorry if there are any stupid mistakes there but I'm a CV guy just getting into audio processing. The code for the classifier was heavily inspired by https://github.com/ksanjeevan/crnn-audio-classification.
st182844
Hi everybody, I am using torchaudio.transforms.Spectrogram to get the spectrogram of a sine wave, which is as follows:

Fs = 400
freq = 5
sample = 400
x = np.arange(sample)
y = np.sin(2 * np.pi * freq * x / Fs)

Then I get the spectrogram of the mentioned sine wave as follows:

specgram = torchaudio.transforms.Spectrogram(n_fft=256, win_length=256, hop_length=184,
                                             window_fn=torch.hamming_window, power=1,
                                             normalized=True)
output = specgram(torch.from_numpy(y))

As you see, the sine wave only has a frequency of 5 Hz, so I expect the output not to change across time bins, but I got the following figure, which is strange:

[figure omitted: torchaudio spectrogram - the magnitude varies across time bins]

I believe this is wrong, so I decided to use the signal library to get the spectrogram of the mentioned sine wave as follows:

frequencies_samples, time_segment_sample, spectrogram_of_vector = signal.spectrogram(
    x=y, fs=fs, nperseg=256, noverlap=184, window="hamming", detrend=False, mode='magnitude')

And then, using signal.spectrogram, I get the following figure:

[figure omitted: signal.spectrogram output - constant across time bins]

As we expected, the spectrogram should not change across time bins because we have only one frequency. So, what is the matter with the spectrogram when I use the torch library? Why does it change over time when I have only one frequency?
st182845
This probably is from the phase shift incurred when the partitioning isn’t a multiple of the wave length in samples.
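A quick way to see this: with Fs = 400 and a 5 Hz tone one period is 80 samples, so if the window and hop are whole multiples of 80, every frame sees the same phase and the columns come out identical (a sketch):

import numpy as np
import torch
import torchaudio

fs, freq = 400, 5
period = fs // freq                               # 80 samples per cycle
y = np.sin(2 * np.pi * freq * np.arange(4 * fs) / fs)

spec = torchaudio.transforms.Spectrogram(
    n_fft=3 * period, win_length=3 * period,      # window = a whole number of periods
    hop_length=period,                            # every frame starts on the same phase
    window_fn=torch.hamming_window, power=1)
out = spec(torch.from_numpy(y).float())

print(out[:, 2:-2].std(dim=-1).max())             # ~0: interior columns are identical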
st182846
Many thanks for your reply. I applied a window that is a multiple of the wavelength in samples, and now it is as I expected. However, I have another question: with power=1, the output of torchaudio.transforms.Spectrogram is the magnitude spectrogram (the power argument is the exponent for the magnitude). So why can we see the effect of the phase shift at all?
st182847
I think the phase changes the jump at the discontinuity at the boundary of the basic cell, and that leads to varying degrees of uncleanliness of the Fourier transform. If you had a truly periodic signal, the phase shift would correspond to an argument shift of the (complex) coefficients while leaving the moduli of the coefficients alone.
st182848
Many thanks for your reply. I am just wondering why we do not see this effect when we compute the spectrogram with the signal library (signal.spectrogram).
st182849
Is the torchaudio.transforms.MelSpectrogram class in torchaudio differentiable? As in, can I backpropagate through it? If so, can someone point me towards some documentation on how this backpropagation works?
st182850
Solved by tom in post #2 I think so, just do requires_grad_ on your input. In the end it goes through torchaudio.transforms.functional.spectrogram and uses the torch.stft function. This calls torch.fft (I think), which has a derivative defined. There are several texts about how the inner parts of PyTorch work, I wrote some…
st182851
I think so, just do requires_grad_ on your input. In the end it goes through torchaudio.transforms.functional.spectrogram and uses the torch.stft function. This calls torch.fft (I think), which has a derivative defined. There are several texts about how the inner parts of PyTorch work; I wrote something simple a long time ago and @ezyang has an awesome comprehensive tour of PyTorch internals.

Best regards
Thomas
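A quick sanity check along those lines (sketch):

import torch
import torchaudio

wave = torch.randn(1, 16000, requires_grad=True)
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000)(wave)
mel.sum().backward()
print(wave.grad.shape)   # torch.Size([1, 16000]): gradients flow back to the waveform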
st182852
Thanks a lot for the information! Especially the links about the inner workings of PyTorch!
st182853
Hello, what is up with the audio tutorials? When trying to search for PyTorch audio quickstart-ish code, I only found https://pytorch.org/tutorials/beginner/audio_classifier_tutorial.html which seems to be old and missing code. The one tutorial that is in pytorch/tutorials is https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html which only shows file loading and a few transforms. Is there a way one could contribute to writing those?
st182854
Contributions are always welcome! Could you open an issue here and describe what kind of tutorial you would like to write to get some feedback?
st182855
I'm trying to determine if I should refactor the way my dataset's __getitem__ method pre-processes. Are there any heuristics beyond the obvious time complexity and code readability I should consider? For a given audio file that I load I want to:

- Pad the length to match a given interval size.
- Cut the audio into equal, pre-determined interval lengths.
- Create several types of spectrograms of each piece of this new sequence.

It's a lot. Is there any reason that I shouldn't do all of these things right inside of __getitem__?
st182856
Solved by JuanFMontesinos in post #3 __getitem__ is the process which is spawn with multiprocess. The workload should be there (mainly the spectrograms) As @hash-ir mentions, any other task can be performed in the __init__ function (or even outside the dataset class) It’s all about how many RAM do you have. Do you need to read audi…
st182857
If any of the tasks are independent of the audio file, you can perform them in the __init__ method. Else, instead of doing everything in the __getitem__ method, you can write custom transforms and use them in the order you want. I am not really sure if this is more efficient than doing everything in the __getitem__ method but you can try and check if it is.
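For example, something along these lines (a sketch with made-up transform names; the padding/chunking intervals are arbitrary):

import torch
from torch.utils.data import Dataset

class PadToMultiple:
    def __init__(self, interval):
        self.interval = interval

    def __call__(self, wave):                    # wave: (time,)
        pad = (-wave.size(0)) % self.interval
        return torch.nn.functional.pad(wave, (0, pad))

class Chunk:
    def __init__(self, interval):
        self.interval = interval

    def __call__(self, wave):
        return wave.view(-1, self.interval)      # (num_chunks, interval)

class AudioDataset(Dataset):
    def __init__(self, waves, transforms):
        self.waves = waves                       # list of 1-d tensors, loaded elsewhere
        self.transforms = transforms             # applied in order in __getitem__

    def __len__(self):
        return len(self.waves)

    def __getitem__(self, idx):
        wave = self.waves[idx]
        for t in self.transforms:
            wave = t(wave)
        return wave

ds = AudioDataset([torch.randn(44100)], [PadToMultiple(16000), Chunk(16000)])
print(ds[0].shape)                               # torch.Size([3, 16000])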
st182858
__getitem__ is the code that gets spawned with multiprocessing (the DataLoader workers), so the workload should be there (mainly the spectrograms). As @hash-ir mentions, any other task can be performed in the __init__ function (or even outside the dataset class). It's all about how much RAM you have. Do you need to read audio in __getitem__? Well, if your dataset fits in your RAM you can preload it in __init__. You can use any Python function inside __getitem__, so it can still be readable.
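A sketch of the preload-in-__init__ pattern (assuming the files fit in RAM; filepaths is just a list of wav paths):

import torchaudio
from torch.utils.data import Dataset

class PreloadedAudio(Dataset):
    def __init__(self, filepaths):
        # heavy I/O happens once, in the main process
        self.waves = [torchaudio.load(p)[0] for p in filepaths]
        self.spec = torchaudio.transforms.Spectrogram(n_fft=1024)

    def __len__(self):
        return len(self.waves)

    def __getitem__(self, idx):
        # per-item compute stays here so the DataLoader workers parallelize it
        return self.spec(self.waves[idx])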
st182859
Will it be possible down the line to have torch.stft or torch.istft support PackedSequence objects? I have provided a minimum working example here:

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_packed_sequence

fft_size = 1024
hop_size = 256

batched_audio = torch.rand((3, 50000))
window = torch.hann_window(fft_size)
Y = torch.stft(batched_audio, n_fft=fft_size, hop_length=hop_size, window=window)
print(torch.is_tensor(Y))  # Out: True

In the above cell, all the audio is of equal length (50000 samples). Below is an example with varying-length audio (50000, 40000, and 30000 samples respectively).

batched_audio = [torch.rand(50000), torch.rand(40000), torch.rand(30000)]
lengths = [len(x) for x in batched_audio]

padded_audio = pad_sequence(batched_audio, batch_first=True)
packed_audio = pack_padded_sequence(padded_audio, lengths, batch_first=True)

Y = torch.stft(packed_audio, n_fft=fft_size, hop_length=hop_size, window=window)
# Out: AttributeError: 'PackedSequence' object has no attribute 'dim'

This would be really handy for batch-processing variable-length audio; taking the STFT and ISTFT are important operations. I guess the workaround for this right now would be to pre-compute STFTs of all the waveforms before calling pad_sequence, but I'm trying to reduce the GPU memory footprint of my tests. I thought on-the-fly STFT computation would be more efficient.
st182860
Hello everyone, I am fine tuning a model for sound event detection, taken from https://github.com/qiuqiangkong/audioset_tagging_cnn 11, on the Urbansed dataset. In this task, the model should predict a [batch, n_classes, time_steps] matrix, with a value of one indicating the presence of an event at a certain time step. However, my network does not seem to train. Specifically, after about the first 10 epochs, my loss stops decreasing. If I check the predictions of the model, the output is composed entirely of 0.5s. I have tried:
Changing the amount of l2 regularization, even turning it off completely
Changing the learning rate
Doing a mock training with only 2 samples to see if the network could learn the simple problem. The result was the same matrix of 0.5.
Different optimizers (Adam and SGD so far)
BCEWithLogitsLoss with reduction = mean and sum (using this loss as in theory multiple classes can be active at a time)

My loss and optimizer:

criterion = nn.BCEWithLogitsLoss(reduction='sum')
optimizer = optim.SGD(model.parameters(), lr=0.001)

My training loop:

for i, data in enumerate(dataloader_train):
    inputs, labels = data
    inputs = inputs.type(torch.FloatTensor)
    optimizer.zero_grad()
    outputs = model(inputs).cpu()
    loss = 0
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    running_loss += loss.item()

I can’t figure out what’s wrong. Any thoughts? Thanks, Federico
st182861
Thank you for your suggestion. I tried visualizing the data and nothing seems out of place; the spectrograms appear correct.
st182862
I think you should manually initialize your network's weights and try again. By the way, what is your loss function, learning rate and optimizer?
st182863
Thanks, I will try that, although that would mean I cannot do transfer learning. As for loss function, optimizer and lr I'm using:

criterion = nn.BCEWithLogitsLoss(reduction='sum')
optimizer = optim.SGD(model.parameters(), lr=0.001)

Although I also tried Adam and different learning rates and l2 regularization values.
st182864
Well, try using the Adam optimizer together with weight initialization (weight values should be very close to zero but not too small, and uniformly distributed). If it still doesn't learn, try changing the network architecture.
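A minimal sketch of that kind of manual initialization, assuming a model built from Linear/Conv layers; the bound of 0.05 is an arbitrary illustration, not a value recommended in this thread:

import torch.nn as nn

def init_weights(m):
    # small, uniformly distributed weights; zero biases
    if isinstance(m, (nn.Linear, nn.Conv1d, nn.Conv2d)):
        nn.init.uniform_(m.weight, -0.05, 0.05)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(init_weights)  # assumes `model` is already built; this overwrites any pretrained weights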
st182865
I have a dataset that consists of 18 audio files with a duration of 5 minutes, annotated with affective labels (arousal) per timestep (25hz or per 0.04s). 9 of these files are used for evaluating performance. For each audio file I create a feature vector with MFCC (20 bins) and MFCC deltas (also 20) per labeled timestep with overlapping.

Dataset:

def mfcc_features(filename, n_mfcc=20, n_mels=128, frame_time=0.08, hop_time=0.04):
    filepath = os.path.join(gf.audio_path[0], filename + '.wav')
    waveform, sample_rate = librosa.load(filepath, sr=None)
    frame_length = int(sample_rate * frame_time)
    hop_length = int(sample_rate * hop_time)
    melkwargs = {"n_fft": frame_length,
                 "n_mels": n_mels,
                 "hop_length": hop_length,
                 "f_min": 0,
                 "f_max": None,
                 "window_fn": torch.hamming_window}
    mfcc = torchaudio.transforms.MFCC(sample_rate=sample_rate, n_mfcc=n_mfcc, dct_type=2, norm='ortho',
                                      log_mels=True, melkwargs=melkwargs)(torch.from_numpy(waveform))[:, :-1]
    mfcc_deltas = torchaudio.functional.compute_deltas(mfcc, win_length=3)
    feature_vector = torch.cat([mfcc, mfcc_deltas])
    return torch.FloatTensor(feature_vector).T

def label_vector(filename, target_value='arousal'):
    target_values = ['arousal', 'valence']
    if target_value not in target_values:
        raise ValueError("Invalid target value. Expected one of: %s" % target_values)
    filepath = os.path.join(gf.gold_standard_path[target_values.index(target_value)], filename + '.csv')
    df = pd.read_csv(filepath)
    return torch.FloatTensor(df['gold_standard'].values).unsqueeze(0).T

class AudioDataset(torch.utils.data.Dataset):
    def __init__(self, list_IDs):
        self.list_IDs = list_IDs

    def __len__(self):
        return len(self.list_IDs)

    def __getitem__(self, index):
        # Select sample
        ID = self.list_IDs[index]
        # Load data and get label
        X = mfcc_features(ID)
        y = label_vector(ID)
        return X, y

Therefore I will have an X and y for each file with the following shape:

$ print(mfcc_features('P16').shape, label_vector('P16').shape)
torch.Size([7500, 40]) torch.Size([7500, 1])

I am currently having trouble using this dataset in a Network with mini-batches in the DataLoader.
Network architecture:

import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Network, self).__init__()
        self.dense_h1 = nn.Linear(in_features=input_size, out_features=hidden_size)
        self.relu_h1 = nn.ReLU()
        self.dropout = nn.Dropout(p=0.5)
        self.dense_out = nn.Linear(in_features=hidden_size, out_features=1)

    def forward(self, x):
        out = self.relu_h1(self.dense_h1(x))
        out = self.dropout(out)
        y_pred = self.dense_out(out)
        return y_pred

The rest of the code with the training sequence is as follows.

Initialization and training:

use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
torch.backends.cudnn.benchmark = True

# Parameters
params = {'batch_size': 3, 'shuffle': True, 'num_workers': 6}
learningRate = 1e-4
max_epochs = 100

# Model
input_size, hidden_size = 40, 20
model = Network(input_size, hidden_size)
if torch.cuda.is_available():
    model.cuda()

criterion = ConcordanceCorrelationCoefficient()
optimizer = torch.optim.Adam(model.parameters(), lr=learningRate)

# Datasets
partition = {
    "train": ['P39', 'P23', 'P41', 'P46', 'P37', 'P16', 'P21', 'P25', 'P56'],
    "validation": ['P45', 'P26', 'P64', 'P34', 'P42', 'P65', 'P30', 'P19', 'P28']
}

# Generators
training_set = AudioDataset(partition['train'])
training_generator = torch.utils.data.DataLoader(training_set, **params)

validation_set = AudioDataset(partition['validation'])
validation_generator = torch.utils.data.DataLoader(validation_set, **params)

# Loop over epochs
for epoch in range(max_epochs):
    # Training
    for local_batch, local_labels in training_generator:
        # Transfer to GPU
        local_batch, local_labels = local_batch.to(device), local_labels.to(device)
        # Forward pass: Compute predicted y by passing x to the model
        outputs = model(local_batch)
        # Compute and print loss
        loss = criterion(outputs, local_labels)
        # Zero gradients, perform a backward pass, and update the weights.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Validation
    with torch.set_grad_enabled(False):
        for local_batch, local_labels in validation_generator:
            # Transfer to GPU
            local_batch, local_labels = local_batch.squeeze().to(device), local_labels.squeeze().to(device)
            # Forward pass: Compute predicted y by passing x to the model
            outputs = model(local_batch)
            # Compute and print loss
            loss = criterion(outputs, local_labels)
            print('Validation loss %.3f' % loss.item())

    print('Epoch: {}, Loss: {}'.format(epoch, loss.item()))

This sequence runs but the loss doesn’t really go down. I think it’s because the extra batch dimension creates a problem with my loss function, which is CCC. I have to use this metric for my research in addition to MSE.
torch.Size([3, 7500, 40]) torch.Size([3, 7500, 1])

Loss function:

class ConcordanceCorrelationCoefficient(nn.Module):
    def __init__(self):
        super(ConcordanceCorrelationCoefficient, self).__init__()
        self.mean = torch.mean
        self.var = torch.var
        self.sum = torch.sum
        self.sqrt = torch.sqrt
        self.std = torch.std

    def forward(self, prediction, ground_truth):
        mean_gt = self.mean(ground_truth, 0)
        mean_pred = self.mean(prediction, 0)
        var_gt = self.var(ground_truth, 0)
        var_pred = self.var(prediction, 0)
        v_pred = prediction - mean_pred
        v_gt = ground_truth - mean_gt
        cor = self.sum(v_pred * v_gt) / (self.sqrt(self.sum(v_pred ** 2)) * self.sqrt(self.sum(v_gt ** 2)))
        sd_gt = self.std(ground_truth)
        sd_pred = self.std(prediction)
        numerator = 2 * cor * sd_gt * sd_pred
        denominator = var_gt + var_pred + (mean_gt - mean_pred) ** 2
        ccc = numerator / denominator
        return 1 - ccc

Questions:
Does it make sense to use each file as a different mini-batch or should I go about this in a different way? I could try to concatenate all feature sets into one batch.
How do I make the validation sequence more robust? Currently I only print the loss.
Are there other machine learning techniques that I should adopt or try?
Am I making it too complex for my use case? If so, what could I be doing differently?

Thanks for taking your time to read my topic; if you have any questions about my code or project, feel free to ask them.
st182866
Just wondered if anyone has created, or has any example code for, a more recent PyTorch setup using torchaudio MFCC, hopefully in a streaming model that can use ALSA sources? Or if anybody is up for the idea of kickstarting something. I quite like the Linto TensorFlow HMG (Hotword Model Generator) as it allows you to create a profile of MFCC & model parameters. GitHub linto-ai/linto-desktoptools-hmg 9 GUI Tool to create, manage and test Keyword Spotting models using TF 2.0 - linto-ai/linto-desktoptools-hmg It would be great to have something similar with PyTorch, with a GUI where maybe you could test GRU, CRNN, and DS-CNN models with datasets. But you always live in hope. Stuart
st182867
Hi Stuart, the idea sounds interesting. Would you be interested in working on a tutorial for this kind of functionality?
st182868
Yeah, without a doubt; I'm more than willing to put in the time. I was hoping Honk might get an update as a Honk2 and drop Librosa, as it can be problematic on some platforms. GitHub castorini/honk 12 PyTorch implementations of neural network models for keyword spotting - castorini/honk It would be a great addition, because when it comes to open-source KWS where model creation is an easy process, the options are very limited. It's sort of strange, as it's easier to find more complex ready-to-go installs of Kaldi / DeepSpeech than a KWS where the models can be easily created from a dataset. There are quite a few open-source options, but often the models are black boxes, and that's why I posted HMG above: it's not just the KWS but also the model creation tools that would be such a great contribution.
st182869
rolyan_trauts: Yeah without a doubt, more than willing to put in the time. That sounds great! Would you mind creating a feature request with your proposal here 22 and tagging @vincentqb there so that he can take a look at it?
st182870
I recently discovered Google Colab and I uploaded my PyTorch project related to training models for processing audio. I got it training models using Google's TPUs, but I noticed that the models were less accurate than the ones I trained on my local machine. It turns out that the state dict weights and biases have about half the decimal places of the locally trained model. After some searching I read that Colab uses float16 by default instead of float32 precision to increase speed, but since the audio I'm training on is in float32, it really needs to train in float32 precision. Is there a way to change this in Colab? Or is there a way to change my PyTorch model to ensure float32 precision is kept? My model uses a stack of 1d convolutional layers, if that matters.
st182871
PyTorch uses float32 by default on CPU and GPU. I'm not deeply familiar with TPUs, but I guess you might be using bfloat16 on them? Could you try to call float() on the model and inputs and check if the TPU run is forcing you to use this format?
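A minimal sketch of that check, with placeholder names for the model, loader and data (not taken from the post):

# force float32 on both the model and the inputs before the forward pass
model = model.float()

for data, target in loader:
    data = data.float()
    output = model(data)
    print(output.dtype)  # check which dtype actually comes out of the forward pass
    break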
st182872
I went back and looked at the state_dict of both the locally trained GPU model and the cloud TPU model, and they do have the same precision, around 4 decimal places, so what I said in the original question was incorrect. After some more reading it sounds like the built-in bfloat16 type is part of what makes TPUs so fast, and I don't understand all the math, but I think it can produce the same range of values, so that might not be my issue. I also should note that I'm using a pytorch_lightning module, although I wouldn't think that would matter. I might try calling float() to see if that changes the output, thanks!
st182873
Here’s the reference I found on bfloat16: Google Cloud Blog BFloat16: The secret to high performance on Cloud TPUs | Google Cloud Blog 1 How the high performance of Google Cloud TPUs is driven by Brain Floating Point Format, or bfloat16
st182874
While bfloat16 uses the same range as float32, it does not provide the same “step size”. As I’m not deeply familiar with this numerical format, I don’t know if you would have to adapt your model to this format.
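A tiny illustration of that “step size” difference: bfloat16 keeps float32's exponent range but only 7 mantissa bits, so values are rounded much more coarsely. The printed values in the comments are approximate; run it to see the exact rounding:

import torch

x = torch.tensor(0.1234567)              # float32
print(x.item())                          # close to 0.1234567
print(x.to(torch.bfloat16).item())       # noticeably rounded, roughly 0.1235
print(torch.finfo(torch.float32).eps)    # ~1.19e-07
print(torch.finfo(torch.bfloat16).eps)   # ~7.81e-03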
st182875
Hi, I am trying to figure out how a nn.conv1d processes an input for a specific example related to audio processing in a WaveNet model. I have input data of shape (1,1,8820), which passes through an input layer (1,16,1), to output a shape of (1,16,8820). That part I understand, because you can just multiply the two matrices. The next layer is a conv1d, kernel size=3, input channels=16, output channels=16, so the state dict shows a matrix with shape (16,16,3) for the weights. When the input of (1,16,8820) goes through that layer, the result is another (1,16,8820). What multiplication steps occur within the layer to apply the weights to the audio data? In other words, if I wanted to apply the layer(forward calculations only) using only numpy for this example, how would I do that?
st182876
Solved by yoyololicon in post #2
st182877
Hi @Keith72, this is how pytorch conv1d actually works in your case:

import torch
import torch.nn.functional as F

x = torch.rand(1, 16, 8820)
weight = torch.rand(16, 16, 3)

# first pad zeros along the time dimension
x = F.pad(x, [1, 1])  # shape = (1, 16, 8822)

# unfolded, so you have 8820 moving windows with size = (16, 3)
x = x.unfold(2, 3, 1)  # shape = (1, 16, 8820, 3)

# matrix multiplication, I use tensordot for simplicity
y = torch.tensordot(x, weight, dims=([1, 3], [1, 2]))  # shape = (1, 8820, 16); permute to get (1, 16, 8820)

In numpy you can simply replace pad and tensordot with the corresponding numpy functions; for unfold you can use numpy.lib.stride_tricks.as_strided.
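For completeness, a minimal numpy-only sketch of the same forward pass, following the as_strided hint above (same shapes as the example; stride 1 and padding 1 are assumed):

import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.random.rand(1, 16, 8820).astype(np.float32)
weight = np.random.rand(16, 16, 3).astype(np.float32)

x_padded = np.pad(x, ((0, 0), (0, 0), (1, 1)))             # (1, 16, 8822), zero-padded in time
s0, s1, s2 = x_padded.strides
windows = as_strided(x_padded, shape=(1, 16, 8820, 3),
                     strides=(s0, s1, s2, s2))              # 8820 sliding windows of length 3
y = np.tensordot(windows, weight, axes=([1, 3], [1, 2]))    # (1, 8820, 16)
y = y.transpose(0, 2, 1)                                    # (1, 16, 8820), matching conv1d output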
st182878
Thanks for the quick response! My initial implementation seems to match your steps, except the last step gave me a shape of (16, 1, 8820), so I just swapped the first two dimensions. Now if I wanted to account for layer dilation, how would that work?
st182879
That can be achieved easily by using indexing:

x = F.pad(x, [1 * dilation] * 2)
x = x.unfold(2, 2 * dilation + 1, 1)[..., ::dilation]  # shape = (1, 16, 8820, 3)
...

In numpy you can alter the stride size of the ndarray to do dilated convolution. Here's an example implementation 3; you can check it for details.
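Continuing the numpy sketch, dilation only changes the stride of the window axis. This again assumes kernel size 3 and the x and weight arrays from before; the dilation value is arbitrary:

import numpy as np
from numpy.lib.stride_tricks import as_strided

dilation = 4  # example value
x_padded = np.pad(x, ((0, 0), (0, 0), (dilation, dilation)))     # assumes x of shape (1, 16, 8820)
s0, s1, s2 = x_padded.strides
windows = as_strided(x_padded, shape=(1, 16, 8820, 3),
                     strides=(s0, s1, s2, s2 * dilation))         # taps spaced `dilation` samples apart
y = np.tensordot(windows, weight, axes=([1, 3], [1, 2])).transpose(0, 2, 1)  # (1, 16, 8820)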
st182880
By reading this article 4, it looks like the fbank is just the mel-scaled spectrogram. Could anyone confirm that? I understand the results from transforms.MelSpectrogram and from compliance.kaldi.fbank might not end up the same even with the same parameter settings. Do the general concepts behind the two functions match?
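For what it's worth, a hedged sketch computing both side by side; the parameter names assume a recent torchaudio release, the file name is a placeholder, and the two use different windowing and normalization defaults (Kaldi-style povey window, pre-emphasis, etc.), so the outputs are expected to differ numerically even if the underlying idea is the same:

import torch
import torchaudio

waveform, sr = torchaudio.load("sample.wav")  # placeholder file, assumed 16 kHz mono

# mel-scaled power spectrogram, log-compressed by hand
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_fft=400, hop_length=160, n_mels=23)(waveform)
log_mel = torch.log(mel + 1e-6)  # (channels, n_mels, frames)

# Kaldi-style log mel filterbank features
fb = torchaudio.compliance.kaldi.fbank(waveform, sample_frequency=sr, num_mel_bins=23,
                                       frame_length=25.0, frame_shift=10.0)  # (frames, n_mels)

print(log_mel.shape, fb.shape)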
st182881
My task is to take an episode of a TV show and its subtitles, then make the subtitle timings more accurate (from 200 ms to 20 ms). So I want to learn what is speech and what is not. I've now taken the audio, converted it into a spectrogram, and separated each column of the spectrogram into a single data item. So now I have two arrays:

print(train_speech.size())   # torch.Size([93482, 201])
print(train_silence.size())  # torch.Size([35038, 201])

All I want to do is a simple multi-linear NN to tell the difference. train_speech is FFTs of people talking and train_silence is no talking (I used the subtitles for the distinction). My question is: what DataLoader can I use to take these into torch?
st182882
There is one DataLoader 1, which accepts a Dataset and provides different functionalities such as shuffling, creating batches using multiple workers, etc. To create a custom Dataset you could have a look at this tutorial 1.
st182883
What I don’t get is that my data is already a simple tensor… It doesn’t make sense to me that I need to create a separate abstraction just to fetch numbers from a few arrays…
st182884
If your data is already stored as tensors, you can just use TensorDataset or completely skip the abstraction and just feed the data to your model.
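A minimal sketch of the TensorDataset route for the two tensors above; the labels are built here purely for illustration (1 = speech, 0 = silence):

import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.cat([train_speech, train_silence])  # (128520, 201)
labels = torch.cat([torch.ones(len(train_speech), dtype=torch.long),
                    torch.zeros(len(train_silence), dtype=torch.long)])

dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=1024, shuffle=True)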
st182885
Since I had two classes in separate variables I ended up making the custom class.

class MyDataset(Dataset):
    def __init__(self, speech, silence):
        self.data = list(map(lambda x: (x, 1), speech)) + list(map(lambda x: (x, 0), silence))

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

train_ds = MyDataset(train_speech, train_silence)
train_dl = DataLoader(train_ds, shuffle=True, batch_size=1024)

Thanks for helping me get through this.
st182886
Model training uses parameter sharing, but the saved size is the same as the model without sharing. How can we reduce the storage size of the model with shared weights?
st182887
Hello! I created a conda environment using a provided .yml file from this repo 1. However, when installing torchaudio, torch 1.5.1 was automatically installed, and I needed pytorch 1.0.0 (required for the code). I ended up having both torch 1.5.1 and pytorch 1.0.0 in the same environment. Is this a problem? I think it is, because now I'm getting the following error when I execute the audio training:

File "/home/fjaviersaezm/anaconda3/envs/infomax/lib/python3.6/site-packages/torchaudio/functional.py", line 40, in
@torch.jit.ignore
AttributeError: module 'torch.jit' has no attribute 'ignore'

Can I uninstall torch 1.5.1? How can I solve the "ignore" problem? Thanks guys!
st182888
torchaudio requires a more recent version of torch; see the dependencies. It appears the environment is picking up pytorch 1.0, and the functionality required by torchaudio is missing there.
st182889
I am using an LSTM model with 3 hidden layers to produce speech embeddings. The inputs to the model are MFCC features taken on a window of 0.025 seconds with a hop length of 0.01 seconds. I have two inputs for inference and their sizes are as follows:

torch.Size([54, 24, 40])
torch.Size([439, 24, 40])

The first 18 frames in both inputs are the same. I load the model as follows:

embedder_net = SpeechEmbedder()
embedder_net.load_state_dict(torch.load(hp.model.model_path))
embedder_net.eval()

The state_dict looks like the following:

LSTM_stack.weight_hh_l0 torch.Size([3072, 768])
LSTM_stack.bias_ih_l0 torch.Size([3072])
LSTM_stack.bias_hh_l0 torch.Size([3072])
LSTM_stack.weight_ih_l1 torch.Size([3072, 768])
LSTM_stack.weight_hh_l1 torch.Size([3072, 768])
LSTM_stack.bias_ih_l1 torch.Size([3072])
LSTM_stack.bias_hh_l1 torch.Size([3072])
LSTM_stack.weight_ih_l2 torch.Size([3072, 768])
LSTM_stack.weight_hh_l2 torch.Size([3072, 768])
LSTM_stack.bias_ih_l2 torch.Size([3072])
LSTM_stack.bias_hh_l2 torch.Size([3072])
projection.weight torch.Size([256, 768])
projection.bias torch.Size([256])

When I use the model for inference on the two inputs, I get results of the following sizes:

torch.Size([54, 256])
torch.Size([439, 256])

Since the first 18 frames are the same in both inputs, I expect the first 18 embeddings in both results to be the same. But this is not the case. In fact, the results in both cases are very different from the result I get when I just take the 18 frames as the input. Any idea why that would happen? Help appreciated
st182890
Lots of Python libraries for reading mp3 files require spawning new processes that execute CLI utilities. For example, AudioSegment runs ffmpeg underneath. This is suboptimal for my particular use case because I convert files and write to a database, which works better with in-memory files. From reading the torchaudio source code I could not get a straightforward answer, since it uses different backends and C++ wrapper code. Could you please confirm whether torchaudio.load does or does not spawn a new process for loading mp3?
st182891
Is there a way to do online audio streaming with torchaudio? I would like to get chunks of audio recorded from my microphone and calculate the STFT on the fly on the received chunks. Librosa has similar functionality, but I was wondering if anything like this is also possible with torchaudio?
st182892
It appears that the APIs do not currently support loading audio bytes from memory. Is there any way of using torchaudio for audio other than audio stored in files? My current use case is to use a DB to store audio as blobs.
st182893
I’m using a dataset of audio feature vectors (each sample is represented with 40 features). At the moment I create a matrix in the __init__ method of my custom dataset and push it to the GPU:

self.frame_array_device = torch.from_numpy(self.frame_array)
self.frame_array_device.to(device)

The dataset __getitem__ method is defined as follows:

def __getitem__(self, idx):
    return self.frame_array_device[idx, :]

Samples are drawn by the dataloader:

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False)

In the training loop I double check that the features are on the GPU:

batch_features = batch_features.to(device)

However, training is embarrassingly slow and CPU usage is at 100%… I think that there is something wrong, could you give me some hint? Thanks a lot
st182894
I am newer to ML, so take what I say with a grain of salt. Loading the data is what I think is the slowest part. I notice that between epochs my CPU is always at 100-200% even though my device prints as Device: Tesla P100-PCIE-16GB. Then, during a short portion of every epoch, my GPU also goes to 100%. Subscribed to see what the experts have to say.
st182895
This line of code:

self.frame_array_device.to(device)

won't push the data to the device if you don't reassign the result via:

self.frame_array_device = self.frame_array_device.to(device)

That being said, you could try to profile the data loading using the data_time object from the ImageNet example, or alternatively you could create a random CUDA tensor and just execute the training without the data loading at all to check the GPU utilization.
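A minimal sketch of the second suggestion: time the loader on its own and compare against forward passes on a random CUDA tensor of the same shape. It assumes model, device, batch_size and train_loader are the objects from the post (the model itself isn't shown there), and the timing scheme is just one possible way to do it:

import time
import torch

# 1) how long does fetching batches take on its own?
start = time.time()
for batch_features in train_loader:
    pass
print('pure data loading: %.3f s' % (time.time() - start))

# 2) how fast is the model when data loading is taken out of the equation?
fake_batch = torch.randn(batch_size, 40, device=device)
start = time.time()
for _ in range(1000):
    out = model(fake_batch)
torch.cuda.synchronize()
print('1000 forward passes on random data: %.3f s' % (time.time() - start))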
st182896
Hello all, I am trying to make a CNN model for encoder parameter suggestion for audio data. Input: an audio file (length: 480000 samples). Output: parameters (length: 469, e.g. [0001111222001233312000]). I am getting very low validation and test accuracy.

class AudioClassifier(nn.Module):
    def __init__(self):
        super(AudioClassifier, self).__init__()
        self.conv11 = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=Kernel_size_conv, stride=1,
                                bias=True, padding=int((Kernel_size_conv - 1) / 2))
        self.conv12 = nn.Conv1d(in_channels=16, out_channels=16, kernel_size=Kernel_size_conv, stride=1,
                                bias=True, padding=int((Kernel_size_conv - 1) / 2))
        self.conv13 = nn.Conv1d(in_channels=16, out_channels=4, kernel_size=Kernel_size_conv, stride=1024, bias=True)

    def forward(self, X):
        out1 = torch.tanh(self.conv11(X))
        out1 = torch.tanh(self.conv12(out1))
        out1 = (self.conv13(out1))
        return out1

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-5)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(E):
    for batch in range(B):
        pred_1 = model(training_data)
        loss = loss_fn(pred_1, Target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Can anybody suggest any other approach to rectify it?
st182897
What about your training loss and training accuracy? If they are good, then your model is overfitting (which would be surprising given that it is quite simple); if not, then your model is probably too simple for the task.
st182898
Training loss is around 0.2 and training accuracy is on average 92%. Currently, every epoch the data is shuffled and passed to the network. I cannot find any mistake which I might have made in the code.
st182899
Are the classes balanced? If you have 1000 classes but 92% of your examples are of one class, then you can get an accuracy of 92% just by always predicting the same class. What kind of errors does your model make? Have a look at the predictions (training and validation) to check that they make sense. If class imbalance is not an issue, then you might try different techniques against overfitting, incl. (but not limited to): adding dropout, adding a regularisation term, decreasing the capacity of your network, etc.
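A quick, hedged way to check that balance, assuming integer class labels (as CrossEntropyLoss expects); the dataset iteration and variable names are placeholders:

import torch

# collect all target labels across the training set and count them per class
all_targets = torch.cat([target.flatten() for _, target in train_dataset])
counts = torch.bincount(all_targets.long())
print(counts)                          # samples per class
print(counts.float() / counts.sum())   # class proportions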