st182600 | I’d argue that a C++ implementation would be better, just based on the overall performance of these algorithms. Thanks for sharing though everyone. I’ll take a crack at it shortly! |
st182601 | Sounds good! Looking forward to your implementation🙂 Also, I realized you can add your C++ implementation to torchaudio, as we already have several C++ functions, such as audio/lfilter.cpp at main · pytorch/audio · GitHub. It will attract more researchers, especially in the audio and speech fields. |
st182602 | Seems like there’s already a CQT proposal at torchaudio.
@nateanl Yeah, I think implementing CQT in Python is efficient enough, because most of the operations can be done as matrix operations. |
st182603 | Hello everyone,
I am using a multi-slice TCN model in my program. Each TCN generates an output, which is then multiplied by a weight, and finally all the TCN results are combined, i.e. output = TCN_output * weight. My question is: how can I randomly initialize the weight for each TCN? I am attaching a screenshot and also my hand-drawn picture, because I need something like that picture. Thank you very much.
[Screenshot and hand-drawn diagram attached] |
st182604 | Hi @Kuldeep_Rana! You can use the torch.nn.init.uniform_ method to initialize your parameter. See this link.
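A minimal sketch of what that could look like, assuming the per-TCN weights are registered as an nn.Parameter (the wrapper module name and shapes here are placeholders, not from the question):
import torch
import torch.nn as nn

class WeightedTCNCombiner(nn.Module):  # hypothetical wrapper
    def __init__(self, n_tcn):
        super().__init__()
        # one scalar weight per TCN branch, registered as a learnable parameter
        self.weights = nn.Parameter(torch.empty(n_tcn))
        # randomly initialize in-place from U(0, 1)
        nn.init.uniform_(self.weights, a=0.0, b=1.0)

    def forward(self, tcn_outputs):
        # tcn_outputs: list of tensors, one per TCN branch
        return sum(w * out for w, out in zip(self.weights, tcn_outputs)) |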
st182605 | Hello,
I’ve been stuck on this problem for about a week.
Currently, I’m working with the official HiFi-GAN code, using my own model.
The main problem is that GPU 0’s memory usage suddenly increases and goes out of memory when the validation process runs.
I tried setting the batch size to 8 and 16, but both runs ended the same way, out of memory…
I would appreciate anyone who can free me from this problem… Thank you.
# Validation
if steps % a.validation_interval == 0:  # and steps != 0:
    generator.eval()
    torch.cuda.empty_cache()
    val_err_tot = 0
    with torch.no_grad():
        for j, batch in enumerate(validation_loader):
            x, y, _, y_mel = batch
            y_g_hat = generator(x.to(device))
            y_mel = torch.autograd.Variable(y_mel.to(device, non_blocking=True))
            y_g_hat_mel = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels, h.sampling_rate,
                                          h.hop_size, h.win_size,
                                          h.fmin, h.fmax_for_loss)
            val_err_tot += F.l1_loss(y_mel, y_g_hat_mel).item()

            if j <= 4:
                if steps == 0:
                    sw.add_audio('gt/y_{}'.format(j), y[0], steps, h.sampling_rate)
                    sw.add_figure('gt/y_spec_{}'.format(j), plot_spectrogram(x[0]), steps)

                sw.add_audio('generated/y_hat_{}'.format(j), y_g_hat[0], steps, h.sampling_rate)
                y_hat_spec = mel_spectrogram(y_g_hat.squeeze(1), h.n_fft, h.num_mels,
                                             h.sampling_rate, h.hop_size, h.win_size,
                                             h.fmin, h.fmax)
                sw.add_figure('generated/y_hat_spec_{}'.format(j),
                              plot_spectrogram(y_hat_spec.squeeze(0).cpu().numpy()), steps)

        val_err = val_err_tot / (j + 1)
        sw.add_scalar("validation/mel_spec_error", val_err, steps)

    generator.train()

steps += 1

scheduler_g.step()
scheduler_d.step() |
st182606 | path = glob.glob('............./*.wav')
fig, ax = plt.subplots(nrows=4, ncols=3, sharex=True)
for i in range(4):
    y, sr = librosa.load(path[i], sr=16000)
    plt.axis('off')
    librosa.display.waveplot(y, sr, ax=ax[i, 0])  # put wave in row i, column 0
    mfcc = librosa.feature.mfcc(y)
    plt.axis('off')
    librosa.display.specshow(mfcc, x_axis='time', ax=ax[i, 1])  # mfcc in row i, column 1
    S = librosa.feature.melspectrogram(y, sr)
    plt.axis('off')
    librosa.display.specshow(librosa.power_to_db(S), x_axis='time', y_axis='log', ax=ax[i, 2])  # spectrogram in row i, column 2
I tried different positions for plt.axis('off'), but none of them work; it only takes effect on the last plot. Any suggestions, @ptrblck sir? Please help. I know this is outside the forum’s scope, but I would appreciate any guidance.
Regards |
st182607 | Just to close the conversation and help others: I got the answer. After defining the subplots, axis-off is applied like this:
fig, ax = plt.subplots(nrows=4, ncols=3, sharex=True)
[axi.set_axis_off() for axi in ax.ravel()]
and the problem is solved. |
st182608 | How do I allow my neural network to train on an audio stream? In particular, what I want to achieve is to take in, for example, an audio that is 4 seconds long and pass a sliding analysis window wherein I classify the audio in every window and return the class with the highest average probability in all the windows. I’m not sure how to do this in PyTorch. Does this have something to do with the data loaders? Or is it more on the design of the network? |
st182609 | A classic solution in this case would be to simply pad the audio to the longest length possible. |
st182610 | As I mentioned in another answer, @egy’s answer is correct: pad to the longest possible length. You can take the example of the DataLoaders from Nvidia’s Tacotron2 source code here. |
st182611 | I understand that when dealing with audio with unequal length, one could define a collate function that would pad all audio to the same length, i.e. the length of the longest audio. However, if I were to do some transformations in my custom dataset’s __getitem__, like taking the log-Mel spectrogram, how do I pad audio of unequal length? My guess is to still use a collate function and pad along the last dimension of the batch of audio transformed to a log-Mel spectrogram, but I want to know what the best practices are with regards to this matter. Thanks! |
st182612 | Solved by shivammehta007 in post #2 |
st182613 | What you mentioned is the best practice as far as I know; referring to Nvidia’s Tacotron2 source code (Tacotron2/data_utils.py), the same is done there.
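As a rough sketch of that practice (assuming log-Mel spectrograms of shape [n_mels, time] with variable time; pad_collate is a hypothetical name):
import torch
import torch.nn.functional as F

def pad_collate(batch):
    # batch: list of (melspec, label) pairs, melspec of shape [n_mels, time]
    max_len = max(spec.shape[-1] for spec, _ in batch)
    specs, labels = [], []
    for spec, label in batch:
        # right-pad the time dimension with zeros up to the longest sample
        specs.append(F.pad(spec, (0, max_len - spec.shape[-1])))
        labels.append(label)
    return torch.stack(specs), torch.tensor(labels)

# loader = DataLoader(dataset, batch_size=32, collate_fn=pad_collate) |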
st182614 | Dear Sir/Madam,
I am a beginner with PyTorch and have run into a problem. I want to update the weights for the first 15 epochs and then keep them the same until the end of training. How can I do this? Thank you very much. |
st182615 | Show your code and tell us what you mean.
EDIT: Or simply add an if statement, and after your 15th epoch don’t call
optimizer.zero_grad(); loss.backward(); optimizer.step()
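A minimal sketch of that if statement (the 15-epoch cutoff is taken from the question; loader, model, criterion and optimizer are assumed to exist):
for epoch in range(num_epochs):
    for x, y in loader:
        output = model(x)
        loss = criterion(output, y)
        if epoch < 15:  # only update the weights during the first 15 epochs
            optimizer.zero_grad()
            loss.backward()
            optimizer.step() |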
st182616 | Bug
I’m getting this error:
AttributeError: module 'torchaudio._internal.module_utils' has no attribute 'requires_sox'
while importing torchaudio
To Reproduce
I’m using a Kaggle notebook and just executed these lines:
!pip install --upgrade torch
!pip install --upgrade torchaudio
import torchaudio
Environment
(collect_env.py output)
PyTorch version: 1.9.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.10
Python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] (64-bit runtime)
Python platform: Linux-5.4.120+-x86_64-with-debian-buster-sid
Is CUDA available: True
CUDA runtime version: 11.0.221
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 450.119.04
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.7.1
[pip3] numpy==1.19.5
[pip3] pytorch-ignite==0.4.5
[pip3] pytorch-lightning==1.3.8
[pip3] torch==1.9.0
[pip3] torchaudio==0.9.0
[pip3] torchmetrics==0.4.1
[pip3] torchtext==0.8.0a0+cd6902d
[pip3] torchvision==0.8.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.0.221 h6bb024c_0 nvidia
[conda] libblas 3.9.0 9_mkl conda-forge
[conda] libcblas 3.9.0 9_mkl conda-forge
[conda] liblapack 3.9.0 9_mkl conda-forge
[conda] liblapacke 3.9.0 9_mkl conda-forge
[conda] mkl 2021.2.0 h06a4308_296
[conda] msgpack-numpy 0.4.7.1 pypi_0 pypi
[conda] numpy 1.19.5 py37haa41c4c_1 conda-forge
[conda] pytorch-ignite 0.4.5 pypi_0 pypi
[conda] pytorch-lightning 1.3.8 pypi_0 pypi
[conda] torch 1.9.0 pypi_0 pypi
[conda] torchaudio 0.9.0 pypi_0 pypi
[conda] torchmetrics 0.4.1 pypi_0 pypi
[conda] torchtext 0.8.0 py37 pytorch
[conda] torchvision 0.8.1 py37_cu110 pytorch |
st182617 | I’ve been using this script:
spgram = torchaudio.transforms.Spectrogram(512, hop_length=32)
audio = spgram(audio)
to get the spectrogram of some stereo music audio. I expected the resulting spectrogram to have the shape [2, 257, audio.shape[1]/32]. However, that’s not the case. For example, an audio clip of size [2, 199488] (with sr=24576) yields a spectrogram of size [2, 257, 6241] (note that 199488/32 = 6234). Why is that, and how can I convert from a frame location to a sample location?
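For reference, a rough sanity check, assuming torchaudio’s default centered STFT (which pads the signal before framing, so the frame count comes out a bit larger than length/hop; the exact count depends on the torchaudio version):
import torch
import torchaudio

n_fft, hop = 512, 32
audio = torch.randn(2, 199488)
spec = torchaudio.transforms.Spectrogram(n_fft, hop_length=hop)(audio)
# With a centered STFT, frame i is centered around sample i * hop_length,
# so sample_location ≈ frame_index * hop_length.
print(spec.shape) |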
st182618 | Hello, I am trying to train a speech enhancement CNN based on Wave-U-Net, but I get this error:
invalid argument 0: Sizes of tensors must match except in dimension 1. Got 29 and 30 in dimension 2 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:71
class DownSamplingLayer(nn.Module):
    def __init__(self, channel_in, channel_out, dilation=1, kernel_size=15, stride=1, padding=7):
        super(DownSamplingLayer, self).__init__()
        self.main = nn.Sequential(
            nn.Conv1d(channel_in, channel_out, kernel_size=kernel_size,
                      stride=stride, padding=padding, dilation=dilation),
            nn.BatchNorm1d(channel_out),
            nn.LeakyReLU(negative_slope=0.1)
        )

    def forward(self, ipt):
        return self.main(ipt)

class UpSamplingLayer(nn.Module):
    def __init__(self, channel_in, channel_out, kernel_size=5, stride=1, padding=2):
        super(UpSamplingLayer, self).__init__()
        self.main = nn.Sequential(
            nn.Conv1d(channel_in, channel_out, kernel_size=kernel_size,
                      stride=stride, padding=padding),
            nn.BatchNorm1d(channel_out),
            nn.LeakyReLU(negative_slope=0.1, inplace=True),
        )

    def forward(self, ipt):
        return self.main(ipt)

class SE_Model(nn.Module):
    def __init__(self, n_layers=12, channels_interval=24):
        super(SE_Model, self).__init__()
        self.n_layers = n_layers
        self.channels_interval = channels_interval
        encoder_in_channels_list = [1] + [i * self.channels_interval for i in range(1, self.n_layers)]
        encoder_out_channels_list = [i * self.channels_interval for i in range(1, self.n_layers + 1)]

        self.encoder = nn.ModuleList()
        for i in range(self.n_layers):
            self.encoder.append(
                DownSamplingLayer(
                    channel_in=encoder_in_channels_list[i],
                    channel_out=encoder_out_channels_list[i]
                )
            )

        self.middle = nn.Sequential(
            nn.Conv1d(self.n_layers * self.channels_interval, self.n_layers * self.channels_interval, 15, stride=1,
                      padding=7),
            nn.BatchNorm1d(self.n_layers * self.channels_interval),
            nn.LeakyReLU(negative_slope=0.1, inplace=True)
        )

        decoder_in_channels_list = [(2 * i + 1) * self.channels_interval for i in range(1, self.n_layers)] + [
            2 * self.n_layers * self.channels_interval]
        decoder_in_channels_list = decoder_in_channels_list[::-1]
        decoder_out_channels_list = encoder_out_channels_list[::-1]
        self.decoder = nn.ModuleList()
        for i in range(self.n_layers):
            self.decoder.append(
                UpSamplingLayer(
                    channel_in=decoder_in_channels_list[i],
                    channel_out=decoder_out_channels_list[i]
                )
            )

        self.out = nn.Sequential(
            nn.Conv1d(1 + self.channels_interval, 1, kernel_size=1, stride=1),
            nn.Tanh()
        )

    def forward(self, input):
        tmp = []
        o = input

        # Down sampling (encoder)
        for i in range(self.n_layers):
            o = self.encoder[i](o)
            print(o.shape)
            tmp.append(o)
            # [batch_size, T // 2, channels]
            o = o[:, :, ::2]
            print(o.shape)

        o = self.middle(o)
        print(o.shape)

        # Up sampling (decoder)
        for i in range(self.n_layers):
            # [batch_size, T * 2, channels]
            print(o.shape)
            o = F.interpolate(o, scale_factor=2, mode="linear", align_corners=True)
            print(o.shape)
            # Skip connection
            o = torch.cat([o, tmp[self.n_layers - i - 1]], dim=1)
            o = self.decoder[i](o)
            print(o.shape)

        o = torch.cat([o, input], dim=1)
        o = self.out(o)
        return o |
st182619 | These shape errors are often created by odd input shapes (or generally shapes which weren’t used while developing the model). You could add custom shape checks and either slice the larger activation or pad the smaller one before concatenating them. Alternatively (and if possible) you could also use the expected shapes (often these kinds of models work for the default shapes, such as [batch_size, channels, 224, 224]).
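A minimal sketch of such a check before the skip connection (match_and_cat is a hypothetical helper; padding and slicing are the two options mentioned above):
import torch
import torch.nn.functional as F

def match_and_cat(up, skip):
    # up, skip: [batch, channels, time]; time may differ by a few samples
    diff = skip.shape[-1] - up.shape[-1]
    if diff > 0:
        # pad the smaller (upsampled) activation on the right
        up = F.pad(up, (0, diff))
    elif diff < 0:
        # or slice the larger one down to the skip length
        up = up[..., :skip.shape[-1]]
    return torch.cat([up, skip], dim=1) |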
st182620 | Hi there,
I am using the function below for feature extraction, with MFCCs and mel-spectrograms in my model. Both outputs should be of similar dimension. There is no problem with MFCC alone, but I am not getting why the mel-spectrogram features have 5 dimensions. Can anyone figure it out, please?
Regards
class FeatureExtractor(object):
    def __init__(self, rate):
        self.rate = rate

    def get_features(self, features_to_use, X):
        X_features = None
        accepted_features_to_use = ('mfcc', 'melspectrogram')
        if features_to_use not in accepted_features_to_use:
            raise NotImplementedError("{} not in {}!".format(features_to_use, accepted_features_to_use))
        if features_to_use in ('mfcc'):
            X_features = self.get_mfcc(X, 26)
        if features_to_use in ('melspectrogram'):
            X_features = self.get_melspectrogram(X)
        return X_features

    def get_mfcc(self, X, n_mfcc=13):
        def _get_mfcc(x):
            mfcc_data = librosa.feature.mfcc(x, sr=self.rate, n_mfcc=n_mfcc)
            return mfcc_data

        X_features = np.apply_along_axis(_get_mfcc, 1, X)
        return X_features

    def get_melspectrogram(self, X):
        def _get_melspectrogram(x):
            mel = librosa.feature.melspectrogram(y=x, sr=self.rate, n_fft=800, hop_length=400)[np.newaxis, :]
            delta = librosa.feature.delta(mel)
            delta_delta = librosa.feature.delta(delta)
            out = np.concatenate((mel, delta, delta_delta))
            return mel

        X_features = np.apply_along_axis(_get_melspectrogram, 1, X)
        return X_features

Here the shape of X is (5984, 32000) |
st182621 | I’m unsure how this shape is created, but would guess that the transformations are creating it.
Could you add print statements before and after each transformation and check the shapes? |
st182622 | In the MFCC case:
mfcc_data.shape = (26, 63)
print(X_features.shape) = (8, 26, 63)
In the melspectrogram case:
print(mel.shape) = (1, 128, 81)
print(X_features.shape) = (8, 1, 128, 81)
The shapes of both features should be the same. |
st182623 | Sir, actually that [np.newaxis, :] is creating the extra axis;
removing it gives both the same shape. I am really sorry for this,
but I am confused: what is the purpose of
[np.newaxis, :]? |
st182624 | The purpose of array[np.newaxis] is to add a new dimension in the specified dim, so you are right in removing it in case it’s not needed.
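A quick illustration of that behavior:
import numpy as np

x = np.zeros((128, 81))
print(x[np.newaxis, :].shape)  # (1, 128, 81) -- new leading dimension
print(x[:, np.newaxis].shape)  # (128, 1, 81) -- new dimension at axis 1 |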
st182625 | github.com
K-BS/breathsound-use-stethoscope/blob/main/torch/torch_pilot_model_1.ipynb |
st182626 | Applying attention from this paper:
danielpovey.com: 2018_interspeech_xvector_attention.pdf
Suppose my hidden audio representation shape (after a few CNN operations/layers) is
H = torch.Size([128, 32, 64])  # [BatchSize x FeatureDim x Length]
and I want to apply self-attention weights to the audio hidden frames as
A = softmax(ReLU(AttentionWeight1 * (AttentionWeight2 * H)))
In order to learn these two self-attention weight matrices, do I need to register the two weights as Parameters in the __init__ function, like below?
class Model(nn.Module):
    def __init__(self, batch_size):
        super(Model, self).__init__()
        self.attention1 = nn.Parameter(torch.Tensor(self.batch_size, 16, 32))
        self.attention2 = nn.Parameter(torch.Tensor(self.batch_size, 1, 16))
and in the forward, do I need to do it like this?
def forward(self, input):
    ....
    H = CNN(input)  # [B x Features x Length]
    attention = nn.Softmax(nn.ReLU(torch.mm(self.attention2, torch.mm(self.attention1, H))))
    H = H * attention
    return H
Please help: how can we apply attention here? The above code is throwing this error:
RuntimeError: matrices expected, got 3D, 3D tensors at /opt/conda/conda-bld/pytorch_1591914985702/work/aten/src/TH/generic/THTensorMath.cpp:36 |
st182627 | torch.mm expects two matrices (2D tensors), while you seem to use two 3D tensors.
You could use torch.bmm or torch.matmul instead, which would work for these tensors.
However, the parameters usually do not depend on the batch size.
Are you sure you want to initialize them with the batch size in dim0?
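A quick demo of the difference (shapes follow the example above):
import torch

a = torch.randn(128, 1, 16)      # batched matrix
b = torch.randn(128, 16, 32)     # batched matrix
# torch.mm(a, b) would raise an error: it only accepts 2D tensors
out = torch.bmm(a, b)            # batch matrix-matrix product
print(out.shape)                 # torch.Size([128, 1, 32])
print(torch.matmul(a, b).shape)  # same result; matmul broadcasts batch dims |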
st182628 | @ptrblck
How do I make these weights learnable? Am I doing it right here?
class Model(nn.Module):
    def __init__(self, batch_size):
        super(Model, self).__init__()
        self.attention1 = nn.Parameter(torch.Tensor(self.batch_size, 16, 32))
        self.attention2 = nn.Parameter(torch.Tensor(self.batch_size, 1, 16))
Attention mechanism in the forward pass. The input here is the output after a few CNN operations, with shape
[BatchSize x FeatureDim x Length] = [128 x 32 x 64]:
""" Get attention weights """
attn = input
attention = attn.permute(0, 2, 1).matmul(self.attention1)
attention = attention.matmul(self.attention2)
attention = self.relu(attention)
attention = attention.view(attention.size(0), -1)
attention = F.softmax(attention, 1)
""" Multiply attention weights with audio frames """
input = input * attention.unsqueeze(1)  # to make it compatible with BatchSize x FeatureDim x FixedLength
@ptrblck Is this okay now?
I also have another question: when I use nn.Softmax in place of F.softmax(attention, 1), why doesn’t it work? |
st182629 | The code looks alright code-wise and you should be able to see valid gradients in model.attention1.grad and model.attention2.grad after a backward() call.
nn.Softmax should work like F.softmax, but you might have forgotten to create the module before calling it via:
nn.Softmax(dim=1)(input)
What kind of error are you seeing with nn.Softmax? |
st182630 | Thank you for your feedback.
I think I was using the syntax wrongly as below
nn.Softmax(input, 1)
but it is actually like this
nn.Softmax(dim=1)(input) |
st182631 | @shakeel608 Have you completed your task?
I am using a transformer network for my audio, of course only the encoder part, for multi-head attention using key, query and value matrices.
Could you please explain the purpose of the final H? Is it only the reweighted H, for better classification?
Regards |
st182632 | If I get the error below, what should my input tensor look like for the inverse mel scale function?
inv_mel = torchaudio.transforms.InverseMelScale(n_stft=1025)
inv_mel(s)
TypeError                                 Traceback (most recent call last)
in ()
      2 plt.imshow(s)
      3 inv_mel = torchaudio.transforms.InverseMelScale(n_stft=1025)
----> 4 inv_mel(s)
1 frames
/usr/local/lib/python3.7/dist-packages/torchaudio/transforms.py in forward(self, melspec)
    388         """
    389         # pack batch
--> 390         shape = melspec.size()
    391         melspec = melspec.view(-1, shape[-2], shape[-1])
    392
TypeError: 'int' object is not callable |
st182633 | Based on the error message it seems that you are using a np.array instead of a tensor, as the former returns an int from the .size attribute:
x = np.random.randn(10)
x.size()
> TypeError: 'int' object is not callable
x = torch.randn(10)
x.size()
> torch.Size([10])
so you might want to use tensors. |
st182634 | Hey Patrick, thanks for getting back. When I use a tensor, I get the following error. I am a bit confused about what the input should look like, given a mel-spectrogram of shape torch.Size([288, 432, 4]).
AssertionError                            Traceback (most recent call last)
in ()
      7
      8 inv_mel = torchaudio.transforms.InverseMelScale(n_stft=1023)
----> 9 inv_mel(sample)
1 frames
/usr/local/lib/python3.7/dist-packages/torchaudio/transforms.py in forward(self, melspec)
    394         freq, _ = self.fb.size()  # (freq, n_mels)
    395         melspec = melspec.transpose(-1, -2)
--> 396         assert self.n_mels == n_mels
    397
    398         specgram = torch.rand(melspec.size()[0], time, freq, requires_grad=True,
AssertionError: |
st182635 | I guess the input shape might be wrong.
From the docs:
n_mels (int, optional) – Number of mel filterbanks. (Default: 128)
…
melspec (Tensor) – A Mel frequency spectrogram of dimension (…, n_mels, time)
Based on this it seems the n_mels argument is set to 128 by default, while your input tensor has a value of 432 in this dimension. |
st182636 | Hey Patrick, so my input tensor can be of shape torch.Size([288, 432, 4]); what do you think my input tensor shape should be in this case?
Why are there three dots here: “A Mel frequency spectrogram of dimension (…, n_mels, time)”?
Does it imply something?
Thanks |
st182637 | The three dots should indicate additional dimensions while the last two dimensions would represent n_mels and time.
Could you set n_mels to 432 in InverseMelScale as it should work:
inv_mel = torchaudio.transforms.InverseMelScale(n_stft=1023, n_mels=432)
x = torch.randn(288, 432, 4)
out = inv_mel(x)
print(out.shape) |
st182638 | Hey Patrick,
I actually tried the step you mentioned above by setting n_mels=432, and then my cell (Jupyter notebook) ran for a very long time with no output. |
st182639 | I think a long runtime is expected, since InverseMelScale uses SGD to solve the mapping.
From the docs:
Solve for a normal STFT from a mel frequency STFT, using a conversion matrix. This uses triangular filter banks.
It minimizes the euclidian norm between the input mel-spectrogram and the product between the estimated spectrogram and the filter banks using SGD.
Corresponding lines of code.
You could change the tolerance to stop the optimization earlier, if needed.
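For example, I believe InverseMelScale in this torchaudio version exposes max_iter and tolerance arguments for this (check your version’s docs to confirm the exact names; this is a sketch, not verified against 0.9 specifically):
import torchaudio

# stop the SGD-based inversion earlier via fewer iterations / looser tolerances
inv_mel = torchaudio.transforms.InverseMelScale(
    n_stft=1023,
    n_mels=432,
    max_iter=1000,          # default is much higher
    tolerance_loss=1e-3,    # stop once the reconstruction loss is below this
    tolerance_change=1e-6,  # or once the loss stops changing
) |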
st182640 | Hi,
I was training some audio-visual source separation models.
I was computing the STFT on the GPU to speed up preprocessing. The following code is a wrapper around an nn.Module that applies some preprocessing.
The relevant code is:
def forward(self, inputs: dict):
    """
    Inputs contains the following keys:
       mixture: the mixture of audios as spectrogram
       sk: skeletons of shape N,C,T,V,M which is batch, channels, temporal, joints, n_people
       video: video
    """

    def numpy(x):
        return x[0].detach().cpu().numpy()

    self.n_sources = 2
    with torch.no_grad():
        llcp_embeddings = inputs['llcp_embedding'].transpose(1, 2)

        srcm = inputs['audio']
        srcs = inputs['audio_acmt']

        # Computing STFT
        spm = self.wav2sp(srcm)  # Spectrogram main BxFxTx2
        sps = self.wav2sp(srcs)  # Spectrogram secondary BxFxTx2

        sources = [spm, sps]
        sp_mix_raw = sum(sources) / self.n_sources

        # Downsampling to save memory
        spm = spm[:, ::2, ...]
        sps = sps[:, ::2, ...]
        sp_mix = sp_mix_raw[:, ::2, ...]  # BxFxTx2

    x = sp_mix.permute(0, 3, 1, 2)
    pred = self.core_forward(audio_input=x.detach().requires_grad_(),
                             visual_input=llcp_embeddings)
    return pred

def core_forward(self, *, audio_input, visual_input):
    outx = self.llcp(audio_input, visual_input)
    output = {'mask': outx, 'ind_end_feats': None, 'visual_features': None}
    return output
Where self.wav2sp uses the Spectrogram operator from torchaudio, and self.core_forward just calls the forward code.
I realised that if I call the code cloning the spectrogram, the speed is boosted:
pred = self.core_forward(audio_input=x.clone().requires_grad_(),
visual_input=llcp_embeddings)
Profiling results without cloning
Self CPU time total: 8.264s
CUDA time total: 8.264s
Process finished with exit code 0
Profiling results with cloning:
Self CPU time total: 107.938ms
CUDA time total: 107.809ms
Process finished with exit code 0
Does anyone know why?
I found this occurs for any model (I tried several models and it always happens).
Profiled like this:
if __name__ == '__main__':
    USE_W = True
    N = 2
    DEVICE = torch.device('cuda:0')
    # DEVICE = torch.device('cpu')
    if USE_W:
        model = LlcpNet(audio_length=65535, audio_samplerate=16384,
                        n_fft=1022, hop_length=256, n_mel=128, sp_freq_shape=1022 // 2 + 1,
                        video_enabled=False, llcp_enabled=True,
                        skeleton_enabled=False, device=DEVICE).to(DEVICE)
        inputs = {'llcp_embedding': torch.rand(N, 100, 512).to(DEVICE),
                  'audio': torch.rand(N, 65535).to(DEVICE),
                  'audio_acmt': torch.rand(N, 65535).to(DEVICE)}
    else:
        model = Llcp().to(DEVICE)
        inputs = {'input_video': torch.rand(N, 512, 100).to(DEVICE),
                  'input_audio': torch.rand(N, 2, 256, 256).to(DEVICE)}
    # input_audio will be (N,2,256,256)
    # input_video will be of size (N,512,100)

    with profiler.profile(with_stack=True, profile_memory=True, use_cuda=True) as prof:
        output = model(inputs) if USE_W else model(**inputs)

    print(prof.key_averages().table(sort_by='self_cpu_time_total', row_limit=-1))
    print(prof.key_averages().table(sort_by='cuda_time_total', row_limit=-1))
A standalone script to check this can be found at:
https://gist.github.com/JuanFMontesinos/90db013f3c736a0a702faae28e73f4ff
This occurs at least on an RTX 3090, a Quadro P6000, a Titan V and some other GPUs. |
st182641 | I am training a PyTorch model. After some time, even with shuffling on, the model outputs contain, besides a few finite tensor rows, only NaN values:
tensor([[[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan],
...,
[ 1.4641, 0.0360, -1.1528, ..., -2.3592, -2.6310, 6.3893],
[ nan, nan, nan, ..., nan, nan, nan],
[ nan, nan, nan, ..., nan, nan, nan]]],
device='cuda:0', grad_fn=<AddBackward0>)
The detect_anomaly function returns:
RuntimeError: Function 'LogSoftmaxBackward' returned nan values in its 0th output.
in reference to the line output = F.log_softmax(output, dim=2)
A normal tensor should look like this:
tensor([[[-3.3904, -3.4340, -3.3703, ..., -3.3613, -3.5098, -3.4344]],
[[-3.3760, -3.2948, -3.2673, ..., -3.4039, -3.3827, -3.3919]],
[[-3.3857, -3.3358, -3.3901, ..., -3.4686, -3.4749, -3.3826]],
...,
[[-3.3568, -3.3502, -3.4416, ..., -3.4463, -3.4921, -3.3769]],
[[-3.4379, -3.3508, -3.3610, ..., -3.3707, -3.4030, -3.4244]],
[[-3.3919, -3.4513, -3.3565, ..., -3.2714, -3.3984, -3.3643]]],
device='cuda:0', grad_fn=<TransposeBackward0>)
Please notice the double brackets, in case they are important.
Code:
spectrograms, labels, input_lengths, label_lengths = _data
spectrograms, labels = spectrograms.to(device), labels.to(device)
optimizer.zero_grad()
output = model(spectrograms)
Additionally, I tried to run it with a bigger batch size (current batch size: 1, bigger batch size: 6), and it ran without errors until 40% of the first epoch, at which point I got this error:
CUDA ran out of memory
Also, I tried to normalize the data: torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128, normalized=True)
And reducing the learning rate from 5e-4 to 5e-5 did not help either.
Additional information: My dataset contains nearly 300000 .wav files and the error came at 3-10% runtime in the first epoch.
I appreciate any hints and I will gladly submit further information.
Also asked on Stack Overflow: python - Why does my Pytorch tensor size change and contain NaNs after some batches? - Stack Overflow |
st182642 | Hi all, I have pre-processed my dataset to obtain three sets: train, test and validation. The shapes and types of each of them are as follows.
Shape of X_train: (3441, 7, 1, 128, 128)
type(X_train): numpy.ndarray
Shape of X_val: (143, 7, 1, 128, 128)
type(X_val): numpy.ndarray
Shape of X_test: (150, 7, 1, 128, 128)
type(X_test): numpy.ndarray
The class Dataset is created as given:
import torch
from torch.utils.data import Dataset

class Dataset(Dataset):
    def __init__(this, X=None, y=None, mode="train"):
        this.mode = mode
        this.X = X
        if mode == "train":
            this.y = y

    def __len__(this):
        return this.X.shape[0]

    def __getitem__(this, idx):
        if this.mode == "train":
            return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
        else:
            return torch.FloatTensor(this.X[idx])
Then I used the PyTorch DataLoader (imported as DL) as:
train_set = Dataset(X=X_train, y=Y_train,mode="train")
tr_loader = DL(train_set, batch_size=32, shuffle=True)
test_set = Dataset(X=X_test, y=Y_test,mode="train")
ts_loader = DL(test_set, batch_size=32, shuffle=False)
Everything is working fine up till now (I think so :)). Then for training I used the code below.
verbose = True
Losses = []
Accuracies = []
epochs = 10
DLS = {"train": tr_loader, "valid": ts_loader}
start_time = time.time()

for e in range(epochs):
    epochLoss = {"train": 0, "valid": 0}
    epochAccs = {"train": 0, "valid": 0}

    for phase in ["train", "valid"]:
        if phase == "train":
            model.train()
        else:
            model.eval()
        lossPerPass = []
        accuracy = []

        for X, y in DLS[phase]:
            X, y = X.to(device), y.to(device).view(-1)
            optimizer.zero_grad()
            alpha = 1.0
            beta = 1.0
            with torch.set_grad_enabled(phase == "train"):
                pred_emo = model(X)
                emotion_loss = criterion(pred_emo, y)
                total_loss = alpha * emotion_loss
                if phase == "train":
                    total_loss.backward()
                    optimizer.step()
            lossPerPass.append(total_loss.item())
            accuracy.append(accuracy_score(torch.argmax(torch.exp(pred_emo.detach().cpu()), dim=1), y.cpu()))
            torch.save(model.state_dict(), "E:/Python_On_All_Dataset/IEMO/long_codes/Kosta.jo/model_checkpoint_spec/Epoch_{}.pt".format(e+1))
        epochLoss[phase] = np.mean(np.array(lossPerPass))
        epochAccs[phase] = np.mean(np.array(accuracy))

    Losses.append(epochLoss)
    Accuracies.append(epochAccs)

    if verbose:
        print("Epoch : {} | Train Loss : {:.5f} | Valid Loss : {:.5f} \
              | Train Accuracy : {:.5f} | Valid Accuracy : {:.5f}".format(e + 1, epochLoss["train"], epochLoss["valid"],
                                                                          epochAccs["train"], epochAccs["valid"]))
Then I get this error traceback:
File "", line 21, in <module>
  for X, y in DLS[phase]:
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
  data = self._next_data()
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 557, in _next_data
  data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
  return self.collate_fn(data)
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in default_collate
  return [default_collate(samples) for samples in transposed]
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in <listcomp>
  return [default_collate(samples) for samples in transposed]
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
  return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [0] at entry 0 and [3] at entry 1
Can someone comment on this? Is there any problem with the Dataset class or in the training? The title of this topic can be changed on suggestion.
Please guide.
Regards |
st182643 | I cannot reproduce the issue using:
X_train = torch.randn(3441, 7, 1, 128, 128)
y_train = torch.randint(0, 10, (3441,))
dataset = Dataset(X_train, y_train)
loader = DataLoader(dataset, batch_size=16)
for data, target in loader:
    print(data.shape)
    print(target.shape)
Could you compare my code snippet to yours and check, what the difference might be? |
st182644 | @ptrblck sir, my Y_train shape is (3441,); is this the problem? If yes, what should I do?
Also, when I run your code with my Dataset class it says
slice() cannot be applied to a 0-dim tensor. in the line (in class Dataset)
return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
I have changed the numpy array into a tensor using
X_test = torch.from_numpy(X_test)
and then I got an error in the line
return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
TypeError: expected Float (got Double)
Sir, please tell me if I need to share the model as well. I think the whole problem is in loading the dataset.
Regards |
st182645 | krishna511:
sir my Y_train shape is (3441,), is this the problem?
This shouldn’t be a problem, since my code snippet uses the same shape for y_train as well as your Dataset definition.
krishna511:
then I got an error in line:
return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
numpy uses float64 by default, so you might want to transform the tensor to float32 via:
X_test = torch.from_numpy(X_test).float()
krishna511:
Sir plz tell me if I need to share the model also. I think the whole problem is in loading the dataset .
A minimal and executable code snippet would be helpful to further debug the issue. |
st182646 | krishna511:
slice() cannot be applied to a 0-dim tensor. in line (in class Dataset)
@ptrblck sir, what about this? I have changed the data to float() and the targets to long(). Now it is showing the above error at
return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx]) |
st182647 | I would try to use the original numpy array and convert it into a torch tensor here:
class Dataset(Dataset):
    def __getitem__(this, idx):
        if this.mode == "train":
            return torch.from_numpy(this.X[idx]).float(), torch.from_numpy(this.y[idx]).long()
        else:
            return torch.from_numpy(this.X[idx]).float()
Your original traceback is interesting:
krishna511:
File “C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data_utils\fetch.py”, line 47, in fetch
return self.collate_fn(data)
File “C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data_utils\collate.py”, line 83, in default_collate
return [default_collate(samples) for samples in transposed]
File “C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data_utils\collate.py”, line 83, in
return [default_collate(samples) for samples in transposed]
It falls into the logic here (pytorch/collate.py at 94cc681fc2e5218c10493938a8ca01272c3c6fc0 · pytorch/pytorch · GitHub), which is not expected since you are returning a tuple. Can you verify your output by printing out the data, like in the following code?
class Dataset(Dataset):
    def __getitem__(this, idx):
        if this.mode == "train":
            res = torch.from_numpy(this.X[idx]).float(), torch.from_numpy(this.y[idx]).long()
            print(res)
            return res
        else:
            return torch.from_numpy(this.X[idx]).float() |
st182648 | @ptrblck @ejguan Sir,
I am trying to explain the complete code here. The processed dataset is:
Shape of X_train: (1147, 7, 1, 128, 128)
Shape of X_val: (143, 7, 1, 128, 128)
Shape of X_test: (150, 7, 1, 128, 128)
b,t,c,h,w = X_train.shape
X_train = np.reshape(X_train, newshape=(b,-1))
X_train = scaler.fit_transform(X_train)
X_train = np.reshape(X_train, newshape=(b,t,c,h,w))
b,t,c,h,w = X_test.shape
X_test = np.reshape(X_test, newshape=(b,-1))
X_test = scaler.transform(X_test)
X_test = np.reshape(X_test, newshape=(b,t,c,h,w))
b,t,c,h,w = X_val.shape
X_val = np.reshape(X_val, newshape=(b,-1))
X_val = scaler.transform(X_val)
X_val = np.reshape(X_val, newshape=(b,t,c,h,w))
The model is
import torch
import torch.nn as nn

# BATCH FIRST TimeDistributed layer
class TimeDistributed(nn.Module):
    def __init__(self, module):
        super(TimeDistributed, self).__init__()
        self.module = module

    def forward(self, x):
        if len(x.size()) <= 2:
            return self.module(x)
        # squash samples and timesteps into a single axis
        elif len(x.size()) == 3:  # (samples, timesteps, inp1)
            x_reshape = x.contiguous().view(-1, x.size(2))  # (samples * timesteps, inp1)
        elif len(x.size()) == 4:  # (samples, timesteps, inp1, inp2)
            x_reshape = x.contiguous().view(-1, x.size(2), x.size(3))  # (samples*timesteps, inp1, inp2)
        else:  # (samples, timesteps, inp1, inp2, inp3)
            x_reshape = x.contiguous().view(-1, x.size(2), x.size(3), x.size(4))  # (samples*timesteps, inp1, inp2, inp3)
        y = self.module(x_reshape)
        # we have to reshape Y
        if len(x.size()) == 3:
            y = y.contiguous().view(x.size(0), -1, y.size(1))  # (samples, timesteps, out1)
        elif len(x.size()) == 4:
            y = y.contiguous().view(x.size(0), -1, y.size(1), y.size(2))  # (samples, timesteps, out1, out2)
        else:
            y = y.contiguous().view(x.size(0), -1, y.size(1), y.size(2), y.size(3))  # (samples, timesteps, out1, out2, out3)
        return y

class HybridModel(nn.Module):
    def __init__(self, num_emotions):
        super().__init__()
        # conv block
        self.conv2Dblock = nn.Sequential(
            # 1. conv block
            TimeDistributed(nn.Conv2d(in_channels=1,
                                      out_channels=16,
                                      kernel_size=3,
                                      stride=1,
                                      padding=1
                                      )),
            TimeDistributed(nn.BatchNorm2d(16)),
            TimeDistributed(nn.ReLU()),
            TimeDistributed(nn.MaxPool2d(kernel_size=2, stride=2)),
            TimeDistributed(nn.Dropout(p=0.4)),
            # 2. conv block
            TimeDistributed(nn.Conv2d(in_channels=16,
                                      out_channels=32,
                                      kernel_size=3,
                                      stride=1,
                                      padding=1
                                      )),
            TimeDistributed(nn.BatchNorm2d(32)),
            TimeDistributed(nn.ReLU()),
            TimeDistributed(nn.MaxPool2d(kernel_size=4, stride=4)),
            TimeDistributed(nn.Dropout(p=0.4)),
            # 3. conv block
            TimeDistributed(nn.Conv2d(in_channels=32,
                                      out_channels=64,
                                      kernel_size=3,
                                      stride=1,
                                      padding=1
                                      )),
            TimeDistributed(nn.BatchNorm2d(64)),
            TimeDistributed(nn.ReLU()),
            TimeDistributed(nn.MaxPool2d(kernel_size=4, stride=4)),
            TimeDistributed(nn.Dropout(p=0.4)),
            # 4. conv block
            TimeDistributed(nn.Conv2d(in_channels=64,
                                      out_channels=128,
                                      kernel_size=3,
                                      stride=1,
                                      padding=1
                                      )),
            TimeDistributed(nn.BatchNorm2d(128)),
            TimeDistributed(nn.ReLU()),
            TimeDistributed(nn.MaxPool2d(kernel_size=4, stride=4)),
            TimeDistributed(nn.Dropout(p=0.4))
        )
        # LSTM block
        hidden_size = 64
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden_size, bidirectional=False, batch_first=True)
        self.dropout_lstm = nn.Dropout(p=0.3)
        # Linear softmax layer
        self.out_linear = nn.Linear(hidden_size, num_emotions)

    def forward(self, x):
        conv_embedding = self.conv2Dblock(x)
        conv_embedding = torch.flatten(conv_embedding, start_dim=2)  # do not flatten batch dimension and time
        lstm_embedding, (h, c) = self.lstm(conv_embedding)
        lstm_embedding = self.dropout_lstm(lstm_embedding)
        # lstm_embedding (batch, time, hidden_size)
        lstm_output = lstm_embedding[:, -1, :]
        output_logits = self.out_linear(lstm_output)
        output_softmax = nn.functional.softmax(output_logits, dim=1)
        return output_logits, output_softmax
The train/validate function definitions are:
def loss_fnc(predictions, targets):
    return nn.CrossEntropyLoss()(input=predictions, target=targets)

def make_train_step(model, loss_fnc, optimizer):
    def train_step(X, Y):
        # set model to train mode
        model.train()
        # forward pass
        output_logits, output_softmax = model(X)
        predictions = torch.argmax(output_softmax, dim=1)
        accuracy = torch.sum(Y == predictions) / float(len(Y))
        # compute loss
        loss = loss_fnc(output_logits, Y)
        # compute gradients
        loss.backward()
        # update parameters and zero gradients
        optimizer.step()
        optimizer.zero_grad()
        return loss.item(), accuracy * 100
    return train_step

def make_validate_fnc(model, loss_fnc):
    def validate(X, Y):
        with torch.no_grad():
            model.eval()
            output_logits, output_softmax = model(X)
            predictions = torch.argmax(output_softmax, dim=1)
            accuracy = torch.sum(Y == predictions) / float(len(Y))
            loss = loss_fnc(output_logits, Y)
        return loss.item(), accuracy * 100, predictions
    return validate
and the final training loop previously was:
EPOCHS = 700
DATASET_SIZE = X_train.shape[0]
BATCH_SIZE = 32
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Selected device is {}'.format(device))
model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
print('Number of trainable params: ', sum(p.numel() for p in model.parameters()))
OPTIMIZER = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-3, momentum=0.8)

train_step = make_train_step(model, loss_fnc, optimizer=OPTIMIZER)
validate = make_validate_fnc(model, loss_fnc)
losses = []
val_losses = []

for epoch in range(EPOCHS):
    # shuffle data
    ind = np.random.permutation(DATASET_SIZE)
    X_train = X_train[ind, :, :, :, :]
    Y_train = Y_train[ind]
    epoch_acc = 0
    epoch_loss = 0
    iters = int(DATASET_SIZE / BATCH_SIZE)
    for i in range(iters):
        batch_start = i * BATCH_SIZE
        batch_end = min(batch_start + BATCH_SIZE, DATASET_SIZE)
        actual_batch_size = batch_end - batch_start
        X = X_train[batch_start:batch_end, :, :, :, :]
        Y = Y_train[batch_start:batch_end]
        X_tensor = torch.tensor(X, device=device).float()
        Y_tensor = torch.tensor(Y, dtype=torch.long, device=device)
        loss, acc = train_step(X_tensor, Y_tensor)
        epoch_acc += acc * actual_batch_size / DATASET_SIZE
        epoch_loss += loss * actual_batch_size / DATASET_SIZE
        print(f"\r Epoch {epoch}: iteration {i}/{iters}", end='')
    X_val_tensor = torch.tensor(X_val, device=device).float()
    Y_val_tensor = torch.tensor(Y_val, dtype=torch.long, device=device)
    val_loss, val_acc, _ = validate(X_val_tensor, Y_val_tensor)
    losses.append(epoch_loss)
    val_losses.append(val_loss)
    print('')
    print(f"Epoch {epoch} --> loss:{epoch_loss:.4f}, acc:{epoch_acc:.2f}%, val_loss:{val_loss:.4f}, val_acc:{val_acc:.2f}%")
Now I need to change this last step, as the dataset is too big for my GPU, so I was advised to use PyTorch DataLoaders. I hope I am clear now.
Along with the previous Dataset class, what else do I need to change? Please suggest.
Regards |
st182649 | My suggestion when you start to use DataLoader: first use it without workers (by setting num_workers=0) to validate your dataset. Then you can start to use multiple workers to accelerate your data loading.
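For instance (a minimal sketch using the DL loader alias defined above):
# debug first with synchronous loading in the main process
tr_loader = DL(train_set, batch_size=32, shuffle=True, num_workers=0)
# once the dataset is validated, enable background workers
tr_loader = DL(train_set, batch_size=32, shuffle=True, num_workers=4) |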
st182650 | @ejguan sir, where is the option to set num_workers?
I have tried without data augmentation too, but I am still getting the same error:
IndexError: slice() cannot be applied to a 0-dim tensor. |
st182651 | It’s an argument of DataLoader. (torch.utils.data — PyTorch 1.8.1 documentation 1)
krishna511:
IndexError:: slice() cannot be applied to a 0-dim tensor.
Can you please copy the whole traceback of this Error? |
st182652 | @ejguan Sir, I have made the change by setting num_workers=0 as well; that made no difference. Sir, I now think this whole code is executable; please check once.
This is the complete traceback:
File "E:\Python_On_All_Dataset\IEMO\long_codes\Kosta.jo\stacked _cnn.py", line 471, in <module>
for X, y in DLS[phase]:
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
data = self._next_data()
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\krishna\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "E:\Python_On_All_Dataset\IEMO\long_codes\Kosta.jo\Dataset.py", line 26, in __getitem__
rec= torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
IndexError: slice() cannot be applied to a 0-dim tensor. |
st182653 | The problem comes from here:
krishna511:
def __getitem__(this, idx):
    if this.mode == "train":
        return torch.FloatTensor(this.X[idx]), torch.LongTensor(this.y[idx])
    else:
        return torch.FloatTensor(this.X[idx])
Please try to replace torch.FloatTensor(this.X[idx]) with torch.tensor(this.X[idx], dtype=torch.float), and torch.LongTensor(this.y[idx]) with torch.tensor(this.y[idx], dtype=torch.long).
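Putting those replacements together, the fixed __getitem__ would look like this (a sketch of the suggestion above):
def __getitem__(this, idx):
    if this.mode == "train":
        return torch.tensor(this.X[idx], dtype=torch.float), torch.tensor(this.y[idx], dtype=torch.long)
    else:
        return torch.tensor(this.X[idx], dtype=torch.float) |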
st182654 | Thanks @ejguan, I have tried this. Now, how do I rewrite the training part to use the DataLoader? I am currently using the code below for model training.
verbose = True
Losses = []
Accuracies = []
epochs = 10
DLS = {"train": tr_loader, "valid": ts_loader}
start_time = time.time()

for e in range(epochs):
    epochLoss = {"train": 0, "valid": 0}
    epochAccs = {"train": 0, "valid": 0}

    for phase in ["train", "valid"]:
        if phase == "train":
            model.train()
        else:
            model.eval()
        lossPerPass = []
        accuracy = []

        for X, y in DLS[phase]:
            X, y = X.to(device), y.to(device).view(-1)
            optimizer.zero_grad()
            alpha = 1.0
            beta = 1.0
            with torch.set_grad_enabled(phase == "train"):
                pred_emo = model(X)
                emotion_loss = criterion(pred_emo, y)
                total_loss = alpha * emotion_loss
                if phase == "train":
                    total_loss.backward()
                    optimizer.step()
            lossPerPass.append(total_loss.item())
            accuracy.append(accuracy_score(torch.argmax(torch.exp(pred_emo.detach().cpu()), dim=1), y.cpu()))
            torch.save(model.state_dict(), "E:/Python_On_All_Dataset/IEMO/long_codes/Kosta.jo/model_checkpoint_spec/Epoch_{}.pt".format(e+1))
        epochLoss[phase] = np.mean(np.array(lossPerPass))
        epochAccs[phase] = np.mean(np.array(accuracy))

    # Epoch Checkpoint // All or Best
    Losses.append(epochLoss)
    Accuracies.append(epochAccs)

    # if scheduler:  # or use, if scheduler_1 or scheduler_2: // Use correct call method
    #     scheduler.step(epochLoss["valid"])
    #     # scheduler.step()

    if verbose:
        print("Epoch : {} | Train Loss : {:.5f} | Valid Loss : {:.5f} \
              | Train Accuracy : {:.5f} | Valid Accuracy : {:.5f}".format(e + 1, epochLoss["train"], epochLoss["valid"],
                                                                          epochAccs["train"], epochAccs["valid"]))

print("Time Taken [{} Epochs] : {:.2f} minutes".format(epochs, (time.time() - start_time) / 60))
print("Training Complete")
But in the line
emotion_loss = criterion(pred_emo, y)
it throws AttributeError: 'tuple' object has no attribute 'log_softmax'.
I know the model returns two values, as in class HybridModel:
return output_logits, output_softmax
How can I resolve this? Also, am I using the correct code for training, sir?
Previously, without DataLoaders, three functions were used for training, validation, etc. |
st182655 | krishna511:
‘tuple’ object has no attribute 'log_softmax’
This is occurring because your model’s forward gives you two outputs, output_logits and output_softmax. The loss criterion, which is CrossEntropyLoss here, expects one of those as a tensor, but currently it’s a tuple. Just change the line
pred_emo = model(X)
to
logits, softmax = model(X)
and pass them accordingly to your criterion, and you are good to go. |
st182656 | That’s done, thank you.
But my main question is that I want to replicate the model training as it was before the DataLoaders. How is that to be done? It probably needs a high-level understanding of PyTorch training. |
st182657 | Can you tell me what you mean by “replicate the model training as before data loaders”? |
st182658 | Hi @nikhil6041, if you read the query from the beginning you may get it, but let me brief you again.
That code was without DataLoaders but was not working for me due to CUDA memory issues; the whole code is posted above, before the loaders. Then I used PyTorch DataLoaders, and now I want the same training as before. If you go through the previous code here, you will get the idea for sure.
Thanks |
st182659 | @krishna511 I don’t see much change from how you were training your model with the earlier approach. For the expected GPU error, there are several ways you can avoid it; for example, calling
torch.cuda.empty_cache()
after each epoch will probably help you. |
st182660 | No @nikhil6041, I set a batch size of 1 and it’s not executing even one epoch, so how would emptying the cache help? |
st182661 | @krishna511 okay, can you paste the entire code snippet which is throwing errors here as a reply? |
st182662 | I have a train input of size [3082086, 7, 30]; that is, 3082086 frames, with a window of 7 frames, where each single frame contains 30 numbers.
The train labels are of size [3082086, 1], that is, one label for every window of 7 frames.
I tried to stack them to create a train set as follows:
trainset = torch.hstack(train_input, train_labels)
…but I got an error RuntimeError: Sizes of tensors must match except in dimension 1. Got 30 and 1 in dimension 2 (The offending index is 1)
Is there any way I can create the dataset with the above dimensions? |
st182663 | Solved by bsridatta in post #2 |
st182664 | You don’t want to mix inputs and labels into the same tensor; it doesn’t make sense to merge them. Instead, create a dataset with separate tensors self.inputs and self.labels in the dataset’s __init__, and in __getitem__(idx) just return self.inputs[idx], self.labels[idx].
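A minimal sketch of such a dataset (WindowDataset is a hypothetical name; the shapes follow the question above):
import torch
from torch.utils.data import Dataset

class WindowDataset(Dataset):
    def __init__(self, inputs, labels):
        self.inputs = inputs    # [3082086, 7, 30]
        self.labels = labels    # [3082086, 1]

    def __len__(self):
        return self.inputs.shape[0]

    def __getitem__(self, idx):
        # keep inputs and labels separate; no need to hstack them
        return self.inputs[idx], self.labels[idx] |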
st182665 | This is a repost from the computer vision category as I didn’t see that there was an audio category
Here is my problem:
I am currently working on building a CNN for sound classification. The problem is relatively simple: I need my model to detect whether there is human speech in .wav recordings of a tropical ecosystem. I made a train / test set containing 3-second recordings with human speech (speech) or without it (no_speech). From these 3-second fragments I get a mel-spectrogram of dimension 128 x 128 that is used to feed the model.
I checked if the script Audiodataset (given below) gives an expected output and I can’t find any problems. For the category “speech” the mel-spectrograms look similar to:
While the mel-spectrograms for “no-speech” look like:
Since it is a simple binary problem, I thought a CNN would easily detect human speech, but I may have been too cocky. However, it seems that after 1 or 2 epochs the model doesn’t learn anymore, i.e. the loss doesn’t decrease and the number of correct predictions stays roughly the same. I tried to play with the hyperparameters, but the problem is still the same. I tried learning rates from 0.1 and 0.01 down to 1e-7. Here are the runs displayed in TensorBoard:
Then I thought it could be due to the script itself, but I cannot find anything wrong. I would be glad if you could have a quick look at the script and let me know what could go wrong! If you have other ideas about why this problem may occur, I would also be glad to receive some advice on how to best train my CNN.
I based the script on the LunaTrainingApp from “Deep Learning with PyTorch” by Stevens, as I found the script elegant. Of course, I modified it to match my problem.
Here is the script creating the input for the model (from .wav file to mel-spectrogram):
"""
Define a class AudioDataset which take the folder of the training / test data as input
and make normalized mel-spectrograms out of it
Labels are also one hot encoded
"""
from torch.utils.data import Dataset
from pydub import AudioSegment
from sklearn.preprocessing import LabelEncoder
from fs import open_fs
from fs.osfs import OSFS
import numpy as np
import librosa
import torch
import os
class AudioDataset(Dataset):
def __init__(self, data_root, n_fft, hop_length, n_mels):
self.data_root_fs = open_fs(data_root)
self.samples = []
self.n_fft = n_fft
self.hop_length = hop_length
self.n_mels = n_mels
self.class_encode = LabelEncoder()
self._init_dataset()
def __len__(self):
return len(self.samples)
def __getitem__(self, idx):
audio_filepath, label = self.samples[idx]
with self.data_root_fs.open(audio_filepath, 'rb') as audio_fd:
audio_file = AudioSegment.from_file(audio_fd)
#audio_file = AudioSegment.from_file(audio_filepath)
array_audio = np.array(audio_file.get_array_of_samples(), dtype=float)
mel = self.to_mel_spectrogram(array_audio)
#mel_norm = self.normalize_row_matrix(mel)
mel_norm_tensor = torch.tensor(mel)
mel_norm_tensor = mel_norm_tensor.unsqueeze(0)
label_encoded = self.one_hot_sample(label)
label_class = torch.argmax(label_encoded)
return (mel_norm_tensor, label_class)
def _init_dataset(self):
folder_names = set()
for match in self.data_root_fs.glob("*/*.wav"):
folder = match.path.split('/')[-2]
folder_names.add(folder)
self.samples.append((match.path, folder))
self.class_encode.fit(list(folder_names))
def to_mel_spectrogram(self, x):
sgram = librosa.stft(x, n_fft=self.n_fft, hop_length=self.hop_length)
sgram_mag, _ = librosa.magphase(sgram)
mel_scale_sgram = librosa.feature.melspectrogram(S=sgram_mag, sr=16000, n_mels=self.n_mels)
mel_sgram = librosa.amplitude_to_db(mel_scale_sgram)
return mel_sgram
def normalize_row_matrix(self, mat):
mean_rows = mat.mean(axis=1)
std_rows = mat.std(axis=1)
normalized_array = (mat - mean_rows[:, np.newaxis]) / std_rows[:, np.newaxis]
return normalized_array
def to_one_hot(self, codec, values):
value_idxs = codec.transform(values)
return torch.eye(len(codec.classes_))[value_idxs]
def one_hot_sample(self, label):
t_label = self.to_one_hot(self.class_encode, [label])
return t_label
The model itself, which basically performs 2D convolutions on the mel-spectrogram:
import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class VADNet(nn.Module):
    def __init__(self, in_channels=1, conv_channels=8):
        super().__init__()
        self.tail_batchnorm = nn.BatchNorm2d(1)

        self.block1 = ConvBlock(in_channels, conv_channels)
        self.block2 = ConvBlock(conv_channels, conv_channels * 2)
        self.block3 = ConvBlock(conv_channels * 2, conv_channels * 4)
        self.block4 = ConvBlock(conv_channels * 4, conv_channels * 8)

        self.head_linear = nn.Linear(8 * 8 * conv_channels * 8, 2)

        self._init_weights()

    def _init_weights(self):
        for m in self.modules():
            if type(m) in {
                nn.Linear,
                nn.Conv3d,
                nn.Conv2d,
                nn.ConvTranspose2d,
                nn.ConvTranspose3d,
            }:
                nn.init.kaiming_normal_(
                    m.weight.data, a=0, mode='fan_out', nonlinearity='relu',
                )
                if m.bias is not None:
                    fan_in, fan_out = \
                        nn.init._calculate_fan_in_and_fan_out(m.weight.data)
                    bound = 1 / math.sqrt(fan_out)
                    nn.init.normal_(m.bias, -bound, bound)

    def forward(self, input_batch):
        bn_output = self.tail_batchnorm(input_batch)

        block_out = self.block1(bn_output)
        block_out = self.block2(block_out)
        block_out = self.block3(block_out)
        block_out = self.block4(block_out)

        conv_flat = block_out.view(block_out.size(0), -1)
        linear_output = self.head_linear(conv_flat)

        return linear_output

class ConvBlock(nn.Module):
    def __init__(self, in_channels, conv_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(
            in_channels, conv_channels, kernel_size=3, padding=1, bias=True,
        )
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(
            conv_channels, conv_channels, kernel_size=3, padding=1, bias=True,
        )
        self.relu2 = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(2, 2)

    def forward(self, input_batch):
        block_out = self.conv1(input_batch)
        block_out = self.relu1(block_out)
        block_out = self.conv2(block_out)
        block_out = self.relu2(block_out)
        return self.maxpool(block_out)
And the script that I use for training the model:
import torch
import torch.nn as nn
import argparse
import numpy as np
import logging
logging.basicConfig(level = logging.INFO)
log = logging.getLogger(__name__)
from torch.optim import SGD
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
from sklearn.metrics import confusion_matrix
from dataset_loader.audiodataset import AudioDataset
from models.vadnet import VADNet
from utils.earlystopping import EarlyStopping
class VADTrainingApp:
def __init__(self, sys_argv=None):
parser = argparse.ArgumentParser()
parser.add_argument("--train_path",
help='Path to the training set',
required=True,
type=str,
)
parser.add_argument("--test_path",
help='Path to the testing set',
required=True,
type=str,
)
parser.add_argument("--save_path",
help='Path to saving the model',
required=True,
type=str,
)
parser.add_argument("--save_es",
help='Save the checkpoints of early stopping call',
default="checkpoint.pt",
type=str,
)
parser.add_argument('--num-workers',
help='Number of worker processes for background data loading',
default=8,
type=int,
)
parser.add_argument("--batch_size",
help='Batch size to use for training',
default=32,
type=int,)
parser.add_argument('--epochs',
help='Number of epochs to train for',
default=50,
type=int,
)
parser.add_argument('--lr',
help='Learning rate for th stochastic gradient descent',
default=0.001,
type=float,
)
self.cli_args = parser.parse_args(sys_argv)
# related to the hardware
self.use_cuda = torch.cuda.is_available()
self.device = torch.device("cuda" if self.use_cuda else "cpu")
# directly related to the neural network
self.model = self.initModel()
self.optimizer = self.initOptimizer()
# For early stopping
self.patience = 7
# For metrics
self.METRICS_LABELS_NDX = 0
self.METRICS_PREDS_NDX = 1
self.METRICS_LOSS_NDX = 2
self.METRICS_SIZE = 3
def initModel(self):
"""Initialize the model, if GPU available computation done there"""
model = VADNet()
model = model.double()
if self.use_cuda:
log.info("Using CUDA; {} devices".format(torch.cuda.device_count()))
if torch.cuda.device_count() > 1:
model = nn.DataParallel(model)
model = model.to(self.device)
return model
def initOptimizer(self):
return SGD(self.model.parameters(), lr=self.cli_args.lr)#, momentum=0.8, weight_decay=0.01)
def adjust_learning_rate(self):
"""Sets the learning rate to the initial LR decayed by a factor of 10 every 20 epochs"""
self.cli_args.lr = self.cli_args.lr * (0.1 ** (self.cli_args.epochs // 20))
for param_group in self.optimizer.param_groups:
param_group['lr'] = self.cli_args.lr
def initTrainDL(self):
trainingset = AudioDataset(self.cli_args.train_path,
n_fft=1024,
hop_length=376,
n_mels=128)
batch_size = self.cli_args.batch_size
if self.use_cuda:
batch_size *= torch.cuda.device_count()
trainLoader = DataLoader(trainingset,
batch_size = batch_size,
shuffle=True,
num_workers=self.cli_args.num_workers,
pin_memory=self.use_cuda)
return trainLoader
def initTestDL(self):
testset = AudioDataset(self.cli_args.test_path,
n_fft=1024,
hop_length=376,
n_mels=128)
batch_size = self.cli_args.batch_size
if self.use_cuda:
batch_size *= torch.cuda.device_count()
testLoader = DataLoader(testset,
batch_size = batch_size,
shuffle=True,
num_workers=self.cli_args.num_workers,
pin_memory=self.use_cuda)
return testLoader
def main(self):
log.info("Start training, {}".format(self.cli_args))
train_dl = self.initTrainDL()
test_dl = self.initTestDL()
trn_writer = SummaryWriter(log_dir='runs' + '-trn')
val_writer = SummaryWriter(log_dir='runs' + '-val')
early_stopping = EarlyStopping(patience=self.patience, path=self.cli_args.save_es, verbose=True)
for epoch_ndx in range(1, self.cli_args.epochs + 1):
log.info("Epoch {} / {}".format(epoch_ndx, self.cli_args.epochs))
# Adjust the new learning rate
self.adjust_learning_rate()
# Train the model's parameters
metrics_t = self.do_training(train_dl)
self.logMetrics(metrics_t, trn_writer, epoch_ndx)
# Test the model
metrics_v = self.do_val(test_dl, val_writer)
self.logMetrics(metrics_v, val_writer, epoch_ndx, train=False)
# Add the mean loss of the val for the epoch
early_stopping(metrics_v[self.METRICS_LOSS_NDX].mean(), self.model)
if early_stopping.early_stop:
print("Early stopping")
break
# Save the model once all epochs have been completed
torch.save(self.model.state_dict(), self.cli_args.save_path)
def do_training(self, train_dl):
"""Training loop"""
self.model.train()
# Initiate a 3 dimension tensor to store loss, labels and prediction
trn_metrics = torch.zeros(self.METRICS_SIZE, len(train_dl.dataset), device=self.device)
for batch_ndx, batch_tup in enumerate(train_dl):
if batch_ndx%100==0:
log.info("TRAINING --> Batch {} / {}".format(batch_ndx, len(train_dl)))
self.optimizer.zero_grad()
loss = self.ComputeBatchLoss(batch_ndx,
batch_tup,
self.cli_args.batch_size,
trn_metrics)
loss.backward()
self.optimizer.step()
return trn_metrics.to('cpu')
def do_val(self, test_dl, early_stop):
"""Validation loop"""
with torch.no_grad():
self.model.eval()
val_metrics = torch.zeros(self.METRICS_SIZE, len(test_dl.dataset), device=self.device)
for batch_ndx, batch_tup in enumerate(test_dl):
if batch_ndx%100==0:
log.info("VAL --> Batch {} / {}".format(batch_ndx, len(test_dl)))
loss = self.ComputeBatchLoss(batch_ndx,
batch_tup,
self.cli_args.batch_size,
val_metrics)
return val_metrics.to('cpu')
def ComputeBatchLoss(self, batch_ndx, batch_tup, batch_size, metrics_mat):
"""
Return the mean loss of the batch as a tensor and record per-sample metrics
"""
imgs, labels = batch_tup
imgs = imgs.to(device=self.device, non_blocking=True)
labels = labels.to(device=self.device, non_blocking=True)
outputs = self.model(imgs)
_, predicted = torch.max(outputs, dim=1)
loss_func = nn.CrossEntropyLoss(reduction="none")
loss = loss_func(outputs, labels)
start_ndx = batch_ndx * self.cli_args.batch_size
end_ndx = start_ndx + labels.size(0)
metrics_mat[self.METRICS_LABELS_NDX, start_ndx:end_ndx] = labels.detach()
metrics_mat[self.METRICS_PREDS_NDX, start_ndx:end_ndx] = predicted.detach()
metrics_mat[self.METRICS_LOSS_NDX, start_ndx:end_ndx] = loss.detach()
return loss.mean()
def logMetrics(self, metrics_mat, writer, epoch_ndx, train=True):
"""
Function to compute custom metrics: precision and recall for both classes
and % of correct predictions. Log the metrics in a tensorboard writer
"""
# Confusion matrix to compute precision / recall for each class
tn, fp, fn, tp = torch.tensor(confusion_matrix(metrics_mat[self.METRICS_LABELS_NDX],
metrics_mat[self.METRICS_PREDS_NDX],
labels=[0,1]).ravel())
precision_no_speech = tp / (tp + fp)
recall_no_speech = tp / (tp + fn)
# class speech is labelled 0, so true positive = true negative for speech
precision_speech = tn / (tn + fn)
recall_speech = tn / (fp + tn)
# % of correct predictions - optional metrics that are nice
no_speech_count = (metrics_mat[self.METRICS_LABELS_NDX] == 0).sum()
speech_count = (metrics_mat[self.METRICS_LABELS_NDX] == 1).sum()
no_speech_correct = ((metrics_mat[self.METRICS_PREDS_NDX] == 0) & (metrics_mat[self.METRICS_LABELS_NDX] == 0)).sum()
speech_correct = ((metrics_mat[self.METRICS_PREDS_NDX] == 1) & (metrics_mat[self.METRICS_LABELS_NDX] == 1)).sum()
correct_all = (speech_correct + no_speech_correct) / float(speech_count + no_speech_count) * 100
correct_speech = speech_correct / float(speech_count) * 100
correct_no_speech = no_speech_correct / float(no_speech_count) * 100
loss = metrics_mat[self.METRICS_LOSS_NDX].mean()
writer.add_scalar("loss", loss, epoch_ndx)
writer.add_scalar("precision/no_speech", precision_no_speech, epoch_ndx)
writer.add_scalar("recall/no_speech", recall_no_speech, epoch_ndx)
writer.add_scalar("precision/speech", precision_speech, epoch_ndx)
writer.add_scalar("recall/speech", recall_speech, epoch_ndx)
writer.add_scalar("correct/all", correct_all, epoch_ndx)
writer.add_scalar("correct/speech", correct_speech, epoch_ndx)
writer.add_scalar("correct/no_speech", correct_no_speech, epoch_ndx)
if train:
log.info("[TRAINING] loss: {}, correct/all: {}% , correct/speech: {}%, correct/no_speech: {}%".format(loss,
correct_all,
correct_speech,
correct_no_speech))
else:
log.info("[VAL] loss: {}, correct/all: {}% , correct/speech: {}%, correct/no_speech: {}%".format(loss,
correct_all,
correct_speech,
correct_no_speech))
if __name__ == "__main__":
VADTrainingApp().main()
I tried to fit the model on a small subset of the data, but the problem remains. I would be very grateful for any help, as this problem is getting frustrating.
Thank you! |
st182666 | Well, it's difficult to say.
I would use binary cross-entropy rather than cross-entropy, since you have a binary case; a minimal sketch follows at the end of this post.
I would also add a pooling layer after the CNN. Those features are spatial, and you don't really care where the speech is located in the original mel spectrogram. Pooling also forces each neuron in the linear layer to distinguish between speech and non-speech.
Rather than taking a subset, take a single batch, load it onto the GPU and backprop with a simple, clean script. This can help you debug whether there is a bad network design or a bug in the code.
If everything is okay, then I would check the dataset and make sure it works as expected. If it does, you can try a simpler problem first, for example two clearly different categories, using enhanced speech where the human voice is clearly predominant. In short, you have to be sure the task is suitable. |
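For the binary cross-entropy suggestion in the post above, a minimal sketch (not from the original thread; it assumes the final linear layer is changed to emit a single logit per sample, and the stand-in model and tensor shapes are illustrative):

import torch
import torch.nn as nn

# stand-in for a model whose head emits a single logit per sample
model = nn.Sequential(nn.Flatten(), nn.Linear(1 * 128 * 128, 1))
criterion = nn.BCEWithLogitsLoss()

imgs = torch.randn(8, 1, 128, 128)            # (batch, channels, mels, frames), illustrative
labels = torch.randint(0, 2, (8,)).float()    # 0. = no speech, 1. = speech

logits = model(imgs).squeeze(1)               # (batch,)
loss = criterion(logits, labels)
preds = (torch.sigmoid(logits) > 0.5).long()  # hard predictions at a 0.5 threshold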
st182667 | Regarding "I would also add a pooling layer after the CNN":
Do you mean applying a max pooling across the time axis after the 4th block, to get a 1D array of frequencies? |
st182668 | I'm just a beginner, but I have gone through Google's speech recognition API. What it does is record 3-5 seconds of background noise and then use the intensity of the recorded background noise as a base level. Any substantial change in this intensity triggers recording, and the pre-trained speech recognizer determines whether the sound is a spoken word (say, the wake word) or just a random sound.
Coming to your CNN implementation, it really depends on one question: can you differentiate between pictures of human speech and no human speech? If yes, then the CNN should ideally work. Also, does the spectrogram show a difference between noise and human speech?
The implementation, as Juan said above, has only one minor issue: use binary cross-entropy for binary classification. I doubt that not using max pooling matters much, but it is something you should play around with. Take inspiration from other successful networks for the task. You could also do transfer learning, though this might be overkill. |
st182669 | Well, everything depends on whether you are interested in frequency information or temporal information.
In short, if you look at CNN design over time, in the beginning people used to flatten the features and send them to a linear layer.
Nowadays the common way is to use a pooling to get a 1D array and send that to a linear layer.
Note that you are not interested in keeping the spatial features. So imagine your output is 256x8x8.
If you have the speech at the beginning of the track, the elements with useful information would be the [:, :, :4] ones on the left; a pooling sketch follows below.
With max pooling the linear layer faces a simpler task. |
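To make that concrete, a minimal sketch of pooling to a 1D feature vector before the linear head (the channel count 256 and the input sizes are illustrative, not taken from the model above):

import torch
import torch.nn as nn

block_out = torch.randn(8, 256, 8, 8)     # illustrative conv features, (batch, C, H, W)
pool = nn.AdaptiveMaxPool2d(1)            # -> (batch, 256, 1, 1) for any H, W
head = nn.Linear(256, 1)                  # single speech / no-speech logit

features = pool(block_out).flatten(1)     # (batch, 256), spatial positions pooled away
logits = head(features)                   # (batch, 1)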
st182670 | Okay, thanks for the tip!
So I have been running a simplified version of my script and now the model is able to learn. I also tried building a model based on 1D convolutions, where the "time" axis becomes channels; I was reading that it makes more sense theoretically. It seems to work OK and is much faster than the 2D convolutions (due to fewer parameters).
However, I have another question regarding the training of the model. My dataset is very imbalanced: 95% no-speech versus 5% speech, and the model struggles to "learn" what speech looks like; I only get up to 60% accuracy for speech. Is there any method or architecture I should look into to improve performance on such a dataset?
Note that the dataset is artificially created: I overlapped VoxCeleb / Librispeech voices on only 5% of the ecosystem recordings. This percentage was chosen based on the real-life occurrence of human sound in an ecosystem.
Would it be better to build a balanced dataset for the model to better "learn" what speech looks like, even though this is not a realistic situation? Or is it better to focus on improving the model with the 5% occurrence of speech?
Thanks for the help! |
st182671 | Building a balanced dataset is always better. Try image augmentation. But such a disparity (95/5) would be hard to overcome even with image augmentation. |
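Beyond rebalancing the data itself, one option worth trying is to oversample the minority (speech) class at loading time with a WeightedRandomSampler. A minimal sketch; the dataset and labels below are illustrative stand-ins for the real spectrogram dataset:

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# stand-ins: 950 no-speech (0) and 50 speech (1) examples
targets = torch.cat([torch.zeros(950, dtype=torch.long), torch.ones(50, dtype=torch.long)])
trainingset = TensorDataset(torch.randn(1000, 1, 128, 128), targets)

class_counts = torch.bincount(targets)                  # tensor([950, 50])
sample_weights = (1.0 / class_counts.float())[targets]  # per-sample weight
sampler = WeightedRandomSampler(sample_weights, num_samples=len(targets), replacement=True)
# Note: sampler is mutually exclusive with shuffle=True in DataLoader.
loader = DataLoader(trainingset, batch_size=32, sampler=sampler)

A related alternative is to keep the 95/5 split and weight the loss instead, e.g. via the pos_weight argument of nn.BCEWithLogitsLoss.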
st182672 | Hi everyone, is it possible to use Kaldi Voice Activity Detection (VAD) in PyTorch? |
st182673 | I’m not deeply familiar with Kaldi, but how would you like to use it and what have you tried so far?
Are you stuck at a specific point? |
st182674 | I'm actually working on my thesis with audio data, and I want to filter the non-speech frames out of every audio file, i.e., the sequences where people do not speak. I have read that Kaldi VAD works well in this case. Or is there any other option in torchaudio for doing this? |
st182675 | Unfortunately, I don't know how Kaldi detects the speech and whether it's a filtering algorithm or some kind of machine learning model. If Kaldi works, you could stick to it and preprocess the data in this way. I'm unsure whether you would like to reimplement Kaldi's algorithm in torchaudio or how they should be combined. |
st182676 | torchaudio has an implementation of VAD based on sox, see here, and another implemented as an example here. Let us know how your experience goes |
st182677 | I have implemented it this way:
waveform, sample_rate = torchaudio.load(file_path)
waveform = torchaudio.functional.vad(waveform, sample_rate)
and it seems to work, but before VAD it took only 10-15 minutes to train an epoch, and now it needs almost 10 hours per epoch. Have I done something wrong? |
st182678 | Hi!
This slowdown is expected when the VAD itself takes a long time per sample.
It might be better to run VAD on all samples once before training, instead of applying VAD inside the DataLoader; a sketch follows below. |
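A minimal sketch of that offline preprocessing step (the directory names are illustrative):

import os
import glob
import torchaudio

os.makedirs("data/vad", exist_ok=True)
for path in glob.glob("data/raw/*.wav"):
    waveform, sample_rate = torchaudio.load(path)
    trimmed = torchaudio.functional.vad(waveform, sample_rate)  # remove leading non-speech
    torchaudio.save(os.path.join("data/vad", os.path.basename(path)), trimmed, sample_rate)

The DataLoader then reads the already-trimmed files, so the VAD cost is paid once instead of on every epoch.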
st182679 | I want to convert an ogg file to a torch.Tensor using the torchaudio.load function, but in deployment I need to do it in C++ (I converted the original model to TorchScript). Can I use it from C++? Or what can I do as an alternative? |
st182680 | torchaudio does not yet have a C++ interface. However, torchaudio.load is a wrapper around https://github.com/pytorch/audio/blob/master/torchaudio/torch_sox.cpp which you could use. |
st182681 | Any update on this?
I also need to convert a ".wav" file to a torchaudio tensor in C++ while deploying on a mobile device. Is there any way to do this?
This link is dead: https://github.com/pytorch/audio/blob/master/torchaudio/torch_sox.cpp
Is there any alternate link or implementation of this?
Thanks. |
st182682 | Same here. Is there any built-in way to read a wav file into a torch tensor in C++? |
st182683 | I’m having an issue with my DNN model.
During the training phase the accuracy is 0.968 and the loss is 0.103, but during the testing phase with model.eval(), the accuracy is 0 and the running-corrects count is 0.
def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment):
model.train()
liveloss = PlotLosses()
data_len = len(train_loader.dataset)
with experiment.train():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(train_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
optimizer.zero_grad()
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
loss.backward()
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
experiment.log_metric('learning_rate', scheduler.get_last_lr(), step=iter_meter.get())
optimizer.step()
scheduler.step()
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(train_loader.dataset)
epoch_acc = running_corrects.float() / len(train_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
The evaluation script is:
def test(model, device, tst_loader, criterion, epoch, iter_meter, experiment):
print('\nevaluating...')
model.eval()
test_loss = 0
liveloss = PlotLosses()
data_len = len(tst_loader.dataset)
with experiment.test():
with torch.no_grad():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(tst_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
test_loss += loss.item() / len(tst_loader)
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(tst_loader.dataset)
epoch_acc = running_corrects.float() / len(tst_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
test(model, device, tst_loader, criterion, epoch, iter_meter, experiment)
Is there something wrong with the code? |
st182684 | I want to know what version of Python this is.
Some previous versions of Python do not support +=.
This is not likely the issue, though, because it would flag an error.
Now, I noticed you are not saving the model after training.
You are passing the model into the training function as an argument and you are not returning it after training.
When you then pass the model into the testing function, it doesn't pass the already-trained model.
So try making model a global variable, so that whatever changes you make to it from the training function will be reflected in the testing function.
Or
After training the model, return the already-trained model and assign model = train(...). |
st182685 | Thanks for the reply. The += operator worked during training, so I don't think it's the culprit. |
st182686 | (screenshot: main.py open in Visual Studio Code, illustrating the example described below)
Look at this screenshot and relate it to what my earlier comment was talking about.
When I printed x, it did not change, because I only multiplied the x that I passed as a parameter of the 'me' function by 4, not the original x.
So x still remains 4. |
st182687 | I’m not sure about the background of the previous discussion about the inplace operation.
For your initial problem: are you seeing the same accuracy drop after you call model.eval() and use your training dataset for the sake of debugging? |
st182688 | I didn't quite get your point, but I call model.train() with the exact script above; on the train data it shows an accuracy of 0.968 and a loss of 0.103.
Then I call model.eval(), and the accuracy is always 0.0000.
Do I have to save the model first before I call model.eval()? |
st182689 | No, you don’t need to save the model before calling eval().
Are you seeing the performance drop on both the training and validation datasets after calling model.eval()? |
st182690 | Sorry, I'm a newbie, so I may be doing it all wrong.
When I call model.eval() it just uses the test loader, so I'm not sure how the train performance could be affected.
Please bear with me. Is there anything in the code above that looks off to you?
Here's what I'm doing, in a Jupyter notebook:
I set up the train loader and test loader.
In one cell I run the train script.
Once training is done, I run the test script. |
st182691 | @lima @ptrblck
As I said earlier, if you read my comment:
You passed the model into the train function and trained it without returning it after training.
So the model you passed into the test function is not a trained model, but rather a model that is yet to be trained.
The training and weight updates only happened in the train function, so if you call the model anywhere else, it won't have those trained weights and will still be the untrained model, simply because you passed the model as a parameter to the train function and confined the trained model to just the train function.
So you either make the model global and let the train and test functions access and use it,
or
you return the trained model after training it in the train function and assign the result to a variable for inference,
or
you save the already-trained model and load it when you want to run inference. |
st182692 | @lima @ptrblck
Make the model global so the train and test functions can both access it without passing it individually as a parameter.
Assuming: model = the_model_that_you_declared_outside()
def train(device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment):
model.train()
liveloss = PlotLosses()
data_len = len(train_loader.dataset)
with experiment.train():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(train_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
optimizer.zero_grad()
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
loss.backward()
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
experiment.log_metric('learning_rate', scheduler.get_last_lr(), step=iter_meter.get())
optimizer.step()
scheduler.step()
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(train_loader.dataset)
epoch_acc = running_corrects.float() / len(train_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
train(device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
#The evaluation script is:
def test(device, tst_loader, criterion, epoch, iter_meter, experiment):
print('\nevaluating...')
model.eval()
test_loss = 0
liveloss = PlotLosses()
data_len = len(tst_loader.dataset)
with experiment.test():
with torch.no_grad():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(tst_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
test_loss += loss.item() / len(tst_loader)
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(tst_loader.dataset)
epoch_acc = running_corrects.float() / len(tst_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
test(device, tst_loader, criterion, epoch, iter_meter, experiment)
Or you can return the model after training in the train function like so:
def train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment):
model.train()
liveloss = PlotLosses()
data_len = len(train_loader.dataset)
with experiment.train():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(train_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
optimizer.zero_grad()
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
loss.backward()
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
experiment.log_metric('learning_rate', scheduler.get_last_lr(), step=iter_meter.get())
optimizer.step()
scheduler.step()
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(train_loader.dataset)
epoch_acc = running_corrects.float() / len(train_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
return model
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
model = train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
#The evaluation script is:
def test(model, device, tst_loader, criterion, epoch, iter_meter, experiment):
print('\nevaluating...')
model.eval()
test_loss = 0
liveloss = PlotLosses()
data_len = len(tst_loader.dataset)
with experiment.test():
with torch.no_grad():
logs = {}
running_loss = 0.0
running_corrects = 0
for batch_idx, _data in enumerate(tst_loader):
features, labels = _data[:][:,:,:-1], _data[..., -1]
features = features.permute(0, 2, 1)
features, labels = features.to(device), labels.to(device)
output = model(features)
loss = criterion(output, torch.max(labels, 1)[1])
test_loss += loss.item() / len(tst_loader)
experiment.log_metric('loss', loss.item(), step=iter_meter.get())
iter_meter.step()
_, preds = torch.max(output, 1)
running_loss += loss.detach() * features.size(0)
running_corrects += torch.sum(preds == torch.max(labels, 1)[1])
epoch_loss = running_loss / len(tst_loader.dataset)
epoch_acc = running_corrects.float() / len(tst_loader.dataset)
logs['log loss'] = epoch_loss.item()
logs['accuracy'] = epoch_acc.item()
liveloss.update(logs)
liveloss.send()
iter_meter = IterMeter()
for epoch in range(1, epochs + 1):
test(model, device, tst_loader, criterion, epoch, iter_meter, experiment) |
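For completeness, a minimal sketch of the third option (save after training, load before testing), reusing VADNet and device from the scripts above; the checkpoint file name is illustrative:

# after training finishes
torch.save(model.state_dict(), "vadnet_checkpoint.pt")

# later, before testing (e.g. in a fresh process)
model = VADNet().double().to(device)
model.load_state_dict(torch.load("vadnet_checkpoint.pt", map_location=device))
model.eval()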
st182693 | @Henry_Chibueze @ptrblck
I think I can pinpoint the issue but I still can’t get it solved.
The problem is I don’t know how to get the predictions to look like the labels.
test_acc = 0.0
for _data in loaders['test']:
with torch.no_grad():
data, target = _data[:][:,:,:-1], _data[..., -1]
data = data.permute(0, 2, 1)
data, target = data.to(device), target.to(device)
output = trained_model(data)
# calculate accuracy
_, pred = torch.max(output, dim=1)
correct = torch.sum(pred.eq(torch.max(target, 1)[1]))
print(f'prediction: {pred}')
print(f'label: {(torch.max(target, 1)[1])}')
test_acc += torch.mean(correct.float())
print('Accuracy of the network on {} test frames: {}%'.format(len(tst_data3), round(test_acc.item()*100.0/len(loaders['test']), 2)))
If I print the prediction and the label, I get different values:
print(f'prediction: {pred}')
prediction: tensor([194, 86, 492, 492, 132, 132, 263, 216, 241, 17, 263, 216, 127, 399,
492, 500, 420, 263, 390, 510, 216, 510, 132, 194, 263, 217, 263, 23,
216, 395, 132, 297, 390, 194, 263, 492, 114, 216, 194, 503, 20, 217,
297, 477, 476, 263, 263, 479, 466, 500, 132, 263, 361, 194, 92, 510,
393, 216, 500, 53, 194, 510, 216, 216, 269, 216, 228, 194, 119, 415,
477, 477, 114, 477, 476, 477, 503, 194, 170, 216, 263, 194, 221, 503,
263, 466, 263, 114, 263, 492, 46, 449, 286, 286, 263, 132, 420, 406,
492, 194, 263, 194, 263, 263, 194, 477, 114, 119, 477, 510, 241, 492,
286, 263, 170, 216, 216, 194, 127, 263, 492, 477, 216, 114, 194, 360,
479, 390], device='cuda:0')
print(f'label: {(torch.max(target, 1)[1])}')
label: tensor([3, 0, 0, 0, 1, 0, 6, 0, 2, 0, 5, 0, 1, 0, 4, 0, 0, 0, 0, 0, 0, 4, 0, 0,
3, 0, 4, 2, 0, 6, 0, 0, 3, 0, 0, 3, 0, 0, 0, 1, 0, 0, 5, 0, 0, 6, 0, 0,
0, 6, 0, 5, 0, 0, 6, 0, 1, 0, 4, 0, 0, 4, 0, 1, 0, 1, 0, 0, 0, 4, 0, 0,
0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 4, 0, 0, 6, 0, 0, 5, 0,
0, 0, 1, 0, 3, 0, 3, 4, 0, 5, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 3, 0,
0, 0, 1, 0, 1, 1, 0, 6], device='cuda:0') |
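One additional sanity check worth running here (a sketch using the names from the snippet above): the predicted indices go up to 510 while the labels only span 0 to 6, which suggests the model's output dimension may not match the number of classes, or that the max is taken over the wrong axis. Printing the shapes would confirm it:

print(output.shape)                      # expected: (batch, num_classes)
print(torch.max(target, 1)[1].unique())  # label indices actually present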
st182694 | Henry_Chibueze:
The training and weight updates only happened in the train function, so if you call the model anywhere else, it won't have those trained weights and will still be the untrained model, simply because you passed the model as a parameter to the train function and confined the trained model to just the train function.
I don’t think that’s correct as seen e.g. here:
import torch
import torch.nn as nn
def train(model, optimizer):
for epoch in range(10):
optimizer.zero_grad()
output = model(torch.randn(1, 1))
loss = criterion(output, torch.randn(1, 1))
loss.backward()
optimizer.step()
print('epoch {}, loss {}, weight {}'.format(
epoch, loss.item(), model.weight))
if __name__=='__main__':
model = nn.Linear(1, 1)
print('before training')
print(model.weight)
optimizer = torch.optim.SGD(model.parameters(), lr=1.)
criterion = nn.MSELoss()
train(model, optimizer)
print('after training')
print(model.weight)
As you can see, the model is created in the if-clause guard, passed to the train function, and is not returned.
Nevertheless, the weight updates will be performed inplace on the model reference, and thus the model in the same scope will contain the updated parameters. |
st182695 | Does Pytorch have an equivalent implementation to Tensorflow’s tf.signal.frame | TensorFlow Core v2.5.0 6 ? I’ve been searching everywhere and cannot seem to find anything, even in torchaudio. I know that librosa has an equivalent function, but it requires that the inputs be converted to numpy arrays first |
st182696 | Solved by ptrblck in post #2
Based on the description I guess that tensor.unfold would perform the same operation (with an additional F.pad before if needed). |
st182697 | Based on the description I guess that tensor.unfold would perform the same operation (with an additional F.pad before if needed). |
st182698 | You are correct, thank you! For completeness, this method appears to produce the same results as tf.signal.frame (albeit not tested extensively)
import torch.nn.functional as F

def frame(signal, frame_length, frame_step, pad_end=False, pad_value=0, axis=-1):
    """
    equivalent of tf.signal.frame
    """
    signal_length = signal.shape[axis]
    if pad_end:
        # pad so that there are ceil(signal_length / frame_step) frames in total
        num_frames = -(-signal_length // frame_step)  # ceiling division
        pad_size = max(0, (num_frames - 1) * frame_step + frame_length - signal_length)
        if pad_size != 0:
            # F.pad takes (left, right) pairs starting from the last dimension
            pad = [0, 0] * signal.ndim
            ax = axis % signal.ndim
            pad[2 * (signal.ndim - 1 - ax) + 1] = pad_size  # pad `axis` on the right
            signal = F.pad(signal, pad, "constant", pad_value)
    # note: the frame_length dimension ends up last (unfold semantics)
    frames = signal.unfold(axis, frame_length, frame_step)
    return frames
Reference:
https://superkogito.github.io/blog/SignalFraming.html |
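A quick usage check of the function above (the expected shapes are worked out by hand, so treat them as illustrative):

import torch

signal = torch.arange(1, 10).float()               # 9 samples
print(frame(signal, 4, 2).shape)                   # torch.Size([3, 4])
print(frame(signal, 4, 2, pad_end=True).shape)     # torch.Size([5, 4])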
st182699 | I have a dataset of N audio frames of size (N, 13) for the task of phoneme recognition:
train_data = torch.hstack((train_feat, train_labels))
train_loader = torch.utils.data.DataLoader(train_data, batch_size=128, shuffle=True)
print(train_data.shape)
torch.Size([3082092, 14])
How can I stack 7 frames each time to feed to the DNN? |
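This last question has no reply in the thread; one possible sketch (it assumes frame-level labels and keeps each 7-frame window's center label; the tensors below are small stand-ins, so substitute the real N and features):

import torch

train_feat = torch.randn(1000, 13)                       # stand-in features, (N, 13)
train_labels = torch.randint(0, 40, (1000, 1)).float()   # stand-in frame labels, (N, 1)

windows = train_feat.unfold(0, 7, 1)                     # (N - 6, 13, 7) sliding windows
windows = windows.permute(0, 2, 1).reshape(-1, 7 * 13)   # (N - 6, 91), 7 frames concatenated
center_labels = train_labels[3:-3]                       # label of each window's middle frame

train_data = torch.hstack((windows, center_labels))      # (N - 6, 92)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=128, shuffle=True)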