id | text
---|---
st182900 | Even after reducing the parameters, the model is over-fitting after approximately 8-9 epochs and takes a while to reduce the training loss to zero.
Any suggestions on this? |
st182901 | Hard to say without seeing the data. How did you build your test set? Is it a sample from your original dataset or does it come from another source? What about the training set distribution (see my previous question)? How many examples are in each subset? With no information, it's hard to make recommendations… |
st182902 | The test data was chosen from the same dataset as the training set.
The dataset is int16 audio files.
Training set: 83 audio files
Test set: 10 audio files |
st182903 | I have a dumb question: I have a loss function that depends on the TensorFlow and Keras backend. Can I use this loss function to train my PyTorch network?
Another question: is there any PyTorch function equivalent to this TensorFlow command?
stft_true = tf.contrib.signal.stft(y_true,256,128,512,window_fn,pad_end=False)
Thanks |
st182904 | Hi, almighty guys.
I'm a beginner in digital audio signal processing, and I ran into some questions while reading a paper.
When I get the filter-bank outputs from a 10 s audio segment, should I send all of them into the model or just several frames?
I'm a little confused about the model architecture in the paper. How does the dense layer connect the encoder and the decoder? Does it convert a Bx1x1x128 tensor to a Bx128xhxw tensor?
[image: model architecture figure from the paper, 898×396] |
st182905 | Hello all,
I have a query regarding the placement of optimizer.zero_grad() and optimizer.step():
for idx in range(epoch):
    for kdx in range(batch):
        y_pred = model(X)
        loss = loss_fn(y_pred, y_target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
Do we need to call them once for every epoch or for every batch iteration? |
st182906 | Solved by ptrblck in post #2. |
st182907 | It depends on your use case.
The usual approach would be to zero out the old gradients, calculate the new gradients via backward(), and update the parameters via optimizer.step() for each batch (so once per iteration).
However, you could simulate a larger batch size by accumulating the gradients over multiple batches and calling the complete update step after a specific number of iterations. |
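A minimal sketch of the gradient-accumulation idea described above; the tiny model, random data, and accumulation_steps value are made up purely for illustration.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
accumulation_steps = 4  # pretend each update sees 4x more data

optimizer.zero_grad()
for step in range(20):
    X = torch.randn(8, 10)                            # one "small" batch
    y = torch.randint(0, 2, (8,))
    loss = loss_fn(model(X), y) / accumulation_steps  # scale so the update matches a larger batch
    loss.backward()                                   # gradients accumulate across iterations
    if (step + 1) % accumulation_steps == 0:          # complete update only every few batches
        optimizer.step()
        optimizer.zero_grad()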
st182908 | Hello everyone,
I am sorry for the long post but would really appreciate any help you guys can offer !
I am trying to create a custom RNN that would apply n different recurrent connections (that are in fact n biquadratic filters) to the input. Another way of thinking about it would be to have n different RNNs that work on the input and to concatenate their results afterwards; however, I believe that would lead to very poor performance (please tell me if I am wrong).
For instance :
I have a mini-batch of size [32, 16000], and want to apply 128 filters on it, which means that my output size is [32,128,16000].
What I did so far is :
Expand and clone the input so I have a tensor of size : [32, 128, 16000].
Permute axes to get a size of [16000, 32, 128].
Iterate over the sequence and use matrix products to compute the output, since the filters are linear. In fact, I use this recurrence relation, which works for only one sequence of size N (except the first two samples of course): y_i[n] = b0_i * (x[n] - x[n-2]) - a1_i * y_i[n-1] - a2_i * y_i[n-2],
where the a_i and b_i are the learnable weights, x[n] is the n-th sample of the input, y[n] is the state at time frame n, and i stands for the i-th filter (or the i-th recurrence relation if you prefer).
I have already tried two methods to make it work (see below). The problem is that my versions are too slow and I don't have a good enough understanding of PyTorch to optimize them.
So I would really appreciate any help you can provide on these points:
Is there a better way of implementing an RNN with n different recurrence relations?
Do you see improvements I could make to my code (see below) that would yield good performance?
Would computing the outputs of the different RNNs in parallel and concatenating the results (with torch.cat) yield better results?
Might implementing it in C++ (as is done for PyTorch's own recurrence loop) become necessary to achieve good performance?
Links for the pytorch RNN :
RNN.py
RNN.cpp
QuantizeLinear.cpp, which seems to contain the function that performs the loop over the sequence: fbgemm_linear_int8_weight_fp32_activation.
Please, tell me if there is anything unclear or if you need more info.
Thanks for reading this post, and thanks for any piece of advice you can provide !
Code :
In each of the following versions, the loop over the sequence is the piece of code that takes the longest to execute.
Version A :
def forward(self, X):
    bs = X.size()[0]
    X = X.unsqueeze(1).expand(-1, self.kernels_number, -1).clone()
    B0, A1, A2 = self.filters()
    if self.is_cuda:
        out = torch.zeros(self.points_per_sequence, bs, self.kernels_number).cuda()
    else:
        out = torch.zeros(self.points_per_sequence, bs, self.kernels_number)
    out[0] = torch.mul(X[0], B0)  # [bs,1]*[1,128] = [bs,128]
    out[1] = torch.mul(X[1], B0) - torch.mul(out[0], A1)
    for n in range(2, X.size()[0]):
        out[n] = self.f_2(out[n-1], out[n-2], X[n], X[n-2], B0, A1, A2)
    out = torch.flip(out, dims=[2]).permute(1, 2, 0)
    return out
(Since I am using band-pass filters, I only need the three tensors B0, A1, A2 of size [1, n_channels] each; they are computed from only two weights, but it does not matter here).
The function self.f_2 :
def f_2(self, y_1, y_2, x, x_2, b0, a1, a2):
    """
    Computing y[n] with y[n-1], y[n-2], x[n], x[n-1], x[n-2], b0, a1, a2
    Sizes:
        x : [bs, 128]
        b0, a1, a2 : [1, 128]
        y_1, y_2 : [bs, 128]
    """
    return torch.mul(x - x_2, b0) - torch.mul(y_1, a1) - torch.mul(y_2, a2)
I have not tried this version on the backward pass but the forward pass works.
Version B :
For this one, I used the function lfilter from torchaudio. Since the filters are all different, I started by looping over the filters and applying lfilter, which did not work well: it took longer than the previous version and had RAM issues.
Then I modified the function lfilter so it now accepts different filters. It now behaves, performance-wise, like version A.
Here is my version of the filter :
def m_lfilter(
        waveform: torch.Tensor,
        a_coeffs: torch.Tensor,
        b_coeffs: torch.Tensor
) -> torch.Tensor:
    r"""Perform an IIR filter by evaluating the difference equation.
    NB: contrary to the original version, this one does not require normalized input and does not output normalized sequences.
    Args:
        waveform (Tensor): audio waveform of dimension `(..., number_of_filters, time)`.
        a_coeffs (Tensor): denominator coefficients of the difference equation, of dimension `(n_order + 1)`.
                           Lower delay coefficients are first, e.g. `number_of_filters*[a0, a1, a2, ...]`.
                           Must be the same size as b_coeffs (pad with 0's as necessary).
        b_coeffs (Tensor): numerator coefficients of the difference equation, of dimension `(n_order + 1)`.
                           Lower delay coefficients are first, e.g. `number_of_filters*[b0, b1, b2, ...]`.
                           Must be the same size as a_coeffs (pad with 0's as necessary).
    Returns:
        Tensor: Waveform with dimension `(..., number_of_filters, time)`.
    Note:
        The main difference with the original version is that we no longer pack the batches (since we need to apply different filters).
    """
    shape = waveform.size()  # should return [batch_size, number_of_filters, size_of_the_sequence]
    assert (a_coeffs.size(0) == b_coeffs.size(0))
    assert (len(waveform.size()) == 3)
    assert (waveform.device == a_coeffs.device)
    assert (b_coeffs.device == a_coeffs.device)

    device = waveform.device
    dtype = waveform.dtype
    n_channel, n_filters, n_sample = waveform.size()
    n_order = a_coeffs.size(1)
    assert (a_coeffs.size(0) == n_filters)  # number of filters to apply - for each filter k, the coefs are in a_coeffs[k] and b_coeffs[k]
    n_sample_padded = n_sample + n_order - 1
    assert (n_order > 0)

    # Pad the input and create output
    padded_waveform = torch.zeros(n_channel, n_filters, n_sample_padded, dtype=dtype, device=device)
    padded_waveform[:, :, (n_order - 1):] = waveform
    padded_output_waveform = torch.zeros(n_channel, n_filters, n_sample_padded, dtype=dtype, device=device)

    # Set up the coefficients matrix
    # Flip coefficients' order
    a_coeffs_flipped = a_coeffs.flip(1).unsqueeze(0)
    b_coeffs_flipped = b_coeffs.flip(1).t()

    # calculate windowed_input_signal in parallel
    # create indices of original with shape (n_channel, n_order, n_sample)
    window_idxs = torch.arange(n_sample, device=device).unsqueeze(0) + torch.arange(n_order, device=device).unsqueeze(1)
    window_idxs = window_idxs.repeat(n_channel, 1, 1)
    window_idxs += (torch.arange(n_channel, device=device).unsqueeze(-1).unsqueeze(-1) * n_sample_padded)
    window_idxs = window_idxs.long()

    # (n_filters, n_order) matmul (n_channel, n_order, n_sample) -> (n_channel, n_filters, n_sample)
    A = torch.take(padded_waveform, window_idxs).permute(0, 2, 1)  # taking the input coefs
    input_signal_windows = torch.matmul(A, b_coeffs_flipped).permute(1, 0, 2)
    # input_signal_windows size: n_samples x batch_size x n_filters

    for i_sample, o0 in enumerate(input_signal_windows):
        windowed_output_signal = padded_output_waveform[:, :, i_sample:(i_sample + n_order)].clone()  # added clone here for back propagation
        o0.sub_(torch.mul(windowed_output_signal, a_coeffs_flipped).sum(dim=2))
        o0.div_(a_coeffs[:, 0])
        padded_output_waveform[:, :, i_sample + n_order - 1] = o0

    output = padded_output_waveform[:, :, (n_order - 1):]
    return output
As for the forward function:
def forward(self, X):
    # creating filters
    A, B = self.filters()  # A = [[a1_0, a2_0, a3_0], ...], B = [[b1_0, b2_0, b3_0], ...] - size: [128, 3]
    X = X.unsqueeze(1).expand(-1, self.kernels_number, -1).clone()  # expand the input to size [bs, n_filters, n_samples]
    # applying the filters
    X = m_lfilter(X, A, B)
    return X
This method works for the backward pass even if it takes ages to perform (I am working on implementing the TBPTT in parallel to improve these algorithms). |
st182909 | Pre-calculating the non-recurrent term is a good approach. You can use Tensor.unfold to create a (16000-2, 32, 3) feature tensor. You could then also apply a 3x128 map to it (with conv1d or matmul), but most RNN implementations do just that with the input-to-hidden matrix.
Now, the recurrent part is tricky. There are RNN implementations with independent hidden-to-hidden transitions (SRU and IndRNN, among others); they could almost do what you want (i.e. they act as a stack of width-1 RNNs) with some tweaks. But I'm not aware of implementations that look two steps back (maybe it is possible to emulate this somehow, not sure).
PaulC:
Might implementing it in C++ (as is done for PyTorch's own recurrence loop) become necessary to achieve good performance?
And to this I would say yes. I'm sceptical about Python loops with hundreds of steps already: timestep data slices are small, invocation overheads are huge, and the backward graph is a chain of small ops too. Actually, in my experience, such loops with GPU tensors are slower than with CPU tensors. |
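A minimal sketch of the unfold idea from the reply above; the shapes follow the [32, 16000] mini-batch in the question, and the 3->128 linear map is only an illustration of the input-to-hidden step.
import torch

x = torch.randn(32, 16000)                       # mini-batch of raw sequences
windows = x.unfold(dimension=1, size=3, step=1)  # sliding windows -> [32, 15998, 3]

input_map = torch.nn.Linear(3, 128, bias=False)  # maps each window to 128 filter-specific terms
non_recurrent = input_map(windows)               # [32, 15998, 128]

non_recurrent = non_recurrent.permute(1, 0, 2)   # [seq_len, batch, filters] for a step-by-step recurrent loop
print(non_recurrent.shape)                       # torch.Size([15998, 32, 128])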
st182910 | I tried to import default_collate from torch.utils as shown:
from torch.utils.data.dataloader import default_collate
The problem is that default_collate is not found, and I only found _collate_fn_t - are they similar functions? |
st182911 | Hi @Mohamed_Nabih,
I tried the import you showed and it works for me. Which version are you using? Also, I remember I recently imported the same function using: from torch.utils.data._utils.collate import default_collate.
To check whether the import you used pointed at the same function, I ran the following code:
from torch.utils.data.dataloader import default_collate as dc1
from torch.utils.data._utils.collate import default_collate as dc2
print(dc1 is dc2)
# >> True
So yes, both ways import the actual default_collate method. I couldn't find _collate_fn_t, but I think they refer to the same method (*.pyi files seem to be related to annotations, so they're not actual declarations). |
st182912 | I am trying to combine a CNN and an LSTM for audio data.
Let us say the output of my CNN model is torch.Size([8, 1, 10, 10]), which is [B x C_out x Frequency x Time],
and the LSTM requires [L x B x InputSize].
My question is: what is the inputSize in the LSTM, and how should I feed the output of the CNN to the LSTM?
Please help @ptrblck |
st182913 | shakeel608:
My question is what is the inputSize in LSTM
The mentioned inputSize in your shape information would correspond to the “feature” dimension.
Since your CNN output is 4-dimensional, you would have to decide which dimensions are corresponding to the temporal dimensions and which to the features.
Assuming you would like to use C_out and Fequency as the features, you could use:
x = torch.randn(8, 1, 10, 10)
x = x.view(x.size(0), -1, x.size(3)) # [batch_size, features=channels*height, seq_len=width]
x = x.permute(2, 0, 1) # [seq_len, batch_size, features]
and pass it to the RNN.
PS: Please don't tag certain people, as this might discourage others from posting a solution. |
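A small, hedged extension of the reshaping above: the tensor is passed to an nn.LSTM whose input_size matches features = channels * height (the hidden size of 32 is an arbitrary placeholder).
import torch
import torch.nn as nn

x = torch.randn(8, 1, 10, 10)          # [batch, channels, height, width] from the CNN
x = x.view(x.size(0), -1, x.size(3))   # [8, features=1*10, seq_len=10]
x = x.permute(2, 0, 1)                 # [seq_len=10, batch=8, features=10]

rnn = nn.LSTM(input_size=10, hidden_size=32)
output, (h_n, c_n) = rnn(x)
print(output.shape)                    # torch.Size([10, 8, 32])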
st182914 | Thank you @ptrblck, this is what I was looking for.
Suppose that after feeding the input to the CNN model, it produces variable-length outputs like the example below:
CNN Shape==========> torch.Size([2, 128, 5, 28])
CNN Shape==========> torch.Size([2, 128, 9, 28])
Now when I feed this to the LSTM after performing the operations mentioned below
x = x.view(x.size(0), -1, x.size(3)) # [batch_size, features=channels*height, seq_len=width]
x = x.permute(2, 0, 1) # [seq_len, batch_size, features]
it gives the error
input.size(-1) must be equal to input_size.
How is the variable length from the CNN handled before feeding it to the LSTM?
Sure, I will be careful in the future.
Actually your explanations are very clear and to the point and I really enjoy those. |
st182915 | In an RNN the temporal dimension is variable, not the feature dim.
You could use the channels (dim1) as the feature dimension and the height*width as the temporal dimension as a workaround.
However, based on your description you would like to use the width as the time dim: [B X C_out X Frequency X Time ]. |
st182916 | Sorry, you are right - the temporal dimension is variable, as in:
CNN Shape==========> torch.Size([16, 128, 40, 21])
CNN Shape==========> torch.Size([16, 128, 40, 28])
This is the output I get from the CNN.
Now how do we handle this variable length in RNNs (this is on-the-fly training)? |
st182917 | Same as before: make sure to pass the inputs to the RNN as [seq_len, batch_size, features].
If dim3 is now the time dimension and (dim1+dim2) are the features:
x = x.view(x.size(0), -1, x.size(3))
x = x.permute(2, 0, 1) |
st182918 | Hi ptrblck,
During the permutation I had to use (0, 2, 1) to match batch_size, seq_length, out_channels in my case.
For video classification, my variable factor is the batch_size, so by changing batch_size I can control the temporal part of a video. seq_length comes from the previous block as part of the feature vector, which I can't change. I am a little confused here, as the temporal part should be controlled by seq_length, not by the batch_size.
Please let me know. Thank you once again.
Regards,
ananda2020 |
st182919 | While the batch size can vary, it doesn't represent the temporal dimension, but just how many samples you are processing at once. If your seq_length is static, you are not working with a variable temporal dimension.
Make sure to permute the input to the expected dimensions. By default RNNs expect an input of [seq_len, batch_size, features]. With batch_first=True the input should be [batch_size, seq_len, features]. |
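A short illustration of the two input layouts mentioned above; the LSTM sizes here are arbitrary placeholders.
import torch
import torch.nn as nn

seq_len, batch_size, features = 21, 16, 128

rnn_default = nn.LSTM(input_size=features, hidden_size=64)                        # expects [seq_len, batch, features]
rnn_batch_first = nn.LSTM(input_size=features, hidden_size=64, batch_first=True)  # expects [batch, seq_len, features]

out1, _ = rnn_default(torch.randn(seq_len, batch_size, features))
out2, _ = rnn_batch_first(torch.randn(batch_size, seq_len, features))
print(out1.shape, out2.shape)  # torch.Size([21, 16, 64]) torch.Size([16, 21, 64])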
st182920 | I am extracting frames from videos. Then each frame is fed into the CNN to get the features, and the output from the CNN is fed into the LSTM. So how can I change the seq_length?
Thanks in advance.
Regards,
ananda2020 |
st182921 | I assume your CNN creates features in the shape [batch_size, features] and you would like to use the batch size as the temporal dimension, since you made sure that the ordering of the input images is appropriate for the use case.
If that’s the case, just unsqueeze a fake batch dimension in dim1 and pass the outputs to the RNN. |
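A minimal sketch of the fake batch dimension suggested above; the [10 frames, 512 features] CNN output shape is hypothetical.
import torch

frames = torch.randn(10, 512)    # ordered per-frame CNN features: [batch_size=10, features=512]
rnn_input = frames.unsqueeze(1)  # treat frames as time, add a batch dim -> [seq_len=10, batch=1, features=512]
print(rnn_input.shape)           # torch.Size([10, 1, 512])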
st182922 | Thank you ptrblck. Yes, I named the frames in such a way that they are sequenced. Thank you once again.
Regards,
Alakananda |
st182923 | Hi ptrblck,
I was wrong. My dataloader was not taking sequenced data, so I added a sampler to get the sequence.
Thanks for posting the sampler code in another thread.
Regards,
ananda2020 |
st182924 | @ptrblck I have a question regarding the hidden units of an LSTM:
self.lstm = nn.LSTM(
    input_size = 64,
    hidden_size = 128,
    num_layers = 2)
Since the hidden units of the LSTM are fixed at 128, how does it handle variable-length inputs? It is a bit confusing.
Every time the input takes a batch, its input sequence length changes, so how does it handle that? |
st182925 | The hidden size is defining the “feature dimension” of the input and is thus unrelated to the temporal dimension.
This lecture on RNNs gives you a good overview of how these shapes are used. |
st182926 | Hi,
I'm looking at QuartzNet in NeMo and trying to probe some of the internal tensors. I see that I can use evaluated_tensors = neural_factory.infer(tensors=[a,b,c]) to run inference and return the evaluations of a, b, c, but I can't figure out how to get a list of the intermediate activation tensors, or to get a pointer to one. I'm looking for a method of either the neural_factory or the individual models (like jasper_encoder) that would return a list of tensors that I can pass to infer(). Any ideas?
Thanks
Edit: I know infer() and the neural_factory object are NeMo, so maybe out of scope for this board, but the underlying model is based on a PyTorch module, so I’m hoping a general PyTorch method for getting internal activations will be useful here. Here’s the inheritance tree for that encoder model.
[nemo.collections.asr.jasper.JasperEncoder,
nemo.backends.pytorch.nm.TrainableNM,
nemo.core.neural_modules.NeuralModule,
abc.ABC,
torch.nn.modules.module.Module,
object] |
st182927 | Solved by ptrblck in post #2. |
st182928 | For a general PyTorch model you could use forward hooks to get the intermediate activations as described here. Since NeMo seems to be using a PyTorch model internally, you would have to access its layers to register the hooks. |
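A minimal sketch of the forward-hook pattern referred to above; the small Sequential model and the 'first_conv' key are placeholders standing in for the PyTorch module inside NeMo.
import torch
import torch.nn as nn

activation = {}

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()  # store the hooked module's output under the given key
    return hook

model = nn.Sequential(nn.Conv1d(1, 8, 3), nn.ReLU(), nn.Conv1d(8, 8, 3))
model[0].register_forward_hook(get_activation('first_conv'))

out = model(torch.randn(2, 1, 100))
print(activation['first_conv'].shape)  # torch.Size([2, 8, 98])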
st182929 | Thanks ptrblck! That worked. It takes a little poking around to figure out the class structure, but not too much. For example, I was able to use your example and this line:
encoder.encoder[1].mconv[0].conv.register_forward_hook(get_activation('B1(a).mconv.1D'))
to capture the output of the 1D convolution in the first block of a QuartzNet encoder model.
Again, thanks for the help. |
st182930 | After messing with this for a while, I wanted to add one caveat that I encountered. Some layer objects, like ReLU, are re-used throughout the network. I guess it's any layer without parameters, but I'm not sure. The result is that if you put a hook on a ReLU layer, like encoder.encoder[1].mout[0].register_forward_hook(get_activation('B1.mout.relu')) in QuartzNet, it gets called for every ReLU in the whole model, not just the one you wanted. So the final result in the dictionary is actually the ReLU output for the final model output, not the ReLU output associated with the layer where you registered the hook. |
st182931 | Hi everyone,
I have a specific thing I want to achieve, and was wondering if it’s already possible using the current Dataset/DataLoader implementation. I saw a PR for a ChunkDataset API that may serve my needs, but it isn’t there yet.
Data: Audio, with lots of small (1-10s) sound files.
I want to process this audio in terms of frames, but also incorporate a hop parameter (ie. take the first 1024 samples, then the next frame will start at 256 instead of 1024)
What I want to do is concatenate all the short audio examples into long .wav files, of which two can fit into memory. I wrote code to index individual sounds and their respective frames and it works well.
The idea is to serve frames from one (long, e.g. 1GB) .wav file, and have another one loaded in the background. When all frames from the first file have been served, I replace the “current” file with the one that was loaded in the background, and load a new file.
Everything works, except for the fact that the IO when loading a new file will block the __getitem__ call, interrupting training. I was thinking of some async IO structure, but I lack experience in getting it to interoperate with the Dataset/DataLoader classes.
How can I do a non-blocking IO call to replace the current "buffered" file while continuing to serve frames? |
st182932 | Solved by vincentqb in post #2. |
st182933 | One way to manage async iterators is to use a background iterator to prefetch ahead of time. But the exact setup depends on what you are trying to do, of course.
If you start with lots of small files (for a total of 1 GB), then you could create a dataset that reads them on __getitem__. You could then cache the dataset in memory after loading, using a cache. And/or you could use a background iterator to prefetch files ahead of time. The downside is lots of random disk seeks, but only on first read.
If you have a single large (1 GB) data file with offsets, you could pay the price of loading it once in memory, and then the dataset simply knows about the offsets on __getitem__. To load the 1 GB asynchronously, you would need to read per block as an iterator, and you could use the background iterator to push that into the background. You could still use the cache to keep the data in memory. The benefit would be a faster start at the beginning, since you don't wait for the whole file to be loaded.
Side note: You could also create a virtual RAM disk and copy the file(s) there once. Then everything after is done from RAM, so you could read lots of small files from there, or one big one. Reads are then fast. |
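A hedged sketch of the background-iterator idea; the import path torchaudio.datasets.utils.bg_iterator and its (iterable, maxsize) arguments are assumptions to verify against your installed torchaudio version, and the file loading is simulated.
import time
import torch
from torchaudio.datasets.utils import bg_iterator  # location assumed for torchaudio ~0.5

def load_big_files(paths):
    for p in paths:
        time.sleep(0.1)               # simulate slow disk IO for a ~1 GB wav file
        yield p, torch.randn(16000)   # placeholder tensor instead of the real audio

paths = ["file_%d.wav" % i for i in range(5)]

# maxsize=1 keeps exactly one item prefetched in a background thread, so the next
# file loads while frames from the current one are being served.
for name, data in bg_iterator(load_big_files(paths), maxsize=1):
    print(name, data.shape)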
st182934 | Thanks for the reply!
I want to pay the price of loading the file once in memory and using the offsets (which I already have done), but the use case requires many of these large files, ie. the dataset could consist of 100 files of 1GB each. So what I imagined was always loading the next 1GB while the current 1GB is being served.
I don’t mind the penalty of waiting to load the first file, as long as subsequent loads do not affect training times - I want to make sure the GPU is utilised fully.
I will have a look at the bg_iterator, which may work!
edit: How would I have found audio/utils without going through the source/getting this recommendation?
Is there perhaps some documentation I missed?
edit 2: bg_iterator did it! Fairly simple too, just had to create a generator for all big files and specify the generator with maxsize=1 so it will buffer the next item. All my other logic still works, generating the correct frames/batches. Thanks! |
st182935 | Thanks for pointing out that the torchaudio documentation needs to be updated to highlight bg_iterator.
Created an issue to track that. |
st182936 | I want to compute kaldi pitch features at the end of a network. Is there a way to compute gradients of the kaldi pitch feature extraction? (Maybe something similar to spectrogram of torchaudio).
Thanks! |
st182937 | Hi,
I'm not familiar with kaldi pitch, how do you compute it? If you use PyTorch constructs, the autograd should work just fine. |
st182938 | The original Kaldi library is written in C and usually called using .sh files. I do not know of a Pytorch function that implements it, although there are python envelopes such as pykaldi. |
st182939 | If there is no PyTorch implementation, I'm afraid you'll have to do one of the following:
reimplement a new version with PyTorch operators to use the autograd
write a custom Function (https://pytorch.org/docs/stable/notes/extending.html) that uses the original library in the forward and for which you specify the backward yourself. |
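A minimal sketch of the custom-Function route mentioned above; the forward here is a dummy stand-in for a call into an external, non-differentiable pitch extractor, and the backward is whatever gradient approximation you decide to specify yourself.
import torch

class ExternalPitch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, waveform):
        ctx.save_for_backward(waveform)
        # here you would call e.g. pykaldi / the Kaldi binaries and wrap the result in a tensor
        return waveform.abs().mean(dim=-1, keepdim=True)  # dummy stand-in computation

    @staticmethod
    def backward(ctx, grad_output):
        (waveform,) = ctx.saved_tensors
        # you must define this yourself, e.g. a surrogate or finite-difference gradient
        return grad_output * torch.sign(waveform) / waveform.size(-1)

x = torch.randn(4, 16000, requires_grad=True)
ExternalPitch.apply(x).sum().backward()
print(x.grad.shape)  # torch.Size([4, 16000])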
st182940 | I need to send some audio, which I generate with a torch-based text-to-speech model, as JSON. I cannot find a way to turn it into base64 without the following awful code:
torchaudio.save('C:/Work/test_images/temp.wav', wav_array, sampling_rate)
wav_file = open('C:/Work/test_images/temp.wav', 'rb')
Any ideas how to clean this up? There doesn't appear to be any built-in function in torch to do this, but I'm fairly new, so I would appreciate any help. |
st182941 | I'm not deeply familiar with torchaudio, but would it be possible to use an io.BytesIO object for temporary storage instead of a file? |
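A hedged sketch of the io.BytesIO idea; whether torchaudio.save accepts a file-like object depends on the torchaudio version and backend (newer releases may also need a format argument), so treat this as something to verify rather than a guaranteed API.
import base64
import io
import torch
import torchaudio

wav_array = torch.zeros(1, 16000)   # placeholder waveform
sampling_rate = 16000

buffer = io.BytesIO()
torchaudio.save(buffer, wav_array, sampling_rate)  # write in memory instead of a temp .wav on disk (verify for your version)
buffer.seek(0)
wav_b64 = base64.b64encode(buffer.read()).decode("ascii")  # string that can go into JSON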
st182942 | I am working with data where I spend a lot of time converting 1D signals to 2D spectrograms that are then fed to a CNN in PyTorch. I have a 2nd GPU and wanted to know whether it is reasonable to accelerate my FFT/spectrogram workflow through PyTorch on that GPU instead of what I am currently doing in my dataloader.
What I am doing now:
-In my custom dataset (using the torch.utils.data Dataset class) I initialize and build my dataset of time-series audio data before training begins (fast). Then inside my __getitem__ method I do a few things to the time-series data before passing the sample to a custom FFT function that builds the spectrograms the way I need and returns the spectrogram as the data sample.
-My spectrogram function is custom built for various things I need, but the main workhorse is built around the np.fft.fft() function, which is what I need to accelerate.
-I am using the "num_workers" kwarg in the PyTorch dataloader to better use my cores.
What I would like to do:
Either in my __getitem__ or elsewhere, I'd like to send the data to my 2nd GPU (GTX 980 Ti) in the hope of generating the spectrograms faster, and then send them to my main GPU to pass the data through the model.
Is this reasonable? Can I expect to see a speedup compared to a Ryzen 3950X while utilizing the "num_workers" kwarg in the dataloader? How would I do this? I'm open to ideas for other libraries and tools outside of PyTorch, but I am not sure where to start and am seeking insight from this community. |
st182943 | Hi,
I think the best way to speed this up would be to move it to preprocessing.
Have a separate script that converts your audio data to spectrograms and saves them to disk.
Then your dataloader in the training script will just load the spectrograms directly. |
st182944 | Inconveniently, this is not possible because I have certain transformations that are applied to my time-series data before I generate a spectrogram. It would also eliminate any data augmentation that I have available in the time-series domain. |
st182945 | Hi,
You can indeed operate on the main thread and process that on a second GPU.
Even though I didn't do proper profiling, from my experience it may be worse to pre-process them to disk, as the dimensionality is usually much higher. For example, in my case a waveform of 16k elements becomes a 512x256x2 spectrogram.
Anyway, I only recommend this if the main workload is the STFT; if you have additional heavy preprocessing, multiprocessing may be better. |
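A hedged sketch of computing spectrograms on a second GPU before handing them to the training GPU; the device names and STFT parameters are assumptions about your setup.
import torch
import torchaudio

spec = torchaudio.transforms.Spectrogram(n_fft=1024, hop_length=256).to("cuda:1")

waveform = torch.randn(8, 16000)            # a batch of time-series signals from the dataloader
spectrogram = spec(waveform.to("cuda:1"))   # FFT work happens on the second GPU
spectrogram = spectrogram.to("cuda:0")      # hand off to the GPU that runs the model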
st182946 | While installing torchaudio with conda install torchaudio -c pytorch, anaconda downgrades torchvision (0.6.0 to 0.2.2) and pytorch (1.5 to 1.4). How can I avoid this? |
st182947 | Hi,
Thanks for the report. Which OS are you using?
It is torchaudio 0.5 that you are installing, right?
If not, make sure that your conda is up to date, as an outdated conda sometimes prevents you from installing the latest version of libraries. |
st182948 | I can’t reproduce on MacOS
❯ conda update -n base -c defaults conda
❯ conda create -n 150and050 python=3.8
❯ conda activate 150and050
❯ conda install -c pytorch pytorch
❯ conda install -c pytorch torchaudio
❯ python -c "import torch; print(torch.__version__); import torchaudio; print(torchaudio.__version__);"
1.5.0
0.5.0a0+3305d5c |
st182949 | I can reproduce on linux:
❯ conda update -n base -c defaults conda
❯ conda create -n 150and050 python=3.8
❯ conda activate 150and050
❯ conda install -c pytorch pytorch
❯ python -c "import torch; print(torch.__version__);"
ModuleNotFoundError: No module named 'torch'
❯ conda install -c pytorch torchaudio
installs 0.4.0 and downgrades to 1.4.0
❯ python -c "import torch; print(torch.__version__);"
ModuleNotFoundError: No module named 'torch'
❯ python -c "import torchaudio; print(torchaudio.__version__);"
ModuleNotFoundError: No module named 'torchaudio' |
st182950 | I am also getting pytorch 1.4 and torchvision 0.6 on linux anaconda. My environment is as follows:
name: Pytorch-gpu
dependencies:
- python=3.7
- ipywidgets
- jupyterlab
- pytorch::pytorch
- pytorch::torchvision |
st182951 | Thanks so much. This is exactly what is happening to me. I’m on Debian GNU/Linux 9.4 |
st182952 | Hi,
We updated all the linux binaries now. So all should work fine. Can you double check that it works on your side? |
st182953 | Okay, I will try it and let you know. I have been using the nightly version until now, owing to the fact that when I installed torchvision this morning, it didn't come with the deeplabv3_resnet50 weights.
Are the deeplabv3_resnet50 weights in torchvision 0.6? |
st182954 | Torchvision 0.6.0a0+82fd1c8 is getting installed from anaconda, which is a prerelease |
st182955 | If you install from the pytorch channel, you should get the release: https://anaconda.org/pytorch/torchaudio |
st182956 | I was installing from the official pytorch anaconda channel. I decided to switch to pip and now it is working! |
st182957 | Hi, I'm trying to participate in DCASE 2020 Task 4, sound event detection.
While modifying the baseline code, I found augmentation in the training process.
As you know, synthetic data has an event label and onset/offset times.
So, when I augment data with a time transform, I should adjust the onset and offset labels to fit this.
I'm stuck here.
So my questions are:
When data has been augmented and there are two data items in a tuple, how do I modify the labels?
When a tuple has multiple data features, how does PyTorch link them with the labels?
(e.g. data in tuple: 4, labels: 2)
Thank you for your help in advance. |
st182958 | I have trained a seq2seq model on some synthetic data, but am having trouble finetuning on a very small set of data in another domain. I have trained my seq2seq model on around 15k datapoints and my small dataset has around 50 datapoints. |
st182959 | I am creating a model for music generation, but my problem is that for some reason the model predicts the current step, and not the next step as the label says, even though I compute the loss between the model output and the labels, and the labels are shifted forward by 1:
Sequence - model input
Label - sequence shifted 1 step forward
Output - model output
And if I feed in the label as input, it "predicts" the label, so the model is basically repeating the input.
Dataset code:
class h5FileDataset(Dataset):
    def __init__(self, h5dir, seq_length):
        self.h5dir = h5dir
        self.seq_length = seq_length + 1
        with h5py.File(h5dir, 'r') as datafile:
            self.length = len(datafile['audio']) // self.seq_length

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        with h5py.File(self.h5dir, 'r') as datafile:
            seq = datafile["audio"][idx*self.seq_length:idx*self.seq_length+self.seq_length]
        feature = seq[0:len(seq)-1].astype('float32')  # from 0 to second-to-last element
        label = seq[1:len(seq)].astype('float32')      # from 1 to last element
        return feature, label
Model code:
class old_network(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=1, output_size=1, seq_length_=1, batch_size_=128):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        self.batch_size = batch_size_
        self.seq_length = seq_length_
        self.lstm = nn.LSTM(input_size, hidden_layer_size, batch_first=False, num_layers=2)
        self.linear1 = nn.Linear(hidden_layer_size, output_size)
        self.linear2 = nn.Linear(hidden_layer_size, output_size)
        #self.tanh1 = nn.Tanh()
        self.tanh2 = nn.Tanh()

    def forward(self, input_seq):
        lstm_out, _ = self.lstm(input_seq)
        lstm_out = lstm_out.reshape(lstm_out.size(1), lstm_out.size(0), 1)  # reshape to batch,seq,feature
        predictions = self.linear1(lstm_out)
        #predictions2 = self.tanh1(predictions)
        predictions = self.linear2(predictions)
        predictions = self.tanh2(predictions)
        return predictions.reshape(predictions.shape[1], predictions.shape[0], 1)  # reshape to seq,batch,feature to match labels shape
Training loop:
epochs = 10
batches = len(train_data_loader)
losses = [[], []]
eval_iter = iter(eval_data_loader)
print("Starting training...")
try:
    for epoch in range(epochs):
        batch = 1
        for seq, labels in train_data_loader:
            start = time.time()
            seq = seq.reshape(seq_length, batch_size, 1).to(DEVICE)
            labels = labels.reshape(seq_length, batch_size, 1).to(DEVICE)
            optimizer.zero_grad()
            y_pred = model(seq)
            loss = loss_function(y_pred, labels)
            loss.backward()
            optimizer.step()
            try:
                eval_seq, eval_labels = next(eval_iter)
            except StopIteration:
                eval_iter = iter(eval_data_loader)
                eval_seq, eval_labels = next(eval_iter)
            eval_seq = eval_seq.reshape(seq_length, batch_size, 1).to(DEVICE)
            eval_labels = eval_labels.reshape(seq_length, batch_size, 1).to(DEVICE)
            eval_y_pred = model(eval_seq)
            eval_loss = loss_function(eval_y_pred, eval_labels)
            losses[1].append(eval_loss.item())
            losses[0].append(loss.item())
            print_inline("Batch {}/{} Time/batch: {:.4f}, Loss: {:.4f} Loss_eval: {:.4f}".format(batch, batches, time.time()-start, loss.item(), eval_loss.item()))
            batch += 1
            if batch % 50 == 0:
                print("\n Epoch: {}/{} Batch:{} Loss_train:{:.4f} Loss_eval: {:.4f}".format(epoch, epochs, batch, loss.item(), eval_loss.item()))
                plt.close()
                plt.plot(range(0, len(losses[0])), losses[0], label="Learning dataset")
                plt.plot(range(0, len(losses[1])), losses[1], label="Evaluation dataset")
                plt.legend()
                plt.show()
                torch.save({'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict()}, save_dir)
except KeyboardInterrupt:
    plt.close()
    plt.plot(range(0, len(losses[0])), losses[0], label="Learning dataset")
    plt.plot(range(0, len(losses[1])), losses[1], label="Evaluation dataset")
    plt.legend()
    plt.show()
I am kinda running out of ideas by this point, not sure what is wrong |
st182960 | No real ideas, just some comments:
(1) I would never use reshape() or view() to adjust the tensor shape. I've seen too many cases where this was used incorrectly and broke the tensor. Just because the shape is correct and the network doesn't throw an error doesn't mean the tensor is correct. If possible, I always use transpose() or permute(), since in almost all cases I only need to swap dimensions. For example, instead of
seq = seq.reshape(seq_length,batch_size,1).to(DEVICE)
I would do
seq = seq.transpose(1,0).to(DEVICE)
or
seq = seq.permute(1,0,2).to(DEVICE)
This ensures that dimensions are only swapped but never “torn apart” which can happen with reshape() or view(). The latter are mostly needed to maybe (un-)flatten tensors, but that’s not needed here.
(2) I’m also not quite sure about
predictions = self.linear1(lstm_out)
since the shape of lstm_out is (batch_size, seq_len, features). I know that nn.Linear takes an input of shape (N, ∗, H_in), but I'm not sure you really want to go that way. Usually the last hidden state is used for prediction. So I would try:
lstm_out, (h, c) = self.lstm(input_seq)
predictions = self.linear1(h[-1])
h[-1] is the last layer of the last hidden state. |
st182961 | This is a classic result of using an LSTM for time series analysis. The LSTM is simply using the hidden state to relay back an earlier input without actually learning any patterns. In order to trick the LSTM into learning patterns, you can do the following (see the example below):
Reduce the step size
Increase the hidden-dimension size
%matplotlib inline
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from torch.autograd import Variable

class LSTMSimple(nn.Module):
    def __init__(self, inputDim, hiddenDim, batchSize, outputDim):
        super(LSTMSimple, self).__init__()
        torch.manual_seed(1)
        self.lstm = nn.LSTM(inputDim, hiddenDim, 1).cuda()
        # Hidden state is a tuple of two states, so we will have to initialize two tuples
        self.state_h = torch.randn(1, batchSize, hiddenDim).cuda()
        self.state_c = torch.rand(1, batchSize, hiddenDim).cuda()
        self.linearModel = nn.Linear(hiddenDim, outputDim).cuda()

    def forward(self, inputs):
        # LSTM
        output, self.hidden = self.lstm(inputs, (self.state_h, self.state_c))
        self.state_h = self.state_h.detach()
        self.state_c = self.state_c.detach()
        # LINEAR MODEL
        output = self.linearModel(output).cuda()
        return output

def lossCalc(x, y):
    return torch.sum(torch.add(x, -y))

# Model Object
batchSize = 5
inputDim = 1
outputDim = 1
stepSize = 5
hiddenDim = 20
model = LSTMSimple(inputDim, hiddenDim, batchSize, outputDim).cuda()
loss = torch.nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.00001)

# Input Data
dataInput = np.random.randn(stepSize*batchSize, inputDim)
dataY = np.insert(dataInput[1:], len(dataInput)-2, 0)
dataInput = Variable(torch.from_numpy(dataInput.reshape(stepSize, batchSize, inputDim).astype(np.float32))).cuda()
dataY = Variable(torch.from_numpy(dataY.reshape(stepSize, batchSize, inputDim).astype(np.float32))).cuda()

for epoch in range(10000):
    optimizer.zero_grad()
    dataOutput = model(dataInput).cuda()
    curLoss = loss(dataOutput.view(batchSize*stepSize, outputDim), dataY.view(batchSize*stepSize, outputDim))
    curLoss.backward()
    optimizer.step()
    if (epoch % 1000 == 0):
        print("For epoch {}, the loss is {}".format(epoch, curLoss))

plt.plot(dataInput.cpu().detach().numpy().reshape(-1), color="red")
plt.plot(dataOutput.cpu().detach().numpy().reshape(-1), color="orange")
plt.plot(dataY.cpu().detach().numpy().reshape(-1), color="green")
plt.figure() |
st182962 | What do you mean exactly by reducing step size? Reducing input and output sequence lengths? |
st182963 | Tried increasing the hidden dim to 100 and reducing seq_length to 250 - it still follows the input.
Loss graph: |
st182964 | Oh, I thought that LSTMs need long sequences, especially in things like music, to capture all the long-term dependencies. |
st182965 | I trained a PyTorch model for speech recognition and I want to save the output of the model as an .ark file.
Can anyone help me with this?
Thanks |
st182966 | Not really PyTorch related, it seems?
Are you referring to this? https://fileinfo.com/extension/ark
If so… it's a pretty arcane format; I'm pretty sure people switched to .zip and .gzip a few decades ago. If you need it for some reason you can use arc (https://sourceforge.net/projects/arc/), at least on linux (but it should work on OSX and Windows as well, I'd assume).
Just output whatever you have to a text file and then use the arc tool to compress it. There's also a python library for it, seemingly: https://pypi.org/project/arc/, but I'm not sure if it actually works. |
st182967 | So I need to save my network posteriors in any format; can you give me some help with this? |
st182968 | I am currently trying to install torchaudio for conda to train an RNN on audio, but I can't install it. I used the command conda install -c pytorch torchaudio, and also downloaded all of the required libraries, but when I try to install it, it says PackagesNotFoundError: The following packages are not available from current channels: torchaudio. Why does this happen, and how can I install torchaudio successfully? |
st182969 | Hi,
Which python version are you using?
Also have you tried using pip or installing the nightly builds? |
st182970 | I am using Python 3.7, and yes, I have tried to use pip and install the nightly builds and it didn’t work. |
st182971 | This is surprising indeed, have you tried in an empty conda environment? You might have conflicting packages?
@vincentqb is there something special about the packages? |
st182972 | I have found out that torchaudio is not compatible with Windows 10, which I am using. Thank you for the help, anyway. |
st182973 | What platform are you using? torchaudio is currently available for linux and macos. Windows support is in progress, see this. |
st182974 | I am getting this error in the last batches while training on a large speech dataset. I tried reducing the batch size (16, 8, 4, 2), but every time I got this error at the end of the epoch.
Can someone give me a solution?
99%|█████████▉| 11706/11812 [29:32<01:08, 1.55it/s, avg_loss=tensor(76.1413, device='cuda:0'), iter=11705, loss=tensor(117.4730, device='cuda:0')]
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      3 start = time.time()
      4
----> 5 run_state = run_epoch(model, optimizer, train_ldr, *run_state)
      6
      7 msg = "Epoch {} completed in {:.2f} (s)."

~/Hasan/Project/SpeechRNNT/speech-master/train.py in run_epoch(model, optimizer, train_ldr, it, avg_loss)
     28     optimizer.zero_grad()
     29     loss = model.loss(batch)
---> 30     loss.backward()
     31
     32     grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 200)

~/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
     91             products. Defaults to False.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94
     95     def register_hook(self, hook):

~/miniconda3/envs/ariyan/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91
     92

~/Hasan/Project/SpeechRNNT/speech-master/transducer/functions/transducer.py in backward(self, *args)
     78         grads = parent.backward(*args)[0]
     79         if self.size_average:
---> 80             grads = grads / grads.shape[0]
     81         return grads, None, None, None
     82

RuntimeError: CUDA error: out of memory |
st182975 | Hi,
I'm wondering, would you mind sharing your implementation of the transducer loss? Is it native Python code or did you use C/C++ extensions?
Best, |
st182976 | Dear experts, when will torchaudio be compatible with Windows 10? I really need it. Thank you very much! |
st182977 | Hi,
I am applying the torchaudio spectrogram to a 512-sample audio clip.
An example below:
waveform = 512 samples
specgram = torchaudio.transforms.Spectrogram(hop_length=64)(waveform)
I thought this would generate 512/64 = 8 hops, so 8 STFT frames for the spectrogram; however, it generates 9.
waveform = 512 samples
specgram = torchaudio.transforms.Spectrogram(hop_length=128)(waveform)
I thought this would generate 512/128 = 4 STFT frames for the spectrogram; however, it generates 5.
I guess it may start at -hop_length/2 and finish at 512 + hop_length/2.
Pad is set to zero by default. If it is set to 32, it increases the number of STFT frames by 1, as expected. |
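A small check of the frame counts described above: the extra frame comes from the centered STFT used under the hood, where the frame count is n_samples // hop_length + 1. The exact defaults may differ between torchaudio versions, so verify against yours.
import torch
import torchaudio

waveform = torch.randn(1, 512)
for hop in (64, 128):
    spec = torchaudio.transforms.Spectrogram(hop_length=hop)(waveform)
    print(hop, spec.shape[-1], 512 // hop + 1)  # 64 -> 9 frames, 128 -> 5 frames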
st182978 | I have a problem with multi-class classification. I built this model, but the test accuracy is constant at 4.
input_size = 13
hidden1_size = 1024
hidden2_size = 1024
hidden3_size = 1024
hidden4_size = 1024
hidden5_size = 1024
output_size = 1976
class DNN(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size, output_size):
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.sig1 = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.sig2 = nn.Sigmoid()
        self.fc3 = nn.Linear(hidden2_size, hidden3_size)
        self.sig3 = nn.Sigmoid()
        self.fc4 = nn.Linear(hidden3_size, hidden4_size)
        self.sig4 = nn.Sigmoid()
        self.fc5 = nn.Linear(hidden4_size, hidden5_size)
        self.sig5 = nn.Sigmoid()
        self.fc6 = nn.Linear(hidden5_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.sig1(out)
        out = self.fc2(out)
        out = self.sig2(out)
        out = self.fc3(out)
        out = self.sig3(out)
        out = self.fc4(out)
        out = self.sig4(out)
        out = self.fc5(out)
        out = self.sig5(out)
        out = self.fc6(out)
        return out
model = DNN(input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size,
output_size)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.008
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range(1, 50):
    for i, (X_train, y_train) in enumerate(train_loader):
        model.train()
        optimizer.zero_grad()
        outputs = model(Variable(X_train))
        loss = criterion(outputs, Variable(y_train))
        print('Iter %d/%d --> loss %f' % (i, len(train_loader), loss.item()))
        loss.backward()
        optimizer.step()
    correct = 0
    total = 0
    print('test')
    for X_test, y_test in test_loader:
        model.eval()
        out = model(Variable(X_test)).detach()
        pred = out.max(dim=1)[1]  # .argmax(dim=1, keepdim=True)
        total += y_test.size(0)
        correct += (pred.squeeze() == y_test).sum()  # pred.eq(y_test.view_as(pred)).sum().item()
    accuracy = 100 * correct / total
    print('epoch: {}. Accuracy: {}'.format(epoch, accuracy)) |
st182979 | Try to overfit a small data sample (e.g. just 10 samples) to make sure you don't have any hidden bugs in your code and that your model architecture works for this problem.
From my past experience I would claim that ReLU activation functions might work better than sigmoids, so you could play around with the architecture and some hyperparameters. |
st182980 | @ptrblck
The problem is that the network always predicts the most frequent class in the labels.
Do you have any ideas on how to tackle this problem? |
st182981 | In addition to using ReLU as the activation, you should also add some dropout. You can try adding a weight to CrossEntropyLoss to reduce the overfitting caused by imbalanced data. |
st182982 | @G.M
Thanks a lot, but can you give me an example of how to add weights to the BCE loss? |
st182983 | Aren't you using CrossEntropyLoss? I think you should use cross entropy for multi-class classification.
For the ordinary BCELoss, according to here, the weight is just a value that is multiplied with each element in a batch, so the shape of weight must equal the shape of a single batch.
>>> import torch as tc
>>> from torch import nn
>>> bsz = 10
>>> loss0 = nn.BCELoss(weight = tc.full([bsz], 0.5)) # "weight" can contain values of any number.
>>> loss1 = nn.BCELoss()
>>> inp, tar = tc.zeros(bsz), tc.ones(bsz)
>>> loss0(inp, tar)
tensor(13.8155)
>>> loss1(inp, tar)
tensor(27.6310) |
st182984 | @G.M
Sorry for the typo, I already use CrossEntropyLoss, so can you adapt the example to CrossEntropyLoss? |
st182985 | That's ok. For CrossEntropyLoss it's straightforward: provide a tensor of shape [C] (C is the number of classes, and the class ids range over [0, C)). Each value represents the weight of the corresponding class, and the weights should be positive. For example:
from torch import nn
import torch as tc
num_cls = 100
weights = tc.rand([num_cls])
loss = nn.CrossEntropyLoss(weight = weights) |
st182986 | @G.M
Thanks a lot. Should this weight be connected to any layers of the model, or just used as you showed me?
Another thing: to accelerate my model, I use the ReLU activation function and dropout layers and increase the hidden layers. This makes the loss decrease and the accuracy increase, but slowly. Do you have any ideas on how I can speed this up? |
st182987 | Usually, the weight of a class is something like the inverse of the frequency of the class.
I think it is slow because dropout makes the model converge more slowly; my suggestion is to remove some Linear layers. Removing some layers can accelerate the training process, reduce memory usage, and reduce over-fitting. Currently you have 6 Linear layers; 2 or 3 should be enough. |
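A hedged sketch of inverse-frequency class weights for nn.CrossEntropyLoss; `labels` here stands in for your real training targets, and the rescaling step is optional.
import torch

num_cls = 1976
labels = torch.randint(0, num_cls, (100000,))        # placeholder for your training targets

counts = torch.bincount(labels, minlength=num_cls).float()
weights = 1.0 / counts.clamp(min=1)                  # inverse frequency, guarding against empty classes
weights = weights / weights.sum() * num_cls          # optional: rescale so the weights average to 1

criterion = torch.nn.CrossEntropyLoss(weight=weights)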
st182988 | First, this is my Dataloader
X_train, X_test, y_train, y_test = train_test_split(feat, labels, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1)
train = data_utils.TensorDataset(X_train, y_train)
train_loader = data_utils.DataLoader(train, batch_size=1000, shuffle=True)
test = data_utils.TensorDataset(X_test, y_test)
test_loader = data_utils.DataLoader(test, batch_size=1000, shuffle=False)
input_size = 13
hidden1_size = 13
hidden2_size = 64
hidden3_size = 128
hidden4_size = 256
hidden5_size = 1024
output_size = 3989
class DNN(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size, output_size):
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)
        self.drp1 = nn.Dropout(p=0.2, inplace=False)
        self.relu1 = nn.ReLU()
        self.tan1 = nn.Tanh()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)
        self.drp2 = nn.Dropout(p=0.2, inplace=False)
        self.relu2 = nn.ReLU()
        self.tan2 = nn.Tanh()
        self.fc3 = nn.Linear(hidden2_size, hidden3_size)
        self.drp3 = nn.Dropout(p=0.2, inplace=False)
        self.relu3 = nn.ReLU()
        self.tan3 = nn.Tanh()
        self.fc4 = nn.Linear(hidden3_size, hidden4_size)
        self.drp4 = nn.Dropout(p=0.2, inplace=False)
        self.relu4 = nn.ReLU()
        self.tan4 = nn.Tanh()
        self.fc5 = nn.Linear(hidden4_size, hidden5_size)
        self.drp5 = nn.Dropout(p=0.2, inplace=False)
        self.relu5 = nn.ReLU()
        self.tan5 = nn.Tanh()
        self.fc6 = nn.Linear(hidden5_size, output_size)
        self.tan6 = nn.Tanh()

    def forward(self, x):
        out = self.fc1(x)
        out = self.drp1(out)
        out = self.relu1(out)
        out = self.tan1(out)
        out = self.fc2(out)
        out = self.drp2(out)
        out = self.relu2(out)
        out = self.tan2(out)
        out = self.fc3(out)
        out = self.drp3(out)
        out = self.relu3(out)
        out = self.tan3(out)
        out = self.fc4(out)
        out = self.drp4(out)
        out = self.relu4(out)
        out = self.tan4(out)
        out = self.fc5(out)
        out = self.drp5(out)
        out = self.relu5(out)
        out = self.tan5(out)
        out = self.fc6(out)
        out = self.tan6(out)
        return out
batch_size = 10
n_iterations = 50
no_eps = n_iterations / (13 / batch_size)
no_epochs = int(no_eps)
model = DNN(input_size, hidden1_size, hidden2_size, hidden3_size, hidden4_size, hidden5_size, output_size)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.0001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
iter = 0
for epoch in range(no_epochs):
    for i, (X_train, y_train) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(Variable(X_train))
        loss = criterion(outputs, Variable(y_train))
        print('Iter %d --> loss %f' % (i, loss.item()))
        loss.backward()
        optimizer.step()
    correct = 0
    total = 0
    print('test')
    for X_test, y_test in test_loader:
        outputs = model(Variable(X_test))
        pred = outputs.argmax(dim=1, keepdim=True)
        total += y_test.size(0)
        correct += (pred.squeeze() == y_test).sum()  # pred.eq(y_test.view_as(pred)).sum().item()
    accuracy = 100 * correct / total
    print('Iteration: {}. Accuracy: {}'.format(epoch, accuracy)) |
st182989 | You shouldn't use two activation functions (here ReLU and Tanh). Also, if it is a classifier, you may use Sigmoid in the final layer instead of Tanh. And Variable is deprecated; pass the tensor directly instead. |
st182990 | CrossEntropyLoss means you do not need an activation layer at the end of your network. Also try the Adam optimizer if SGD is not working, and, as @simaiden mentioned, two activations are not needed. |
st182991 | Hello, I need help.
I have a torch tensor of size [1124823 x 13], and from the center of the tensor I want to take five frames from the right and five frames from the left and concatenate them. Is there any function to do this? |
st182992 | Hi,
You can use things like t[base-5:base+5] where base is whatever you call the center of your Tensor |
st182993 | t would be your Tensor of size [1124823 x 13].
And base is the index (as a python number) of the center. So something like base = t.size(0) // 2. |
st182994 | Thanks albanD.
But can I ask: what if I want to go to the center of each row of the tensor and take five elements from the left and five from the right?
i = 0
j = 6
base = feat.szie(0)//2
for i in feat[i, j]:
    x = feat[base - 5: base:+5]
    i += 1
st182995 | If you want to take from the second dimension, you can use the indexing syntax like: x = feat[:, base-5:base+5]. Or you can use the specialized function to get a subset: x = feat.narrow(1, base - 5, 10). The two will give exactly the same result. |
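A quick check of the two equivalent forms above on a toy [4, 13] tensor; note that base is taken from size(1) here because the slice is over the second dimension.
import torch

feat = torch.arange(4 * 13).reshape(4, 13)
base = feat.size(1) // 2              # center of each row (the second dimension)

a = feat[:, base - 5:base + 5]        # indexing syntax
b = feat.narrow(1, base - 5, 10)      # same subset via narrow
print(torch.equal(a, b))              # True
print(a.shape)                        # torch.Size([4, 10])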
st182996 | Not exactly. I want to go to the center of each row and take five numbers from the left and five numbers from the right. |
st182997 | Rows are the second dimension for a 2D Tensor. So that should work.
Maybe you want to share an example with a given Tensor and which values you expect to get? |
st182998 | x = feat.narrow(1, base - 5, 10)
IndexError: Dimension out of range (expected to be in range of [-13, 12], but got 562406) |
st182999 | This is because the center of the second dimension is not the same as the one in the first dimension.
I am really confused about what you’re trying to do. I think an example of input/output that you want would help. |