id | text |
---|---|
st182400 | If you delete all references to the model and other tensors, the memory can be freed or reused. Here is a small example.
Make sure you are not storing the model output, loss etc. without detaching it, as this would keep the computation graph with all intermediate tensors alive. |
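As an illustration of that last point, a minimal sketch (the model, criterion and optimizer here are placeholders, not from the post above): storing loss.item() keeps only a Python float, while appending the loss tensor itself would keep the whole graph alive.
import torch

losses = []
model = torch.nn.Linear(10, 2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    x = torch.randn(8, 10)
    y = torch.randint(0, 2, (8,))
    out = model(x)
    loss = criterion(out, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # .item() (or .detach()) drops the graph; appending `loss` directly would keep
    # every intermediate activation of this iteration alive
    losses.append(loss.item())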
st182401 | Hello, I’ve encountered a memory leak on a LSTM model and condensed the issue into the following code.
I’ve tried the following solutions:
Detach hidden state using repackage_hidden() https://discuss.pytorch.org/t/help-clarifying-repackage-hidden-in-word-language-model/226/8
gc.collect()
torch.no_grad()
However, the problem still persists with PyTorch 1.5.1, and my machine has 64GB of RAM. Any help is appreciated!
import torch
import torch.nn as nn
import psutil, os, gc

def repackage_hidden(h):
    """Wraps hidden states in new Tensors, to detach them from their history."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    else:
        return tuple(repackage_hidden(v) for v in h)

# Doesn't leak memory
# batch = 3
# hidden_size = 256
# batch = 3
# hidden_size = 512
# batch = 6
# hidden_size = 256

# Leaks memory
batch = 6
hidden_size = 512

rnn = nn.LSTM(320, hidden_size, num_layers=5, bidirectional=True)
x = torch.randn(5, batch, 320)
h0 = torch.randn(10, batch, hidden_size)
c0 = torch.randn(10, batch, hidden_size)

with torch.no_grad():
    for i in range(1000):
        print(i, psutil.Process(os.getpid()).memory_info().rss)
        output, hidden = rnn(x, (h0, c0))
        hidden = repackage_hidden(hidden)
        gc.collect() |
st182402 | Hey,
Merely instantiating a bunch of LSTMs on a CPU device seems to allocate memory in such a way that it’s never released, even after gc.collect(). The same code run on the GPU releases the memory after a torch.cuda.empty_cache(). I haven’t been able to find any equivalent of empty_cache() for the CPU.
Is this expected behavior? My actual use-case involves training several models at once on CPU cores in a Kubernetes deployment, and involving LSTMs in any way fills memory until the Kubernetes OOM killer evicts the pod. The models themselves are quite small (and if I load a trained model in, they take up very little memory), but all memory temporarily used during training stays filled once training is done.
Code:
import torch
import torch.nn as nn
import gc

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
throwaway = torch.ones((1, 1)).to(device)  # load CUDA context

class Encoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, n_layers, dropout_perc):
        super().__init__()
        self.hidden_dim, self.n_layers = (hidden_dim, n_layers)
        self.rnn = nn.LSTM(input_dim, hidden_dim, n_layers, dropout=dropout_perc)

    def forward(self, x):
        outputs, (hidden, cell) = self.rnn(x)
        return hidden, cell

pile = []
for i in range(500):
    pile.append(Encoder(102, 64, 4, 0.5).to(device))

del pile
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
I’m running PyTorch 1.5.1 and Python 3.8.3 on Ubuntu 18.04 LTS. |
st182403 | Simpler models don’t seem to exhibit this behavior. For example, this code fully deallocates the memory once all the references are deleted:
import torch
import torch.nn as nn
import gc

class Bloat(nn.Module):
    def __init__(self, size):
        super().__init__()
        self.fc = nn.Linear(size, size)

    def forward(self, x):
        return self.fc(x)

pile = []
for i in range(10):
    pile.append(Bloat(2**12))

del pile
gc.collect() |
st182404 | Are you only seeing the increase in system memory usage when using the GPU, or also with a CPU-only implementation? |
st182405 | Thanks for replying @ptrblck .
When using torch.device('cpu') the memory usage of allocating the LSTM module Encoder increases and never comes back down.
When using torch.device('cuda:0') the memory usage of the same comes down out of the GPU, and most of it comes down out of the system RAM as well. (I just did the experiment, and there was 16M unaccountably still allocated in system RAM).
So the problem seems at least mostly restricted to torch.device('cpu'). |
st182406 | Oh, I just realized you might mean a CPU-only distribution of PyTorch? I’m using a CUDA-enabled version, but with a CPU device. I haven’t tried with a CPU-only version of PyTorch because I do train on a GPU occasionally, though if this bug (if it is a bug) isn’t on a CPU only version of PyTorch, I can definitely switch to that for the actual deployment. |
st182407 | I just tried this on my Mac using the CPU-only distribution of PyTorch 1.5.1 for MacOS, and it did free all of its memory after the del pile; gc.collect(). So perhaps this bug only affects using a CPU device on the GPU-capable distribution of PyTorch. That at least gives me an angle of attack, though it would be far more convenient if the GPU-capable distribution of PyTorch behaved itself in CPU-mode for development and testing. |
st182408 | Ah, blast. I just tried 1.5.1+cpu on Linux, and it didn’t free the memory. The Mac version freed the memory after the del pile; gc.collect(), but the Linux version didn’t. |
st182409 | Also just tried 1.6.0.dev20200625+cpu on Linux, and it didn’t free the memory. The resident set only ever increases, never decreases, until I kill the Python process itself.
Also interesting: I can see that there aren’t any references to tensors left with this code. The result is only the throwaway tensor after del pile:
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(f'type:{type(obj)}; shape:{obj.size()}; grad_fn:{repr(obj.grad_fn)}; requires_grad:{obj.requires_grad}')
    except Exception as e:
        pass
So it’s not that there’s still some kind of lingering reference.
One last thing: If I do this multiple times, say filling pile, deleting pile, and then filling it again, the memory usage doesn’t go up significantly until I fill pile more than I filled it the last time. So I think this is some kind of ever-expanding heap issue. |
st182410 | Thanks for the debugging so far.
Could you post, how you are measuring the allocated system memory? Are you checking it in a system utility or inside the Python process directly? |
st182411 | I’m also having the same CPU memory issue with LSTMs, strangely it is only affected if the batch size and hidden size are above a certain level. In the example code below, the memory measurement is on the Python process directly.
https://discuss.pytorch.org/t/lstm-cpu-memory-leak-on-specific-batchsize-and-hidden-size/89135 |
st182412 | @ptrblck I’m measuring allocated system memory by watching the memory for the Python process in htop rise and fall as I run my script in a repl.
I also just did a longer experiment using the MacOS version of CPU-only PyTorch 1.5.1 (which is the only place I’ve seen the LSTM memory released correctly so far). The memory usage remained bounded (below 400MB), and it was completely released at each stage.
I just started running the same script on the same data, with the only difference being the Linux version of CPU-only PyTorch 1.5.1 (this one specifically: https://download.pytorch.org/whl/cpu/torch-1.5.1%2Bcpu-cp38-cp38-linux_x86_64.whl). In about five minutes it has already consumed a gigabyte, and continues to climb.
I’ll see if I can figure out what’s different between the two versions of CPU-only PyTorch tomorrow, but I’m out of my depth if it’s an MKL-DNN bug or something.
By the way, the script I’m currently running involves my actual application code, including training, which I can’t share. However, if it would be helpful, I’d be happy to craft a minimal example I can share that exhibits the same behavior. It doesn’t appear hard to replicate. |
st182413 | @pol1 That’s interesting. My own parameters for the LSTM module are like so in my leaking example:
batch_size = 16
hidden_size = 64
n_layers = 4 |
st182414 | Does reducing it to batch_size=1 and hidden_size=16 help? Reducing them worked in my minimal example |
st182415 | A minimal code snippet to reproduce this issue would be great, but your initial code might also do the job?
Please create an issue here with the code snippet, a brief description, and a link to this topic for further information, so that we can track and fix it.
CC @pol1 in case you would like to add your information to the same issue. |
st182416 | Was able to stabilize the leak by setting OMP_NUM_THREADS=4. More info is here: https://github.com/pytorch/pytorch/issues/32008
A fix has been merged and looks to be available in future version 1.6. The nightly build works for this similar case: https://github.com/pytorch/pytorch/issues/40973 |
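For reference, a hedged sketch of the thread-limiting workaround mentioned above (the env var has to be set before the OpenMP thread pool is created, and the value 4 just mirrors the post, it is not a universally correct setting; equivalently, run OMP_NUM_THREADS=4 python script.py from the shell):
import os
os.environ.setdefault("OMP_NUM_THREADS", "4")  # must happen before torch spins up its thread pool

import torch
torch.set_num_threads(4)  # also caps intra-op parallelism from inside the script
print(torch.get_num_threads())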
st182417 | Hmm… I’ll definitely try the workarounds in this post, but if 1.6 is supposed to have resolved it, my issue may be different. The 1.6.0.dev20200625+cpu nightly leaked in my above allocation test.
Do you know if the fix for the issue you linked would have been in by then? It’s the latest 1.6 nightly I could find. |
st182418 | I think I’m convinced I have a distinct issue now. Neither using 1.6, nor MKL_DISABLE_FAST_MM=1, nor OMP_NUM_THREADS=4 or a combination thereof solves the leak in my allocation test. I’ll file a PyTorch issue now. |
st182419 | @pol1 reducing the hidden size didn’t solve the problem in my allocation test, so I’m pretty sure I have a different problem. |
st182420 | I am trying to apply a group 1x1 convolution where one (whole) input is convolved with multiple sets of filters (ngroups), that is: conv = nn.Conv2d(ch, ch*ngroups, 1, groups=ch).
What is the correct way to separate the result of this convolution into respective groups?
conv(input).view(bs, ch, ngroups, h, w), or conv(input).view(bs, ngroups, ch, h, w) |
st182421 | Solved by trougnouf in post #2. |
st182422 | I think it’s view(bs, ch, ngroups, h, w) based on the following experiment:
bs = h = w = 1
ch = 3
ngroups = 4
>>> input = torch.zeros(bs, 3, h, w)
>>> input[0,1] = 1
>>> conv = torch.nn.Conv2d(ch, ch*ngroups, 1, groups=ch)
>>> conv.bias
Parameter containing:
tensor([ 0.7412, 0.8724, -0.6242, -0.1312, -0.1991, -0.1502, 0.1400, 0.9860,
-0.7265, 0.2633, 0.3402, -0.7472], requires_grad=True)
>>> conv(input)
tensor([[[[ 0.7412]], #(=bias)
[[ 0.8724]], #(=bias)
[[-0.6242]], #(=bias)
[[-0.1312]], #(=bias)
[[ 0.7028]], # not bias -> ch1
[[-0.3131]], # not bias -> ch1
[[ 0.0071]], # not bias -> ch1
[[ 0.6137]], # not bias -> ch1
[[-0.7265]], #(=bias)
[[ 0.2633]], #(=bias)
[[ 0.3402]], #(=bias)
[[-0.7472]]]], grad_fn=<MkldnnConvolutionBackward>)
>>> conv(input).view(bs, ch, ngroups, 1, 1)[0, 1]
tensor([[[ 0.7028]],
[[-0.3131]],
[[ 0.0071]],
[[ 0.6137]]], grad_fn=<SelectBackward>) |
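A small, hedged way to double-check that ordering without relying on random bias values (this is only a sketch; it disables the bias and activates a single input channel):
import torch

bs, ch, ngroups, h, w = 1, 3, 4, 2, 2
conv = torch.nn.Conv2d(ch, ch * ngroups, 1, groups=ch, bias=False)

x = torch.zeros(bs, ch, h, w)
x[:, 1] = 1.0  # only input channel 1 carries signal

out = conv(x).view(bs, ch, ngroups, h, w)
# groups 0 and 2 never see channel 1, so their slices stay zero,
# while all ngroups outputs of channel 1 land at index 1 of dim 1
print(out[0, 0].abs().sum().item(), out[0, 2].abs().sum().item())  # 0.0 0.0
print(out[0, 1].abs().sum().item())                                # non-zero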
st182423 | Hi There,
I’m trying to build an image classifier but there seems to be a persistent issue of memory filling. As soon as I execute this function, the memory starts to fill up and before it’s epoch 2, the kernel crashes.
The following is the training function:
def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False):
    since = time.time()
    val_acc_history = []
    print("Device = ", device)
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    # Get model outputs and calculate loss
                    # Special case for inception because in training it has an auxiliary output. In train
                    # mode we calculate the loss by summing the final output and the auxiliary output
                    # but in testing we only consider the final output.
                    if is_inception and phase == 'train':
                        # From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958
                        outputs, aux_outputs = model(inputs)
                        loss1 = criterion(outputs, labels)
                        loss2 = criterion(aux_outputs, labels)
                        loss = loss1 + 0.4*loss2
                    else:
                        outputs = model(inputs)
                        loss = criterion(outputs, labels)
                    optimizer.zero_grad()
                    _, preds = torch.max(outputs, 1)
                    del inputs
                    del labels
                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                loss.detach()
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, val_acc_history
Then I print some parameters and move the model to GPU
model_ft, input_size = initialize_model(model_name, n_classes, use_pretrained = True)
# Move Model To GPU
model_ft = model_ft.to(device)
print("Params to learn:")
params_to_update = model_ft.parameters()
for name, param in model_ft.named_parameters():
    if param.requires_grad == True:
        print("\t", name)
optimizer_ft = optim.Adam(params_to_update, learning_rate)
criterion = nn.CrossEntropyLoss()
Using the very function defined above, I train this model, which then crashes my kernel.
trained_model, hist = train_model(model_ft, dataloaders, criterion, optimizer_ft, num_epochs = epochs)
print(model_ft)
Please tell me if further information is required so I can get this issue resolved.
Thanks in advance. |
st182424 | Could you remove the .data usage, as it’s not recommended and might yield unwanted side effects.
This shouldn’t create the increased memory usage, but is nevertheless recommended.
Also, the loss.detach() operation won’t have any effect, as it’s not an inplace operation (also shouldn’t create the memory issue).
To debug the increased memory usage, could you add print statements into your code and check, where the memory is increasing?
You could use:
print(torch.cuda.memory_allocated() / 1024**2)
Usually users forget to detach() tensors before storing them in lists, so that the complete computation graph will be stored as well.
However, the epoch_acc should already be detached. You could check it by printing the tensor’s .grad_fn. |
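A tiny, self-contained sketch of that .grad_fn check (the names here are made up for illustration):
import torch

acc = torch.tensor(10.0, requires_grad=True)
epoch_acc = acc * 2                 # still attached to the graph
print(epoch_acc.grad_fn)            # <MulBackward0 ...> -> storing this keeps the graph alive
print(epoch_acc.detach().grad_fn)   # None -> safe to append to a history list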
st182425 | scipy.interpolate.interp2d might work.
Note that this question is not related to PyTorch so you might get faster and better answers on StackOverflow or a dedicated forum for other libraries. |
st182426 | I would like to reshape my data, which represents a single image, such that the width and height (dims 2,3) are lowered and the batch size increases; effectively making small crops as batches out of an image.
I want a specific dimension, the channels (1) to contain the same data.
I initially wrote a pair of functions to do something like this, but I fear it’s not backprop compatible
def img_to_batch(imgtensor, patch_size: int):
    _, ch, height, width = imgtensor.shape
    assert height%patch_size == 0 and width%patch_size == 0, 'img_to_batch: dims must be dividable by patch_size. {}%{}!=0'.format(imgtensor.shape, patch_size)
    bs = math.ceil(height/patch_size) * math.ceil(width/patch_size)
    btensor = torch.zeros([bs, ch, patch_size, patch_size], device=imgtensor.device, dtype=imgtensor.dtype)
    xstart = ystart = 0
    for i in range(bs):
        btensor[i] = imgtensor[:, :, ystart:ystart+patch_size, xstart:xstart+patch_size]
        xstart += patch_size
        if xstart+patch_size > width:
            xstart = 0
            ystart += patch_size
    return btensor

def batch_to_img(btensor, height: int, width: int, ch=3):
    imgtensor = torch.zeros([1, ch, height, width], device=btensor.device, dtype=btensor.dtype)
    patch_size = btensor.shape[-1]
    xstart = ystart = 0
    for i in range(btensor.size(0)):
        imgtensor[0, :, ystart:ystart+patch_size, xstart:xstart+patch_size] = btensor[i]
        xstart += patch_size
        if xstart+patch_size > width:
            xstart = 0
            ystart += patch_size
    return imgtensor
There is a simple view / reshape function in the torch library, but when I use it the channels do not keep their respective data. eg (I would like the data shown to contain the same elements even if out of order):
>>> imgtens = torch.rand(1,3,4,4)
>>> imgtens[:,0,:,:]
tensor([[[0.6830, 0.2091, 0.8786, 0.6002],
[0.0325, 0.7217, 0.1479, 0.3478],
[0.0880, 0.8705, 0.0929, 0.7978],
[0.7604, 0.2658, 0.3518, 0.1969]]])
>>> reshaped = imgtens.view([4,3,2,2])
>>> reshaped[:,0,:,:]
tensor([[[0.6830, 0.2091],
[0.8786, 0.6002]],
[[0.7604, 0.2658],
[0.3518, 0.1969]],
[[0.3787, 0.0042],
[0.3481, 0.2722]],
[[0.4175, 0.8700],
[0.1930, 0.7646]]])
I read that the fold/unfold function may be able to help by creating a sliding window. |
st182427 | Solved by trougnouf in post #6. |
st182428 | Hi,
trougnouf:
I read that the fold/unfold function may be able to help by creating a sliding window
Yes it works, actually, in your case you are extracting patches using sliding window then permuting channels.
Here is the code that will do the job for your arbitrary example:
x.unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(3, -1, 2, 2).permute((1, 0, 2, 3))
But if you have question how it works, here is another post I have explained the calculations:
Change the dimension of tensor
Hi,
I have a tensor with dimension [1, 1, 4, 6] like this:
a = torch.tensor([[[ 1, 2, 3, 4, 5, 6],
[ 7, 8, 9, 10, 11, 12],
[13, 14, 15, 16, 17, 18],
[19, 20, 21, 22, 23, 24]]])
I want to change it to a tensor like this:
[[ [[1, 2],
[7, 8]],
[[3, 4],
[9, 10]],
[[5, 6],
[11, 12]],
[[13, 14],
[19, 20]],
[[15, 16],
[21, 22]],
[[17, 18],
[23, 24]] ]]
Is it possible?
if we use this cod…
Bests |
st182429 | Thank you @Nikronic !
If I understand this correctly, then the img_to_batch function I wrote above would be simply:
def img_to_batch(imgtensor, patch_size: int):
    _, ch, height, width = imgtensor.shape
    assert height%patch_size == 0 and width%patch_size == 0, (
        'img_to_batch: dims must be dividable by patch_size. {}%{}!=0'.format(
            imgtensor.shape, patch_size))
    return imgtensor.unfold(2, patch_size, patch_size).unfold(
        3, patch_size, patch_size).contiguous().view(
        ch, -1, patch_size, patch_size).permute((1, 0, 2, 3))
Note that I made a slight modification from the equivalent of “x.unfold(2, 2, 2)[0].unfold(2,” to “x.unfold(2, 2, 2).unfold(3,”, I think that it can then also handle arbitrary batch size (and just like the channels they wouldn’t get mixed up). Correct me if I’m wrong. |
st182430 | I do not think you can do this as tensor.unfold depending on some criteria creates another dimension like .unsqueeze(0) and because of that .view and .permute will no longer be valid due to number of dimension mismatch.
trougnouf:
“x.unfold(2, 2, 2)[0].unfold(2,” to “x.unfold(2, 2, 2).unfold(3,”
Actually, I have never thought of generalizing fold and unfold method due to its tricky behavior (or at least my bad understanding). |
st182431 | You are right, the following test shows that data from channel 1 can be placed in channel 0
>>> img = torch.rand(4,3,8,8)
>>> img_to_batch(img,4)[4,0]
tensor([[0.4258, 0.3276, 0.8221, 0.6588],
[0.5438, 0.9239, 0.0490, 0.7193],
[0.5852, 0.7115, 0.0703, 0.1770],
[0.4305, 0.4190, 0.2891, 0.0326]])
>>> img[0,1]
tensor([[0.4258, 0.3276, 0.8221, 0.6588, 0.9918, 0.6219, 0.4951, 0.4356],
[0.5438, 0.9239, 0.0490, 0.7193, 0.6819, 0.0627, 0.0361, 0.3178],
[0.5852, 0.7115, 0.0703, 0.1770, 0.3855, 0.0666, 0.7337, 0.0240],
[0.4305, 0.4190, 0.2891, 0.0326, 0.3457, 0.7378, 0.5640, 0.7104],
[0.3787, 0.2371, 0.4585, 0.6150, 0.7169, 0.6518, 0.4671, 0.1212],
[0.8061, 0.4295, 0.1194, 0.7166, 0.7526, 0.8067, 0.1612, 0.2812],
[0.3896, 0.8208, 0.5835, 0.6830, 0.0191, 0.7138, 0.9124, 0.7285],
[0.0963, 0.4236, 0.2779, 0.8006, 0.1528, 0.5168, 0.6543, 0.7928]]) |
st182432 | I think I got it. I moved the channel axis to the front so that it would be safe, and it passes my manual tests.
def img_to_batch(img, patch_size: int):
    _, ch, height, width = img.shape
    assert height%patch_size == 0 and width%patch_size == 0, (
        'img_to_batch: dims must be dividable by patch_size. {}%{}!=0'.format(
            img.shape, patch_size))
    assert img.dim() == 4
    return img.unfold(2, patch_size, patch_size).unfold(
        3, patch_size, patch_size).transpose(1, 0).reshape(
        ch, -1, patch_size, patch_size).transpose(1, 0) |
st182433 | Hello,
I would like to multiply a (symmetric) sparse tensor with a diagonal tensor. Since both tensors are sparse and the output is also sparse, I expect that this can be done very efficiently. However, my current solution uses torch.sparse.mm and I have to convert one of the tensors to dense, which is not very memory efficient.
Do you have some good suggestions? Thank you! |
st182434 | Sparse matrix multiplication doesn’t seem to be implemented yet as seen in this feature request, so I assume you would have to use a dense matrix for one tensor. |
st182435 | Thank you. I ended up transferring the tensor to the CPU and using scipy.sparse to do the job. Hope there will be more features to support common sparse tensor computation. |
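For the special case of a diagonal left factor there is a possible in-PyTorch workaround (a sketch under that assumption, using only standard sparse COO ops): diag(d) @ A only rescales the rows of A, so the stored values can be scaled directly without building a dense matrix.
import torch

# toy sparse COO matrix A and diagonal entries d
i = torch.tensor([[0, 1, 1, 2], [2, 0, 2, 1]])
v = torch.tensor([3.0, 4.0, 5.0, 6.0])
A = torch.sparse_coo_tensor(i, v, (3, 3)).coalesce()
d = torch.tensor([10.0, 20.0, 30.0])  # diagonal of D

rows = A.indices()[0]
DA = torch.sparse_coo_tensor(A.indices(), A.values() * d[rows], A.shape)

# check against the dense computation
print(torch.allclose(DA.to_dense(), torch.diag(d) @ A.to_dense()))  # True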
st182436 | I know that in pytorch 1.5 to() and clone() can preserve formats and therefore we can send non-contiguous tensors between devices.
I wonder, what is the case for copy_()? Can we send non-contiguous tensors with it?
If not, is there any suggested workaround for avoiding copy?
for example
a = torch.randn(10,1, device="cuda:1").share_memory_()
b = torch.randn(10,2, device="cuda:0")
b = torch.transpose(b, 0,1)
a.copy_(b) # is it OK?
In the example above we want to avoid using to()/clone() to avoid creating a new tensor and moving it to shared memory. |
st182437 | Solved by albanD in post #2. |
st182438 | Hi,
.copy_() will not change the contiguity of any Tensor.
It will just read the content from b and write it to a. Not changing the size/strides. |
st182439 | a.copy_ is an in-place operation and it never changes the strides (and memory format) of a. So the result of a.copy_(b) is going to be a with the data of b and the strides of a. |
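A tiny demonstration of that point (just a sketch):
import torch

a = torch.zeros(2, 3)        # contiguous
b = torch.randn(3, 2).t()    # non-contiguous view with shape (2, 3)

a.copy_(b)                   # values of b, strides of a
print(b.is_contiguous())     # False
print(a.is_contiguous())     # True: copy_ did not touch a's layout
print(torch.equal(a, b))     # True: the data matches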
st182440 | If a is the same size as b, you can aggressively restride a with a.as_strided_(b.shape, b.stride()) and do a.copy_(b) as next step. But this will surely break autograd. |
st182441 | For (good) reasons, .as_strided() is actually supported by the autograd, so that will work
But even beyond that, here, since you override all the content of a anyway with the copy, all the gradients will flow towards b and the original value of a will just get 0s.
So autograd will work just fine |
st182442 | I somewhat still get data corruption, doing just the forward pass and calling as_strided_ when sizes are equal.
If I replace copy_() with to(), its totally OK.
I wonder if it’s related to cuda streams or something?
a.copy_(b) when a,b are on different devices? |
st182443 | Well, it’s not because they have the same size that the as_strided will be valid.
Is a contiguous? If a has some overlapping memory already, then its backing memory won’t be big enough |
st182444 | I made sure it’s contiguous too
Here is what I did.
device = ...
ranks = [....]
saved = [None for rank in ranks]
a = saved[rank]
if a is not None and b.size() == a.size() and b.storage_offset() == a.storage_offset() and b.stride() == a.stride() and b.is_contiguous() and a.is_contiguous():
    # no need to call as_strided_
    a.copy_(b)
else:
    a = b.to(device)
    saved[rank] = a
when we replace that ugly if with if False: everything works.
(what happens next is sending a through queue and cloning the the receiver, and then normal neural net) |
st182445 | One of the risks of as_strided is that you can do fairly bad stuff. In particular, there are no checks and you can end up reading out of bounds of the original Tensor’s values (or even out of the Storage backing it).
In the code above, if you just do .to() the first time and .copy_() afterwards, the layout of a will just be the layout of the first b. Is that not ok?
Why is it so important to keep b’s layout at each iteration? |
st182446 | I thought it would be OK (that’s exactly what I did at first, just the is not None check)
but it didn’t work.
Then I gradually increased the checks. |
st182447 | So you mean this does not work?
device = ...
ranks = [....]
saved = [None for rank in ranks]
a = saved[rank]
if a is not None:
    a.copy_(b)
else:
    a = b.to(device)
    saved[rank] = a
What is the issue you see with this? |
st182448 | exactly.
I tried 2 networks.
WideResnet: it works fine with copy_(). all b tensors are contiguous there.
GPT2 (from huggingface): does not work. I know that some b tensors are non-contiguous.
For the GPT2 the task is zero-shot on wikitext2.
With to(device) I restore the perplexity from the paper (~29)
With copy_() the perplexity explodes (crazily high numbers, like 8400, 121931 and so on).
I tried to look at the tensors with the debugger; they look fine. |
st182449 | Ho ok… that looks more like a multiprocessing issue then if the Tensors looks fine?
Layout won’t change anything about the values computed (unless bugs). So you should not see any difference here !
Can you make a small code sample that repro this? |
st182450 | Can you please try to cuda.synchronize() after copy_ calls? When you are looking at tensors in debugger you are actually synchronizing to make them (data) observable. |
st182451 | Hi,
I found the bug. It was indeed some multiprocessing/threading/multi-cuda-stream issue plus the contiguity we discussed.
Everything solved and copy_() works. Thanks |
st182452 | In case it will help someone:
I actually do notice some deadlocks when combining to() and copy_() like mentioned above.
The deadlock happens inside of to() (i.e. in the compiled CUDA code) at the second call, that is:
to(), copy_(), copy_(), …, copy_(), to(), deadlock.
Anyway, I got frustrated and now I’m using only to(), as it’s quite a minor optimization I wasted too much time on.
I think it’s related to the RTX 2080 Ti not supporting p2p (I looked at the CUDA code that does the to() a few weeks ago, and I think it assumes something about p2p, but I did not bother to check it thoroughly.)
I can’t share my code (yet), but as soon as I will I’ll share the full example |
st182453 | Hi,
If you have a repro that deadlock on a given hardware (and is reproducible on other similar card to be sure it’s not a faulty card), please open an issue on github! Thanks. |
st182454 | I am training an lstm model and currently I am performing random search.
Initially after each random search I was emptying the cache (torch.cuda.empty_cache()), however I was getting the OOM error after some number of random searches (usually around 3).
Then I read that in order for the memory to be freed I need to do del variable first. However, even after that I continued having the same issue. I am tracing the allocated GPU memory (torch.cuda.memory_allocated()) and I can see that after each random search the memory is being freed. Although, when a new random search starts the memory allocated is a bit higher than in the previous random search. I don’t think that this is caused by some variable that is not erased. Is there something that I am missing? |
st182455 | What is the random search doing? Could it change some hyperparameters and thus increase the model parameters, which could yield the OOM issue? |
st182456 | I’m getting the following error (during the backward pass):
RuntimeError: CUDA out of memory. Tried to allocate 4.93 GiB (GPU 0; 23.65 GiB total capacity; 9.11 GiB already allocated; 10.44 GiB free; 12.34 GiB reserved in total by PyTorch) (malloc at /opt/conda/conda-bld/pytorch_1587428398394/work/c10/cuda/CUDACachingAllocator.cpp:289)
I have exclusive access to this GPU while the process is running. It seems like PyTorch isn’t reserving enough memory? Clearly there is enough free memory (10.44 GiB), unless memory fragmentation is so bad that I can’t use ~50% of the GPU memory. This always happens on the backward pass - I’ve even tried using torch.utils.checkpoint, but it didn’t make much of a difference since the forward pass does not take up nearly as much memory even with grad.
My model uses Transformer layers as well as sparse-dense matrix multiplication, with variable-length sequences, so I do expect some fragmentation, but could it be causing it to this degree when I call loss.backward()? How could I fix this?
Does backpropagation on sparse-dense matrix multiplication w.r.t. the sparse tensor return a sparse tensor? If it is returning a dense tensor, this could be the reason for high memory consumption. |
st182457 | out = self.conv1(x)
out = self.norm1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.norm2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.norm3(out)
training of conv and norm is false
track_running_stats of norm is True
When I use PyCharm to debug the code, from conv1 to conv3 the GPU memory does not increase. But when I execute norm3, the GPU memory increases.
If training of conv and norm is True, I observe that executing every statement increases the GPU memory.
When I execute each statement, which creates a new out, why does the GPU memory not increase?
But why does the GPU memory increase when I execute norm3? |
st182458 | I see the requires_grad of weight and bias in norm is False.
The code is in mmdet/models/backbones/resnet.py of mmdetection. |
st182459 | Hi,
Keep in mind that the GPU API is asynchronous. So you might want to add a torch.cuda.synchronize() after the line to make sure it finished executing.
Also since, you reuse the out variable, the old Tensor that out was pointing to is not reachable anymore and can be deleted when you’re not training (when training, it needs to be kept around to be able to compute the backward). |
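A small sketch of that suggestion (the helper name is made up; the idea is simply to synchronize before reading the counter so the preceding kernel has finished):
import torch

def gpu_mem(tag):
    torch.cuda.synchronize()
    print('{}: {:.1f} MB allocated'.format(tag, torch.cuda.memory_allocated() / 1024**2))

# usage around the lines from the snippet above, e.g.:
# gpu_mem('before norm3'); out = self.norm3(out); gpu_mem('after norm3')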
st182460 | Hello,
I am trying to create a model that understands patterns in the human voice, and I have a lot of voice samples (133K different files, overall size 40GB). I run a lot of preprocessing and then generate a feature cube which I want to give to the PyTorch model.
So far I have been doing the preprocessing and cube generation offline so that I create the feature cubes and write them to a “*.pt” file using Torch.save().
I have currently only used 5K samples that generated an *.pt file of 1GB (I would then expect a file 26GB big).
I then do a Torch.load on the training host loading everything in memory using TensorDataset and DataLoader
features = torch.load(featpath)
labels = torch.load(labpath)
dataset = torch.utils.data.TensorDataset(features,labels)
return torch.utils.data.DataLoader(dataset,shuffle=True,batch_size=batch_size,num_workers=num_workers)
It worked ok with fewer samples and it probably will work if I give enough memory / disk space (I use S3 and Sagemaker)
I am just wondering am I doing the right thing ? Is there a best practices for large datasets ? Streaming from disk ? or do they have to be all loaded in memory ? I am assuming here that DataSet/Loader do this. |
st182461 | Solved by futscdav in post #4. |
st182462 | As long as you have enough memory to do so, there’s no problem with what you are doing. If the data were to get too large, the easiest solution would be to split the samples into individual files and only load them in a custom Dataset __getitem__ method. |
st182463 | Thanks just to clarify.
The getitem will have to flip from one to the other when one has finished the cycle ?
This means that for every epoch I would be flipping from one to another and back to the first a the beginning of next epoch?
Would be nice if there was an example but this seems reasonable in case I run out of memory. I still do not have a lot of good handle on Datasets.
Thanks you again. |
st182464 | Well, to avoid unnecessary pain, you would split the data so that each sample tensor will have its own .pt file. Then, in the __getitem__, you call the load, like this
class MyDataset(torch.utils.data.Dataset):
    def __init__(self, root):
        self.root = root
        self.files = os.listdir(root)  # take all files in the root directory

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        sample, label = torch.load(os.path.join(self.root, self.files[idx]))  # load the features of this sample
        return sample, label
And then use this dataset in the same way. Keeping the data in several chunks rather than individual samples would be … problematic. |
st182465 | Thanks, hopefully I will fit in memory but this seems a pretty simple solution.
Would there be any issue with Shuffling and DataLoader ? I wonder how would the shuffling work in this case.
Thanks again. |
st182466 | With shuffling enabled, the dataloader randomizes the idx parameter of __getitem__, effectively choosing a random file each time. |
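For completeness, a usage sketch of the dataset above (assuming the MyDataset class from the earlier post; the directory path is a placeholder):
import torch

dataset = MyDataset('/path/to/sample_files')   # each file holds one (sample, label) pair
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

for samples, labels in loader:
    pass  # training step goes here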
st182467 | Hi, here is one toy code about this issue:
import torch
torch.cuda.set_device(3)
a = torch.rand(10000, 10000).cuda()
# monitor cuda:3 by "watch -n 0.01 nvidia-smi"
a = torch.add(a, 0.0)
# keep monitoring
When using the same variable name “a” in torch.add, I find the old a’s memory is not freed on CUDA; it still exists even though the reference is updated, and I cannot reach the original memory since the reference is gone. How could I make the memory of the old reference be freed automatically when I use the same variable name on the left/right of torch operations? |
st182468 | Solved by ptrblck in post #2. |
st182469 | a will be freed automatically, if no reference points to this variable.
Note that PyTorch uses a memory caching mechanism, so nvidia-smi will show all allocated and cached memory as well as the memory used by the CUDA context.
Here is a small example to demonstrate this behavior:
# Should be empty
print('allocated ', torch.cuda.memory_allocated() / 1024**2)
print('cached ', torch.cuda.memory_cached() / 1024**2)
> allocated 0.0
> cached 0.0
# Initial setup
a = torch.rand(1024, 1024, 128).cuda()
print('allocated ', torch.cuda.memory_allocated() / 1024**2)
print('cached ', torch.cuda.memory_cached() / 1024**2)
> allocated 512.0
> cached 512.0
# torch.add will use a temp variable, as it's not inplace
a = torch.add(a, 0.0)
print('allocated ', torch.cuda.memory_allocated() / 1024**2)
print('cached ', torch.cuda.memory_cached() / 1024**2)
> allocated 512.0
> cached 1024.0
# Delete reference
a = 1.
print('allocated ', torch.cuda.memory_allocated() / 1024**2)
print('cached ', torch.cuda.memory_cached() / 1024**2)
> allocated 0.0
> cached 1024.0 |
st182470 | Hi, I met an out-of-memory error when I use closure in my new optimizer, which needs the value of the loss f(x) to update the parameters x in each iteration. The scheme is like
g(x) = f’(x)/f(x)
x = x - a*g(x)
So I define a closure function before optimizer.step
def closure():
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    return loss, outputs

loss, outputs = optimizer.step(closure)
It works fine when I apply this optimizer to train a CNN model for MNIST. However, when I use this optimizer to train ResNet34 for cifar10, even on an HPC cluster, the program will be killed after a few iterations because of an out-of-memory error.
I think the memory of the compute node (128G) is large enough, and it works fine when I change the optimizer to torch.optim.SGD, with the same other settings. The corresponding code for SGD is:
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
So the only difference I can notice is the use of closure function in the new optimizer.
I have two questions:
Did I use the closure function correctly? So far this new optimizer works fine for some smaller data like MNIST, but since closure is rarely used in optimizers, and there are not many examples, so I am not so sure if I used the closure correctly.
Is the out-of-memory error caused by the use of closure function? It’s not clear for me how the closure and optimizer.step work in PyTorch so I have no idea where this out-of-memory error comes from.
Any help is highly appreciated! Thanks! |
st182471 | Solved by ptrblck in post #2. |
st182472 | I’m not sure how your new optimizer is defined, but e.g. LBFGS (which also requires a closure) can be memory intensive as stated in the docs:
This is a very memory intensive optimizer (it requires additional param_bytes * (history_size + 1) bytes). If it doesn’t fit in memory try reducing the history size, or use a different algorithm.
If your optimizer’s step method expects the output and loss as a return value, then the closure looks correct. You could compare it to this example.
Might be and it depends, what optimizer.step is doing with the closure. If you are storing the history of the losses and outputs, then an increased memory usage is expected. |
st182473 | Thank you for your reply! I later realized that I just need the value of the the loss, not all the metadata. so instead of saving loss and outputs, I changed my code as
def closure():
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    return loss.detach(), outputs.detach()

loss, outputs = optimizer.step(closure)
That works fine, no out-of-memory error. |
st182474 | I have a trained NTS-NET that uses 108 MB of file storage. On my server I do not have enough space; I am short by only a few MB. So I compress the “state_dict” using “tar.gz” and get down to 100 MB, which is just enough. So to load the model I use the function
import pickle
import tarfile
from torch.serialization import _load, _open_zipfile_reader

def torch_load_targz(filep_ath):
    tar = tarfile.open(filep_ath, "r:gz")
    member = tar.getmembers()[0]
    with tar.extractfile(member) as untar:
        with _open_zipfile_reader(untar) as zipfile:
            torch_loaded = _load(zipfile, None, pickle)
    return torch_loaded

if __name__ == '__main__':
    torch_load_targz("../models/nts_net_state.tar.gz")
    # equivalent to torch.load("../models/nts_net_state.pt") for .tar.gz
So at the end I read the torch model from the tar.gz directly. But this way the predictions are too slow.
Does a better solution to this problem exist?
(I’m using torch-1.4.0, and python 3.6) |
st182475 | Is the loading of the state_dict slow or the model predictions?
The latter shouldn’t be influenced by how the state_dict is loaded or are you reloading it in every iteration? |
st182476 | The slow part is extracting the tar.gz, and the compression only saves 8MB on the stored state_dict file.
I’m speaking about only predictions (of small data) so I do not have iterations inside a single call. |
st182477 | It is expected that unzipping a file will take longer than e.g. reading a binary file.
What kind of system are you using that you are running out of memory for 108MB? |
st182478 | No. I fill my server memory with:
torch and torchvision and other libraries
and 108MB of trained model
For example, I see that by transforming a TensorFlow model using TensorFlow Lite, the size in MB of the model can be reduced a lot. I was wondering about something like that for PyTorch. If some other way exists, maybe it is less slow.
st182479 | You could try to quantize your model to reduce the size.
However, I’m not sure how experimental this feature is at the moment. |
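As a rough sketch of that direction, dynamic quantization of the linear layers is a one-liner (the stand-in model below is hypothetical, and whether this helps NTS-NET specifically is untested here):
import torch

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10))  # stand-in model
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), 'model_quantized.pt')  # typically much smaller than the fp32 state_dict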
st182480 | I have a Python script which is running a PyTorch model on a specific device, passed by using the .to() function and then the integer number of the device. I have four devices available so I alternate between 0, 1, 2, 3. This integer is set to a variable args.device_id that I use around my code.
I’m getting this situation where I see some phantom data appearing on a GPU device which I didn’t specify ever. See the picture below (this output is from gpustat which is a nvidia-smi wrapper):
Note that when I force-quit my script, all the GPUs return to zero usage. So there are no other programs running from other users.
I specifically added this snippet to my main training/test loop, based on a previously-found PyTorch Discuss thread regarding finding-all-tensors. gc here is the Python garbage collection built-in module.
...
for obj in gc.get_objects():
    try:
        if (torch.is_tensor(obj) or
                (hasattr(obj, 'data') and torch.is_tensor(obj.data))):
            print(type(obj), obj.size(), obj.device)
    except:
        pass
...
Here is the result that I see on the command prompt:
2020-04-17 20:56:06 [INFO] Process ID: 3761, Using GPU device 1...
<class 'torch.Tensor'> torch.Size([1000, 63, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 15872]) cuda:1
<class 'torch.Tensor'> torch.Size([51920]) cuda:1
<class 'torch.Tensor'> torch.Size([51920]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 51920]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 51920]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 51920]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 513, 203, 2]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 203, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 513, 203, 2]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 203, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 203, 513, 2]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 203, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1, 203, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([]) cpu
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 513]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([513, 512]) cuda:1
<class 'torch.nn.parameter.Parameter'> torch.Size([513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 16000]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 16000]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 16000]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 63, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 63, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 63, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 63, 513, 2]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 63, 513]) cuda:1
<class 'torch.Tensor'> torch.Size([1000]) cuda:1
<class 'torch.Tensor'> torch.Size([1000]) cuda:1
<class 'torch.Tensor'> torch.Size([1000, 2]) cuda:1
So everything on this list (except for one random outlier) all say “cuda:1”. How can I identify what data is sitting in “cuda:3” then? Appreciate the help.
Another tid-bit, the phantom data always builds up to 577MiB exactly, never more than that. This behavior occurs even if I set the classic os.environ['CUDA_VISIBLE_DEVICES'] variable at the top of the script. |
st182481 | Could you set the env variable in your terminal via:
CUDA_VISIBLE_DEVICES=1,3 python script.py args
If you are using the os.environ method inside your script, you would have to make sure it’s called before any torch imports. |
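A minimal sketch of the ordering that matters here (the device list is an example, not a recommendation):
# the env var must be set before torch (or anything importing torch) is imported
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1,3'   # example selection, adjust to your setup

import torch
print(torch.cuda.device_count())  # only the two listed GPUs are visible, remapped to cuda:0 and cuda:1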
st182482 | I want to train a network for smoothing a point cloud.
It is applied independently for each point and has the surrounding points as input.
The function below performs the actual smoothing by splitting the points (x) and the neighborhoods (y) into batches and then calling the model.
def do_smoothing(x, y, model, batch_size):
    model.eval()
    torch.no_grad()  # Make sure that no gradient memory is allocated
    print('Used Memory: {:3f}'.format(torch.cuda.memory_allocated('cuda') * 1e-9))
    xbatches = list(torch.split(x, batch_size))
    ybatches = list(torch.split(y, batch_size))
    assert(len(xbatches) == len(ybatches))
    print('Used Memory: {:3f}'.format(torch.cuda.memory_allocated('cuda') * 1e-9))
    for i in range(len(xbatches)):
        print("Batch {} [Memory: {:3f}]".format(i + 1, torch.cuda.memory_allocated('cuda') * 1e-9))
        tmp = self.forward(_xbatches[i])
        xbatches[i] += tmp
        del tmp
        print("Batch {} [Memory: {:3f}]".format(i + 1, torch.cuda.memory_allocated('cuda') * 1e-9))
        print("------------------------------------")
    return x
Unfortunately, the memory consumption is very high leading to an out of memory error. The console output is:
Used Memory: 0.256431
Used Memory: 0.256431
Prior batch 1 [Memory: 0.256431]
Post batch 1 [Memory: 2.036074]
------------------------------------
Prior batch 2 [Memory: 2.036074]
-> Out of memory error
If I remove the line xbatches[i] += tmp the allocated memory is not changing (as expected).
If I also remove the line del tmp on the other hand the code once again allocates huge amounts of GPU memory.
I assume that torch.split creates views onto the tensor and therefore the update should not use additional memory.
Did I miss anything or is this unintended behavior?
I am using pytorch 1.4.0 with CUDA 10.1 on a Windows 10 machine.
Thanks in advance for any tips. |
st182483 | Solved by ptrblck in post #2. |
st182484 | Could you try to use torch.no_grad() in a with statement:
model.eval()
with torch.no_grad():
    xbatches = ...
If I’m not mistaken, your current approach shouldn’t change the gradient behavior. |
st182485 | (Replace this first paragraph with a brief description of your new category. This guidance will appear in the category selection area, so try to keep it below 200 characters. Until you edit this description or create topics, this category won’t appear on the categories page.)
Use the following paragraphs for a longer description, or to establish category guidelines or rules:
Why should people use this category? What is it for?
How exactly is this different than the other categories we already have?
What should topics in this category generally contain?
Do we need this category? Can we merge with another category, or subcategory? |
st182486 | Dear everyone: Is there any tutorial on using wav2vec for pre-training to extract high-dimensional speech features from datasets? thanks, best wishes |
st182487 | Hi @Gorgen, you can check the Wav2Vec2Bundle for extracting features by Wav2Vec2 or HuBERT pre-trained models. |
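As a rough sketch of pulling features out of one of those bundles (the audio file path is a placeholder, and the exact shape of the per-layer outputs depends on the chosen bundle):
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model()

waveform, sr = torchaudio.load('sample.wav')  # placeholder file
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    features, _ = model.extract_features(waveform)  # list of per-layer feature tensors
print(len(features), features[-1].shape)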
st182488 | Hello,
I’m creating this topic because I need help fixing my deep learning model, which is supposed to learn music genres from the GTZAN dataset. I have tried a lot of things, but on the validation set the accuracy is always around 10%/11% and I am out of ideas. To write my code I started from something I had already done in one of my courses on the CIFAR10 dataset and tried to adapt it. I took a model that is supposed to work for GTZAN and tried to create the loaders in different ways (loading all of GTZAN, taking the already created GTZAN train/val/test splits, cutting my arrays into 3 s samples, etc.), but still nothing… Here is my code:
# Loading database
gtzan = dset.GTZAN('Dataset', download=True)
gtzan_train = dset.GTZAN('Dataset', subset='training', download=True)
gtzan_val = dset.GTZAN('Dataset', subset='validation', download=True)
gtzan_test = dset.GTZAN('Dataset', subset='testing', download=True)
# Corresponding dictionnary between genre and number
Genres = {'blues' : 0, 'classical' : 1, 'country' : 2, 'disco' : 3, 'hiphop' : 4,
          'jazz' : 5, 'metal' : 6, 'pop' : 7, 'reggae' : 8, 'rock' : 9}
# Mean and variance which were computed
mu, std = -0.0010253926739096642, 0.15775565803050995
length_min = 660000
# Necessary because tensors don't have the same length
def resize(tensor, random=False):
    if (random):
        mask = torch.ones(tensor.numel(), dtype=torch.bool)
        indices = np.arange(tensor.numel())
        np.random.shuffle(indices)
        indices = indices[:tensor.numel()-length_min]
        mask[indices] = False
        return tensor[mask]
    else:
        return tensor[:length_min]

# Very simple class to work with our dataset
class MusicDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)
#-----------------Loading Datasets------------------
dataset = []
segment = length_min
batch_size = 10
print_every = 20
for music in gtzan:
    dataset += [(torch.stack([(resize(music[0][0])-mu)/(std**2)]), Genres[music[2]])]
Musics = MusicDataset(dataset)
loader_train = DataLoader(Musics, batch_size=batch_size, sampler=sampler.SubsetRandomSampler(range(440)))
loader_val = DataLoader(Musics, batch_size=batch_size, sampler=sampler.SubsetRandomSampler(range(440,640)))
loader_test = DataLoader(Musics, batch_size=batch_size, sampler=sampler.SubsetRandomSampler(range(660,1000)))
#-----------------Loading Datasets------------------
dataset_train = []
dataset_val = []
dataset_test = []
segment = length_min
batch_size = 10
print_every = 20
for music in gtzan_train:
    dataset_train += [(torch.stack([(resize(music[0][0])-mu)/(std**2)]), Genres[music[2]])]
for music in gtzan_val:
    dataset_val += [(torch.stack([(resize(music[0][0])-mu)/(std**2)]), Genres[music[2]])]
for music in gtzan_test:
    dataset_test += [(torch.stack([(resize(music[0][0])-mu)/(std**2)]), Genres[music[2]])]
#-----------------Loading Datasets------------------
# We take 3s of each sample to have more data to train on
dataset_train = []
dataset_val = []
dataset_test = []
segment = length_min//10
batch_size = 64
print_every = 20
for music in gtzan_train:
    resized = (resize(music[0][0])-mu)/(std**2)
    audio_cut = [resized[i*segment:(i+1)*segment] for i in range(10)]
    for audio in audio_cut:
        dataset_train += [(torch.stack([audio]), Genres[music[2]])]
for music in gtzan_val:
    resized = (resize(music[0][0])-mu)/(std**2)
    audio_cut = [resized[i*segment:(i+1)*segment] for i in range(10)]
    for audio in audio_cut:
        dataset_val += [(torch.stack([audio]), Genres[music[2]])]
for music in gtzan_test:
    resized = (resize(music[0][0])-mu)/(std**2)
    audio_cut = [resized[i*segment:(i+1)*segment] for i in range(10)]
    for audio in audio_cut:
        dataset_test += [(torch.stack([audio]), Genres[music[2]])]
#------Creating Loaders----------
Musics_train = MusicDataset(dataset_train)
loader_train = DataLoader(Musics_train, batch_size=batch_size)
Musics_val = MusicDataset(dataset_val)
loader_val = DataLoader(Musics_val, batch_size=batch_size)
Musics_test = MusicDataset(dataset_test)
loader_test = DataLoader(Musics_test, batch_size=batch_size)
def flatten(x):
    N = x.shape[0]
    return x.view(N, -1)

def check_accuracy(loader, model):
    print("Checking accuracy")
    num_correct = 0
    num_samples = 0
    model.eval()  # set model to evaluation mode
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device, dtype=dtype)
            y = y.to(device=device, dtype=torch.long)
            scores = model.forward(x)
            prediction = torch.max(scores, 1).indices
            num_samples += x.shape[0]
            num_correct += sum(prediction == y)
        acc = float(num_correct) / num_samples
        print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))

def run_val(loader, model):
    model.eval()
    loss = None
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device, dtype=dtype)
            y = y.to(device=device, dtype=torch.long)
            scores = model.forward(x)
            if (loss):
                loss += F.cross_entropy(scores, y).to(device=device)
            else:
                loss = F.cross_entropy(scores, y).to(device=device)
    return loss

def train_module(model, optimizer, epochs=1):
    losses = {}
    losses_val = {}
    model = model.to(device=device)
    for e in range(epochs):
        for t, (x, y) in enumerate(loader_train):
            model.train()  # put model to training mode
            x = x.to(device=device, dtype=dtype)
            y = y.to(device=device, dtype=torch.long)
            scores = model(x)
            loss = F.cross_entropy(scores, y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if t % print_every == 0:
                print('Iteration %d, loss = %.4f' % (t, loss.item()))
                losses[e * len(loader_train) + t] = loss.item()
                check_accuracy(loader_val, model)
                print()
        loss_val = run_val(loader_val, model)
        losses_val[(e + 1) * len(loader_train)] = loss_val.item()
    return losses, losses_val
class Flatten(nn.Module):
    def forward(self, x):
        return flatten(x)
learning_rate = 0.001
model = nn.Sequential(
    Flatten(),
    nn.Linear(segment, 512),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(64, 10),
)
optimizer = optim.Adam(model.parameters(),lr=learning_rate)
_ = train_module(model, optimizer, epochs=3)
Output:
Iteration 0, loss = 2.2613
Checking accuracy
Got 23 / 197 correct (11.68)
Iteration 20, loss = 9.7579
Checking accuracy
Got 22 / 197 correct (11.17)
Iteration 40, loss = 3.9026
Checking accuracy
Got 11 / 197 correct (5.58)
Iteration 0, loss = 64.3667
Checking accuracy
Got 17 / 197 correct (8.63)
Iteration 20, loss = 21.6154
Checking accuracy
Got 26 / 197 correct (13.20)
Iteration 40, loss = 12.0306
Checking accuracy
Got 20 / 197 correct (10.15)
Iteration 0, loss = 41.9825
Checking accuracy
Got 18 / 197 correct (9.14)
Iteration 20, loss = 9.2098
Checking accuracy
Got 28 / 197 correct (14.21)
Iteration 40, loss = 8.7790
Checking accuracy
Got 29 / 197 correct (14.72)
I also tried different learning rates and optimizer but none worked…
Plus, when I check the accuracy on my training set to see if my model does learn, I sometimes get 90% accuracy after a few epochs (but not on my validation set) and sometimes don’t…
So, I’m really out of ideas now and if someone has clues on my problem it would really help me. |
st182489 | As the title said, lfilter on cpu cost about 1 ms to run. But on the gpu, it is 1000x slower! I posted my code and test result below. Could anyone tell me why and how to accelerate the speed on gpu?
device = "cuda"
waveform = torch.rand([16000]).to(device)
a = torch.tensor([1.0, 0.0]).to(device)
b = torch.tensor([1.0, -0.97]).to(device)

for i in range(5):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    waveform = torchaudio.functional.lfilter(waveform, a, b)
    end.record()
    torch.cuda.synchronize()
    print("run on %s cost %.3f ms" % (device, start.elapsed_time(end)))
Test result
run on cpu cost 1.473 ms
run on cpu cost 1.531 ms
run on cpu cost 0.905 ms
run on cpu cost 0.774 ms
run on cpu cost 1.007 ms
run on cuda cost 961.567 ms
run on cuda cost 955.971 ms
run on cuda cost 962.749 ms
run on cuda cost 957.605 ms
run on cuda cost 965.437 ms
My env is v100 gpu, cuda 11, torch ‘1.10.1+cu113’ and torchaudio ‘0.10.1+cu113’
Thank you very much! |
st182490 | If I’m not mistaken, this code will be used, which will launch a lot of GPU kernels (nsys shows 160045 cudaLaunchKernel calls) which should create a lot of CPU overhead.
To reduce the overhead caused by the kernel launches, you could try to use CUDA graphs as described here and see if it would match your use case.
Example:
import torch
import torchaudio
import time
device = 'cuda'
nb_iters = 100
# Placeholder input used for capture
waveform = torch.rand([16000]).to(device)
a = torch.tensor([1.0, 0.0]).to(device)
b = torch.tensor([1.0, -0.97]).to(device)
# warmup
for _ in range(10):
    waveform = torchaudio.functional.lfilter(waveform, a, b)
# profile
torch.cuda.synchronize()
t0 = time.perf_counter()
for _ in range(nb_iters):
    waveform = torchaudio.functional.lfilter(waveform, a, b)
torch.cuda.synchronize()
t1 = time.perf_counter()
print('Eager, {}s/iter'.format((t1 - t0)/nb_iters))
# CUDA graphs
g = torch.cuda.CUDAGraph()
# Warmup before capture
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        waveform = torchaudio.functional.lfilter(waveform, a, b)
torch.cuda.current_stream().wait_stream(s)
# Captures the graph
# To allow capture, automatically sets a side stream as the current stream in the context
with torch.cuda.graph(g):
    waveform = torchaudio.functional.lfilter(waveform, a, b)
torch.cuda.synchronize()
t0 = time.perf_counter()
for _ in range(nb_iters):
    g.replay()
torch.cuda.synchronize()
t1 = time.perf_counter()
print('CUDA graphs, {}s/iter'.format((t1 - t0)/nb_iters))
Output:
Eager, 0.6360926309000934s/iter
CUDA graphs, 0.07975950390042272s/iter |
st182491 | The wav2vec2.0 base 960h model never seems to return a beginning of sentence or end of sentence token (or ’ or unknown so far). Is that expected? I can’t seem to find this discussed anywhere. Why are those tokens in the decoding dictionary? Why are those those options in the final emission matrix? Or am I just feeding in audio that is too difficult for the model to determine eos/bos? If so, can someone provide a counter-example? |
st182492 | Hi,
So I initialize my melspectrogram as follows:
transform = torchaudio.transforms.MelSpectrogram(sample_rate=8000, n_mels=80, win_length=200, hop_length=80, center=False)
Then here’s how I use it:
x_in.shape == [1,5360]
x_out = transform(x_in)
x_out.shape == [1, 80, 63]
However, based on my (introductory) understanding of the Fourier transform, I thought the output length is supposed to be (input_length - win_length) / hop_length + 1 = (5360 - 200) / 80 + 1 = 65.5
So shouldn’t be either 65 or 66, instead of 63? Or do I just not understand FT?
Thanks,
Kevin |
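One possible explanation, stated as an assumption rather than an official answer: torchaudio.transforms.MelSpectrogram also has an n_fft argument (default 400), and with center=False the framing of the underlying STFT is based on n_fft rather than win_length (the 200-sample window is zero-padded to n_fft). Under that assumption the frame count works out to exactly 63:
# assumed framing: n_frames = (n_samples - n_fft) // hop_length + 1 when center=False
n_samples, n_fft, hop_length = 5360, 400, 80
print((n_samples - n_fft) // hop_length + 1)  # 63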
st182493 | Hello everyone, I am a novice at audio coding. I simply use “torchaudio” to load WAV files whose format is PCM16 with sample rate = 16 kHz and bit depth = 16 bits. In addition, the datatype of the loaded tensor is torch.float32.
My questions are:
(1) What is the mechanism of representing PCM16bits by float32?
(2) Since the original data has already been quantized to 16 bits, a finer representation (>16 bits) cannot specify more detail yet consumes more memory. So what are the necessity and benefit of using float32 to represent PCM16 in “torchaudio”?
Sincerely thanks for your reply! |
st182494 | Hey @Felix_astw , welcome to the forum!
In an audio file containing 16-bit PCM samples, the values are restricted to the range [-32768, 32767]. The conversion from 16 bit to float is done by dividing each sample value by 32768. This results in a signal represented in floating-point values in the range [-1.0, 1.0].
Using float is the canonical way of representing audio samples. There are many advantages, for example: getting full 32-bit precision, normalisation regardless of whether the data in the file is 8, 16, 24 or 32 bits wide, simple representation of clipping, and easy application of operations (e.g. mixing signals is as easy as adding sample values together). In the end, it is also fast to scale/convert to the target bit depth: it’s just a matter of multiplying by the target range. |
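A small sketch of that conversion (and the reverse), just to make the scaling explicit:
import torch

pcm16 = torch.tensor([-32768, -16384, 0, 16384, 32767], dtype=torch.int16)

# int16 -> float32 in [-1.0, 1.0)
waveform = pcm16.to(torch.float32) / 32768.0

# float32 -> int16 (the reverse scaling, with clamping to the valid range)
pcm_back = (waveform * 32768.0).clamp(-32768, 32767).to(torch.int16)

print(waveform)
print(torch.equal(pcm16, pcm_back))  # True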
st182495 | Hi,
I was evaluating the stft function in pytorch vs librosa. I found that the precision is only upto 2 decimal places. I am wondering if that can be improved to atleast 4-5 decimal places.
Also, regarding the speed of execution,
librosa is ~2x faster than pytorch on CPU (Didn't expect this, thought CPU times should be similar.)
pytorch is ~4x faster than librosa on GPU (Expected this.)
Rig: TitanX pascal, 4-core CPU, pytorch 0.5.0a0+8fbab83
Do these numbers look ok? Or is there a possibility I am doing something wrong?
Regards
Nabarun |
st182496 | Solved by NoahDrisort in post #2. |
st182497 | I found an answer from stackoverflow said that “The difference is from the difference between their default bit. NumPy’s float is 64bit by default. PyTorch’s float is 32bit by default.” |
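A quick way to see the effect of the default dtype (a sketch; the exact error magnitude depends on the signal and the STFT parameters):
import torch

x = torch.randn(16000)
window = torch.hann_window(1024)

s32 = torch.stft(x, n_fft=1024, window=window, return_complex=True)
s64 = torch.stft(x.double(), n_fft=1024, window=window.double(), return_complex=True)

print((s32.to(torch.complex128) - s64).abs().max())  # non-zero difference coming from float32 rounding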
st182498 | Hi,
I am trying to train a VQVAE. I used this implementation.
As for the input, I tried various types of input, either taking some part of the original audio or taking that audio and converting it to a 2D matrix and applying minmax normalization. I tried various input types and even different loss functions but the model isn’t able to reconstruct the original audio. I have tried MSE and L1 loss. I have even tried UNet, but none of them seem to work. This is how I load the data:
class data_gen(torch.utils.data.Dataset):
    def __init__(self, files):
        self.data = files

    def __getitem__(self, i):
        tmp = self.data[i]
        x = np.load('/aud_images/'+tmp)
        noise = np.random.normal(0.3, 0.01, (256, 256))
        x_n = x + noise
        x = np.reshape(x, (1, x.shape[0], x.shape[1]))
        x = torch.from_numpy(x).float()
        x_n = np.reshape(x_n, (1, x_n.shape[0], x_n.shape[1]))
        x_n = torch.from_numpy(x_n).float()
        return x_n, x

    def __len__(self):
        return len(self.data)
What should I do? Almost everywhere I read that MSE loss works fine for audio data. Let me know. |
st182499 | As I work with speech synthesis: from what I have seen, people generally have a hard time reconstructing raw waveforms directly, and you lose access to frequency-domain information that is useful for capturing the properties of the audio. I think a good idea would be to first turn it into a more audio-friendly representation such as a Spectrogram or MFCC, or a Mel-Spectrogram if it is speech.
More information could be found regarding this in torchaudio.transforms.Spectrogram and torchaudio.transforms.MFCC.
Then you can use any vocoder to reconstruct it into the raw waveforms.
I hope it helps |
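For example, a minimal sketch of computing a log-mel representation with torchaudio (the parameter values are just common choices, not tuned for this task):
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=1024, hop_length=256, n_mels=80)

waveform = torch.randn(1, 16000)            # stand-in for a loaded audio clip
spec = mel(waveform)                        # shape: [1, 80, n_frames]
log_spec = torch.log(spec.clamp(min=1e-5))  # log compression is common before feeding a VQ-VAE

print(log_spec.shape)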