st82968
It’s not a matter of using that. I don’t know the content of those matrices, but I expect you understand that masking is not backpropagable and thus not learnable. The problem there is that when you compute labels_cpu you are moving labels to the CPU, but it still refers to labels in memory. You have to use labels.cpu().clone().detach(). So depending on what labels[picks>thresh] = self.ignore does, you may not be able to fix it. If you only want to compute the loss over those features, try to mask both the predictions and the labels rather than modifying them in place.
st82969
I just need to change the labels according to the scores of the features. The features that are used to compute the loss are not modified when they are fed to cross_entropy. Since the labels are input tensors, their values should be easy to modify. I changed the associated lines to: scores = F.softmax(logits, dim=1).cpu().clone().detach() and labels_cpu = labels.cpu().clone().detach(). But the problem still exists. How could I make it work, please?
st82970
What about this line: labels[picks>thresh] = self.ignore_lb? You are assigning a value in place there, right?
st82971
Yes, I have just found that without this line the code works. I just need to change some values of the labels to the ignored value. I tried to use labels.requires_grad = False, but it still does not work. Why can I not modify the labels?
st82972
I tried this: labels[picks>thresh] = self.ignore_lb followed by labels = labels.clone().detach(), and it works. I still cannot understand why the values of the training labels cannot be changed.
st82973
Because deep learning is based on computing gradients. If you just manually change the value of a tensor, you cannot compute a gradient for that change because it simply does not exist. The closest thing you can do is Criteria(labels[p>t], scores[p>t]): you can compute the loss over those values, but you cannot modify the tensor unless you apply a continuous function over those values.
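A minimal sketch of that masking idea, with toy shapes standing in for the real logits/labels from the question (not the original code):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 10, requires_grad=True)     # stand-in predictions
labels = torch.randint(0, 10, (8,))
picks = torch.rand(8)
thresh = 0.5

keep = picks <= thresh                               # boolean mask of samples to keep
loss = F.cross_entropy(logits[keep], labels[keep])   # loss only over the kept samples
loss.backward()                                      # gradients flow only through the kept logits

An alternative that stays closer to the original code is to write the ignore value into a cloned copy of the labels (labels = labels.clone(); labels[picks > thresh] = ignore_lb) and rely on the ignore_index argument of cross_entropy.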
st82974
Could you please show me how to fix this problem? I changed labels = labels.clone() to labels = labels.clone().detach(), but it still does not work. Thank you so much.
st82975
PyTorch currently implements sparse tensors as COO matrices, but is there a way to convert these tensors into CSR format (like scipy sparse CSR) while still using PyTorch tensors?
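One possible workaround (a sketch, not an official PyTorch API) is to pull the COO indices and values out of the tensor and hand them to SciPy's CSR constructor:

import torch
from scipy.sparse import csr_matrix

dense = torch.tensor([[0., 1., 0.],
                      [2., 0., 3.]])
coo = dense.to_sparse().coalesce()   # COO layout: a (2, nnz) index tensor plus a values tensor

rows, cols = coo.indices()
vals = coo.values()

# Build a SciPy CSR matrix from the COO data; the result lives outside PyTorch.
csr = csr_matrix((vals.numpy(), (rows.numpy(), cols.numpy())), shape=tuple(coo.shape))
print(csr.indptr, csr.indices, csr.data)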
st82976
print("CUDA available: ", torch.cuda.is_available()) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") CUDA available: True x = torch.tensor([1, 2, 3]).to(device) y = torch.tensor([1, 2, 3]) %%timeit -n 10000 z = x + 1 10000 loops, best of 3: 15.6 µs per loop %%timeit -n 10000 z = y + 1 10000 loops, best of 3: 5.19 µs per loop
st82977
Note that CUDA operations are asynchronously executed, so you would need to time the operations in a manual loop and use torch.cuda.synchronize() before starting and stopping the timer. Also, the workload is tiny, thus the kernel launch times might even be larger than the actual execution time.
st82978
Do you mean something like this?

torch.cuda.synchronize(device=device)  # before starting the timer
begin = time.time()
for i in range(1000000):
    z = x + 1
torch.cuda.synchronize(device=device)  # before ending the timer
end = time.time()
total = (end - begin)/1000000; total
# 1.4210415363311767e-05

begin = time.time()
for i in range(1000000):
    z = y + 1
end = time.time()
total = (end - begin)/1000000; total
# 5.057124853134155e-06

The GPU still takes longer, so should I carry out broadcasting operations on the CPU when using relatively small tensors?
st82979
Like I said, it depends on the workload. If you would like to use only 3 numbers and your tensor is already on the CPU, just use the CPU. Here is a quick comparison:

def time_gpu(size, nb_iter):
    x = torch.randn(size).to('cuda')
    torch.cuda.synchronize()  # before starting the timer
    begin = time.time()
    for i in range(nb_iter):
        z = x + 1
    torch.cuda.synchronize()  # before ending the timer
    end = time.time()
    total = (end - begin)/nb_iter
    return total

def time_cpu(size, nb_iter):
    y = torch.randn(size)
    begin = time.time()
    for i in range(nb_iter):
        z = y + 1
    end = time.time()
    total = (end - begin)/nb_iter
    return total

sizes = [10**p for p in range(9)]
times_gpu = []
for size in sizes:
    times_gpu.append(time_gpu(size, 10000))

times_cpu = []
for size in sizes:
    times_cpu.append(time_cpu(size, 1000))

import numpy as np
import matplotlib.pyplot as plt

fig, axarr = plt.subplots(2, 1)
axarr[0].plot(np.log10(np.array(sizes)), np.array(times_gpu), label='gpu')
axarr[0].plot(np.log10(np.array(sizes)), np.array(times_cpu), label='cpu')
axarr[0].legend()
axarr[0].set_xlabel('log10(size)')
axarr[0].set_ylabel('time in seconds')
axarr[1].plot(np.log10(np.array(sizes)), np.log10(np.array(times_gpu)), label='gpu')
axarr[1].plot(np.log10(np.array(sizes)), np.log10(np.array(times_cpu)), label='cpu')
axarr[1].legend()
axarr[1].set_xlabel('log10(size)')
axarr[1].set_ylabel('log10(time)')

As you can see, the broadcast operation will have an approx. constant overhead up to a size of 10**5. Using bigger tensors is a magnitude faster on the GPU.
st82980
I was trying to build a model with a variable number of layers, depending on how many the user specifies. I saw somewhere (https://github.com/MorvanZhou/PyTorch-Tutorial/blob/master/tutorial-contents/504_batch_normalization.py) that setattr can be used. Is it strictly necessary? For example:

class MyNet(nn.Module): ## 5 conv net FC
    def __init__(self, C, H, W, Fs, Ks, FC):
        super(MyNet, self).__init__()
        self.nb_conv_layers = len(Fs)
        ''' Initialize Conv layers '''
        self.convs = []
        out = Variable(torch.FloatTensor(1, C, H, W))
        in_channels = C
        for i in range(self.nb_conv_layers):
            F, K = Fs[i], Ks[i]
            conv = nn.Conv2d(in_channels, F, K)  # (in_channels, out_channels, kernel_size)
            self.convs.append(conv)  ##
            in_channels = F
            out = conv(out)
        ''' Initialize FC layers '''
        CHW = out.numel()
        self.fc = nn.Linear(CHW, FC)

vs

class MyNet(nn.Module): ## 5 conv net FC
    def __init__(self, C, H, W, Fs, Ks, FC):
        super(MyNet, self).__init__()
        self.nb_conv_layers = len(Fs)
        ''' Initialize Conv layers '''
        self.convs = []
        out = Variable(torch.FloatTensor(1, C, H, W))
        in_channels = C
        for i in range(self.nb_conv_layers):
            F, K = Fs[i], Ks[i]
            conv = nn.Conv2d(in_channels, F, K)  # (in_channels, out_channels, kernel_size)
            setattr(self, f'conv{i}', conv)
            self._set_init(conv)
            self.convs.append(conv)  ##
            in_channels = F
            out = conv(out)
        ''' Initialize FC layers '''
        CHW = out.numel()
        self.fc = nn.Linear(CHW, FC)
st82981
Any submodule that isn’t declared as a direct attribute of the class will not be registered as a submodule of the model, its parameters won’t show up in model.parameters(), and so it won’t get trained. For example, if you do this…

self.convs = []
self.convs.append(conv)

then your model doesn’t know that it has conv as a submodule. I know of three possible fixes:
1. use setattr() as you pointed out.
2. self.add_module(f'conv{i}', conv)
3. replace self.convs = [] with self.convs = nn.ModuleList(). Then treat self.convs like an ordinary python list and anything you append to it will be properly registered.
The third solution is the most elegant in my opinion.
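A hedged sketch of the nn.ModuleList variant (simplified, not the exact model from the question):

import torch
import torch.nn as nn

class ConvStack(nn.Module):
    def __init__(self, in_channels, Fs, Ks):
        super().__init__()
        self.convs = nn.ModuleList()            # everything appended here is registered
        for out_channels, k in zip(Fs, Ks):
            self.convs.append(nn.Conv2d(in_channels, out_channels, k))
            in_channels = out_channels

    def forward(self, x):
        for conv in self.convs:
            x = conv(x)
        return x

model = ConvStack(3, Fs=[8, 16], Ks=[3, 3])
print(len(list(model.parameters())))            # 4: weight + bias for each conv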
st82982
I am initiating a sparse tensor using x = torch.sparse.FloatTensor(100,100,3). I want to fill it up later at different times. How can I do so? I am using this as a representation of a growing graph. Doing x[1,1,:] = torch.Tensor([1,2,3]) throws RuntimeError: sparse tensors do not have strides. Or just doing x[1] returns the same error.
st82983
A sparse tensor in PyTorch is represented as a pair of dense tensors: a tensor of values and a 2D tensor of indices. Since sparse tensors don’t have strides, direct indexing operations such as x[1] are not allowed. The torch.sparse module is currently experimental, so using a dense tensor may be preferable.

torch.randn(2, 3).to_sparse()
"""
tensor(indices=tensor([[0, 0, 0, 1, 1, 1],
                       [0, 1, 2, 0, 1, 2]]),
       values=tensor([ 0.5527, -0.7956, 0.7942, 0.6294, 0.9592, 0.3199]),
       size=(2, 3), nnz=6, layout=torch.sparse_coo)
"""
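If you do want to grow a sparse tensor over time, one possible pattern (a sketch of my own, not a dedicated API) is to keep the index and value tensors yourself and rebuild the COO tensor when needed:

import torch

indices = torch.tensor([[1], [1]])              # shape (sparse_dims, nnz)
values = torch.tensor([[1., 2., 3.]])           # shape (nnz, 3): trailing dense dimension
x = torch.sparse_coo_tensor(indices, values, size=(100, 100, 3))

# later: append another entry at position (2, 5)
indices = torch.cat([indices, torch.tensor([[2], [5]])], dim=1)
values = torch.cat([values, torch.tensor([[4., 5., 6.]])], dim=0)
x = torch.sparse_coo_tensor(indices, values, size=(100, 100, 3))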
st82984
import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
a = [[2., 0.], [1., 1.]]
input = torch.tensor(a, requires_grad=True)
target = torch.tensor([1, 1])
output = loss(m(input), target)
output.backward()
print(input.grad)
# tensor([[ 0.4404, -0.4404],
#         [ 0.2500, -0.2500]])

How is input.grad computed? What’s the formula?
st82985
Hi @laizewei, I am not sure if you are asking what input.grad represents or for the full formula (which would require writing out the full derivation of the network with the chain rule). input.grad gives you the gradients of output w.r.t. input (since you ran output.backward()). Maybe this tutorial could help you gain some intuition about what is happening behind the scenes. Hope this answers your question.
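For this particular setup (LogSoftmax followed by NLLLoss with the default mean reduction), the gradient w.r.t. the input works out to (softmax(input) - one_hot(target)) / N. A small numerical check of the values printed above (not part of the original answer):

import torch
import torch.nn.functional as F

input = torch.tensor([[2., 0.], [1., 1.]])
target = torch.tensor([1, 1])
N = input.size(0)

grad = (F.softmax(input, dim=1) - F.one_hot(target, num_classes=2).float()) / N
print(grad)
# tensor([[ 0.4404, -0.4404],
#         [ 0.2500, -0.2500]])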
st82986
The example code is:

# input_size: N * 1024 * 1
output = torch.squeeze(input)
# output_size: N * 1024

where N is the batch size. When the code runs in single-GPU mode, the output size is correct, i.e. N * 1024. However, when using multiple GPUs (for example, 4 GPUs), the output size is weird: when N is large, e.g. 64, the output size is correct; when N is small, e.g. N=2, the output size becomes 2048… PS: when using output = input.squeeze(-1), the output is always correct.
st82988
torch.squeeze will squeeze all dimensions with size 1:

x = torch.randn(1, 2, 1, 2, 1, 2, 1)
print(x.squeeze().shape)
# torch.Size([2, 2, 2])

In the case of a small batch size, each device might get a chunk where N==1, which will also squeeze the batch dimension. To avoid such errors, I would recommend using your second approach and specifying which dimension should be squeezed.
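A tiny illustration of why specifying the dimension matters when a device receives a chunk with N == 1 (toy shapes, assuming the N * 1024 * 1 layout from the question):

import torch

x = torch.randn(1, 1024, 1)       # chunk with batch size 1
print(x.squeeze().shape)          # torch.Size([1024])    - batch dim is gone
print(x.squeeze(-1).shape)        # torch.Size([1, 1024]) - batch dim is kept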
st82989
My model has its tensors on CUDA, and its state is saved as such. When I load the model state for inference with checkpoint = torch.load(weights_fpath), the model state keeps the same device configuration. This adds overhead to torch.load (in my case 1-2 seconds). Calling checkpoint = torch.load(weights_fpath, "cpu") takes around 1 millisecond, however. I don’t really understand the point of all of this, because when I then call model.load_state_dict(checkpoint["model_state"]), all my tensors end up on the CPU regardless of the device specified in the model state. What’s the point of setting the device of the tensors in the model state, and what’s the use of the map_location argument of torch.load? Is there a way to directly load CUDA tensors when doing model.load_state_dict() that would be faster than loading CPU tensors and then calling model.cuda()?
st82991
The map_location argument remaps the tensors to the specified device. This is useful if your current machine does not have a GPU and you are trying to load a tensor which was saved as a CUDA tensor. From the docs: torch.load() uses Python’s unpickling facilities but treats storages, which underlie tensors, specially. They are first deserialized on the CPU and are then moved to the device they were saved from. If this fails (e.g. because the runtime system doesn’t have certain devices), an exception is raised. However, storages can be dynamically remapped to an alternative set of devices using the map_location argument. You could push the model to the GPU before loading the state_dict. However, the tensors have to be pushed to the device at some point anyway, so I guess you won’t save any time.
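A minimal, self-contained sketch of loading the state_dict straight onto the GPU (a stand-in Linear model instead of the real one from the question):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.nn.Linear(4, 2).to(device)                  # placeholder for the real model
torch.save({'model_state': model.state_dict()}, 'tmp.pth')

checkpoint = torch.load('tmp.pth', map_location=device)   # tensors are remapped to `device`
model.load_state_dict(checkpoint['model_state'])          # copied into the (already CUDA) parameters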
st82992
I’m literally copying this example from the documentation. Everything works (scalars, images etc.) except the graph tab, which shows nothing. PyTorch 1.2, TensorBoard 1.14. Works like a charm in PyTorch 1.1.

import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms

# Writer will output to ./runs/ directory by default
writer = SummaryWriter()

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

model = torchvision.models.resnet50(False)
# Have ResNet model take in grayscale rather than RGB
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

images, labels = next(iter(trainloader))
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
st82993
I’m trying to create my own functor for use in transformations, but it keeps failing with the error: TypeError: 'Resize' object is not callable. This is my example:

import torchvision.transforms.functional as F

class Resize(object):
    def __init__(self, size, interpolation):
        self.size = size
        self.interpolation = interpolation

    def __cal__(self, input):
        return F.resize(input, self.size, self.interpolation)

to_tensor_f = ToTensor()

import PIL.Image as Image
resize_f = Resize(3, Image.BILINEAR)

numpy_tensor = np.random.rand(2, 3)
result_tensor = to_tensor_f(numpy_tensor)
print(result_tensor)

# resize this tensor!
x = resize_f(result_tensor)
print(x)

Thank you all in advance.
st82994
Solved by Shisho_Sama in post #2: Got it, silly me. I mistyped __call__ as __cal__. Fixed it and all is good.
st82995
I have trained a NN with a vector input and scalar output (regression). Now I want to find the global minimum of the NN using gradient descent with PyTorch. I’m new to programming in general, Python specifically, and PyTorch even more specifically. I believe what I’m trying to do must have been done a thousand times before, if not ten thousand times. I’ll be super happy and grateful if anyone could point me to some code somewhere (maybe on GitHub) with an example of what I’m trying to do that I could adjust to my needs.
st82996
Now I want to find the global minimum of the NN using GD with PyTorch. I believe what I’m trying to do must have been done a thousand times before, if not ten thousand times.

Have a look at this example here: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-tensors-and-autograd
It is updating the parameters manually in the step

with torch.no_grad():
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad

to minimize the loss function, but you can also use the SGD optimizer, which is explained here: https://pytorch.org/docs/stable/optim.html#module-torch.optim
If you want to do GD specifically, not SGD, then each batch would basically be your entire training set (if you can fit it into memory). Otherwise, just accumulate the error before you do the manual update.
st82997
Thank you very much! I have changed the code you sent me to fit my problem. My problem is not to train the net parameters, as in the code you sent, but rather to “train” the input. In other words, after having trained my net, I want to find the input that will give the maximum output. The changes I made are as follows:
1. I added requires_grad to the input.
2. I changed the loss to be -output (the output with a minus sign) because I’m looking for the maximum output.
3. Instead of updating the parameters (w1, w2 in the example) I’m updating the input (x in the example).
Here is the code with my changes; it works well. However, what I want to do now is to load my pre-trained net and then, instead of:

y_pred = x.mm(w1).clamp(min=0).mm(w2)

I will have:

y_pred = net(x)

However, this does not work for some reason. Meaning, even after the .backward() call the .grad of x remains None. Note this does not happen if I do y_pred = x.mm(w1).clamp(min=0).mm(w2)! Very strange… I don’t understand why with these .mm calls the .grad of the input is calculated and saved, and with net() it isn’t. Here is what my net looks like (it has 3 hidden layers, the input layer size is 11, the output layer size is 1, and all 3 middle layers have a size of 100):

class non_linear_Net_3_hidden(nn.Module):
    def __init__(self):
        super(non_linear_Net_3_hidden, self).__init__()
        self.fc1 = nn.Linear(input_layer_size, middle_layer_size)
        self.fc2 = nn.Linear(middle_layer_size, middle_layer_size)
        self.fc4 = nn.Linear(middle_layer_size, middle_layer_size)
        self.fc3 = nn.Linear(middle_layer_size, final_layer_size)

    def forward(self, x):
        x = F.sigmoid(self.fc1(x))
        x = F.sigmoid(self.fc2(x))
        x = F.sigmoid(self.fc4(x))
        x = self.fc3(x)
        return x
st82998
I see, so in this case you can take the same approach but go in the opposite direction of the gradient of the input features rather than the weights. I think that’s what you are doing now.

I will have: y_pred = net(x) however, this does not work for some reason. meaning, even after the ‘.backward()’ command the ‘.grad’ of ‘x’

Don’t know why it wouldn’t work, but you sure do have a lot of x’s there:

x = F.sigmoid(self.fc1(x))
x = F.sigmoid(self.fc2(x))
x = F.sigmoid(self.fc4(x))
x = self.fc3(x)
return x

To disentangle that a bit, I would make sure that you keep your input “x” distinct from the rest and see if that solves the problem.
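For reference, a hedged sketch of the optimize-the-input pattern being discussed (a stand-in network instead of the one from the thread; one common reason for x.grad staying None is that x is no longer a leaf tensor, e.g. because it was created or moved to another device after setting requires_grad):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(11, 100), nn.Sigmoid(), nn.Linear(100, 1))  # stand-in for the trained net
for p in net.parameters():
    p.requires_grad_(False)            # keep the trained weights fixed

x = torch.randn(1, 11, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    loss = -net(x).sum()               # minimizing -output maximizes the output
    loss.backward()                    # x.grad is populated because x is a leaf with requires_grad=True
    optimizer.step()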
st82999
Thanks! By ‘disentangle’, do you mean something like this?

def forward(self, x):
    y = F.sigmoid(self.fc1(x))
    y = F.sigmoid(self.fc2(y))
    y = F.sigmoid(self.fc4(y))
    y = self.fc3(y)
    return y

I have tried it; it didn’t help. Here is something I’ve discovered which may shed some light on this problem: the output also remains with grad=None, even though it also has requires_grad=True. …
Edit: I was given some more good advice and decided to do it like this:

grad = torch.autograd.grad(loss, input)
grad = torch.tensor(grad[0])
input -= learning_rate * grad

which works, only now I get another problem: my input needs to go through a softmax function (it needs to be all positive and sum to 1), and for some reason it always becomes a one-hot vector (one element equals 1 and all the others 0), which makes the softmax output NaN. Every time it’s a different element that becomes 1 with all the others being zero.
st83000
Ok. thank you very much for your help. Everything works now. Have a great autumn.
st83001
It seems you had solved the problem. Could you show the solution in a brief way?
st83002
I have resnet-18 which takes in an image , a cnn vocal encoder which takes in numpy vector . The outputs from them are concatenated and stacked into a sequence. This sequence is passed to an LSTM which gives me the output and then back-propagation through the entire above network. The issue is that the gradients quickly decrease to very low values like 10e-10 or 10e-11 after around only 50-100 sequences(Each sequence consists of 150 outputs from resnet and the vocal encoder). And there is even no learning . The training accuracy is equivalent to picking a random number and the loss never goes down. If I shutoff the resnet or the vocal encoder separately with the LSTM the same is the case. After looking into the data and the inputs I am pretty sure there is a bug in my code and some component is not being back propagated through. But I am not able to figure out which one. Help is appreciated ! """----------------------------------------------------Imports-------------------------------------------------------""" import torch import torch.nn as nn from torch.autograd import Variable import torch.nn.functional as F import torchvision.models as models from matplotlib import pyplot as plt import numpy as np import h5py from PIL import Image from sklearn.externals import joblib import shutil import os import random import pickle import time import gc '-----------------------------------------------------Vocal Net--------------------------------------------------------' class VocalNet(nn.Module): def __init__(self): super(VocalNet, self).__init__() self.conv1 = nn.Conv1d(in_channels=1, out_channels=20, kernel_size=40, stride=1, padding=20) self.conv2 = nn.Conv1d(in_channels=20, out_channels=40, kernel_size=40, stride=1, padding=20) def forward(self, vocal_input): x = F.leaky_relu(F.max_pool1d(self.conv1(vocal_input), 2)) x = F.leaky_relu(F.max_pool1d(self.conv2(x), 5)) x = x.view(vocalnet_output_size, -1) return x '-----------------------------------------------------LSTM Decoder-----------------------------------------------------' class DecoderLSTM(nn.Module): def __init__(self, input_size, hidden_size, num_layers, no_of_emotions): """Set the hyper-parameters and build the layers.""" super(DecoderLSTM, self).__init__() self.lstm = nn.LSTM(input_size, hidden_size, num_layers) self.linear = nn.Linear(hidden_size, seq_len) def forward(self, features): """Decode Vocal feature vectors and generates emotions""" hiddens, _ = self.lstm(features) outputs = self.linear(hiddens[-1]) return outputs '------------------------------------------------------Hyperparameters-------------------------------------------------' using_vision_network = True using_vocal_network = True batch_size = 2 mega_batch_size = 2 hidden_size = 256 num_layers = 2 no_of_emotions = 6 seq_len = 150 use_CUDA = True no_of_epochs = 1000 use_pretrained = False test_mode = False show_image = True '----------------------------------------------------------------------------------------------------------------------' resnet = models.resnet18(pretrained=False).cuda() # Define resnet18 model modules = list(resnet.children())[:-1] # delete the last fc layer. resnet = nn.Sequential(*modules) '-----------------------------------Parameters NOT subject to change---------------------------------------------------' len_waveform = 320 # This is the length of a 1 frame long waveform vector vocalnet_output_size = 1280 # VocalNet outputs a 1280X1 feature vector resnet18_output_size = 512 # Resnet Outputs a 1X512X1X1 feature vector. 
if using_vision_network and using_vocal_network: LSTM_input_size = vocalnet_output_size+resnet18_output_size elif using_vision_network: LSTM_input_size = resnet18_output_size else: LSTM_input_size = vocalnet_output_size Vocal_encoder = VocalNet() # Define the vocalnet model lstm_decoder = DecoderLSTM(input_size=LSTM_input_size, hidden_size=hidden_size, num_layers=num_layers, no_of_emotions=no_of_emotions) # Define the shared LSTM Decoder. curr_epoch = 0 total = 0 correct_5 = 0 correct_2 =0 '----------------------------------------------------------------------------------------------------------------------' criterion = nn.MSELoss().cuda() params = list(lstm_decoder.parameters()) +list(resnet.parameters())+ list(Vocal_encoder.parameters()) optimizer = torch.optim.Adam(params, lr=0.01) '------------------------------------------Saving Intermediate Models--------------------------------------------------' def save_checkpoint(state, is_final, filename='resnet18_vocalnet_MOSI_withLSTM_sample.pth.tar'): torch.save(state, filename) if is_final: shutil.copyfile(filename, 'model_final.pth.tar') '-------------------------------------------Setting into train mode----------------------------------------------------' lstm_decoder.zero_grad() Vocal_encoder.zero_grad() Vocal_encoder.cuda() lstm_decoder.cuda() lstm_decoder.train() Vocal_encoder.train() resnet.train() '----------------------------------------------------------------------------------------------------------------------' combined_seq_total = "" target_seq_total = "" directory = "./all_mosi_sequences/train" prev_loss = 0 sequences = {} i = 0 forbidden = ["seq_c5xsKMxpXnc"] for files in os.listdir(directory): if files[0:15] not in forbidden: sequences.update({i:files}) i += 1 # you can't shuffle a dictionary, but what you can do is shuffle a list of its keys keys = list(sequences.keys()) if use_pretrained: checkpoint = torch.load('resnet18_vocalnet_MOSI_withLSTM.pth.tar') lstm_decoder.load_state_dict(checkpoint['lstm_decoder']) Vocal_encoder.load_state_dict(checkpoint['Vocal_encoder']) resnet.train().load_state_dict(checkpoint['resnet18']) optimizer.load_state_dict(checkpoint['optimizer']) use_pretrained = False random.shuffle(keys) for epoch in range(curr_epoch, no_of_epochs): correct_5 = 0 correct_2 =0 total_loss = 0 if(test_mode): break lstm_decoder.zero_grad() Vocal_encoder.zero_grad() resnet.zero_grad() input_list = [(key, sequences[key]) for key in keys] for j in range(0, len(input_list), batch_size): if j%mega_batch_size==0: # print("GRADIENT ZEROED") total_loss = 0 optimizer.zero_grad() lstm_decoder.zero_grad() Vocal_encoder.zero_grad() resnet.zero_grad() if ((len(sequences) - j) > batch_size): input_batch = input_list[j:j+batch_size] else: break for batch in range(batch_size): with open(directory+"/"+str(input_batch[batch][1]), 'rb') as f: data = pickle.load(f) target_seq = np.array(data[0], dtype = np.float32) vocal_seq = np.array(data[1], dtype = np.float32) vision_seq = data[2] vision_seq_i3 = np.empty((seq_len,3,224,224), dtype=np.float32) vocal_seq_i1 = np.empty((seq_len, 1,320), dtype = np.float32) for seq in range(seq_len): file_name = vision_seq[seq] img = Image.open(".."+file_name[7:]) pixels = np.array(img,dtype = np.uint8)/255.0 mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) pixels = std * pixels + mean if show_image: plt.imshow(pixels, interpolation='nearest') plt.show() show_image = False pixels = pixels.transpose(2, 0 ,1) vision_seq_i3[seq,:,:,:] = pixels vocal_seq_i1[seq,:,:] = 
vocal_seq[seq] vision_seq_i4 = Variable(torch.FloatTensor(vision_seq_i3).cuda()).cuda() vision_seq_o =resnet(vision_seq_i4).view(seq_len, 1, resnet18_output_size) target_seq_o = Variable(torch.from_numpy(target_seq)).view(1, seq_len).cuda() vocal_seq_i = Variable(torch.from_numpy(vocal_seq_i1)).cuda() vocal_seq_o = Vocal_encoder(vocal_seq_i) vocal_seq_o = vocal_seq_o.view(seq_len, 1, vocalnet_output_size) if using_vision_network and using_vocal_network: combined_seq_i = torch.cat((vocal_seq_o, vision_seq_o), 2).cuda() elif using_vision_network and not using_vocal_network: combined_seq_i = vision_seq_o else: combined_seq_i = vocal_seq_o if batch == 0: combined_seq_total = combined_seq_i target_seq_total = target_seq_o else: combined_seq_total = torch.cat((combined_seq_total, combined_seq_i), 1) target_seq_total = torch.cat((target_seq_total, target_seq_o), 0) # print("DONE" + str(batch)) # print(target_seq_total.size()) # print(combined_seq_total.size()) lstm_output = lstm_decoder(combined_seq_total) loss = criterion(lstm_output, target_seq_total) # print(lstm_output) # print(target_seq_total) loss.backward() print(list(resnet.train().parameters())[-1].grad[0]) print(list(resnet.parameters())[-1][0]) # print(list(Vocal_encoder.parameters())[-1].grad) # print(list(lstm_decoder.parameters())) predicted = lstm_output.data.cpu().numpy() actual = target_seq_total.data.cpu().numpy() predicted_5 = np.floor((predicted+3)*4.9999/6.0) actual_5 = np.floor((actual+3)*4.9999/6.0) predicted_2 = np.floor((predicted+3)*1.9999/6.0) actual_2 = np.floor((actual+3)*1.9999/6.0) correct_5 += (predicted_5 == actual_5).sum() correct_2 += (predicted_2 == actual_2).sum() total_loss += loss.data[0] if (j+2)%mega_batch_size==0: total_frames = (j+2)*seq_len print('Training -- Epoch [%d], Sample [%d], Average Loss: %.4f, Accuracy: %.4f, Accuracy-2: %.4f' % (epoch+1, j+2, 2*total_loss/mega_batch_size, (100.0*correct_5)/total_frames,(100.0*correct_2)/total_frames)) if (j+2)%mega_batch_size==0: optimizer.step()
st83003
It definitely sounds like your gradients aren’t right. I haven’t looked through your code, but in general, my advice would be to break it up into smaller functions that you can test individually. Good luck!
st83004
Hi Richard. Yes I did that and the update is that there is definitely a problem in the LSTM. As an experiment I froze the LSTM gradients only and directly allowed the output after the LSTM with to take gradients with the resnet and the vocalnet. And everything was normal. So the resnet was even able to learn the initialization of the LSTM and modify it’s filters according to the output. The gradients also didn’t decrease to the above values and my training accuracy increased every epoch . Now I’m not able to figure out what could possibly be wrong with my LSTM. It seems pretty simple.
st83005
@apaszke @smth Could you please help ? I have been trying a LOT. Any directions would be great
st83006
Yes, the issue was with the normalization of the input data. I switched from using my normalization function to torch’s default image normalization and everything was fine.
st83007
Hello, everyone! In my project, I need to know the indices of the sampled data points in the training set, but I don’t know how to do this. I tried an alternative below by setting shuffle to False, in which case I wouldn’t need to know the data indices because the sampled data should simply have the same order as in the training set. But it seems the dataloader still reorders the samples.

sampler = torch.utils.data.DataLoader(
    trainset,
    batch_size=args.train_batch,
    shuffle=False,
    num_workers=args.workers
)
sampler = iter(sampler)
for i in range(len(trainloader)):
    inputs, targets = next(sampler)
    print(torch.sum(torch.abs(
        inputs[0] - trainset[i * args.train_batch][0]
    )))

If the dataloader returned data exactly in the order of the training set, this snippet should print 0s, but it prints non-zero values.
st83009
Hello! The best way would be to create your own dataset class. In it you could return your images (or whatever data you have) plus the index for that data item. Try this and report back if you have a hard time creating your dataset.
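A minimal sketch of such a wrapper (hypothetical names; it simply forwards to an existing dataset and also returns the index):

import torch

class IndexedDataset(torch.utils.data.Dataset):
    def __init__(self, dataset):
        self.dataset = dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        data, target = self.dataset[idx]
        return data, target, idx        # the index travels with the sample

# usage (trainset / args are the objects from the question):
# loader = torch.utils.data.DataLoader(IndexedDataset(trainset), batch_size=args.train_batch, shuffle=True)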
st83010
Hello Everyone, I am a newcomer to PyTorch, and I found this project very interesting. I am interested in knowing if there is a testing/QA process used by the PyTorch project, and if there is, how one could contribute to it. I have some experience with PyTest, if it is relevant.
st83011
You can find the tests here. A good start to get familiar with them could be to have a look in test_nn.py. Please let us know if you have some ideas on how to improve them.
st83012
Is it possible to use one DataLoader, with a custom HDF5 dataset, from one HDF5 file to do a train/val/test split without having to open the HDF5 file multiple times?
st83013
Why is it that the following returns a tensor with shape [10, 4]?

test = torch.ones(10, 4)
print(test[torch.arange(10)][torch.arange(4)].shape)

Intuitively it should return the entire original tensor, test.
st83014
You are indexing in dim0 twice. This code would yield the same result:

test = torch.randn(10, 4)
a = test[torch.arange(10)]
b = a[torch.arange(4)]
print((test[:4] == b).all())
st83015
If you want to use an index tensor (e.g. [0, 1]) for all elements in dim0, this would work:

test = torch.randn(10, 4)
idx = torch.tensor([0, 1])
test[:, idx]
st83016
But what I want is to select certain indices for each dimension. That is, if I were to do

test = torch.ones(10, 4)
print(test[torch.arange(9), torch.arange(3)])

then it would print a 2-D tensor of ones of size [9, 3]; that is, the first 9 rows and, for each of those rows, the first 3 elements. What would be the equivalent functionality? The above code returns: IndexError: shape mismatch: indexing tensors could not be broadcast together with shapes [9], [3]
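For what it’s worth, one way such a block selection can be written is plain slicing, or index tensors shaped so that broadcasting pairs every row index with every column index (a sketch of mine, not from the thread):

import torch

test = torch.ones(10, 4)
print(test[:9, :3].shape)                                               # torch.Size([9, 3])
print(test[torch.arange(9)[:, None], torch.arange(3)[None, :]].shape)   # torch.Size([9, 3])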
st83017
I want to have the standard LSTM/GRU/RNN set up but swap the linear function with a convolution. Is that possible to do in Pytorch in an clean and efficient manner? Ideally it still works with packing, varying sequence length etc. Small sample code of a trivial way to pass data through it would be super useful like: # Based on Robert Guthrie tutorial import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from pdb import set_trace as st torch.manual_seed(1) def step_by_step(net,sequence,hidden): ''' an example of an LSTM processing all the sequence one token at a time (one time step at a time) ''' ## process sequence one element at a time print() print('start processing sequence') for i, token in enumerate(sequence): print(f'-- i = {i}') #print(f'token.size() = {token.size()}') ## to add fake batch_size and fake seq_len h_n, c_n = hidden # hidden states, cell state processed_token = token.view(1, 1, -1) # torch.Size([1, 1, 3]) print(f'processed_token.size() = {processed_token.size()}') print(f'h_n.size() = {h_n.size()}') #print(f'processed_token = {processed_token}') #print(f'h_n = {h_n}') # after each step, hidden contains the hidden state. out, hidden = lstm(processed_token, hidden) ## print results print() print(out) print(hidden) def whole_seq_all_at_once(lstm,sequence,hidden): ''' alternatively, we can do the entire sequence all at once. the first value returned by LSTM is all of the hidden states throughout #the sequence. The second is just the most recent hidden state (compare the last slice of "out" with "hidden" below, they are the same) The reason for this is that: "out" will give you access to all hidden states in the sequence "hidden" will allow you to continue the sequence and backpropagate, by passing it as an argument to the lstm at a later time Add the extra 2nd dimension ''' h, c = hidden Tx = len(sequence) ## concatenates list of tensors in the dim 0, i.e. stacks them downwards creating new rows sequence = torch.cat(sequence) # (5, 3) ## add a singleton dimension of size 1 sequence = sequence.view(len(sequence), 1, -1) # (5, 1, 3) print(f'sequence.size() = {sequence.size()}') print(f'h.size() = {h.size()}') print(f'c.size() = {c.size()}') out, hidden = lstm(sequence, hidden) ## "out" will give you access to all hidden states in the sequence print() print(f'out = {out}') print(f'out.size() = {out.size()}') # (5, 1, 25) ## h_n, c_n = hidden print(f'h_n = {h_n}') print(f'h_n.size() = {h_n.size()}') print(f'c_n = {c_n}') print(f'c_n.size() = {c_n.size()}') if __name__ == '__main__': ## model params hidden_size = 6 input_size = 3 lstm = nn.LSTM(input_size=input_size,hidden_size=hidden_size) ## make a sequence of length Tx (list of Tx tensors) Tx = 5 sequence = [torch.randn(1, input_size) for _ in range(Tx)] # make a sequence of length 5 ## initialize the hidden state. hidden = (torch.randn(1, 1, hidden_size), torch.randn(1, 1, hidden_size)) #step_by_step(lstm,sequence,hidden) whole_seq_all_at_once(lstm,sequence,hidden) print('DONE \a') Cross posted: https://stackoverflow.com/questions/57511784/how-does-one-implement-convolutional-grus-lstms-rnns-in-pytorch 3 How to implement convolutional GRUs/LSTMs/RNNs
st83018
I want to apply one fully connected layer first and then start applying convolutions. But I noticed that the 3D structure might be lost. Is there a way to recover that 3D structure and apply 2D convolutions, or am I doomed to only apply 1D convolutions thereafter? Cross posted: https://forums.fast.ai/t/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer-or-a-flatten-layer/52489 How do I recover the 3D structure of a layer after a Fully Connected layer (or a flatten layer)? https://ai.stackexchange.com/questions/13950/how-do-i-recover-the-3d-structure-of-a-layer-after-a-fully-connected-layer
st83019
Hi,
You can unflatten after the fully connected layer:

original_size = input.size()
flat_input = input.view(original_size[0], -1)
flat_output = linear(flat_input)
output = flat_output.view(original_size)
st83020
I want to build an unflatten layer. How would one do that? E.g. the flatten layer:

# mini class to add a flatten layer to the ordered dictionary
class Flatten(nn.Module):
    def forward(self, input):
        return input.view(input.size(0), -1)

From your code I assume the only way is to do this:

# mini class to add an unflatten layer to the ordered dictionary
class UnFlatten(nn.Module):
    def forward(self, flat_output, original_size):
        return flat_output.view(original_size)
st83021
Shouldn’t torch.nn.modules.module.Module be something like torch.nn.modules.Module, or maybe just torch.nn.modules? Why are there multiple occurrences of the same name in the hierarchy? Similarly, there is torch.nn.modules.activation.torch. Isn’t this a bug that needs to be fixed?
st83022
I’m pretty sure the hierarchy is for encapsulating certain functions. For example, with torch.nn.modules.module, folder modules contains module.py which in itself contains a Module class. Feel free to check out the repository as it provides much better context as to why the functions are organized in the manner they are: https://github.com/pytorch/pytorch/tree/master/torch/nn/modules 1
st83023
I know PyTorch DistributedDataParallel uses the torch.distributed.Reducer class to ensure the different GPUs within one node finish their gradient computation before the ring all-reduce starts. But how is it ensured that the other nodes have finished their gradient computation? For example, suppose there are three nodes. The first node has finished its gradient computation and is ready to start the all-reduce. How does the first node know whether the other nodes have finished their gradient computation?
st83024
a = torch.rand(2, 3)
b = torch.rand(4, 3)
c = b.expand_as(a)
print(c)

a = torch.rand(2, 3)
b = torch.rand(2, 4)
c = b.expand_as(a)
print(c)

I don’t know why this does not work. What is the right way to figure it out?
st83025
Hello @ronghui, from the documentation: Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()). The .expand() operation involves some broadcasting semantics. I think you cannot “expand” a larger tensor to a smaller one, so the code snippet above will not work. If I am wrong, please correct me. Thanks.
st83026
So you are saying two tensors must be “broadcastable” before we can apply expand_as on them?
st83027
After some experimenting, it is just my own view that the .expand() operation may involve broadcasting semantics.
st83028
Printing out a and b makes this easier to understand. Example 1) output:

a: tensor([[0.9619, 0.0384, 0.7012],
           [0.5561, 0.3637, 0.9272]])
b: tensor([[0.5986, 0.2582, 0.6261],
           [0.6928, 0.9175, 0.6737],
           [0.9951, 0.8568, 0.6015],
           [0.7922, 0.5019, 0.8162]])

The reasons are: 1) the size of b is bigger than a’s, so you cannot expand b by a; 2) the dimensions do not match. To output a different c, you can change the size of b to (2, 2, 3), for example, as shown below:

a = torch.rand(2, 3)
b = torch.rand(2, 2, 3)
print('a:', a)
print('b:', b)
c = a.expand_as(b)
print('c:', c)

outputs:

a: tensor([[0.4748, 0.5521, 0.7741],
           [0.0785, 0.2785, 0.5222]])
b: tensor([[[0.7777, 0.3046, 0.8019],
            [0.7398, 0.1424, 0.6398]],
           [[0.9034, 0.8937, 0.8674],
            [0.1737, 0.3192, 0.4451]]])
c: tensor([[[0.4748, 0.5521, 0.7741],
            [0.0785, 0.2785, 0.5222]],
           [[0.4748, 0.5521, 0.7741],
            [0.0785, 0.2785, 0.5222]]])

Example 2) is the same problem as example 1.
st83029
I know we can write loss.backward() to run the gradient computation. But when I read the source code for backward, I am confused. I added some print statements to the PyTorch Python source code to trace the backward process. Basically, loss.backward() calls the backward function in torch/autograd/__init__.py and goes into the Variable._execution_engine.run_backward function, which is a C++ function. When we use DataParallel, the code eventually calls the torch.nn.parallel.Gather.backward function and the torch.nn.parallel.Broadcast.backward function. How are these two classes called? In addition, Gather.backward calls Scatter.apply, and Scatter.apply calls Scatter.forward. How does that work? The Scatter class inherits from the torch.autograd.Function class, and Function is built by a Python metaclass so that the resulting class inherits from the BackwardCFunction class. So, in my opinion, if we call Scatter.apply, the Scatter.backward function should be called. But why isn’t it?
st83030
Because there is a static apply function that is called when you do Scatter.apply() and a non-static one that is called on an instance of scatter: Scatter().apply().
st83031
Is the static apply function from the _C._FunctionBase class? I mean, is it THPFunction_apply? Thanks for your reply!
st83032
Interesting! And I have a final question: could you tell me where the non-static apply function is called?
st83033
Sure. In the engine, when a new task can be executed here. It first does some checking and input/output preparation here. It then runs all the required hooks, then uses the operator() function on the actual Node (previously called Function) here. This is routed to the main Node class here, which calls the apply function of the Node, which is a PyNode in your case as it is implemented in Python. And the PyNode gets and calls the .apply attribute from the class instance here. Looking at it this way, there are a few levels of indirection indeed.
st83034
I tried to make a recursive network. It is composed of convolution part and fully connected part, and would use previous output. The shape looks like this: RecursiveModel( (conv_part): Sequential( (conv_0): Conv1d(1, 4, kernel_size=(128,), stride=(1,), padding=(32,)) (conv_1): Conv1d(4, 4, kernel_size=(128,), stride=(1,), padding=(32,)) ) (full_part): Sequential( (full_conn_0): Linear(in_features=648, out_features=377, bias=True) (acti_lrelu_0): LeakyReLU(negative_slope=0.01) (full_conn_1): Linear(in_features=377, out_features=219, bias=True) (acti_lrelu_1): LeakyReLU(negative_slope=0.01) (full_conn_2): Linear(in_features=219, out_features=128, bias=True) (acti_lrelu_2): LeakyReLU(negative_slope=0.01) ) ) The forward part is: def forward(self, x, prev_output): input_shape = x.size() assert(len(input_shape) == 3) assert(input_shape[0] == 1) # batch == 1 assert(input_shape[1] == 1) # channel == 1 assert(input_shape[2] == self.frame_size) conv_output = self.conv_part.forward(x) conv_output_plain = conv_output.view(1, self.conv_output_shape[0]*self.conv_output_shape[1]) full_input = torch.cat( (conv_output_plain, prev_output), 1 ) result = self.full_part.forward(full_input) return result Where previous output will be concatenated with convolution output, and sent into fully connected part. However, it would claim an error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. I also tried to detach previous output: full_input = torch.cat( (conv_output_plain, prev_output.detach()), 1 ) This would run, but the error never converge. At last, this is the training part of code, I cut input sequence into segments and each time run one segment: def do_training(): i_iter = 0 while True: try: i_dataset = random.randrange(0, num_dset) dset_input = all_input[i_dataset] dset_refout = all_refout[i_dataset] num_frame = len(dset_input) assert(num_frame == len(dset_refout)) prev_output = zero_output for i_frame_start in range(0, num_frame, options.batch_sz): i_iter += 1 optimizer.zero_grad() loss = 0 for j in range(options.batch_sz): i_frame = i_frame_start + j if i_frame >= num_frame: break pseudo_batch_input[0][0] = dset_input[i_frame] / 60.0 + 1.0 # normalize to -1 ~ 1 pseudo_batch_refout[0] = dset_refout[i_frame] curr_output = model.forward(pseudo_batch_input.to(device), prev_output) prev_output = curr_output curr_loss = cri( curr_output, pseudo_batch_refout.to(device) ) if math.isnan(float(curr_loss)): pdb.set_trace() continue loss += curr_loss curr_loss.backward() # run backward for each batch step loss_stat_re = loss_stat.record(float(loss)) if loss_stat_re is not None: line = str(i_iter) + "\t" + "\t".join(map(str, loss_stat_re)) logger.info(line) #loss.backward() optimizer.step() if i_iter > options.n_iter: return except KeyboardInterrupt: print("early out by int at iter %d" % i_iter) return
st83036
Hi, the problem is that prev_output is linked to the previous forward you did. So if you call backward after the new forward, it will try to backward through the previous forward as well, hence the error you see. You can use retain_graph=True when you call .backward() to avoid this issue, but then you will backward all the way to the first forward you made (and will most likely OOM quite quickly). If you don’t want gradients to flow back into the previous forward, you should use .detach() on prev_output to stop the gradient.
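A tiny self-contained illustration of the detach() option (toy model, not the network from the question):

import torch

model = torch.nn.Linear(4, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
prev_output = torch.zeros(1, 4)

for step in range(10):
    opt.zero_grad()
    out = model(torch.randn(1, 4) + prev_output)
    loss = out.pow(2).mean()
    loss.backward()                  # only the current segment is backpropagated
    opt.step()
    prev_output = out.detach()       # cut the graph so the next backward stays local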
st83037
Thanks! I will try to use retain_graph=True as this seems like what I want. I will keep segments short to avoid too much memory use.
st83038
Hi, Consider the following example: idxs = [[0, 1, 2, 1], [3, 2, 1, 3]] scores = [[0.5000, 0.8000, 0.5000, 0.5000], [0.5000, 0.5000, 0.5000, 0.9000]] Within a batch of idxs, I would like to only retain values with the largest corresponding score entry and replace the rest with -1. So I would expect an output: out = [[ 0, 1, 2, -1], [ -1, 2, 1, 3]] How can I achieve something like this? Thanks
st83039
Hi, Your output does not match your description I think. Don’t you expect: [[ -1, 1, -1, -1], [-1, -1, -1, 3]] ?
st83040
Hi albanD, for the first batch [0, 1, 2, 1] only the 1s are duplicated, so the index with the larger score remains and the other 1 is replaced by -1. Similarly, in the second batch [3, 2, 1, 3] the 3s are duplicated, so only the 3s with the smaller scores are replaced.
st83041
Oh, I missed the part about duplicates in your explanation. I’m afraid there is no specialized function to do this, so you will have to do it by hand, checking each index.
st83042
Thanks for the reply. I do have an idea of how I want to implement this, but it is limited to just one batch. If I flatten all batches and then use that function, the search space for comparison will be too large, since each duplicate value will be compared with all the other values in the flattened tensor. Is there a way I can apply a function to all batches individually?
st83043
You can do an outer for loop and just look at input = full_input.select(0, batch_idx) every time. Note that select() does not copy memory, so any in-place change of input will be reflected in full_input!
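A hedged sketch of that per-batch loop for the example in the question (my own implementation, not a built-in function):

import torch

idxs = torch.tensor([[0, 1, 2, 1], [3, 2, 1, 3]])
scores = torch.tensor([[0.5, 0.8, 0.5, 0.5], [0.5, 0.5, 0.5, 0.9]])

out = idxs.clone()
for b in range(idxs.size(0)):
    row, sc = idxs[b], scores[b]
    for v in row.unique():
        mask = row == v
        if mask.sum() > 1:                                        # only duplicates are touched
            keep = sc.masked_fill(~mask, float('-inf')).argmax()  # duplicate with the largest score
            out[b][mask] = -1
            out[b][keep] = v
print(out)
# tensor([[ 0,  1,  2, -1],
#         [-1,  2,  1,  3]])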
st83044
I was hoping to avoid using a for loop, but I guess it is inevitable. Thanks for all your help.
st83045
I am confused about which device a tensor will be created on when creating a new tensor. I am using exactly the same piece of code in two different projects. In one project, with self.priors = torch.Tensor(prior_data).view(-1, 4), self.priors ends up on the GPU. But in the other project, self.priors is on the CPU. What’s the reason? The two pieces of code are exactly the same.
st83046
This is most likely because either set_default_tensor_type or set_default_dtype has been called in one project and not in the other.
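A short illustration of how the default tensor type changes the placement of newly created tensors (needs a machine with a GPU):

import torch

print(torch.Tensor([1., 2.]).device)                  # cpu with the default settings
torch.set_default_tensor_type(torch.cuda.FloatTensor)
print(torch.Tensor([1., 2.]).device)                  # cuda:0 after changing the default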
st83048
Hi, This happens because log is only defined for strictly positive inputs. And so we return -inf for all values <= 0 as it is the limit at 0.
st83049
I’ve been trying to use Dask to parallelize the computation of trajectories in a reinforcement learning setting, but the cluster doesn’t appear to be releasing the GPU memory, causing it to OOM. I’m working around this problem currently, but I’d love to better understand why this happens. I’ve reduced the problem to a simpler test case: import multiprocessing as mp import torch import torch.nn import torch.optim import itertools import time from typing import Tuple big_number = 10000 class HugeModel(torch.nn.Module): def __init__(self): super().__init__() self.lin1 = torch.nn.Linear(big_number, big_number) self.lin2 = torch.nn.Linear(big_number, 1) def forward(self, t1: torch.Tensor): return self.lin2(self.lin1(t1)) def create_huge_model(): huge_model = HugeModel() training_x = torch.randn(10, big_number) training_y = torch.sum(training_x, dim=-1) optimizer = torch.optim.Adam(huge_model.parameters()) for i in range(4): optimizer.zero_grad() loss = torch.nn.SmoothL1Loss()(huge_model(training_x).squeeze(dim=-1), training_y) print(f'Training... Iteration {i}: {loss.item()}') loss.backward() optimizer.step() return huge_model def do_some_inference(tup: Tuple[torch.nn.Module, torch.Tensor]): with torch.no_grad(): model, batch = tup result = model(batch).cpu().numpy() del tup del model del batch clean_up_the_pool() return result def clean_up_the_pool(*args, **kwargs): if torch.cuda.is_available(): torch.cuda.empty_cache() if __name__ == '__main__': pool_size = 40 mp.set_start_method('spawn') pool = mp.Pool(pool_size) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") big_model = create_huge_model().to(device) with torch.no_grad(): batches = torch.randn(2, big_number).to(device) batches_list = [batches[i:i+1, :] for i in range(2)] for i in range(30): work = zip(itertools.repeat(big_model), batches_list) print("Doing some inference with multiprocessing...") results = pool.map(do_some_inference, work) print("Waiting...") time.sleep(5) print("Cleaning up") pool.map(clean_up_the_pool, range(pool_size * 4)) print("Waiting...") time.sleep(5) pool.close() Essentially, if I create a large pool (40 processes in this example), and 40 copies of the model won’t fit into the GPU, it will run out of memory, even if I’m computing only a few inferences (2) at a time. nvidia-smi shows that even after the pool.map completes, the process still retains its allocation of around 500 MB of GPU memory, even though I’ve tried my best to clear it with torch.cuda.empty_cache(). This results in an OOM error when another process in the pool tries to get its slice of the GPU. nvidia-smi starts filling up with these processes that aren’t actually doing anything, but taking up memory: image.jpg1126×730 272 KB Is there any way, short of closing the pool, to get PyTorch to release the memory that it doesn’t need anymore, at least until it gets a task from pool.map? I don’t want to close the pool, because I’ve had problems with launching a Dask LocalCluster after messing around with CUDA in the parent process. Closing and re-opening a pool also seems hacky.
st83050
Hi, these 500MB are most likely just the memory used by the CUDA initialization, so there is no way to remove it unless you kill the process. It seems that the model is only stored in your first process (34296) and the others are using it as expected; it is just the CUDA initialization state that is taking a lot of memory.
st83051
What’s the difference between padding and output_padding in torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')? I am trying to apply it to an input of 256×32×32 to obtain an output of 256×64×64 with filter size 2×2 and stride 2, but I couldn’t understand the role of the different padding arguments here.
st83052
Hi, the Theano tutorial on convolution should have all the information you need here. It is quite clear, with a lot of illustrations.
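For the concrete 256×32×32 → 256×64×64 case, the documented output-size formula is H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1, so a 2×2 kernel with stride 2 and no padding already works; a quick check (my own example):

import torch
import torch.nn as nn

deconv = nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2)   # padding=0, output_padding=0
x = torch.randn(1, 256, 32, 32)
print(deconv(x).shape)   # torch.Size([1, 256, 64, 64]); (32-1)*2 + (2-1) + 1 = 64

output_padding only adds extra size to one side of the output, which is useful to disambiguate the output size when stride > 1.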
st83053
Hi, when I use num_workers > 0, I found that it also creates multiple TensorBoard event files. However, it only writes the information to the first created file. Since it doesn’t show any error, I’m not sure which part of the code I should provide for reference. What can I do to prevent it from creating lots of files?
st83054
I was doing total_loss += loss.detach() after optimizer.step(), but now I use a closure. How can I get access to the loss now? I use total_loss to know when to stop the learning process.
st83055
Hi, I would use a variable from a common parent scope to store it: reset it before the training and have each closure call modify this variable by adding to it, for example by making total_loss a global variable if you use Python 2.7, or nonlocal in Python 3.5+.
st83056
Dear @albanD, thank you for your reply. Will total_loss still be accurate for optimizers that require a closure? I didn’t really understand why a closure is necessary for certain optimizers, but from my current understanding the optimizer will run the closure multiple times per step. Will total_loss still be a valid way to stop training in that case? Thank you so much.
st83057
Hi, That will depend on your problem definition and the optimizer. In some case, you may only want to use the first eval of the closure by the optimizer. In other cases, you can just average over all the evaluations as it is still valid information. That would depend on your application.
st83058
Hi, I have looked through the questions about the closure, but I still don’t understand why to use one. Do you have some classical and simple examples, such as “if you want to do xxx, you have to use a closure”, or other examples like this? Thank you so much!
st83059
Hi, optimizers that need to evaluate the function multiple times require a closure, for example LBFGS.
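A tiny sketch combining an LBFGS closure with the running total discussed above (toy data, my own example):

import torch
import torch.nn.functional as F

model = torch.nn.Linear(2, 1)
data, target = torch.randn(8, 2), torch.randn(8, 1)
optimizer = torch.optim.LBFGS(model.parameters())

total_loss = 0.0

def closure():
    global total_loss
    optimizer.zero_grad()
    loss = F.mse_loss(model(data), target)
    loss.backward()
    total_loss += loss.detach().item()   # LBFGS may evaluate the closure several times per step
    return loss

optimizer.step(closure)
print(total_loss)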
st83060
I want to use torch.lstsq(), but I got this error message: "module ‘torch’ has no attribute ‘lstsq’". Why does this error occur?
st83061
This method was introduced in PyTorch 1.2.0, so make sure you are using the latest stable release. You can find the install instructions on the website.
st83062
As we know, in PyTorch, model.train() and model.eval() are needed when training or testing. Can we train and test a model in parallel? For example, training data (batch_size=128) and testing data (batch_size=16) are concatenated and a DataLoader of 128+16 samples is obtained. The training loss is then computed on the first 128 samples and the testing loss on the last 16 samples. Is this a good way to do NAS or AutoML?
st83063
I am trying to apply F.conv2d() on a one-channel image with an average kernel. It takes too much time on the GPU. Can anyone explain? I finally use F.ave_pool2d(), but I am curious what’s wrong. import torch import torch.nn.functional as F import time # dummy image img = torch.randn([1, 1, 512, 768]) # average kernel aveKernel = torch.ones([1, 2, 2]) / 2 ** 2 aveKernel = aveKernel[:, None, :, :] # one channel CPU tic = time.time() ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=1) toc = time.time() - tic print('CPU conv2d one channel time:', toc) # one channel GPU img = img.cuda() aveKernel = aveKernel.cuda() tic = time.time() ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=1) toc = time.time() - tic print('GPU conv2d one channel time:', toc) # Three channel GPU img = torch.randn([1, 3, 512, 768]) aveKernel = torch.ones([3, 2, 2]) / 2 ** 2 aveKernel = aveKernel[:, None, :, :] img = img.cuda() aveKernel = aveKernel.cuda() tic = time.time() ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=3) toc = time.time() - tic print('GPU conv2d three channel time:', toc) The output : CPU conv2d one channel time: 0.0025970935821533203 GPU conv2d one channel time: 0.14447641372680664 GPU conv2d three channel time: 7.82012939453125e-05
st83064
Note that CUDA calls are asynchronous, so that your current times are skewed by other sync points. Add torch.cuda.synchronize() before starting and stopping the timer and run the code again.
st83065
Thank you for your timely reply. I have added this line right before ‘tic = time.time()’ but it does not help. I tested this script on two machines and got similar results.
st83066
Thanks for the information. The single iteration might still give some bias. I’ve added cudnn benachmarking to the script, but feel free to disable it: import torch import torch.nn.functional as F import time torch.backends.cudnn.benchmark = True nb_iter = 1000 # dummy image img = torch.randn([1, 1, 512, 768]) # average kernel aveKernel = torch.ones([1, 2, 2]) / 2 ** 2 aveKernel = aveKernel[:, None, :, :] # one channel CPU tic = time.time() for _ in range(nb_iter): ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=1) toc = time.time() - tic print('CPU conv2d one channel time:', toc) # one channel GPU img = img.cuda() aveKernel = aveKernel.cuda() # warmup for _ in range(50): out = F.conv2d(img, aveKernel, padding=1, groups=1) torch.cuda.synchronize() tic = time.time() for _ in range(nb_iter): ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=1) torch.cuda.synchronize() toc = time.time() - tic print('GPU conv2d one channel time:', toc) # Three channel GPU img = torch.randn([1, 3, 512, 768]) aveKernel = torch.ones([3, 2, 2]) / 2 ** 2 aveKernel = aveKernel[:, None, :, :] img = img.cuda() aveKernel = aveKernel.cuda() # warmup for _ in range(50): out = F.conv2d(img, aveKernel, padding=1, groups=3) torch.cuda.synchronize() tic = time.time() for _ in range(nb_iter): ave_img1 = F.conv2d(img, aveKernel, padding = 1, groups=3) torch.cuda.synchronize() toc = time.time() - tic print('GPU conv2d three channel time:', toc) > CPU conv2d one channel time: 4.251241207122803 > GPU conv2d one channel time: 0.04510688781738281 > GPU conv2d three channel time: 0.032311201095581055
st83067
I am writing an application that requires transmitting objects between processes. However, the transmission time is high due to the high latency of the network. I am wondering if there is a way to give these operations, for example torch.reduce() and torch.gather(), shared-memory support? Thank you very much.