st30168
I tried to train my model just now and it stopped with this error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV). But in debug mode everything runs normally, and yesterday everything was OK and I haven’t changed anything. What’s the problem?
st30169
It’s extremely strange to me. I just added a CUDA check at the top of my code, like this:

import ...
print(torch.cuda.is_available())
...

and the error vanished, with my code running normally… Does anyone know what’s going on? I never added such a check before, when the code could still run normally.
st30170
Happened to me too, but it seems to occur randomly: once after 2 epochs, and once after 33. I also tried executing the script inside gdb to get a stack trace of the crash, but it didn’t segfault for the entire training run. Really confused here as well. BTW, I am on 0.5.0a0+8fbab83 on a Titan X Pascal with CUDA 8.0.
st30171
Hi Mactarvish, I have the exact same problem as you. I just imported torch and torch.cuda in the console to see if CUDA is available using torch.cuda.is_available(), and now no PyCharm project is working. Have you found a solution to this? (EDIT) Turns out the problem was my NVIDIA drivers; I switched back to the Intel drivers and it worked fine (after I reinstalled conda and PyTorch, which was a drag).
st30172
Hi, have you solved this problem? When I test a model after I’ve trained it, the error comes up randomly. If I restart the computer or test a few more times, it sometimes works, but not always.
st30173
Wow, it is so strange, and the same happened to me. When I ran code that was OK a few days ago, the error was: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV). I just added a CUDA check:

import torch
print(torch.cuda.is_available())

and the code ran normally again. It is so strange, and I want to know whether anyone knows why.
st30174
Hmm, I’m suddenly having this issue where my training throws these errors:

Using cuda
1%|▏ | 21/1584 [00:00<01:00, 26.03it/s]
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

It seems like it gets through some iterations and then, bam, segmentation fault. I checked nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.03    Driver Version: 450.119.03    CUDA Version: 11.0    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   50C    P5    21W / 250W |   4867MiB / 11177MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Having just upgraded to torch 1.9.0, is it possible my GPU drivers need to be updated?
st30175
With ATEN_CPU_CAPABILITY=avx2, adding two float tensors is slower than with ATEN_CPU_CAPABILITY=default, assuming a high number of threads. E.g., on a machine with 32 physical cores and 64 logical cores, with 16 threads, ATEN_CPU_CAPABILITY=avx2 is slower than ATEN_CPU_CAPABILITY=default. With just one thread, adding two float tensors is only around 10% faster with ATEN_CPU_CAPABILITY=avx2. What’s the reason behind this? Both of them are memory-bound. Thanks!
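For reference, a minimal sketch of how such a comparison can be run (ATEN_CPU_CAPABILITY has to be set in the shell before Python imports torch, and the sizes/thread count here are arbitrary):

# run twice, e.g.:
#   ATEN_CPU_CAPABILITY=default python bench_add.py
#   ATEN_CPU_CAPABILITY=avx2    python bench_add.py
import time
import torch

torch.set_num_threads(16)
a = torch.randn(10_000_000)
b = torch.randn(10_000_000)

for _ in range(5):          # warm-up
    a + b

t0 = time.perf_counter()
for _ in range(100):
    c = a + b
print((time.perf_counter() - t0) / 100, "s per add")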
st30176
Solved by imaginary in post #3 Resolved at On CPU, vectorized float tensor addition might be slower than unvectorized float tensor addition · Issue #60202 · pytorch/pytorch · GitHub. Basically, memory allocation & zero-filling costs are worse for AVX2.
st30177
Could you create an issue on GitHub for this so that the code owners could have a look, please?
st30178
Resolved at On CPU, vectorized float tensor addition might be slower than unvectorized float tensor addition · Issue #60202 · pytorch/pytorch · GitHub. Basically, memory allocation & zero-filling costs are worse for AVX2.
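If allocation overhead is indeed the dominant cost, one way to factor it out of a micro-benchmark is to reuse a preallocated output buffer (a sketch, not taken from the linked issue):

import torch

a = torch.randn(10_000_000)
b = torch.randn(10_000_000)
out = torch.empty_like(a)

torch.add(a, b, out=out)    # writes into `out`, so no fresh output allocation per call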
st30179
Hello, I am looking for a modern, powerful, and fast instance segmentation library in C++ that works on Windows, please. Thank you. Best regards
st30180
hey, I’m trying to resume training from a given checkpoint using PyTorch’s CosineAnnealingLR scheduler. Let’s say I want to train a model for 100 epochs but, for some reason, I had to stop training after epoch 45, saving both the optimizer state and the scheduler state. I want to resume training from epoch 46. I’ve followed what has previously been discussed on this forum about resuming training from a given epoch, but when plotting learning rate values as a function of epochs, I get a discontinuity at epoch 46 (see figure below, plot on the left). For comparison, I ran the full 100 epochs and plotted the learning rate to show what the expected plot should look like (see figure below, plot in the center). We can see both plots do not match when displayed on the same figure (see figure below, plot on the right; in green: expected plot; in blue: plot with discontinuity).

(figure: learning rate vs. epoch, three panels: resumed run with discontinuity, expected full run, overlay)

Here is a snippet of the code I’ve used to resume training:

initial_epoch = 0
nepochs_first = 45
nepochs_total = 100
base_lr = 0.0001

optimizer_first = torch.optim.Adam(model.parameters(), lr=base_lr)
scheduler_first = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_first, T_max=nepochs_total, last_epoch=initial_epoch-1)

lr_first = []
for i in range(initial_epoch+1, nepochs_first+1):
    scheduler_first.step()
    lr_first.append(scheduler_first.get_last_lr()[-1])

optimizer_state, scheduler_state = optimizer_first.state_dict(), scheduler_first.state_dict()

optimizer = torch.optim.Adam(model.parameters(), lr=1)  # I deliberately set the initial lr to a different value than base_lr; it should be overwritten when loading the state_dict
optimizer.load_state_dict(optimizer_state)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=nepochs_total, last_epoch=nepochs_first-1)
scheduler.load_state_dict(scheduler_state)

prev_epoch = scheduler_state['last_epoch']
lr = []
for i in range(prev_epoch+1, nepochs_total+1):
    scheduler.step()
    lr.append(scheduler.get_last_lr()[-1])

I’ve tried a bunch of things but I couldn’t manage to get rid of this discontinuity. Thank you for your help!
st30181
The discontinuity impacts all LR values computed from epoch 46 to epoch 100. Indeed, see what this quick snippet gave:

for i, val in enumerate(lr_first + lr):
    if val != lr_expected[i]:
        print(f'epoch: {i+1} \t actual lr: {val} \t expected lr: {lr_expected[i]}')

(screenshot of the printed epoch / actual lr / expected lr values)
st30182
Have you tried saving the learning rate from the previous model and putting that in when you set up the optimizer, instead of relying on the state dict?

optimizer = torch.optim.Adam(model.parameters(), lr=previously_saved_lr)
st30183
hi, thanks for the suggestion, but the learning rate scheduler raises an error if last_epoch != -1 and the optimizer’s state doesn’t have an 'initial_lr' key. The issue I’m facing may come from the __init__ method of PyTorch’s _LRScheduler class, which ends with a self.step() call that may change the learning rate value. I’m thinking about this because as soon as I instantiate the LR scheduler, the optimizer’s learning rate is modified:

# let's say we have stopped training after epoch 45 out of 100
# and we want to resume training
epoch_restart = 45
new_optimizer = torch.optim.Adam(model.parameters(), lr=1)
# load optimizer state saved when training stopped
new_optimizer.load_state_dict(optimizer_state)
# retrieve optimizer's lr value before instantiating LR scheduler
lr_first = new_optimizer.state_dict()['param_groups'][0]['lr']
new_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    new_optimizer,
    T_max=100,
    eta_min=0,
    last_epoch=epoch_restart,
)
# retrieve optimizer's lr value after instantiating LR scheduler
lr_then = new_optimizer.state_dict()['param_groups'][0]['lr']

If you compare lr_first and lr_then, they’ll be different! Why?
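For reference, a self-contained version of that check looks roughly like the following (a sketch; the exact values depend on the PyTorch version, but the lr change comes from _LRScheduler.__init__ ending with a self.step() call, as suspected above):

import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)
for _ in range(45):
    sched.step()
opt_state = opt.state_dict()

# resume
opt2 = torch.optim.Adam(model.parameters(), lr=1e-4)
opt2.load_state_dict(opt_state)
print(opt2.param_groups[0]['lr'])   # lr as restored from the checkpoint
sched2 = torch.optim.lr_scheduler.CosineAnnealingLR(opt2, T_max=100, last_epoch=45)
print(opt2.param_groups[0]['lr'])   # already different: __init__ has called step() once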
st30184
Hello everybody! I am quite a novice in PyTorch and also in Python. The reason I am trying to familiarize myself with PyTorch is the application of ANNs to PDEs, which seems an extremely interesting topic. So I assigned myself to extend this implementation, NNets-and-Diffeqns/non-linear-pde-final.ipynb at master · dbgannon/NNets-and-Diffeqns · GitHub, to 2D. More specifically, this code approximates the solution of a 1D Poisson equation with a feedforward neural network. I would like to investigate, along the same lines, the following 2D BVP:

Delta u(x, y) = -2 pi^2 sin(pi x) sin(pi y)
u(0, y) = u(1, y) = u(x, 0) = u(x, 1) = 0

where Delta is the Laplace operator. An analytic solution is u_exact = sin(pi x) sin(pi y). The extension of some parts is straightforward, but I got stuck on the imposition of the 2D boundary condition, i.e. the appropriate modification of In [67], and the modification of In [70], where, frankly, I don’t really understand what it does (creates mini batches?). Could someone provide some guidelines for my implementation and/or other related implementations and projects using PyTorch?
st30185
Hello Dimitri,

so the notebook you linked pre-generates 20 batches of 10 random evaluation points in the interior (batches) as well as the rhs (fbatches) of the same shape, and 2 evaluation points on the boundary (and for 1d, the set of these two points is the boundary), the latter in [67] as the variable bndry. It then sets Truebndry to zeros of the same shape as bndry. For both bndry and batches the points are stacked along the batch dimension.

A training step roughly looks like:
- zero gradients,
- set b, fb to a random batch of 10 points of (f)batches,
- evaluate mynet at bndry and assign it to output_bndry,
- in the function Du: evaluate mynet at b, compute the second derivatives using autograd.grad and return them as outputb,
- in the train loop, use mse loss (mean squared error) to compute the discrepancy between outputb and fb in the interior,
- use mse loss to compare output_bndry to the desired boundary values Truebndry,
- call backward on both separately and call the two different optimizers (but without calling zero_grad between the first step and the second backward, which is most certainly not intended, as the second will get the gradients of the first, too).

Some notes on this. Keep in mind that this critique is only intended to articulate some things where I thought a different approach might be better in terms of learning PyTorch, Python and programming for numerical analysis; it is neither complete nor do I intend to find fault with the original author of the code (i.e. if the code does what they need, great, but maybe don’t follow it too closely for learning):
- You would need to do some non-trivial thinking around the evaluation points (and probably need many more) to do 2d; note that output_bndry holds values of the would-be solution, while outputb holds values of the second derivatives of the would-be solution.
- The two optimizers and backwards are almost certainly not a good idea. A much cleaner way to achieve the equivalent is to compute a weighted average of the two losses and then call backward on this average (see the sketch below). Note that to compute the loss weights you have to incorporate the number of points in the batch and the different learning rates.
- Calling backward with retain_graph all the time is not a good idea either, but calling it only once lets you get rid of it easily.
- As an aside, the notebook extensively uses the tensor creation style that was already considered obsolete three years ago when PyTorch 0.4 came out, and seems needlessly complicated in many aspects.

By itself, that seems OK, but I would advise against using this code to learn PyTorch, Python or coding. Again, the above critique is merely a suggestion for your own learning path and ideas on how to write clearer code achieving the same, and does not contain any criticism of the author’s code itself, which may or may not achieve what the author intended.

Best regards

Thomas
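For concreteness, here is a minimal sketch of the single-weighted-loss / single-backward pattern described above. It is not the notebook’s code: mynet, Du, the sample points and the loss weights are all placeholder stand-ins.

import torch
import torch.nn.functional as F

# toy stand-ins for the notebook's objects
mynet = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
optimizer = torch.optim.Adam(mynet.parameters(), lr=1e-3)

b = torch.rand(10, 1, requires_grad=True)   # interior evaluation points
fb = -torch.ones(10, 1)                     # rhs values at those points
bndry = torch.tensor([[0.0], [1.0]])        # boundary points (1d case)
true_bndry = torch.zeros(2, 1)              # prescribed boundary values

def Du(x):
    # second derivative of mynet at x via autograd, sample-wise
    u = mynet(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u

w_pde, w_bc = 1.0, 1.0                      # loss weights, to be tuned
optimizer.zero_grad()
loss = w_pde * F.mse_loss(Du(b), fb) + w_bc * F.mse_loss(mynet(bndry), true_bndry)
loss.backward()                             # one backward, no retain_graph needed
optimizer.step()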
st30186
Thomas, thank you for your comprehensive reply! I have a better understanding of the code now and I agree that some parts are unnecessarily complicated. Also, I agree that the two backwards are not a good idea; as you wrote, the weighted sum of the two losses is a better and more standard approach. I found a way to generate the 2D set of evaluation points, and now I have to find out how to feed them into the network. Obviously the first nn.Linear should have two inputs now instead of one.
st30187
Hi all, could someone tell me the np.pi equivalent in PyTorch? Is there any built-in way to access the value of pi from the torch library? Thanks in advance.
st30188
Solved by LeviViana in post #2 No there isn’t a pretty built-in way. There are however a lot of ugly ways of doing so. You can, for instance, define torch.pi at run-time: import torch torch.pi = torch.acos(torch.zeros(1)).item() * 2 # which is 3.1415927410125732 You can even modify the __init__.py file (which is located in pri…
st30189
No, there isn’t a pretty built-in way. There are, however, a lot of ugly ways of doing so. You can, for instance, define torch.pi at run-time:

import torch
torch.pi = torch.acos(torch.zeros(1)).item() * 2  # which is 3.1415927410125732

You can even modify the __init__.py file (located at the path shown by print(torch.__file__)) in your environment by adding a line with pi = 3.14 etc., so that you will always be able to use the torch.pi constant.
st30190
Hi! Just stumbled across this. Why would you do this instead of torch.tensor(math.pi)? Is it because the OP said “built-in”? Is there any other practical advantage to your approach?
st30191
Hello all! I want the libtorch C++ (not Python) version that compiles with Visual Studio 2015. I can’t find it; could you help me please? Thank you. Best regards
st30192
Hi all, I want to take a convex combination of the final probabilities from two different models in a classification problem. I want to do something like this:

logits1 = model1(input)
logits2 = model2(input)
prob1 = F.softmax(logits1, dim=1)
prob2 = F.softmax(logits2, dim=1)
final_prob = beta * prob1 + (1 - beta) * prob2
log_prob = torch.log(final_prob)
loss = F.nll_loss(log_prob, target)  # nn.NLLLoss is a module, so it would be nn.NLLLoss()(log_prob, target)

Is there a faster or computationally more stable way to do this?
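One possibly more stable formulation is to stay in log-space and combine the two distributions with logsumexp, since log(beta*p1 + (1-beta)*p2) = logsumexp(log(beta) + log p1, log(1-beta) + log p2). A sketch with made-up shapes:

import math
import torch
import torch.nn.functional as F

logits1 = torch.randn(8, 10)   # placeholder outputs of model1
logits2 = torch.randn(8, 10)   # placeholder outputs of model2
target = torch.randint(0, 10, (8,))
beta = 0.7

log_p1 = F.log_softmax(logits1, dim=1) + math.log(beta)
log_p2 = F.log_softmax(logits2, dim=1) + math.log(1 - beta)
log_prob = torch.logsumexp(torch.stack([log_p1, log_p2]), dim=0)  # log of the convex combination
loss = F.nll_loss(log_prob, target)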
st30193
I have pre-trained VGG models on two different but related types of images. Then I have combined these two models:

class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB, nb_classes=2):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        # Remove last linear layer
        self.modelA.classifier[6] = nn.Identity()
        self.modelB.classifier[6] = nn.Identity()
        # Create new classifier
        self.classifier = nn.Linear(4096 + 4096, nb_classes)

    def forward(self, x1, x2):
        x1 = self.modelA(x1)  # clone to make sure x is not changed by inplace methods
        x2 = self.modelB(x2)
        x = torch.cat((x1, x2), dim=1)
        x = self.classifier(F.relu(x))
        return x

# Train your separate models
# ...
# We use pretrained torchvision models here
modelA = models.vgg16(pretrained=True)
num_ftrs = modelA.classifier[6].in_features
modelA.classifier[6] = nn.Linear(num_ftrs, 2)

modelB = models.vgg16(pretrained=True)
num_ftrs = modelB.classifier[6].in_features
modelB.classifier[6] = nn.Linear(num_ftrs, 2)

modelB.load_state_dict(torch.load('checkpoint1.pt'))
modelA.load_state_dict(torch.load('checkpoint2.pt'))

model = MyEnsemble(modelA, modelB)

Now I want to test the combined model using test images. Can anyone please help me with how to give a pair of input images? In other words, how can I test the combined model using two different but related types of images?
st30194
You can directly pass the two input tensors to the model and would get the output as in a standard use case:

x1 = torch.randn(1, 3, 224, 224)
x2 = torch.randn(1, 3, 224, 224)
out = model(x1, x2)

I’m not sure where you are currently stuck, so feel free to explain the trouble a bit more.
st30195
ptrblck:

x1 = torch.randn(1, 3, 224, 224)
x2 = torch.randn(1, 3, 224, 224)
out = model(x1, x2)

Thanks, @ptrblck, for your reply. Maybe my question is very silly, but I am still quite confused. If I have one type of image, I can test the model for classification using the following code:

def test(model, criterion):
    model.to(device)
    running_corrects = 0
    running_loss = 0
    pred = []
    true = []
    output = []
    pred_wrong = []
    true_wrong = []
    image = []
    for j, (inputs, labels) in enumerate(test_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        model.eval()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        outputs = sm(outputs)
        _, preds = torch.max(outputs, 1)
        running_loss += loss.item() * inputs.size(0)
        running_corrects += torch.sum(preds == labels.data)
        preds = preds.cpu().numpy()
        labels = labels.cpu().numpy()
        preds = np.reshape(preds, (len(preds), 1))
        labels = np.reshape(labels, (len(preds), 1))
        inputs = inputs.cpu().numpy()
        for i in range(len(preds)):
            pred.append(preds[i])
            true.append(labels[i])
            if preds[i] != labels[i]:
                pred_wrong.append(preds[i])
                true_wrong.append(labels[i])
                image.append(inputs[i])
    mat_confusion = confusion_matrix(true, pred)
    print('Confusion Matrix:\n', mat_confusion)
    return

feature_extract = True
sm = nn.Softmax(dim=1)
test_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
num_workers = 0
print("Number of Samples in Test ", len(test_data))
test_loader = torch.utils.data.DataLoader(test_data, batch_size, num_workers=num_workers, shuffle=False)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
criterion = nn.CrossEntropyLoss()
test(model, criterion)

But in this case, we have two types of input. How will we use these two types of input images in the test function?
st30196
The accuracy computation wouldn’t need to be changed, since your model would still output a single prediction for each input pair. However, you would want to change the data loading pipeline to get both images. For this you could write a custom Dataset and return both images in the __getitem__ as well as the label. This will make sure that the DataLoader loop will yield a batch of both images and the target tensors.
st30197
Thanks for your kind suggestion. I have written a custom Dataset and called the above test code, but I am getting an error. Can you please tell me whether I am going in the right direction or whether there is a problem somewhere?

from torch.utils.data import Dataset

class bothDataset(Dataset):
    def __init__(self, csv_path, root_dir1, root_dir2, transform=None):
        self.img_names = pd.read_csv(csv_path)
        self.transform = transform
        self.root_dir1 = root_dir1
        self.root_dir2 = root_dir2

    def __len__(self):
        return len(self.img_names)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_name1 = os.path.join(self.root_dir1, self.img_names.iloc[idx, 0])
        image1 = Image.open(img_name1 + '.bmp')
        res = transforms.Resize((224, 224))
        image1 = res(image1)
        image1 = torchvision.transforms.functional.to_tensor(image1)
        img_name2 = os.path.join(self.root_dir2, self.img_names.iloc[idx, 1])
        image2 = Image.open(img_name2 + '.bmp')
        image2 = res(image2)
        image2 = torchvision.transforms.functional.to_tensor(image2)
        label = self.img_names.iloc[idx, 2]
        return image1, image2, label

test_loader = bothDataset(csv_path='RelationBS1.csv',
                          root_dir1=test_dir1,
                          root_dir2=test_dir2,
                          transform=test_transforms)
print('Num test images: ', len(test_loader))

and changed the test function part to:

for j, (input1, input2, labels) in enumerate(test_loader):
    input1 = input1.to(device)
    input2 = input2.to(device)
    labels = labels.to(device)
    model.eval()
    outputs = model(input1, input2)
st30198
The Dataset code looks alright, but you would need to change the DataLoader loop into:

for j, (input1, input2, labels) in enumerate(test_loader):
st30199
Thanks. Now I am getting two problems.

1. If I use labels = labels.to(device), I get: AttributeError: 'numpy.int64' object has no attribute 'to'
2. If I do not use CUDA, the above problem is solved, but I get this error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 3 3, but got 3-dimensional input of size [1, 224, 224] instead
st30200
The second error is raised, if your input tensors do not have the expected 3 channels, but are apparently missing the channel dimension (and might thus be grayscale images originally). If that’s the case, add the channel dimension via unsqueeze and change the first conv layer to accept a single input channel by replacing it in the pretrained models.
st30201
Thanks @ptrblck for helping me, but I am not sure whether I am getting it correctly. Do you mean

input1 = input1.unsqueeze(0)
input2 = input2.unsqueeze(0)

and

modelA.features[0] = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1)

Can you please explain a little bit?
st30202
It depends where you are using these methods. You should check the shape of the image tensors inside the __getitem__ method and if both have only two dimensions, then use unsqueeze(0). Alternatively, you could also check the shape of the batch returned by the DataLoader and call unsqueeze(1), if needed, but I would prefer to use the former approach (inside __getitem__). Depending on the models you are using, the replacement of the conv layer might work. However, you need to check, if the first conv layer is indeed accessible via model.features[0] or model.conv1 etc. To check it, have a look at the source code of the model or use print(model) to see the layer names.
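To make the two suggestions concrete, here is a rough sketch (assuming grayscale inputs and torchvision’s VGG16, where print(model) shows the first conv layer as features[0]; whether the unsqueeze is needed depends on the shapes you actually observe):

import torch.nn as nn
import torchvision.models as models

# inside __getitem__, after converting to tensors:
#     print(image1.shape, image2.shape)   # check the dimensions first
#     if image1.dim() == 2:               # HxW -> 1xHxW
#         image1 = image1.unsqueeze(0)
#     if image2.dim() == 2:
#         image2 = image2.unsqueeze(0)

# replacing the first conv of each backbone so it accepts a single input channel
modelA = models.vgg16(pretrained=True)
print(modelA)   # confirm the first conv layer really is modelA.features[0]
modelA.features[0] = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1)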
st30203
Hello! I am building a spiking neural network with surrogate gradient learning following the tutorial on fzenke.net, with some small adaptations, but I am running into problems during model inference. The model is not a PyTorch nn.Module, which is why I cannot use the normal .eval() and state_dict() functions for inference and saving. Instead, I save the learned weights of the model in a .pt file using torch.save(). My problem occurs when I load the weights and make predictions with the same code (just without training): I get different model performances at every restart of the Python kernel (I use Google Colab). In theory, all operations should be deterministic given the same input and weights, which is why I am confused about this behaviour. Does someone maybe have an idea what might cause the problem? Here are the relevant code snippets (imports excluded).

Initialising weights in a one-layer network (only one weight matrix):

random.seed(123)
torch.manual_seed(123)
np.random.seed(123)

weight_scale = 0.2
w1 = torch.empty((nb_inputs, nb_outputs), device=device, dtype=dtype, requires_grad=True)
torch.nn.init.normal_(w1, mean=0.0, std=weight_scale/np.sqrt(nb_inputs))

Saving and loading the parameters:

torch.save(w1, 'path/onelayersnn.pt')
w1 = torch.load('path/onelayersnn.pt')

Code that runs the network on some input, and the predict function (alpha and beta are float constants):

def run_snn(inputs, batch_size, syn, mem):
    """
    Runs the SNN for 1 batch within one epoch
    :param inputs: spiking input to the network
    :param batch_size: batch size to be run
    :param syn: last synaptic current value
    :param mem: last membrane potential value
    :returns: membrane potentials and output spikes
    """
    # initialize synaptic currents and membrane potentials as the former values --> no reset of state variables
    syn_here = syn
    mem_here = mem

    # lists to record membrane potential and output spikes over the simulation time
    mem_rec = []
    spk_rec = []

    # Compute hidden layer activity
    out = torch.zeros((batch_size, nb_outputs), device=device, dtype=dtype)  # initialization
    # multiplication of input spikes with the weight matrix; this will be fed to the synaptic variable syn and the membrane potential mem
    h1 = torch.einsum("abc,cd->abd", (inputs, w1))

    # loop over time
    for t in range(nb_steps):
        mthr = mem_here - 1.0        # subtract the threshold to see if the neurons spike
        out = spike_fn(mthr)         # get the layer spiking activity
        rst = out.detach()           # we do not want to backprop through the reset

        new_syn = alpha*syn_here + h1[:, t]                 # new input current for the next timestep of the synapse (PSP?)
        new_mem = (beta*mem_here + syn_here)*(1.0 - rst)    # new membrane potential for the timestep

        mem_rec.append(mem_here)     # record the membrane potential
        spk_rec.append(out)          # record the spikes

        mem_here = new_mem           # set new membrane potential at this timestep
        syn_here = new_syn           # set the new synaptic current at this timestep

    # last synaptic current and membrane potential
    last_syn = syn_here
    last_mem = mem_here

    # merge the recorded membrane potentials into a single tensor
    mem_rec = torch.stack(mem_rec, dim=1)
    # merge output spikes into a single tensor
    spk_rec = torch.stack(spk_rec, dim=1)
    return mem_rec, spk_rec, last_syn, last_mem

def predict(x_data, y_data):
    """
    Predicts the class of the input data based on maximum membrane potential
    :param x_data: X
    :returns: y_pred
    """
    syn = torch.zeros((len(y_data), nb_outputs), device=device, dtype=dtype)
    mem = torch.zeros((len(y_data), nb_outputs), device=device, dtype=dtype)
    for x_local, y_local in sparse_data_generator_from_hdf5_spikes(x_data, y_data, len(y_data), nb_steps, nb_inputs, max_time, shuffle=False):
        output, _, _, _ = run_snn(x_local.to_dense(), len(y_data), syn, mem)
        m, _ = torch.max(output, 1)   # max over time
        _, am = torch.max(m, 1)       # argmax over output units
        preds = am.float()
    return preds.cpu().numpy()

Thank you very much!
st30204
It’s suggested to use PyTorch’s multiprocessing instead of Python’s multiprocessing. However, in Python 3 I can use something like:

def toy_fun(boxes):
    boxes += 20
    return boxes

with concurrent.futures.ProcessPoolExecutor() as executor:
    results = [executor.submit(toy_fun, boxes) for _ in range(10)]
    for f in concurrent.futures.as_completed(results):
        # do something with f.result()

However, I do not know how I can process or get the results when using the approach from MULTIPROCESSING BEST PRACTICES. How can I do this? Thanks!
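For what it’s worth, torch.multiprocessing mirrors the standard multiprocessing API, so a Pool-based version of the toy example might look roughly like this (a sketch with CPU tensors; sharing CUDA tensors between processes has extra constraints):

import torch
import torch.multiprocessing as mp

def toy_fun(boxes):
    return boxes + 20

if __name__ == "__main__":
    boxes = torch.zeros(4)
    with mp.Pool(processes=4) as pool:
        results = pool.map(toy_fun, [boxes.clone() for _ in range(10)])
    print(results[0])   # tensor([20., 20., 20., 20.])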
st30205
(photo of the error output omitted) Is this because the CUDA version is 11.2, which is not compatible with torch 1.8.1+cu111, or because the SVD algorithm cannot be accelerated by the GPU?
st30206
I am trying to train MobileFaceNet using MS1M-IBUG (85K ids / 3.8M images), but I am seeing low CPU and GPU utilization. The CPU stays around 30%; the GPU goes to 70% for one second and stays at 0% most of the time. Below is one of my implementations of the dataloader. My other implementation converts the mxnet record to normal JPEG files beforehand and uses PIL/OpenCV to read them, but it is also very slow. I tried num_workers and pin_memory; neither significantly sped up the data loading. But if I skip the loading part and generate random tensors instead, the GPU can reach 90% utilization. Any suggestions?

import torch
import torchvision.transforms as transforms
import mxnet as mx
from mxnet import recordio

class MyDataset(torch.utils.data.Dataset):
    def __init__(self, mxnet_record='train.rec', mxnet_idx='train.idx'):
        self.data = recordio.MXIndexedRecordIO(mxnet_idx, mxnet_record, 'r')
        self.transform = transforms.Compose([transforms.RandomHorizontalFlip(),
                                             transforms.ToTensor()])

    def __len__(self):
        return 3804846

    def __getitem__(self, index):
        header, s = recordio.unpack(self.data.read_idx(index + 1))
        image = mx.image.imdecode(s).asnumpy()
        label = int(header.label)
        image = self.transform(image)
        return image, torch.tensor(label, dtype=torch.long)
st30207
This post explains some common data loading bottlenecks and proposes workarounds as well, so you might want to take a look at it.
st30208
Hi, when I run the following code, an error is raised. Can anybody help? Thanks!

My code:

if len(args.test_model) != 0:
    assert os.path.exists(args.test_model), \
        "file not found: {}".format(args.test_model)
    checkpoint_ = torch.load(args.test_model)  # FIXME
    key_m, key_u = net.load_state_dict(checkpoint_['model'], strict=False)
    if key_u:
        print('Unexpected keys :')
        print(key_u)
    if key_m:
        print('Missing keys :')
        print(key_m)
        raise KeyError

The error:

Traceback (most recent call last):
  File "/home4/user_from_home1/wangyufei/dc/dsd/test.py", line 625, in <module>
    main(config)
  File "/home4/user_from_home1/wangyufei/dc/dsd/test.py", line 621, in main
    test(args)
  File "/home4/user_from_home1/wangyufei/dc/dsd/test.py", line 516, in test
    checkpoint_ = torch.load(args.test_model)
  File "/home1/wangyufei/anaconda3/envs/py3.8bindtr/lib/python3.8/site-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home1/wangyufei/anaconda3/envs/py3.8bindtr/lib/python3.8/site-packages/torch/serialization.py", line 765, in _legacy_load
    result = unpickler.load()
ModuleNotFoundError: No module named 'metrics'
st30209
The recommended way to serialize a model would be to store its state_dict as described here. Based on the error you are seeing it seems you’ve stored the entire model via torch.save(model, path). If that’s the case, you would have to make sure the code structure while loading the model is equal to the one used during the saving.
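As a minimal sketch of that pattern (using a placeholder nn.Linear instead of your actual model class):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in for your model
torch.save(model.state_dict(), "checkpoint.pt")

# later / in another script: recreate the model object, then load the weights
model = nn.Linear(10, 2)
model.load_state_dict(torch.load("checkpoint.pt"))
model.eval()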
st30210
Thanks for your reply! The point in your reply may well be one cause of this kind of problem. But I found that I got the above error because I defined a metrics package that has the same name as a built-in package. Adding from __future__ import absolute_import solved it.
st30211
There is some chatter online saying that I can’t deepcopy a model… Is this right? Additionally, is there a way, after loading a model, to move it between CPU and GPU?
st30212
Solved by ptrblck in post #2 You can deepcopy a model: model = nn.Linear(1, 1) model_copy = copy.deepcopy(model) with torch.no_grad(): model.weight.fill_(1.) print(model.weight) > Parameter containing: tensor([[10.]], requires_grad=True) print(model_copy.weight) > Parameter containing: tensor([[-0.5596]], requires_grad=…
st30213
You can deepcopy a model:

model = nn.Linear(1, 1)
model_copy = copy.deepcopy(model)

with torch.no_grad():
    model.weight.fill_(1.)

print(model.weight)
> Parameter containing: tensor([[10.]], requires_grad=True)
print(model_copy.weight)
> Parameter containing: tensor([[-0.5596]], requires_grad=True)

To move a model, just call:

model.to('cuda:0')  # moves the model (its parameters) to GPU0
model.to('cpu')     # moves the model to CPU
st30214
Hey! Many thanks for the quick reply. Pleased about the deepcopy. So the model.to syntax… I was under the impression this didn’t move the model, just changed its formatting, as the documentation here (https://pytorch.org/tutorials/beginner/saving_loading_models.html) suggests: “converts the initialized model to a CUDA optimized model using model.to(torch.device('cuda')).” It sounds like the map_location bit in torch.load(PATH, map_location=device) specifies where the model is?
st30215
The map_location argument specifies where to put the loaded parameters. E.g. if you saved the model.state_dict() of a model, which was pushed to the GPU, and would like to load this state_dict on a CPU-only machine, you could specify map_location='cpu' to restore the parameters.
st30216
Does that mean that if I load a model from file (saved using torch.save(model) on either a CPU or GPU) and set map_location='cpu' when loading, I don’t need to call .to() if I’m using it on the CPU, and only need to call .to() if I’m moving it onto the GPU again?
st30217
I would recommend saving and loading the model.state_dict(), not the model directly. That being said, I prefer to push the model to the CPU first before saving the state_dict. This approach makes sure that I’m able to restore the model on all systems, even when no GPU is found. After loading the model, I use model.to('cuda') to push it to the GPU again.
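Sketched out, that workflow would look something like this (with a placeholder model standing in for the real one):

import torch

model = torch.nn.Linear(4, 2)                 # stand-in for the real model
model.cpu()                                   # move to CPU before saving
torch.save(model.state_dict(), "checkpoint.pt")

# restoring, possibly on a different machine:
state = torch.load("checkpoint.pt", map_location="cpu")
model = torch.nn.Linear(4, 2)                 # recreate the same architecture
model.load_state_dict(state)
model.to("cuda" if torch.cuda.is_available() else "cpu")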
st30218
What about this code:

model = nn.Linear(1, 1)
model.to('cuda:0')
model_copy = copy.deepcopy(model)

Will this create a copy on the GPU? Or will I now have two models pointing to the same location in the GPU? (Hopefully not.) Thanks
st30219
Hi, I have a customized model that cannot be deepcopied. What would be the major causes of a model not being deepcopyable? Could you shed some light on how to debug my model? Thanks a lot for your help.

The customized model is the SqueezeNet SSD-Lite model in this repo (GitHub - qfgaohao/pytorch-ssd: MobileNetV1, MobileNetV2, VGG based SSD/SSD-lite implementation in Pytorch 1.0 / Pytorch 0.4. Out-of-box support for retraining on Open Images dataset. ONNX and Caffe2 support. Experiment Ideas like CoordConv.). The SqueezeNet model is defined under vision/nn and the SSD model is defined under vision/ssd.

If I do:

base_net = squeezenet1_1(False).features

I can deepcopy base_net with no problem:

dbcpy = copy.deepcopy(base_net)

But if I do

net = create_squeezenet_ssd_lite(no_classes, is_test=True)

then I have trouble deepcopying the net:

dnet = copy.deepcopy(net)

It reports the following error:

Traceback (most recent call last):
  File "/home/paul/.eclipse/360744286_linux_gtk_x86_64/plugins/org.python.pydev.core_7.5.0.202001101138/pysrc/pydevd.py", line 3129, in <module>
    main()
  File "/home/paul/.eclipse/360744286_linux_gtk_x86_64/plugins/org.python.pydev.core_7.5.0.202001101138/pysrc/pydevd.py", line 3122, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/paul/.eclipse/360744286_linux_gtk_x86_64/plugins/org.python.pydev.core_7.5.0.202001101138/pysrc/pydevd.py", line 2195, in run
    return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
  File "/home/paul/.eclipse/360744286_linux_gtk_x86_64/plugins/org.python.pydev.core_7.5.0.202001101138/pysrc/pydevd.py", line 2202, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/paul/.eclipse/360744286_linux_gtk_x86_64/plugins/org.python.pydev.core_7.5.0.202001101138/pysrc/_pydev_imps/_pydev_execfile.py", line 25, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/home/paul/pytorch/od-ssd/ssd-quantized/test_3d.py", line 50, in <module>
    dnet = copy.deepcopy(net)
  File "/usr/lib/python3.6/copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.6/copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle module objects

The reason I am asking is that when I do QAT (Quantization Aware Training) and try to save the quantized model using:

net.eval()
net_int8 = torch.quantization.convert(net)
net_int8.save(model_path)

I encounter the above deepcopy error. So what shall I do? Fix the model to make it deepcopyable? Or does QAT not work for an SSD model? Could you help point me in the right direction? Thanks a lot for your help!
st30220
Based on the error message it seems that pickle is failing to copy the object if it has a class attribute that references a module. I don’t know if this is the case for QAT or if it is caused by your custom model, but could you check for module references inside your model?
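One quick way to check is to walk over the model and its submodules and print any attribute that is a Python module; a sketch (pass in the model instance that fails to deepcopy):

import types

def find_module_attrs(model):
    # deepcopy/pickle cannot handle attributes that reference a Python module
    for m in model.modules():
        for name, value in vars(m).items():
            if isinstance(value, types.ModuleType):
                print(type(m).__name__, "has module-typed attribute:", name)

# usage: find_module_attrs(net)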
st30221
Thanks for your help. I looked into the create_squeezenet_ssd_lite() code below. Do you think the ModuleList() is causing the problem? If yes, how do I fix this type of issue and make the model created by this code deepcopyable? This type of model creation is typically used when building an SSD network. Does this imply QAT doesn’t work for SSD-type networks? Thanks again for your help!

def create_squeezenet_ssd_lite(num_classes, is_test=False):
    base_net = squeezenet1_1(False).features  # disable dropout layer

    source_layer_indexes = [12]
    extras = ModuleList([
        Sequential(
            Conv2d(in_channels=512, out_channels=256, kernel_size=1),
            ReLU(),
            SeperableConv2d(in_channels=256, out_channels=512, kernel_size=3, stride=2, padding=2),
        ),
        Sequential(
            Conv2d(in_channels=512, out_channels=256, kernel_size=1),
            ReLU(),
            SeperableConv2d(in_channels=256, out_channels=512, kernel_size=3, stride=2, padding=1),
        ),
        Sequential(
            Conv2d(in_channels=512, out_channels=128, kernel_size=1),
            ReLU(),
            SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
        ),
        Sequential(
            Conv2d(in_channels=256, out_channels=128, kernel_size=1),
            ReLU(),
            SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1),
        ),
        Sequential(
            Conv2d(in_channels=256, out_channels=128, kernel_size=1),
            ReLU(),
            SeperableConv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=1)
        )
    ])

    regression_headers = ModuleList([
        SeperableConv2d(in_channels=512, out_channels=6 * 4, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=512, out_channels=6 * 4, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=512, out_channels=6 * 4, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=256, out_channels=6 * 4, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=256, out_channels=6 * 4, kernel_size=3, padding=1),
        Conv2d(in_channels=256, out_channels=6 * 4, kernel_size=1),
    ])

    classification_headers = ModuleList([
        SeperableConv2d(in_channels=512, out_channels=6 * num_classes, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=512, out_channels=6 * num_classes, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=512, out_channels=6 * num_classes, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=256, out_channels=6 * num_classes, kernel_size=3, padding=1),
        SeperableConv2d(in_channels=256, out_channels=6 * num_classes, kernel_size=3, padding=1),
        Conv2d(in_channels=256, out_channels=6 * num_classes, kernel_size=1),
    ])

    return SSD(num_classes, base_net, source_layer_indexes,
               extras, classification_headers, regression_headers, is_test=is_test, config=config)
st30222
The nn.ModuleList looks correct, but I also don’t know if QAT might be causing this issue. Are you seeing the same error without using QAT?
st30223
“Are you seeing the same error without using QAT?” Yes. I found this link indicating the issue “Can’t save a model with torch.save if model has a torch.Device attr #7545”, and I tried removing the self.device attribute in the SSD class, but it still doesn’t work. I did try to run the code you suggested in the link to examine where the error is by calling pickle_trick() on it:

net = create_squeezenet_ssd_lite(no_classes, is_test=True)
print(pf(pickle_trick(net)))

But the code crashes, indicating there is an exception caught within the exception, even if I increase the maximum depth to 10,000.
st30224
@ptrblck after a line-by-line elimination test, I finally found the root cause that makes SSD() not deepcopyable:

class SSD(nn.Module):
    def __init__(self, num_classes: int, base_net: nn.ModuleList, source_layer_indexes: List[int],
                 extras: nn.ModuleList, classification_headers: nn.ModuleList,
                 regression_headers: nn.ModuleList, is_test=False, config=None, device=None):
        """Compose a SSD model using the given components."""
        super(SSD, self).__init__()
        self.num_classes = num_classes
        self.base_net = base_net
        self.source_layer_indexes = source_layer_indexes
        self.extras = extras
        self.classification_headers = classification_headers
        self.regression_headers = regression_headers
        self.is_test = is_test
        self.config = config

If I comment out the “self.config = config” line, then SSD() is deepcopyable. The config is a module imported via:

from .config import squeezenet_ssd_config as config

And squeezenet_ssd_config is as below:

import numpy as np
from vision.utils.box_utils import SSDSpec, SSDBoxSizes, generate_ssd_priors

image_size = 300
image_mean = np.array([127, 127, 127])  # RGB layout
image_std = 128.0
iou_threshold = 0.45
center_variance = 0.1
size_variance = 0.2

specs = [
    SSDSpec(17, 16, SSDBoxSizes(60, 105), [2, 3]),
    SSDSpec(10, 32, SSDBoxSizes(105, 150), [2, 3]),
    SSDSpec(5, 64, SSDBoxSizes(150, 195), [2, 3]),
    SSDSpec(3, 100, SSDBoxSizes(195, 240), [2, 3]),
    SSDSpec(2, 150, SSDBoxSizes(240, 285), [2, 3]),
    SSDSpec(1, 300, SSDBoxSizes(285, 330), [2, 3])
]

priors = generate_ssd_priors(specs, image_size)

How should I modify the code to keep the config and still make SSD() deepcopyable? Any advice will be highly appreciated. Thanks a lot for your help in advance.
st30225
Does squeezenet_ssd_config contain only the posted code, or also some class definitions? Based on the import statement I would assume it’s a class or some other object, but the posted code shows just executable Python code with more imports. You could try to deepcopy each imported class and see if one of these classes fails due to the previously mentioned reason.
st30226
@ptrblck Thank you so much for your help. Eventually I worked around the problem by avoiding the “self.config = config” reference and directly storing its contents instead.

Before:

# in __init__
self.config = config
# later in the body, self.config was referenced as
self.config.center_variance
self.config.size_variance

After:

# in __init__
# self.config = config   # commented out
self.config_center_variance = config.center_variance
self.config_size_variance = config.size_variance
# later in the code, no more references to self.config
self.config_center_variance
self.config_size_variance

Then SSD() is deepcopyable, but QAT is still having problems…
st30227
@ptrblck although SSD() is now deepcopyable, I’m still having a problem running QAT: I cannot torch.save(net_int8), and it complains:

    torch.save(net_int8, model_path)  # to save the entire model
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 370, in save
    _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 443, in _legacy_save
    pickler.dump(obj)
AttributeError: Can't pickle local object '_with_args.<locals>._PartialWrapper'

The code with the problem area is:

net.eval()
net_int8 = torch.quantization.convert(net)
# torch.save(net_int8.state_dict(), model_path)
torch.save(net_int8, model_path)  # to save the entire model

If I only save the state_dict(), there is no problem, but saving the entire model fails. Debugging further, I found that the net loads and is torch.save-able, but after setting net.qconfig it is no longer torch.save-able:

torch.save(net, "./net_before_qconfig")
net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.save(net, "./net_after_qconfig")

As shown above, torch.save(net, "./net_before_qconfig") works, but after net.qconfig is set, the net is no longer torch.save-able, with the same complaint:

    torch.save(net, "./net_after_qconfig")
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 370, in save
    _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
  File "/home/paul/pytorch/lib/python3.6/site-packages/torch/serialization.py", line 443, in _legacy_save
    pickler.dump(obj)
AttributeError: Can't pickle local object '_with_args.<locals>._PartialWrapper'

Is this a QAT problem? Can torch.save not save the entire model after QAT? I need the entire model saved to port to a microprocessor for speed testing. If this is a QAT limitation, what would be the workaround to still save the entire quantized model (not just the state_dict)? Thank you again for your help!
st30228
Based on your latest debugging, the issue might be related to QAT. I’m not deeply familiar with QAT and the support to use torch.save on the model directly. However, generally I would not recommend to save the model directly, as it can break in various ways. The better way would be to save the state_dict (and additional configs etc.). When reloading you would then recreate the model object and use load_state_dict.
st30229
I’m using PyTorch’s swa_utils, which internally calls deepcopy. However, I get the following error:

> Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment

This seems to be caused by the weight dropout scheme I am using, where the original weight matrix is moved into _raw. The code for weight dropout is here:

import torch
from torch.nn import Parameter
import torch.nn.functional as F

class WeightDrop(object):
    def __init__(self, name, dropout):
        self.name = name
        self.dropout = dropout

    def compute_weight(self, module):
        return F.dropout(
            getattr(module, self.name + "_raw"),
            p=self.dropout,
            training=module.training,
            inplace=False
        )

    @staticmethod
    def apply(module, name, dropout):
        for k, hook in module._forward_pre_hooks.items():
            if isinstance(hook, WeightDrop) and hook.name == name:
                raise RuntimeError(f"Cannot register two weight_dropout hooks with name '{name}'")

        fn = WeightDrop(name, dropout)
        weight = getattr(module, name)
        del module._parameters[name]
        # creating _raw parameter
        module.register_parameter(name + "_raw", Parameter(weight.data))
        setattr(module, name, fn.compute_weight(module))
        module.register_forward_pre_hook(fn)
        return fn

    def remove(self, module):
        weight = module._parameters[self.name + "_raw"]
        delattr(module, self.name)
        del module._parameters[self.name + "_raw"]
        module.register_parameter(self.name, Parameter(weight.data))

    def __call__(self, module, inputs):
        if self.name in module._parameters:
            del module._parameters[self.name]
        setattr(module, self.name, self.compute_weight(module))

def weight_drop(module, name, dropout):
    WeightDrop.apply(module, name, dropout)
    return module

def apply_weight_drop(module, name, dropout):
    wdrop = weight_drop(module, name, dropout)
    fp = module.flatten_parameters

    def decorator(*args, **kwargs):
        device = getattr(module, name + "_raw").device
        setattr(module, name, getattr(module, name).to(device))
        return fp(*args, **kwargs)

    module.flatten_parameters = decorator
    return wdrop

Here is the code for applying deepcopy on a GRU:

import copy

gru = torch.nn.GRU(10, 10)
gru_wd = apply_weight_drop(gru, "weight_hh_l0", 0.2)
gru_wd_copy = copy.deepcopy(gru_wd)

> RuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment

Any ideas how to fix this error? Thank you!
st30230
Hello, our team has torch==1.7.1 as a dependency in requirements.txt. When running bazel build on a target, I’m getting a SIGKILL while the wheel is being built for torch. It looks like at some point bazel runs:

python3 -m pip --isolated wheel torch==1.7.1

which results in:

Collecting torch==1.7.1
Killed

If I run it as part of my full bazel build, I get:

Collecting torch==1.7.1
(Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/vagrant/.cache/bazel/_bazel_vagrant/f9910d98673307a31f928c448bd4acd0/external/rules_python/python/pip_install/extract_wheels/__main__.py", line 5, in <module>
    main()
  File "/home/vagrant/.cache/bazel/_bazel_vagrant/f9910d98673307a31f928c448bd4acd0/external/rules_python/python/pip_install/extract_wheels/__init__.py", line 87, in main
    subprocess.run(pip_args, check=True)
  File "/usr/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/vagrant/venv/bin/python3', '-m', 'pip', '--isolated', 'wheel', '-r', '/home/vagrant/tech-backends/requirements.txt']' died with <Signals.SIGKILL: 9>.
)

Does anyone know how to address this? I have Python 3.8.5.
st30231
Hi, I noticed that when we define a PyTorch model, we usually need to specify its components before using them in the forward() function. An example is this:

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

Here we need to define self.conv1, self.conv2, etc. In Keras, we usually define these components directly as we build the computation graph, e.g.:

encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)

Here we do not need to define several Conv2D layers in advance, but instead define them as we build the graph. I am wondering if there is a similar way to do this in PyTorch? I am asking because:
- sometimes it can be error-prone to keep track of which components I have defined in __init__() and which components are being used in forward();
- when the number of layers is large, it may not be convenient to label all layers with unique names.

If anyone has an idea, please let me know, thanks!
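One built-in option that avoids naming every layer in __init__ is nn.Sequential, where the container keeps track of the submodules for you. A sketch that roughly mirrors the Net above (layer choices are illustrative):

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 10),
)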
st30232
I have a tensor

y = torch.arange(50).reshape(5, 10)

tensor([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
        [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
        [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
        [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]])

and now I want to select elements of y whose indices appear in s, such that each row of s corresponds to the row of y I want to select the elements from. For example, if s is

s = torch.arange(10).reshape(5, 2)

tensor([[0, 1],
        [2, 3],
        [4, 5],
        [6, 7],
        [8, 9]])

the output after slicing (y[s]) should be:

[[0, 1],
 [12, 13],
 [24, 25],
 [36, 37],
 [48, 49]]

How can I do it? I tried y[s], but it is not working. I also know that s should have the same number of columns. Will the gradient be preserved after slicing?
st30233
Solved by eqy in post #5 Does torch.gather work for you? import torch y = torch.arange(50).reshape(5,10) s = torch.arange(10).reshape(5,2) print(torch.gather(y, 1, s)) tensor([[ 0, 1], [12, 13], [24, 25], [36, 37], [48, 49]])
st30234
My naive approach would be this:

stack = []
for i in range(s.shape[0]):
    stack.append(y[:, s][:, i, :][i, :])
output_selec = torch.stack(stack)

output_selec
tensor([[ 0,  1],
        [12, 13],
        [24, 25],
        [36, 37],
        [48, 49]])

but 1) I am sure there should be a more pythonic/cleaner way (maybe @ptrblck can give me a suggestion), and 2) I am not sure if it will cause a disconnection from the gradients of previous operations.
st30235
As far as I remember, slicing tensors does cause the tensor to lose its grad_fn.
st30236
Does torch.gather work for you?

import torch
y = torch.arange(50).reshape(5, 10)
s = torch.arange(10).reshape(5, 2)
print(torch.gather(y, 1, s))

tensor([[ 0,  1],
        [12, 13],
        [24, 25],
        [36, 37],
        [48, 49]])
st30237
Apparently it does, thanks. @eqy, is torch.gather safe for keeping the gradient? I see in the torch.gather documentation that it has something called sparse_grad; I did not quite understand what it means for the gradient to be a sparse tensor…
st30238
We can directly test whether backprop works with both slice and gather:

import torch

class SliceNet(torch.nn.Module):
    def __init__(self):
        super(SliceNet, self).__init__()
        self.linear1 = torch.nn.Linear(32, 64)
        self.sigmoid = torch.nn.Sigmoid()
        self.linear2 = torch.nn.Linear(32, 1, bias=True)

    def forward(self, x):
        x = self.linear1(x)
        x = self.sigmoid(x)
        x = x[:, :32]
        x = self.linear2(x)
        return self.sigmoid(x)

class GatherNet(torch.nn.Module):
    def __init__(self):
        super(GatherNet, self).__init__()
        self.linear1 = torch.nn.Linear(32, 64)
        self.sigmoid = torch.nn.Sigmoid()
        self.linear2 = torch.nn.Linear(32, 1, bias=True)
        batch_size = 2
        self.indices = torch.arange(0, 64, 2).repeat(batch_size, 1)

    def forward(self, x):
        x = self.linear1(x)
        x = self.sigmoid(x)
        x = torch.gather(x, 1, self.indices)
        x = self.linear2(x)
        return self.sigmoid(x)

def test_gather_gradient():
    print("testing gathernet, loss should go down")
    data = torch.randn(2, 32)
    label = torch.randn(2, 1).round()
    gathernet = GatherNet()
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(gathernet.parameters(), 1e-4)
    for i in range(10000):
        out = gathernet(data)
        loss = criterion(out, label)
        if i % 1000 == 0:
            print(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def test_slice_gradient():
    print("testing slicenet, loss should go down")
    data = torch.randn(2, 32)
    label = torch.randn(2, 1).round()
    slicenet = SliceNet()
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(slicenet.parameters(), 1e-4)
    for i in range(10000):
        out = slicenet(data)
        loss = criterion(out, label)
        if i % 1000 == 0:
            print(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

test_gather_gradient()
test_slice_gradient()

Output:

testing gathernet, loss should go down
0.16012126207351685
0.13146084547042847
0.10953329503536224
0.09261573851108551
0.07940082252025604
0.06893321871757507
0.06052353233098984
0.053674302995204926
0.04802417382597923
0.043307825922966
testing slicenet, loss should go down
1.1417878866195679
0.950451672077179
0.828807532787323
0.7515791654586792
0.7005079984664917
0.6650912761688232
0.6394497156143188
0.6201967597007751
0.60529625415802
0.593470573425293

In general, I don’t think most indexing operations should be problematic for backprop. For example, could max pooling work if indexing were a problem?
st30239
abhibha1807: “As far as I remember, slicing tensors does cause the tensor to lose its grad_fn.”

Minor correction: slicing doesn’t detach the output:

x = torch.randn(4, 4, requires_grad=True)
y = x[:3, :2]
y.mean().backward()
print(x.grad)
> tensor([[0.1667, 0.1667, 0.0000, 0.0000],
          [0.1667, 0.1667, 0.0000, 0.0000],
          [0.1667, 0.1667, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.0000, 0.0000]])
st30240
I have been working on a project recently that uses CUDA functions to perform many evaluations of the same deep neural network model in parallel across many threads of a GPU in a control optimization problem, in a C++/CUDA environment. In this setup, each thread gets a different input vector and access to a shared memory array storing the model parameters. The current implementation is very low-level and effectively amounts to performing the matrix linear combination and non-linear activation operations of the neural network using custom CUDA functions. Because of these custom functions, we’re currently limited to standard multilayer perceptron models. We would like to expand our system’s capabilities and incorporate other architectures such as CNNs and RNNs. Does PyTorch support something like this in a C++/CUDA environment (TorchScript?), where I can load a model of any architecture onto the GPU and use PyTorch’s math library to perform many forward-pass model evaluations in parallel, using different CUDA threads each with a different input vector? I haven’t come across anything online that matches what I’m looking for, but I may just not be searching for the right thing with the right terminology. Hopefully this is clear enough to get across what I’m currently doing and looking for. Let me know if I need to clarify or elaborate more on what I’m trying to do.
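On the export side, TorchScript does let you serialize a model from Python and load it in C++ via torch::jit::load; the usual pattern for GPU throughput is to batch many input vectors into a single forward pass rather than dispatching one evaluation per CUDA thread. A sketch of the Python half, with a small MLP as a placeholder:

import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
model.eval()

scripted = torch.jit.script(model)   # or torch.jit.trace(model, example_input)
scripted.save("policy.pt")           # loadable from C++ with torch::jit::load("policy.pt")

# many evaluations at once: stack the input vectors into one batch
batch = torch.randn(4096, 64)
with torch.no_grad():
    out = scripted(batch)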
st30241
Hello, I was following the PyTorch tutorial on customizing the dataloader class (Developing Custom PyTorch Dataloaders — PyTorch Tutorials 1.7.1 documentation). Everything works just fine except when running the last segment of code: it just hangs there without finishing and does not print any output either. (screenshot of the notebook cell and its missing output omitted)
st30242
Solved by ptrblck in post #2 Could you set the num_workers to 0 and try to rerun the cell? Depending on your setup etc. multiprocessing might run into issues (e.g. the Windows FAQ explains the if-clause guard, which is needed for Windows setups).
st30243
Could you set the num_workers to 0 and try to rerun the cell? Depending on your setup etc. multiprocessing might run into issues (e.g. the Windows FAQ explains the if-clause guard, which is needed for Windows setups).
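For reference, a minimal sketch of that guard (a toy dataset stands in for the tutorial’s one):

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
    loader = DataLoader(dataset, batch_size=10, num_workers=4)
    for batch, labels in loader:
        pass   # training / inspection code goes here

if __name__ == "__main__":   # required on Windows when num_workers > 0
    main()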
st30244
Hi ptrblck, thanks for the response. It works after reducing num_workers to zero. So another approach is to add the if-clause guard; I will give it a try later.
st30245
Hi there, I’m a Windows guy, but my employer just gave me a new Mac to work on. Following the official “Start Locally” steps for Mac fails with some sort of compatibility error:

Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package pytorch conflicts for:
torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1']
pytorch

Package numpy-base conflicts for:
torchvision -> numpy[version='>=1.11'] -> numpy-base[SHIT TON OF VERSIONS]
pytorch -> numpy[version='>=1.11'] -> numpy-base[SHIT TON OF VERSIONS]

I thought that following the directions that suggest adding -c=conda-forge would fix the problem, but they do not. I next tried to downgrade my version of Python to 3.8, but the problem persists. This is a brand new, untouched environment.
st30246
Solved by Brando_Miranda in post #4 Ultimately this is what solved my problem: conda install -y pytorch torchvision torchaudio -c pytorch -c conda-forge @smth sorry to ping you again but why does the official pytorch installation not include the conda-forge channel? It seems I always have issues without it…at least letting you know.…
st30247
did you ever manage to solve this? I am also having this fail:

(synthesis) miranda9@Brandos-MBP ~ % conda install pytorch torchvision torchaudio -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: - Found conflicts! Looking for incompatible packages. failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package pytorch conflicts for:
torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|1.9.0']
torchvision -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|1.9.0|>=1.1.0|>=1.0.0|>=0.4|>=0.3|>=0.2|1.7.1.*|1.3.1.*']
pytorch

Package six conflicts for:
torchvision -> six
pytorch -> mkl-service[version='>=2,<3.0a0'] -> six

@smth Perhaps you’d know? I just started a brand new env with python 3.9 and only numpy installed. Why can’t I install pytorch 1.9 the current newest version?

cross posted: https://www.reddit.com/r/pytorch/comments/o1hwgv/installing_pytorch_fails_on_macos_with_brand_new/
SO: python - Unable to install Pytorch on Mac OS X from scratch due to Pytorch package conflicts with Conda - how to fix? - Stack Overflow
st30248
Ultimately this is what solved my problem: conda install -y pytorch torchvision torchaudio -c pytorch -c conda-forge @smth sorry to ping you again but why does the official pytorch installation not include the conda-forge channel? It seems I always have issues without it…at least letting you know. Hope this helps!
st30249
I am confused on how to quickly restore an array shuffled by a permutation.

Example #1: [x, y, z] shuffled by P: [2, 0, 1], we will obtain [z, x, y]; the corresponding inverse should be P^-1: [1, 2, 0].
Example #2: [a, b, c, d, e, f] shuffled by P: [5, 2, 0, 1, 4, 3], then we will get [f, c, a, b, e, d]; the corresponding inverse should be P^-1: [2, 3, 1, 5, 4, 0].

I wrote the following code based on matrix multiplication (the transpose of a permutation matrix is its inverse), but this approach is too slow when I use it in my model training. Does a faster implementation exist?

import torch

n = 10
x = torch.Tensor(list(range(n)))
print('Original array', x)

random_perm_indices = torch.randperm(n).long()
perm_matrix = torch.eye(n)[random_perm_indices].t()
x = x[random_perm_indices]
print('Shuffled', x)

restore_indices = torch.Tensor(list(range(n))).view(n, 1)
restore_indices = perm_matrix.mm(restore_indices).view(n).long()
x = x[restore_indices]
print('Restored', x)
st30250
Solved by tom in post #4 @KFrank 's method is a great ad hoc way to inverse a permutation. If the argument is rather large (say >=10000 elements) and you know it is a permutation (0…9999) then you could also use indexing: def inverse_permutation(perm): inv = torch.empty_like(perm) inv[perm] = torch.arange(perm.siz…
st30251
Hi Bin!

111414: Example #1: [x, y, z] shuffled by P: [2, 0, 1], we will obtain [z, x, y], the corresponding inverse should be P^-1: [1, 2, 0]. Example #2: [a, b, c, d, e, f] shuffled by P: [5, 2, 0, 1, 4, 3], then we will get [f, c, a, b, e, d], the corresponding inverse should be P^-1: [2, 3, 1, 5, 4, 0].

If you represent your permutation as a vector of permuted integers (as you do), you may use argsort() to obtain the inverse permutation:

>>> import torch
>>> torch.__version__
'1.7.1'
>>> p1 = torch.tensor ([2, 0, 1])
>>> torch.argsort (p1)
tensor([1, 2, 0])
>>> p2 = torch.tensor ([5, 2, 0, 1, 4, 3])
>>> torch.argsort (p2)
tensor([2, 3, 1, 5, 4, 0])

Best.

K. Frank
st30252
@KFrank 's method is a great ad hoc way to inverse a permutation. If the argument is rather large (say >=10000 elements) and you know it is a permutation (0…9999) then you could also use indexing:

def inverse_permutation(perm):
    inv = torch.empty_like(perm)
    inv[perm] = torch.arange(perm.size(0), device=perm.device)
    return inv

This will give the same result (again, if you knew the input is a permutation), but on my computer and with 100.000 elements, a quick %timeit benchmark has it faster on the CPU (~4ms vs. ~165µs) and on the GPU (~147µs vs. ~20µs). In other words, this is something where the O(n log n) complexity of sort vs. the O(n) complexity of index assignment shows for large operands.

The beauty of @KFrank’s solution is that you don’t need to write a new function, and it instantly generalizes to batches, too. Unless you are on the CPU, have large operands and are in a hurry or in a very tight loop, the speed difference probably doesn’t matter much.

Best regards

Thomas
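To make the “instantly generalizes to batches” point concrete, a small sketch (shapes are arbitrary):

import torch

perms = torch.stack([torch.randperm(5) for _ in range(3)])  # a batch of 3 permutations
invs = torch.argsort(perms, dim=-1)                         # their inverses, in one call

x = torch.randn(3, 5)
shuffled = torch.gather(x, 1, perms)
restored = torch.gather(shuffled, 1, invs)
assert torch.equal(restored, x)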
st30253
Hi Thomas (and Bin)! tom: this is something where the O(n log n) complexity of sort vs. the O(n) complexity of index assignment shows for large operands. This is absolutely correct – using argsort() to invert a permutation has the sub-optimal O(n log n) complexity, and will definitely matter with large permutations. Best. K. Frank
st30254
Just want to add that if the dim indices are negative (which they could be because we are talking python) one needs to account for that. So my solution would be:

def invert_permutation(*dims: int):
    n = len(dims)
    dims = [d if d >= 0 else d + n for d in dims]
    return torch.argsort(torch.LongTensor(dims)).tolist()

# passes this test
@pytest.mark.parametrize("p_in,p_expct", [
    ([0, 1, 2], [0, 1, 2]),    # identity
    ([0, 2, 1], [0, 2, 1]),    # swap last 2
    ([-3, -2, 1], [0, 1, 2]),  # identity with neg. ix
    ([0, -1, -2], [0, 2, 1]),  # swap last 2
])
def test_inverse_permute(p_in, p_expct):
    assert lazy.tensor.invert_permutation(*p_in) == p_expct
st30255
I am working with sequences of images for a binary classification. With batch size 1 my sequence has dimension [1, 9, 3, 256, 256]. Since I want to use a Temporal CN (from this GitHub: TCN/TCN/mnist_pixel at master · locuslab/TCN · GitHub 1), which requires input size [N, C_in, L_in], I am reshaping the the images like this: images = images.view(-1, 3, 9) where 3 is the channel size and 9 the sequence length. However, I get the following error when calling loss = criterion(outputs, labels): The size of tensor a (9) must match the size of tensor b (65536) at non-singleton dimension 0 The 65536 results from 256*256 but I don’t know how to adjust my dimensions accordingly so that they match my labels’ dimensions. The model looks like this class Chomp1d(nn.Module): def __init__(self, chomp_size): super(Chomp1d, self).__init__() self.chomp_size = chomp_size def forward(self, x): return x[:, :, :-self.chomp_size].contiguous() class TemporalBlock(nn.Module): def __init__(self, n_inputs, n_outputs, kernel_size, stride, dilation, padding, dropout=0.2): super(TemporalBlock, self).__init__() self.conv1 = weight_norm(nn.Conv1d(n_inputs, n_outputs, kernel_size, stride=stride, padding=padding, dilation=dilation)) self.chomp1 = Chomp1d(padding) self.relu1 = nn.ReLU() self.dropout1 = nn.Dropout(dropout) self.conv2 = weight_norm(nn.Conv1d(n_outputs, n_outputs, kernel_size, stride=stride, padding=padding, dilation=dilation)) self.chomp2 = Chomp1d(padding) self.relu2 = nn.ReLU() self.dropout2 = nn.Dropout(dropout) self.net = nn.Sequential(self.conv1, self.chomp1, self.relu1, self.dropout1, self.conv2, self.chomp2, self.relu2, self.dropout2) self.downsample = nn.Conv1d(n_inputs, n_outputs, 1) if n_inputs != n_outputs else None self.relu = nn.ReLU() self.init_weights() def init_weights(self): self.conv1.weight.data.normal_(0, 0.01) self.conv2.weight.data.normal_(0, 0.01) if self.downsample is not None: self.downsample.weight.data.normal_(0, 0.01) def forward(self, x): out = self.net(x) res = x if self.downsample is None else self.downsample(x) return self.relu(out + res) class TemporalConvNet(nn.Module): def __init__(self, num_inputs, num_channels, kernel_size=2, dropout=0.2): super(TemporalConvNet, self).__init__() layers = [] num_levels = len(num_channels) for i in range(num_levels): dilation_size = 2 ** i in_channels = num_inputs if i == 0 else num_channels[i-1] out_channels = num_channels[i] layers += [TemporalBlock(in_channels, out_channels, kernel_size, stride=1, dilation=dilation_size, padding=(kernel_size-1) * dilation_size, dropout=dropout)] self.network = nn.Sequential(*layers) def forward(self, x): return self.network(x) class TCN(nn.Module): def __init__(self, input_size, output_size, num_channels, kernel_size, dropout): super(TCN, self).__init__() self.tcn = TemporalConvNet(input_size, num_channels, kernel_size=kernel_size, dropout=dropout) self.linear = nn.Linear(num_channels[-1], output_size) def forward(self, inputs): y1 = self.tcn(inputs) # input should have dimension (N, C, L) o = self.linear(y1[:, :, -1]) return F.log_softmax(o, dim=1)
st30256
It seems you are permuting the spatial dimensions into the batch dimension of the input tensor, while the target tensor seems to use the original batch dimension, which will thus break. Both, the model output and target, are expected to have the same batch dimension, which represents the number of samples in the current batch. I’m not familiar with your use case, so don’t know why the reshaping is needed, so could you explain it a bit more?
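As a reference point, a minimal sketch of the shapes a binary-classification criterion such as nn.BCEWithLogitsLoss expects (batch size here is arbitrary):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

batch_size = 4
logits = torch.randn(batch_size, 1, requires_grad=True)  # model output: one logit per sample
target = torch.randint(0, 2, (batch_size, 1)).float()    # one binary label per sample

loss = criterion(logits, target)  # works because both share the same batch dimension
loss.backward()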
st30257
Thanks for your reply! In fact, I realized that my reshaping was incorrect. I used the following dimensions now and do not get an error anymore: [1*9, 3, 256*256]. I realized that I obviously do not need the log_softmax at the end, since I am performing binary classification. So the model now looks like this:

class Chomp1d(nn.Module):
    def __init__(self, chomp_size):
        super(Chomp1d, self).__init__()
        self.chomp_size = chomp_size

    def forward(self, x):
        return x[:, :, :-self.chomp_size].contiguous()

# Cell
class GAP1d(nn.Module):
    "Global Adaptive Pooling + Flatten"
    def __init__(self, output_size=1):
        super(GAP1d, self).__init__()
        self.gap = nn.AdaptiveAvgPool1d(output_size)
        self.flatten = nn.Flatten()

    def forward(self, x):
        return self.flatten(self.gap(x))

class TemporalBlock(nn.Module):
    def __init__(self, ni, nf, ks, stride, dilation, padding, dropout=0.):
        super(TemporalBlock, self).__init__()
        self.conv1 = weight_norm(nn.Conv1d(ni, nf, ks, stride=stride, padding=padding, dilation=dilation))
        self.chomp1 = Chomp1d(padding)
        self.relu1 = nn.ReLU()
        self.dropout1 = nn.Dropout(dropout)
        self.conv2 = weight_norm(nn.Conv1d(nf, nf, ks, stride=stride, padding=padding, dilation=dilation))
        self.chomp2 = Chomp1d(padding)
        self.relu2 = nn.ReLU()
        self.dropout2 = nn.Dropout(dropout)
        self.net = nn.Sequential(self.conv1, self.chomp1, self.relu1, self.dropout1,
                                 self.conv2, self.chomp2, self.relu2, self.dropout2)
        self.downsample = nn.Conv1d(ni, nf, 1) if ni != nf else None
        self.relu = nn.ReLU()
        self.init_weights()

    def init_weights(self):
        self.conv1.weight.data.normal_(0, 0.01)
        self.conv2.weight.data.normal_(0, 0.01)
        if self.downsample is not None:
            self.downsample.weight.data.normal_(0, 0.01)

    def forward(self, x):
        out = self.net(x)
        res = x if self.downsample is None else self.downsample(x)
        return self.relu(out + res)

def TemporalConvNet(c_in, layers, ks=2, dropout=0.):
    temp_layers = []
    for i in range(len(layers)):
        dilation_size = 2 ** i
        ni = c_in if i == 0 else layers[i - 1]
        nf = layers[i]
        temp_layers += [TemporalBlock(ni, nf, ks, stride=1, dilation=dilation_size,
                                      padding=(ks - 1) * dilation_size, dropout=dropout)]
    return nn.Sequential(*temp_layers)

class TCN(nn.Module):
    def __init__(self, c_in, c_out, layers=8*[25], ks=7, conv_dropout=0., fc_dropout=0.):
        super(TCN, self).__init__()
        self.tcn = TemporalConvNet(c_in, layers, ks=ks, dropout=conv_dropout)
        self.gap = GAP1d()
        self.dropout = nn.Dropout(fc_dropout) if fc_dropout else None
        self.linear = nn.Linear(layers[-1], c_out)
        self.init_weights()

    def init_weights(self):
        self.linear.weight.data.normal_(0, 0.01)

    def forward(self, x):
        x = self.tcn(x)
        x = self.gap(x)
        if self.dropout is not None:
            x = self.dropout(x)
        x = self.linear(x)
        return x

And I am defining the model e.g. like this:

input_channels = 3
output_dim = 1
model = TCN(input_channels, output_dim, layers=2*[5], ks=3, conv_dropout=0.2, fc_dropout=0.4)

I am using BCEWithLogitsLoss, but it seems that my model isn’t learning. Both AUCs fluctuate a lot and the training AUC is never bigger than 59%, while the validation AUC increases up to 80%. But for my LSTM model, the data works perfectly fine. The code I used (TCN/TCN/mnist_pixel at master · locuslab/TCN · GitHub) is from a use case where they perform a multiclass classification on sequences of the MNIST dataset. I am not sure whether I incorrectly implemented something when using the github code, but if so, I just can’t figure out what.
st30258
As far as I understand the documentation for the BatchNorm1d layer, we provide the number of features as an argument to the constructor (nn.BatchNorm1d(number of features)). As an input the layer takes (N, C, L), where N is the batch size (I guess…), C is the number of features (this is the dimension where normalization is computed), and L is the input size.

Let’s assume I have input in the following shape: (batch_size, number_of_timesteps, number_of_features), which is the usual data shape for time series if batch_first=True.

Question: Should I transpose the input (swap dimensions 1 and 2) before running the batch normalization? In this case I will have to transpose the output again to use it in an RNN later. It looks quite weird to me. Can someone please take a look at the example below and let me know if this is the proper way? E.g.:

import torch
from torch import nn

# data (batch size, number of time steps, number of features)
x = torch.rand(3, 4, 5)

# layers
bn = nn.BatchNorm1d(5)
rnn = nn.RNN(5, 10, 1, batch_first=True)

# computation - transpose TWICE
x_normalized = bn(x.transpose(1, 2)).transpose(1, 2)
rnn(x_normalized)
st30259
Solved by ptrblck in post #2 Your code looks correct, since the batchnorm layer expects an input in [batch_size, features, temp. dim] so you would need to permute it before (and after to match the input of the rnn). In your code snippet you could of course initialize the tensor in the right shape, but I assume that code is jus…
st30260
Your code looks correct, since the batchnorm layer expects an input in [batch_size, features, temp. dim] so you would need to permute it before (and after to match the input of the rnn). In your code snippet you could of course initialize the tensor in the right shape, but I assume that code is just to show the usage.
st30261
Isn’t this a fundamentally flawed approach? BatchNorm1d is not working harmoniously with nn.Linear, which is the most fundamental part of PyTorch. This makes it not possible to use BatchNorm and Linear layer in the sequential format. At least to my knowledge.
st30262
It depends on what you are trying to achieve and I wouldn’t claim the approach is fundamentally flawed, as I’m not familiar with the use case of @adm.

raraz15: This makes it not possible to use BatchNorm and Linear layer in the sequential format. At least to my knowledge.

You can use nn.BatchNorm1d layers in an nn.Sequential container as seen here:

model = nn.Sequential(
    nn.Linear(4, 3),
    nn.BatchNorm1d(3)
)
x = torch.randn(2, 4)
out = model(x)

If you add another dimension and want to permute the activation, you could write a custom module, which permutes it to the right shape.
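For the 3D case, that permuting module could look roughly like this (a small sketch, not a built-in layer):

import torch
import torch.nn as nn

class Permute(nn.Module):
    # tiny helper so the transpose can live inside nn.Sequential
    def __init__(self, *dims):
        super().__init__()
        self.dims = dims

    def forward(self, x):
        return x.permute(*self.dims)

model = nn.Sequential(
    nn.Linear(5, 8),
    Permute(0, 2, 1),   # (N, L, C) -> (N, C, L) for BatchNorm1d
    nn.BatchNorm1d(8),
    Permute(0, 2, 1),   # back to (N, L, C) for the next Linear or an RNN
    nn.ReLU(),
)

x = torch.randn(3, 4, 5)  # (batch, seq, features)
out = model(x)            # (3, 4, 8)

If per-feature batch statistics are not strictly needed, nn.LayerNorm normalizes over the last dimension directly and avoids the permutes altogether, but that changes what is being normalized.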
st30263
For 2D batches it works great. However, when we use fully connected layers on 3D inputs of shape (N=batch size, L=sequence length, C=input size), we have to transpose twice to use BatchNorm1d after each linear transformation. Because each linear layer acts upon dim=-1 (our features) and BatchNorm can act only on dim=1, two transpositions must be done: before and after the BatchNorm. For example, a fully connected network of 3 layers is given below with BatchNorm before each non-linearity.

class FC_layer(nn.Module):
    def __init__(self, input_size_FC1, output_size_FC1, output_size_FC2, output_size_FC3):
        super(FC_layer, self).__init__()
        self.linear_layer1 = nn.Linear(input_size_FC1, output_size_FC1)
        self.normalization1 = nn.BatchNorm1d(output_size_FC1)
        self.linear_layer2 = nn.Linear(output_size_FC1, output_size_FC2)
        self.normalization2 = nn.BatchNorm1d(output_size_FC2)
        self.linear_layer3 = nn.Linear(output_size_FC2, output_size_FC3)
        self.normalization3 = nn.BatchNorm1d(output_size_FC3)

    def forward(self, x):
        """
        x : tensor of shape (N_batch, N_sequence, N_features)
        """
        f = nn.ReLU()             # Activation functions
        g = nn.LogSoftmax(dim=2)

        x_lin1 = self.linear_layer1(x)                 # Apply linear transformation
        x_lin1 = torch.transpose(x_lin1, 1, 2)         # Transpose for BatchNorm1d
        x_lin1_norm = self.normalization1(x_lin1)      # Normalize
        layer1_out = f(x_lin1_norm)                    # Apply non-linearity
        layer2_in = torch.transpose(layer1_out, 1, 2)  # Transpose for next linear transformation

        x_lin2 = self.linear_layer2(layer2_in)
        x_lin2 = torch.transpose(x_lin2, 1, 2)
        x_lin2_norm = self.normalization2(x_lin2)
        layer2_out = f(x_lin2_norm)
        layer3_in = torch.transpose(layer2_out, 1, 2)  # Transpose for next linear transformation

        x_lin3 = self.linear_layer3(layer3_in)
        x_lin3 = torch.transpose(x_lin3, 1, 2)
        x_lin3_norm = self.normalization3(x_lin3)
        layer3_out = g(x_lin3_norm)
        output = torch.transpose(layer3_out, 1, 2)
        return output

Instead, if BatchNorm1d could act upon the final dimension, we wouldn’t need to worry about the dimension conversions. I couldn’t find a way to do this in the sequential container, but you are suggesting that I could write a permutation module with no parameters and use this in the sequential container. Right? I am worried about the necessity to do a copy() or replace the tensor with its transposed version, and also about the contiguous requirement of other pytorch functionalities.
st30264
Is there currently any way to avoid the permutation before and after the BatchNorm1d in a sequential container? I just need to apply that before an LSTM layer and I am not able to do the permutation suggested
st30265
following some of the discussions about multinomial in the past, there still seems to be some problem with its implementation in torch 1.8.1. The code

sample = torch.multinomial(torch.tensor([[1, 0, 0, 1]], dtype=torch.float), num_samples=3, replacement=False)
print(sample)

does not throw, nor does it for num_samples=4. However,

sample = torch.multinomial(torch.tensor([[1, 0, 0, 1]], dtype=torch.float), num_samples=5, replacement=False)
print(sample)

gives

RuntimeError: cannot sample n_sample > prob_dist.size(-1) samples without replacement

something is wrong here I guess
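Until the check is tightened upstream, a defensive guard along these lines (my own sketch, not part of the torch API) makes the zero-probability case fail loudly when sampling without replacement:

import torch

def multinomial_no_replacement(probs, num_samples):
    # refuse to sample more entries than there are nonzero probabilities
    nonzero = int((probs > 0).sum(dim=-1).min())
    if num_samples > nonzero:
        raise ValueError(f"requested {num_samples} samples but only {nonzero} "
                         f"categories have nonzero probability")
    return torch.multinomial(probs, num_samples=num_samples, replacement=False)

sample = multinomial_no_replacement(torch.tensor([[1., 0., 0., 1.]]), 2)  # fine
# multinomial_no_replacement(torch.tensor([[1., 0., 0., 1.]]), 3)         # now raises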
st30266
I got this error after updating torch and torchvision: "Torch not compiled with CUDA enabled." Before this I was able to train the model on the GPU. The current version of torch (after updating) is 1.9.0+cpu. After searching on the internet I updated CUDA 11.0 to 11.3 and the GTX 1050 Ti driver from version 451.82 to 466.77. It’s still not working; what should I do now?
st30267
Solved by ptrblck in post #8 You can just install the NVIDIA driver, select a desired PyTorch binary using the posted link, and install it. E.g. conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch will install PyTorch with the CUDA 10.2 runtime (as well as cudnn7.6.5).