Error while applying image augmentation transformations to data in FastAI
I am trying to replicate this kaggle notebook https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-with-resnet50-oversampling on Google Colab. The code was working fine till yesterday but today it is throwing a runtime error. Below is the problematic code: tfms = get_transforms(do_flip=True,flip_vert=True,max_rotate=360,max_warp=0,max_zoom=1.1,max_lighting=0.1,p_lighting=0.5) src = (ImageList.from_df(df=df,path=data_path,cols='path') #get dataset from dataset //ImageItemList threw errors so changed to ImageList .split_by_idx(range(len(train_df)-1,len(df))) #Splitting the dataset .label_from_df(cols='level') #obtain labels from the level column ) data= (src.transform(tfms,size=sz) #Data augmentation .databunch(bs=bs,num_workers=0) #DataBunch .normalize(imagenet_stats) #Normalize ) I get the following error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs) 593 x = ds[0] --> 594 try: x.apply_tfms(tfms, **kwargs) 595 except Exception as e: 8 frames /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in apply_tfms(self, tfms, do_resolve, xtra, size, resize_method, mult, padding_mode, mode, remove_out) 122 x = tfm(x, size=_get_crop_target(size,mult=mult), padding_mode=padding_mode) --> 123 else: x = tfm(x) 124 return x.refresh() /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in __call__(self, x, *args, **kwargs) 523 "Randomly execute our tfm on `x`." --> 524 return self.tfm(x, *args, **{**self.resolved, **kwargs}) if self.do_run else x 525 /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in __call__(self, p, is_random, use_on_y, *args, **kwargs) 469 "Calc now if `args` passed; else create a transform called prob `p` if `random`." --> 470 if args: return self.calc(*args, **kwargs) 471 else: return RandTransform(self, kwargs=kwargs, is_random=is_random, use_on_y=use_on_y, p=p) /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in calc(self, x, *args, **kwargs) 474 "Apply to image `x`, wrapping it if necessary." --> 475 if self._wrap: return getattr(x, self._wrap)(self.func, *args, **kwargs) 476 else: return self.func(x, *args, **kwargs) /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in affine(self, func, *args, **kwargs) 182 m = tensor(func(*args, **kwargs)).to(self.device) --> 183 self.affine_mat = self.affine_mat @ m 184 return self RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out During handling of the above exception, another exception occurred: Exception Traceback (most recent call last) <ipython-input-74-31aae73a70fc> in <module>() 6 ) 7 print(src) ----> 8 data= (src.transform(tfms,size=sz) #Data augmentation 9 .databunch(bs=bs,num_workers=0) #DataBunch 10 .normalize(imagenet_stats) #Normalize /usr/local/lib/python3.6/dist-packages/fastai/data_block.py in transform(self, tfms, **kwargs) 503 if not tfms: tfms=(None,None) 504 assert is_listy(tfms) and len(tfms) == 2, "Please pass a list of two lists of transforms (train and valid)." 
--> 505 self.train.transform(tfms[0], **kwargs) 506 self.valid.transform(tfms[1], **kwargs) 507 if self.test: self.test.transform(tfms[1], **kwargs) /usr/local/lib/python3.6/dist-packages/fastai/data_block.py in transform(self, tfms, tfm_y, **kwargs) 722 def transform(self, tfms:TfmList, tfm_y:bool=None, **kwargs): 723 "Set the `tfms` and `tfm_y` value to be applied to the inputs and targets." --> 724 _check_kwargs(self.x, tfms, **kwargs) 725 if tfm_y is None: tfm_y = self.tfm_y 726 tfms_y = None if tfms is None else list(filter(lambda t: getattr(t, 'use_on_y', True), listify(tfms))) /usr/local/lib/python3.6/dist-packages/fastai/data_block.py in _check_kwargs(ds, tfms, **kwargs) 594 try: x.apply_tfms(tfms, **kwargs) 595 except Exception as e: --> 596 raise Exception(f"It's not possible to apply those transforms to your dataset:\n {e}") 597 598 class LabelList(Dataset): Exception: It's not possible to apply those transforms to your dataset: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out I changed nothing in this code, it is the same as it was yesterday but for some reason it gives me an error today. Kindly Help. Edit: I found out it is working perfectly fine on my local Jupyter notebook. Still shows error for Colab though
It seems there is an issue with the torch version used in Colab (see the FastAI forum). Try installing a specific version of torch in your Colab notebook before running the fastai code: !pip install "torch==1.4" "torchvision==0.5.0"
https://stackoverflow.com/questions/61503339/
Best practice to pass PyTorch device name to model
Currently, I have separated train.py from model.py for my deep learning project. So the datasets are sent to the cuda device inside the epoch for loop, like below. train.py ... device = torch.device('cuda:2' if torch.cuda.is_available() else 'cpu') model = MyNet(~).to(device) ... for batch_data in train_loader: s0 = batch_data[0].to(device) s1 = batch_data[1].to(device) pred = model(s0, s1) However, my model (in model.py) also needs to access the device variable, for a skip-connection-like method that makes a new copy of a hidden unit (for the residual connection). model.py class MyNet(nn.Module): def __init__(self, in_feats, hid_feats, out_feats): super(MyNet, self).__init__() self.conv1 = GCNConv(in_feats, hid_feats) ... def forward(self, data): x, edge_index = data.x, data.edge_index x1 = copy.copy(x.float()) x = self.conv1(x, edge_index) skip_conn = torch.zeros(len(data.batch), x1.size(1)).to(device) # <-- (some ops for x1 -> skip_conn) x = torch.cat((x, skip_conn), 1) In this case, I am currently passing device as a parameter; however, I believe this is not best practice. Where is the best place to send the dataset to CUDA? In the case where multiple scripts need to access device, how should I handle this? (parameter, global variable?)
You can add a new attribute to MyModel to store the device info and use this in the skip_conn initialization. class MyNet(nn.Module): def __init__(self, in_feats, hid_feats, out_feats, device): # <-- super(MyNet, self).__init__() self.conv1 = GCNConv(in_feats, hid_feats) self.device = device # <-- self.to(self.device) # <-- ... def forward(self, data): x, edge_index = data.x, data.edge_index x1 = copy.copy(x.float()) x = self.conv1(x, edge_index) skip_conn = torch.zeros(len(data.batch), x1.size(1), device=self.device) # <-- (some opps for x1 -> skip_conn) x = torch.cat((x, skip_conn), 1) Notice that in this example, MyNet is responsible for all the device logic including the .to(device) call. This way, we are encapsulating all model-related device management in the model class itself.
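An alternative that avoids storing or passing the device at all is to derive it from the module's own parameters inside forward. A minimal sketch, using a plain nn.Linear as a stand-in for the GCNConv layer from the question:

```python
import torch
import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, in_feats, hid_feats):
        super().__init__()
        self.lin1 = nn.Linear(in_feats, hid_feats)  # stand-in for GCNConv

    def forward(self, x):
        # Infer the device from the module's own parameters, so no
        # device attribute or argument is needed anywhere.
        device = next(self.parameters()).device
        h = torch.relu(self.lin1(x))
        skip_conn = torch.zeros(x.size(0), h.size(1), device=device)
        return torch.cat((h, skip_conn), 1)
```

This stays correct even if the caller later moves the model with model.to(...), because the device is read at forward time.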
https://stackoverflow.com/questions/61503720/
Cross-validation of neural network: How to treat the number of epochs?
I'm implementing a pytorch neural network (regression) and want to identify the best network topology, optimizer, etc. I use cross validation because I have x databases of measurements and I want to evaluate whether I can train a neural network on a subset of the x databases and apply it to the unseen databases. Therefore, I also introduce a test database, which I don't use in the hyperparameter identification phase. I am confused about how to treat the number of epochs in cross validation, e.g. I have a number of epochs = 100. There are two options: The number of epochs is a hyperparameter to tune. In each epoch, the mean error across all cross validation iterations is determined. After models are trained with all network topologies, optimizers etc., the model with the smallest mean error is determined and has parameters like: -network topology: 1 -optimizer: SGD -number of epochs: 54 To calculate the performance on the test set, a model is trained with exactly these parameters (number of epochs = 54) on the training and the validation data. Then it is applied and evaluated on the test set. The number of epochs is NOT a hyperparameter to tune. Models are trained with all the network topologies, optimizers etc. For each model, the number of epochs where the error is smallest is used. The models are compared and the best model can be determined with parameters like: -network topology: 1 -optimizer: SGD To calculate the performance on the test data, a "simple" training and validation split is used (e.g. 80-20). The model is trained with the above parameters and 100 epochs on the training and validation data. Finally, the model with the number of epochs yielding the smallest validation error is evaluated on the test data. Which option is the correct or the better one?
It is better not to fine-tune the number of epochs; option 2 is the better one. Actually, if the number of epochs were fixed, you would not need a validation set at all; the validation set is what gives you the optimal epoch for the saved model.
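A sketch of what option 2 looks like in practice: train for the full epoch budget, but keep the checkpoint from the epoch with the lowest validation error (train_one_epoch and eval_loss here are hypothetical helpers, not from the question):

```python
import torch

best_val, best_epoch = float('inf'), -1
for epoch in range(100):
    train_one_epoch(model)   # hypothetical training step
    val = eval_loss(model)   # hypothetical loss on the validation set
    if val < best_val:
        best_val, best_epoch = val, epoch
        torch.save(model.state_dict(), 'best.pt')  # keep the best checkpoint
# Finally, load 'best.pt' and evaluate it once on the test set.
```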
https://stackoverflow.com/questions/61504356/
GPU usage shows zero when CUDA with PyTorch using on Windows
I have a pytorch script. import torch torch.cuda.is_available() # True device=torch.device('cuda:0') # I moved my tensors to device But Windows Task Manager shows zero GPU (NVIDIA GTX 1050TI) usage while the pytorch script is running. The speed of my script is fine, and if I change torch.device to CPU instead of GPU the speed becomes slower, so cuda (GPU) is working. Why doesn't Windows Task Manager show GPU usage? Sample of my code: device=torch.device("cuda:0") model=torch.load('mymodel.pth', map_location=torch.device(device)) image=Image.open('picture.png').convert('RGB') transform=transforms.Compose([ transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) input=transform(image) input=torch.unsqueeze(input, 0) input=input.to(device) output=model(input)
Windows task manager overall utilization does not seem to include cuda usage. Make sure you select the cuda option in the graphs. For details see: https://medium.com/@michaelceber/gpu-monitoring-on-windows-10-for-machine-learning-cuda-41088de86d65
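Independent of Task Manager, one can also confirm from inside the script that CUDA memory is actually being used; a minimal sketch:

```python
import torch

# Allocate a tensor on the GPU and inspect the CUDA caching allocator.
x = torch.randn(1000, 1000, device='cuda:0')
print(x.device)                        # cuda:0
print(torch.cuda.memory_allocated(0))  # bytes currently allocated on GPU 0
```

Running nvidia-smi in a separate terminal while the script executes is another way to see the utilization that Task Manager's default view hides.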
https://stackoverflow.com/questions/61507765/
How does one pickle arbitrary pytorch models that use lambda functions?
I currently have a neural network module: import torch.nn as nn class NN(nn.Module): def __init__(self,args,lambda_f,nn1, loss, opt): super().__init__() self.args = args self.lambda_f = lambda_f self.nn1 = nn1 self.loss = loss self.opt = opt # more nn.Params stuff etc... def forward(self, x): #some code using fields return out I am trying to checkpoint it, but because pytorch saves using state_dicts, it means I can't save the lambda functions I was actually using if I checkpoint with the pytorch torch.save etc. I literally want to save everything without issue and re-load to train on GPUs later. I currently am using this: def save_ckpt(path_to_ckpt): from pathlib import Path import dill as pickle ## Make dir. Throw no exceptions if it already exists path_to_ckpt.mkdir(parents=True, exist_ok=True) ckpt_path_plus_path = path_to_ckpt / Path('db') ## Pickle args db['crazy_mdl'] = crazy_mdl with open(ckpt_path_plus_path , 'ab') as db_file: pickle.dump(db, db_file) Currently it throws no errors when I checkpoint it, and it saves. I am worried that when I train it there might be a subtle bug even if no exceptions/errors are raised, or that something unexpected might happen (e.g. weird saving on disks in the clusters etc., who knows). Is this safe to do with pytorch classes/nn models? Especially if we want to resume training with GPUs? Cross posted: How does one pickle arbitrary pytorch models that use lambda functions? https://discuss.pytorch.org/t/how-does-one-pickle-arbitrary-pytorch-models-that-use-lambda-functions/79026 https://www.reddit.com/r/pytorch/comments/gagpjg/how_does_one_pickle_arbitrary_pytorch_models_that/? https://www.quora.com/unanswered/How-does-one-pickle-arbitrary-PyTorch-models-that-use-lambda-functions
This is not a good idea. If you do this, then if your code moves to a different github repo, it will be hard to restore the models that took a lot of time to train. The cycles spent recovering those, or retraining, are not worth it. I recommend instead doing it the pytorch way and only saving the weights, as they recommend in the pytorch docs.
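A minimal sketch of the recommended state_dict approach, using the NN class and constructor arguments from the question:

```python
import torch

# Save only the weights:
torch.save(model.state_dict(), 'checkpoint.pth')

# Later, rebuild the module from source code and load the weights.
# The constructor arguments (args, lambda_f, nn1, loss, opt) come from
# your own code, so the lambdas never need to be pickled at all.
model = NN(args, lambda_f, nn1, loss, opt)
model.load_state_dict(torch.load('checkpoint.pth'))
```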
https://stackoverflow.com/questions/61510810/
How is Siamese network realized with Pytorch if it is single input during inference?
I am trying to train one CNN model with Pytorch, so that the output behaves differently for different types of inputs. (i.e. If the input images are human-beings, it outputs pattern A, but if the input is some other animals, it outputs pattern B). After some online search, it seems Siamese network is related to this. So I have the following 2 questions: (1) Is Siamese network really a good way to train such a model? (2) From the implementation point of view, how should I implement the code in pytorch? class SiameseNetwork(nn.Module): def __init__(self): super(SiameseNetwork, self).__init__() self.cnn1 = nn.Sequential( nn.ReflectionPad2d(1), nn.Conv2d(1, 4, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(4), nn.ReflectionPad2d(1), nn.Conv2d(4, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8), nn.ReflectionPad2d(1), nn.Conv2d(8, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8), ) self.fc1 = nn.Sequential( nn.Linear(8*100*100, 500), nn.ReLU(inplace=True), nn.Linear(500, 500), nn.ReLU(inplace=True), nn.Linear(500, 5)) def forward_once(self, x): output = self.cnn1(x) output = output.view(output.size()[0], -1) output = self.fc1(output) return output def forward(self, input1, input2): output1 = self.forward_once(input1) output2 = self.forward_once(input2) return output1, output2 Currently, I am trying some existing implementation I found online like the above class definition. It works, but there will always be two inputs and two outputs for this model. I agree that it is convenient for training, but ideally, it should be only one input and one (two is also fine) output during inference. Could someone provide some guidance on how to modify the code to make it single input?
You can call forward_once during inference: this takes a single input and returns a single output. Note that explicitly calling forward_once will not invoke any hooks you might have on forward/backward calls of your module. Alternatively, you can make forward_once your module's forward function, and make your training function do the double calling of your model (which makes more sense: Siamese networks is a training method, and not part of a network's architecture).
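A sketch of the suggested refactor: forward takes a single input, and the training loop performs the double call (model, input1, input2, and single_input are assumed to come from your training setup; __init__ is unchanged from the question):

```python
import torch.nn as nn

class SiameseNetwork(nn.Module):
    # ... same __init__ as in the question ...

    def forward(self, x):  # the former forward_once
        output = self.cnn1(x)
        output = output.view(output.size(0), -1)
        return self.fc1(output)

# Training: call the model once per branch of the pair.
out1, out2 = model(input1), model(input2)
# Inference: a single input yields a single embedding.
embedding = model(single_input)
```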
https://stackoverflow.com/questions/61512605/
How to concatenate a tensor WITHIN axis=1?
I have a tensor of shape (2,2,2,2): tensor([[[[ 5., 5.], [ 5., 5.]], [[ 10., 10.], [ 10., 10.]]], [[[ 100., 100.], [ 100., 100.]], [[1000., 1000.], [1000., 1000.]]]], device='cuda:0') I want to transform it such that the elements along axis=1 are repeated 3 times, and after applying .view(-1) to that I get a 1D resultant tensor: tensor([ 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000.], device='cuda:0') How do I do this?
Use torch.repeat_interleave to Repeat elements of a tensor. t.repeat_interleave(repeats=3, dim=1).view(-1) tensor([ 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 5., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000.])
https://stackoverflow.com/questions/61514670/
Custom submodules in pytorch / libtorch C++
Full disclosure, I asked this same question on the PyTorch forums about a few days ago and got no reply, so this is technically a repost, but I believe it's still a good question, because I've been unable to find an answer anywhere online. Here goes: Can you show an example of using register_module with a custom module? The only examples I’ve found online are registering linear layers or convolutional layers as the submodules. I tried to write my own module and register it with another module and I couldn’t get it to work. My IDE is telling me no instance of overloaded function "MyModel::register_module" matches the argument list -- argument types are: (const char [14], TreeEmbedding) (TreeEmbedding is the name of another struct I made which extends torch::nn::Module.) Am I missing something? An example of this would be very helpful. Edit: Additional context follows below. I have a header file "model.h" which contains the following: struct TreeEmbedding : torch::nn::Module { TreeEmbedding(); torch::Tensor forward(Graph tree); }; struct MyModel : torch::nn::Module{ size_t embeddingSize; TreeEmbedding treeEmbedding; MyModel(size_t embeddingSize=10); torch::Tensor forward(std::vector<Graph> clauses, std::vector<Graph> contexts); }; I also have a cpp file "model.cpp" which contains the following: MyModel::MyModel(size_t embeddingSize) : embeddingSize(embeddingSize) { treeEmbedding = register_module("treeEmbedding", TreeEmbedding{}); } This setup still has the same error as above. The code in the documentation does work (using built-in components like linear layers), but using a custom module does not. After tracking down torch::nn::Linear, it looks as though that is a ModuleHolder (Whatever that is...) Thanks, Jack
I will accept a better answer if anyone can provide more details, but just in case anyone's wondering, I thought I would put up the little information I was able to find: register_module takes a string as its first argument, and its second argument can either be a ModuleHolder (I don't know what this is...) or, alternatively, a shared_ptr to your module. So here's my example: treeEmbedding = register_module<TreeEmbedding>("treeEmbedding", make_shared<TreeEmbedding>()); This has seemed to work for me so far.
https://stackoverflow.com/questions/61515915/
Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000]))
I am using the models.vgg16(pretrained=True) model for image classification, where the number of classes is 3. The batch size is 12: trainloader = torch.utils.data.DataLoader(train_data, batch_size=12, shuffle=True) Since the error says Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000])), I changed the last FC layer's parameters and got the last FC layer as Linear(in_features=1000, out_features=3, bias=True) The loss function is BCEWithLogitsLoss(): criterion = nn.BCEWithLogitsLoss() optimizer = optim.SGD(vgg16.parameters(), lr=0.001, momentum=0.9) The training code is # zero the parameter gradients optimizer.zero_grad() outputs = vgg16(inputs) #----> forward pass loss = criterion(outputs, labels) #----> compute loss #error occurs here loss.backward() #----> backward pass optimizer.step() #----> weights update While computing the loss, I get this error: Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000])) The code is available at: code
Try to double-check how you modified the linear layer. It seems that somehow the model does not forward pass through it. Your model output has size 1000 for each sample, while it should have 3. That's the reason you cannot evaluate the loss, since you try to compare 1000 classes to 3. You should have 3 outputs in your last layer, and that should work. EDIT From the code you shared here: link, I think there are two problems. First, you modified your model this way: # Load the pretrained model from pytorch vgg16 = models.vgg16(pretrained=True) vgg16.classifier[6].in_features = 1000 vgg16.classifier[6].out_features = 3 Setting these attributes only changes the layer's bookkeeping fields; it does not replace the layer or its weight matrix, so the forward pass still goes through the original 1000-output layer. The way to do this properly is either to replace the module itself, or to define a new class which inherits from the model you want to modify - class myvgg16(models.vgg16), or more generally class myvgg(nn.Module) - and adjust its forward() accordingly. You can find further explanation in the following link. If that fails, try to unsqueeze(1) your target's size (i.e. the labels variable). This is less likely to be the reason for the error, but worth a try. EDIT Also try converting your target tensor to one-hot vectors, and change the tensor type to Float, as BCELoss receives floats.
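For reference, a minimal sketch of replacing the last classifier layer so the forward pass actually produces 3 outputs:

```python
import torch.nn as nn
from torchvision import models

vgg16 = models.vgg16(pretrained=True)
# Replace the module itself instead of editing its attributes:
vgg16.classifier[6] = nn.Linear(vgg16.classifier[6].in_features, 3)
```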
https://stackoverflow.com/questions/61518226/
How to fix the dimension error in the loss function/softmax?
I am implementing a logistic regression in PyTorch for XOR (I don't expect it to work well it's just a demonstration). For some reason I am getting an error 'IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)'. It is not clear to me where this originates. The error points to log_softmax during training. import torch.nn as nn import torch.nn.functional as F class LogisticRegression(nn.Module): # input_size: Dimensionality of input feature vector. # num_classes: The number of classes in the classification problem. def __init__(self, input_size, num_classes): # Always call the superclass (nn.Module) constructor first! super(LogisticRegression, self).__init__() # Set up the linear transform self.linear = nn.Linear(input_size, num_classes) # Forward's sole argument is the input. # input is of shape (batch_size, input_size) def forward(self, x): # Apply the linear transform. # out is of shape (batch_size, num_classes) out = self.linear(x) # Softmax the out tensor to get a log-probability distribution # over classes for each example. out_distribution = F.softmax(out, dim=-1) return out_distribution # Binary classifiation num_outputs = 1 num_input_features = 2 # Create the logistic regression model logreg_clf = LogisticRegression(num_input_features, num_outputs) print(logreg_clf) lr_rate = 0.001 X = torch.Tensor([[0,0],[0,1], [1,0], [1,1]]) Y = torch.Tensor([0,1,1,0]).view(-1,1) #view is similar to numpy.reshape() # Run the forward pass of the logistic regression model sample_output = logreg_clf(X) #completely random at the moment print(X) loss_function = nn.CrossEntropyLoss() # computes softmax and then the cross entropy optimizer = torch.optim.SGD(logreg_clf.parameters(), lr=lr_rate) from torch.autograd import Variable #training loop: epochs = 201 #how many times we go through the training set steps = X.size(0) #steps = 4; we have 4 training examples for i in range(epochs): for j in range(steps): #sample from the training set: data_point = np.random.randint(X.size(0)) x_var = Variable(X[data_point], requires_grad=False) y_var = Variable(Y[data_point], requires_grad=False) optimizer.zero_grad() # zero the gradient buffers y_hat = logreg_clf(x_var) #get the output from the model loss = loss_function.forward(y_hat, y_var) #calculate the loss loss.backward() #backprop optimizer.step() #does the update if i % 500 == 0: print ("Epoch: {0}, Loss: {1}, ".format(i, loss.data.numpy()))
First of all, you are doing a binary classification task, so the number of output features should be 2; i.e., num_outputs = 2, not 1. Second, as declared in the nn.CrossEntropyLoss() documentation, the .forward method accepts two tensors as below: Input: (N, C) where C is the number of classes (in your case it is 2). Target: (N) N in the example above is the number of training examples that you pass in to the loss function; for simplicity, you can set it to one (i.e., doing a forward pass for each instance and updating gradients thereafter). Note: Also, you don't need to use .Softmax() before the nn.CrossEntropyLoss() module, as this class has nn.LogSoftmax included in itself. I modified your code as below; this is a working example of your snippet: import torch.nn as nn import torch.nn.functional as F import numpy as np import torch class LogisticRegression(nn.Module): # input_size: Dimensionality of input feature vector. # num_classes: The number of classes in the classification problem. def __init__(self, input_size, num_classes): # Always call the superclass (nn.Module) constructor first! super(LogisticRegression, self).__init__() # Set up the linear transform self.linear = nn.Linear(input_size, num_classes) # Forward's sole argument is the input. # input is of shape (batch_size, input_size) def forward(self, x): # Apply the linear transform. # out is of shape (batch_size, num_classes) out = self.linear(x) # Return raw logits; nn.CrossEntropyLoss applies log-softmax itself. return out # Binary classification num_outputs = 2 num_input_features = 2 # Create the logistic regression model logreg_clf = LogisticRegression(num_input_features, num_outputs) print(logreg_clf) lr_rate = 0.001 X = torch.Tensor([[0,0],[0,1], [1,0], [1,1]]) Y = torch.Tensor([0,1,1,0]).view(-1,1) #view is similar to numpy.reshape() # Run the forward pass of the logistic regression model sample_output = logreg_clf(X) #completely random at the moment print(X) loss_function = nn.CrossEntropyLoss() # computes softmax and then the cross entropy optimizer = torch.optim.SGD(logreg_clf.parameters(), lr=lr_rate) from torch.autograd import Variable #training loop: epochs = 201 #how many times we go through the training set steps = X.size(0) #steps = 4; we have 4 training examples for i in range(epochs): for j in range(steps): #sample from the training set: data_point = np.random.randint(X.size(0)) x_var = Variable(X[data_point], requires_grad=False).unsqueeze(0) y_var = Variable(Y[data_point], requires_grad=False).long() optimizer.zero_grad() # zero the gradient buffers y_hat = logreg_clf(x_var) #get the output from the model loss = loss_function(y_hat, y_var) #calculate the loss loss.backward() #backprop optimizer.step() #does the update if i % 500 == 0: print ("Epoch: {0}, Loss: {1}, ".format(i, loss.data.numpy())) Update To get the predicted class label, which is either 0 or 1: pred = np.argmax(y_hat.detach().numpy(), axis=1) As for the .detach() call, numpy expects the tensor to be detached from the computation graph; i.e., the tensor should not have requires_grad=True, and the detach method does the trick for you.
https://stackoverflow.com/questions/61519128/
No such file or directory: 'docker': 'docker' when running sagemaker studio in local mode
I am trying to train a pytorch model on amazon sagemaker studio. It works when I use an EC2 instance for training with: estimator = PyTorch(entry_point='train_script.py', role=role, sagemaker_session = sess, train_instance_count=1, train_instance_type='ml.c5.xlarge', framework_version='1.4.0', source_dir='.', git_config=git_config, ) estimator.fit({'stockdata': data_path}) and it works in local mode in a classic sagemaker notebook (non studio) with: estimator = PyTorch(entry_point='train_script.py', role=role, train_instance_count=1, train_instance_type='local', framework_version='1.4.0', source_dir='.', git_config=git_config, ) estimator.fit({'stockdata': data_path}) But when I use the same code (with train_instance_type='local') on sagemaker studio, it doesn't work and I get the following error: No such file or directory: 'docker': 'docker' I tried to install docker with pip install, but the docker command is not found if I use it in the terminal.
This indicates that there is a problem finding the Docker service. By default, Docker is not installed in SageMaker Studio (confirmed in the linked GitHub ticket response).
https://stackoverflow.com/questions/61520346/
pytorch F.cross_entropy does not apply gradient to weights
I'm trying to train an MLP from scratch using torch tensors and some of the built-in loss functions. I have the IRIS data downloaded and stored as a tensor (100, 4) and labels (100) (integers 0-2) in data_tr and targets_tr. I have enabled gradients on the input data: data_tr.requires_grad=True I have a 2-layer MLP initialized like this: W1 = torch.randn([4, 64], requires_grad=True) W2 = torch.randn([64, 3], requires_grad=True) b1 = torch.tensor([1.0], requires_grad=True) b2 = torch.tensor([1.0], requires_grad=True) I understand that I should train like this: for epoch in range(num_epochs): W1.grad = None W2.grad = None b1.grad = None b2.grad = None f = torch.relu(data_tr @ W1 + b1) @ W2 + b2 error = torch.nn.functional.cross_entropy(f, targets_tr) error.backward() W1 = W1 - lr * W1.grad W2 = W2 - lr * W2.grad b1 = b1 - lr * b1.grad b2 = b2 - lr * b2.grad This makes a 2-layer MLP, and cross_entropy applies softmax. The problem now is that none of the weights or biases (W1, W2, b1, b2) have any gradients after the backward pass, so I get a TypeError: unsupported operand type(s) for *: 'float' and 'NoneType' on the first attempt to update a weight.
If you want to update the weights without using an optimizer, you have to either use torch.no_grad() or update their data directly to ensure autograd is not tracking the update operations. with torch.no_grad(): W1 -= lr * W1.grad W2 -= lr * W2.grad b1.data = b1 - lr * b1.grad b2.data = b2 - lr * b2.grad Note that in the first case, if you do not use the in-place subtraction assignment (-=), requires_grad will be set to False for the weights, which again will result in None values for the gradients. with torch.no_grad(): W1 = W1 - lr * W1.grad W2 = W2 - lr * W2.grad print(W1.requires_grad, W2.requires_grad) >>> False False
https://stackoverflow.com/questions/61521464/
Pytorch: How to find accuracy for Multi Label Classification?
I am using vgg16, where the number of classes is 3, and I can have multiple labels predicted for a data point. vgg16 = models.vgg16(pretrained=True) vgg16.classifier[6]= nn.Linear(4096, 3) using loss function: nn.BCEWithLogitsLoss() I am able to find accuracy in the case of a single-label problem as follows: images, labels = data images, labels = images.to(device), labels.to(device) labels = Encode(labels) outputs = vgg16(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() acc = (100 * correct / total) How can I find accuracy for multi-label classification?
From your question, vgg16 is returning raw logits. So here's what you can do: labels = Encode(labels) # torch.Size([N, C]) e.g. tensor([[1., 1., 1.]]) outputs = vgg16(images) # torch.Size([N, C]) outputs = torch.sigmoid(outputs) # torch.Size([N, C]) e.g. tensor([[0., 0.5, 0.]]) outputs[outputs >= 0.5] = 1 accuracy = (outputs == labels).sum()/(N*C)*100
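A self-contained sketch of the same computation on dummy data (shapes assumed from the question; a 0.5 threshold on the sigmoid output, as above):

```python
import torch

N, C = 4, 3                                   # batch size, number of classes
outputs = torch.randn(N, C)                   # raw logits, as from vgg16
labels = torch.randint(0, 2, (N, C)).float()  # multi-hot targets

preds = (torch.sigmoid(outputs) >= 0.5).float()
accuracy = (preds == labels).float().mean().item() * 100
print(accuracy)
```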
https://stackoverflow.com/questions/61524717/
I am trying to build a neural network with one neuron using the pytorch library. It keeps giving me an error
I am trying to build a neural network with one neuron using the pytorch library. This is my code (the error is at the bottom): import torch import numpy as np import random import matplotlib.pyplot as plt x_train = np.array([random.randint(1,1000) for x in range(1000)], dtype = np.float32) y_train = np.array([int(num*3+1) for num in x_train], dtype = np.float32) x_test = np.array([random.randint(1,1000) for x in range(1000)], dtype = np.float32) y_test = np.array([int(num*3+1) for num in x_train], dtype = np.float32) X_train = torch.from_numpy(x_train) Y_train = torch.from_numpy(y_train) plt.figure(figsize = (8,8)) plt.scatter(X_train, Y_train) plt.show() X_test = torch.from_numpy(x_test) Y_test = torch.from_numpy(y_test) input_size = 1 hidden_size = 1 output_size = 1 learning_rate = 0.1 w1 = torch.rand(input_size, hidden_size, requires_grad = True) b1 = torch.rand(hidden_size, output_size, requires_grad = True) for i in range(100): y_pred = X_train.mm(w1).clamp(min = 0).add(b1) loss = (Y_train-y_pred).pow(2).sum() loss.backward() with torch.no_grad(): w1-=w1.grad*learning_rate b1 -= b1.grad*learning_rate w1.grad.zero_() b1.grad.zero_() When I run this code, it gives me a runtime error: RuntimeError Traceback (most recent call last) <ipython-input-84-5142b17ecfff> in <module> 32 33 for i in range(100): ---> 34 y_pred = X_train.mm(w1).clamp(min = 0).add(b1) 35 loss = (Y_train-y_pred).pow(2).sum() 36 RuntimeError: matrices expected, got 1D, 2D tensors at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensorMath.cpp:192 What is wrong with that line of code, and how can I make it work as planned?
torch.mm expects 2D matrices, so you need to add a new dimension to your 1D input tensors. X_train = torch.from_numpy(x_train[..., np.newaxis]) X_test = torch.from_numpy(x_test[..., np.newaxis]) As someone commented above, you can also use torch.unsqueeze: for i in range(100): y_pred = torch.unsqueeze(X_train, 1).mm(w1).clamp(min = 0).add(b1) loss = (Y_train-y_pred).pow(2).sum() Both do the same. The former applies it to the numpy arrays, and the latter does it on the torch tensors. Both will result in this shape, which is the correct format for torch: Out[13]: torch.Size([1000, 1])
https://stackoverflow.com/questions/61525611/
Pytorch memory model: how does "torch.from_numpy()" work?
I'm trying to have an in-depth understanding of how torch.from_numpy() works. import numpy as np import torch arr = np.zeros((3, 3), dtype=np.float32) t = torch.from_numpy(arr) print("arr: {0}\nt: {1}\n".format(arr, t)) arr[0,0]=1 print("arr: {0}\nt: {1}\n".format(arr, t)) print("id(arr): {0}\nid(t): {1}".format(id(arr), id(t))) The output looks like this: arr: [[0. 0. 0.] [0. 0. 0.] [0. 0. 0.]] t: tensor([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) arr: [[1. 0. 0.] [0. 0. 0.] [0. 0. 0.]] t: tensor([[1., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) id(arr): 2360964353040 id(t): 2360964352984 This is part of the doc from torch.from_numpy(): from_numpy(ndarray) -> Tensor Creates a :class:Tensor from a :class:numpy.ndarray. The returned tensor and :attr:ndarray share the same memory. Modifications to the tensor will be reflected in the :attr:ndarray and vice versa. The returned tensor is not resizable. And this is taken from the doc of id(): Return the identity of an object. This is guaranteed to be unique among simultaneously existing objects. (CPython uses the object's memory address.) So here comes the question: Since the ndarray arr and tensor t share the same memory, why do they have different memory addresses? Any ideas/suggestions?
Yes, t and arr are different Python objects at different regions of memory (hence the different id), but both point to the same memory address which contains the data (a contiguous (usually) C array). numpy operates on this region using C code bound to Python functions, and the same goes for torch (but using C++). id() doesn't know anything about the memory address of the data itself, only of its "wrappers". EDIT: When you assign b = a (assuming a is np.array), both are references to the same Python wrapper (np.ndarray). In other words they are the same object with a different name. It's just how Python's assignment works, see the documentation. All of the cases below would return True as well: import torch import numpy as np tensor = torch.tensor([1,2,3]) tensor2 = tensor id(tensor) == id(tensor2) arr = np.array([1, 2, 3, 4, 5]) arr2 = arr id(arr) == id(arr2) some_str = "abba" other_str = some_str id(some_str) == id(other_str) value = 0 value2 = value id(value) == id(value2) Now, when you use torch.from_numpy on np.ndarray you have two objects of different classes (torch.Tensor and the original np.ndarray). As those are of different types they can't have the same id. One could see this case as analogous to the one below: value = 3 string_value = str(3) id(value) == id(string_value) Here it's intuitive that both string_value and value are two different objects at different memory locations. EDIT 2: All in all, the concept of a Python object and that of the underlying C array have to be separated. id() doesn't know about C bindings (how could it?), but it knows about the memory addresses of the Python structures (torch.Tensor, np.ndarray). In the case of numpy and torch.tensor you can have the following situations: separate on the Python level but using the same memory region for the array (torch.from_numpy) separate on the Python level and in the underlying memory region (one torch.tensor and another np.array); could be created by from_numpy followed by clone() or a similar deep-copy operation same on the Python level and in the underlying memory region (e.g. two torch.tensor objects, one referencing another, as provided above)
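One can verify this directly by comparing the data pointers rather than the object ids; a minimal sketch:

```python
import numpy as np
import torch

arr = np.zeros((3, 3), dtype=np.float32)
t = torch.from_numpy(arr)

# Different Python wrapper objects...
print(id(arr) == id(t))                                    # False
# ...but the very same underlying data buffer:
print(t.data_ptr() == arr.__array_interface__['data'][0])  # True
```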
https://stackoverflow.com/questions/61526297/
code to print information about only one batch
I want to view information about my variables (type, min, max, std, etc.). With the code I wrote below, it shows me information about all images in all batches. Please tell me code that will show me information about only one batch (something like "for batch[0], print(type(input))"). My code: for t,(input, image_id) in enumerate(loader): print(image_id) input = input.cuda() with torch.no_grad(): logit = net(input) probabilityy= torch.sigmoid(logit) probability = probabilityy.data.cpu().numpy() batch_size = len(image_id) for b in range(batch_size): p = probability[b] for c in range(4): predict, num = post_process(p[c], threshold, min_size) rle = run_length_encode(predict) print('input shape: ',input.shape) print('input ndim qty osi: ',input.ndim) print('input type: ',input.type) print('input mean std max min: ',input.mean(),input.std(),input.max(),input.min(),'\n') print('input: ',input[0,0:3,100:105,100:105])
The underlying Dataset has a __getitem__ dunder method, so you can index individual samples, but a DataLoader itself is not indexable. I haven't seen how you load the data, so a universal solution is to just break the loop: # ... print('input: ',input[0,0:3,100:105,100:105]) break Like that, it won't move on to the next loop iteration and will effectively execute this only once, i.e., on the first batch.
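Alternatively, a common idiom is to pull a single batch straight from the loader's iterator; a minimal sketch:

```python
# Grab just the first batch without writing a loop.
input, image_id = next(iter(loader))
print(input.shape, input.mean(), input.std(), input.max(), input.min())
```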
https://stackoverflow.com/questions/61530023/
Wrap two tensors in pytorch to get size of new tensor as 2
I have two tensors say x and y: x has shape: [21314, 3, 128, 128] y has shape: [21314] Can I get new tensor of shape : [ [21314, 3, 128, 128], [21314] ], basically of shape 2
I believe it's not possible if you require saving it as a tensor object. Of course, you can use a list or a tuple for this case, but I guess that was not what you meant. First, a tensor is simply a generalization of a matrix for n dimensions instead of two. But let's simplify this to a matrix for now, for example 4x3. The first dimension is of size 4, which means 4 entries. A second dimension of 3 means that each of the 4 first-dimension entries will have exactly (and no fewer than) 3 entries. That is, you must have a full list of 3 elements in each nested list. In this simple example, note that you cannot have a matrix like this one: [[1,2,3] [1,2] [1] ] While this is a nested list, it's not a matrix and also not a 2d tensor. What I'm trying to say is that the shape you requested - [ [21314, 3, 128, 128], [21314] ] - is actually not a tensor. You could think of it as a tensor of size two with a tensor in each entry (which is probably what you meant when asking the question), but this is not possible, since tensors in pytorch hold only numbers of types: float32, float64, float16, uint8, int8, int16, int32, int64, bool. Nevertheless, in most cases you can achieve what you need by assigning the two tensors to a list or tuple.
https://stackoverflow.com/questions/61530889/
PyTorch - unexpected shape of model parameters weights
I created a fully connected network in Pytorch with an input layer of shape (1,784) and a first hidden layer of shape (1,256). To be short: nn.Linear(in_features=784, out_features=256, bias=True) Method 1 : model.fc1.weight.data.shape gives me torch.Size([128, 256]), while Method 2 : list(model.parameters())[0].shape gives me torch.Size([256, 784]) In fact, between an input layer of size 784 and a hidden layer of size 256, I was expecting a matrix of shape (784,256). So, in the first case, I see the shape of the next hidden layer (128), which does not make sense for the weights between the input and first hidden layer, and, in the second case, it looks like Pytorch took the transpose of the weight matrix. I don't really understand how Pytorch shapes the different weight matrices, or how I can access individual weights after training. Should I use method 1 or 2? When I display the corresponding tensors, the displays look totally similar, while the shapes are different.
In Pytorch, the weights of model parameters are transposed before applying the matmul operation on the input matrix. That's why the weight matrix dimensions are flipped, which is different from what you expect; i.e., instead of being [784, 256], you observe that it is [256, 784]. You can see the Pytorch source documentation for nn.Linear, where we have: ... self.weight = Parameter(torch.Tensor(out_features, in_features)) ... def forward(self, input): return F.linear(input, self.weight, self.bias) When looking at the implementation of F.linear, we see the corresponding line that multiplies the input matrix with the transpose of the weight matrix: output = input.matmul(weight.t())
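A quick check confirming the (out_features, in_features) layout:

```python
import torch.nn as nn

fc1 = nn.Linear(in_features=784, out_features=256, bias=True)
print(fc1.weight.shape)  # torch.Size([256, 784]): (out_features, in_features)
print(fc1.bias.shape)    # torch.Size([256])
```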
https://stackoverflow.com/questions/61532695/
Pytorch Matrix element wise multiplication
Let's say I have two tensors, A of shape [32, 512] and B of shape [32, 512], and I want to do element-wise multiplication between the rows of the two matrices and reduce each row, to get a new matrix of shape [32, 1] (first row of A with first row of B, second row of A with second row of B, and so on). The methods I have tried so far simply multiply the matrix values element-wise without reducing the rows and give a matrix of shape [32, 512]. Thanks!
What you're describing is a row-wise dot product: multiply element-wise, then sum over each row. import torch a = torch.zeros(32, 512) b = torch.zeros(32, 512) c = torch.sum(a * b, dim=1, keepdim=True) print(c.shape) Out: torch.Size([32, 1])
https://stackoverflow.com/questions/61543572/
totally clear GPU memory
I can't seem to clear the GPU memory after sending a single variable to the GPU. import torch tm = torch.Tensor([1,2]).to("cuda") !nvidia-smi |===============================+======================+======================| | 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A | | 0% 37C P2 52W / 250W | 730MiB / 10989MiB | 0% Default So I use 730MiB... Now no matter what I try I can not make the 730MiB go to zero: del tm torch.cuda.empty_cache() import sys;sys.modules[__name__].__dict__.clear() %reset Once deleted, variables cannot be recovered. Proceed (y/[n])? y !nvidia-smi | 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A | | 0% 35C P8 1W / 250W | 728MiB / 10989MiB | 0% Default | I would be happy to hear any suggestions, Thanks
OK, it is not possible: this memory is held by the torch CUDA context/driver and cannot be released. I've opened a ticket on the pytorch GitHub - https://github.com/pytorch/pytorch/issues/37664
https://stackoverflow.com/questions/61544379/
GAN does not learn when using symmetric outputs from generator to disciminator
I'm currently trying to implement the paper Generative modeling for protein structures and I have successfully been able to train a model following Pytorch's DCGAN Tutorial, which has a similar model structure to the paper. The two implementations differ when it comes to the output of the generator. In the tutorial's model, the generator simply passes a normal output matrix to the discriminator. This works fine when I implement the paper's model (omitting the symmetry and clamping) but the paper specifies: During training, we enforce that G(z) be positive by clamping output values above zero and symmetric. When I put this into my training loop I receive a loss graph that indicates that the generator isn't learning. Here is my training loop: # Training Loop # Lists to keep track of progress img_list = [] G_losses = [] D_losses = [] iters = 0 print("Starting Training Loop...") # For each epoch for epoch in range(num_epochs): # For each batch in the dataloader for i, data in enumerate(dataloader, 0): ############################ # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) ########################### ## Train with all-real batch netD.zero_grad() # Format batch # Unsqueezed dim one to convert [128, 64, 64] to [128, 1, 64, 64] to conform to D architecture real_cpu = (data.unsqueeze(dim=1).type(torch.FloatTensor)).to(device) b_size = real_cpu.size(0) label = torch.full((b_size,), real_label, device=device) # Forward pass real batch through D output = netD(real_cpu).view(-1) # Calculate loss on all-real batch errD_real = criterion(output, label) # Calculate gradients for D in backward pass errD_real.backward() D_x = output.mean().item() ## Train with all-fake batch # Generate batch of latent vectors noise = torch.randn(b_size, nz, 1, 1, device=device) # Generate fake image batch with G fake = netG(noise) label.fill_(fake_label) # Make Symmetric sym_fake = (fake.detach().clamp(min=0) + fake.detach().clamp(min=0).permute(0, 1, 3, 2)) / 2 # Classify all fake batch with D output = netD(sym_fake).view(-1) # Calculate D's loss on the all-fake batch errD_fake = criterion(output, label) # Calculate the gradients for this batch errD_fake.backward() D_G_z1 = output.mean().item() # Add the gradients from the all-real and all-fake batches errD = errD_real + errD_fake # Update D optimizerD.step() #adjust_optim(optimizerD, iters) ############################ # (2) Update G network: maximize log(D(G(z))) ########################### netG.zero_grad() label.fill_(real_label) # fake labels are real for generator cost # Since we just updated D, perform another forward pass of all-fake batch through D output = netD(fake.detach()).view(-1) # Calculate G's loss based on this output errG = criterion(output, label) # Calculate gradients for G errG.backward() D_G_z2 = output.mean().item() # Update G optimizerG.step() adjust_optim(optimizerG, iters) # Output training stats if i % 50 == 0: print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch, num_epochs, i, len(dataloader), errD.item(), errG.item(), D_x, D_G_z1, D_G_z2)) # Save Losses for plotting later G_losses.append(errG.item()) D_losses.append(errD.item()) # Check how the generator is doing by saving G's output on fixed_noise if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)): with torch.no_grad(): fake = netG(fixed_noise).detach().cpu() img_list.append(vutils.make_grid(fake, padding=2, normalize=True)) iters += 1 Here is the training loss. Here is my expected loss.
I make the output symmetric with the following line sym_fake = (fake.detach().clamp(min=0) + fake.detach().clamp(min=0).permute(0, 1, 3, 2)) / 2 and I pass it to the discriminator on the lines that call sym_fake Question Is my implementation in pytorch wrong or is there something I'm missing? I don't understand why the paper makes the matrix symmetric and clamps if the network is capable of generating images without the need for symmetry and clamping.
It could be because the output passed to the criterion for the netG update was computed from a detached tensor (netD(fake.detach())). Detaching cuts the computation graph, so gradients never flow back to netG's parameters, and the optimizer is not / cannot be updating the parameters of netG.
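A sketch of what the generator step could look like with the detach removed, so gradients flow back through netG (detaching remains correct in the discriminator step, where you do not want gradients reaching the generator):

```python
# Generator update: build the symmetric, clamped output WITHOUT detach.
netG.zero_grad()
label.fill_(real_label)  # fake labels are real for generator cost
sym_fake = (fake.clamp(min=0) + fake.clamp(min=0).permute(0, 1, 3, 2)) / 2
output = netD(sym_fake).view(-1)
errG = criterion(output, label)
errG.backward()
optimizerG.step()
```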
https://stackoverflow.com/questions/61544723/
Pytorch merging list of tensors together
Let's say I have a list of tensors ([A, B, C]) where each tensor is of shape [batch_size x 1024]. I want to merge all the tensors into a single tensor in the following way: the first row of A is the first row in the new tensor, the first row of B is the second row in the new tensor, the first row of C is the third row of the new tensor, and so on and so forth. So far I did it with for loops and this is not efficient at all. I would love to hear about more efficient ways. Thanks
Here's a minimal example that works: import torch a = torch.tensor([[1,1],[1,1]]) b = torch.tensor([[2,2],[2,2]]) c = torch.tensor([[3,3],[3,3]]) torch.stack([a,b,c],dim=0).view(6,2).t().contiguous().view(6,2) The output is: tensor([[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]]) In your case, view(6,2) should change to batch_size*3, 1024. Solution adapted from PyTorch forums where an example was shown with two tensors.
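A hedged alternative that produces the same interleaving in fewer steps, by stacking along dim=1 so rows from a, b, and c alternate, then flattening:

```python
import torch

a = torch.tensor([[1, 1], [1, 1]])
b = torch.tensor([[2, 2], [2, 2]])
c = torch.tensor([[3, 3], [3, 3]])
# Shape (2, 3, 2) -> flatten the first two dims.
# Use .view(-1, 1024) for the shapes in the question.
print(torch.stack([a, b, c], dim=1).view(-1, 2))
```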
https://stackoverflow.com/questions/61547919/
How do i add a dimension to a numpy array and copy the dimension from another numpy array
I have a numpy array with the shape (128, 8). I want to add an extra dimension so it has the shape (128, 168, 8), and add the content along the 168 dimension from another array that has the shape (128, 168, 8). I can always permute the positions of the dimensions if I can somehow add it. Is this possible somehow? I have seen the append and concatenation methods, but with no luck.
np.expand_dims(smaller_array, axis=1) + bigger_array is the correct solution, thanks! (expand_dims turns the (128, 8) array into (128, 1, 8), which then broadcasts across the 168 dimension.)
https://stackoverflow.com/questions/61548364/
How can I monitor both training and eval loss when finetuning BERT on a GLUE task?
I am running https://github.com/huggingface/transformers/blob/master/examples/run_glue.py to perform finetuning on a binary classification task (CoLA). I'd like to monitor both the training and evaluation losses to prevent overfitting. Currently the library is at 2.8.0, and I did the install from source. When I run the example with python run_glue.py --model_name_or_path bert-base-uncased --task_name CoLA --do_train --do_eval --data_dir my_dir --max_seq_length 128 --per_gpu_train_batch_size 8 --per_gpu_eval_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir ./outputs --logging_steps 5 In the stdout logs I see lines with one single value for the loss, such as {"learning_rate": 3.3333333333333333e-06, "loss": 0.47537623047828675, "step": 25} By peeking in https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py I see that training and evaluation losses are computed there (looks to me that code was recently refactored). I have thus replaced https://github.com/huggingface/transformers/blob/abb1fa3f374811ea09d0bc3440d820c50735008d/src/transformers/trainer.py#L314 with cr_loss = self._training_step(model, inputs, optimizer) tr_loss += cr_loss and added after line https://github.com/huggingface/transformers/blob/abb1fa3f374811ea09d0bc3440d820c50735008d/src/transformers/trainer.py#L345 logs["training loss"] = cr_loss with this I get: 0502 14:12:18.644119 23632 summary.py:47] Summary name training loss is illegal; using training_loss instead. | 4/10 [00:02<00:04, 1.49it/s] {"learning_rate": 3.3333333333333333e-06, "loss": 0.47537623047828675, "training loss": 0.5451719760894775, "step": 25} Is this OK, or am I doing anything wrong here? What's the best way to monitor in stdout both the averaged training and evaluation loss for a given logging interval during finetuning?
There's likely no change needed in the code if installing a more recent version (I tried 2.9.0 via pip): just fire the finetuning with the additional flag --evaluate_during_training and the output will be OK I0506 12:11:30.021593 34540 trainer.py:551] ***** Running Evaluation ***** I0506 12:11:30.022596 34540 trainer.py:552] Num examples = 140 I0506 12:11:30.023634 34540 trainer.py:553] Batch size = 8 Evaluation: 100%|████████████████████████████| 18/18 [00:19<00:00, 1.10s/it] {"eval_mcc": 0.0, "eval_loss": 0.6600487811697854, "learning_rate": 3.3333333333333333e-06, "loss": 0.50044886469841, "step": 25} Beware that the example scripts change quite frequently, so flags to accomplish this may change names... see also here https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301
https://stackoverflow.com/questions/61551797/
Reverse Image search (for image duplicates) on local computer
I have a bunch of poor quality photos that I extracted from a pdf. Somebody I know has the good quality photos somewhere on her computer (Mac), but it's my understanding that it will be difficult to find them. I would like to loop through each poor quality photo and perform a reverse image search, using each poor quality photo as the query image and this person's computer as the database, to search for the higher quality images and create a copy of each high quality image in one destination folder. Example pseudocode for each image in poorQualityImages: search ./macComputer for a higherQualityImage of image copy higherQualityImage to ./higherQualityImages I need to perform this action once. I am looking for a tool, github repo or library which can perform this functionality, more so than a deep understanding of content based image retrieval. There's a post on reddit where someone was trying to do something similar. imgdupes is a program which seems like it almost achieves this, but I do not want to delete the duplicates; I want to copy the highest quality duplicate to a destination folder. Update Emailed my previous image processing prof and he sent me this: Off the top of my head, nothing out of the box. No guaranteed solution here, but you can narrow the search space. You'd need a little program that outputs the MSE or SSIM similarity index between two images, and then write another program or shell script that scans the hard drive and computes the MSE between each image on the hard drive and each query image, then check the images with the top X percent similarity score. Something like that. Still not maybe guaranteed to find everything you want. And if the low quality images are of different pixel dimensions than the high quality images, you'd have to do some image scaling to get the similarity index. If the poor quality images have different aspect ratios, that's even worse. So I think it's not hard but not trivial either. The degree of difficulty is partly dependent on the nature of the corruption in the low quality images. UPDATE Github project I wrote which achieves what I want
What you are looking for is called image hashing . In this answer you will find a basic explanation of the concept, as well as a go-to github repo for plug-and-play application. Basic concept of Hashing From the repo page: "We have developed a new image hash based on the Marr wavelet that computes a perceptual hash based on edge information with particular emphasis on corners. It has been shown that the human visual system makes special use of certain retinal cells to distinguish corner-like stimuli. It is the belief that this corner information can be used to distinguish digital images that motivates this approach. Basically, the edge information attained from the wavelet is compressed into a fixed length hash of 72 bytes. Binary quantization allows for relatively fast hamming distance computation between hashes. The following scatter plot shows the results on our standard corpus of images. The first plot shows the distances between each image and its attacked counterpart (e.g. the intra distances). The second plot shows the inter distances between altogether different images. While the hash is not designed to handle rotated images, notice how slight rotations still generally fall within a threshold range and thus can usually be matched as identical. However, the real advantage of this hash is for use with our mvp tree indexing structure. Since it is more descriptive than the dct hash (being 72 bytes in length vs. 8 bytes for the dct hash), there are much fewer false matches retrieved for image queries. " Another blogpost for an in-depth read, with an application example. Available Code and Usage A github repo can be found here. There are obviously more to be found. After importing the package you can use it to generate and compare hashes: >>> from PIL import Image >>> import imagehash >>> hash = imagehash.average_hash(Image.open('test.png')) >>> print(hash) d879f8f89b1bbf >>> otherhash = imagehash.average_hash(Image.open('other.bmp')) >>> print(otherhash) ffff3720200ffff >>> print(hash == otherhash) False >>> print(hash - otherhash) 36 The demo script find_similar_images also on the mentioned github, illustrates how to find similar images in a directory.
https://stackoverflow.com/questions/61553923/
Callbacks in Fastai
I am working on a deep learning project in Fastai and wish to use EarlyStoppingCallback with ReduceLROnPlateauCallback in it. I have read the callbacks.fastai docs but am struggling to understand how to implement both, and I couldn't find any relevant example. Any help would be appreciated. learn = cnn_learner(data, models.resnet50, metrics = [accuracy,quadratic_kappa]) learn.fit(50,2e-6)
The way I normally do it is this: first create the Learner object learn = Learner(data, model, loss_func=..., opt_func=..., metrics=...) learn.unfreeze() Then pass any callbacks to your fit_one_cycle call learn.fit_one_cycle(16, max_lr=1e-3, div_factor=100, pct_start=0.0, callbacks=[SaveModelCallback(learn, name=f'model', monitor='kappa_score')]) For the two callbacks you asked about specifically, see the sketch below.
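A sketch for fastai v1 combining EarlyStoppingCallback and ReduceLROnPlateauCallback via callback_fns, so the callbacks are re-created on each fit; data and quadratic_kappa are taken from the question, and the min_delta/patience/factor values are arbitrary choices to tune:

from functools import partial
from fastai.vision import *
from fastai.callbacks import EarlyStoppingCallback, ReduceLROnPlateauCallback

learn = cnn_learner(data, models.resnet50, metrics=[accuracy, quadratic_kappa],
                    callback_fns=[partial(EarlyStoppingCallback, monitor='valid_loss', min_delta=0.01, patience=3),
                                  partial(ReduceLROnPlateauCallback, monitor='valid_loss', factor=0.2, patience=2)])
learn.fit(50, 2e-6)  # training stops early once valid_loss stops improving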
https://stackoverflow.com/questions/61560970/
How to use two separate dataloaders together?
I have two tensors of shape (16384,3,224,224) each. I need to multiply these two together. Obviously these two tensors are too big to fit in GPU RAM. So I want to know how I should go about this: divide them into smaller batches using slicing, or use two separate dataloaders? (I am confused about how to use two different dataloaders together.) What would be the best way to do this?
I'm still not sure I totally understand the problem, but under the assumption that you have two big tensors t1 and t2 of shape [16384, 3, 224, 224] already loaded in RAM and want to perform element-wise multiplication then the easiest approach is result = t1 * t2 Alternatively you could break these into smaller tensors and multiply them that way. There are lots of ways to do this. One very PyTorch like way is to use a TensorDataset and operate on corresponding mini-batches of both tensors. If all you want to do is element-wise multiplication then the overhead of transferring tensors to and from the GPU is likely more expensive than the actual time saved during the computation. If you want to try it you can use something like this import torch from torch.utils import data batch_size = 100 device = 'cuda:0' dataset = data.TensorDataset(t1, t2) dataloader = data.DataLoader(dataset, num_workers=1, batch_size=batch_size) result = [] for d1, d2 in dataloader: d1, d2 = d1.to(device=device), d2.to(device=device) d12 = d1 * d2 result.append(d12.cpu()) result = torch.cat(result, dim=0) Or you could just do some slicing, which will probably be faster and more memory efficient since it avoids data copying on the CPU side. import torch batch_size = 100 device = 'cuda:0' index = 0 result = [] while index < t1.shape[0]: d1 = t1[index:index + batch_size].to(device=device) d2 = t2[index:index + batch_size].to(device=device) d12 = d1 * d2 result.append(d12.cpu()) index += batch_size result = torch.cat(result, dim=0) Note that for both of these examples most of the time is spent copying data back to the CPU and concatenating the final results. Ideally, you would just do whatever you need to do with the d12 batch within the loop and avoid ever sending the final multiplied result back to the CPU.
https://stackoverflow.com/questions/61562138/
Issue training pytorch model on gpu
I am trying to implement the discriminator of a basic MNIST GAN in PyTorch. When I run training on the CPU, it works without any issue giving the desired output. However, when I run it on the GPU it shows a runtime error. I am pasting the code for my model as well as my training below along with the modifications I made to try and run training on the GPU. dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") def preprocess(x): return x.view(-1,1,28,28).to(dev) discriminator = nn.Sequential( Lambda(preprocess), nn.Conv2d(1,64,3,stride=2,padding=1), nn.LeakyReLU(negative_slope=0.2), nn.Dropout(0.4), nn.Conv2d(64,64,3,stride=2,padding=1), nn.LeakyReLU(negative_slope=0.2), nn.Dropout(0.4), Lambda(lambda x:x.view(x.size(0),-1)), nn.Linear(3136,1), nn.Sigmoid() ) loss = nn.BCELoss() opt = optim.Adam(discriminator.parameters(),lr = 0.002) discriminator.to(dev) def train_discriminator(model, dataset,opt, n_iter=100, n_batch=256): half_batch = int(n_batch / 2) for i in range(n_iter): X_real, y_real = generate_real_samples(dataset, half_batch) error = loss(model(X_real),y_real) error.backward() X_fake, y_fake = generate_fake_samples(half_batch) error = loss(model(X_fake),y_fake) error.backward() opt.step() Now on running train_discriminator(discriminator,dataset,opt) I get the following error which I cannot make sense of. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-15-ee20eb2a8e55> in <module> ----> 1 train_discriminator(discriminator,dataset,opt) <ipython-input-13-9e6f9b4874c8> in train_discriminator(model, dataset, opt, n_iter, n_batch) 3 for i in range(n_iter): 4 X_real, y_real = generate_real_samples(dataset, half_batch) ----> 5 error = loss(model(X_real),y_real) 6 error.backward() 7 X_fake, y_fake = generate_fake_samples(half_batch) ~/environments/workspace/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/environments/workspace/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 496 497 def forward(self, input, target): --> 498 return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) 499 500 ~/environments/workspace/lib/python3.7/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2075 2076 return torch._C._nn.binary_cross_entropy( -> 2077 input, target, weight, reduction_enum) 2078 2079 RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'target' in call to _thnn_binary_cross_entropy_forward Would really appreciate it if anyone could suggest any changes that are needed to be made that will solve this issue.
According to the error message, the ground truths are not on the GPU: RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'target' in call to _thnn_binary_cross_entropy_forward Your Lambda(preprocess) layer moves the model inputs to the device, but y_real and y_fake are never moved, so BCELoss receives a CUDA prediction and a CPU target. A minimal fix is sketched below.
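A sketch of the fix inside train_discriminator; since generate_real_samples and generate_fake_samples are not shown, I'm assuming they return CPU tensors:

X_real, y_real = generate_real_samples(dataset, half_batch)
X_real, y_real = X_real.to(dev), y_real.to(dev)  # move inputs AND targets to the model's device
error = loss(model(X_real), y_real)
error.backward()

X_fake, y_fake = generate_fake_samples(half_batch)
X_fake, y_fake = X_fake.to(dev), y_fake.to(dev)
error = loss(model(X_fake), y_fake)
error.backward()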
https://stackoverflow.com/questions/61565293/
Loss on dev set is always increasing unlike training set loss
I designed a network for a text classification problem. To do this, I'm using huggingface transformet's BERT model with a linear layer above that for fine-tuning. My problem is that the loss on the training set is decreasing which is fine, but when it comes to do the evaluation after each epoch on the development set, the loss is increasing with epochs. I'm posting my code to investigate if there's something wrong with it. for epoch in range(1, args.epochs + 1): total_train_loss = 0 trainer.set_train() for step, batch in enumerate(train_dataloader): loss = trainer.step(batch) total_train_loss += loss avg_train_loss = total_train_loss / len(train_dataloader) logger.info(('Training loss for epoch %d/%d: %4.2f') % (epoch, args.epochs, avg_train_loss)) print("\n-------------------------------") logger.info('Start validation ...') trainer.set_eval() y_hat = list() y = list() total_dev_loss = 0 for step, batch_val in enumerate(dev_dataloader): true_labels_ids, predicted_labels_ids, loss = trainer.validate(batch_val) total_dev_loss += loss y.extend(true_labels_ids) y_hat.extend(predicted_labels_ids) avg_dev_loss = total_dev_loss / len(dev_dataloader) print(("\n-Total dev loss: %4.2f on epoch %d/%d\n") % (avg_dev_loss, epoch, args.epochs)) print("Training terminated!") Following is the trainer file, which I use for doing a forward pass on a given batch and then backpropagate accordingly. class Trainer(object): def __init__(self, args, model, device, data_points, is_test=False, train_stats=None): self.args = args self.model = model self.device = device self.loss = nn.CrossEntropyLoss(reduction='none') if is_test: # Should load the model from checkpoint self.model.eval() self.model.load_state_dict(torch.load(args.saved_model)) logger.info('Loaded saved model from %s' % args.saved_model) else: self.model.train() self.optim = AdamW(model.parameters(), lr=2e-5, eps=1e-8) total_steps = data_points * self.args.epochs self.scheduler = get_linear_schedule_with_warmup(self.optim, num_warmup_steps=0, num_training_steps=total_steps) def step(self, batch): batch = tuple(t.to(self.device) for t in batch) batch_input_ids, batch_input_masks, batch_labels = batch self.model.zero_grad() outputs = self.model(batch_input_ids, attention_mask=batch_input_masks, labels=batch_labels) loss = self.loss(outputs, batch_labels) loss = loss.sum() (loss / loss.numel()).backward() torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0) self.optim.step() self.scheduler.step() return loss def validate(self, batch): batch = tuple(t.to(self.device) for t in batch) batch_input_ids, batch_input_masks, batch_labels = batch with torch.no_grad(): model_output = self.model(batch_input_ids, attention_mask=batch_input_masks, labels=batch_labels) predicted_label_ids = self._predict(model_output) label_ids = batch_labels.to('cpu').numpy() loss = self.loss(model_output, batch_labels) loss = loss.sum() return label_ids, predicted_label_ids, loss def _predict(self, logits): return np.argmax(logits.to('cpu').numpy(), axis=1) Finally, the following is my model (i.e., Classifier) class: import torch.nn as nn from transformers import BertModel class Classifier(nn.Module): def __init__(self, args, is_eval=False): super(Classifier, self).__init__() self.bert_model = BertModel.from_pretrained( args.init_checkpoint, output_attentions=False, output_hidden_states=True, ) self.is_eval_mode = is_eval self.linear = nn.Linear(768, 2) # binary classification def switch_state(self): self.is_eval_mode = not self.is_eval_mode def forward(self, input_ids, 
attention_mask=None, labels=None): bert_outputs = self.bert_model(input_ids, token_type_ids=None, attention_mask=attention_mask) # Feed the pooled BERT output to the linear layer model_output = self.linear(bert_outputs[1]) return model_output (A plot of the losses throughout the epochs was attached here; it shows the training loss decreasing while the dev loss increases.)
When I've used BERT for text classification my model has generally behaved as you describe. In part this is expected, because pre-trained models tend to require few epochs to fine-tune; in fact, if you check BERT's paper, the number of epochs recommended for fine-tuning is between 2 and 4. On the other hand, I've usually found the optimum at just 1 or 2 epochs, which coincides with your case as well. My guess: there is a trade-off when fine-tuning pre-trained models between fitting your downstream task and forgetting the weights learned at pre-training. Depending on the data you have, the equilibrium point may happen sooner or later, and overfitting starts after that. But this paragraph is speculation based on my experience.
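In practice that suggests early stopping on the dev loss: keep the checkpoint with the lowest dev loss and stop once it stops improving. A minimal sketch reusing the variable names from your loop (the patience of 2 is an arbitrary choice):

best_dev_loss, patience, bad_epochs = float('inf'), 2, 0
for epoch in range(1, args.epochs + 1):
    # ... training and validation loops from the question ...
    if avg_dev_loss < best_dev_loss:
        best_dev_loss, bad_epochs = avg_dev_loss, 0
        torch.save(trainer.model.state_dict(), args.saved_model)  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # no dev improvement for `patience` epochs
            break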
https://stackoverflow.com/questions/61566646/
example of doing simple prediction with pytorch-lightning
I have an existing model where I load some pre-trained weights and then do prediction (one image at a time) in pytorch. I am trying to basically convert it to a pytorch lightning module and am confused about a few things. So currently, my __init__ method for the model looks like this: self._load_config_file(cfg_file) # just creates the pytorch network self.create_network() self.load_weights(weights_file) self.cuda(device=0) # assumes GPU and uses one. This is probably suboptimal self.eval() # prediction mode What I can gather from the lightning docs, I can pretty much do the same, except not to do the cuda() call. So something like: self.create_network() self.load_weights(weights_file) self.freeze() # prediction mode So, my first question is whether this is the correct way to use lightning? How would lightning know if it needs to use the GPU? I am guessing this needs to be specified somewhere. Now, for the prediction, I have the following setup: def infer(frame): img = transform(frame) # apply some transformation to the input img = torch.from_numpy(img).float().unsqueeze(0).cuda(device=0) with torch.no_grad(): output = self.__call__(Variable(img)).data.cpu().numpy() return output This is the bit that has me confused. Which functions do I need to override to make a lightning compatible prediction? Also, at the moment, the input comes as a numpy array. Is that something that would be possible from the lightning module or do things always have to use some sort of a dataloader? At some point, I want to extend this model implementation to do training as well, so want to make sure I do it right but while most examples focus on training models, a simple example of just doing prediction at production time on a single image/data point might be useful. I am using 0.7.5 with pytorch 1.4.0 on GPU with cuda 10.1
LightningModule is a subclass of torch.nn.Module, so the same model class will work for both inference and training. For that reason, you should probably call the cuda() and eval() methods outside of __init__. Since it's just a nn.Module under the hood, once you've loaded your weights you don't need to override any methods to perform inference; simply call the model instance. Here's a toy example you can use: import torchvision.models as models from pytorch_lightning.core import LightningModule class MyModel(LightningModule): def __init__(self): super().__init__() self.resnet = models.resnet18(pretrained=True, progress=False) def forward(self, x): return self.resnet(x) model = MyModel().eval().cuda(device=0) And then to actually run inference you don't need a method, just do something like: for frame in video: img = transform(frame) img = torch.from_numpy(img).float().unsqueeze(0).cuda(0) output = model(img).data.cpu().numpy() # Do something with the output The main benefit of PyTorch Lightning is that you can also use the same class for training by implementing training_step(), configure_optimizers() and train_dataloader() on that class. You can find a simple example of that in the PyTorch Lightning docs.
https://stackoverflow.com/questions/61566919/
FastAI throwing a runtime error when using custom train & test sets
I'm working on the Food-101 dataset, and as you may know, the dataset comes with both train and test parts. Because the dataset could no longer be found on the ETH Zurich link, I had to divide it into partitions < 1GB each, clone them into Colab, and reassemble. It's very tedious work but I got it working. I will omit the Python code, but the file structure looks like this:
Food-101
  images
    train
      ...75750 train images
    test
      ...25250 test images
  meta
    classes.txt
    labes.txt
    test.json
    test.txt
    train.json
    train.txt
  README.txt
  license_agreement.txt
The following code is what's throwing the runtime error: train_image_path = Path('images/train/') test_image_path = Path('images/test/') path = Path('../Food-101') food_names = get_image_files(train_image_path) file_parse = r'/([^/]+)_\d+\.(png|jpg|jpeg)' data = ImageDataBunch.from_folder(train_image_path, test_image_path, valid_pct=0.2, ds_tfms=get_transforms(), size=224) data.normalize(imagenet_stats) My guess is that ImageDataBunch.from_folder() is what's throwing the error, but I don't know why it's getting caught up on the data types, as (I don't think) I'm supplying it with any data that has a specific type. /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change " (the same UserWarning is repeated several more times) You can deactivate this warning by passing `no_check=True`. /usr/local/lib/python3.6/dist-packages/fastai/basic_data.py:262: UserWarning: There seems to be something wrong with your dataset, for example, in the first batch can't access these elements in self.train_ds: 9600,37233,16116,38249,1826... warn(warn_msg) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/IPython/core/formatters.py in __call__(self, obj) 697 type_pprinters=self.type_printers, 698 deferred_pprinters=self.deferred_printers) --> 699 printer.pretty(obj) 700 printer.flush() 701 return stream.getvalue() 11 frames /usr/local/lib/python3.6/dist-packages/fastai/vision/image.py in affine(self, func, *args, **kwargs) 181 "Equivalent to `image.affine_mat = image.affine_mat @ func()`." 182 m = tensor(func(*args, **kwargs)).to(self.device) --> 183 self.affine_mat = self.affine_mat @ m 184 return self 185 RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out
I also faced the same error; passing no_check=True in the ImageDataBunch arguments worked for me. Also try running this before you create the ImageDataBunch, to silence the interpolate warnings: import warnings warnings.filterwarnings("ignore", category=UserWarning, module="torch.nn.functional") Finally, make sure you downgrade your torch version to 1.0.0.
https://stackoverflow.com/questions/61567150/
How is log_softmax() implemented to compute its value (and gradient) with better speed and numerical stability?
Both MXNet and PyTorch provide special implementation for computing log(softmax()), which is faster and numerically more stable. However, I cannot find the actual Python implementation for this function, log_softmax(), in either package. Can anyone explain how this is implemented, or better, point me to the relevant source code?
The numerical error: >>> x = np.array([1, -10, 1000]) >>> np.exp(x) / np.exp(x).sum() RuntimeWarning: overflow encountered in exp RuntimeWarning: invalid value encountered in true_divide Out[4]: array([ 0., 0., nan]) There are two methods to avoid the numerical error while computing the softmax: Exp Normalization: def exp_normalize(x): b = x.max() y = np.exp(x - b) return y / y.sum() >>> exp_normalize(x) array([0., 0., 1.]) Log Sum Exp: def log_softmax(x): c = x.max() logsumexp = np.log(np.exp(x - c).sum()) return x - c - logsumexp Please note that a reasonable choice for both b and c in the formulas above is max(x). With this choice, overflow due to exp is impossible: the largest argument passed to exp after shifting is 0.
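As a sanity check, the stable NumPy log_softmax above matches PyTorch's built-in torch.nn.functional.log_softmax on the same troublesome input:

import numpy as np
import torch
import torch.nn.functional as F

x = np.array([1., -10., 1000.])
print(log_softmax(x))                             # stable NumPy version from above
print(F.log_softmax(torch.from_numpy(x), dim=0))  # PyTorch built-in, same values, no overflow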
https://stackoverflow.com/questions/61567597/
How to use pytorch's grid_sample()?
I am having some trouble getting torch.nn.functional working as I would like, illustrated by the below example: import torch import torch.nn.functional as F import numpy as np sz = 5 input_arr = torch.from_numpy(np.arange(sz*sz).reshape(1,1,sz,sz)).float() indices = torch.from_numpy(np.array([-1,-1, -0.5,-0.5, 0,0, 0.5,0.5, 1,1]).reshape(1, 1, 5, 2)).float() out = F.grid_sample(input_arr, indices) print(input_arr) print(out) Since the indices are just the diagonals of the input, I'd expect to get something like tensor([[[[0., 6., 12., 18., 24.]]]]) (since (-1,-1) should give the top left and (1,1) should give the bottom right, according to the docs). However, I am getting this as output to the console: tensor([[[[ 0., 1., 2., 3., 4.], [ 5., 6., 7., 8., 9.], [10., 11., 12., 13., 14.], [15., 16., 17., 18., 19.], [20., 21., 22., 23., 24.]]]]) tensor([[[[ 0.0000, 4.5000, 12.0000, 19.5000, 6.0000]]]]) What am I doing wrong? Many thanks!
Have you tried passing the argument align_corners = True? If you read the documentation it states that : WARNING When align_corners = True, the grid positions depend on the pixel size relative to the input image size, and so the locations sampled by grid_sample() will differ for the same input given at different resolutions (that is, after being upsampled or downsampled). The default behavior up to version 1.2.0 was align_corners = True. Since then, the default behavior has been changed to align_corners = False, in order to bring it in line with the default for interpolate(). And to double check, I ran the code with and without align_corners = True, to get both the correct output you required and the incorrect output you described. # align_corners = False out = F.grid_sample(input_arr, indices, align_corners = False) print(out) # tensor([[[[ 0.0000, 4.5000, 12.0000, 19.5000, 6.0000]]]]) And # align_corners = True out = F.grid_sample(input_arr, indices, align_corners = True) print(out) # tensor([[[[ 0., 6., 12., 18., 24.]]]])
https://stackoverflow.com/questions/61570727/
Vector-Tensor element-wise multiplication in Pytorch
I am trying to extract the luminance from a tensor representing an image in PyTorch, so I need to multiply element-wise a vector of size 3 (for the three RGB value weights) by a 3xNxN tensor representing the image, such that I obtain an NxN matrix in the end where the three channels of the tensor have been summed with the weights given in the vector. I guess there exist PyTorch operations that would help me do that without loops, but I haven't found them.
You have to reshape your 3-dimensional RGB vector to be broadcastable to 3xNxN, like this: rgb = rgb.reshape(-1, 1, 1) So it will have shape (3, 1, 1). Now you can multiply it with the original image and sum along the first dimension: result = torch.sum(rgb * image, dim=0) A runnable example follows below.
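Putting it together as a runnable sketch; the Rec. 601 luma weights below are one common choice for luminance, but substitute whatever weights you need:

import torch

image = torch.rand(3, 224, 224)            # a 3xNxN RGB image
rgb = torch.tensor([0.299, 0.587, 0.114])  # Rec. 601 luma weights (assumed, pick your own)
luminance = torch.sum(rgb.reshape(-1, 1, 1) * image, dim=0)
print(luminance.shape)  # torch.Size([224, 224])

# an equivalent one-liner with einsum
luminance = torch.einsum('c,chw->hw', rgb, image)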
https://stackoverflow.com/questions/61571873/
MSELoss when mask is used
I'm trying to calculate MSELoss when mask is used. Suppose that I have tensor with batch_size of 2: [2, 33, 1] as my target, and another input tensor with the same shape. Since sequence length might differ for each instance, I have also a binary mask indicating the existence of each element in the input sequence. So here is what I'm doing: mse_loss = nn.MSELoss(reduction='none') loss = mse_loss(input, target) loss = (loss * mask.float()).sum() # gives \sigma_euclidean over unmasked elements mse_loss_val = loss / loss.numel() # now doing backpropagation mse_loss_val.backward() Is loss / loss.numel() a good practice? I'm skeptical, as I have to use reduction='none' and when calculating final loss value, I think I should calculate the loss only considering those loss elements that are nonzero (i.e., unmasked), however, I'm taking the average over all tensor elements with torch.numel(). I'm actually trying to take 1/n factor of MSELoss into account. Any thoughts?
There is one issue in the code: you divide by loss.numel(), the total number of elements, instead of the number of unmasked elements. I think the correct code should be: mse_loss = nn.MSELoss(reduction='none') loss = mse_loss(input, target) loss = (loss * mask.float()).sum() # sum of squared errors over unmasked elements only non_zero_elements = mask.sum() mse_loss_val = loss / non_zero_elements # now doing backpropagation mse_loss_val.backward() This is only slightly worse than using .mean() if you are worried about numerical errors. A self-contained example follows below.
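A runnable sketch of the corrected computation with the shapes from the question; the clamp is a small guard I'm adding so an all-zero mask cannot cause a division by zero:

import torch
import torch.nn as nn

input = torch.randn(2, 33, 1, requires_grad=True)
target = torch.randn(2, 33, 1)
mask = (torch.rand(2, 33, 1) > 0.3).float()  # 1 = real element, 0 = padding

loss = nn.MSELoss(reduction='none')(input, target)
mse_loss_val = (loss * mask).sum() / mask.sum().clamp(min=1)  # mean over unmasked elements only
mse_loss_val.backward()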
https://stackoverflow.com/questions/61580037/
Broadcast error HDF5: Can't broadcast (3, 2048, 1, 1) -> (4, 2048, 1, 1)
I have received the following error: TypeError: Can't broadcast (3, 2048, 1, 1) -> (4, 2048, 1, 1) I am extracting features and placing them into an hdf5 dataset like this: array_40 = hdf5_file.create_dataset( f'{phase}_40x_arrays', shape, maxshape=(None, args.batch_size, 2048, 1, 1)) In (None, args.batch_size, 2048, 1, 1), None is specified due to the unknown nature of the size of the dataset. args.batch_size is 4 in this case, and 2048, 1 and 1 are the number of features extracted and their spatial dimensions. shape is defined as: shape = (dataset_length, args.batch_size, 2048, 1, 1) However, I'm not sure what I can do with args.batch_size, which in this case is 4. I can't leave it as None, as that raises an illegal-value error: ValueError: Illegal value in chunk tuple EDIT: Yes, you're absolutely right. I'm trying to incrementally write to an hdf5 dataset. I've shown more of the code below. I'm extracting features and storing them incrementally into an hdf5 dataset. Despite a batch size of 4, it would be ideal to save each item from the batch incrementally, as its own instance/row. shape = (dataset_length, 2048, 1, 1) all_shape = (dataset_length, 6144, 1, 1) labels_shape = (dataset_length) batch_shape = (1,) path = args.HDF5_dataset + f'{phase}.hdf5' #hdf5_file = h5py.File(path, mode='w') with h5py.File(path, mode='a') as hdf5_file: array_40 = hdf5_file.create_dataset( f'{phase}_40x_arrays', shape, maxshape=(None, 2048, 1, 1) ) array_labels = hdf5_file.create_dataset( f'{phase}_labels', labels_shape, maxshape=(None), dtype=string_type ) array_batch_idx = hdf5_file.create_dataset( f'{phase}_batch_idx', data=np.array([-1, ]) ) hdf5_file.close() # either new or checkpointed file exists # load file and create references to existing h5 datasets with h5py.File(path, mode='r+') as hdf5_file: array_40 = hdf5_file[f'{phase}_40x_arrays'] array_labels = hdf5_file[f'{phase}_labels'] array_batch_idx = hdf5_file[f'{phase}_batch_idx'] batch_idx = int(array_batch_idx[0]+1) print("Batch ID is restarting from {}".format(batch_idx)) dataloaders_dict = torch.utils.data.DataLoader(datasets_dict, batch_size=args.batch_size, sampler=SequentialSampler2( datasets_dict, batch_idx, args.batch_size), drop_last=True, num_workers=args.num_workers, shuffle=False) # make sure shuffling is false for the sampler to work and in case you restart for i, (inputs40x, paths40x, labels) in enumerate(dataloaders_dict): print(f'Batch ID: {batch_idx}') inputs40x = inputs40x.to(device) labels = labels.to(device) paths = paths40x x40 = resnet(inputs40x) # torch.Size([1, 2048, 1, 1]) batch, feats, 1l, 1l array_40[...] = x40.cpu() array_labels[batch_idx, ...] = labels[:].cpu() array_batch_idx[:,...] = batch_idx batch_idx +=1 hdf5_file.flush()
I think you are confused about the use of the maxshape=() parameter. It sets the maximum allocated dataset size in each dimension. The first dataset dimension is set to dataset_length at creation, with maxshape[0]=None, which allows unlimited growth in size. The size of the second dataset dimension at creation is args.batch_size. You specified the same size for maxshape, so you can't increase this dimension. I'm a little confused by your example. It sounds like you are trying to incrementally write data to the dataset in rows/instances of args.batch_size. Say your example has 51 rows/instances of data, and you want to write in batches of args.batch_size=4. With 51 rows, you can write the first 48 rows (0-3, 4-7...44-47), then you are stuck with the 3 remaining rows. Can't you address this by adding a counter (call it nrows_left) and changing the batch size argument to min(args.batch_size, nrows_left)? Seems like the easiest solution to me. Without more info, I can't write a complete example. I will attempt to show what I mean below: # args.batch_size = 4 shape = (dataset_length, 2048, 1, 1) array_40 = hdf5_file.create_dataset( f'{phase}_40x_arrays', shape, maxshape=(None, 2048, 1, 1)) nrows_left = dataset_length rcnt = 0 loopcnt = dataset_length // args.batch_size # integer division for use with range() if dataset_length % args.batch_size != 0: loopcnt += 1 for loop in range(loopcnt): nload = min(nrows_left, args.batch_size) # last pass loads only the remaining rows array_40[rcnt:rcnt+nload] = img_data[rcnt:rcnt+nload] rcnt += nload nrows_left -= nload
https://stackoverflow.com/questions/61580468/
What is the point of multinomial vs argmax evaluation of accuracy?
What is the purpose of evaluating prediction accuracy using multinomial instead of the straight up argmax? probs_Y = torch.softmax(model(test_batch, feature_1, feature_2), 1) sampled_Y = torch.multinomial(probs_Y, 1) argmax_Y = torch.max(probs_Y, 1)[1].view(-1, 1) print('Accuracy of sampled predictions on the test set: {:.4f}%'.format( (test_Y == sampled_Y.float()).sum().item() / len(test_Y) * 100)) print('Accuracy of argmax predictions on the test set: {:4f}%'.format( (test_Y == argmax_Y.float()).sum().item() / len(test_Y) * 100)) Result: Accuracy of sampled predictions on the test set: 88.8889% Accuracy of argmax predictions on the test set: 97.777778% Reading the pytorch docs it looks like multinomial is sampling according to some distribution - just not sure how that is relevant in assessing accuracy. I've noticed that the multinomial is non-deterministic - meaning that it is outputting a different accuracy, presumably by including different samples, each time it runs.
Here, with multinomial, we're sampling the classes from the multinomial distribution. From the Wikipedia example: Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample? Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large. If we look closely, we're doing the same, sampling without replacement. torch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None) ref: https://pytorch.org/docs/stable/torch.html?highlight=torch%20multinomial#torch.multinomial A multinomial experiment is a statistical experiment that consists of n repeated trials. Each trial has a discrete number of possible outcomes. On any given trial, the probability that a particular outcome will occur is constant (that was the initial assumption). So, the voting count can be simulated with the probabilities. Classification from soft outputs is a similar voting process. If we repeat the experiment enough times, the empirical counts get as close as we like to the actual probabilities. For example, let's start with initial probs = [0.1, 0.1, 0.3, 0.5]. We can repeat the experiment n times and count how many times each index was selected by torch.multinomial. import torch cnt = [0, 0, 0, 0] for _ in range(5000): sampled_Y = torch.multinomial(torch.tensor([0.1, 0.1, 0.3, 0.5]), 1) cnt[sampled_Y[0]] += 1 print(cnt) After 50 iterations: [6, 3, 14, 27] After 5000 iterations: [480, 486, 1525, 2509] After 50000 iterations: [4988, 4967, 15062, 24983] But this is usually avoided in model evaluation, since it's not deterministic and requires a random generator to simulate the experiment. Sampling like this is particularly useful for Monte Carlo simulations and prior-posterior calculations. I have seen a graph classification example where such evaluation was used, but I think it's not common (or even useful) in most classification tasks in machine learning.
https://stackoverflow.com/questions/61581774/
The loss effects in multitask learning framework
I have designed a multi-task network where the first layers are shared between two output layers. Through investigating multi-task learning principles, I got to know that there should be a weight scalar parameter such as alpha that dampens the two losses outputted from two output layers. My question is about this parameter itself. Does it have effect on the model's final performance? probably yes. This is the part of my code snippet for computation of losses: ... mtl_loss = (alpha) * loss_1 + (1-alpha) * loss_2 mtl_loss.backward() ... Above, loss_1 is MSELoss, and loss_2 is CrossEntropyLoss. As such, picking alpha=0.9, I'm getting the following loss values during training steps: [2020-05-03 04:46:55,398 INFO] Step 50/150000; loss_1: 0.90 + loss_2: 1.48 = mtl_loss: 2.43 (RMSE: 2.03, F1score: 0.07); lr: 0.0000001; 29 docs/s; 28 sec [2020-05-03 04:47:23,238 INFO] Step 100/150000; loss_1: 0.40 + loss_2: 1.27 = mtl_loss: 1.72 (RMSE: 1.38, F1score: 0.07); lr: 0.0000002; 29 docs/s; 56 sec [2020-05-03 04:47:51,117 INFO] Step 150/150000; loss_1: 0.12 + loss_2: 1.19 = mtl_loss: 1.37 (RMSE: 0.81, F1score: 0.08); lr: 0.0000003; 29 docs/s; 84 sec [2020-05-03 04:48:19,034 INFO] Step 200/150000; loss_1: 0.04 + loss_2: 1.10 = mtl_loss: 1.20 (RMSE: 0.55, F1score: 0.07); lr: 0.0000004; 29 docs/s; 112 sec [2020-05-03 04:48:46,927 INFO] Step 250/150000; loss_1: 0.02 + loss_2: 0.96 = mtl_loss: 1.03 (RMSE: 0.46, F1score: 0.08); lr: 0.0000005; 29 docs/s; 140 sec [2020-05-03 04:49:14,851 INFO] Step 300/150000; loss_1: 0.02 + loss_2: 0.99 = mtl_loss: 1.05 (RMSE: 0.43, F1score: 0.08); lr: 0.0000006; 29 docs/s; 167 sec [2020-05-03 04:49:42,793 INFO] Step 350/150000; loss_1: 0.02 + loss_2: 0.97 = mtl_loss: 1.04 (RMSE: 0.43, F1score: 0.08); lr: 0.0000007; 29 docs/s; 195 sec [2020-05-03 04:50:10,821 INFO] Step 400/150000; loss_1: 0.01 + loss_2: 0.94 = mtl_loss: 1.00 (RMSE: 0.41, F1score: 0.08); lr: 0.0000008; 29 docs/s; 223 sec [2020-05-03 04:50:38,943 INFO] Step 450/150000; loss_1: 0.01 + loss_2: 0.86 = mtl_loss: 0.92 (RMSE: 0.40, F1score: 0.08); lr: 0.0000009; 29 docs/s; 252 sec As training loss shows, it seems that my first network that uses MSELoss converges super fast, while the second network has not been converged yet. RMSE, and F1score are two metrics that I'm using to track the progress of first, and second network, respectively. I know that picking the optimal alpha is somewhat experimental, but are there hints to make the process of picking it easier? Specifically, I want the networks being trained in line with each other, not like above that the first network converges super duper fast. Can alpha parameter help controlling this?
With that alpha, loss_1 contributes more to the result, and since backpropagation updates the weights in proportion to the error, the first task improves faster. Try a more balanced alpha to even out the performance on both tasks. You can also try changing alpha during training; a simple schedule is sketched below.
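For example, one simple way to change alpha during training is a linear schedule that gradually shifts weight toward the slower task; this is just a heuristic sketch using the step count from your logs, not a standard recipe:

def get_alpha(step, total_steps, start=0.9, end=0.5):
    # linearly anneal alpha from `start` to `end` over training
    t = min(step / float(total_steps), 1.0)
    return start + t * (end - start)

# inside the training loop
alpha = get_alpha(step, 150000)
mtl_loss = alpha * loss_1 + (1 - alpha) * loss_2
mtl_loss.backward()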
https://stackoverflow.com/questions/61581839/
How to make pip run a specific command from requirements file?
I would like to add lines for torch and torchvision on my requirements.txt file, to allow for easy clone, and I will be moving from computer to computer and to cloud in the near future. I want an easy pip install -r requirements.txt and be done with it for my project. > pip freeze > requirements.txt gives something like ... torch==1.5.0 torchvision==0.6.0 ... However, pip install -r requirements.txt (which is in fact pip install torch) doesn't work, and instead, as the official torch site, clearly says the command should be: pip install torch===1.5.0 torchvision===0.6.0 -f https://download.pytorch.org/whl/torch_stable.html How do I make the requirements file reflect this? My desktop is Windows 10. Bonus question: My cloud is Linux based. How do I make the requirements file fit both desktop and cloud?
You can use the same options in your requirements.txt file, e.g. torch===1.5.0 torchvision===0.6.0 -f https://download.pytorch.org/whl/torch_stable.html Then simply run pip install -r requirements.txt
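For the bonus question: requirements files support PEP 508 environment markers, so a single file can select different lines per platform. A sketch; the exact pins and wheel variants you need (CPU vs a specific CUDA build) depend on your setup, so treat these entries as placeholders:

# requirements.txt
-f https://download.pytorch.org/whl/torch_stable.html
torch===1.5.0; sys_platform == "win32"
torchvision===0.6.0; sys_platform == "win32"
torch==1.5.0; sys_platform == "linux"
torchvision==0.6.0; sys_platform == "linux"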
https://stackoverflow.com/questions/61582029/
In pytorch, is there a built-in method to extract rows with given indexes?
Suppose I have a torch tensor import torch a = torch.tensor([[1,2,3], [4,5,6], [7,8,9]]) and a list b = [0,2] Is there a built-in method to extract the rows 0 and 2 and put them in a new tensor: tensor([[1,2,3], [7,8,9]]) In particular, is there a function that looks like this: extract_rows(a,b) -> c where c contains the desired rows? Sure, this can be done with a for loop, but a built-in method is in general faster. Note that the example is only an example; there could be dozens of indexes in the list, and hundreds of rows in the tensor.
Have a look at the built-in torch.index_select() method; it does exactly this for an arbitrary list of indexes. Alternatively, you can do it with slicing, although note that slicing only works in this particular case because rows 0 and 2 happen to be evenly spaced (see the general sketch below): tensor = [[1,2,3], [4,5,6], [7,8,9]] new_tensor = tensor[0::2] print(new_tensor) Output: [[1, 2, 3], [7, 8, 9]]
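A sketch of both general approaches, which work for any list of row indexes:

import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = [0, 2]
idx = torch.tensor(b)

c = torch.index_select(a, 0, idx)  # built-in method: select rows along dim 0
c = a[idx]                         # equivalent "fancy indexing" shorthand
# tensor([[1, 2, 3],
#         [7, 8, 9]])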
https://stackoverflow.com/questions/61584886/
Derived Class of Pytorch nn.Module Cannot be Loaded by Module Import in Python
Using Python 3.6 with Pytorch 1.3.1. I have noticed that some saved nn.Modules cannot be loaded when the whole module is being imported into another module. To give an example, here is the template of a minimum working example. #!/usr/bin/env python3 #encoding:utf-8 # file 'dnn_predict.py' from torch import nn class NN(nn.Module):##NN network # Initialisation and other class methods networks=[torch.load(f=os.path.join(resource_directory, 'nn-classify-cpu_{fold}.pkl'.format(fold=fold))) for fold in range(5)] ... if __name__=='__main__': # Some testing snippets pass The whole file works just fine when I run it in the shell directly. However, when I want to use the class and load the neural network in another file using this code, it fails. #!/usr/bin/env python3 #encoding:utf-8 from dnn_predict import * The error reads AttributeError: Can't get attribute 'NN' on <module '__main__'> Does loading of saved variables or importing modules happen differently in Pytorch than other common Python libraries? Some help or pointer to the root cause will be really appreciated.
When you save a model with torch.save(model, PATH) the whole object gets serialised with pickle, which does not save the class itself, but a path to the file containing the class, hence when loading the model the exact same directory and file structure is required to find the correct class. When running a Python script, the module of that file is __main__, therefore if you want to load that module, your NN class must be defined in the script you're running. That is very inflexible, so the recommended approach is to not save the entire model, but instead just save the state dictionary, which only saves the parameters of the model. # Save the state dictionary of the model torch.save(model.state_dict(), PATH) Afterwards, the state dictionary can be loaded and applied to your model. from dnn_predict import NN # Create the model (will have randomly initialised parameters) model = NN() # Load the previously saved state dictionary state_dict = torch.load(PATH) # Apply the state dictionary to the model model.load_state_dict(state_dict) More details on the state dictionary and saving/loading the models: PyTorch - Saving and Loading Models
https://stackoverflow.com/questions/61591081/
How to vectorize matrix inversion while handling runtime error in pytorch
I need to invert some matrices in PyTorch. However, some of the matrices are not invertible, which leads to the code throwing a runtime error as follows: matrices = torch.randn([5,3,3]) matrices[[2,3]] = torch.zeros([3,3]) inverses = torch.inverse(matrices) RuntimeError: inverse_cpu: For batch 2: U(1,1) is zero, singular U. I have a fallback technique for such situations. However, I can't figure out which of the matrices throw the error. Currently, I have replaced the code with a non-vectorized version, but it has become a bottleneck. Is there a way to handle this without giving up vectorization?
The best way I can think of is to first calculate the determinant of each matrix, then calculate the inverses of those that have an abs(det) > 0. matrices = torch.randn([5,3,3]) matrices[[2,3]] = torch.zeros([3,3]) determinants = torch.det(matrices) inverses = torch.inverse(matrices[determinants.abs()>0.]) You'll have to handle the removal of singular matrices, but that shouldn't be too hard since you have the index values of those matrices from determinants.abs()==0. This allows you to keep the inversion vectorized.
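If you also need the result aligned with the original batch, you can write the computed inverses back into a full tensor and apply your fallback to the singular ones; here I use the Moore-Penrose pseudo-inverse purely as one example fallback (assuming a PyTorch version with batched pinverse; otherwise handle the few singular matrices in a loop):

import torch

matrices = torch.randn([5, 3, 3])
matrices[[2, 3]] = torch.zeros([3, 3])

dets = torch.det(matrices)
invertible = dets.abs() > 0.

inverses = torch.empty_like(matrices)
inverses[invertible] = torch.inverse(matrices[invertible])      # vectorized over the good ones
inverses[~invertible] = torch.pinverse(matrices[~invertible])   # fallback for the singular ones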
https://stackoverflow.com/questions/61593887/
How is KL-divergence in pytorch code related to the formula?
In the VAE tutorial, the KL divergence of two normal distributions is defined by: KL(N(μ₁, Σ₁) ‖ N(μ₂, Σ₂)) = ½ [ tr(Σ₂⁻¹ Σ₁) + (μ₂ − μ₁)ᵀ Σ₂⁻¹ (μ₂ − μ₁) − k + ln(det Σ₂ / det Σ₁) ] And in much code, such as here, here and here, it is implemented as: KL_loss = -0.5 * torch.sum(1 + logv - mean.pow(2) - logv.exp()) or def latent_loss(z_mean, z_stddev): mean_sq = z_mean * z_mean stddev_sq = z_stddev * z_stddev return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1) How are they related? Why is there no "tr" or ".transpose()" in the code?
The expressions in the code you posted assume X is an uncorrelated multivariate Gaussian random variable. This is apparent from the lack of cross terms in the determinant of the covariance matrix. Therefore the mean vector and covariance matrix take the forms μ = [μ₁, …, μₖ]ᵀ and Σ = diag(σ₁², …, σₖ²), and the prior in the VAE setting is the standard normal: μ₂ = 0, Σ₂ = I. Using this we can quickly derive the following equivalent representations for the components of the original expression: tr(Σ₂⁻¹ Σ₁) = Σᵢ σᵢ², (μ₂ − μ₁)ᵀ Σ₂⁻¹ (μ₂ − μ₁) = Σᵢ μᵢ², and ln(det Σ₂ / det Σ₁) = −Σᵢ ln σᵢ². Substituting these back into the original expression gives KL = ½ Σᵢ (σᵢ² + μᵢ² − 1 − ln σᵢ²) = −½ Σᵢ (1 + ln σᵢ² − μᵢ² − σᵢ²), which is exactly the first code snippet (with logv = ln σ²). The trace collapses to a plain sum and the quadratic form to a sum of squares, which is why no "tr" or ".transpose()" appears in the code.
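If you want to check the algebra numerically, torch.distributions implements the general formula; a quick sketch using the mean/logv parameterization from the question:

import torch
from torch.distributions import Normal, kl_divergence

mean = torch.randn(4)
logv = torch.randn(4)
std = (0.5 * logv).exp()  # logv = log(sigma^2), so sigma = exp(logv / 2)

closed_form = -0.5 * torch.sum(1 + logv - mean.pow(2) - logv.exp())
library = kl_divergence(Normal(mean, std), Normal(torch.zeros(4), torch.ones(4))).sum()
print(torch.allclose(closed_form, library))  # True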
https://stackoverflow.com/questions/61597340/
Why does the learning rate change with the torch.optim.SGD method?
With plain SGD the learning rate should not change across epochs, but it does. Please help me understand why this happens and how I can prevent the LR from changing. import torch params = [torch.nn.Parameter(torch.randn(1, 1))] optimizer = torch.optim.SGD(params, lr=0.9) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9) for epoch in range(5): print(scheduler.get_lr()) scheduler.step() Output is: [0.9] [0.7290000000000001] [0.6561000000000001] [0.5904900000000002] [0.5314410000000002] My torch version is 1.4.0
Since you are using torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9) (which actually means torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)), you are multiplying the learning rate by gamma=0.9 at every step_size=1 step: 0.9 = 0.9 0.729 = 0.9*0.9*0.9 0.6561 = 0.9*0.9*0.9*0.9 0.59049 = 0.9*0.9*0.9*0.9*0.9 The only "strange" point is that 0.81 = 0.9*0.9 is missing at the second step (UPDATE: see Szymon Maszke's answer for an explanation). To prevent the decrease from happening too early: if you have N samples in your dataset and the batch size is D, then set torch.optim.lr_scheduler.StepLR(optimizer, step_size=N/D, gamma=0.9) to decrease once per epoch. To decrease once every E epochs, set torch.optim.lr_scheduler.StepLR(optimizer, step_size=E*N/D, gamma=0.9)
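As a side note, scheduler.get_lr() is not meant for monitoring the current rate; read it off the optimizer instead (newer releases also provide scheduler.get_last_lr(), though check what your version offers). A sketch of the question's loop doing this:

import torch

params = [torch.nn.Parameter(torch.randn(1, 1))]
optimizer = torch.optim.SGD(params, lr=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
for epoch in range(5):
    print(optimizer.param_groups[0]['lr'])  # the rate actually used: 0.9, 0.81, 0.729, ...
    optimizer.step()
    scheduler.step()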
https://stackoverflow.com/questions/61599465/
How Can I change the value of "maxdets" in Faster R-CNN by Pytorch?
I am implementing a faster RCNN network on pytorch. I have followed the next tutorial. https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html There are images in which I have more than 100 objects to classify. However, with this tutorial I can only detect a maximum of 100 objects, since the parameter "maxdets" = 100. Is there a way to change this value to adapt it to my project? IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.235 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.655 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.105 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.238 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.006 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.066 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.331 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.331 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 If only change next param, it would be the problem solved? cocoeval.Params.setDetParams.maxDets = [1, 10, 100] Thank you!
"There are images in which I have more than 100 objects to classify." maxDets = 100 doesn't mean it will classify only 100 images but it refers to % AverageRecall given 100 detections per image inshort maxDets is realted to metrics not actual no. of images classified. for more info visit : http://cocodataset.org/#detection-eval Tensorboard graph recall https://github.com/matterport/Mask_RCNN/issues/663 # Limit to max_per_image detections **over all classes** if number_of_detections > self.detections_per_img > 0: cls_scores = result.get_field("scores") image_thresh, _ = torch.kthvalue( cls_scores.cpu(), number_of_detections - self.detections_per_img + 1 ) keep = cls_scores >= image_thresh.item() keep = torch.nonzero(keep).squeeze(1) result = result[keep] return result according to this code snippet I found that it checks the no. of detection so model.roi_heads.detections_per_img=300 is correct for your purpose. And I haven't found much proper documentation on maxdets but i guess the above code should work. # non-maximum suppression, independently done per class keep = box_ops.batched_nms(boxes, scores, labels, self.nms_thresh) # keep only topk scoring predictions keep = keep[:self.detections_per_img] this code snippet says that we can filter out only some top detections we want to have in our model.
https://stackoverflow.com/questions/61599582/
Pytorch C++ (libtorch) outputs different results if I change shape
So I'm learning neural networks right now and I noticed something really strange in my network. I have an input layer created like this convN1 = register_module("convN1", torch::nn::Conv2d(torch::nn::Conv2dOptions(4, 256, 3).padding(1))); and an output layer that is a tanh function. So it expects a torch::Tensor of shape {/batchSize/, 4, /sideLength/, /sideLength/} and will output a tensor of just 1 float value. As such, for testing I have created a custom tensor of shape {4, 15, 15}. The really weird part is what's happening below auto inputTensor = torch::zeros({ 1, 4, 15, 15}); inputTensor[0] = customTensor; std::cout << network->forward(inputTensor); // Outputs something like 0.94142 inputTensor = torch::zeros({ 32, 4, 15, 15}); inputTensor[0] = customTensor; std::cout << network->forward(inputTensor); // Outputs something like 0.1234 then 0.8543 31 times So why is the customTensor getting 2 different values from my network just from the fact that the batch size has changed? Am I not understanding some part of how tensors work? P.S. I did check and the above block of code was operating under eval mode. Edit: Since it's been asked, here's a more in-depth look at my network convN1 = register_module("convN1", torch::nn::Conv2d(torch::nn::Conv2dOptions(4, 256, 3).padding(1))); batchNorm1 = register_module("batchNorm1", torch::nn::BatchNorm2d(torch::nn::BatchNormOptions(256))); m_residualBatch1 = register_module(batch1Name, torch::nn::BatchNorm2d(torch::nn::BatchNormOptions(256))); m_residualBatch2 = register_module(batch2Name, torch::nn::BatchNorm2d(torch::nn::BatchNormOptions(256))); m_residualConv1 = register_module(conv1Name, torch::nn::Conv2d(torch::nn::Conv2dOptions(256, 256, 3).padding(1))); m_residualConv2 = register_module(conv2Name, torch::nn::Conv2d(torch::nn::Conv2dOptions(256, 256, 3).padding(1))); valueN1 = register_module("valueN1", torch::nn::Conv2d(256, 2, 1)); batchNorm3 = register_module("batchNorm3", torch::nn::BatchNorm2d(torch::nn::BatchNormOptions(2))); valueN2 = register_module("valueN2", torch::nn::Linear(2 * BOARD_LENGTH, 64)); valueN3 = register_module("valueN3", torch::nn::Linear(64, 1)); And how it forwards is like so torch::Tensor Net::forwadValue(torch::Tensor x) { x = convN1->forward(x); x = batchNorm1->forward(x); x = torch::relu(x); torch::Tensor residualCopy = x.clone(); x = m_residualConv1->forward(x); x = m_residualBatch1->forward(x); x = torch::relu(x); x = m_residualConv2->forward(x); x = m_residualBatch2->forward(x); x += residualCopy; x = torch::relu(x); x = valueN1->forward(x); x = batchNorm3->forward(x); x = torch::relu(x); x = valueN2->forward(x.reshape({ x.sizes()[0], 30 })); x = torch::relu(x); x = valueN3->forward(x); return torch::tanh(x); }
Thank you @MichaelJungo, it turns out you were right in that one of my BatchNorm2d modules wasn't being set to eval mode. I was unclear on how registering modules worked in the beginning (and still am to an extent), so I had overloaded the ::train() function to manually set all my modules to the necessary mode. In it I forgot to set one of my BatchNorm modules to the correct mode; doing so made the outputs consistent. Thanks a bunch!
https://stackoverflow.com/questions/61608010/
How to do cubic spline interpolation and integration in Pytorch
In Pytorch, is there cubic spline interpolation similar to Scipy's? Given 1D input tensors x and y, I want to interpolate through those points and evaluate them at xs to obtain ys. Also, I want an integrator function that finds Ys, the integral of the spline interpolation from x[0] to xs.
Here is a gist I made doing this with Cubic Hermite Splines in PyTorch, efficiently and with autograd support. For convenience, I'll also put the code here. import torch as T def h_poly_helper(tt): A = T.tensor([ [1, 0, -3, 2], [0, 1, -2, 1], [0, 0, 3, -2], [0, 0, -1, 1] ], dtype=tt[-1].dtype) return [ sum( A[i, j]*tt[j] for j in range(4) ) for i in range(4) ] def h_poly(t): tt = [ None for _ in range(4) ] tt[0] = 1 for i in range(1, 4): tt[i] = tt[i-1]*t return h_poly_helper(tt) def H_poly(t): tt = [ None for _ in range(4) ] tt[0] = t for i in range(1, 4): tt[i] = tt[i-1]*t*i/(i+1) return h_poly_helper(tt) def interp_func(x, y): "Returns the interpolating function" if len(y)>1: m = (y[1:] - y[:-1])/(x[1:] - x[:-1]) m = T.cat([m[[0]], (m[1:] + m[:-1])/2, m[[-1]]]) def f(xs): if len(y)==1: # in the case of 1 point, treat as constant function return y[0] + T.zeros_like(xs) I = T.searchsorted(x[1:], xs) dx = (x[I+1]-x[I]) hh = h_poly((xs-x[I])/dx) return hh[0]*y[I] + hh[1]*m[I]*dx + hh[2]*y[I+1] + hh[3]*m[I+1]*dx return f def interp(x, y, xs): return interp_func(x,y)(xs) def integ_func(x, y): "Returns the integral of the interpolating function" if len(y)>1: m = (y[1:] - y[:-1])/(x[1:] - x[:-1]) m = T.cat([m[[0]], (m[1:] + m[:-1])/2, m[[-1]]]) Y = T.zeros_like(y) Y[1:] = (x[1:]-x[:-1])*( (y[:-1]+y[1:])/2 + (m[:-1] - m[1:])*(x[1:]-x[:-1])/12 ) Y = Y.cumsum(0) def f(xs): if len(y)==1: return y[0]*(xs - x[0]) I = T.searchsorted(x[1:].detach(), xs) dx = (x[I+1]-x[I]) hh = H_poly((xs-x[I])/dx) return Y[I] + dx*( hh[0]*y[I] + hh[1]*m[I]*dx + hh[2]*y[I+1] + hh[3]*m[I+1]*dx ) return f def integ(x, y, xs): return integ_func(x,y)(xs) # Example if __name__ == "__main__": import matplotlib.pylab as P # for plotting x = T.linspace(0, 6, 7) y = x.sin() xs = T.linspace(0, 6, 101) ys = interp(x, y, xs) Ys = integ(x, y, xs) P.scatter(x, y, label='Samples', color='purple') P.plot(xs, ys, label='Interpolated curve') P.plot(xs, xs.sin(), '--', label='True Curve') P.plot(xs, Ys, label='Spline Integral') P.plot(xs, 1-xs.cos(), '--', label='True Integral') P.legend() P.show()
https://stackoverflow.com/questions/61616810/
Understanding input shape to PyTorch LSTM
This seems to be one of the most common questions about LSTMs in PyTorch, but I am still unable to figure out what should be the input shape to PyTorch LSTM. Even after following several posts (1, 2, 3) and trying out the solutions, it doesn't seem to work. Background: I have encoded text sequences (variable length) in a batch of size 12 and the sequences are padded and packed using pad_packed_sequence functionality. MAX_LEN for each sequence is 384 and each token (or word) in the sequence has a dimension of 768. Hence my batch tensor could have one of the following shapes: [12, 384, 768] or [384, 12, 768]. The batch will be my input to the PyTorch rnn module (lstm here). According to the PyTorch documentation for LSTMs, its input dimensions are (seq_len, batch, input_size) which I understand as following. seq_len - the number of time steps in each input stream (feature vector length). batch - the size of each batch of input sequences. input_size - the dimension for each input token or time step. lstm = nn.LSTM(input_size=?, hidden_size=?, batch_first=True) What should be the exact input_size and hidden_size values here?
You have explained the structure of your input, but you haven't made the connection between your input dimensions and the LSTM's expected input dimensions. Let's break down your input (assigning names to the dimensions): batch_size: 12 seq_len: 384 input_size / num_features: 768 That means the input_size of the LSTM needs to be 768. The hidden_size is not dependent on your input, but rather how many features the LSTM should create, which is then used for the hidden state as well as the output, since that is the last hidden state. You have to decide how many features you want to use for the LSTM. Finally, for the input shape, setting batch_first=True requires the input to have the shape [batch_size, seq_len, input_size], in your case that would be [12, 384, 768]. import torch import torch.nn as nn # Size: [batch_size, seq_len, input_size] input = torch.randn(12, 384, 768) lstm = nn.LSTM(input_size=768, hidden_size=512, batch_first=True) output, _ = lstm(input) output.size() # => torch.Size([12, 384, 512])
https://stackoverflow.com/questions/61632584/
My train accuracy remains at 10% when I add weight_decay parameter to my optimizer in PyTorch. I am using CIFAR10 dataset and LeNet CNN model
I am training CIFAR10 dataset on LeNet CNN model. I am using PyTorch on Google Colab. The code runs only when I use Adam optimizer with model.parameters() as the only parameter. But when I change my optimizer or use weight_decay parameter then the accuracy remains at 10% through all the epochs. I cannot understand the reason why it is happening. # CNN Model - LeNet class LeNet_ReLU(nn.Module): def __init__(self): super().__init__() self.cnn_model = nn.Sequential(nn.Conv2d(3,6,5), nn.ReLU(), nn.AvgPool2d(2, stride=2), nn.Conv2d(6,16,5), nn.ReLU(), nn.AvgPool2d(2, stride=2)) self.fc_model = nn.Sequential(nn.Linear(400, 120), nn.ReLU(), nn.Linear(120,84), nn.ReLU(), nn.Linear(84,10)) def forward(self, x): x = self.cnn_model(x) x = x.view(x.size(0), -1) x = self.fc_model(x) return x # Importing dataset and creating dataloader batch_size = 128 trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor()) trainloader = utils_data.DataLoader(trainset, batch_size=batch_size, shuffle=True) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.ToTensor()) testloader = utils_data.DataLoader(testset, batch_size=batch_size, shuffle=False) # Creating instance of the model net = LeNet_ReLU() # Evaluation function def evaluation(dataloader): total, correct = 0, 0 for data in dataloader: inputs, labels = data outputs = net(inputs) _, pred = torch.max(outputs.data, 1) total += labels.size(0) correct += (pred==labels).sum().item() return correct/total * 100 # Loss function and optimizer loss_fn = nn.CrossEntropyLoss() opt = optim.Adam(net.parameters(), weight_decay = 0.9) # Model training loss_epoch_arr = [] max_epochs = 16 for epoch in range(max_epochs): for i, data in enumerate(trainloader, 0): inputs, labels = data outputs = net(inputs) loss = loss_fn(outputs, labels) loss.backward() opt.step() opt.zero_grad() loss_epoch_arr.append(loss.item()) print('Epoch: %d/%d, Test acc: %0.2f, Train acc: %0.2f' % (epoch,max_epochs, evaluation(testloader), evaluation(trainloader))) plt.plot(loss_epoch_arr)
The weight decay mechanism penalizes high-value weights, i.e. it constrains the weights to relatively small values by adding the sum of their squares, multiplied by the weight_decay argument you gave it, to the loss. That can be seen as a quadratic regularization term. When you pass a large weight_decay value, you may constrain your network too much and prevent it from learning; that is probably why it stays at 10% accuracy, which corresponds to not learning at all and just guessing the answer (since you have 10 classes, you get 10% accuracy when the output isn't a function of the input at all). The solution is to play around with different values: train with a weight_decay of 1e-4 or some other value in that area. Note that as you use values closer to zero, you should get results closer to your initial training run without weight decay. Hope that helps.
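To make this concrete, here is a minimal sketch reusing net and loss from the question (the 1e-4 value is just a starting point to tune, not a recommendation):

import torch.optim as optim

# a much smaller decay than 0.9; try values roughly in the 1e-5 to 1e-3 range
opt = optim.Adam(net.parameters(), weight_decay=1e-4)

# conceptually, weight decay behaves like adding this quadratic penalty yourself:
l2_penalty = sum((p ** 2).sum() for p in net.parameters())
total_loss = loss + 1e-4 * l2_penalty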
https://stackoverflow.com/questions/61633868/
I am getting Attribute error as 'int' object has no attribute 'to'
I am writing Python code in a Kaggle notebook for image classification. In the training part, I am getting an error AttributeError Traceback (most recent call last) <ipython-input-22-052723d8ce9d> in <module> 5 test_loss = 0.0 6 for images,label in enumerate(train_loader): ----> 7 images,label = images.to(cuda),label.to(cuda) 8 optimizer.zero_grad() 9 AttributeError: 'int' object has no attribute 'to' Here is the code (I am giving only 2 parts; please tell me if you need more): train_loader = torch.utils.data.DataLoader(train_data,batch_size = 128,num_workers =0,shuffle =True) test_loader = torch.utils.data.DataLoader(test_data,batch_size = 64,num_workers =0,shuffle =False) epoch = 10 for e in range(epoch): train_loss = 0.0 test_loss = 0.0 for images,label in enumerate(train_loader): images,label = images.to(cuda),label.to(cuda) optimizer.zero_grad() output = model(images) _,predict = torch.max(output.data, 1) loss = criterion(output,labels) loss.backward() optimizer.step() train_loss += loss.item() train_size += label.size(0) train_success += (predict==label).sum().item() print("train_accuracy is {.2f}".format(100*(train_success/train_size)) )
I don't know much about the environment you're working in, but this is what goes wrong: for images, label in enumerate(train_loader): enumerate yields (index, item) pairs, so it puts the actual batch from train_loader into label, while images is given a number (the loop counter). Try this to see what I mean, and to see what goes wrong: for images, label in enumerate(train_loader): print(images) break And since images is a number (int), there is no images.to() method associated with it.
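For completeness, a sketch of the corrected loop (keeping the cuda variable from the question); either iterate over the loader directly or also unpack the index that enumerate yields:

# option 1: iterate over the DataLoader directly
for images, label in train_loader:
    images, label = images.to(cuda), label.to(cuda)
    ...

# option 2: keep enumerate, but unpack the (index, batch) pair it yields
for i, (images, label) in enumerate(train_loader):
    images, label = images.to(cuda), label.to(cuda)
    ...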
https://stackoverflow.com/questions/61633944/
Validation set augmentations PyTorch example
In this PyTorch vision example for transfer learning, they are performing validation set augmentations, and I can't figure out why. # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } As far as I know, data augmentations are to be made solely on the training set (and sometimes on the test set, in what is called Test Time Augmentations). Why is it done here as well? Also, why not just resizing straight to 224?
To clarify, random data augmentation is only allowed on the training set. You can apply data augmentation to the validation and test sets provided that none of the augmentations are random. You can see this clearly in the example you provided. The training set uses several random augmentations (augmentations that use randomness usually have "random" in the name), whereas the validation set only uses augmentations that don't introduce any randomness into the data. One last important detail: when you normalize the validation and test sets, you MUST use the exact same factors you used for the training set. You will see that the example above kept the numbers the same. The need to resize and then center crop comes from the fact that the validation set needs to come from the same domain as the training set: the training images were randomly resized and cropped to 224, so the validation images should be deterministically resized and center-cropped to a comparable 224 crop, rather than resized straight to 224, which would not match the crop statistics seen in training.
https://stackoverflow.com/questions/61637447/
How to extract patches from an image in pytorch?
I want to extract image patches from an image with patch size 128 and stride 32, so I have this code, but it gives me an error: from PIL import Image img = Image.open("cat.jpg") x = transforms.ToTensor()(img) x = x.unsqueeze(0) size = 128 # patch size stride = 32 # patch stride patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride) print(patches.shape) and the error I get is: RuntimeError: maximum size for tensor at dimension 1 is 3 but size is 128 This is the only method I've found so far, but it gives me this error.
The size of your x is [1, 3, height, width]. Calling x.unfold(1, size, stride) tries to create slices of size 128 from dimension 1, which has size 3, hence it is too small to create any slice. You don't want to create slices across dimension 1, since those are the channels of the image (RGB in this case) and they need to be kept as they are for all patches. The patches are only created across the height and width of an image. patches = x.unfold(2, size, stride).unfold(3, size, stride) The resulting tensor will have size [1, 3, num_vertical_slices, num_horizontal_slices, 128, 128]. You can reshape it to combine the slices to get a list of patches i.e. size of [1, 3, num_patches, 128, 128]: patches = patches.reshape(1, 3, -1, size, size)
https://stackoverflow.com/questions/61647207/
How to convolve a 2D image, tensor depth 3, using a Linear layer? (in PyTorch)
Given a tensor of shape (3, 256, 256), I would like to convolve or loop through it pixel by pixel to return a tensor of shape (1, 256, 256). This may sound a bit confusing, so here is my code so far, so you know what I mean. class MorphNetwork(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(3, 8) self.fc2 = nn.Linear(8, 1) def forward(self, x): # The input here is shape (3, 256, 256) x = F.relu(self.fc1(x)) x = self.fc2(x) # Returned shape should be (1, 256, 256) return x As you can see, my Linear layer accepts shape 3, which matches the depth of my original tensor. What is the best way of looping through all 256x256 pixels to return a tensor of shape (1, 256, 256)?
A linear layer that takes a 3-dim input and outputs 8 dims is mathematically equivalent to a convolution with a kernel of spatial size 1x1 (I strongly recommend that you actually "do the math" and convince yourself that this is indeed correct). Therefore, you can use the following model, replacing the linear layers with nn.Conv2d: class MorphNetwork(nn.Module): def __init__(self): super().__init__() self.c1 = nn.Conv2d(3, 8, kernel_size=1, bias=True) self.c2 = nn.Conv2d(8, 1, kernel_size=1, bias=True) def forward(self, x): # The input here is shape (3, 256, 256) x = F.relu(self.c1(x)) x = self.c2(x) # Returned shape should be (1, 256, 256) return x If you insist on using nn.Linear layers, you can unfold your input, apply the linear layers, and then fold it back.
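If you do want to keep the linear layers, here is a hedged sketch of one way to do it without any loop: nn.Linear operates on the last dimension, so you can move the channel dimension to the end, apply the layers, and move it back (keeping the fc1/fc2 layers from the question):

def forward(self, x):
    # x: (3, 256, 256) -> (256, 256, 3); nn.Linear acts on the last dim
    x = x.permute(1, 2, 0)
    x = F.relu(self.fc1(x))    # (256, 256, 8)
    x = self.fc2(x)            # (256, 256, 1)
    return x.permute(2, 0, 1)  # (1, 256, 256)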
https://stackoverflow.com/questions/61653090/
Vectorize index identification for faster GPU processing
My input is a list of values, data_idx. In the example, the values range from [0, 5]. data_idx = [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0] My desired output, filled_matrix, is a tensor of shape (max(value) + 1) by len(data_idx), where each row r of the tensor contains all of the indices where data_idx == r, and -1 for the rest of the row if the number of matched indices is fewer than len(data_idx). For example, in the first row, r=0, data_idx==0 at indices [3, 11, 14]. The full output would look like: filled_matrix = tensor([[ 3, 11, 14, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [ 5, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [ 0, 9, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [ 8, 12, 13, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [ 1, 2, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]], dtype=torch.int8) I have working for-loop code that accomplishes my goal. import torch max_idx = 6 data_idx = torch.tensor([2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0]).cuda() max_number_data_idx = data_idx.shape[0] filled_matrix = torch.zeros([max_idx, max_number_data_idx], dtype=torch.int8, device='cuda') filled_matrix.fill_(-1) for i in range(max_idx): same_idx = (data_idx == i).nonzero().flatten() filled_matrix[i][:same_idx.shape[0]] = same_idx Now, I want to speed up this code. Specifically, I want it to be faster on the GPU. In the real scenario, the input, data_idx, can be a list containing millions of values. With, e.g., 1 M different values, the GPU kernel will be launched 1 M times, which makes it very slow. My code is sequential, and GPUs hate sequential code. Is there a function which will produce the same result more efficiently? Or a way to vectorize this for-loop?
Disclaimer: I have not profiled this code too see if it is actually faster on a GPU. One vectorized solution is to use tensor views to broadcast the comparison. Tensor views do not use additional memory. You can see more details in the documentation First, make a matrix that contains the values that you want to compare to for each row. In this case, it's just the row indices. comparison = torch.tensor(range(max_idx)) Now we are going to use expand and unsqueeze to make views of data_idx and comparison which are the same shape as filled_matrix. comparison_view = comparison.unsqueeze(1).expand(max_idx, max_number_data_idx) print(comparison_view) # Each row is the index you want to compare to # tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]]) data_idx_view = data_idx.expand(max_idx, max_number_data_idx) print(data_idx_view) # Each row is a copy of data_idx # tensor([[2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0], [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0], [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0], [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0], [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0], [2, 5, 5, 0, 4, 1, 4, 5, 3, 2, 1, 0, 3, 3, 0]]) We can compare their equality and use nonzero to find the indices mask = comparison_view == data_idx_view mask_indices = mask.nonzero() print(mask_indices) # tensor([[ 0, 3], [ 0, 11], [ 0, 14], [ 1, 5], [ 1, 10], [ 2, 0], [ 2, 9], [ 3, 8], [ 3, 12], [ 3, 13], [ 4, 4], [ 4, 6], [ 5, 1], [ 5, 2], [ 5, 7]]) Now, you just need to manipulate these results into the format that you want for your output. filled_matrix = torch.zeros([max_idx, max_number_data_idx], dtype=torch.int8) filled_matrix.fill_(-1) col_indices = [0, 1, 2, 0, 1, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2] filled_matrix[mask_indices[:, 0], col_indices] = mask_indices[:, 1].type(torch.int8) I thought about several options for generating the col_indices list, but I couldn't come up with anything without a for loop. col_indices = torch.zeros(mask_indices.shape[0]) for i in range(1, mask_indices.shape[0]): if mask_indices[i,0] == mask_indices[i-1,0]: col_indices[i] = col_indices[i-1]+1 You would need to do some profiling to see which code is actually faster.
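If you also want to eliminate that last loop, here is one vectorized sketch for col_indices. It relies on mask.nonzero() returning entries sorted by the first column (which it does): each entry's column is its global position minus the starting offset of its row group.

row = mask_indices[:, 0]
# how many matches each row has, then the starting offset of each row group
counts = torch.bincount(row, minlength=max_idx)
starts = torch.cat([torch.zeros(1, dtype=torch.long, device=row.device),
                    counts.cumsum(0)[:-1]])
# position within the group = global position - group start
col_indices = torch.arange(row.size(0), device=row.device) - starts[row]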
https://stackoverflow.com/questions/61663690/
Reformer local and LSH attention in HuggingFace implementation
The recent implementation of the Reformer in HuggingFace has both what they call LSH Self Attention and Local Self Attention, but the difference is not very clear to me after reading the documentation. Both use bucketing to avoid the quadratic memory requirement of vanilla transformers, but it is not clear how they differ. Is it the case that local self attention only allows queries to attend to keys sequentially near them (i.e., inside a given window in the sentence), as opposed to the proper LSH hashing that LSH self attention does? Or is it something else?
After closely examining the source code, I found that the Local Self Attention does indeed attend to the sequentially nearby tokens, as you hypothesized.
https://stackoverflow.com/questions/61667186/
How can I swap axis in a torch tensor?
I have a torch tensor of size torch.Size([1, 128, 56, 128]), where 1 is the channel dimension, 128 is the width and the height, and 56 is the number of stacked images. How can I rearrange it to torch.Size([1, 56, 128, 128])?
You could simply use permute or transpose.
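A minimal sketch with your shapes:

import torch

x = torch.randn(1, 128, 56, 128)
y = x.permute(0, 2, 1, 3)   # torch.Size([1, 56, 128, 128])
# or, since only two dimensions are swapped:
y = x.transpose(1, 2)       # torch.Size([1, 56, 128, 128])

Note that both reorder the axes (returning a view), which is what you want here; a plain reshape would keep the element order and scramble your images.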
https://stackoverflow.com/questions/61667967/
Duplicate layers when reusing pytorch model
I am trying to reuse some of the resnet layers for a custom architecture and ran into an issue I can't figure out. Here is a simplified example; when I run: import torch from torchvision import models from torchsummary import summary def convrelu(in_channels, out_channels, kernel, padding): return nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel, padding=padding), nn.ReLU(inplace=True), ) class ResNetUNet(nn.Module): def __init__(self): super().__init__() self.base_model = models.resnet18(pretrained=False) self.base_layers = list(self.base_model.children()) self.layer0 = nn.Sequential(*self.base_layers[:3]) def forward(self, x): print(x.shape) output = self.layer0(x) return output base_model = ResNetUNet().cuda() summary(base_model,(3,224,224)) it gives me: ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 112, 112] 9,408 Conv2d-2 [-1, 64, 112, 112] 9,408 BatchNorm2d-3 [-1, 64, 112, 112] 128 BatchNorm2d-4 [-1, 64, 112, 112] 128 ReLU-5 [-1, 64, 112, 112] 0 ReLU-6 [-1, 64, 112, 112] 0 ================================================================ Total params: 19,072 Trainable params: 19,072 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.57 Forward/backward pass size (MB): 36.75 Params size (MB): 0.07 Estimated Total Size (MB): 37.40 ---------------------------------------------------------------- This duplicates each layer (there are 2 convs, 2 batchnorms, 2 ReLUs) as opposed to giving one of each. If I print out self.base_layers[:3] I get: [Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), ReLU(inplace=True)] which shows just three layers without duplicates. Why is it duplicating my layers? I am using PyTorch version 1.4.0
Your layers aren't actually being invoked twice. This is an artifact of how summary is implemented. The reason is that summary recursively iterates over all the children of your module and registers forward hooks for each of them. Since you have repeated children (in base_model and layer0), those repeated modules get multiple hooks registered. When summary calls forward, this causes both of the hooks for each module to be invoked, which causes repeats of the layers to be reported. For your toy example a solution would be to simply not assign base_model as an attribute, since it's not being used during forward anyway. This avoids having base_model ever being added as a child. class ResNetUNet(nn.Module): def __init__(self): super().__init__() base_model = models.resnet18(pretrained=False) base_layers = list(base_model.children()) self.layer0 = nn.Sequential(*base_layers[:3]) Another solution is to create a modified version of summary which doesn't register hooks for the same module multiple times. Below is an augmented summary where I use a set named already_registered to keep track of modules which already have hooks registered, to avoid registering multiple hooks. from collections import OrderedDict import torch import torch.nn as nn import numpy as np def summary(model, input_size, batch_size=-1, device="cuda"): # keep track of registered modules so that we don't add multiple hooks already_registered = set() def register_hook(module): def hook(module, input, output): class_name = str(module.__class__).split(".")[-1].split("'")[0] module_idx = len(summary) m_key = "%s-%i" % (class_name, module_idx + 1) summary[m_key] = OrderedDict() summary[m_key]["input_shape"] = list(input[0].size()) summary[m_key]["input_shape"][0] = batch_size if isinstance(output, (list, tuple)): summary[m_key]["output_shape"] = [ [-1] + list(o.size())[1:] for o in output ] else: summary[m_key]["output_shape"] = list(output.size()) summary[m_key]["output_shape"][0] = batch_size params = 0 if hasattr(module, "weight") and hasattr(module.weight, "size"): params += torch.prod(torch.LongTensor(list(module.weight.size()))) summary[m_key]["trainable"] = module.weight.requires_grad if hasattr(module, "bias") and hasattr(module.bias, "size"): params += torch.prod(torch.LongTensor(list(module.bias.size()))) summary[m_key]["nb_params"] = params if ( not isinstance(module, nn.Sequential) and not isinstance(module, nn.ModuleList) and not (module == model) and module not in already_registered ): already_registered.add(module) hooks.append(module.register_forward_hook(hook)) device = device.lower() assert device in [ "cuda", "cpu", ], "Input device is not valid, please specify 'cuda' or 'cpu'" if device == "cuda" and torch.cuda.is_available(): dtype = torch.cuda.FloatTensor else: dtype = torch.FloatTensor # multiple inputs to the network if isinstance(input_size, tuple): input_size = [input_size] # batch_size of 2 for batchnorm x = [torch.rand(2, *in_size).type(dtype) for in_size in input_size] # print(type(x[0])) # create properties summary = OrderedDict() hooks = [] # register hook model.apply(register_hook) # make a forward pass # print(x.shape) model(*x) # remove these hooks for h in hooks: h.remove() print("----------------------------------------------------------------") line_new = "{:>20} {:>25} {:>15}".format("Layer (type)", "Output Shape", "Param #") print(line_new) print("================================================================") total_params = 0 total_output = 0 trainable_params = 0 for layer in
summary: # input_shape, output_shape, trainable, nb_params line_new = "{:>20} {:>25} {:>15}".format( layer, str(summary[layer]["output_shape"]), "{0:,}".format(summary[layer]["nb_params"]), ) total_params += summary[layer]["nb_params"] total_output += np.prod(summary[layer]["output_shape"]) if "trainable" in summary[layer]: if summary[layer]["trainable"] == True: trainable_params += summary[layer]["nb_params"] print(line_new) # assume 4 bytes/number (float on cuda). total_input_size = abs(np.prod(input_size) * batch_size * 4. / (1024 ** 2.)) total_output_size = abs(2. * total_output * 4. / (1024 ** 2.)) # x2 for gradients total_params_size = abs(total_params.numpy() * 4. / (1024 ** 2.)) total_size = total_params_size + total_output_size + total_input_size print("================================================================") print("Total params: {0:,}".format(total_params)) print("Trainable params: {0:,}".format(trainable_params)) print("Non-trainable params: {0:,}".format(total_params - trainable_params)) print("----------------------------------------------------------------") print("Input size (MB): %0.2f" % total_input_size) print("Forward/backward pass size (MB): %0.2f" % total_output_size) print("Params size (MB): %0.2f" % total_params_size) print("Estimated Total Size (MB): %0.2f" % total_size) print("----------------------------------------------------------------") # return summary
https://stackoverflow.com/questions/61668501/
Pytorch batch indexing
So the output of my network looks like this: output = tensor([[[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.0584, -0.2549], [ 0.0482, -0.2386], [ 0.0410, -0.2234], [ 0.0362, -0.2111], [ 0.0333, -0.2018], [ 0.0318, -0.1951], [ 0.0311, -0.1904], [ 0.0310, -0.1873], [ 0.0312, -0.1851], [ 0.0315, -0.1837], [ 0.0318, -0.1828], [ 0.0322, -0.1822], [ 0.0324, -0.1819], [ 0.0327, -0.1817], [ 0.0328, -0.1815], [ 0.0330, -0.1815], [ 0.0331, -0.1814], [ 0.0332, -0.1814], [ 0.0333, -0.1814], [ 0.0333, -0.1814], [ 0.0334, -0.1814], [ 0.0334, -0.1814], [ 0.0334, -0.1814]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.0584, -0.2549], [ 0.0482, -0.2386], [ 0.0410, -0.2234], [ 0.0362, -0.2111], [ 0.0333, -0.2018], [ 0.0318, -0.1951], [ 0.0311, -0.1904], [ 0.0310, -0.1873], [ 0.0312, -0.1851], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.0584, -0.2549], [ 0.0482, -0.2386], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.0584, -0.2549], [ 0.0482, -0.2386], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.0716, -0.2668], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], 
[ 0.1003, -0.2164]], [[ 0.0868, -0.2623], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164], [ 0.1003, -0.2164]]]) Which is a shape of [8, 24, 2] Now 8 is my batch size. And i would like to get a data point from every batch, at the following locations: index = tensor([24, 10, 3, 3, 1, 1, 1, 0]) So the 24th value from the first batch, the 10th value from the second batch, and so on. Now i have problems figuring out the syntax. I've tried torch.gather(output, 0, index) But it keeps telling me, that my dimensions don't match. And trying output[ : ,index] Just gets me the values at all the indexes for each batch. What would be the correct syntax here, to get these values?
To select only one element per batch you need to enumerate the batch indices, which can be done easily with torch.arange. output[torch.arange(output.size(0)), index] That essentially pairs up the enumerated tensor with your index tensor to access the data, which results in indexing output[0, 24], output[1, 10], etc.
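Since you also tried torch.gather, here is an equivalent sketch with it; the index has to be expanded to cover the trailing dimension before gathering along dim 1:

idx = index.view(-1, 1, 1).expand(-1, 1, output.size(2))  # shape [8, 1, 2]
selected = output.gather(1, idx).squeeze(1)               # shape [8, 2]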
https://stackoverflow.com/questions/61677466/
Multi-dimensional tensor dot product in pytorch
I have two tensors of shapes (8, 1, 128) as follows. q_s.shape Out[161]: torch.Size([8, 1, 128]) p_s.shape Out[162]: torch.Size([8, 1, 128]) Above two tensors represent a batch of eight 128 dimensional vectors. I want the dot product of batch q_s with batch p_s. How can I do this? I tried to use torch.tensordot function as follows. It works as expected as well. But it also does the extra work, which I don't want it to do. See the following example. dt = torch.tensordot(q_s, p_s, dims=([1,2], [1,2])) dt Out[176]: tensor([[0.9051, 0.9156, 0.7834, 0.8726, 0.8581, 0.7858, 0.7881, 0.8063], [1.0235, 1.5533, 1.2155, 1.2048, 1.3963, 1.1310, 1.1724, 1.0639], [0.8762, 1.3490, 1.2923, 1.0926, 1.4703, 0.9566, 0.9658, 0.8558], [0.8136, 1.0611, 0.9131, 1.1636, 1.0969, 0.9443, 0.9587, 0.8521], [0.6104, 0.9369, 0.9576, 0.8773, 1.3042, 0.7900, 0.8378, 0.6136], [0.8623, 0.9678, 0.8163, 0.9727, 1.1161, 1.6464, 0.9765, 0.7441], [0.6911, 0.8392, 0.6931, 0.7325, 0.8239, 0.7757, 1.0456, 0.6657], [0.8493, 0.8174, 0.8041, 0.9013, 0.8003, 0.7451, 0.7408, 1.1771]], grad_fn=<AsStridedBackward>) dt.shape Out[177]: torch.Size([8, 8]) As we can see, this produces the tensor of size (8,8) with the dot products I want lying on the diagonal. Is there any different way to obtain a smaller required tensor of shape (8,1), which just contains the elements lying on the diagonal in above result. To be more clear, the elements lying on the diagonal are the correct required dot products we want as a dot product of two batches. Element at index [0][0] is dot product of q_s[0] and p_s[0]. Element at index [1][1] is dot product of q_s[1] and p_s[1] and so on. Is there a better way to obtain the desired dot product in pytorch?
You can do it directly: a = torch.rand(8, 1, 128) b = torch.rand(8, 1, 128) torch.sum(a * b, dim=(1, 2)) # tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153]) torch.diag(torch.tensordot(a, b, dims=([1,2], [1,2]))) # tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153]) If you instead set dim=2 in the sum, you will get a tensor with shape (8, 1).
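An equivalent one-liner via einsum, in case you prefer explicit index notation (it multiplies elementwise and sums over the last two dimensions per batch element):

torch.sum(a * b, dim=(1, 2))                    # shape (8,)
torch.einsum('bij,bij->b', a, b)                # same result, shape (8,)
torch.einsum('bij,bij->b', a, b).unsqueeze(1)   # shape (8, 1)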
https://stackoverflow.com/questions/61678051/
How to get padding mask from input ids?
Consider a batch of 3 pre-processed sentences (tokenization, numericalizing and padding) shown below: batch = torch.tensor([ [1, 2, 0, 0], [4, 0, 0, 0], [3, 5, 6, 7] ]) where 0 stands for the [PAD] token. What would be an efficient approach to generate a padding mask tensor of the same shape as the batch, assigning zero at [PAD] positions and one to other input data (sentence tokens)? In the example above it would be something like: padding_masking= tensor([ [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1] ])
You can get your desired result with padding_masking = batch > 0 If you want ints instead of booleans, use padding_masking.type(torch.int)
https://stackoverflow.com/questions/61688282/
How to get the correct shape of the tensor in custom dataset
I am using a custom Dataset class, but the problem is that when I get the data from the Dataloader, I am left with an array that has a different tensor shape than I want. The shape that I get: torch.Size([1, 56, 128, 128]). The shape that I want: torch.Size([1, 56, 1, 128, 128]). My approach was: 1) apply numpy.expand_dims on the array and get torch.Size([1, 1, 56, 128, 128]), 2) then apply np.transpose on the array to get the shape I want, torch.Size([1, 56, 1, 128, 128]). After the first step I get the error: raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim)) ValueError: pic should be 2/3 dimensional. Got 4 dimensions. If I do the transpose first, none of the combinations of np.transpose(array, axes=(1,2,0)) yield the shape torch.Size([56, 1, 128, 128]). If I convert the array to a Tensor first and then do torch.unsqueeze, I get the error: raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic))) TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'> Here is my code: class patientdataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): self.annotations = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.annotations) def __getitem__(self, index): img_path = os.path.join(self.root_dir, self.annotations.iloc[index,0]) # np_load_old = np.load # np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k) image= np.asarray(np.load(img_path)) image= np.transpose(image, axes=(1,2,0)) image = torch.Tensor (image) image = torch.unsqueeze(image, dim=1) y_label = torch.tensor(np.asarray(self.annotations.iloc[index,1])) if self.transform: image = self.transform(image) return (image, y_label)
You can do this using None, which inserts a new dimension of size 1 at the given position: x.shape torch.Size([1, 56, 128, 128]) z = x[:,:,None,:,:] z.shape torch.Size([1, 56, 1, 128, 128]) I got a hint from here.
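Equivalent alternatives, for reference (indexing with None is the same as unsqueeze at that position):

z = x.unsqueeze(2)                  # torch.Size([1, 56, 1, 128, 128])
z = x.reshape(1, 56, 1, 128, 128)   # explicit, but hard-codes the sizes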
https://stackoverflow.com/questions/61688782/
Array indexing in pytorch
I am experiencing weird behavior while dealing with the following scenario. tne is the batch size, while N is the total number of possible locations for each example. Each example has exactly 24 scalar outputs, where their locations are stored in dof tensor (size of dof is (tne X 24)). Fint_e has a size of tneX24 (i.e., the 24 outputs for each example). I am trying to construct a large tensor, which has a size of tne X N. When I do the following, it fills in the wrong manner. Any advice? Fint_MAT = torch.zeros((tne,N)) Fint_MAT[:,dof] = Fint_e The dof tensor, which has the size of batch size X 24, has different indices for each example, but each example has in total 24 indices. For instance, dof[0,:] = 0, 1, 6, 9, … (24 in total) dof[1,:] = 1,100, 151, 300,… (24 in total) Any hint would be appreciated. I include below a simple scenario for better elaboration: tne = 3 N = 48 Fint_MAT = torch.zeros((tne,N)) Fint_e = torch.randn((tne, 24)) v1 = torch.arange(24).unsqueeze(0) v2 = torch.arange(12, 36).unsqueeze(0) v3 = torch.arange(24, 48).unsqueeze(0) dof = torch.cat((v1,v2,v3), axis=0).long() Fint_MAT[:,dof] = Fint_e
Okay, the key here is to use pairs of indices. Your dof tensor indexes the columns, but you also need to index the rows. x_index = torch.arange(3).unsqueeze(1).expand(3, 24) x_index is a 3 x 24 tensor where each row is the row index. Now you can use this together with the dof tensor to index elements in the Fint_MAT matrix Fint_MAT[x_index, dof] = Fint_e Basically, corresponding elements in x_index and dof form a [row, column] pair in Fint_MAT, so Fint_MAT[x_index[0], dof[0]]= Fint_e[0] etc. I think this should give you what you want.
https://stackoverflow.com/questions/61689643/
Cannot Uninstall Pytorch
I've installed pytorch 0.4.1 with both pip and conda in an effort to get a specific package working. Somehow version 1.5.0 is installed somewhere on my filesystem and despite my best efforts (following all these instructions) I cannot find and uninstall this version. Running python3.6 -c "import torch; print(torch.__version__)" results in 1.5.0 no matter how many uninstalls I attempt. I was using some virtual environments to help in my development, using venv. I am running macOS 10.15.4, Intel Core i9 8-core, 32 GB 2667 DDR4, AMD Radeon Pro 5500M 8 GB.
I was able to locate all installs of torch using sudo find . -name "*torch*". My version, which I ended up deleting 'by hand', was located at /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch. Hopefully this can help someone else.
https://stackoverflow.com/questions/61693276/
torch.cuda.is_available() returning false with Ubuntu 16.04
I have the following configurations nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2015 NVIDIA Corporation Built on Tue_Aug_11_14:27:32_CDT_2015 Cuda compilation tools, release 7.5, V7.5.17 nvidia-smi CUDA Toolkit (from Anaconda) cudatoolkit 10.2.89 hfd86e86_1 anaconda PyTorch (from Anaconda) pytorch 1.5.0 py3.7_cuda10.2.89_cudnn7.6.5_0 pytorch But I am getting torch.cuda.is_available() -> False Could anyone please tell me which component I should upgrade or downgrade in order to bring up CUDA?
I had the same issue on Ubuntu 18.04 with the anaconda installation recommended on the official PyTorch website. After installing cudatoolkit 10.1 rather than the recommended 10.2, I got True. Some people remedied this with a driver version downgrade/upgrade, as this is most likely a compatibility issue.
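For reference, the 10.1 toolkit install looked roughly like this at the time (check the install selector on pytorch.org for the exact command for your OS and package manager):

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch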
https://stackoverflow.com/questions/61695804/
Accuracy for every epoch in Pytorch
I'm struggling to calculate accuracy for every epoch in my training function for a CNN classifier in PyTorch. After I run this script, it always prints out 0, 0.25 or 0.75, which is obviously wrong. I'm guessing the problem is the inputs of the get_accuracy function (outputs and labels), as they are not accumulated over the entire epoch, but I'm not sure how to fix that. Ideally, I'd like to print out both train and test accuracy for every epoch. def get_accuracy(pred, actual): assert len(pred) == len(actual) total = len(actual) _, predicted = torch.max(pred.data, 1) correct = (predicted == actual).sum().item() return correct / total def train_model(model): for epoch in range(epochs): running_loss = 0.00 for i, data in enumerate(trainloader, 0): inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() outputs = cnn(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() # accumulate batch losses; averaged below running_loss /= len(trainloader) training_accuracy = get_accuracy(outputs, labels) test_accuracy = 'todo' print('='*10,'iteration: {0}'.format(epoch+1),'='*10,) print('\n Loss: {0} \n Training accuracy:{1}% \n Test accuracy: {2}%'.format(running_loss, training_accuracy, test_accuracy)) print('Finished Training')
I assume you want to calculate accuracy for the multiclass case (so your classes are of the form [0, 1, 2, 3, ..., N]). Your get_accuracy helper itself does the right prediction step: torch.max(pred.data, 1) already returns the argmax indices. The problem is where you call it: once per epoch, on outputs and labels, which at that point only hold the last mini-batch of the epoch, so you are measuring accuracy on a handful of samples, which is why you keep seeing coarse values like 0, 0.25 or 0.75. A compact accuracy function, for example: def accuracy(predictions, labels): classes = torch.argmax(predictions, dim=1) return torch.mean((classes == labels).float()) This function should be called within your inner loop, just like you calculate the loss, and averaged over the batches: for epoch in range(epochs): running_loss = 0.00 running_accuracy = 0.00 for i, data in enumerate(trainloader, 0): inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() outputs = cnn(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() running_accuracy += accuracy(outputs, labels) running_loss /= len(trainloader) running_accuracy /= len(trainloader) The same goes for test and validation; just switch your model to evaluation mode (via model.eval()) and disable gradients using the with torch.no_grad(): context manager.
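A hedged sketch of that evaluation side, reusing the accuracy helper and the cnn/testloader names from the question:

cnn.eval()
with torch.no_grad():
    test_accuracy = 0.0
    for inputs, labels in testloader:
        inputs, labels = inputs.to(device), labels.to(device)
        test_accuracy += accuracy(cnn(inputs), labels).item()
    test_accuracy /= len(testloader)
cnn.train()  # switch back before the next training epoch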
https://stackoverflow.com/questions/61696593/
How to convert a tensorflow model to a pytorch model?
I'm new to PyTorch. Here's the architecture of a TensorFlow model, which I'd like to convert into a PyTorch model. I have written most of the code but am confused about a few places. 1) In TensorFlow, the Conv2D function takes filters as an input. However, in PyTorch, the function takes the number of input channels and output channels as inputs. So how do I find the equivalent numbers of input and output channels, given the filter count? 2) In TensorFlow, the dense layer has a parameter for the number of nodes. However, in PyTorch, the same layer takes 2 different arguments (the input size and the output size); how do I determine them based on the number of nodes? Here's the TensorFlow code: from keras.utils import to_categorical from keras.models import Sequential, load_model from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout model = Sequential() model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=X_train.shape[1:])) model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(rate=0.25)) model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu')) model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(rate=0.25)) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(rate=0.5)) model.add(Dense(43, activation='softmax')) Here's my code: import torch.nn.functional as F import torch # The network should inherit from the nn.Module class Net(nn.Module): def __init__(self): super(Net, self).__init__() # Define 2D convolution layers # 3: input channels, 32: output channels, 5: kernel size, 1: stride self.conv1 = nn.Conv2d(3, 32, 5, 1) # The size of input channel is 3 because all images are coloured self.conv2 = nn.Conv2d(32, 64, 5, 1) self.conv3 = nn.Conv2d(64, 128, 3, 1) self.conv4 = nn.Conv2d(128, 256, 3, 1) # It will 'filter' out some of the input by the probability(assign zero) self.dropout1 = nn.Dropout2d(0.25) self.dropout2 = nn.Dropout2d(0.5) # Fully connected layer: input size, output size self.fc1 = nn.Linear(36864, 128) self.fc2 = nn.Linear(128, 10) # forward() link all layers together, def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = self.conv3(x) x = F.relu(x) x = self.conv4(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output Thanks in advance!
1) In PyTorch, we take input channels and output channels as arguments. In your first layer, the input channels will be the number of color channels in your image. After that, the input channels are always the same as the output channels of your previous layer (output channels are what the filters parameter specifies in TensorFlow). 2) PyTorch is slightly annoying in that when flattening your conv outputs you have to calculate the shape yourself. You can either use the convolution output-size equation, out = (W βˆ’ K + 2P) / S + 1 (where W is the input height/width, K the kernel size, P the padding and S the stride), or write a shape-calculating function that passes a dummy image through the conv part of the network. That flattened size is your input-size argument for the first fully connected layer; the output-size argument is just the number of nodes you want in that layer.
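A hedged sketch of the dummy-pass trick (the layer stack and the 224x224 input size below are illustrative; substitute your own conv layers and resolution):

import torch
import torch.nn as nn

conv_part = nn.Sequential(
    nn.Conv2d(3, 32, 5), nn.ReLU(),
    nn.Conv2d(32, 32, 5), nn.ReLU(),
    nn.MaxPool2d(2),
)
with torch.no_grad():
    # count the elements after the conv stack for one dummy image
    n_flat = conv_part(torch.zeros(1, 3, 224, 224)).numel()
fc1 = nn.Linear(n_flat, 256)  # 256 = desired number of nodes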
https://stackoverflow.com/questions/61697227/
How to decide which decision should be taken from four given options?
For my snake AI, I'm using PyTorch. The network's structure is: class SnakeAI(nn.Module): def __init__(self): super().__init__() self.fc = nn.Sequential( nn.Linear(24, 16, bias=True), nn.ReLU(), nn.Linear(16, 8, bias=True), nn.ReLU(), nn.Linear(8, 4, bias=True), nn.Sigmoid() ) def forward(self, inputs): x = self.fc(inputs) return x Based on the inputs, it gives me 4 values, which to my understanding should be the probabilities of the decisions and should sum to 1. That is not the case here, as the output is: [0.59388083 0.5833764 0.47855872 0.5388371 ] Based on the given output, should I use argmax to decide which direction to go? Is there any better way to do that?
You should change your last activation from nn.Sigmoid() to nn.Softmax(dim=-1). Then you get back a probability distribution over the four options (the values sum to 1), and you can apply torch.max(), which returns the maximum value and its index.
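A minimal sketch of the change and how to read off the decision (the net/inputs names are just illustrative):

self.fc = nn.Sequential(
    nn.Linear(24, 16, bias=True), nn.ReLU(),
    nn.Linear(16, 8, bias=True), nn.ReLU(),
    nn.Linear(8, 4, bias=True),
    nn.Softmax(dim=-1),  # instead of nn.Sigmoid()
)

probs = net(inputs)                          # four values that sum to 1
value, direction = torch.max(probs, dim=-1)  # direction is the chosen option's index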
https://stackoverflow.com/questions/61702825/
Expected input batch_size (1) to match target batch_size (11)
I know this seems to be a common problem but I wasn't able to find a solution. I'm running a multi-label classification model and having issues with tensor sizing. My full code looks like this: from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification import torch # Instantiating tokenizer and model tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased') model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-cased') # Instantiating quantized model quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8) # Forming data tensors input_ids = torch.tensor(tokenizer.encode(x_train[0], add_special_tokens=True)).unsqueeze(0) labels = torch.tensor(Y[0]).unsqueeze(0) # Train model outputs = quantized_model(input_ids, labels=labels) loss, logits = outputs[:2] Which yields the error: ValueError: Expected input batch_size (1) to match target batch_size (11) Input_ids looks like: tensor([[ 101, 789, 160, 1766, 1616, 1110, 170, 1205, 7727, 1113, 170, 2463, 1128, 1336, 1309, 1138, 112, 119, 11882, 11545, 119, 108, 15710, 108, 3645, 108, 3994, 102]]) with shape: torch.Size([1, 28]) and labels looks like: tensor([[0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1]]) with shape: torch.Size([1, 11]) The size of input_ids will vary as the strings to be encoded vary in size. I also noticed that when feeding in 5 values of Y to produce 5 labels, it yields the error: ValueError: Expected input batch_size (1) to match target batch_size (55). with labels shape: torch.Size([1, 5, 11]) (Note that I didn't feed 5 input_ids, which is presumably why input size remains constant) I've tried a few different approaches to getting these to work, but I'm currently at a loss. I'd really appreciate some guidance. Thanks!
The labels for DistilBertForSequenceClassification need to have the size torch.Size([batch_size]) as mentioned in the documentation: labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). In your case, your labels should have size torch.Size([1]). That is not possible for your data, and that's because the sequence classification has one label for each sequence, but you wanted to make it a multi-label classification. As far as I'm aware there is no multi-label model in HuggingFace's transformer library that you could use out of the box. You would need to create your own model, which is not particularly difficult, because these extra models all use the same base model and add an appropriate classifier at the end, depending on the task to be solved. HuggingFace - Multi-label Text Classification using BERT – The Mighty Transformer explains how this can be done.
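A hedged sketch of such a custom head (this is not part of the transformers API; the class name, first-token pooling and layer sizes are illustrative choices):

import torch
import torch.nn as nn
from transformers import DistilBertModel

class DistilBertForMultiLabel(nn.Module):
    def __init__(self, num_labels=11):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained('distilbert-base-cased')
        self.classifier = nn.Linear(self.bert.config.dim, num_labels)
        self.loss_fn = nn.BCEWithLogitsLoss()  # one independent sigmoid per label

    def forward(self, input_ids, labels=None):
        hidden = self.bert(input_ids)[0][:, 0]  # representation of the first token
        logits = self.classifier(hidden)
        if labels is not None:
            # labels: float tensor of shape (batch_size, num_labels)
            return self.loss_fn(logits, labels.float()), logits
        return logits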
https://stackoverflow.com/questions/61704401/
Pytorch: size mismatch error although the sizes of the matrices do match (m1: [256 x 200], m2: [256 x 200])
I trying to do transfer learning by pre training (Self supervised learning) a model on rotation (0, 90, 180, dn 270 degrees: 4 labels) on unlabelled data. Here is the model: class RotNet1(nn.Module): def __init__(self): keep_prob = 0.9 super(RotNet1, self).__init__() self.layer1 = nn.Sequential(nn.Conv2d(in_channels = 3, out_channels = 80, kernel_size = 7, stride = 1, padding = 0), nn.ReLU(), nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 1), nn.Dropout(p=1 - keep_prob) ) self.bn1 = nn.BatchNorm2d(num_features = 80) self.dropout1 = nn.Dropout2d(p=0.02) self.layer2 = nn.Sequential(nn.Conv2d(in_channels = 80, out_channels = 128, kernel_size = 3, stride = 1, padding = 1), nn.ReLU(), nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 1), nn.Dropout(p=1 - keep_prob) ) self.bn2 = nn.BatchNorm2d(num_features = 128) self.layer3 = nn.Sequential(nn.Conv2d(in_channels = 128, out_channels = 256, kernel_size = 3, stride = 1, padding = 0), nn.ReLU(), nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 1), nn.Dropout(p=1 - keep_prob) ) self.bn3 = nn.BatchNorm2d(num_features = 256) self.layer4 = nn.Sequential(nn.Conv2d(in_channels = 256, out_channels = 512, kernel_size = 3, stride = 1, padding = 0), nn.ReLU(), nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 1), nn.Dropout(p=1 - keep_prob) ) self.bn4 = nn.BatchNorm2d(num_features = 512) self.layer5 = nn.Sequential(nn.Conv2d(in_channels = 512, out_channels = 512, kernel_size = 3, stride = 1, padding = 0), nn.ReLU(), nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 1), nn.Dropout(p=1 - keep_prob) ) self.bn5 = nn.BatchNorm2d(num_features = 512) self.drop_out = nn.Dropout() self.fc1 = nn.Linear(512* 2 * 2, 200) self.fc2 = nn.Linear(200, 4) #self.fc3 = nn.Linear(200, 100) def forward(self, input): out = self.layer1(input) out = self.bn1(out) out = self.dropout1(out) out = self.layer2(out) out = self.bn2(out) out = self.layer3(out) out = self.bn3(out) out = self.layer4(out) out = self.bn4(out) out = self.layer5(out) out = self.bn5(out) out = out.reshape(out.size(0), -1) out = self.drop_out(out) out = self.fc1(out) out = self.fc2(out) #out = self.fc3(out) return out I trained this model on those 4 labels and names the model model_ssl. 
I then copied the model and changed the output size of the last fully connected layer from 4 to 200 (which is the number of labels in the labelled training and validation sets, where the number of examples is restricted): model_a = copy.copy(model_ssl) #model_a.classifier num_classes = 200 model_a.fc2 = nn.Linear(256,num_classes).cuda() model_a.to(device) loss_fn = torch.nn.CrossEntropyLoss() n_epochs_a = 20 learning_rate_a = 0.01 alpha_a = 1e-5 momentum_a = 0.9 optimizer = torch.optim.SGD(model_a.parameters(), momentum = momentum_a, nesterov=True, weight_decay = alpha_a, lr=learning_rate_a) train_losses_a, val_losses_a, train_acc_a, val_acc_a = train(model_a, train_dataloader_sl, val_dataloader_sl, optimizer, n_epochs_a, loss_fn) Here is the error message: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-27-f6f362ba8c53> in <module>() 15 optimizer, 16 n_epochs_a, ---> 17 loss_fn) 6 frames <ipython-input-23-df58f17c5135> in train(model, train_dataloader, val_dataloader, optimizer, n_epochs, loss_function) 57 for epoch in range(n_epochs): 58 model.train() ---> 59 train_loss, train_accuracy = train_epoch(model, train_dataloader, optimizer, loss_fn) 60 model.eval() 61 val_loss, val_accuracy = evaluate(model, val_dataloader, loss_fn) <ipython-input-23-df58f17c5135> in train_epoch(model, train_dataloader, optimizer, loss_fn) 10 labels = labels.to(device=device, dtype=torch.int64) 11 # Run predictions ---> 12 output = model(images) 13 # Set gradients to zero 14 optimizer.zero_grad() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) <ipython-input-11-2cd851b6d8e4> in forward(self, input) 85 out = self.drop_out(out) 86 out = self.fc1(out) ---> 87 out = self.fc2(out) 88 #out = self.fc3(out) 89 return out /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input) 85 86 def forward(self, input): ---> 87 return F.linear(input, self.weight, self.bias) 88 89 def extra_repr(self): /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1608 if input.dim() == 2 and bias is not None: 1609 # fused op is marginally faster -> 1610 ret = torch.addmm(bias, input, weight.t()) 1611 else: 1612 output = input.matmul(weight.t()) RuntimeError: size mismatch, m1: [256 x 200], m2: [256 x 200] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283 The sizes of the matrices m1 and m2 seem to match, but there is still that error message. What should I do?
fc1 has an output size of 200, so the input size of the new fc2 must be 200, not 256 (the two matrices in the error look alike, but matrix multiplication needs m1's second dimension to match m2's first, i.e. 200 vs 256). And since you want 200 classes, the output size should be num_classes: num_classes = 200 model_a.fc2 = nn.Linear(200, num_classes).cuda()
https://stackoverflow.com/questions/61704416/
Does reshaping a tensor retain the characteristics of original tensor?
I have a tensor T of the shape (8, 5, 300), where 8 is the batch size, 5 is the number of documents in each batch, and 300 is the encoding of each document. If I reshape the tensor as follows, do the properties of my tensor remain the same? T = T.reshape(5, 300, 8) T.shape >> Size[5, 300, 8] So, does this new tensor have the same properties as the original one? By the properties, I mean: can I say that this is also a tensor of batch size 8, with 5 documents for each batch, and a 300-dimensional encoding for each document? Does this affect the training of the model? If reshaping the tensor messes up the data points, then there is no point in training. For example, if the reshaping above yields an output that is a batch of 5 samples, with 300 documents of size 8 each, then it's useless, since I do not have 300 documents, nor do I have a batch of 5 samples. I need to reshape it like this because my model in between produces output of the shape [8, 5, 300], and the next layer accepts input as [5, 300, 8].
NO You need to understand the difference between reshape/view and permute. reshape and view only changes the "shape" of the tensor, without re-ordering the elements. Therefore orig = torch.rand((8, 5, 300)) resh = orig.reshape(5, 300, 8) orig[0, 0, :] != resh[0, :, 0] If you want to change the order of the elements as well, you need to permute it: perm = orig.permute(1, 2, 0) orig[0, 0, :] == perm[0, :, 0]
https://stackoverflow.com/questions/61707112/
Error: one of the variables needed for gradient computation has been modified by an inplace operation
I'm using the Soft Actor-Critic implementation available here for one of my projects. But when I try to run it, I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [256, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). The error arises from the gradient computation in the sac.py file. I can't see the operation that might be inplace. Any help? The traceback: Traceback (most recent call last) <ipython-input-10-c124add9a61d> in <module>() 22 for i in range(updates_per_step): 23 # Update parameters of all the networks ---> 24 critic_1_loss, critic_2_loss, policy_loss, ent_loss, alpha = agent.update_parameters(memory, batch_size, updates) 25 updates += 1 26 2 frames <ipython-input-7-a2432c4c3767> in update_parameters(self, memory, batch_size, updates) 87 88 self.policy_optim.zero_grad() ---> 89 policy_loss.backward() 90 self.policy_optim.step() 91 /usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 196 products. Defaults to ``False``. 197 """ --> 198 torch.autograd.backward(self, gradient, retain_graph, create_graph) 199 200 def register_hook(self, hook): /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 98 Variable._execution_engine.run_backward( 99 tensors, grad_tensors, retain_graph, create_graph, --> 100 allow_unreachable=True) # allow_unreachable flag 101 102
Just downgrade PyTorch to anything below 1.5.0 (which is the latest, as of this writing). pip uninstall torch pip install torch==1.4.0
https://stackoverflow.com/questions/61708729/
Save the best model trained on Faster RCNN (COCO dataset) with PyTorch, avoiding overfitting
I am training a Faster RCNN neural network on COCO dataset with Pytorch. I have followed next tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html The training results are as follows: Epoch: [6] [ 0/119] eta: 0:01:16 lr: 0.000050 loss: 0.3780 (0.3780) loss_classifier: 0.1290 (0.1290) loss_box_reg: 0.1848 (0.1848) loss_objectness: 0.0239 (0.0239) loss_rpn_box_reg: 0.0403 (0.0403) time: 0.6451 data: 0.1165 max mem: 3105 Epoch: [6] [ 10/119] eta: 0:01:13 lr: 0.000050 loss: 0.4129 (0.4104) loss_classifier: 0.1277 (0.1263) loss_box_reg: 0.2164 (0.2059) loss_objectness: 0.0244 (0.0309) loss_rpn_box_reg: 0.0487 (0.0473) time: 0.6770 data: 0.1253 max mem: 3105 Epoch: [6] [ 20/119] eta: 0:01:07 lr: 0.000050 loss: 0.4165 (0.4302) loss_classifier: 0.1277 (0.1290) loss_box_reg: 0.2180 (0.2136) loss_objectness: 0.0353 (0.0385) loss_rpn_box_reg: 0.0499 (0.0491) time: 0.6843 data: 0.1265 max mem: 3105 Epoch: [6] [ 30/119] eta: 0:01:00 lr: 0.000050 loss: 0.4205 (0.4228) loss_classifier: 0.1271 (0.1277) loss_box_reg: 0.2125 (0.2093) loss_objectness: 0.0334 (0.0374) loss_rpn_box_reg: 0.0499 (0.0484) time: 0.6819 data: 0.1274 max mem: 3105 Epoch: [6] [ 40/119] eta: 0:00:53 lr: 0.000050 loss: 0.4127 (0.4205) loss_classifier: 0.1209 (0.1265) loss_box_reg: 0.2102 (0.2085) loss_objectness: 0.0315 (0.0376) loss_rpn_box_reg: 0.0475 (0.0479) time: 0.6748 data: 0.1282 max mem: 3105 Epoch: [6] [ 50/119] eta: 0:00:46 lr: 0.000050 loss: 0.3973 (0.4123) loss_classifier: 0.1202 (0.1248) loss_box_reg: 0.1947 (0.2039) loss_objectness: 0.0315 (0.0366) loss_rpn_box_reg: 0.0459 (0.0470) time: 0.6730 data: 0.1297 max mem: 3105 Epoch: [6] [ 60/119] eta: 0:00:39 lr: 0.000050 loss: 0.3900 (0.4109) loss_classifier: 0.1206 (0.1248) loss_box_reg: 0.1876 (0.2030) loss_objectness: 0.0345 (0.0365) loss_rpn_box_reg: 0.0431 (0.0467) time: 0.6692 data: 0.1276 max mem: 3105 Epoch: [6] [ 70/119] eta: 0:00:33 lr: 0.000050 loss: 0.3984 (0.4085) loss_classifier: 0.1172 (0.1242) loss_box_reg: 0.2069 (0.2024) loss_objectness: 0.0328 (0.0354) loss_rpn_box_reg: 0.0458 (0.0464) time: 0.6707 data: 0.1252 max mem: 3105 Epoch: [6] [ 80/119] eta: 0:00:26 lr: 0.000050 loss: 0.4153 (0.4113) loss_classifier: 0.1178 (0.1246) loss_box_reg: 0.2123 (0.2036) loss_objectness: 0.0328 (0.0364) loss_rpn_box_reg: 0.0480 (0.0468) time: 0.6744 data: 0.1264 max mem: 3105 Epoch: [6] [ 90/119] eta: 0:00:19 lr: 0.000050 loss: 0.4294 (0.4107) loss_classifier: 0.1178 (0.1238) loss_box_reg: 0.2098 (0.2021) loss_objectness: 0.0418 (0.0381) loss_rpn_box_reg: 0.0495 (0.0466) time: 0.6856 data: 0.1302 max mem: 3105 Epoch: [6] [100/119] eta: 0:00:12 lr: 0.000050 loss: 0.4295 (0.4135) loss_classifier: 0.1171 (0.1235) loss_box_reg: 0.2124 (0.2034) loss_objectness: 0.0460 (0.0397) loss_rpn_box_reg: 0.0498 (0.0469) time: 0.6955 data: 0.1345 max mem: 3105 Epoch: [6] [110/119] eta: 0:00:06 lr: 0.000050 loss: 0.4126 (0.4117) loss_classifier: 0.1229 (0.1233) loss_box_reg: 0.2119 (0.2024) loss_objectness: 0.0430 (0.0394) loss_rpn_box_reg: 0.0481 (0.0466) time: 0.6822 data: 0.1306 max mem: 3105 Epoch: [6] [118/119] eta: 0:00:00 lr: 0.000050 loss: 0.4006 (0.4113) loss_classifier: 0.1171 (0.1227) loss_box_reg: 0.2028 (0.2028) loss_objectness: 0.0366 (0.0391) loss_rpn_box_reg: 0.0481 (0.0466) time: 0.6583 data: 0.1230 max mem: 3105 Epoch: [6] Total time: 0:01:20 (0.6760 s / it) creating index... index created! 
Test: [ 0/59] eta: 0:00:15 model_time: 0.1188 (0.1188) evaluator_time: 0.0697 (0.0697) time: 0.2561 data: 0.0634 max mem: 3105 Test: [58/59] eta: 0:00:00 model_time: 0.1086 (0.1092) evaluator_time: 0.0439 (0.0607) time: 0.2361 data: 0.0629 max mem: 3105 Test: Total time: 0:00:14 (0.2378 s / it) Averaged stats: model_time: 0.1086 (0.1092) evaluator_time: 0.0439 (0.0607) Accumulating evaluation results... DONE (t=0.02s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.643 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.079 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.096 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.333 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.333 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Epoch: [7] [ 0/119] eta: 0:01:16 lr: 0.000050 loss: 0.3851 (0.3851) loss_classifier: 0.1334 (0.1334) loss_box_reg: 0.1845 (0.1845) loss_objectness: 0.0287 (0.0287) loss_rpn_box_reg: 0.0385 (0.0385) time: 0.6433 data: 0.1150 max mem: 3105 Epoch: [7] [ 10/119] eta: 0:01:12 lr: 0.000050 loss: 0.3997 (0.4045) loss_classifier: 0.1250 (0.1259) loss_box_reg: 0.1973 (0.2023) loss_objectness: 0.0292 (0.0303) loss_rpn_box_reg: 0.0479 (0.0459) time: 0.6692 data: 0.1252 max mem: 3105 Epoch: [7] [ 20/119] eta: 0:01:07 lr: 0.000050 loss: 0.4224 (0.4219) loss_classifier: 0.1250 (0.1262) loss_box_reg: 0.2143 (0.2101) loss_objectness: 0.0333 (0.0373) loss_rpn_box_reg: 0.0493 (0.0484) time: 0.6809 data: 0.1286 max mem: 3105 Epoch: [7] [ 30/119] eta: 0:01:00 lr: 0.000050 loss: 0.4120 (0.4140) loss_classifier: 0.1191 (0.1221) loss_box_reg: 0.2113 (0.2070) loss_objectness: 0.0357 (0.0374) loss_rpn_box_reg: 0.0506 (0.0475) time: 0.6834 data: 0.1316 max mem: 3105 Epoch: [7] [ 40/119] eta: 0:00:53 lr: 0.000050 loss: 0.4013 (0.4117) loss_classifier: 0.1118 (0.1210) loss_box_reg: 0.2079 (0.2063) loss_objectness: 0.0357 (0.0371) loss_rpn_box_reg: 0.0471 (0.0473) time: 0.6780 data: 0.1304 max mem: 3105 Epoch: [7] [ 50/119] eta: 0:00:46 lr: 0.000050 loss: 0.3911 (0.4035) loss_classifier: 0.1172 (0.1198) loss_box_reg: 0.1912 (0.2017) loss_objectness: 0.0341 (0.0356) loss_rpn_box_reg: 0.0449 (0.0464) time: 0.6768 data: 0.1314 max mem: 3105 Epoch: [7] [ 60/119] eta: 0:00:39 lr: 0.000050 loss: 0.3911 (0.4048) loss_classifier: 0.1186 (0.1213) loss_box_reg: 0.1859 (0.2013) loss_objectness: 0.0334 (0.0360) loss_rpn_box_reg: 0.0412 (0.0462) time: 0.6729 data: 0.1306 max mem: 3105 Epoch: [7] [ 70/119] eta: 0:00:33 lr: 0.000050 loss: 0.4046 (0.4030) loss_classifier: 0.1177 (0.1209) loss_box_reg: 0.2105 (0.2008) loss_objectness: 0.0359 (0.0354) loss_rpn_box_reg: 0.0462 (0.0459) time: 0.6718 data: 0.1282 max mem: 3105 Epoch: [7] [ 80/119] eta: 0:00:26 lr: 0.000050 loss: 0.4125 (0.4067) loss_classifier: 0.1187 (0.1221) loss_box_reg: 0.2105 (0.2022) loss_objectness: 0.0362 (0.0362) loss_rpn_box_reg: 0.0469 (0.0462) time: 0.6725 data: 0.1285 max mem: 3105 Epoch: [7] [ 90/119] eta: 0:00:19 lr: 0.000050 loss: 0.4289 
(0.4068) loss_classifier: 0.1288 (0.1223) loss_box_reg: 0.2097 (0.2009) loss_objectness: 0.0434 (0.0375) loss_rpn_box_reg: 0.0479 (0.0461) time: 0.6874 data: 0.1327 max mem: 3105 Epoch: [7] [100/119] eta: 0:00:12 lr: 0.000050 loss: 0.4222 (0.4086) loss_classifier: 0.1223 (0.1221) loss_box_reg: 0.2101 (0.2021) loss_objectness: 0.0405 (0.0381) loss_rpn_box_reg: 0.0483 (0.0463) time: 0.6941 data: 0.1348 max mem: 3105 Epoch: [7] [110/119] eta: 0:00:06 lr: 0.000050 loss: 0.4082 (0.4072) loss_classifier: 0.1196 (0.1220) loss_box_reg: 0.2081 (0.2013) loss_objectness: 0.0350 (0.0379) loss_rpn_box_reg: 0.0475 (0.0461) time: 0.6792 data: 0.1301 max mem: 3105 Epoch: [7] [118/119] eta: 0:00:00 lr: 0.000050 loss: 0.4070 (0.4076) loss_classifier: 0.1196 (0.1223) loss_box_reg: 0.2063 (0.2016) loss_objectness: 0.0313 (0.0375) loss_rpn_box_reg: 0.0475 (0.0462) time: 0.6599 data: 0.1255 max mem: 3105 Epoch: [7] Total time: 0:01:20 (0.6763 s / it) creating index... index created! Test: [ 0/59] eta: 0:00:14 model_time: 0.1194 (0.1194) evaluator_time: 0.0633 (0.0633) time: 0.2511 data: 0.0642 max mem: 3105 Test: [58/59] eta: 0:00:00 model_time: 0.1098 (0.1102) evaluator_time: 0.0481 (0.0590) time: 0.2353 data: 0.0625 max mem: 3105 Test: Total time: 0:00:13 (0.2371 s / it) Averaged stats: model_time: 0.1098 (0.1102) evaluator_time: 0.0481 (0.0590) Accumulating evaluation results... DONE (t=0.02s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.649 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.079 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.210 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.011 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.095 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.334 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.334 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 I have two questions: Overfitting: I don't know if my model is overfitting or underfitting. How I can find out looking the metrics? Save the best model of all epochs: How I can save the best model trained during the differents epochs? Which is the best epoch according to the results? Thank you!
You need to keep track of the loss on the test dataset (or some other metric like recall). Pay attention to this part of the code: for epoch in range(num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) # update the learning rate lr_scheduler.step() # evaluate on the test dataset evaluate(model, data_loader_test, device=device) train_one_epoch and evaluate are defined here. The evaluate function returns an object of type CocoEvaluator, but you can modify the code so that it returns the test loss (you need to either extract metrics from the CocoEvaluator object somehow, or write your own metric evaluation). So, the answers are: Keep track of the test loss; it will tell you about overfitting. Save the model state after every epoch until the test loss begins to increase. A tutorial about saving models is here.
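A minimal sketch of such a checkpointing loop (best_loss, compute_test_loss and the file name are illustrative, not part of the tutorial):
best_loss = float('inf')
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    lr_scheduler.step()
    test_loss = compute_test_loss(model, data_loader_test, device)  # your own evaluation
    if test_loss < best_loss:
        best_loss = test_loss
        torch.save(model.state_dict(), 'best_model.pth')  # keep only the best weights so far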
https://stackoverflow.com/questions/61711103/
Trouble installing Torch on Google App Engine
I've built a machine learning api that uses Torch as the ML framework. When I upload the code to Googe App Engine it runs out of memory. After some debugging I found out that the issue it the installation of Torch. I'm using Torch 1.5.0 and python 3.7.4 So how do I fix this error? Maybe I can change something i app.yaml? Error message: Step #1 - "builder": OSError: [Errno 12] Cannot allocate memory Step #1 - "builder": self.pid = os.fork() Step #1 - "builder": File "/usr/lib/python2.7/subprocess.py", line 938, in _execute_child Step #1 - "builder": errread, errwrite) Step #1 - "builder": File "/usr/lib/python2.7/subprocess.py", line 394, in __init__ Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/python/layer_builder.py", line 346, in _python_version Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/python/layer_builder.py", line 332, in GetCacheKeyRaw Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/python/layer_builder.py", line 109, in GetCacheKeyRaw Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/common/single_layer_image.py", line 60, in GetCacheKey Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/python/layer_builder.py", line 153, in BuildLayer Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__/ftl/python/builder.py", line 114, in Build Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__.py", line 54, in main Step #1 - "builder": File "/usr/local/bin/ftl.par/__main__.py", line 65, in <module> Step #1 - "builder": exec code in run_globals Step #1 - "builder": File "/usr/lib/python2.7/runpy.py", line 72, in _run_code Step #1 - "builder": "__main__", fname, loader, pkg_name) Step #1 - "builder": File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main Step #1 - "builder": Traceback (most recent call last): And again this error message didn't appear when i didn't include torch in my requirements.txt to reproduce: app.yaml runtime: python37 resources: memory_gb: 16 disk_size_gb: 10 requirements.txt gunicorn==20.0.4 aniso8601==8.0.0 beautifulsoup4==4.9.0 boto3==1.13.3 botocore==1.16.3 bs4==0.0.1 certifi==2020.4.5.1 chardet==3.0.4 click==7.1.2 colorama==0.4.3 docutils==0.15.2 filelock==3.0.12 Flask==1.1.2 Flask-RESTful==0.3.8 googletrans==2.4.0 idna==2.9 itsdangerous==1.1.0 Jinja2==2.11.2 jmespath==0.9.5 joblib==0.14.1 MarkupSafe==1.1.1 numpy==1.18.4 protobuf==3.11.3 python-dateutil==2.8.1 pytz==2020.1 regex==2020.4.4 requests==2.23.0 s3transfer==0.3.3 sacremoses==0.0.43 sentencepiece==0.1.86 six==1.14.0 soupsieve==2.0 tokenizers==0.5.2 tqdm==4.46.0 transformers==2.8.0 urllib3==1.25.9 Werkzeug==1.0.1 main.py import flask from flask import Flask, request from flask_restful import Api, Resource app = Flask(__name__) api = Api(app) production = False import json # Import api code # Create main api 'view' class main_api(Resource): def get(self): question = request.args.get('question') # Run the script # But not necessary for the minimum working test return { 'question': question, # 'results': results_from_script, } # Adds resource api.add_resource(main_api, '/') # Starts the api if __name__ == '__main__': host = '127.0.0.1' port = 8080 app.run(host=host, port=port, debug=not production)
I fixed this error by using the flex environment. The only thing I had to change was the app.yaml: runtime: python env: flex entrypoint: gunicorn -b :$PORT main:app runtime_config: python_version: 3 manual_scaling: instances: 1 resources: cpu: 2 memory_gb: 5 disk_size_gb: 10 And then it was ready to be deployed.
https://stackoverflow.com/questions/61711802/
How to multiply a 4-dimensional PyTorch tensor by a 1-dimensional tensor?
I'm trying to write a function for mixup training. On this site I found some code and adapted it to my previous code. But in the original code only one random variable is generated per batch (64). I want a random value for every picture in the batch. Code with one variable per batch: def mixup_data(x, y, alpha=1.0): lam = np.random.beta(alpha, alpha) batch_size = x.size()[0] index = torch.randperm(batch_size) mixed_x = lam * x + (1 - lam) * x[index,:] mixed_y = lam * y + (1 - lam) * y[index,:] return mixed_x, mixed_y x and y for input come from the pytorch DataLoader. x input size: torch.Size([64, 3, 256, 256]) y input size: torch.Size([64, 3474]) This code works well. Then I changed it to this: def mixup_data(x, y): batch_size = x.size()[0] lam = torch.rand(batch_size) index = torch.randperm(batch_size) mixed_x = lam[index] * x + (1 - lam[index]) * x[index,:] mixed_y = lam[index] * y + (1 - lam[index]) * y[index,:] return mixed_x, mixed_y But it gives an error: RuntimeError: The size of tensor a (64) must match the size of tensor b (256) at non-singleton dimension 3 As I understand it, the code should take the first image in the batch and multiply it by the first value in the lam tensor (64 values long). How can I do it?
You need to replace the following line: lam = torch.rand(batch_size) by lam = torch.rand(batch_size, 1, 1, 1) With your current code, lam[index] * x multiplication is not possible because lam[index] is of size torch.Size([64]) whereas x is of size torch.Size([64, 3, 256, 256]). So, you need to make the size of lam[index] as torch.Size([64, 1, 1, 1]) so that it becomes broadcastable. To cope with the following statement: mixed_y = lam[index] * y + (1 - lam[index]) * y[index, :] We can reshape the lam tensor before the statement. lam = lam.reshape(batch_size, 1) mixed_y = lam[index] * y + (1 - lam[index]) * y[index, :]
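Putting both fixes together, a minimal sketch of the per-image version could look like this (indexing lam with index is unnecessary here, since lam is already random per row):
def mixup_data(x, y):
    batch_size = x.size(0)
    lam = torch.rand(batch_size, 1, 1, 1)     # one mixing weight per image
    index = torch.randperm(batch_size)
    mixed_x = lam * x + (1 - lam) * x[index]  # broadcasts over C, H, W
    lam = lam.reshape(batch_size, 1)          # match the 2-D label tensor
    mixed_y = lam * y + (1 - lam) * y[index]
    return mixed_x, mixed_y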
https://stackoverflow.com/questions/61713122/
dataloader does not return correct target tensor
I am writing custom dataset but the dataloader does not return correct target tensors. where the label is 010 I am getting 10. I am converting the labels into int before converting to tensor. y_label = torch.tensor(int(self.annotations.iloc[index, 1])) iterating through dataloader: Batch idx 0, data shape torch.Size([1, 1, 56, 128, 128]), target shape torch.Size([1]) tensor([1]) Batch idx 1, data shape torch.Size([1, 1, 56, 128, 128]), target shape torch.Size([1]) tensor([100]) Batch idx 2, data shape torch.Size([1, 1, 56, 128, 128]), target shape torch.Size([1]) tensor([10]) Batch idx 3, data shape torch.Size([1, 1, 56, 128, 128]), target shape torch.Size([1]) tensor([1]) csv file looks like the following: 1 p0.npy, 100 2 pl.npy, 001 3 p2.npy, 001 4 p3.npy, 001 5 p4.npy, 100 6 p5.npy, 010 7 p6.npy, 100 8 p7.npy, 100 9 p8.npy, 100 10 p9.npy, 010 11 plO.npy, 010 12 pll.npy, 010 13 p12.npy, 010 14 p13.npy, 100 code: class patientdataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): self.annotations = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.annotations) def __getitem__(self, index): img_path = os.path.join(self.root_dir, self.annotations.iloc[index,0]) # np_load_old = np.load # np.load = lambda *a, **k: np_load_old(*a, allow_pickle=True, **k) image= np.array(np.load(img_path)) # y_label = torch.tensor(np.asarray(self.annotations.iloc[index,1])) y_label = torch.tensor(int(self.annotations.iloc[index, 1])) if self.transform: imagearrays = self.transform(image) image = imagearrays[None, :, :, :] imaget = np.transpose(image, (0, 2, 1, 3)) image = imaget return (image, y_label)
It seems your labels are in binary form. Converting them into decimal and then into a tensor should do the trick for you. Note that int with an explicit base needs a string, and pandas has already parsed the column as integers, so convert to str first: y_label = torch.tensor(int(str(self.annotations.iloc[index, 1]), 2)) Doing so will convert 010 into 2, 100 into 4 and so on.
https://stackoverflow.com/questions/61713873/
Why "ValueError: optimizer got an empty parameter list" is happened in this case?
I already studied the questions on this topic and I saw only recommendations to use ModuleList instead of usual list. But I don't understand why this error occurs in the case when I use nn.Sequential? I tried to build AlexNet like in official implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py but got "ValueError: optimizer got an empty parameter list" class AlexNet(nn.Module): def __init__(self, input_channels, n_classes=1000): super(AlexNet, self).__init__() self.features = nn.Sequential ( nn.Conv2d(input_channels, 96, kernel_size=11, stride=4), nn.LocalResponseNorm(size=2, alpha=2e-5), nn.MaxPool2d(kernel_size=3, stride=2), nn.ReLU(inplace=True), nn.Conv2d(96, 256, kernel_size=5, stride=1), nn.LocalResponseNorm(size=2, alpha=2e-5), nn.MaxPool2d(kernel_size=3, stride=2), nn.ReLU(inplace=True), nn.Conv2d(256, 384, kernel_size=3, stride=1), nn.ReLU(inplace=True), nn.Conv2d(384, 384, kernel_size=3, stride=1), nn.ReLU(inplace=True), nn.Conv2d(384, 256, kernel_size=3, stride=1), nn.ReLU(inplace=True), ) self.fully_connected = nn.Sequential ( nn.Dropout2d(0.5), nn.Linear(256, 4096), nn.ReLU(inplace=True), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, n_classes) ) def forward(self, x): x = self.features(x) x = nn.Flatten(x) x = self.fully_connected(x) return x model = AlexNet(input_channels=1, n_classes=10) optimizer = optim.Adam(model.parameters() , lr=1e-3)
You have not actually created the nn.Sequential modules, but you have assigned the nn.Sequential class to self.features and self.fully_connected. The parentheses are on the next line and in Python that creates a tuple, because it doesn't follow an identifier and newlines usually terminate statements with some exceptions such as opening brackets. The opening parentheses need to be on the same line: self.features = nn.Sequential( nn.Conv2d(input_channels, 96, kernel_size=11, stride=4), nn.LocalResponseNorm(size=2, alpha=2e-5), nn.MaxPool2d(kernel_size=3, stride=2), nn.ReLU(inplace=True), nn.Conv2d(96, 256, kernel_size=5, stride=1), nn.LocalResponseNorm(size=2, alpha=2e-5), nn.MaxPool2d(kernel_size=3, stride=2), nn.ReLU(inplace=True), nn.Conv2d(256, 384, kernel_size=3, stride=1), nn.ReLU(inplace=True), nn.Conv2d(384, 384, kernel_size=3, stride=1), nn.ReLU(inplace=True), nn.Conv2d(384, 256, kernel_size=3, stride=1), nn.ReLU(inplace=True), ) self.fully_connected = nn.Sequential( nn.Dropout2d(0.5), nn.Linear(256, 4096), nn.ReLU(inplace=True), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, n_classes), )
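A quick way to see the gotcha in isolation (layer sizes are arbitrary):
import torch.nn as nn

broken = nn.Sequential
(
    nn.Linear(4, 2),
)
print(broken is nn.Sequential)        # True: the class itself was assigned, nothing was built

fixed = nn.Sequential(
    nn.Linear(4, 2),
)
print(len(list(fixed.parameters())))  # 2: the weight and bias of the Linear layer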
https://stackoverflow.com/questions/61714840/
Combining feature matrices of different shapes into a single feature
A library that I am using only supports 1 feature matrix as an input. Therefore, I would like to merge my two features into a single feature. Feature #1: a simple float e.g. tensor([1.9]) Feature #2: categorical that I would like to one-hot encode tensor([0., 1., 0]) tensor([ [1.9, 0., 0.], # row 1 for float [0., 1., 0.] # row 2 for OHE ]) My plan would be to take the 1x1 feature and the 3x1 feature and merge them into a 3x2. For the float row, I would always have the 2nd and 3rd entries zeroed out. <-- is there a better approach? e.g. should I use three 1.9's? Would this method give the effect of training on both features simultaneously?
Yes, what you propose would work, in that the model would just learn to ignore the second and third indices. But since those are never used, you can just concatenate them directly, i.e. tensor([1.9, 0., 1., 0.]) you don't need to "indicate" in any way to the model that the first value is a scalar and the rest operate as a one-hot encoding. The model will figure out the relevant features for the task you care about.
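For example, a minimal sketch of the concatenation with the values from the question:
import torch

scalar_feature = torch.tensor([1.9])
one_hot_feature = torch.tensor([0., 1., 0.])
combined = torch.cat([scalar_feature, one_hot_feature])
print(combined)  # tensor([1.9000, 0.0000, 1.0000, 0.0000])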
https://stackoverflow.com/questions/61715383/
Sequence Labelling with BERT
I am using a model consisting of an embedding layer and an LSTM to perform sequence labelling, in pytorch + torchtext. I have already tokenised the sentences. If I use self-trained or other pre-trained word embedding vectors, this is straightforward. But if I use the Huggingface transformers BertTokenizer.from_pretrained and BertModel.from_pretrained there is a '[CLS]' and '[SEP]' token added to the beginning and end of the sentence, respectively. So the output of the model becomes a sequence that is two elements longer than the label/target sequence. What I am unsure of is: Are these two tags needed for the BertModel to embed each token of a sentence "correctly"? If they are needed, can I take them out after the BERT embedding layer, before the input to the LSTM, so that the lengths are correct in the output?
Yes, BertModel needs them, since without those special symbols added the output representations would be different. However, in my experience, if you fine-tune BertModel on the labeling task without the [CLS] and [SEP] tokens added, you may not see a significant difference. If you use BertModel to extract fixed word features, then you had better add those special symbols. Yes, you can take out the embeddings of those special symbols. In fact, this is a general idea for sequence labeling or tagging tasks. I suggest taking a look at some sequence labeling or tagging examples using BERT to become confident about your modeling decisions. You can find an NER tagging example using Huggingface transformers here.
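As a minimal sketch of the second point (bert and lstm stand for your already-created BertModel and nn.LSTM instances; the indexing assumes no padding inside the batch, and the exact return values depend on your transformers version):
outputs = bert(input_ids, attention_mask=attention_mask)[0]  # (batch, seq_len, hidden)
token_embeddings = outputs[:, 1:-1, :]  # drop the [CLS] and [SEP] positions
lstm_out, _ = lstm(token_embeddings)    # now aligned with the label sequence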
https://stackoverflow.com/questions/61717097/
Save a Hash Value as a Tensor in pytorch
I have a dataset that contains identifiers that are saved as string. I want to create a neural net that gets amongst other things these identifiers as labels and then checks if two identifier are exactly the same. If they are the same then I want to increase the loss if the network predicts wrong values. As an example an identifier looks like this ec2c1cc2410a4e259aa9c12756e1d6e It's always 32 values and uses hexadecimal characters (0-9a-f). I want to work with this value in pytorch and save it as a tensor but I get the following problem decimal_identifier = int(string_id, 16) tensor_id = torch.ToTensor(decimal_identifier) RuntimeError: Overflow when unpacking long So I can't convert the value into a decimal because the values are too big. Any idea how I could fix this? I know that it's always 32 chars but I haven't found a char tensor in pytorch. How can I feed this unique identifier in my neural net?
The problem is that int(string_id, 16) converts your 32 char long hash into a single integer. This is really a very VERY large number. You can, instead, convert it to an array: tensor_id = torch.tensor([int(c, 16) for c in string_id]) Resulting with (in your example): tensor([14, 12, 2, 12, 1, 12, 12, 2, 4, 1, 0, 10, 4, 14, 2, 5, 9, 10, 10, 9, 12, 1, 2, 7, 5, 6, 14, 1, 13, 6, 14]) You can also group the hex digits to 8 at a time (for int64 tensor): torch.tensor([int(string_id[i:i+8], 16) for i in range(0, len(string_id), 8)], dtype=torch.int64)
https://stackoverflow.com/questions/61717161/
When does dataloader shuffle happen for Pytorch?
I have been using the shuffle option of the pytorch dataloader many times. But I was wondering when this shuffle happens and whether it is performed dynamically during iteration. Take the following code as an example: namesDataset = NamesDataset() namesTrainLoader = DataLoader(namesDataset, batch_size=16, shuffle=True) for batch_data in namesTrainLoader: print(batch_data) When we define "namesTrainLoader", does that mean the shuffling is finished and the following iteration will be based on a fixed order of data? Will there be any randomness in the for loop after namesTrainLoader was defined? I was trying to replace half of "batch_data" with some special value: for batch_data in namesTrainLoader: batch_data[:8] = special_val pre = model(batch_data) Let us say there will be an infinite number of epochs; will "model" eventually see all the data in "namesTrainLoader"? Or is half of the data of "namesTrainLoader" actually lost to "model"?
The shuffling happens when the iterator is created. In the case of the for loop, that happens just before the for loop starts. You can create the iterator manually with: # Iterator gets created, the data has been shuffled at this point. data_iterator = iter(namesTrainLoader) By default the data loader uses torch.utils.data.RandomSampler if you set shuffle=True (without providing your own sampler). Its implementation is very straightforward and you can see where the data is shuffled when the iterator is created by looking at the RandomSampler.__iter__ method: def __iter__(self): n = len(self.data_source) if self.replacement: return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist()) return iter(torch.randperm(n).tolist()) The return statement is the important part, where the shuffling takes place. It simply creates a random permutation of the indices. That means you will see your entire dataset every time you fully consume the iterator, just in a different order every time. Therefore no data is lost (not including cases with drop_last=True) and your model will see all the data at every epoch.
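You can see this behaviour directly (the printed values are of course random):
from torch.utils.data import DataLoader

loader = DataLoader(list(range(5)), batch_size=5, shuffle=True)
print(next(iter(loader)))  # e.g. tensor([3, 0, 4, 1, 2])
print(next(iter(loader)))  # a fresh permutation, e.g. tensor([1, 4, 2, 0, 3])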
https://stackoverflow.com/questions/61718947/
How do I make my PyTorch DCGAN code run on a GPU?
I am trying to train a DCGAN on a GPU but as I am starting out with PyTorch I have tried to do something from the documentation and it works but I want to confirm if it is the correct way to do it as I have done cause I looked at many other questions about running it on the GPU but they are done in different ways. import os import torch import torchvision.transforms as transforms from torch.autograd import Variable from torch.nn import ( BatchNorm2d, BCELoss, Conv2d, ConvTranspose2d, LeakyReLU, Module, ReLU, Sequential, Sigmoid, Tanh, ) from torch.optim import Adam from torch.utils.data import DataLoader from torchvision.datasets import CIFAR10 from torchvision.utils import save_image from tqdm import tqdm device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") dataset = DataLoader( CIFAR10( root="./Data", download=True, transform=transforms.Compose( [ transforms.Resize(64), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ] ), ), batch_size=64, shuffle=True, num_workers=2, ) try: os.mkdir("./Models") os.mkdir("./Results") except FileExistsError: pass class Gen(Module): def __init__(self): super(Gen, self).__init__() self.main = Sequential( ConvTranspose2d(100, 512, 4, 1, 0, bias=False), BatchNorm2d(512), ReLU(True), ConvTranspose2d(512, 256, 4, 2, 1, bias=False), BatchNorm2d(256), ReLU(True), ConvTranspose2d(256, 128, 4, 2, 1, bias=False), BatchNorm2d(128), ReLU(True), ConvTranspose2d(128, 64, 4, 2, 1, bias=False), BatchNorm2d(64), ReLU(True), ConvTranspose2d(64, 3, 4, 2, 1, bias=False), Tanh(), ) def forward(self, input): output = self.main(input) return output class Dis(Module): def __init__(self): super(Dis, self).__init__() self.main = Sequential( Conv2d(3, 64, 4, 2, 1, bias=False), LeakyReLU(0.2, inplace=True), Conv2d(64, 128, 4, 2, 1, bias=False), BatchNorm2d(128), LeakyReLU(0.2, inplace=True), Conv2d(128, 256, 4, 2, 1, bias=False), BatchNorm2d(256), LeakyReLU(0.2, inplace=True), Conv2d(256, 512, 4, 2, 1, bias=False), BatchNorm2d(512), LeakyReLU(0.2, inplace=True), Conv2d(512, 1, 4, 1, 0, bias=False), Sigmoid(), ) def forward(self, input): output = self.main(input) return output.view(-1) def weights(obj): classname = obj.__class__.__name__ if classname.find("Conv") != -1: obj.weight.data.normal_(0.0, 0.02) elif classname.find("BatchNorm") != -1: obj.weight.data.normal_(1.0, 0.02) obj.bias.data.fill_(0) gen = Gen() gen.apply(weights) gen.cuda().cuda().to(device) dis = Dis() dis.apply(weights) dis.cuda().to(device) criterion = BCELoss() optimizerDis = Adam(dis.parameters(), lr=0.0002, betas=(0.5, 0.999)) optimizerGen = Adam(gen.parameters(), lr=0.0002, betas=(0.5, 0.999)) for epoch in range(25): for batch, data in enumerate(tqdm(dataset, total=len(dataset)), 0): dis.zero_grad() input = Variable(data[0]).cuda().to(device) target = Variable(torch.ones(input.size()[0])).cuda().to(device) output = dis(input).cuda().to(device) realError = criterion(output, target) noise = Variable(torch.randn(input.size()[0], 100, 1, 1)).cuda().to(device) fake = gen(noise).cuda().to(device) target = Variable(torch.zeros(input.size()[0])).cuda().to(device) output = dis(fake.detach()).cuda().to(device) fakeError = criterion(output, target) errD = realError + fakeError errD.backward() optimizerDis.step() gen.zero_grad() target = Variable(torch.ones(input.size()[0])).cuda().to(device) output = dis(fake).cuda().to(device) errG = criterion(output, target) errG.backward() optimizerGen.step() print(f" {epoch+1}/25 Dis Loss: {errD.data:.4f} Gen Loss: {errG.data:.4f}") 
save_image(data[0], "./Results/Real.png", normalize=True) save_image(gen(noise).data, f"./Results/Fake{epoch+1}.png", normalize=True) torch.save(gen, f"./Models/model{epoch+1}.pth")
A few comments about your code: What is the value of your device variable? Make sure it is torch.device('cuda:0') (or whatever your GPU's device id is). Otherwise, if your device is actually torch.device('cpu'), then you run on CPU. See torch.device for more information. You removed the "model" part of your code, but you may have skipped an important part there: have you moved your model to the GPU as well? Usually a model contains many internal parameters (aka trainable weights) and you need them on device too. Your code should also have dis.to(device) criterion.to(device) # if your loss function also has trainable parameters Note that unlike torch.tensors, calling .to on an nn.Module is an "in place" operation. You have redundancy in your code: you do not have to call both .cuda() AND .to(). Calling .cuda() was the old way of moving things to the GPU, but pytorch has since introduced .to() to make coding simpler. Since both your inputs and model are on the GPU, you do not need to explicitly move the outputs to device as well. Thus, you can replace output = dis(input).cuda().to(device) with just output = dis(input). No need to use Variable explicitly. You can replace noise = Variable(torch.randn(input.size()[0], 100, 1, 1)).cuda().to(device) with noise = torch.randn(input.size(0), 100, 1, 1, device=input.device) You can also use torch.zeros_like and torch.ones_like for the target variable, applied to a tensor that already has the right shape and device, e.g. target = torch.zeros_like(output) Note that zeros_like and ones_like take care of the device (and data type) for you - they will match the tensor you pass in.
https://stackoverflow.com/questions/61720137/
pytorch with mkl-dnn backend performance on small conv with multi thread
I'm trying pytorch model with mkl-dnn backend. But i got a problem that the multi thread performance is slower than expected, runing on small conv. Please see this table. mkl-dnn performace table Runs big conv, the performance is obvious faster on 8 threads compared with single thread. But runs small conv, the speed has no big differences with 8 threads and single thread. So my question is: 1. why 8 threads is not obviously faster than 1 thread with small conv? 2. How to improve the 8 threads performance on small conv? My code here import time import torch import torch.nn as nn from torch.nn.utils import weight_norm class MyConv(nn.Module): def __init__(self, *args, **kwargs): super().__init__() self.cell = nn.Conv1d(*args, **kwargs) self.cell.weight.data.normal_(0.0, 0.02) def forward(self, x): return self.cell(x) def main(): #print(*torch.__config__.show().split("\n"), sep="\n") torch.set_num_threads(1) dim = 32 kernels = 3 seq = 100000 MyCell = MyConv(dim, dim, kernel_size=kernels, stride=1) MyCell.eval() inputs = [] iter = 1000 for i in range(iter): inputs.append(torch.rand(1, dim, seq)) start = time.time() * 1000 for i in range(iter): print(i) y = MyCell(inputs[i]) #print(y) end = time.time() * 1000 print('cost %d ms per iter\n' % ((end - start) / iter)) if __name__ == "__main__": main() Mkldnn verbose info (running with "export MKLDNN_VERBOSE=1") mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw8c,num:1,1x32x1x107718,2.18091 mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw8i8o,num:1,32x32x1x1,0.00195312 mkldnn_verbose,exec,convolution,jit_1x1:avx2,forward_training,fsrc:nChw8c fwei:OIhw8i8o fbia:x fdst:nChw8c,alg:convolution_direct,mb1_ic32oc32_ih1oh1kh1sh1dh0ph0_iw107718ow107718kw1sw1dw0pw0,3.87793 mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw8c out:f32_nchw,num:1,1x32x1x107718,4.69116 Thanks a lot!
More threads does not mean more speed. If you have 4 cores, you cannot go any faster than 4 times 1 core. What you should do is tune your code for maximum performance in single-thread execution (with compiler optimization turned off), and after you have done that, turn on the compiler's optimizer and make the code multi-threaded, with no more threads than you have cores. Source: Multithreading - How to use CPU as much as possible?
https://stackoverflow.com/questions/61722302/
How do I use one-hot encoding with cross-entropy loss?
I have tried using one-hot encoding for a multiclass classification of about 120 classes using a dog breed dataset. Also using resnet18. But the following error is shown when I run the code. Please help me solve the issue. The code of my model is shown below: model = torchvision.models.resnet18() op = torch.optim.Adam(model.parameters(),lr=0.001) crit = nn.NLLLoss() model.fc = nn.Sequential( nn.Linear(512,120), nn.Dropout(inplace=True), nn.ReLU(), nn.LogSoftmax()) for i,(x,y) in enumerate(train_dl): # prepare one-hot vector y_oh=torch.zeors(y.shape[0],120) y_oh.scatter_(1, y.unsqueeze(1), 1) # do the prediction y_hat=model(x) y_=torch.max(y_hat) loss=crit(y,y_) op.zero_grad() loss.backward() op.step() The error: RuntimeError Traceback (most recent call last) <ipython-input-190-46a21ead759a> in <module> 6 7 y_hat=model(x) ----> 8 loss=crit(y_oh,y_hat) 9 op.zero_grad() 10 loss.backward() ***RuntimeError: 1D target tensor expected, multi-target not supported***
The NLLLoss you are using expects indices of the ground-truth target classes. By the way, you do not have to convert your targets into one-hot vectors; you can use the y tensor directly. Note also that NLLLoss expects the output distribution in the logarithmic domain, i.e., use nn.LogSoftmax instead of nn.Softmax.
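A minimal sketch of the expected shapes (the sizes are illustrative):
import torch
import torch.nn as nn

log_probs = nn.LogSoftmax(dim=1)(torch.randn(4, 120))  # model output: 4 samples x 120 classes
targets = torch.tensor([3, 17, 42, 0])                 # class indices, not one-hot vectors
loss = nn.NLLLoss()(log_probs, targets)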
https://stackoverflow.com/questions/61723290/
How can I send a custom dataset through ssh?
I have to train a GAN (coded in Python using pytorch) on a remote GPU that I can only access from my PC via ssh, but I have a custom dataset (that I cannot download from anywhere) which is stored in the PC without the GPU. I've searched on Google very intensively and tried to use the scp command (which is the only solution that I've found), but it seems that the dataset is too big to be sent within an acceptable time (13GB in size). How can I transfer the dataset to the PC with the GPU within a decent amount of time, given that I cannot access the PC in any other way than an ssh connection, in order to train the network? Moreover, how can I retrieve the state_dict() and store it on my PC, once the training is complete?
It has nothing to do with the dataset itself. You can use Rsync to transfer files from your PC to the remote server using SSH and vice versa, meaning you can transfer data/folders from the remote server to your local PC as well. Rsync is a utility for efficiently transferring and synchronizing files between a computer and an external hard drive and across networked computers by comparing the modification times and sizes of files. It is also well suited for transferring large files over ssh, as it is able to resume from a previously interrupted transfer. From here: rsync is typically used for synchronizing files and directories between two different systems. For example, if the command rsync local-file user@remote-host:remote-file is run, rsync will use SSH to connect as user to remote-host. Once connected, it will invoke the remote host's rsync and then the two programs will determine what parts of the local file need to be transferred so that the remote file matches the local one. How to use: Similar to cp, rcp and scp, rsync requires the specification of a source and of a destination, of which at least one must be local. Generic syntax: rsync [OPTION] … SRC … [USER@]HOST:DEST rsync [OPTION] … [USER@]HOST:SRC [DEST] where SRC is the file or directory (or a list of multiple files and directories) to copy from, DEST is the file or directory to copy to, and square brackets indicate optional parameters. Simple example: The following command will transfer all the files in the directory dataset to the home directory on the remote server: rsync -avz dataset/ [email protected]:/home/ The -avz switches simply mean: compress and transfer the files in archive mode and show the progress on screen. Common options: -v : verbose -r : copies data recursively (but doesn't preserve timestamps and permissions while transferring data) -a : archive mode; archive mode allows copying files recursively and it also preserves symbolic links, file permissions, user & group ownerships and timestamps -z : compress file data -h : human-readable, output numbers in a human-readable format You can read more here as well.
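For the second part of your question, save the weights on the remote machine with torch.save(model.state_dict(), 'model_state.pth') and then run the transfer in the opposite direction from your local PC (the host name and paths here are illustrative): rsync -avz user@remote-host:/home/model_state.pth .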
https://stackoverflow.com/questions/61736305/
Pytorch inconsistent size with pad_packed_sequence, seq2seq
I'm having some inconsistencies with the output of a encoder I got from this github . The encoder looks as follows: class Encoder(nn.Module): r"""Applies a multi-layer LSTM to an variable length input sequence. """ def __init__(self, input_size, hidden_size, num_layers, dropout=0.0, bidirectional=True, rnn_type='lstm'): super(Encoder, self).__init__() self.input_size = 40 self.hidden_size = 512 self.num_layers = 8 self.bidirectional = True self.rnn_type = 'lstm' self.dropout = 0.0 if self.rnn_type == 'lstm': self.rnn = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, dropout=dropout, bidirectional=bidirectional) def forward(self, padded_input, input_lengths): """ Args: padded_input: N x T x D input_lengths: N Returns: output, hidden - **output**: N x T x H - **hidden**: (num_layers * num_directions) x N x H """ total_length = padded_input.size(1) # get the max sequence length packed_input = pack_padded_sequence(padded_input, input_lengths, batch_first=True,enforce_sorted=False) packed_output, hidden = self.rnn(packed_input) pdb.set_trace() output, _ = pad_packed_sequence(packed_output, batch_first=True, total_length=total_length) return output, hidden So it only consists of a rnn lstm cell, if I print the encoder this is the output: LSTM(40, 512, num_layers=8, batch_first=True, bidirectional=True) So it should have a 512 sized output right? But when I feed a tensor with size torch.Size([16, 1025, 40]) 16 samples of 1025 vectors with size 40 (that gets packed to fit the RNN) the output that I get from the RNN has a new encoded size of 1024 torch.Size([16, 1025, 1024]) when it should have been encoded to 512 right? Is there something Im missing?
Setting bidirectional=True makes the LSTM bidirectional, which means there will be two LSTMs, one that goes from left to right and the other that goes from right to left. From the nn.LSTM documentation - Outputs: output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case. Your output has the size [batch, seq_len, 2 * hidden_size] (batch and seq_len are swapped in your case due to setting batch_first=True) because of using a bidirectional LSTM. The outputs of the two are concatenated in order to have the information of both, which you could easily separate if you wanted to treat them differently.
https://stackoverflow.com/questions/61748181/
Is there a way to monitor optimizer's step in Pytorch?
Consider that you are using a Pytorch optimizer such as torch.optim.Adam(model_parameters). So in your training loop you will have something like: optimizer = torch.optim.Adam(model_parameters) # put the training loop here loss.backward() optimizer.step() optimizer.zero_grad() Is there a way to monitor what steps your optimizer is taking? To make sure that you are not in a flat area and thus taking no steps since the gradients are null. Maybe checking the learning rate would be a solution?
Answering myself here. The best practice (in PyTorch) is to check the gradients of the leaf tensors. If the gradient is None but the is_leaf attribute is set to True, something is obviously buggy. torch.nn.Parameter('insert your tensor here') is especially confusing in this regard, as a tensor needs to be defined as a torch.nn.Parameter to be successfully updated. I would advise not to use tensor.requires_grad_(True), as it confuses torch. Only set your parameters as above.
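A minimal sketch of such a check, run right after loss.backward() (model stands for your own module):
for name, param in model.named_parameters():
    if param.grad is None:
        print(name, 'has no gradient')
    else:
        print(name, 'gradient norm:', param.grad.norm().item())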
https://stackoverflow.com/questions/61752656/
Custom dataset os.path.join() returns TypeError
I am writing a custom dataset, but it returns a type error when I merge the root directory path with the pandas iloc of the image name in the csv file: img_path = os.path.join(self.root_dir, self.annotations.iloc[index,0]) error: TypeError: join() argument must be str or bytes, not 'int64' I have tried converting the annotations.iloc to string type but it's still giving me the same error. csv file with filenames and labels: custom dataset class: class patientdataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): self.annotations = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.annotations) def __getitem__(self, index): img_path = os.path.join(self.root_dir, self.annotations.iloc[index,0]) image= np.array(np.load(img_path)) y_label = torch.tensor(self.annotations.iloc[index, 1]).long() if self.transform: imagearrays = self.transform(image) image = imagearrays[None, :, :, :] imaget = np.transpose(image, (0, 2, 1, 3)) image = imaget return (image, y_label)
According to your dataset (the attached csv file), pd.read_csv(csv_file) produces a dataframe with 3 columns: the 1st for the index, the 2nd for the filename, and the 3rd for the label. The img_path = os.path.join(self.root_dir, self.annotations.iloc[index,0]) line is not working because iloc[index, 0] refers to the 1st column: it extracts the index value and not a file name, and join expects to get 2 strings, which is why you are getting the TypeError. Based on your csv file example, you should do: class patientdataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): self.annotations = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.annotations) def __getitem__(self, index): img_path = os.path.join(self.root_dir, self.annotations.iloc[index, 1]) # 1 - for file name (2nd column) image= np.array(np.load(img_path)) y_label = torch.tensor(self.annotations.iloc[index, 2]).long() # 2 - for label (3rd column) if self.transform: imagearrays = self.transform(image) image = imagearrays[None, :, :, :] imaget = np.transpose(image, (0, 2, 1, 3)) image = imaget return (image, y_label)
https://stackoverflow.com/questions/61755441/
Problem when training (first epoch takes more than 4 hours) using Colab and fastai
I have a problem when training (the first epoch takes a long time, more than 4 hours) using Colab and fastai. The first epoch alone takes me about 5 hours, which is not workable on Google Colab because of its GPU usage limits. I'm using 'efficientnet-b4' with about 52k training photos after data augmentation and 11k validation photos. The problem is shown in the photo:
Colab takes a lot of time fetching the data from Google Drive, as it is not indexed efficiently, which makes the first epoch very long. For large datasets, I would recommend using Google Cloud Storage and importing the data from there.
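For example, assuming the dataset has been uploaded to a bucket (the bucket name and paths here are illustrative), you can copy it onto the Colab VM's local disk once and train from there: !gsutil -m cp -r gs://your-bucket/dataset /content/dataset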
https://stackoverflow.com/questions/61756071/
PyInstaller executable fails to get source code of TorchScript
I'm trying to make Windows executable of my script that includes PyTorch. My script's imports are: import numpy.core.multiarray # which is a workaround for "ImportError: numpy.core.multiarray failed to import" import six # which is workaround for "ModuleNotFoundError: No module named 'six'" import torch import torch.nn as nn import warnings import argparse import json import math import numpy as np import jsonschema import os from datetime import datetime from sklearn.mixture import GaussianMixture from scipy.io import wavfile from scipy.signal import get_window from scipy.signal import spectrogram I'm using command: pyinstaller --hidden-import pkg_resources.py2_warn extractor.py PyInstaller throws no error while creating .exe, but when I run the .exe i get: Traceback (most recent call last): File "site-packages\torch\_utils_internal.py", line 46, in get_source_lines_and_file File "inspect.py", line 967, in getsourcelines File "inspect.py", line 798, in findsource OSError: could not get source code During handling of the above exception, another exception occurred: Traceback (most recent call last): File "extractor.py", line 3, in <module> import torch File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "C:\ProgramData\Anaconda3\envs\forexe2\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module exec(bytecode, module.__dict__) File "site-packages\torch\__init__.py", line 367, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "C:\ProgramData\Anaconda3\envs\forexe2\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module exec(bytecode, module.__dict__) File "site-packages\torch\distributions\__init__.py", line 112, in <module> File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "C:\ProgramData\Anaconda3\envs\forexe2\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 623, in exec_module exec(bytecode, module.__dict__) File "site-packages\torch\distributions\von_mises.py", line 55, in <module> File "site-packages\torch\jit\__init__.py", line 1287, in script File "site-packages\torch\jit\frontend.py", line 164, in get_jit_def File "site-packages\torch\_utils_internal.py", line 53, in get_source_lines_and_file OSError: Can't get source for <function _rejection_sample at 0x0000000006892F70>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available. Original error: could not get source code [5704] Failed to execute script extractor Which I don't understand. This may be similar issue to this question. What is causing the problem? I'm using conda env, with torch installed via pip (which is a workaround for the torch to be correctly hooked). Windows 10 Python 3.8.2 torch 1.5.0+cu101 torchvision 0.6.0+cu101 (also tried with 0.2.2) PyInstaller 3.6
Torch is open source, so you can search for the function _rejection_sample on the torch GitHub. This identifies the problematic file as torch.distributions.von_mises. If your program is not using the torch.distributions module, you can simply exclude it by changing the .spec file generated by pyinstaller. # -*- mode: python ; coding: utf-8 -*- block_cipher = None excluded_modules = ['torch.distributions'] # <<< ADD THIS LINE a = Analysis(['C:/your/path/here'], pathex=['C:\\your\\path\\here'], binaries=[], datas=[], hiddenimports=[], hookspath=[], runtime_hooks=[], excludes=excluded_modules, # <<< CHANGE THIS LINE win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) # remaining code omitted for brevity You should only need to make changes in the two places specified above. The rest should already be there. Then, build from the .spec file using pyinstaller your_file.spec In the future, consider using debug flags when you build your project. This also identifies the location of the file causing problems.
https://stackoverflow.com/questions/61756222/
What does *variable.shape mean in python
I know "*variable_name" assists in packing and unpacking. But how does variable_name.shape work? Unable to visualize why the second dimension is squeezed out when prefixing with ""? print("top_class.shape {}".format(top_class.shape)) top_class.shape torch.Size([64, 1]) print("*top_class.shape {}".format(*top_class.shape)) *top_class.shape 64
For numpy.array, which is extensively used in math-related and image-processing programs, .shape describes the size of the array along all existing dimensions: >>> import numpy as np >>> a = np.zeros((3,3,3)) >>> a array([[[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]], [[ 0., 0., 0.], [ 0., 0., 0.], [ 0., 0., 0.]]]) >>> a.shape (3, 3, 3) >>> The asterisk "unpacks" the tuple into several separate arguments; in your case (64, 1) becomes 64, 1, so only the first one gets printed because there's only one format specification.
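To see it explicitly (the shape value is taken from the question):
shape = (64, 1)
print("{}".format(*shape))     # 64 -- one placeholder, so the second value is silently unused
print("{} {}".format(*shape))  # 64 1 -- one placeholder per unpacked value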
https://stackoverflow.com/questions/61764512/
How to calculate roc auc score for the whole epoch like avg accuracy?
I am implementing a training loop in PyTorch and for metrics, I want to use ROC AUC score using sklearn.metrics.roc_auc_score. I can use sklearn's implementation for calculating the score for a single prediction but have a little trouble imagining how to use it to calculate the average score for the whole epoch. Can anyone push me in the right direction?
y_true and y_score in the function can be 1-D arrays, so if you collect the values from the entire epoch, you can directly call the function. Note that if you do multi-label classification, you need to compute the ROC AUC score for each class separately.
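A minimal sketch of that accumulation for a binary task (model and loader stand for your own objects, and the model is assumed to return one logit per sample):
from sklearn.metrics import roc_auc_score
import torch

all_targets, all_scores = [], []
for x, y in loader:
    with torch.no_grad():
        scores = torch.sigmoid(model(x)).view(-1)  # collapse to a 1-D score vector
    all_targets.append(y.cpu())
    all_scores.append(scores.cpu())
epoch_auc = roc_auc_score(torch.cat(all_targets).numpy(), torch.cat(all_scores).numpy())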
https://stackoverflow.com/questions/61768191/
How can I set the maximum n elements in one row of a pytorch tensor to 1 and the others to zero?
For example, if I have the tensor 0.8, 0.1, 0.9, 0.2 0.7, 0.1, 0.4, 0.6 and n = 2, I want to get 1, 0, 1, 0 1, 0, 0, 1 Or maybe it's better to do it in numpy, but the question is the same.
For performance efficiency, we can use np.argpartition - def partition_assign(a, n): idx = np.argpartition(a,-n,axis=1)[:,-n:] out = np.zeros(a.shape, dtype=int) np.put_along_axis(out,idx,1,axis=1) return out Or we can use np.argpartition(-a,n,axis=1)[:,:n] at that argpartition step. Sample runs - In [56]: a Out[56]: array([[0.8, 0.1, 0.9, 0.2], [0.7, 0.1, 0.4, 0.6]]) In [57]: partition_assign(a, n=2) Out[57]: array([[1, 0, 1, 0], [1, 0, 0, 1]]) In [58]: partition_assign(a, n=3) Out[58]: array([[1, 0, 1, 1], [1, 0, 1, 1]])
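Since the question also mentions PyTorch, an equivalent sketch with torch.topk (the function name is illustrative):
import torch

def topk_mask(a, n):
    idx = a.topk(n, dim=1).indices
    out = torch.zeros_like(a, dtype=torch.long)
    out.scatter_(1, idx, 1)  # write 1 at the top-n positions of each row
    return out

a = torch.tensor([[0.8, 0.1, 0.9, 0.2],
                  [0.7, 0.1, 0.4, 0.6]])
print(topk_mask(a, 2))
# tensor([[1, 0, 1, 0],
#         [1, 0, 0, 1]])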
https://stackoverflow.com/questions/61769537/
Add multiple tensors inplace in PyTorch
I can add two tensors x and y inplace like this x = x.add(y) Is there a way of doing the same with three or more tensors given all tensors have same dimensions?
result = torch.sum(torch.stack([x, y, ...]), dim=0) Without stack: from functools import reduce result = reduce(torch.add, [x, y, ...]) EDIT As @LudvigH pointed out, the second method is not as memory-efficient, as inplace addition. So it's better like this: from functools import reduce result = reduce( torch.Tensor.add_, [x, y, ...], torch.zeros_like(x) # optionally set initial element to avoid changing `x` )
https://stackoverflow.com/questions/61774526/
Converting two Numpy data sets into a particular PyTorch data set
I want to play around with a neural network that recognizes handwritten numbers. I found some of these on the web which use PyTorch, however they seem to download the data from the MNIST website in a particular format. My data is, however, available as follows: with np.load('prediction-challenge-01-data.npz') as fh: data_x = fh['data_x'] data_y = fh['data_y'] Where data_x is the training data and data_y are the labels of the pictures. I want these data sets to be in the same format as trainloader as shown below: trainset = datasets.MNIST('/data/mnist', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) Where trainloader already has the training set data_x and labels data_y together in one set. Is there any way to do this? Edit: Shapes of data_x and data_y: In [1]: data_x.shape Out[2]: (20000, 1, 28, 28) In [5]: data_y.shape Out[7]: (20000,)
You can easily create your own dataset. Just inherit from torch.utils.data.Dataset and implement __getitem__ at the very least: Here is a quick and dirty example to get you going: class YourOwnDataset(torch.utils.data.Dataset): def __init__(self, input_file_path, transformations): super().__init__() self.path = input_file_path self.transforms = transformations with np.load(self.path) as fh: # fh['data_x'] is an array of shape (20000, 1, 28, 28) self.data = fh['data_x'] self.labels = fh['data_y'] # in getitem, we retrieve one item based on the input index def __getitem__(self, index): data = self.data[index] # based on the loss you chose and what you have in mind, # you can transform your label; here I assume they are # integer numbers (like 1, 3, etc. as labels used for classification) label = int(self.labels[index]) # move the channel axis last, (1, 28, 28) -> (28, 28, 1), so ToTensor converts it back to CHW img = np.transpose(data, (1, 2, 0)) img = self.transforms(img) return img, label def __len__(self): return len(self.data) and you can create your dataset like: from torchvision import transforms # add any number of transformations you like, I just added ToTensor() transformations = transforms.Compose([transforms.ToTensor()]) trainset = YourOwnDataset('prediction-challenge-01-data.npz', transformations) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
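Alternatively, since your arrays already have the right shapes, a shorter route worth knowing is torch.utils.data.TensorDataset:
import torch
from torch.utils.data import TensorDataset, DataLoader

trainset = TensorDataset(torch.from_numpy(data_x).float(),
                         torch.from_numpy(data_y).long())
trainloader = DataLoader(trainset, batch_size=64, shuffle=True)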
https://stackoverflow.com/questions/61777997/
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
I set my model and data to the same device, but always raise the error like this: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same The following is my training code, I hope you can answer it.Thanks! def train(train_img_path, train_label_path, pths_path, interval, log_file): file_num = len(os.listdir(train_img_path)) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") net = EAST(extractor=extractor, geometry_mode=geometry_mode, pretrained=True) net = net.to(device) trainset = custom_dataset(train_img_path, train_label_path) train_loader = data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers, drop_last=True) optimizer = optim.SGD(net.parameters(), lr=initial_lr, momentum=momentum, weight_decay=weight_decay_sgd) criterion = Loss(weight_geo, weight_angle, geometry_mode="RBOX") net.train() epoch_loss = 0. for epoch in range(max_epoch): epoch_time = time.time() for i, (img, score_gt, geo_gt, ignored_map) in enumerate(train_loader): start_time = time.time() img, score_gt, geo_gt, ignored_map = img.to(device), score_gt.to(device),\ geo_gt.to(device), ignored_map.to(device) score_pred, geo_pred = net(img) total_loss, score_loss, loss_AABB, loss_angle = criterion(score_pred, geo_pred, score_gt, geo_gt, ignored_map) epoch_loss += total_loss.item() optimizer.zero_grad() total_loss.backward() optimizer.step()
I suspect your loss function has some internal parameters of its own, therefore you should also criterion = Loss(weight_geo, weight_angle, geometry_mode="RBOX").to(device) It would be easier to spot the error if you provide a full trace, indicating which line exactly caused the error.
https://stackoverflow.com/questions/61778066/
Training loss not changing at all (PyTorch)
I am trying to solve a text classification problem. My training data has inputs that are sequences of 80 numbers, where each number represents a word, and the target value is just a number between 1 and 3. I pass it through this model:

class Model(nn.Module):
    def __init__(self, tokenize_vocab_count):
        super().__init__()
        self.embd = nn.Embedding(tokenize_vocab_count+1, 300)
        self.embd_dropout = nn.Dropout(0.3)
        self.LSTM = nn.LSTM(input_size=300, hidden_size=100, dropout=0.3, batch_first=True)
        self.lin1 = nn.Linear(100, 1024)
        self.lin2 = nn.Linear(1024, 512)
        self.lin_dropout = nn.Dropout(0.8)
        self.lin3 = nn.Linear(512, 3)

    def forward(self, inp):
        inp = self.embd_dropout(self.embd(inp))
        inp, (h_t, h_o) = self.LSTM(inp)
        h_t = F.relu(self.lin_dropout(self.lin1(h_t)))
        h_t = F.relu(self.lin_dropout(self.lin2(h_t)))
        out = F.softmax(self.lin3(h_t))
        return out

My training loop is as follows:

model = Model(tokenizer_obj.count+1).to('cuda')
optimizer = optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

EPOCH = 10
for epoch in range(0, EPOCH):
    for feature, target in tqdm(author_dataloader):
        train_loss = loss_fn(model(feature.to('cuda')).view(-1, 3), target.to('cuda'))
        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()
    print(f"epoch: {epoch + 1}\tTrain Loss : {train_loss}")

I printed out the feature and target dimensions and they are as follows:

torch.Size([64, 80]) torch.Size([64])

Here 64 is the batch_size. I am not doing any validation as of now. When I train, I get a constant loss value and no change:

/home/koushik/Software/miniconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/rnn.py:50: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.3 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
  0%|          | 0/306 [00:00<?, ?it/s]/media/koushik/Backup Plus/Code/Machine Deep Learning/NLP/src/Deep Learning/model.py:20: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  out = F.softmax(self.lin3(h_t))
100%|██████████| 306/306 [00:03<00:00, 89.36it/s]
epoch: 1	Train Loss : 1.0986120700836182
100%|██████████| 306/306 [00:03<00:00, 89.97it/s]
epoch: 2	Train Loss : 1.0986120700836182
...
100%|██████████| 306/306 [00:03<00:00, 84.49it/s]
epoch: 9	Train Loss : 1.0986120700836182
100%|██████████| 306/306 [00:03<00:00, 84.21it/s]
epoch: 10	Train Loss : 1.0986120700836182

Can anyone please help?
You're using nn.CrossEntropyLoss, which applies log-softmax internally, but you also apply softmax in the model:

out = F.softmax(self.lin3(h_t))

The output of your model should be the raw logits, without the F.softmax.
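A minimal sketch of the corrected forward pass, reusing the layer names from the question (only the final line changes):

def forward(self, inp):
    inp = self.embd_dropout(self.embd(inp))
    inp, (h_t, h_o) = self.LSTM(inp)
    h_t = F.relu(self.lin_dropout(self.lin1(h_t)))
    h_t = F.relu(self.lin_dropout(self.lin2(h_t)))
    return self.lin3(h_t)  # raw logits; nn.CrossEntropyLoss applies log-softmax itself

If you need actual probabilities at inference time, apply F.softmax(logits, dim=-1) outside of the loss computation.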
https://stackoverflow.com/questions/61781193/
How to implement tf.gather_nd in Pytorch with the argument batch_dims?
I have been doing a project on image matching, so I need to find correspondences between 2 images. To get descriptors, I will need an interpolate function. However, when I read about an equivalent function implemented in TensorFlow, I still don't get how to implement tf.gather_nd(params, indices, batch_dims) in PyTorch, especially when there is the argument batch_dims. I have gone through Stack Overflow and there is no perfect equivalence yet.

The referred interpolate function in TensorFlow is below, and I have been trying to implement it in PyTorch. The arguments' information is as follows:

inputs is a dense feature map[i] from a for loop over the batch, which means it is 3D [H, W, C] (in PyTorch it is [C, H, W])

pos is a set of random point coordinates shaped like [[i, j], [i, j], ..., [i, j]], so it is 2D when it goes into the interpolate function (in PyTorch it is [[i, i, ..., i], [j, j, ..., j]]), and the function then expands both of their dimensions when they get into it

I just want a perfect implementation of tf.gather_nd with the argument batch_dims. Thank you!

And here's a simple example of using it:

pos = tf.ones((12, 2))  ## stands for a set of coordinates [[i, i, ..., i], [j, j, ..., j]]
inputs = tf.ones((4, 4, 128))  ## stands for [H, W, C] of dense feature map
outputs = interpolate(pos, inputs, batched=False)
print(outputs.get_shape())  # We get (12, 128) here

interpolate function (TF version):

def interpolate(pos, inputs, nd=True):
    pos = tf.expand_dims(pos, 0)
    inputs = tf.expand_dims(inputs, 0)

    h = tf.shape(inputs)[1]
    w = tf.shape(inputs)[2]

    i = pos[:, :, 0]
    j = pos[:, :, 1]

    i_top_left = tf.clip_by_value(tf.cast(tf.math.floor(i), tf.int32), 0, h - 1)
    j_top_left = tf.clip_by_value(tf.cast(tf.math.floor(j), tf.int32), 0, w - 1)

    i_top_right = tf.clip_by_value(tf.cast(tf.math.floor(i), tf.int32), 0, h - 1)
    j_top_right = tf.clip_by_value(tf.cast(tf.math.ceil(j), tf.int32), 0, w - 1)

    i_bottom_left = tf.clip_by_value(tf.cast(tf.math.ceil(i), tf.int32), 0, h - 1)
    j_bottom_left = tf.clip_by_value(tf.cast(tf.math.floor(j), tf.int32), 0, w - 1)

    i_bottom_right = tf.clip_by_value(tf.cast(tf.math.ceil(i), tf.int32), 0, h - 1)
    j_bottom_right = tf.clip_by_value(tf.cast(tf.math.ceil(j), tf.int32), 0, w - 1)

    dist_i_top_left = i - tf.cast(i_top_left, tf.float32)
    dist_j_top_left = j - tf.cast(j_top_left, tf.float32)
    w_top_left = (1 - dist_i_top_left) * (1 - dist_j_top_left)
    w_top_right = (1 - dist_i_top_left) * dist_j_top_left
    w_bottom_left = dist_i_top_left * (1 - dist_j_top_left)
    w_bottom_right = dist_i_top_left * dist_j_top_left

    if nd:
        w_top_left = w_top_left[..., None]
        w_top_right = w_top_right[..., None]
        w_bottom_left = w_bottom_left[..., None]
        w_bottom_right = w_bottom_right[..., None]

    interpolated_val = (
        w_top_left * tf.gather_nd(inputs, tf.stack([i_top_left, j_top_left], axis=-1), batch_dims=1) +
        w_top_right * tf.gather_nd(inputs, tf.stack([i_top_right, j_top_right], axis=-1), batch_dims=1) +
        w_bottom_left * tf.gather_nd(inputs, tf.stack([i_bottom_left, j_bottom_left], axis=-1), batch_dims=1) +
        w_bottom_right * tf.gather_nd(inputs, tf.stack([i_bottom_right, j_bottom_right], axis=-1), batch_dims=1)
    )
    interpolated_val = tf.squeeze(interpolated_val, axis=0)
    return interpolated_val
As far as I'm aware there is no direct equivalent of tf.gather_nd in PyTorch, and implementing a generic version with batch_dims is not that simple. However, you likely don't need a generic version, and given the context of your interpolate function, a version for [C, H, W] would suffice.

At the beginning of interpolate you add a singular dimension to the front, which is the batch dimension. Setting batch_dims=1 in tf.gather_nd means there is one batch dimension at the beginning, therefore it applies it per batch, i.e. it indexes inputs[0] with pos[0] etc. There is no benefit to adding a singular batch dimension, because you could have just used the direct computation.

# Adding singular batch dimension
# Shape: [1, num_pos, 2]
pos = tf.expand_dims(pos, 0)
# Shape: [1, H, W, C]
inputs = tf.expand_dims(inputs, 0)

batched_result = tf.gather_nd(inputs, pos, batch_dims=1)
single_result = tf.gather_nd(inputs[0], pos[0])

# The first element in the batched result is the same as the single result,
# hence there is no benefit to adding a singular batch dimension.
tf.reduce_all(batched_result[0] == single_result) # => True

Single version

In PyTorch the implementation for [H, W, C] can be done with Python's indexing. While PyTorch usually uses [C, H, W] for images, it's only a matter of which dimension to index, but let's keep them the same as in TensorFlow for the sake of comparison.

If you were to index them manually, you would do it as such: inputs[pos_h[0], pos_w[0]], inputs[pos_h[1], pos_w[1]] and so on. PyTorch allows you to do that automatically by providing the indices as lists: inputs[pos_h, pos_w], where pos_h and pos_w have the same length. All you need to do is split your pos into two separate tensors, one for the indices along the height dimension and the other along the width dimension, which you also did in the TensorFlow version.

inputs = torch.randn(4, 4, 128)

# Random positions 0-3, shape: [12, 2]
pos = torch.randint(4, (12, 2))

# Positions split by dimension
pos_h = pos[:, 0]
pos_w = pos[:, 1]

# Index the inputs with the indices per dimension
gathered = inputs[pos_h, pos_w]

# Verify that it's identical to TensorFlow's output
inputs_tf = tf.convert_to_tensor(inputs.numpy())
pos_tf = tf.convert_to_tensor(pos.numpy())
gathered_tf = tf.gather_nd(inputs_tf, pos_tf)
gathered_tf = torch.from_numpy(gathered_tf.numpy())

torch.equal(gathered_tf, gathered) # => True

If you want to apply it to a tensor of size [C, H, W] instead, you only need to change the dimensions you want to index:

# For [H, W, C]
gathered = inputs[pos_h, pos_w]

# For [C, H, W]
gathered = inputs[:, pos_h, pos_w]

Batched version

Making it a batched version (for [N, H, W, C] or [N, C, H, W]) is not that difficult, and using that is more appropriate, since you're dealing with batches anyway. The only tricky part is that each element in the batch should only be applied to the corresponding batch. For this the batch dimension needs to be enumerated, which can be done with torch.arange. The batch enumeration is just the list with the batch indices, which will be combined with the pos_h and pos_w indices, resulting in inputs[0, pos_h[0, 0], pos_w[0, 0]], inputs[0, pos_h[0, 1], pos_w[0, 1]] ... inputs[1, pos_h[1, 0], pos_w[1, 0]] etc.
batch_size = 3
inputs = torch.randn(batch_size, 4, 4, 128)

# Random positions 0-3, different for each batch, shape: [3, 12, 2]
pos = torch.randint(4, (batch_size, 12, 2))

# Positions split by dimension
pos_h = pos[:, :, 0]
pos_w = pos[:, :, 1]

batch_enumeration = torch.arange(batch_size) # => [0, 1, 2]

# pos_h and pos_w have shape [3, 12], so the batch enumeration needs to be
# repeated 12 times per batch.
# Unsqueeze to get shape [3, 1], now the 1 could be repeated to 12, but
# broadcasting will do that automatically.
batch_enumeration = batch_enumeration.unsqueeze(1)

# Index the inputs with the indices per dimension
gathered = inputs[batch_enumeration, pos_h, pos_w]

# Again, verify that it's identical to TensorFlow's output
inputs_tf = tf.convert_to_tensor(inputs.numpy())
pos_tf = tf.convert_to_tensor(pos.numpy())
# This time with batch_dims=1
gathered_tf = tf.gather_nd(inputs_tf, pos_tf, batch_dims=1)
gathered_tf = torch.from_numpy(gathered_tf.numpy())

torch.equal(gathered_tf, gathered) # => True

Again, for [N, C, H, W], only the dimensions that are indexed need to be changed:

# For [N, H, W, C]
gathered = inputs[batch_enumeration, pos_h, pos_w]

# For [N, C, H, W]
gathered = inputs[batch_enumeration, :, pos_h, pos_w]

Just a little side note on the interpolate implementation: rounding the positions (floor and ceil respectively) doesn't make sense, because indices must be integers, so it has no effect, as long as your positions are actual indices. That also results in i_top_left and i_bottom_left being the same value, but even if they are to be rounded differently, they are always 1 position apart. Furthermore, i_top_left and i_top_right are literally the same. I don't think that this function produces a meaningful output. I don't know what you're trying to achieve, but if you're looking for image interpolation you could have a look at torch.nn.functional.interpolate.
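If it helps to have this packaged up, here is a minimal sketch of a gather_nd-style helper for the batch_dims=1, [N, H, W, C] case, consolidating the batched indexing above (the function name and signature are my own, not a PyTorch API):

import torch

def gather_nd_batched(inputs, pos):
    # Equivalent of tf.gather_nd(inputs, pos, batch_dims=1) for 2D indices.
    #   inputs: [N, H, W, C] tensor
    #   pos:    [N, num_pos, 2] integer tensor of (h, w) indices
    #   returns [N, num_pos, C]
    batch_enumeration = torch.arange(inputs.size(0), device=inputs.device).unsqueeze(1)
    return inputs[batch_enumeration, pos[..., 0], pos[..., 1]]

# Usage, matching the batched example above
inputs = torch.randn(3, 4, 4, 128)
pos = torch.randint(4, (3, 12, 2))
print(gather_nd_batched(inputs, pos).shape)  # torch.Size([3, 12, 128])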
https://stackoverflow.com/questions/61783826/
How to convert a matrix of torch.tensor to a larger tensor?
I have a problem converting a Python matrix of torch.tensor elements to a single torch.tensor.

For example, M is an (n, m) matrix, where each element M[i][j] is a torch.tensor with the same size (p, q, r, ...). How can I convert the Python list of lists M to a torch.tensor with size (n, m, p, q, r, ...)? e.g.

M = []
for i in range(5):
    row = []
    for j in range(10):
        row.append(torch.rand(3, 4))
    M.append(row)

How can I convert the above M to a torch.tensor with size (5, 10, 3, 4)?
Try torch.stack(), which stacks a list of tensors along a new first dimension; applied once per row and once for the whole matrix, it gives you the shape you want:

import torch

M = []
for i in range(5):
    row = []
    for j in range(10):
        row.append(torch.rand(3, 4))
    row = torch.stack(row)
    M.append(row)
M = torch.stack(M)

print(M.size())
# torch.Size([5, 10, 3, 4])
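Equivalently, if M is already built as a plain list of lists, a one-liner sketch (assuming all inner tensors share the same shape, which torch.stack requires):

M_tensor = torch.stack([torch.stack(row) for row in M])
print(M_tensor.size())  # torch.Size([5, 10, 3, 4])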
https://stackoverflow.com/questions/61786172/
Retrieving documents based on matrix multiplication
I have a model that represents a collection of documents in a multidimensional vector space. For example, for 100K documents, my model represents them in the form of 300-dimensional vectors, so finally I get a matrix of size [100K, 300].

For retrieving those documents according to relevance to a given query, I do matrix multiplication. For example, I represent a given query as a [300, 1] vector. Then I get the cosine similarity scores using matrix multiplication as follows: [100K, 300] * [300, 1] = [100K, 1].

Now how can I retrieve the top 1000 documents from this collection with the highest cosine similarity? The trivial way would be to sort based on cosine similarity and grab the first 1000 docs. Is there any way to retrieve the documents this way using some function in PyTorch? I mean, how can I get the indices of the highest 1000 values from a 1D torch tensor?
Once you have the similarity scores from the dot product, you can get the top 1000 indices as follows. Note that torch.argsort sorts in ascending order by default, so you need descending=True to put the highest scores first:

top_indices = torch.argsort(sims, descending=True)[:1000]
similar_docs = sims[top_indices]
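Alternatively, torch.topk does the selection in one call and returns both the values and the indices (assuming sims is the 1D tensor of scores from the question):

# Top 1000 scores and their document indices, highest first
top_scores, top_indices = torch.topk(sims, k=1000)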
https://stackoverflow.com/questions/61789727/
Pytorch Siamese Network not converging
Good morning everyone

Below is my implementation of a PyTorch Siamese network. I am using a batch size of 32, MSE loss, and SGD with 0.9 momentum as the optimizer.

class SiameseCNN(nn.Module):
    def __init__(self):
        super(SiameseCNN, self).__init__()                    # 1, 40, 50
        self.convnet = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.ReLU(),                    # 8, 34, 44
            nn.Conv2d(8, 16, 5), nn.ReLU(),                   # 16, 30, 40
            nn.MaxPool2d(2, 2),                               # 16, 15, 20
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),       # 32, 15, 20
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())       # 64, 15, 20
        self.linear1 = nn.Sequential(nn.Linear(64 * 15 * 20, 100), nn.ReLU())
        self.linear2 = nn.Sequential(nn.Linear(100, 2), nn.ReLU())

    def forward(self, data):
        res = []
        for j in range(2):
            x = self.convnet(data[:, j, :, :])
            x = x.view(-1, 64 * 15 * 20)
            res.append(self.linear1(x))
        fres = abs(res[1] - res[0])
        return self.linear2(fres)

Each batch contains alternating pairs, i.e. [pos, pos], [pos, neg], [pos, pos], etc. However, the network doesn't converge, and the problem seems to be that fres in the network is the same for each pair (regardless of whether it is a positive or negative pair), and the output of self.linear2(fres) is always approximately equal to [0.0531, 0.0770]. This is in contrast with what I am expecting, which is that the first value of [0.0531, 0.0770] would get closer to 1 for a positive pair as the network learns, and the second value would get closer to 1 for a negative pair. These two values also need to sum to 1.

I have tested exactly the same setup and the same input images with a 2-channel network architecture, where, instead of feeding in [pos, pos], you stack those 2 images in a depth-wise fashion, for example numpy.stack([pos, pos], -1). The dimension of nn.Conv2d(1, 8, 7) also changes to nn.Conv2d(2, 8, 7) in this setup. This works perfectly fine.

I have also tested exactly the same setup and input images with a traditional CNN approach, where I just pass single positive and negative greyscale images into the network, instead of stacking them (as with the 2-CH approach) or passing them in as image pairs (as with the Siamese approach). This also works perfectly, but the results are not as good as with the 2-channel approach.

EDIT (Solutions I've tried):

I have tried a number of different loss functions, including HingeEmbeddingLoss and CrossEntropyLoss, all resulting in more or less the same problem. So I think it is safe to say that the problem is not caused by the employed loss function, MSELoss. Different batch sizes also seem to have no effect on the issue.

I tried increasing the number of trainable parameters as suggested in Keras Model for Siamese Network not Learning and always predicting the same ouput. Also doesn't work.

Tried to change the network architecture as implemented here: https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb. In other words, changed the forward pass to the following code. Also changed the loss to CrossEntropy, and the optimizer to Adam. Still no luck:

def forward(self, data):
    res = []
    for j in range(2):
        x = self.convnet(data[:, j, :, :])
        x = x.view(-1, 64 * 15 * 20)
        res.append(x)
    fres = self.linear2(self.linear1(abs(res[1] - res[0])))
    return fres

I also tried to change the whole network from a CNN to a linear network, as implemented here: https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb. Still doesn't work.
Tried to use a lot more data as suggested here: Keras Model for Siamese Network not Learning and always predicting the same ouput. No luck...

Tried to use torch.nn.PairwiseDistance between the outputs of convnet. Made some sort of improvement; the network starts to converge for the first few epochs, and then hits the same plateau every time:

def forward(self, data):
    res = []
    for j in range(2):
        x = self.convnet(data[:, j, :, :])
        res.append(x)
    pdist = nn.PairwiseDistance(p=2)
    diff = pdist(res[1], res[0])
    diff = diff.view(-1, 64 * 15 * 10)
    fres = self.linear2(self.linear1(diff))
    return fres

Another thing to note, perhaps, is that within the context of my research a Siamese network is trained for each object. So the first class is associated with the images containing the object in question, and the second class is associated with images containing other objects. I don't know if this might be the cause of the problem. It is, however, not a problem within the context of the traditional CNN and 2-channel CNN approaches.

As per request, here is my training code:

model = SiameseCNN().cuda()
ls_fn = torch.nn.BCELoss()
optim = torch.optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
epochs = np.arange(100)
eloss = []
for epoch in epochs:
    model.train()
    train_loss = []
    for x_batch, y_batch in dp.train_set:
        x_var, y_var = Variable(x_batch.cuda()), Variable(y_batch.cuda())
        y_pred = model(x_var)
        loss = ls_fn(y_pred, y_var)
        train_loss.append(abs(loss.item()))
        optim.zero_grad()
        loss.backward()
        optim.step()
    eloss.append(np.mean(train_loss))
    print(epoch, np.mean(train_loss))

Note that dp in dp.train_set is a class with attributes train_set, valid_set, test_set, where each set is created as follows:

DataLoader(TensorDataset(torch.Tensor(x), torch.Tensor(y)), batch_size=bs)

As per request, here is an example of the predicted probabilities vs the true labels, where you can see the model doesn't seem to be learning:

Predicted: 0.5030623078346252 Label: 1.0
Predicted: 0.5030624270439148 Label: 0.0
Predicted: 0.5030624270439148 Label: 1.0
Predicted: 0.5030625462532043 Label: 0.0
Predicted: 0.5030625462532043 Label: 1.0
Predicted: 0.5030626654624939 Label: 0.0
Predicted: 0.5030626058578491 Label: 1.0
Predicted: 0.5030627250671387 Label: 0.0
Predicted: 0.5030626654624939 Label: 1.0
Predicted: 0.5030627846717834 Label: 0.0
Predicted: 0.5030627250671387 Label: 1.0
Predicted: 0.5030627846717834 Label: 0.0
Predicted: 0.5030627250671387 Label: 1.0
Predicted: 0.5030628442764282 Label: 0.0
Predicted: 0.5030627846717834 Label: 1.0
Predicted: 0.5030628442764282 Label: 0.0
Problem solved. Turns out the network will predict the same output every time if you give it the same images every time. It was a small indexing mistake on my part during data partitioning. Thanks for everyone's help and assistance. Here is an example of the convergence as it is now:

0 0.20198837077617646
1 0.17636818194389342
2 0.15786472541093827
3 0.1412761415243149
4 0.126698794901371
5 0.11397973036766053
6 0.10332610329985618
7 0.09474560652673245
8 0.08779258838295936
9 0.08199785630404949
10 0.07704121413826942
11 0.07276330365240574
12 0.06907484836131335
13 0.06584368328005076
14 0.06295975042134523
15 0.06039590438082814
16 0.058096024941653016
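For anyone hitting a similar plateau, a quick sanity check on the partitioned data can catch one form of this mistake. This is only a sketch: it assumes the [batch, 2, H, W] pair layout used in the question, and that the bug manifests as both images in a pair being identical.

# Hypothetical smoke test: verify that pairs do not accidentally contain
# two copies of the same image after partitioning/indexing.
for x_batch, y_batch in dp.train_set:
    identical = (x_batch[:, 0] == x_batch[:, 1]).flatten(1).all(dim=1)
    assert not identical.all(), "Every pair contains two copies of the same image"
    break  # one batch is enough for a smoke test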
https://stackoverflow.com/questions/61793268/