PyTorch - How to manually derive ResNet?
I managed to find this implementation of ResNet-18: def __init__(self): super(ResNet18,self).__init__() self.block1 = nn.Sequential( nn.Conv2d(1,64,kernel_size=2,stride=2,padding=3,bias=False), nn.BatchNorm2d(64), nn.ReLU(True) ) self.block2 = nn.Sequential( nn.MaxPool2d(1,1), ResidualBlock(64,64), ResidualBlock(64,64,2) ) self.block3 = nn.Sequential( ResidualBlock(64,128), ResidualBlock(128,128,2) ) self.block4 = nn.Sequential( ResidualBlock(128,256), ResidualBlock(256,256,2) ) self.block5 = nn.Sequential( ResidualBlock(256,512), ResidualBlock(512,512,2) ) self.avgpool = nn.AvgPool2d(2) # vowel_diacritic self.fc1 = nn.Linear(512,11) # grapheme_root self.fc2 = nn.Linear(512,168) # consonant_diacritic self.fc3 = nn.Linear(512,7) def forward(self,x): x = self.block1(x) x = self.block2(x) x = self.block3(x) x = self.block4(x) x = self.block5(x) x = self.avgpool(x) x = x.view(x.size(0),-1) x1 = self.fc1(x) x2 = self.fc2(x) x3 = self.fc3(x) return x1,x2,x3 class ResidualBlock(nn.Module): def __init__(self,in_channels,out_channels,stride=1,kernel_size=3,padding=1,bias=False): super(ResidualBlock,self).__init__() self.cnn1 = nn.Sequential(nn.Conv2d(in_channels,out_channels,kernel_size,stride,padding,bias=False), nn.BatchNorm2d(out_channels), nn.ReLU(True) ) self.cnn2 = nn.Sequential( nn.Conv2d(out_channels,out_channels,kernel_size,1,padding,bias=False), nn.BatchNorm2d(out_channels) ) if stride != 1 or in_channels != out_channels: self.shortcut = nn.Sequential( nn.Conv2d(in_channels,out_channels,kernel_size=1,stride=stride,bias=False), nn.BatchNorm2d(out_channels) ) else: self.shortcut = nn.Sequential() def forward(self,x): residual = x x = self.cnn1(x) x = self.cnn2(x) x += self.shortcut(residual) x = nn.ReLU(True)(x) return x I am still new to PyTorch and I am trying to obtain the architectures for ResNets 34, 50 and 101 in a similar format as above for ResNet18. Initially I thought that the only changes that I have to make are the last parameter of each block according to the source GitHub code at https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py, where for ResNet34 we have return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress, **kwargs). Am I doing this wrong? I am still reading up on the documentation and trying to make sense of the methodologies, so some guidance will be deeply appreciated. :')
Coming from a TensorFlow background with a flavour of FastAi: you can use all of those ResNet variants if you import them directly from the torchvision module:

# Download the pretrained model
import torch
import torchvision.models as models
model = models.resnet18(pretrained=True)  # change here to whatever model you want

# Switch device to GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
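For reference, the deeper variants can be pulled from torchvision the same way; the layer counts in the comments are the ones torchvision uses. ResNet-34 keeps the two-convolution BasicBlock, while ResNet-50 and ResNet-101 switch to a three-convolution Bottleneck block, so the hand-written ResidualBlock from the question cannot be reused unchanged for those:

import torchvision.models as models

resnet34 = models.resnet34(pretrained=True)    # BasicBlock, layers [3, 4, 6, 3]
resnet50 = models.resnet50(pretrained=True)    # Bottleneck, layers [3, 4, 6, 3]
resnet101 = models.resnet101(pretrained=True)  # Bottleneck, layers [3, 4, 23, 3]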
https://stackoverflow.com/questions/59452502/
NameError: "linear regression" is not defined
Here is a code snippet where I am applying Linear regression using Pytorch. I face a NameError, that says name "linear regression" not defined. Kindly help in rectifying it. import torch from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F x_data=Variable(torch.Tensor([[10.0],[9.0],[3.0],[2.0]])) y_data=Variable(torch.Tensor([[90.0],[80.0],[50.0],[30.0]])) class LinearRegression(torch.nn.Module): def __init__(self): super(LinearRegression,self). __init__ () self.linear = torch.nn.Linear(1,1) def forward(self, x): y_pred = self.linear(x) return y_pred model = LinearRegression()
model = LinearRegression() should be outside the class definition, at module level:

import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

x_data = Variable(torch.Tensor([[10.0], [9.0], [3.0], [2.0]]))
y_data = Variable(torch.Tensor([[90.0], [80.0], [50.0], [30.0]]))

class LinearRegression(torch.nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

model = LinearRegression()
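With the model defined at module level, a minimal training-loop sketch for this example could look like the following (the loss, optimizer, learning rate and epoch count are illustrative choices, not part of the original question):

criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(500):
    y_pred = model(x_data)              # forward pass
    loss = criterion(y_pred, y_data)    # compute the loss
    optimizer.zero_grad()               # clear old gradients
    loss.backward()                     # backpropagate
    optimizer.step()                    # update the linear layer

print(model(Variable(torch.Tensor([[4.0]]))))  # prediction for a new input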
https://stackoverflow.com/questions/59452567/
How to disable progress bar in Pytorch Lightning
I have a lot of issues withe the tqdm progress bar in Pytorch Lightning: when I run trainings in a terminal, the progress bars overwrite themselves. At the end of an training epoch, a validation progress bar is printed under the training bar, but when that ends, the progress bar from the next training epoch is printed over the one from the previous epoch. hence it's not possible to see the losses from previous epochs. INFO:root: Name Type Params 0 l1 Linear 7 K Epoch 2: 56%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2093/3750 [00:05<00:03, 525.47batch/s, batch_nb=1874, loss=0.714, training_loss=0.4, v_nb=51] the progress bars wobbles from left to right, caused by the changing in number of digits behind the decimal point of some losses. when running in Pycharm, the validation progress bar is not printed, but in stead, INFO:root: Name Type Params 0 l1 Linear 7 K Epoch 1: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1875/3750 [00:05<00:05, 322.34batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1879/3750 [00:05<00:05, 319.41batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 1942/3750 [00:05<00:04, 374.05batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2005/3750 [00:05<00:04, 425.01batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 55%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2068/3750 [00:05<00:03, 470.56batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 57%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2131/3750 [00:05<00:03, 507.69batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 59%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2194/3750 [00:06<00:02, 538.19batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2257/3750 [00:06<00:02, 561.20batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 62%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 2320/3750 [00:06<00:02, 579.22batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 64%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 2383/3750 [00:06<00:02, 591.58batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 65%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 2445/3750 [00:06<00:02, 599.77batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 2507/3750 [00:06<00:02, 605.00batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 69%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 2569/3750 [00:06<00:01, 607.04batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] Epoch 1: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2633/3750 [00:06<00:01, 613.98batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49] I would like to know if these issues can be solved or else how can I disable the progress bar and instead, just print some log details on the screen.
Pass the argument show_progress_bar=False to the Trainer.
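For reference, a minimal sketch; the argument name depends on the Lightning version (show_progress_bar is the name used at the time of this question, later releases renamed it to progress_bar_refresh_rate=0 and then enable_progress_bar=False):

from pytorch_lightning import Trainer

trainer = Trainer(show_progress_bar=False)  # disables the tqdm bars entirely
trainer.fit(model)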
https://stackoverflow.com/questions/59455268/
Python/Pytorch - how to use Image arrays?
I want to put image data in a neural network, but I am having trouble using the Image datatype. I read my data here using Pytorch; import torch import torchvision import numpy as np from settings import Settings class Data_Read: @staticmethod def getTrain(): train_dataset = torchvision.datasets.ImageFolder( root=Settings.pathTrainImagesCopy, transform=torchvision.transforms.ToTensor() ) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=64, num_workers=0, shuffle=True ) return train_loader @staticmethod def getTest(): test_dataset = torchvision.datasets.ImageFolder( root=Settings.pathTestImagesCopy, transform=torchvision.transforms.ToTensor() ) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=64, num_workers=0, shuffle=True ) return test_loader The following code creates a single dimension column of images; class Imagez: @staticmethod def Get(arr): imageData = [] for item in arr: filePath = item img = Image.open(filePath).convert('LA') imageData.append(img) return imageData And these methods are called from the main class as follows; trainData = Data_Read.getTrain() testData = Data_Read.getTest() arrTrain = np.array(trainData.dataset.imgs)[:,0] labelTrain = trainData.dataset.targets arrTest = np.array(testData.dataset.imgs)[:,0] labelTest = testData.dataset.targets X_Train = Imagez.Get(arrTest) I find whenever I try to use the Images datatype I in X_Train I get into trouble with error messages. For example; mlp = MLPClassifier(hidden_layer_sizes=(10,10,10), max_iter=1000 ) mlp.fit(X_Train, labelTrain) Will give me this error message; Traceback (most recent call last): File "c:\Users\hijik.vscode\extensions\ms-python.python-2019.11.50794\pythonFiles\ptvsd_launcher.py", line 43, in main(ptvsdArgs) File "c:\Users\hijik.vscode\extensions\ms-python.python-2019.11.50794\pythonFiles\lib\python\old_ptvsd\ptvsd__main__.py", line 432, in main run() File "c:\Users\hijik.vscode\extensions\ms-python.python-2019.11.50794\pythonFiles\lib\python\old_ptvsd\ptvsd__main__.py", line 316, in run_file runpy.run_path(target, run_name='main') File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "d:\702\702-Coursework-Task-5\src\Main.py", line 74, in mlp.fit(X_Train, Y_TrainLabels) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 981, in fit return self._fit(X, y, incremental=(self.warm_start and File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 323, in _fit X, y = self._validate_input(X, y, incremental) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py", line 919, in _validate_input multi_output=True) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 719, in check_X_y estimator=estimator) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\utils\validation.py", line 496, in check_array array = np.asarray(array, dtype=dtype, order=order) File "C:\Users\hijik\AppData\Local\Continuum\anaconda3\lib\site-packages\numpy\core\numeric.py", 
line 538, in asarray return array(a, dtype, copy=False, order=order) TypeError: int() argument must be a string, a bytes-like object or a number, not 'Image' I am thinking I need to convert my images to another data type. What advice would you give? EDIT - here is a minimal reproducible example of the error:

X_Train = []
filePath = '..\\images\\Train\\anger\\S010_004_00000014.png'
img = Image.open(filePath).convert('LA')
X_Train.append(img)
Y_TrainLabels = ["0"]
mlp = MLPClassifier(hidden_layer_sizes=(10,10,10), max_iter=1000)
mlp.fit(X_Train, Y_TrainLabels)
First of all, your Imagez class returns a list of PIL images; those cannot be used for training, as you need numeric representations. An easy fix would be:

import torchvision

class Imagez:
    @staticmethod
    def Get(arr):
        imageData = []
        for item in arr:
            filePath = item
            # This converts the PIL image to a tensor
            img = torchvision.transforms.functional.to_tensor(
                Image.open(filePath).convert("LA")
            )
            imageData.append(img)
        return imageData

See the convert docs and check whether you really need it. Secondly, if you are going for sklearn and its MLPClassifier, you have to transform those arrays into a np.array. For each element you can call .numpy() on the PyTorch tensor to convert it, and stack the results afterwards into an array. Furthermore, your inputs seem to be images, so you either have to flatten them into a single vector for fully connected layers or use convolutional neural networks in PyTorch, for example the network below:

import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=3),
    torch.nn.Conv2d(64, 128, kernel_size=3),
    torch.nn.Conv2d(128, 128, kernel_size=3),
    torch.nn.Conv2d(128, 128, kernel_size=3),
    torch.nn.Conv2d(128, 1, kernel_size=3),
)

You can find some basic tutorials on the PyTorch website; the 60 Minute Blitz seems like a good starting point for you because there is a lot to fix here.
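To actually feed the result into MLPClassifier, the tensors still have to become a 2-D numpy array with one flattened row per image; a rough sketch, assuming all images share the same size:

import numpy as np

tensors = Imagez.Get(arrTrain)                           # list of CxHxW tensors
X = np.stack([t.numpy().reshape(-1) for t in tensors])   # shape (n_images, C*H*W)
mlp.fit(X, labelTrain)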
https://stackoverflow.com/questions/59461100/
Random Choice with Pytorch?
I have a tensor of pictures, and would like to randomly select from it. I'm looking for the equivalent of np.random.choice(). import torch pictures = torch.randint(0, 256, (1000, 28, 28, 3)) Let's say I want 10 of these pictures.
torch has no equivalent implementation of np.random.choice(); see the discussion here. The alternative is indexing with a shuffled index or with random integers.

To do it with replacement:
Generate n random indices
Index your original tensor with these indices

pictures[torch.randint(len(pictures), (10,))]

To do it without replacement:
Shuffle the index
Take the n first elements

indices = torch.randperm(len(pictures))[:10]
pictures[indices]

Read more about torch.randint and torch.randperm. The second code snippet is inspired by this post on the PyTorch Forums.
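Putting the pieces together for the example in the question (sampling 10 pictures without replacement):

import torch

pictures = torch.randint(0, 256, (1000, 28, 28, 3))
indices = torch.randperm(len(pictures))[:10]
sampled = pictures[indices]
print(sampled.shape)  # torch.Size([10, 28, 28, 3])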
https://stackoverflow.com/questions/59461811/
Finding indices with zeros from 4D Tensors in Pytorch
I have an interesting question for you, if you are working with Pytorch. I have a batch of images with shape (3, 224, 224). So, if my batch size is 64, let's say, the final tensor I have has the shape (64, 3, 224, 224). Now, here is the question. Suppose that some of the images in this batch are filled with only zeros. What is the fastest way to find out which batch indices contain only zeros? I don't want to write a for loop for that, since it is slow. Thanks for your answer.
A cheaper way to do this is to assume that only an empty image would have sum = 0, which I think is pretty reasonable (it holds here because the pixel values are non-negative):

import torch

t = torch.rand(64, 3, 224, 224)
t[10] = 0
s = t.view(64, -1).sum(dim=-1)
zero_index = (s == 0).nonzero().item()  # 10

If more than one image can be empty, drop .item() (or use .flatten()) so you get all matching indices instead of a single scalar.
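If the images may contain negative values (so a zero sum is not conclusive), or if several images can be empty, an alternative sketch is to test the elements directly:

import torch

t = torch.rand(64, 3, 224, 224)
t[10] = 0
mask = (t.view(t.size(0), -1) == 0).all(dim=1)  # True where an image is all zeros
zero_indices = mask.nonzero().flatten()         # tensor([10])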
https://stackoverflow.com/questions/59462901/
Create a new model in pytorch with custom initial value for the weights
I'm new to pytorch and I want to understand how I can set the initial weights for the first hidden layer of my network. I'll explain a little better: my network is a very simple one-hidden-layer MLP with 784 input values and 10 output values:

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)
        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        # x = x.view(x.shape[0], -1)
        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        # output so no dropout here
        x = F.log_softmax(self.fc2(x), dim=1)
        return x

and I have, for now, a numpy matrix of shape (128, 784) which contains the values that I want for the weights in fc1. How can I initialise the weights of the first layer with the values contained in the matrix? Searching through other answers online I found out that I have to define an init function for the weights, e.g.

def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv2d') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

but I cannot understand the code.
You can simply use torch.nn.Parameter() to assign a custom weight to a layer of your network. As in your case -

model.fc1.weight = torch.nn.Parameter(custom_weight)

torch.nn.Parameter: A kind of Tensor that is to be considered a module parameter.

For example:

# Classifier model
model = Classifier()

# your custom weight, here taking a random one
custom_weight = torch.rand(model.fc1.weight.shape)
custom_weight.shape
torch.Size([128, 784])

# before assigning the custom weight
print(model.fc1.weight)
Parameter containing:
tensor([[ 1.6920e-02, 4.6515e-03, -1.0214e-02, ..., -7.6517e-03, 2.3892e-02, -8.8965e-03],
        ...,
        [-2.3137e-02, 5.8483e-03, 4.4392e-03, ..., -1.6159e-02, 7.9369e-03, -7.7326e-03]])

# assign the custom weight to the first layer
model.fc1.weight = torch.nn.Parameter(custom_weight)

# after assigning the custom weight
model.fc1.weight
Parameter containing:
tensor([[ 0.1724, 0.7513, 0.8454, ..., 0.8780, 0.5330, 0.5847],
        [ 0.8500, 0.7687, 0.3371, ..., 0.7464, 0.1503, 0.7720],
        [ 0.8514, 0.6530, 0.6261, ..., 0.7867, 0.9312, 0.3890],
        ...,
        [ 0.5426, 0.7655, 0.1191, ..., 0.4343, 0.2500, 0.6207],
        [ 0.2310, 0.4260, 0.4138, ..., 0.1168, 0.5946, 0.2505],
        [ 0.4220, 0.5500, 0.6282, ..., 0.5921, 0.7953, 0.9997]])
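Since the question starts from a numpy matrix of shape (128, 784), the custom weight would typically come from torch.from_numpy rather than torch.rand; a sketch (np_matrix below is a random stand-in for the questioner's actual matrix):

import numpy as np
import torch

np_matrix = np.random.rand(128, 784)                  # stand-in for the real matrix
custom_weight = torch.from_numpy(np_matrix).float()
model.fc1.weight = torch.nn.Parameter(custom_weight)

# equivalent in-place alternative that keeps the existing Parameter object:
with torch.no_grad():
    model.fc1.weight.copy_(custom_weight)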
https://stackoverflow.com/questions/59467473/
Pytorch Dataloader for Image GT dataset
I am new to pytorch. I am trying to create a DataLoader for a dataset of images where each image got a corresponding ground truth (same name): root: --->RGB: ------>img1.png ------>img2.png ------>... ------>imgN.png --->GT: ------>img1.png ------>img2.png ------>... ------>imgN.png When I use the path for root folder (that contains RGB and GT folders) as input for the torchvision.datasets.ImageFolder it reads all of the images as if they were all intended for input (classified as RGB and GT), and it seems like there is no way to pair the RGB-GT images. I would like to pair the RGB-GT images, shuffle, and divide it to batches of defined size. How can it be done? Any advice will be appreciated. Thanks.
I think a good starting point is to use the VisionDataset class as a base. What we are going to use here is the DatasetFolder source code, so we are going to create something similar. You can notice this class depends on two other functions from the datasets.folder module: default_loader and make_dataset. We are not going to modify default_loader, because it's already fine; it just helps us to load images, so we will import it. But we need a new make_dataset function that prepares the right pairs of images from the root folder. The original make_dataset pairs images (image paths, to be more precise) with their root folder as the target class (class index), giving a list of (path, class_to_idx[target]) pairs, but we need (rgb_path, gt_path). Here is the code for the new make_dataset:

import os

def make_dataset(root: str) -> list:
    """Reads a directory with data.
    Returns a dataset as a list of tuples of paired image paths: (rgb_path, gt_path)
    """
    dataset = []

    # Our dir names
    rgb_dir = 'RGB'
    gt_dir = 'GT'

    # Get all the filenames from the RGB folder
    rgb_fnames = sorted(os.listdir(os.path.join(root, rgb_dir)))

    # Compare file names from the GT folder to file names from RGB:
    for gt_fname in sorted(os.listdir(os.path.join(root, gt_dir))):
        if gt_fname in rgb_fnames:
            # if we have a match - create a pair of full paths to the corresponding images
            rgb_path = os.path.join(root, rgb_dir, gt_fname)
            gt_path = os.path.join(root, gt_dir, gt_fname)
            item = (rgb_path, gt_path)
            # append to the dataset list
            dataset.append(item)
        else:
            continue

    return dataset

What do we have now? Let's compare our function with the original one:

from torchvision.datasets.folder import make_dataset as make_dataset_original

dataset_original = make_dataset_original(root, {'RGB': 0, 'GT': 1}, extensions='png')
dataset = make_dataset(root)

print('Original make_dataset:')
print(*dataset_original, sep='\n')
print('Our make_dataset:')
print(*dataset, sep='\n')

Original make_dataset:
('./data/GT/img1.png', 1)
('./data/GT/img2.png', 1)
...
('./data/RGB/img1.png', 0)
('./data/RGB/img2.png', 0)
...
Our make_dataset:
('./data/RGB/img1.png', './data/GT/img1.png')
('./data/RGB/img2.png', './data/GT/img2.png')
...

I think it works great. It's time to create our Dataset class. The most important part here is the __getitem__ method, because it imports the images, applies the transformations and returns the tensors that can be used by data loaders. We need to read a pair of images (rgb and gt) and return a tuple of two image tensors:

from torchvision.datasets.folder import default_loader
from torchvision.datasets.vision import VisionDataset


class CustomVisionDataset(VisionDataset):

    def __init__(self, root, loader=default_loader, rgb_transform=None, gt_transform=None):
        super().__init__(root, transform=rgb_transform, target_transform=gt_transform)

        # Prepare dataset
        samples = make_dataset(self.root)

        self.loader = loader
        self.samples = samples
        # list of RGB image paths (s[0] is the RGB path, s[1] the GT path)
        self.rgb_samples = [s[0] for s in samples]
        # list of GT image paths
        self.gt_samples = [s[1] for s in samples]

    def __getitem__(self, index):
        """Returns a data sample from our dataset."""
        # getting our paths to images
        rgb_path, gt_path = self.samples[index]

        # import each image using the loader (by default it's PIL)
        rgb_sample = self.loader(rgb_path)
        gt_sample = self.loader(gt_path)

        # here go the transforms if needed
        # maybe we need different transforms for each type of image
        if self.transform is not None:
            rgb_sample = self.transform(rgb_sample)
        if self.target_transform is not None:
            gt_sample = self.target_transform(gt_sample)

        # now we return the imported pair of images (tensors)
        return rgb_sample, gt_sample

    def __len__(self):
        return len(self.samples)

Let's test it:

from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt

bs = 4  # batch size
transforms = ToTensor()  # we need this to convert PIL images to Tensor
shuffle = True

dataset = CustomVisionDataset('./data', rgb_transform=transforms, gt_transform=transforms)
dataloader = DataLoader(dataset, batch_size=bs, shuffle=shuffle)

for i, (rgb, gt) in enumerate(dataloader):
    print(f'batch {i+1}:')
    # some plots
    for j in range(bs):
        plt.figure(figsize=(10, 5))
        plt.subplot(221)
        plt.imshow(rgb[j].squeeze().permute(1, 2, 0))
        plt.title(f'RGB img{j+1}')
        plt.subplot(222)
        plt.imshow(gt[j].squeeze().permute(1, 2, 0))
        plt.title(f'GT img{j+1}')
        plt.show()

Out:

batch 1:
...

Here you can find a notebook with the code and a simple dummy dataset.
https://stackoverflow.com/questions/59467781/
PyTorch Object Detection with GPU on Ubuntu 18.04 - RuntimeError: CUDA out of memory. Tried to allocate xx.xx MiB
I'm attempting to get this PyTorch person detection example: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html running locally with a GPU, either in a Jupyter Notebook or a regular python file. I get the error in the title either way. I'm using Ubuntu 18.04. Here is a summary of the steps I've performed: 1) Stock Ubuntu 18.04 install on a Lenovo ThinkPad X1 Extreme Gen 2 with a GTX 1650 GPU. 2) Perform a standard CUDA 10.0 / cuDNN 7.4 install. I'd rather not restate all the steps as this post is going to be more than long enough already. This is a standard procedure, pretty much any link found via googling is what I followed. 3) Install torch and torchvision pip3 install torch torchvision 4) From this link on the PyTorch site: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html I've both saved the linked notebook: https://colab.research.google.com/github/pytorch/vision/blob/temp-tutorial/tutorials/torchvision_finetuning_instance_segmentation.ipynb And Also tried the link at the bottom that has the regular Python file: https://pytorch.org/tutorials/_static/tv-training-code.py 5) Before running either the notebook or the regular Python way, I did the following (found at the top of the above linked notebook): Install the CoCo API into Python: cd ~ git clone https://github.com/cocodataset/cocoapi.git cd cocoapi/PythonAPI open Makefile in gedit, change the two instances of "python" to "python3", then: python3 setup.py build_ext --inplace sudo python3 setup.py install Get the necessary files the above linked files need to run: cd ~ git clone https://github.com/pytorch/vision.git cd vision git checkout v0.5.0 from ~/vision/references/detection, copy coco_eval.py, coco_utils.py, engine.py, transforms.py, and utils.py to whichever directory the above linked notebook or tv-training-code.py file are being ran from. 
6) Download the Penn Fudan Pedestrian dataset from the link on the above page: https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip then unzip and put in the same directory as the notebook or tv-training-code.py In case the above link ever breaks or just for easier reference, here is tv-training-code.py as I have downloaded it at this time: # Sample code from the TorchVision 0.3 Object Detection Finetuning Tutorial # http://pytorch.org/tutorials/intermediate/torchvision_tutorial.html import os import numpy as np import torch from PIL import Image import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor from engine import train_one_epoch, evaluate import utils import transforms as T class PennFudanDataset(object): def __init__(self, root, transforms): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set # of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) masks = torch.as_tensor(masks, dtype=torch.uint8) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["masks"] = masks target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) def get_model_instance_segmentation(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return model def get_transform(train): transforms = [] transforms.append(T.ToTensor()) if train: 
transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) def main(): # train on the GPU or on the CPU, if a GPU is not available device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # our dataset has two classes only - background and person num_classes = 2 # use our dataset and defined transformations dataset = PennFudanDataset('PennFudanPed', get_transform(train=True)) dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False)) # split the dataset in train and test set indices = torch.randperm(len(dataset)).tolist() dataset = torch.utils.data.Subset(dataset, indices[:-50]) dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:]) # define training and validation data loaders data_loader = torch.utils.data.DataLoader( dataset, batch_size=2, shuffle=True, num_workers=4, collate_fn=utils.collate_fn) data_loader_test = torch.utils.data.DataLoader( dataset_test, batch_size=1, shuffle=False, num_workers=4, collate_fn=utils.collate_fn) # get the model using our helper function model = get_model_instance_segmentation(num_classes) # move model to the right device model.to(device) # construct an optimizer params = [p for p in model.parameters() if p.requires_grad] optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005) # and a learning rate scheduler lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1) # let's train it for 10 epochs num_epochs = 10 for epoch in range(num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) # update the learning rate lr_scheduler.step() # evaluate on the test dataset evaluate(model, data_loader_test, device=device) print("That's it!") if __name__ == "__main__": main() Here is an exmaple run of tv-training-code.py $ python3 tv-training-code.py Epoch: [0] [ 0/60] eta: 0:01:17 lr: 0.000090 loss: 4.1717 (4.1717) loss_classifier: 0.8903 (0.8903) loss_box_reg: 0.1379 (0.1379) loss_mask: 3.0632 (3.0632) loss_objectness: 0.0700 (0.0700) loss_rpn_box_reg: 0.0104 (0.0104) time: 1.2864 data: 0.1173 max mem: 1865 Traceback (most recent call last): File "tv-training-code.py", line 165, in <module> main() File "tv-training-code.py", line 156, in main train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) File "/xxx/PennFudanExample/engine.py", line 46, in train_one_epoch losses.backward() File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 166, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag File "/usr/local/lib/python3.6/dist-packages/torch/autograd/function.py", line 77, in apply return self._forward_cls.backward(self, *args) File "/usr/local/lib/python3.6/dist-packages/torch/autograd/function.py", line 189, in wrapper outputs = fn(ctx, *args) File "/usr/local/lib/python3.6/dist-packages/torchvision/ops/roi_align.py", line 38, in backward output_size[0], output_size[1], bs, ch, h, w, sampling_ratio) RuntimeError: CUDA out of memory. 
Tried to allocate 132.00 MiB (GPU 0; 3.81 GiB total capacity; 2.36 GiB already allocated; 132.69 MiB free; 310.59 MiB cached) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:267) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fdfb6c9b813 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so) frame #1: <unknown function> + 0x1ce68 (0x7fdfb6edce68 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so) frame #2: <unknown function> + 0x1de6e (0x7fdfb6edde6e in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so) frame #3: at::native::empty_cuda(c10::ArrayRef<long>, c10::TensorOptions const&, c10::optional<c10::MemoryFormat>) + 0x279 (0x7fdf59472789 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch.so) [many more frame lines omitted] Clearly the line: RuntimeError: CUDA out of memory. Tried to allocate 132.00 MiB (GPU 0; 3.81 GiB total capacity; 2.36 GiB already allocated; 132.69 MiB free; 310.59 MiB cached) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:267) is the critical error. If I run an nvidia-smi before a run: $ nvidia-smi Tue Dec 24 14:32:49 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.44 Driver Version: 440.44 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 1650 Off | 00000000:01:00.0 On | N/A | | N/A 47C P8 5W / N/A | 296MiB / 3903MiB | 3% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1190 G /usr/lib/xorg/Xorg 142MiB | | 0 1830 G /usr/bin/gnome-shell 72MiB | | 0 3711 G ...uest-channel-token=14371934934688572948 78MiB | +-----------------------------------------------------------------------------+ It seems pretty clear there is plenty of GPU memory available (this GPU is 4GB). Moreover, I'm confident my CUDA/cuDNN install and GPU hardware are good b/c I train and inference the TensorFlow object detection API on this computer frequently, and as long as I use the allow_growth option I never have GPU related errors. From Googling on this error it seems to be relatively common. The most common solutions are: 1) Try a smaller batch size (not really applicable in this case since the training and testing batch sizes are 2 and 1 respectively, and I tried with 1 and 1 and still got the same error) 2) Update to the latest version of PyTorch (but I'm already at the latest version). Some other suggestions involve reworking the training script. I'm very familiar with TensorFlow but I'm new to PyTorch so I'm not sure how to go about that. Also, most of the rework suggestions I can find for this error do not pertain to object detection and therefore I'm not able to relate them to this training script specifically. Has anybody else gotten this script to run locally with an NVIDIA GPU? Do you suspect a OS/CUDA/PyTorch configuration concern, or is there someway the script can be reworked to prevent this error? Any assistance would be greatly appreciated.
Very strange, after changing both the training and testing batch size to 1, it now does not crash with a GPU error. Very strange since I'm certain I tried this before. Perhaps it had something to do with changing the batch size to 1 for both training and testing, and then rebooting or somehow refreshing something else? I'm not really sure. Very odd. Now the evaluate function call is crashing with the error: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer. But it seems this is completely unrelated so I'll make a separate post for that.
https://stackoverflow.com/questions/59473949/
Pass variable sized input to Linear layer in Pytorch
I have a Linear() layer in Pytorch after a few Conv() layers. All the images in my dataset are black and white. However most of the images in my test set are of a different dimension than the images in my training set. Apart from resizing the images themselves, is there any way to define the Linear() layer in such a way that it takes a variable input dimension? For example something similar to view(-1)
Well, it doesn't make sense to have a Linear() layer with a variable input size, because in fact it is a learnable matrix of shape [n_in, n_out], and matrix multiplication is not defined for inputs whose feature dimension != n_in. What you can do is apply pooling from the functional API. You'll need to specify kernel_size and stride such that the resulting output has feature dimension size = n_in.
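One concrete way to get a fixed-size feature vector regardless of the input image size is adaptive pooling; this is an alternative sketch (not what the answer above prescribes, which hand-picks kernel_size and stride), with made-up layer sizes:

import torch
import torch.nn as nn

conv = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
)
pool = nn.AdaptiveAvgPool2d((4, 4))   # always outputs 64 x 4 x 4, whatever H and W are
fc = nn.Linear(64 * 4 * 4, 10)

x = torch.randn(8, 1, 100, 140)       # any spatial size works
out = fc(pool(conv(x)).flatten(1))    # shape (8, 10)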
https://stackoverflow.com/questions/59477922/
from torchvision.datasets.vision import VisionDataset ImportError: No module named vision
I am new to pytorch, and I am trying to follow guides (like this one) that start with:

from torchvision.datasets.vision import VisionDataset

or

from .vision import VisionDataset

When I try to run it I get an error: ImportError: No module named vision. In addition, it seems that torchvision.datasets.vision is not mentioned in any documentation. Please help. Thanks.
You need to upgrade your torchvision package as VisionDataset was introduced in torchvision 0.3.0 in PR#749 as a base class for all datasets. Check the release.
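A quick way to check and upgrade (exact commands depend on how torchvision was installed, e.g. pip vs conda):

pip install --upgrade torchvision
python -c "import torchvision; print(torchvision.__version__)"   # should print >= 0.3.0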
https://stackoverflow.com/questions/59489415/
Pytorch - Should backward() function be in the loop of epoch or batch?
When training nn models using Pytorch, is there a difference regarding where we place the backward method? For example, which one of below is correct? Calculate gradient across the batch: for e in range(epochs): for i in batches_list: out = nn_model(i) loss = loss_function(out, actual) loss_sum += loss.item() lstm.zero_grad() loss.backward() optimizer.step() loss_list.append(loss_sum / num_train_obs) Calculate gradient across the epoch: for e in range(epochs): for i in batches_list: out = nn_model(i) loss = loss_function(out, actual) loss_sum += loss.item() lstm.zero_grad() loss_sum.backward() optimizer.step() loss_list.append(loss_sum / num_train_obs)
Both are programmatically correct (although for the second variant to run, loss_sum has to be accumulated as a tensor rather than via .item(); see the sketch below). The first one is mini-batch gradient descent, and the second one is gradient descent over the whole epoch. For most problems we want to do batch gradient descent, so the first one is the right approach. It is also likely to train faster. You may use the second approach if you want to do full gradient descent (but it is seldom desirable to do GD when you can do batch GD). However, since in GD you don't clear the graph after every batch (.zero_grad is called only once per epoch), you may run out of memory.
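For reference, a minimal sketch of the usual per-batch update pattern, reusing the names from the question:

for e in range(epochs):
    for i in batches_list:
        optimizer.zero_grad()              # clear gradients from the previous step
        out = nn_model(i)
        loss = loss_function(out, actual)
        loss.backward()                    # gradients for this batch only
        optimizer.step()                   # one parameter update per batch

# For the per-epoch variant, accumulate the loss as a tensor instead of a float:
#     loss_sum = loss_sum + loss      (not loss_sum += loss.item())
# otherwise loss_sum.backward() cannot work, since a plain float carries no graph.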
https://stackoverflow.com/questions/59491896/
How do I change a torch tensor to concat with another tensor
I'm trying to concatenate a tensor of numerical data with the output tensor of a resnet-50 model. The output of that model is a tensor of shape torch.Size([10, 1000]) and the numerical data is a tensor of shape torch.Size([10, 110528, 8]), where 10 is the batch size, 110528 is the number of observations (in a data frame sense), and 8 is the number of columns (in a data frame sense). I need to reshape the numerical tensor to torch.Size([10, 8]) so it will concatenate properly. How would I reshape the tensor?
Starting tensors:

a = torch.randn(10, 1000)
b = torch.randn(10, 110528, 8)

New tensor to allow concatenation:

c = torch.zeros(10, 1000, 7)

Check shapes:

a[:, :, None].shape, c.shape
(torch.Size([10, 1000, 1]), torch.Size([10, 1000, 7]))

Alter tensor a to allow concatenation:

a = torch.cat([a[:, :, None], c], dim=2)

Concatenate in dimension 1:

torch.cat([a, b], dim=1).shape
torch.Size([10, 111528, 8])
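If the goal stated in the question is really to collapse the numerical tensor to torch.Size([10, 8]) before joining it to the [10, 1000] features, one option (a sketch that discards per-observation detail by averaging) is to pool over the observation dimension:

import torch

features = torch.randn(10, 1000)
numerical = torch.randn(10, 110528, 8)

pooled = numerical.mean(dim=1)                    # shape (10, 8)
combined = torch.cat([features, pooled], dim=1)   # shape (10, 1008)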
https://stackoverflow.com/questions/59492723/
PyTorch and TensorFlow object detection - evaluate - object of type cannot be safely interpreted as an integer
I'm attempting to get this PyTorch person detection example running: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html I'm using Ubuntu 18.04. Here is a summary of the steps I've performed: 1) Stock Ubuntu 18.04 install on a Lenovo ThinkPad X1 Extreme Gen 2 with a GTX 1650 GPU. 2) Perform a standard CUDA 10.0 / cuDNN 7.4 install. I'd rather not restate all the steps as this post is going to be more than long enough already. This is a standard procedure, pretty much any link found via googling is what I followed. 3) Install torch and torchvision 4) From this link on the PyTorch site: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html I saved the source available from the link at the bottom: https://pytorch.org/tutorials/_static/tv-training-code.py To a directory I made, PennFudanExample 5) I did the following (found at the top of the above linked notebook): Install the CoCo API into Python: cd ~ git clone https://github.com/cocodataset/cocoapi.git cd cocoapi/PythonAPI open Makefile in gedit, change the two instances of "python" to "python3", then: python3 setup.py build_ext --inplace sudo python3 setup.py install Get the necessary files the above linked files need to run: cd ~ git clone https://github.com/pytorch/vision.git cd vision git checkout v0.5.0 from ~/vision/references/detection, copy coco_eval.py, coco_utils.py, engine.py, transforms.py, and utils.py to directory PennFudanExample. 6) Download the Penn Fudan Pedestrian dataset from the link on the above page: https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip then unzip and put in directory PennFudanExample 7) The only change I made to tv-training-code.py was to change the training batch size from 2 to 1 to prevent a GPU out of memory crash, see this other post I made here: PyTorch Object Detection with GPU on Ubuntu 18.04 - RuntimeError: CUDA out of memory. 
Tried to allocate xx.xx MiB Here is tv-training-code.py as I'm running it with the slight batch size edit I mentioned: # Sample code from the TorchVision 0.3 Object Detection Finetuning Tutorial # http://pytorch.org/tutorials/intermediate/torchvision_tutorial.html import os import numpy as np import torch from PIL import Image import torchvision from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor from engine import train_one_epoch, evaluate import utils import transforms as T class PennFudanDataset(object): def __init__(self, root, transforms): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) img = Image.open(img_path).convert("RGB") # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background mask = Image.open(mask_path) mask = np.array(mask) # instances are encoded as different colors obj_ids = np.unique(mask) # first id is the background, so remove it obj_ids = obj_ids[1:] # split the color-encoded mask into a set # of binary masks masks = mask == obj_ids[:, None, None] # get bounding box coordinates for each mask num_objs = len(obj_ids) boxes = [] for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class labels = torch.ones((num_objs,), dtype=torch.int64) masks = torch.as_tensor(masks, dtype=torch.uint8) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) target = {} target["boxes"] = boxes target["labels"] = labels target["masks"] = masks target["image_id"] = image_id target["area"] = area target["iscrowd"] = iscrowd if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs) def get_model_instance_segmentation(num_classes): # load an instance segmentation model pre-trained pre-trained on COCO model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True) # get number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) # now get the number of input features for the mask classifier in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels hidden_layer = 256 # and replace the mask predictor with a new one model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, hidden_layer, num_classes) return model def get_transform(train): transforms = [] transforms.append(T.ToTensor()) if train: transforms.append(T.RandomHorizontalFlip(0.5)) return T.Compose(transforms) def main(): # train on the GPU or on the CPU, if a GPU is not available device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # our dataset has two classes 
only - background and person num_classes = 2 # use our dataset and defined transformations dataset = PennFudanDataset('PennFudanPed', get_transform(train=True)) dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False)) # split the dataset in train and test set indices = torch.randperm(len(dataset)).tolist() dataset = torch.utils.data.Subset(dataset, indices[:-50]) dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:]) # define training and validation data loaders # !!!! CHANGE HERE !!!! For this function call, I changed the batch_size param value from 2 to 1, otherwise this file is exactly as provided from the PyTorch website !!!! data_loader = torch.utils.data.DataLoader( dataset, batch_size=1, shuffle=True, num_workers=4, collate_fn=utils.collate_fn) data_loader_test = torch.utils.data.DataLoader( dataset_test, batch_size=1, shuffle=False, num_workers=4, collate_fn=utils.collate_fn) # get the model using our helper function model = get_model_instance_segmentation(num_classes) # move model to the right device model.to(device) # construct an optimizer params = [p for p in model.parameters() if p.requires_grad] optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005) # and a learning rate scheduler lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1) # let's train it for 10 epochs num_epochs = 10 for epoch in range(num_epochs): # train for one epoch, printing every 10 iterations train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) # update the learning rate lr_scheduler.step() # evaluate on the test dataset evaluate(model, data_loader_test, device=device) print("That's it!") if __name__ == "__main__": main() Here is the full text output including error I'm getting currently: Epoch: [0] [ 0/120] eta: 0:01:41 lr: 0.000047 loss: 7.3028 (7.3028) loss_classifier: 1.0316 (1.0316) loss_box_reg: 0.0827 (0.0827) loss_mask: 6.1742 (6.1742) loss_objectness: 0.0097 (0.0097) loss_rpn_box_reg: 0.0046 (0.0046) time: 0.8468 data: 0.0803 max mem: 1067 Epoch: [0] [ 10/120] eta: 0:01:02 lr: 0.000467 loss: 2.0995 (3.5058) loss_classifier: 0.6684 (0.6453) loss_box_reg: 0.0999 (0.1244) loss_mask: 1.2471 (2.7069) loss_objectness: 0.0187 (0.0235) loss_rpn_box_reg: 0.0060 (0.0057) time: 0.5645 data: 0.0089 max mem: 1499 Epoch: [0] [ 20/120] eta: 0:00:56 lr: 0.000886 loss: 1.0166 (2.1789) loss_classifier: 0.2844 (0.4347) loss_box_reg: 0.1631 (0.1540) loss_mask: 0.4710 (1.5562) loss_objectness: 0.0187 (0.0242) loss_rpn_box_reg: 0.0082 (0.0099) time: 0.5524 data: 0.0020 max mem: 1704 Epoch: [0] [ 30/120] eta: 0:00:50 lr: 0.001306 loss: 0.5554 (1.6488) loss_classifier: 0.1258 (0.3350) loss_box_reg: 0.1356 (0.1488) loss_mask: 0.2355 (1.1285) loss_objectness: 0.0142 (0.0224) loss_rpn_box_reg: 0.0127 (0.0142) time: 0.5653 data: 0.0023 max mem: 1756 Epoch: [0] [ 40/120] eta: 0:00:45 lr: 0.001726 loss: 0.4520 (1.3614) loss_classifier: 0.1055 (0.2773) loss_box_reg: 0.1101 (0.1530) loss_mask: 0.1984 (0.8981) loss_objectness: 0.0063 (0.0189) loss_rpn_box_reg: 0.0139 (0.0140) time: 0.5621 data: 0.0023 max mem: 1776 Epoch: [0] [ 50/120] eta: 0:00:39 lr: 0.002146 loss: 0.3448 (1.1635) loss_classifier: 0.0622 (0.2346) loss_box_reg: 0.1004 (0.1438) loss_mask: 0.1650 (0.7547) loss_objectness: 0.0033 (0.0172) loss_rpn_box_reg: 0.0069 (0.0131) time: 0.5535 data: 0.0022 max mem: 1776 Epoch: [0] [ 60/120] eta: 0:00:33 lr: 0.002565 loss: 0.3292 (1.0543) loss_classifier: 0.0549 (0.2101) loss_box_reg: 0.1113 (0.1486) loss_mask: 
0.1596 (0.6668) loss_objectness: 0.0017 (0.0148) loss_rpn_box_reg: 0.0082 (0.0140) time: 0.5590 data: 0.0022 max mem: 1776 Epoch: [0] [ 70/120] eta: 0:00:28 lr: 0.002985 loss: 0.4105 (0.9581) loss_classifier: 0.0534 (0.1877) loss_box_reg: 0.1049 (0.1438) loss_mask: 0.1709 (0.5995) loss_objectness: 0.0015 (0.0132) loss_rpn_box_reg: 0.0133 (0.0138) time: 0.5884 data: 0.0023 max mem: 1783 Epoch: [0] [ 80/120] eta: 0:00:22 lr: 0.003405 loss: 0.3080 (0.8817) loss_classifier: 0.0441 (0.1706) loss_box_reg: 0.0875 (0.1343) loss_mask: 0.1960 (0.5510) loss_objectness: 0.0015 (0.0122) loss_rpn_box_reg: 0.0071 (0.0137) time: 0.5812 data: 0.0023 max mem: 1783 Epoch: [0] [ 90/120] eta: 0:00:17 lr: 0.003825 loss: 0.2817 (0.8171) loss_classifier: 0.0397 (0.1570) loss_box_reg: 0.0499 (0.1257) loss_mask: 0.1777 (0.5098) loss_objectness: 0.0008 (0.0111) loss_rpn_box_reg: 0.0068 (0.0136) time: 0.5644 data: 0.0022 max mem: 1794 Epoch: [0] [100/120] eta: 0:00:11 lr: 0.004244 loss: 0.2139 (0.7569) loss_classifier: 0.0310 (0.1446) loss_box_reg: 0.0327 (0.1163) loss_mask: 0.1573 (0.4731) loss_objectness: 0.0003 (0.0101) loss_rpn_box_reg: 0.0050 (0.0128) time: 0.5685 data: 0.0022 max mem: 1794 Epoch: [0] [110/120] eta: 0:00:05 lr: 0.004664 loss: 0.2139 (0.7160) loss_classifier: 0.0325 (0.1358) loss_box_reg: 0.0327 (0.1105) loss_mask: 0.1572 (0.4477) loss_objectness: 0.0003 (0.0093) loss_rpn_box_reg: 0.0047 (0.0128) time: 0.5775 data: 0.0022 max mem: 1794 Epoch: [0] [119/120] eta: 0:00:00 lr: 0.005000 loss: 0.2486 (0.6830) loss_classifier: 0.0330 (0.1282) loss_box_reg: 0.0360 (0.1051) loss_mask: 0.1686 (0.4284) loss_objectness: 0.0003 (0.0086) loss_rpn_box_reg: 0.0074 (0.0125) time: 0.5655 data: 0.0022 max mem: 1794 Epoch: [0] Total time: 0:01:08 (0.5676 s / it) creating index... index created! Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py", line 117, in linspace num = operator.index(num) TypeError: 'numpy.float64' object cannot be interpreted as an integer During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/cdahms/workspace-apps/PennFudanExample/tv-training-code.py", line 166, in <module> main() File "/home/cdahms/workspace-apps/PennFudanExample/tv-training-code.py", line 161, in main evaluate(model, data_loader_test, device=device) File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad return func(*args, **kwargs) File "/home/cdahms/workspace-apps/PennFudanExample/engine.py", line 80, in evaluate coco_evaluator = CocoEvaluator(coco, iou_types) File "/home/cdahms/workspace-apps/PennFudanExample/coco_eval.py", line 28, in __init__ self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type) File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 75, in __init__ self.params = Params(iouType=iouType) # parameters File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 527, in __init__ self.setDetParams() File "/home/cdahms/models/research/pycocotools/cocoeval.py", line 506, in setDetParams self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) File "<__array_function__ internals>", line 6, in linspace File "/usr/local/lib/python3.6/dist-packages/numpy/core/function_base.py", line 121, in linspace .format(type(num))) TypeError: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer. 
Process finished with exit code 1 The really strange thing is after I resolved the above-mentioned GPU error this was working for about 1/2 a day and now I'm getting this error, and I could swear I didn't change anything. I've tried uninstalling and reinstalling torch, torchvision, pycocotools, and for copying the files coco_eval.py, coco_utils.py, engine.py, transforms.py, and utils.py, I've tried checking out torchvision v0.5.0, v0.4.2, and using the latest commit, all produce the same error. Also, I was working from home yesterday (Christmas) and this error does not happen on my home computer, which is also Ubuntu 18.04 with an NVIDIA GPU. In Googling for this error one suggestion that is relatively common is to backdate numpy to 1.11.0, but that version is really old now and therefore this would likely cause problems with other packages. Also in Googleing for this error it seems the general fix is to add a cast to int somewhere or to change a divide by / to // but I'm really hesitant to make changes internal to pycocotools or worse yet inside numpy. Also since error was not occurring previously and is not occurring on another computer I don't suspect this is a good idea anyway. Fortunately I can comment out the line evaluate(model, data_loader_test, device=device) For now and the training will complete, although I don't get the evaluation data (Mean Average Precision, etc.) About the only thing left I can think of at this point is to format the HD and reinstall Ubuntu 18.04 and everything else, but this will take at least a day, and if this ever happens again I'd really like to know what may be causing it. Ideas? Suggestions? Additional stuff I should check? -- EDIT -- After re-testing on the same computer experiencing the concern, I found this same error occurs with the evaluation step when using the TensorFlow object detection API.
!@#$%^& I finally figured this out after about 15 hours on it. As it turns out, numpy 1.18.0, which was released 5 days ago as of when I'm writing this, breaks the evaluation process for both TensorFlow and PyTorch object detection. To make a long story short, the fix is:

sudo -H pip3 install numpy==1.17.4

A few things I can also mention:
- numpy 1.17.4 was released on November 10th, 2019 and therefore should still be good for quite some time.
- There is now a pip package for pycocotools, so instead of the above procedure (cloning and building) you can now simply do: sudo -H pip3 install pycocotools

--- Update ---
This has now been fixed in pycocotools with this commit: https://github.com/cocodataset/cocoapi/pull/354 Also see this (closed) issue for more background: https://github.com/numpy/numpy/issues/15192 I'm not sure when the updated version of pycocotools will make it into the pycocotools pip3 package.
https://stackoverflow.com/questions/59493606/
How do I create torch diagonal matrices with a different element in each batch?
I want to create a tensor like tensor([[[1,0,0],[0,1,0],[0,0,1]],[[2,0,0],[0,2,0],[0,0,2]]]) That is, when a torch tensor B of size (1,n) is given, I want to create a torch tensor A of size (n,3,3) such that A[i] is B[i] * (identity matrix of size 3x3). Without using a for loop, how do I create this?
Use torch.einsum (Einstein's notation of sum and product) A = torch.eye(3) b = torch.tensor([1.0, 2.0, 3.0]) torch.einsum('ij,k->kij', A, b) Will return: tensor([[[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]], [[2., 0., 0.], [0., 2., 0.], [0., 0., 2.]], [[3., 0., 0.], [0., 3., 0.], [0., 0., 3.]]])
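An equivalent sketch without einsum, using broadcasting:

import torch

b = torch.tensor([1.0, 2.0, 3.0])
A = b.view(-1, 1, 1) * torch.eye(3)   # shape (3, 3, 3), A[i] = b[i] * identity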
https://stackoverflow.com/questions/59494681/
How can I convert pytorch cpu-based transformation to cuda-based?
The original issue for the code is availablehere. I am using this repository for a line segmentation project and I developed this code to get an input (whether image or video) and draw road lines on it and give it in output: import argparse import sys from time import time, clock from os.path import splitext, basename, exists from model import SCNN from utils.check_extension import is_video, is_image from utils.transforms import * # I will put all the necessary code for utils.transforms after this # ------------------------------------------------ SCNN parameters time1 = time() net = SCNN(input_size=(800, 288), pretrained=False) mean = (0.3598, 0.3653, 0.3662) # CULane mean, std std = (0.2573, 0.2663, 0.2756) transform_img = Resize((800, 288)) transform_to_net = Compose(ToTensor(), Normalize(mean=mean, std=std)) # ------------------------------------------------ Arguments def parse_args(): parser = argparse.ArgumentParser() parser.add_argument('--weights', type=str, default='models/vgg_SCNN_DULR_w9.pth', help='path to vgg models') parser.add_argument('--input', type=str, default='demo/line_3.mp4', help='path to image file') parser.add_argument('--output', type=str, default='public/', help='path to the output directory') args = parser.parse_args() return args def main(): args = parse_args() filename, extension = splitext(basename(args.input)) print("Loading file [{}] ....".format(filename)) if not exists(args.input): print("file [{}] is not recognized".format(args.input)) sys.exit() if is_video(extension): video_capture = cv2.VideoCapture() fourcc = cv2.VideoWriter_fourcc(*'XVID') output = args.output + filename + '.avi' if video_capture.open(args.input): property_id = int(cv2.CAP_PROP_FRAME_COUNT) total_frames = int(cv2.VideoCapture.get(video_capture, property_id)) frame_no = 1 width, height = int(video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)), \ int(video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) fps = video_capture.get(cv2.CAP_PROP_FPS) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") save_dict = torch.load(args.weights, map_location=device) net.load_state_dict(save_dict['net']) net.eval() # can't write out mp4, so try to write into an AVI file video_writer = cv2.VideoWriter(output, fourcc, fps, (width, height)) while video_capture.isOpened(): start = time() ret, frame = video_capture.read() if not ret: break frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = transform_img({'img': frame})['img'] x = transform_to_net({'img': frame})['img'] x.unsqueeze_(0) stop1 = time() print('stop1: ', stop1 - start) seg_pred, exist_pred = net(x)[:2] seg_pred = seg_pred.detach().cpu().numpy() exist_pred = exist_pred.detach().cpu().numpy() seg_pred = seg_pred[0] stop2 = time() print('stop2: ', stop2 - stop1) frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR) lane_img = np.zeros_like(frame) color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8') coord_mask = np.argmax(seg_pred, axis=0) for i in range(0, 4): if exist_pred[0, i] > 0.5: lane_img[coord_mask == (i + 1)] = color[i] img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=frame, beta=1., gamma=0.) 
img = cv2.resize(img, (width, height)) stop3 = time() print('stop3: ', stop3 - stop2) # if frame_no % 20 == 0: # print('# {}/{} frames processed!'.format(frame_no, total_frames)) frame_no += 1 video_writer.write(img) end = time() print('Whole loop: {} seconds'.format(end - start)) print('------------') print('------------') print('# All frames processed ') video_capture.release() video_writer.release() elif is_image(extension): img = cv2.imread(args.input) height, width, _ = img.shape img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = transform_img({'img': img})['img'] x = transform_to_net({'img': img})['img'] x.unsqueeze_(0) save_dict = torch.load(args.weights, map_location='cpu') net.load_state_dict(save_dict['net']) net.eval() seg_pred, exist_pred = net(x)[:2] seg_pred = seg_pred.detach().cpu().numpy() exist_pred = exist_pred.detach().cpu().numpy() seg_pred = seg_pred[0] img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR) lane_img = np.zeros_like(img) color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8') coord_mask = np.argmax(seg_pred, axis=0) for i in range(0, 4): if exist_pred[0, i] > 0.5: lane_img[coord_mask == (i + 1)] = color[i] img = cv2.addWeighted(src1=lane_img, alpha=0.8, src2=img, beta=1., gamma=0.) img = cv2.resize(img, (width, height)) output = args.output + filename + '.jpg' cv2.imwrite(output, img) else: print("file format [{}] is not supported".format(args.input)) sys.exit() if __name__ == '__main__': main() The code which belong to Resize, ToTensor, Normalize, Compose are here: class Compose(CustomTransform): """ All transform in Compose should be able to accept two non None variable, img and boxes """ def __init__(self, *transforms): self.transforms = [*transforms] def __call__(self, sample): for t in self.transforms: sample = t(sample) return sample def __iter__(self): return iter(self.transforms) def modules(self): yield self for t in self.transforms: if isinstance(t, Compose): for _t in t.modules(): yield _t else: yield t class Normalize(CustomTransform): def __init__(self, mean, std): self.transform = Normalize_th(mean, std) def __call__(self, sample): img = sample.get('img') img = self.transform(img) _sample = sample.copy() _sample['img'] = img return _sample class ToTensor(CustomTransform): def __init__(self, dtype=torch.float): self.dtype=dtype def __call__(self, sample): img = sample.get('img') segLabel = sample.get('segLabel', None) exist = sample.get('exist', None) img = img.transpose(2, 0, 1) img = torch.from_numpy(img).type(self.dtype) / 255. if segLabel is not None: segLabel = torch.from_numpy(segLabel).type(torch.long) if exist is not None: exist = torch.from_numpy(exist).type(torch.float32) # BCEloss requires float tensor _sample = sample.copy() _sample['img'] = img _sample['segLabel'] = segLabel _sample['exist'] = exist return _sample class Resize(CustomTransform): def __init__(self, size): if isinstance(size, int): size = (size, size) self.size = size #(W, H) def __call__(self, sample): img = sample.get('img') segLabel = sample.get('segLabel', None) img = cv2.resize(img, self.size, interpolation=cv2.INTER_CUBIC) if segLabel is not None: segLabel = cv2.resize(segLabel, self.size, interpolation=cv2.INTER_NEAREST) _sample = sample.copy() _sample['img'] = img _sample['segLabel'] = segLabel return _sample def reset_size(self, size): if isinstance(size, int): size = (size, size) self.size = size The code works fine but I found out that its too slow for testing in real-time application. 
I added some time measurement to see if I can find out the bottlenecks and this is the output for one loop: ------------ stop1: 0.002989053726196289 stop2: 1.4032211303710938 stop3: 0.004946708679199219 Whole loop: 1.41636061668396 seconds These lines happened to be the most computationally expensive lines: seg_pred, exist_pred = net(x)[:2] seg_pred = seg_pred.detach().cpu().numpy() exist_pred = exist_pred.detach().cpu().numpy() seg_pred = seg_pred[0] Now I am stuck with this issue that how I can modify the code to improve the computation speed. Initially I thought of modifying the code to allow cuda computation. I asked the main author how I can modify the code for cuda version in here and he pointed out to these lines: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = transform_img({'img': frame})['img'] x = transform_to_net({'img': frame})['img'] x.unsqueeze_(0) Unfortunately my experience with pytorch is not much, so I am asking for help now. I hope the information I shared suffices for the readers. Any help would be appreciated Thanks
Set the device: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") What he means is putting the data on the device: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = transform_img({'img': frame})['img'] x = transform_to_net({'img': frame})['img'] x = x.unsqueeze_(0).to(device)
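As a hedged sketch of the full idea rather than a drop-in replacement: the model's weights also have to be moved to the same device once, outside the frame loop, and .to(device) is not in-place, so the result must be assigned back. The names net, transform_img, transform_to_net and cv2 come from the question's code, so this is a fragment meant to slot into that loop, not a standalone script:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

net = net.to(device)   # move the model to the GPU once, before the loop
net.eval()

# inside the frame loop
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame = transform_img({'img': frame})['img']
x = transform_to_net({'img': frame})['img']
x = x.unsqueeze(0).to(device)        # add the batch dim and move the input to the same device

with torch.no_grad():                # inference only, so skip building the autograd graph
    seg_pred, exist_pred = net(x)[:2]
seg_pred = seg_pred.cpu().numpy()    # bring results back to the CPU for numpy/OpenCV post-processing
exist_pred = exist_pred.cpu().numpy()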
https://stackoverflow.com/questions/59497887/
torch tensor changes to numpy array in for/while loop?
print('\nCollecting experience') for ep in range(400): state = env.reset() #print(state.shape) #state = np.array(state) state = state.transpose((2, 0, 1)) #state = torch.from_numpy(state) state = Variable(torch.from_numpy(state)) state = state.unsqueeze(0) print("AA", state.shape) episode_reward = 0 step = 0 for i in range(50): # env.render() print("BB",state.shape) action = agent.get_action(state) i have tried and it works without the loop, it doesn't work with while loop either what is printed: Collecting experience AA torch.Size([1, 1, 84, 84]) BB torch.Size([1, 1, 84, 84]) BB (84, 84, 1) what is causing the second BB print?
Solved it. It turns out the problem was later in the loop: the next input wasn't in the same format, so I put the numpy processing and the related conversion steps into a function and applied it each iteration, and it worked.
https://stackoverflow.com/questions/59504791/
run conv2d against a list
I tested conv2d with following code: import torch import torch.nn as nn x=torch.randint(500,(256,)) conv=nn.Conv2d(1,6,5,padding=1) y=x.view(1,1,16,16) z=conv(y) print (z.shape) and I got error: /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight) 340 _pair(0), self.dilation, self.groups) 341 return F.conv2d(input, weight, self.bias, self.stride, --> 342 self.padding, self.dilation, self.groups) 343 344 def forward(self, input): RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'weight' in call to _thnn_conv2d_forward How to fix it?
In pytorch, the nn.Conv2d module needs the data to be in float. You can just make a simple edit: x = torch.randint(500,(256,), dtype=torch.float32) Alternatively you can also do: x = torch.randint(500,(256,)) x = x.float()
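Putting the fix together, a minimal runnable version of the snippet, kept close to the question's shapes, looks like this:
import torch
import torch.nn as nn

x = torch.randint(500, (256,), dtype=torch.float32)  # Conv2d weights are float, so the input must be float too
conv = nn.Conv2d(1, 6, 5, padding=1)
y = x.view(1, 1, 16, 16)                             # (batch, channels, height, width)
z = conv(y)
print(z.shape)                                       # torch.Size([1, 6, 14, 14]) with kernel 5 and padding 1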
https://stackoverflow.com/questions/59508656/
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 4, 8, 8], but got 2-dimensional input of size [1, 4] instead
I am initializing a Convolutional DQN with the following code: class ConvDQN(nn.Module): def __init__(self, input_dim, output_dim): super(ConvDQN, self).__init__() self.input_dim = input_dim self.output_dim = output_dim self.conv = nn.Sequential( nn.Conv2d(self.input_dim, 32, kernel_size=8, stride=4), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU() ) self.fc_input_dim = self.feature_size() self.fc = nn.Sequential( nn.Linear(self.fc_input_dim, 128), nn.ReLU(), nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, self.output_dim) ) def forward(self, state): features = self.conv(state) features = features.view(features.size(0), -1) qvals = self.fc(features) return qvals def feature_size(self): return self.conv(autograd.Variable(torch.zeros(1, *self.input_dim))).view(1, -1).size(1) And it gives me the error: File "dqn.py", line 86, in __init__ self.fc_input_dim = self.feature_size() File "dqn.py", line 105, in feature_size return self.conv(autograd.Variable(torch.zeros(32, *self.input_dim))).view(1, -1).size(1) File "C:\Users\ariji\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\ariji\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\container.py", line 92, in forward input = module(input) File "C:\Users\ariji\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\ariji\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\nn\modules\conv.py", line 320, in forward self.padding, self.dilation, self.groups) RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 4, 8, 8], but got 2-dimensional input of size [1, 4] instead So I get the fact that the input that I am passing to the convolutional network is of incorrect dimensions. What I do not understand is how I am supposed to add the required dimensions to my input? Or should I change something in my convolutional network?
You pass the conv layer torch.zeros(1, *self.input_dim), which is torch.Size([1, 4]), but you initialize the conv layer as: nn.Sequential( nn.Conv2d(self.input_dim, 32, kernel_size=8, stride=4), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU() ) So self.conv is expecting a 4-dimensional (batch, channels, height, width) input, but you pass it a tensor of torch.Size([1, 4]).
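To illustrate the shape Conv2d expects: input_dim should describe a (channels, height, width) image rather than a flat 4-element state, and the probe in feature_size() should then be a 4-D dummy batch. The 4 x 84 x 84 shape below is an assumption (a common stacked-frame Atari setup), not something stated in the question:
import torch
import torch.nn as nn

input_dim = (4, 84, 84)   # assumed (channels, height, width); adjust to your environment
conv = nn.Sequential(
    nn.Conv2d(input_dim[0], 32, kernel_size=8, stride=4),   # pass in_channels, not the whole tuple
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1),
    nn.ReLU(),
)
probe = torch.zeros(1, *input_dim)        # shape (1, 4, 84, 84): a 4-D dummy batch
print(conv(probe).view(1, -1).size(1))    # flattened feature size for the fully connected head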
https://stackoverflow.com/questions/59510603/
SyntaxError: invalid character in identifier - Pytorch 1.3.1
When executing the following method: def flatten(t): t = t.reshape(1, −1) t = t.squeeze() return t Python complains about the second argument. File "pytorch.py", line 16 t = t.reshape(1, −1) ^ SyntaxError: invalid character in identifier My pytorch version is 1.3.1. I already tried to remove the space before the argument with no effect. Any ideas?
The character in −1 is not a hyphen. Rather, it's the actual minus sign from unicode (U+2212). This makes the python interpreter think that −1 is an identifier instead of the value -1. You might have copied the code from somewhere that uses these stylised characters. Just replace − with the ASCII hyphen-minus -
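For completeness, a quick sketch of the same function with a plain ASCII hyphen-minus, which runs without the SyntaxError:
import torch

def flatten(t):
    t = t.reshape(1, -1)   # ASCII '-' here, not the unicode minus sign
    t = t.squeeze()
    return t

print(flatten(torch.ones(2, 3)).shape)   # torch.Size([6])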
https://stackoverflow.com/questions/59512462/
Cannot find conflict in Conda UnsatisfiableError message
Trying to install pytorch in conda in Docker, and get UnsatisfiableError. However, I cannot find any actual conflict in the error message or I may not understand it correctly. The Docker image used is nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04. There will be no problem if I do not specify the pytorch version to install. But I do need this version of pytorch for some legacy code. Here is the conda command and error message. (Python3.6) root@0cb9aad73116:/# conda install -c pytorch pytorch=0.3.1 Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: | Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Package libgcc-ng conflicts for: python==3.6.9 -> libgcc-ng[version='>=7.3.0'] zlib -> libgcc-ng[version='>=7.2.0|>=7.3.0'] tk -> libgcc-ng[version='>=7.2.0|>=7.3.0'] xz -> libgcc-ng[version='>=7.2.0'] pytorch=0.3.1 -> libgcc-ng[version='>=5.4.0'] libedit -> libgcc-ng[version='>=7.2.0|>=7.3.0'] libffi -> libgcc-ng[version='>=7.2.0'] ncurses -> libgcc-ng[version='>=7.2.0|>=7.3.0'] sqlite -> libgcc-ng[version='>=7.2.0|>=7.3.0'] readline -> libgcc-ng[version='>=7.2.0|>=7.3.0'] openssl -> libgcc-ng[version='>=7.2.0|>=7.3.0'] Package libstdcxx-ng conflicts for: ncurses -> libstdcxx-ng[version='>=7.2.0|>=7.3.0'] pytorch=0.3.1 -> libstdcxx-ng[version='>=5.4.0'] python==3.6.9 -> libstdcxx-ng[version='>=7.3.0'] libffi -> libstdcxx-ng[version='>=7.2.0'] Package xz conflicts for: python==3.6.9 -> xz[version='>=5.2.4,<6.0a0'] Package cudnn conflicts for: pytorch=0.3.1 -> cudnn[version='7.0.*|>=7.0.5,<=8.0a0'] Package nccl conflicts for: pytorch=0.3.1 -> nccl[version='<2'] Package setuptools conflicts for: wheel -> setuptools pip -> setuptools Package zlib conflicts for: tk -> zlib[version='>=1.2.11,<1.3.0a0'] python==3.6.9 -> zlib[version='>=1.2.11,<1.3.0a0'] Package ca-certificates conflicts for: openssl -> ca-certificates Package libffi conflicts for: python==3.6.9 -> libffi[version='>=3.2.1,<4.0a0'] Package cudatoolkit conflicts for: pytorch=0.3.1 -> cudatoolkit[version='8.*|8.0.*'] Package _libgcc_mutex conflicts for: libgcc-ng -> _libgcc_mutex=[build=main] Package numpy conflicts for: pytorch=0.3.1 -> numpy[version='>=1.11|>=1.11.3,<2.0a0'] Package openssl conflicts for: python==3.6.9 -> openssl[version='>=1.1.1c,<1.1.2a'] Package libedit conflicts for: sqlite -> libedit[version='>=3.1.20170329,<3.2.0a0|>=3.1.20181209,<3.2.0a0'] Package ncurses conflicts for: python==3.6.9 -> ncurses[version='>=6.1,<7.0a0'] readline -> ncurses[version='6.0.*|>=6.0,<7.0a0|>=6.1,<7.0a0'] libedit -> ncurses[version='6.0.*|>=6.1,<7.0a0'] Package cffi conflicts for: pytorch=0.3.1 -> cffi Package sqlite conflicts for: python==3.6.9 -> sqlite[version='>=3.29.0,<4.0a0'] Package * conflicts for: pytorch=0.3.1 -> *[track_features=cuda90] Package tk conflicts for: python==3.6.9 -> tk[version='>=8.6.8,<8.7.0a0'] Package wheel conflicts for: pip -> wheel Package certifi conflicts for: setuptools -> certifi[version='>=2016.09|>=2016.9.26'] Package mkl conflicts for: pytorch=0.3.1 -> mkl[version='>=2018|>=2018.0.2,<2019.0a0'] Package 
readline conflicts for: python==3.6.9 -> readline[version='>=7.0,<8.0a0'] Package pip conflicts for: python==3.6.9 -> pip Packages installed (Python3.6) root@0cb9aad73116:/# conda list # packages in environment at /opt/conda/envs/Python3.6: # # Name Version Build Channel _libgcc_mutex 0.1 main ca-certificates 2019.11.27 0 certifi 2019.11.28 py36_0 libedit 3.1.20181209 hc058e9b_0 libffi 3.2.1 hd88cf55_4 libgcc-ng 9.1.0 hdf63c60_0 libstdcxx-ng 9.1.0 hdf63c60_0 ncurses 6.1 he6710b0_1 openssl 1.1.1d h7b6447c_3 pip 19.3.1 py36_0 python 3.6.9 h265db76_0 readline 7.0 h7b6447c_5 setuptools 42.0.2 py36_0 sqlite 3.30.1 h7b6447c_0 tk 8.6.8 hbc83047_0 wheel 0.33.6 py36_0 xz 5.2.4 h14c3975_4 zlib 1.2.11 h7b6447c_3 Similar error remains if use Docker image continuumio/miniconda3:4.7.12.
docker run -it nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 bash apt-get update apt install wget -y wget https://repo.continuum.io/archive/Anaconda3-4.3.0-Linux-x86_64.sh bash Anaconda3-4.3.0-Linux-x86_64.sh [Enter] yes [Enter] yes . /root/.bashrc conda install mkl y conda install -c pytorch pytorch=0.3.1 y conda list|grep pytorch pytorch 0.3.1 py36_cuda8.0.61_cudnn7.1.2_3 pytorch root@dacfe958940b:/# bash (base) root@dacfe958940b:/# python Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. import torch print(torch.__version__) 0.3.1.post3
https://stackoverflow.com/questions/59513720/
Fix random seed for torchvision transforms
I use some code similar to the following - for data augmentation: from torchvision import transforms #... augmentation = transforms.Compose([ transforms.RandomApply([ transforms.RandomRotation([-30, 30]) ], p=0.5), transforms.RandomHorizontalFlip(p=0.5), ]) During my testing I want to fix random values to reproduce the same random parameters each time I change the model training settings. How can I do it? I want to do something similar to np.random.seed(0) so each time I call random function with probability for the first time, it will run with the same rotation angle and probability. In other words, if I do not change the code at all, it must reproduce the same result when I rerun it. Alternatively I can separate transforms, use p=1, fix the angle min and max to a particular value and use numpy random numbers to generate results, but my question if I can do it keeping the code above unchanged.
In the __getitem__ of your dataset class, draw a seed with numpy and re-seed the random module with it before the image transforms and again before the target transforms, so both receive the same random parameters. def __getitem__(self, index): img = io.imread(self.labels.iloc[index,0]) target = self.labels.iloc[index,1] seed = np.random.randint(2147483647) # make a seed with numpy generator random.seed(seed) # apply this seed to img transforms if self.transform is not None: img = self.transform(img) random.seed(seed) # apply this seed to target transforms if self.target_transform is not None: target = self.target_transform(target) return img, target
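If the goal is simply to reproduce the same augmentation across whole runs without changing the Compose pipeline, a hedged alternative is to seed every generator the transforms might draw from once at the start of the script; which generator torchvision uses depends on its version (historically Python's random, more recently torch's), so seeding all of them is the safe choice:
import random
import numpy as np
import torch

def seed_everything(seed=0):
    random.seed(seed)                 # Python's random, used by many torchvision transforms
    np.random.seed(seed)              # numpy, in case your own code or a transform uses it
    torch.manual_seed(seed)           # torch CPU generator
    torch.cuda.manual_seed_all(seed)  # torch GPU generators

seed_everything(0)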
https://stackoverflow.com/questions/59516181/
How to load Omniglot on Pytorch
I'm trying to do some experiments on the Omniglot dataset, and I saw that Pytorch implemented it. I've run the command from torchvision.datasets import Omniglot but I have no idea on how to actually load the dataset. Is there a way to open it equivalent to how we open MNIST? Something like the following: train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) The final goal is to be able to open training and test set separately and run experiments on it.
You can apply exactly the same transformations, as Omniglot contains images and labels just like MNIST, for example: import torchvision dataset = torchvision.datasets.Omniglot( root="./data", download=True, transform=torchvision.transforms.ToTensor() ) image, label = dataset[0] print(type(image)) # torch.Tensor print(type(label)) # int
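Regarding the separate training and test sets asked about: torchvision's Omniglot takes a background flag that selects the "background" (training) or "evaluation" set, so, assuming that API, loading both could look like this:
import torchvision
from torch.utils.data import DataLoader

train_dataset = torchvision.datasets.Omniglot(
    root="./data", background=True,   # the "background" set is the usual training split
    download=True, transform=torchvision.transforms.ToTensor()
)
test_dataset = torchvision.datasets.Omniglot(
    root="./data", background=False,  # the "evaluation" set serves as the test split
    download=True, transform=torchvision.transforms.ToTensor()
)

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)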
https://stackoverflow.com/questions/59520146/
Creating LSTM model with pytorch
I'm quite new to using LSTM in Pytorch. I'm trying to create a model that takes a sequence of 62 steps, each step being a tensor of size 42 (so the input shape is [62, 42]; call this the input tensor). From it I want to predict a sequence of 8 steps, each of size 1 (8 tensors of size 1 each; call this the label tensor). The connection between those tensors is this: the input tensor is made of columns A1 A2 A3 ...... A42, while the label tensor is more like just A3. What I'm trying to show is that, if needed, the label tensor can be padded with zeros in all places except the value of A3, so it can reach a length of 42. How can I do this? From what I'm reading in the Pytorch documentation I can only predict in the same ratio (1 point predicts 1 point), while I want to predict, from a tensor of size 42 with a sequence of 62, a tensor of size 1 with a sequence of 8. Is it doable? Do I need to pad the predicted tensor from size 1 to size 42? Thanks! A good solution might be using seq2seq, for example.
If I correctly understand your question, given a sequence of length 62 you want to predict a sequence of length 8, in the sense that the order of your outputs matters (this is the case if you are doing some time series forecasting). In that case using a seq2seq model will be a good choice, here is a tutorial for this: link. Globally, you need to implement an encoder and a decoder, here is an example of such an implementation: class EncoderRNN(nn.Module): def __init__(self, input_dim=42, hidden_dim=100): super(EncoderRNN, self).__init__() self.hidden_dim = hidden_dim self.lstm = nn.LSTM(input_dim, hidden_dim) def forward(self, input, hidden): output, hidden = self.lstm(input, hidden) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_dim, device=device) class DecoderRNN(nn.Module): def __init__(self, hidden_dim, output_dim): super(DecoderRNN, self).__init__() self.hidden_dim = hidden_dim self.lstm = nn.LSTM(hidden_dim, hidden_dim) self.out = nn.Linear(hidden_dim, output_dim) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input, hidden): output, hidden = self.lstm(input, hidden) output = self.softmax(self.out(output[0])) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_dim, device=device) If the order of your 8 outputs has no importance, then you can simply add a Linear layer with 8 units after the LSTM layer. You can use this code directly in that case: class Net(nn.Module): def __init__(self, hidden_dim=100, input_dim=42, output_size=8): super(Net, self).__init__() self.hidden_dim = hidden_dim self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True) # The linear layer that maps from hidden state space to output space self.fc = nn.Linear(hidden_dim, output_size) def forward(self, seq): lstm_out, _ = self.lstm(seq) output = self.fc(lstm_out) return output
https://stackoverflow.com/questions/59520620/
Mask values in segmentation task are either 0 or 255. How do I resolve this?
I must be getting something terribly wrong with the fast-ai library, since I seem to be the only one having this problem. Everytime I try the learning rate finder or training the network, it gives me an error. It took me a week to produce this specific error message, which made me check the mask values. It turns out they are either 0 for background pixels and 255 for foreground ones. This is a problem as I only have two classes. How can I change the 255 values to 1 within my Databunch Object? Is there a way to divide each mask value by 255 or do I need to do it beforehand somehow? I am kinda lost within the process. Here's the error message i'm getting: LR Finder is complete, type {learner_name}.recorder.plot() to see the graph. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-c7a9c29f9dd1> in <module>() ----> 1 learn.lr_find() 2 learn.recorder.plot() 8 frames /usr/local/lib/python3.6/dist-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, wd) 39 cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div) 40 epochs = int(np.ceil(num_it/len(learn.data.train_dl))) ---> 41 learn.fit(epochs, start_lr, callbacks=[cb], wd=wd) 42 43 def to_fp16(learn:Learner, loss_scale:float=None, max_noskip:int=1000, dynamic:bool=True, clip:float=None, /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks) 198 else: self.opt.lr,self.opt.wd = lr,wd 199 callbacks = [cb(self) for cb in self.callback_fns + listify(defaults.extra_callback_fns)] + listify(callbacks) --> 200 fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks) 201 202 def create_opt(self, lr:Floats, wd:Floats=0.)->None: /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, learn, callbacks, metrics) 99 for xb,yb in progress_bar(learn.data.train_dl, parent=pbar): 100 xb, yb = cb_handler.on_batch_begin(xb, yb) --> 101 loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler) 102 if cb_handler.on_batch_end(loss): break 103 /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler) 28 29 if not loss_func: return to_detach(out), to_detach(yb[0]) ---> 30 loss = loss_func(out, *yb) 31 32 if opt is not None: /usr/local/lib/python3.6/dist-packages/fastai/layers.py in __call__(self, input, target, **kwargs) 241 if self.floatify: target = target.float() 242 input = input.view(-1,input.shape[-1]) if self.is_2d else input.view(-1) --> 243 return self.func.__call__(input, target.view(-1), **kwargs) 244 245 def CrossEntropyFlat(*args, axis:int=-1, **kwargs): /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target) 914 def forward(self, input, target): 915 return F.cross_entropy(input, target, weight=self.weight, --> 916 ignore_index=self.ignore_index, reduction=self.reduction) 917 918 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2007 if size_average is not None or reduce is not None: 2008 reduction = _Reduction.legacy_get_string(size_average, 
reduce) -> 2009 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2010 2011 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1836 .format(input.size(0), target.size(0))) 1837 if dim == 2: -> 1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 1839 elif dim == 4: 1840 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:97 Also this is how I set up my data: data = (SegmentationItemList.from_df(img_df,IMAGE_PATH) # import from df in greyscale ('L') .split_by_rand_pct(valid_pct=0.15) # 1/15 train/validation split .label_from_func(get_mask, classes = array(['background','cell'])) # segmentation mask and classes .transform(tfms, tfm_y=True, size=TILE_SHAPE) # apply data augmentation .databunch(bs=BATCH_SIZE) # set batchsize .normalize() ) Please tell me if you need any more information. I already tried adding an 'after_open' function, which should have divided all by 255 to the 'label_from_func' part. I also know there is a div attribute within the 'open_image' function of fast-ai, which is supposed to normalize RGB values between 0 and 1, but I couldn't find one for 'label_from_func'. Edit: I found this post in the fastai community. However even with these answers i was not able to solve my problem. I tried adding this snippet to pass div=True into the open_mask function, but it did not work: src.train.y.create_func = partial(open_mask, div=True) src.valid.y.create_func = partial(open_mask, div=True) I also tried .set_attr(mask_opener=partial(open_mask, div=True)) after .label_from_func(), but then it throws this attribute error: AttributeError: setattr Still need help
The below custom classes are necessary to deal with the binary image segmentation datasets which use 0 and 255 to encode the mask class SegLabelListCustom(SegmentationLabelList): def open(self, fn): return open_mask(fn, div=True) class SegItemListCustom(SegmentationItemList): _label_cls = SegLabelListCustom Link for reference: https://github.com/fastai/fastai/issues/1540 Below is example of using these custom classes to create a source for a databunch. src = (SegItemListCustom.from_folder('/home/jupyter/AerialImageDataset/train/') .split_by_folder(train='images', valid='validate') .label_from_func(get_y_fn, classes=labels)) I really hope this helps you out as I struggled to deal with this myself not too long ago and this was my solution. It was difficult because many of the answers I found were for previous versions and no longer worked. Let me know if you need more clarification or help as I know how frustrating it can be to be stuck early on.
https://stackoverflow.com/questions/59520705/
TypeError: Cannot handle this data type
Trying to put the saliency map to the image and make a new data set trainloader = utilsxai.load_data_cifar10(batch_size=1,test=False) testloader = utilsxai.load_data_cifar10(batch_size=128, test=True) this load_cifar10 is torchvision data = trainloader.dataset.data trainloader.dataset.data = (data * sal_maps_hf).reshape(data.shape) sal_maps_hf shape with (50000,32,32,3) and trainloader shape with (50000,32,32,3) but when I run this for idx,img in enumerate(trainloader): --------------------------------------------------------------------------- KeyError Traceback (most recent call last) ~/venv/lib/python3.7/site-packages/PIL/Image.py in fromarray(obj, mode) 2644 typekey = (1, 1) + shape[2:], arr["typestr"] -> 2645 mode, rawmode = _fromarray_typemap[typekey] 2646 except KeyError: KeyError: ((1, 1, 3), ' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) in ----> 1 show_images(trainloader) in show_images(trainloader) 1 def show_images(trainloader): ----> 2 for idx,(img,target) in enumerate(trainloader): 3 img = img.squeeze() 4 #pritn(img) 5 img = torch.tensor(img) ~/venv/lib/python3.7/site-packages/torch/utils/data/dataloader.py in next(self) 344 def next(self): 345 index = self._next_index() # may raise StopIteration --> 346 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 347 if self._pin_memory: 348 data = _utils.pin_memory.pin_memory(data) ~/venv/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] ~/venv/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in (.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] ~/venv/lib/python3.7/site-packages/torchvision/datasets/cifar.py in getitem(self, index) 120 # doing this so that it is consistent with all other datasets 121 # to return a PIL Image --> 122 img = Image.fromarray(img) 123 124 if self.transform is not None: ~/venv/lib/python3.7/site-packages/PIL/Image.py in fromarray(obj, mode) 2645 mode, rawmode = _fromarray_typemap[typekey] 2646 except KeyError: -> 2647 raise TypeError("Cannot handle this data type") 2648 else: 2649 rawmode = mode TypeError: Cannot handle this data type trainloader.dataset.__getitem__ getitem of Dataset CIFAR10 Number of datapoints: 50000 Root location: /mnt/3CE35B99003D727B/input/pytorch/data Split: Train StandardTransform Transform: Compose( Resize(size=32, interpolation=PIL.Image.BILINEAR) ToTensor() )
Your sal_maps_hf is not np.uint8. Based on the partial information in the question and in comments, I guess that your mask is of dtype np.float (or similar), and by multiplying data * sal_maps_hf your data is cast to a dtype other than np.uint8, which later makes PIL.Image throw an exception. Try: trainloader.dataset.data = (data * sal_maps_hf).reshape(data.shape).astype(np.uint8)
https://stackoverflow.com/questions/59529339/
How to seamlessly blend (B x C x H x W) tensor tiles together to hide tile boundaries?
For completeness this is a text summary of what I am trying to do: Split the image into tiles. Run each tile through a new copy of the model for a set number of iterations. Feather tiles and put them into rows. Feather rows and put them back together into the original image/tensor. Maybe save the output, then split the output into tiles again. Repeat steps 2 and 3 for a set number of iterations. I only need help with steps 1, 3, and 4. The style transfer process causes some slight differences to form in the processed tiles, so I need to blend them back together. By feathering I basically mean fading a tile into another to blur the boundaries (like in ImageMagick, Photoshop, etc...). I am trying to accomplish this blending by using Torch.linspace() to create masks, though I'm not sure if there's a better approach. What I am trying to accomplish is based on/inspired by: https://github.com/VaKonS/neural-style/blob/Multi-resolution/neural_style.lua, though I'm working with PyTorch. The code I am trying to implement tiling with can be found here: https://gist.github.com/ProGamerGov/e64fcb309274c2946f5a9a679ed45669, though you shouldn't need to look at it as everything you need can be found below. In essence this is what I am trying to do (red areas overlap with another tile): . This is what I have for code so far. Feathering and adding rows together is not yet implemented as I can't get the individual tile feathering working yet. import torch from PIL import Image import torchvision.transforms as transforms def tile_calc(tile_size, v, d): max_val = max(min(tile_size*v+tile_size, d), 0) min_val = tile_size*v if abs(min_val - max_val) < tile_size: min_val = max_val-tile_size return min_val, max_val def split_tensor(tensor, tile_size=256): tiles, tile_idx = [], [] tile_size_y, tile_size_x = tile_size+8, tile_size +5 # Make H and W different for testing h, w = tensor.size(2), tensor.size(3) h_range, w_range = int(-(h // -tile_size_y)), int(-(w // -tile_size_x)) for y in range(h_range): for x in range(w_range): ty, y_val = tile_calc(tile_size_y, y, h) tx, x_val = tile_calc(tile_size_x, x, w) tiles.append(tensor[:, :, ty:y_val, tx:x_val]) tile_idx.append([ty, y_val, tx, x_val]) w_overlap = tile_idx[0][3] - tile_idx[1][2] h_overlap = tile_idx[0][1] - tile_idx[w_range][0] if tensor.is_cuda: base_tensor = torch.zeros(tensor.squeeze(0).size(), device=tensor.get_device()) else: base_tensor = torch.zeros(tensor.squeeze(0).size()) return tiles, base_tensor.unsqueeze(0), (h_range, w_range), (h_overlap, w_overlap) # Feather vertically def feather_tiles(tensor_list, hxw, w_overlap): print(len(tensor_list)) mask_list = [] if w_overlap > 0: for i, tile in enumerate(tensor_list): if i % hxw[1] != 0: lin_mask = torch.linspace(0,1,w_overlap).repeat(tile.size(2),1) mask_part = torch.ones(tile.size(2), tile.size(3)-w_overlap) mask = torch.cat([lin_mask, mask_part], 1) mask = mask.repeat(3,1,1).unsqueeze(0) mask_list.append(mask) else: mask = torch.ones(tile.squeeze().size()).unsqueeze(0) mask_list.append(mask) return mask_list def build_row(tensor_tiles, tile_masks, hxw, w_overlap, bt, tile_size): print(len(tensor_tiles), len(tile_masks)) if bt.is_cuda: row_base = torch.ones(bt.size(1),tensor_tiles[0].size(2),bt.size(3), device=bt.get_device()).unsqueeze(0) else: row_base = torch.ones(bt.size(1),tensor_tiles[0].size(2),bt.size(3)).unsqueeze(0) row_list = [] for v in range(hxw[1]): row_list.append(row_base.clone()) num_tiles = 0 row_val = 0 tile_size_y, tile_size_x = tile_size+8, tile_size +5 h, w = bt.size(2), 
bt.size(3) h_range, w_range = hxw[0], hxw[1] for y in range(h_range): for x in range(w_range): ty, y_val = tile_calc(tile_size_y, y, h) tx, x_val = tile_calc(tile_size_x, x, w) if num_tiles % hxw[1] != 0: new_mean = (row_list[row_val][:, :, :, tx:x_val].mean() + tensor_tiles[num_tiles])/2 row_list[row_val][:, :, :, tx:x_val] = row_list[row_val][:, :, :, tx:x_val] - row_list[row_val][:, :, :, tx:x_val].mean() tensor_tiles[num_tiles] = tensor_tiles[num_tiles] - tensor_tiles[num_tiles].mean() row_list[row_val][:, :, :, tx:x_val] = (row_list[row_val][:, :, :, tx:x_val] + ( tensor_tiles[num_tiles] * tile_masks[num_tiles])) + new_mean else: row_list[row_val][:, :, :, tx:x_val] = tensor_tiles[num_tiles] num_tiles+=1 row_val+=1 return row_list def preprocess(image_name, image_size): image = Image.open(image_name).convert('RGB') if type(image_size) is not tuple: image_size = tuple([int((float(image_size) / max(image.size))*x) for x in (image.height, image.width)]) Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()]) tensor = (Loader(image) * 256).unsqueeze(0) return tensor def deprocess(output_tensor): output_tensor = output_tensor.squeeze(0).cpu() / 256 output_tensor.clamp_(0, 1) Image2PIL = transforms.ToPILImage() image = Image2PIL(output_tensor.cpu()) return image input_tensor = preprocess('test.jpg', 256) tile_tensors, base_t, hxw, ovlp = split_tensor(input_tensor, 128) tile_masks = feather_tiles(tile_tensors, hxw, ovlp[1]) row_tensors = build_row(tile_tensors, tile_masks, hxw, ovlp[1], base_t, 128) ft = deprocess(row_tensors[0]) # save tensor to view it ft.save('ft_row_0.png')
I was able to create a solution here that works for any tile size, image size, and pattern: https://github.com/ProGamerGov/neural-dream/blob/master/neural_dream/dream_tile.py I used masks to blend the tiles back together again.
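Since the linked solution is external, here is a minimal, self-contained sketch of the core idea, blending two horizontally adjacent (B x C x H x W) tiles across their overlap with a torch.linspace mask; it is a simplified illustration, not the full tiling code from the repository:
import torch

def blend_pair_horizontally(left, right, overlap):
    # left, right: (B, C, H, W) tiles that share `overlap` columns
    B, C, H, W = left.shape
    ramp = torch.linspace(0, 1, overlap).view(1, 1, 1, overlap)          # fades 0 -> 1 across the overlap
    blended = left[..., W - overlap:] * (1 - ramp) + right[..., :overlap] * ramp
    return torch.cat([left[..., :W - overlap], blended, right[..., overlap:]], dim=3)

a = torch.zeros(1, 3, 64, 64)
b = torch.ones(1, 3, 64, 64)
out = blend_pair_horizontally(a, b, overlap=16)
print(out.shape)   # torch.Size([1, 3, 64, 112]): 64 + 64 - 16 columns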
https://stackoverflow.com/questions/59537390/
FastAi What does the slice(lr) do in fit_one_cycle()
In Lesson 3 - planet, I saw these 2 lines of code: lr = 0.01 learn.fit_one_cycle(5, slice(lr)) if the slice(min_lr, max_lr) then I understand the fit_one_cycle() will use the spread-out Learning Rates from slice(min_lr, max_lr). (Hopefully, my understanding to this is correct) But in this case slice(lr) only has one parameter, What are the differences between fit_one_cycle(5, lr) and fit_one_cycle(5, slice(lr)) ? And what are the benefits of using slice(lr) instead of lr directly?
Jeremy took a while to explain what slice does, in Lesson 5. What I understood was that the fastai.vision module divides the architecture into 3 layer groups and trains them with different learning rates depending on what you pass in. (Starting layers usually don't require large changes in parameters.) Additionally, if you use 'fit_one_cycle', all the groups will have learning rate annealing with their respective learning rates. Check Lesson 5 https://course.fast.ai/videos/?lesson=5 (use the transcript finder to quickly go to the 'slice' part)
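To make the difference concrete, here is a rough sketch of how fastai v1 expands the learning rate argument per layer group, written from memory as an approximation; the exact divisor for the single-value slice(lr) case (lr/10 below) may differ between versions, so treat it as an assumption:
import numpy as np

def lr_range_sketch(lr, n_groups=3):
    # rough re-implementation of the idea behind fastai v1's lr_range
    if not isinstance(lr, slice):
        return np.array([lr] * n_groups)                   # plain float: same LR for every group
    if lr.start is not None:
        return np.geomspace(lr.start, lr.stop, n_groups)   # slice(min, max): geometric spread across groups
    return np.array([lr.stop / 10] * (n_groups - 1) + [lr.stop])  # slice(lr): smaller LR for earlier groups

print(lr_range_sketch(0.01))               # same rate everywhere
print(lr_range_sketch(slice(0.01)))        # earlier groups get a reduced rate, last group gets 0.01
print(lr_range_sketch(slice(1e-5, 1e-3)))  # rates spread out between the two ends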
https://stackoverflow.com/questions/59538623/
What would be the equivalent of keras.layers.Masking in pytorch?
I have time-series sequences which I needed to keep at a fixed length by padding zeros into the matrix; using keras.layers.Masking in Keras I could have those padded zeros ignored in further computations. I am wondering how this could be done in Pytorch. Either I need to do the padding in Pytorch and Pytorch can't handle sequences with varying lengths, in which case what is the equivalent of Keras' Masking layer in Pytorch? Or, if Pytorch handles sequences with varying lengths, how could it be done?
You can use the PackedSequence class as an equivalent to Keras masking. You can find more utilities in torch.nn.utils.rnn. Here is an example of packing variable-length sequence inputs for an RNN: import torch import torch.nn as nn from torch.autograd import Variable batch_size = 3 max_length = 3 hidden_size = 2 n_layers =1 # container batch_in = torch.zeros((batch_size, 1, max_length)) #data vec_1 = torch.FloatTensor([[1, 2, 3]]) vec_2 = torch.FloatTensor([[1, 2, 0]]) vec_3 = torch.FloatTensor([[1, 0, 0]]) batch_in[0] = vec_1 batch_in[1] = vec_2 batch_in[2] = vec_3 batch_in = Variable(batch_in) seq_lengths = [3,2,1] # length of each sequence in the batch # pack it pack = torch.nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True) >>> pack PackedSequence(data=Variable containing: 1 2 3 1 2 0 1 0 0 [torch.FloatTensor of size 3x3] , batch_sizes=[3]) # initialize rnn = nn.RNN(max_length, hidden_size, n_layers, batch_first=True) h0 = Variable(torch.randn(n_layers, batch_size, hidden_size)) #forward out, _ = rnn(pack, h0) # unpack unpacked, unpacked_len = torch.nn.utils.rnn.pad_packed_sequence(out) >>> unpacked Variable containing: (0 ,.,.) = -0.7883 -0.7972 0.3367 -0.6102 0.1502 -0.4654 [torch.FloatTensor of size 1x3x2] You may also find this article useful. [Jump to the section titled "How the PackedSequence object works"] - link
https://stackoverflow.com/questions/59545229/
How do I blend together tensors in patterns other than 2x2?
I have created code that blends tensors together, by first blending them together into rows and then blending the rows together into the final output. It works well for 4 tensors in a 2x2 pattern, but it fails to do a 2x3 (6 tensors), 3x3 (9 tensors), 4x4 (16 tensors) pattern. The tensors are in the form of (B x C x H x W), where B is batch size, C is channels, H is height, and W is width. For both the tiles to rows (tile_overlay()), and rows to final image (row_overlay()), I create a base tensor that I add the tiles/rows to. I suspect the issue with my code lies either with how I get the base tensor's dimensions, how I track where to put the rows/tiles on the base tensor, or the issue with both of those things. import torch from PIL import Image import torchvision.transforms as transforms def preprocess(image_name, image_size): image = Image.open(image_name).convert('RGB') if type(image_size) is not tuple: image_size = tuple([int((float(image_size) / max(image.size))*x) for x in (image.height, image.width)]) Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()]) tensor = (Loader(image) * 256).unsqueeze(0) return tensor def deprocess(output_tensor): output_tensor = output_tensor.clone().squeeze(0).cpu() / 256 output_tensor.clamp_(0, 1) Image2PIL = transforms.ToPILImage() image = Image2PIL(output_tensor.cpu()) return image def prepare_tile(tile, overlap, side='both'): lin_mask_left = torch.linspace(0,1,overlap).repeat(tile.size(3),1).repeat(3,1,1).unsqueeze(0) lin_mask_right = torch.linspace(1,0,overlap).repeat(tile.size(3),1).repeat(3,1,1).unsqueeze(0) if side == 'both' or side == 'right': tile[:,:,:,overlap:] = tile[:,:,:,overlap:] * lin_mask_right if side == 'both' or side == 'left': tile[:,:,:,:overlap] = tile[:,:,:,:overlap] * lin_mask_left return tile def overlay_tiles(tile_list, rows, overlap): c = 1 f_tiles = [] base_length = 0 for i, tile in enumerate(tile_list): if c == 1: f_tile = prepare_tile(tile.clone(), overlap, side='right') if i + 1<= rows[1]: base_length += tile.clone().size(3) - overlap elif c == rows[1]: f_tile = prepare_tile(tile.clone(), overlap, side='left') if i + 1<= rows[1]: base_length += tile.size(3) - overlap elif c > 0 and c < rows[1]: f_tile = prepare_tile(tile.clone(), overlap, side='both') if i + 1<= rows[1]: base_length += tile.size(3) - (overlap*2) f_tiles.append(f_tile) if c == rows[1]: c = 0 c+=1 base_length += overlap base_tensor = torch.zeros(3, tile_list[0].size(2), base_length).unsqueeze(0) row_list = [] for row in range(rows[1]): row_list.append(base_tensor.clone()) row_val, num_tiles = 0, 0 l_max = tile_list[0].size(3) for y in range(rows[0]): for x in range(rows[1]): if num_tiles % rows[1] != 0: l_max += (f_tiles[num_tiles].size(3)-overlap)*x l_min = l_max - f_tiles[num_tiles].size(3) row_list[row_val][:, :, :, l_min:l_max] = row_list[row_val][:, :, :, l_min:l_max] + f_tiles[num_tiles] else: row_list[row_val][:, :, :, :f_tiles[num_tiles].size(3)] = f_tiles[num_tiles] l_max = tile_list[0].size(3) num_tiles+=1 row_val+=1 return row_list def prepare_row(row_tensor, overlap, side='both'): lin_mask_top = torch.linspace(0,1,overlap).repeat(row_tensor.size(3),1).rot90(3).repeat(3,1,1).unsqueeze(0) lin_mask_bottom = torch.linspace(1,0,overlap).repeat(row_tensor.size(3),1).rot90(3).repeat(3,1,1).unsqueeze(0) if side == 'both' or side == 'top': row_tensor[:,:,:overlap,:] = row_tensor[:,:,:overlap,:] * lin_mask_top if side == 'both' or side == 'bottom': row_tensor[:,:,overlap:,:] = row_tensor[:,:,overlap:,:] * lin_mask_bottom 
return row_tensor def overlay_rows(row_list, rows, overlap): c = 1 f_rows = [] base_height = 0 for i, row_tensor in enumerate(row_list): if c == 1: f_row = prepare_row(row_tensor.clone(), overlap, side='bottom') if i + 1<= rows[0]: base_height += row_tensor.size(2) - overlap elif c == rows[1]: f_row = prepare_row(row_tensor.clone(), overlap, side='top') if i + 1<= rows[0]: base_height += row_tensor.size(2) - overlap elif c > 0 and c < rows[0]: f_row = prepare_row(row_tensor.clone(), overlap, side='both') if i + 1<= rows[0]: base_height += tile.size(2) - (overlap*2) f_rows.append(f_row) if c == rows[0]: c = 0 c+=1 base_height += overlap base_tensor = torch.zeros(3, base_height, row_list[0].size(3)).unsqueeze(0) num_rows = 0 l_max = row_list[0].size(3) for y in range(rows[0]): if num_rows > 0: l_max += (f_rows[num_rows].size(2)-overlap)*y l_min = l_max - f_rows[num_rows].size(2) base_tensor[:, :, l_min:l_max, :] = base_tensor[:, :, l_min:l_max, :] + f_rows[num_rows] else: base_tensor[:, :, :f_rows[num_rows].size(2), :] = f_rows[num_rows] l_max = row_list[0].size(2) num_rows+=1 return base_tensor def rebuild_image(tensor_list, rows, overlap_hw): row_tensors = overlay_tiles(tensor_list, rows, overlap_hw[1]) full_tensor = overlay_rows(row_tensors, rows, overlap_hw[0]) return full_tensor test_tensor_1 = preprocess('brad_pitt.jpg', (1080,1080)) test_tensor_2 = preprocess('starry_night_google.jpg', (1080,1080)) tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone()] rows = [2, 2] overlap = [540, 540] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_2x2.png') tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(),] rows = [3, 3] overlap = [540, 540] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_3x3.png') tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone()] rows = [4, 4] overlap = [540, 540] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_4x4.png') Running the above code will result in this error message when trying to create the 3x3 output: Traceback (most recent call last): File "t0.py", line 148, in <module> complete_tensor = rebuild_image(tensor_list, rows, overlap) File "t0.py", line 126, in rebuild_image row_tensors = overlay_tiles(tensor_list, rows, overlap_hw[1]) File "t0.py", line 68, in overlay_tiles row_list[row_val][:, :, :, l_min:l_max] = row_list[row_val][:, :, :, l_min:l_max] + f_tiles[num_tiles] RuntimeError: The size of tensor a (0) must match the size of tensor b (1080) at non-singleton dimension 3 This is an example of what the 2x2 output looks like: And this is a visual diagram with two examples of what I am doing:
The code now works with some changes: import torch from PIL import Image import torchvision.transforms as transforms def preprocess(image_name, image_size): image = Image.open(image_name).convert('RGB') if type(image_size) is not tuple: image_size = tuple([int((float(image_size) / max(image.size))*x) for x in (image.height, image.width)]) Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()]) tensor = (Loader(image) * 256).unsqueeze(0) return tensor def deprocess(output_tensor): output_tensor = output_tensor.clone().squeeze(0).cpu() / 256 output_tensor.clamp_(0, 1) Image2PIL = transforms.ToPILImage() image = Image2PIL(output_tensor.cpu()) return image def prepare_tile(tile, overlap, side='both'): h, w = tile.size(2), tile.size(3) lin_mask_left = torch.linspace(0,1,overlap).repeat(h,1).repeat(3,1,1).unsqueeze(0) lin_mask_right = torch.linspace(1,0,overlap).repeat(h,1).repeat(3,1,1).unsqueeze(0) if side == 'both' or side == 'right': tile[:,:,:,w-overlap:] = tile[:,:,:,w-overlap:] * lin_mask_right if side == 'both' or side == 'left': tile[:,:,:,:overlap] = tile[:,:,:,:overlap] * lin_mask_left return tile def calc_length(w, overlap, rows): count = 0 l_max = w for y in range(rows[0]): for x in range(rows[1]): if count % rows[1] != 0: l_max += w-overlap l_min = l_max - w else: l_max = w count+=1 return l_max def overlay_tiles(tile_list, rows, overlap): c = 1 f_tiles = [] base_length = 0 for i, tile in enumerate(tile_list): if c == 1: f_tile = prepare_tile(tile.clone(), overlap, side='right') elif c == rows[1]: f_tile = prepare_tile(tile.clone(), overlap, side='left') elif c > 0 and c < rows[1]: f_tile = prepare_tile(tile.clone(), overlap, side='both') f_tiles.append(f_tile) if c == rows[1]: c = 0 c+=1 w = tile_list[0].size(3) base_length = calc_length(w, overlap, rows) base_tensor = torch.zeros(3, tile_list[0].size(2), base_length).unsqueeze(0) row_list = [] for row in range(rows[0]): row_list.append(base_tensor.clone()) row_num, num_tiles = 0, 0 l_max = w for y in range(rows[0]): for x in range(rows[1]): if num_tiles % rows[1] != 0: l_max += w-overlap l_min = l_max - w print(num_tiles, l_max, l_min) row_list[row_num][:, :, :, l_min:l_max] = row_list[row_num][:, :, :, l_min:l_max] + f_tiles[num_tiles] else: row_list[row_num][:, :, :, :w] = f_tiles[num_tiles] l_max = w num_tiles+=1 row_num+=1 return row_list def prepare_row(row_tensor, overlap, side='both'): lin_mask_top = torch.linspace(0,1,overlap).repeat(row_tensor.size(3),1).rot90(3).repeat(3,1,1).unsqueeze(0) lin_mask_bottom = torch.linspace(1,0,overlap).repeat(row_tensor.size(3),1).rot90(3).repeat(3,1,1).unsqueeze(0) if side == 'both' or side == 'top': row_tensor[:,:,:overlap,:] = row_tensor[:,:,:overlap,:] * lin_mask_top if side == 'both' or side == 'bottom': row_tensor[:,:,overlap:,:] = row_tensor[:,:,overlap:,:] * lin_mask_bottom return row_tensor def calc_height(h, overlap, rows): num_rows = 0 l_max = h for y in range(rows[0]): if num_rows > 0: l_max += (h-overlap) l_min = l_max - h else: l_max = h num_rows+=1 return l_max def overlay_rows(row_list, rows, overlap): c = 1 f_rows = [] base_height = 0 for i, row_tensor in enumerate(row_list): if c == 1: f_row = prepare_row(row_tensor.clone(), overlap, side='bottom') elif c == rows[0]: f_row = prepare_row(row_tensor.clone(), overlap, side='top') elif c > 0 and c < rows[0]: f_row = prepare_row(row_tensor.clone(), overlap, side='both') f_rows.append(f_row) if c == rows[0]: c = 0 c+=1 h = row_list[0].size(2) base_height = calc_height(h, overlap, rows) base_tensor = 
torch.zeros(3, base_height, row_list[0].size(3)).unsqueeze(0) num_rows = 0 l_max = row_list[0].size(3) for y in range(rows[0]): if num_rows > 0: l_max += (h-overlap) l_min = l_max - h base_tensor[:, :, l_min:l_max, :] = base_tensor[:, :, l_min:l_max, :] + f_rows[num_rows] else: base_tensor[:, :, :h, :] = f_rows[num_rows] l_max = h num_rows+=1 return base_tensor def rebuild_image(tensor_list, rows, overlap_hw): row_tensors = overlay_tiles(tensor_list, rows, overlap_hw[1]) full_tensor = overlay_rows(row_tensors, rows, overlap_hw[0]) return full_tensor test_tensor_1 = preprocess('brad_pitt.jpg', (1080,720)) test_tensor_2 = preprocess('starry_night_google.jpg', (1080,720)) print("2x2 Test") tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone()] rows = [2, 2] overlap = [540, 260] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_2x2.png') print("3x3 Test") tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(),test_tensor_1.clone(),] rows = [3, 3] overlap = [540, 540] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_3x3.png') print("3x4 Test") tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone()] rows = [3, 4] overlap = [540, 260] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_3x4.png') print("4x3 Test") tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone()] rows = [4, 3] overlap = [540, 260] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_4x3.png') print("4x4 Test") tensor_list = [test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone(), \ test_tensor_1.clone(), test_tensor_2.clone(), test_tensor_1.clone(), test_tensor_2.clone()] rows = [4, 4] overlap = [540, 260] complete_tensor = rebuild_image(tensor_list, rows, overlap) ft = deprocess(complete_tensor.clone()) ft.save('complete_tensor_4x4.png')
https://stackoverflow.com/questions/59547464/
How do I install Pytorch 1.3.1 with CUDA enabled
I have a conda environment on my Ubuntu 16.04 system. When I install Pytorch using: conda install pytorch and I try and run the script I need, I get the error message: raise AssertionError("Torch not compiled with CUDA enabled") From looking at forums, I see that this is because I have installed Pytorch without CUDA support. I then tried: conda install -c pytorch torchvision cudatoolkit=10.1 pytorch but now I get the error: from torch.utils.cpp_extension import BuildExtension, CUDAExtension File "/home/username/miniconda3/envs/super_resolution/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module> from torch._C import * ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found So it seems that these two installs are installing different versions of Pytorch(?). The first one that seemed to work was Pytorch 1.3.1. My question: How do I install Pytorch with CUDA enabled, but ensure it is version 1.3.1 so that it works with my system?
Given that your system is running Ubuntu 16.04, it comes with glibc installed. You can check your version by typing ldd --version. Keep in mind that PyTorch is compiled on CentOS which runs glibc version 2.17. Then check the CUDA version installed on your system nvcc --version Then install PyTorch as follows e.g. if your cuda version is 9.2: conda install pytorch torchvision cudatoolkit=9.2 -c pytorch If you get the glibc version error, try installing an earlier version of PyTorch. If neither of the above options work, then try installing PyTorch from sources. If you would like to set a specific PyTorch version to install, please set it as <version_nr> in the below command: conda install pytorch=<version_nr> torchvision cudatoolkit=9.2 -c pytorch
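Once the install finishes, a quick sanity check confirms the build is CUDA-enabled before running the actual script (the printed values below are illustrative):
import torch

print(torch.__version__)          # e.g. 1.3.1
print(torch.version.cuda)         # CUDA version the build targets; None on a CPU-only build
print(torch.cuda.is_available())  # should be True if the driver and toolkit are set up correctly
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))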
https://stackoverflow.com/questions/59563220/
pytorch access of weights and biases from a specific neuron
e.g: input_size = 784 hidden_sizes = [128, 64] output_size = 10 # Build a feed-forward network model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]), nn.ReLU(), nn.Linear(hidden_sizes[0], hidden_sizes[1]), nn.ReLU(), nn.Linear(hidden_sizes[1], output_size), nn.Softmax(dim=1)) I want to access all the weights and the bias of the N-th neuron in a specific layer. I know that model.layer[1].weight gives access to all weights in a layer, but I also want to know which neuron each weight belongs to.
Assume you have n neurons in the layer. The weight rows are in order from neuron[0] to neuron[n-1]. For example, to access the weights of a fully connected layer: Parameter containing: tensor([[-7.3584e-03, -2.3753e-02, -2.2565e-02, ..., 2.1965e-02, 1.0699e-02, -2.8968e-02], #1st neuron weights [ 2.2930e-02, -2.4317e-02, 2.9939e-02, ..., 1.1536e-02, 1.9830e-02, -1.4294e-02], #2nd neuron weights [ 3.0891e-02, 2.5781e-02, -2.5248e-02, ..., -1.5813e-02, 6.1708e-03, -1.8673e-02], #3rd neuron weights ..., [-1.2596e-03, -1.2320e-05, 1.9106e-02, ..., 2.1987e-02, -3.3817e-02, -9.4880e-03], #nth neuron weights [ 1.4234e-02, 2.1246e-02, -1.0369e-02, ..., -1.2366e-02, -4.7024e-04, -2.5259e-02], #(n+1)th neuron weights [ 7.5356e-03, 3.4400e-02, -1.0673e-02, ..., 2.8880e-02, -1.0365e-02, -1.2916e-02] #(n+2)th neuron weights], requires_grad=True) For instance [-7.3584e-03, -2.3753e-02, -2.2565e-02, ..., 2.1965e-02, 1.0699e-02, -2.8968e-02] will be all the weights of the 1st neuron: -7.3584e-03 is the weight from the 1st neuron of the previous layer (the 1st input) to this neuron, -2.3753e-02 is the weight from the 2nd input, -2.2565e-02 is the weight from the 3rd input, and so on. [ 2.2930e-02, -2.4317e-02, 2.9939e-02, ..., 1.1536e-02, 1.9830e-02, -1.4294e-02] will be all the weights of the 2nd neuron: 2.2930e-02 is the weight from the 1st input, -2.4317e-02 is the weight from the 2nd input, 2.9939e-02 is the weight from the 3rd input, and so on.
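Concretely, for the Sequential model in the question, a short sketch of pulling out exactly one neuron's parameters; for nn.Linear the weight has shape (out_features, in_features), so row n holds the incoming weights of neuron n and bias[n] is its bias:
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10), nn.Softmax(dim=1),
)

layer = model[2]                        # the second Linear layer (index 2 inside the Sequential)
n = 5                                   # which neuron of that layer we want
weights_of_neuron_n = layer.weight[n]   # shape (128,): one incoming weight per neuron of the previous layer
bias_of_neuron_n = layer.bias[n]        # that neuron's bias (a scalar)
print(weights_of_neuron_n.shape, bias_of_neuron_n.shape)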
https://stackoverflow.com/questions/59569330/
How does one vectorize reinforcement learning environments?
I have a Python class that conforms to OpenAI's environment API, but it's written in non-vectorized form i.e. it receives one input action per step and returns one reward per step. How do I vectorize the environment? I haven't been able to find any clear explanation on GitHub.
You could write a custom class that iterates over an internal tuple of environments while maintaining the basic Gym API. In practice, there will be some differences, because the underlying environments don't terminate on the same timestep. Consequently, it's easier to combine the standard step and reset functions in one method called step. Here's an example: class VectorEnv: def __init__(self, make_env_fn, n): self.envs = tuple(make_env_fn() for _ in range(n)) # Call this only once at the beginning of training (optional): def seed(self, seeds): assert len(self.envs) == len(seeds) return tuple(env.seed(s) for env, s in zip(self.envs, seeds)) # Call this only once at the beginning of training: def reset(self): return tuple(env.reset() for env in self.envs) # Call this on every timestep: def step(self, actions): assert len(self.envs) == len(actions) return_values = [] for env, a in zip(self.envs, actions): observation, reward, done, info = env.step(a) if done: observation = env.reset() return_values.append((observation, reward, done, info)) return tuple(return_values) # Call this at the end of training: def close(self): for env in self.envs: env.close() Then you can just instantiate it like this: import gym make_env_fn = lambda: gym.make('CartPole-v0') env = VectorEnv(make_env_fn, n=4) You'll have to do a little bookkeeping for your agent to handle the tuple of return values when you call step. This is also why I prefer to pass a function make_env_fn to __init__, because it's easy to add wrappers like gym.wrappers.Monitor that track statistics for each environment individually and automatically.
https://stackoverflow.com/questions/59569710/
Obtain a set of embeddings from a pretrained model - VGG16 PyTorch
For a certain project purpose I am trying to store the 1 * 4096 embeddings (The output right before the final layer) of around 6000 images into a pkl file. For the same, I am running an iteration over the 6000 images on vgg16 modified model in google colab. But it returns 'CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 15.90 GiB total capacity; 14.86 GiB already allocated; 1.88 MiB free; 342.26 MiB cached)' error. Whereas I have used the same dataset split into test-train for training and validating my model and that runs fine. I am wondering why obtaining and storing the embedding alone is becoming a heavy task in colab. Is there any other way I can obtain the embeddings and store in a pkl file other than the below code. embedding = [] vgg16 = vgg16.to(device) for x in range (0, len(inputImages)) : input = transformations(inputImages[x]) //pre processing input = torch.unsqueeze(input, 0) input = input.to(device) embedding.append(vgg16(input)) The code is interupted at the last line with the CUDA out of memory error.
The output that you generate with vgg16(input) is still on the GPU, and by default it also keeps the autograd graph around, because such outputs are normally used for calculating a loss afterwards. To avoid having every output stored in CUDA memory and eating up your GPU, move it to the CPU using .cpu().numpy(). If that throws an error, you might have to call .detach() as well to detach the tensor from the graph.
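A minimal sketch of the loop with those changes applied, plus torch.no_grad() so no autograd graph is kept at all (variable names follow the question; the pickle step is just one way to store the result):
import pickle
import torch

embedding = []
vgg16 = vgg16.to(device).eval()
with torch.no_grad():                            # nothing is kept for backprop
    for img in inputImages:
        x = transformations(img).unsqueeze(0).to(device)
        feat = vgg16(x)                          # the 1 x 4096 embedding
        embedding.append(feat.cpu().numpy())     # move it off the GPU right away

with open('embeddings.pkl', 'wb') as f:
    pickle.dump(embedding, f)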
https://stackoverflow.com/questions/59569936/
Max Pooling in VGG16 Before Global Average Pooling (GAP)?
I am currently using VGG16 with Global Average Pooling (GAP) before final classification layer. The VGG16 model used is the one provided by torchvision. However, I noticed that before the GAP layer, there is a Max Pooling layer. Is this okay or should the Max Pooling layer be removed before the GAP layer? The network architecture can be seen below. VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace=True) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace=True) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace=True) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace=True) (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (18): ReLU(inplace=True) (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace=True) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace=True) (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (25): ReLU(inplace=True) (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (27): ReLU(inplace=True) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=1) #GAP Layer (classifier): Sequential( (0): Linear(in_features=512, out_features=7, bias=True) ) ) Thanks in advance.
If you are going to train the classifier, it should be okay. Nonetheless, I wouldn't remove it either way. It is worth mentioning that the max-pooling is part of the original architecture, as can be seen in Table 1 of the original paper: https://arxiv.org/pdf/1409.1556.pdf.
https://stackoverflow.com/questions/59572222/
CNN Pytorch Error : Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same
I'm receiving the error, Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same Following is my code, device = torch.device('cuda:0') trainData = torchvision.datasets.FashionMNIST('/content/', train=True, transform=None, target_transform=None, download=True) testData = torchvision.datasets.FashionMNIST('/content/', train=False, transform=None, target_transform=None, download=True) class Net(nn.Module): def __init__(self): super().__init__() ''' Network Structure: input > (1)Conv2D > (2)MaxPool2D > (3)Conv2D > (4)MaxPool2D > (5)Conv2D > (6)MaxPool2D > (7)Linear > (8)LinearOut ''' # Creating the convulutional Layers self.conv1 = nn.Conv2d(in_channels=CHANNELS, out_channels=32, kernel_size=3) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3) self.flatten = None # Creating a Random dummy sample to get the Flattened Dimensions x = torch.randn(CHANNELS, DIM, DIM).view(-1, CHANNELS, DIM, DIM) x = self.convs(x) # Creating the Linear Layers self.fc1 = nn.Linear(self.flatten, 512) self.fc2 = nn.Linear(512, CLASSES) def convs(self, x): # Creating the MaxPooling Layers x = F.max_pool2d(F.relu(self.conv1(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) if not self.flatten: self.flatten = x[0].shape[0] * x[0].shape[1] * x[0].shape[2] return x # FORWARD PASS def forward(self, x): x = self.convs(x) x = x.view(-1, self.flatten) sm = F.relu(self.fc1(x)) x = F.softmax(self.fc2(sm), dim=1) return x, sm x_train, y_train = training_set x_train, y_train = x_train.to(device), y_train.to(device) optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE) loss_func = nn.MSELoss() loss_log = [] for epoch in range(EPOCHS): for i in tqdm(range(0, len(x_train), BATCH_SIZE)): x_batch = x_train[i:i+BATCH_SIZE].view(-1, CHANNELS, DIM, DIM).to(device) y_batch = y_train[i:i+BATCH_SIZE].to(device) net.zero_grad() output, sm = net(x_batch) loss = loss_func(output, y_batch.float()) loss.backward() optimizer.step() loss_log.append(loss) # print(f"Epoch : {epoch} || Loss : {loss}") return loss_log train_set = (trainData.train_data, trainData.train_labels) test_set = (testData.test_data, testData.test_labels) EPOCHS = 5 LEARNING_RATE = 0.001 BATCH_SIZE = 32 net = Net().to(device) loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) And this is the Error that I'm getting, RuntimeError Traceback (most recent call last) <ipython-input-8-0db1a1b4e37d> in <module>() 5 net = Net().to(device) 6 ----> 7 loss_log = train(net, train_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 6 frames <ipython-input-6-7de4a78e3736> in train(net, training_set, EPOCHS, LEARNING_RATE, BATCH_SIZE) 13 14 net.zero_grad() ---> 15 output, sm = net(x_batch) 16 loss = loss_func(output, y_batch.float()) 17 loss.backward() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) <ipython-input-5-4fddc427892a> in forward(self, x) 41 # FORWARD PASS 42 def forward(self, x): ---> 43 x = self.convs(x) 44 x = x.view(-1, self.flatten) 45 sm = F.relu(self.fc1(x)) <ipython-input-5-4fddc427892a> in convs(self, x) 31 32 # Creating the MaxPooling Layers ---> 33 x = F.max_pool2d(F.relu(self.conv1(x)), 
kernel_size=(2, 2)) 34 x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) 35 x = F.max_pool2d(F.relu(self.conv3(x)), kernel_size=(2, 2)) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input) 343 344 def forward(self, input): --> 345 return self.conv2d_forward(input, self.weight) 346 347 class Conv3d(_ConvNd): /usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in conv2d_forward(self, input, weight) 340 _pair(0), self.dilation, self.groups) 341 return F.conv2d(input, weight, self.bias, self.stride, --> 342 self.padding, self.dilation, self.groups) 343 344 def forward(self, input): RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same I double-checked that my Neural Net and my Inputs both are in GPU. I'm still getting this error and I don't understand why! Somebody, please help me to get out of this error.
Cast your input x_batch to float. Use x_batch = x_batch.float() before you pass it through your model.
https://stackoverflow.com/questions/59582663/
Android (Kotlin): how to get an asset file path (PyTorch Mobile)?
I'm following the PyTorch Mobile tutorial, but in Kotlin. I want to load the module "model.pt" from the assets folder, but I have no idea how to do that. Java (from the PyTorch Mobile "Hello World" tutorial): Module module = Module.load(assetFilePath(this, "model.pt")); Kotlin: val module = Module.load("?????")
Declare this function: fun assetFilePath(context: Context, asset: String): String { val file = File(context.filesDir, asset) try { val inpStream: InputStream = context.assets.open(asset) try { val outStream = FileOutputStream(file, false) val buffer = ByteArray(4 * 1024) var read: Int while (true) { read = inpStream.read(buffer) if (read == -1) { break } outStream.write(buffer, 0, read) } outStream.flush() } catch (ex: Exception) { ex.printStackTrace() } return file.absolutePath } catch (e: Exception) { e.printStackTrace() } return "" } And then use it as: val module = Module.load(assetFilePath(this, "model.pt"))
https://stackoverflow.com/questions/59588556/
Why should the (huggingface) Transformers library be installed on a virtual environment?
In the huggingface github it is written: You should install Transformers in a virtual environment. If you're unfamiliar with Python virtual environments, check out the user guide. Create a virtual environment with the version of Python you're going to use and activate it. Now, if you want to use Transformers, you can install it with pip. If you'd like to play with the examples, you must install it from source. Why should it be installed in a virtual python environment? What are the advantages of doing that rather than installing it on python as is?
Summing up the comments in a community answer: It's not needed to install huggingface Transformers in a virtual environment, it can be installed just like any other package though there are advantages of using a virtual environment, and is considered a good practice. You want to work in virtual envs for all Python work you do, so that you don't interfere the system install of Python, and so that you don't have a big global list of hundreds of packages that have nothing to do with each other and that may have conflicting requirements. Apart from that, using a virtual environment for your project also means that it is easily deployable to a different machine, because all the dependencies are self-contained and can be packaged up in one go. More in this answer
https://stackoverflow.com/questions/59589483/
How do I pass a color image through a Pytorch convolutional layer with custom filter?
I'm trying to visualize what happens when a color image passes through a convolutional layer. For that, I'm setting custom weights with zeroes and ones. The problem I'm facing is that I'm losing the 3D channels, and get a 1D channel after passing the data through the layer. import requests from io import BytesIO from PIL import Image import torch import torch.nn as nn import matplotlib.pyplot as plt import numpy as np link = 'https://audimediacenter-a.akamaihd.net/system/production/media/85094/images' \ '/2a4e98976b1f9088fe6ae883f2f29e4d8f3ed473/A1912967_x500.jpg?1575885688' r = requests.get(link, timeout=10) im = Image.open(BytesIO(r.content)) pic = np.array(im) horizontal_filter = torch.zeros(5, 5) horizontal_filter[2, :] = 1 print(horizontal_filter) This is my custom filter: tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [1., 1., 1., 1., 1.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]) Now I'm using the custom filter and repeat it to fit 3 channels. hz = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=5, stride=1, bias=None) hz.weight.data = horizontal_filter.type('torch.FloatTensor').repeat(1, 3, 1, 1) print(hz.weight.data.shape) This is the shape of the filter: torch.Size([1, 3, 5, 5]) I pass it through the convolutional filter and I'm losing the 3 channels: zz = hz(torch.tensor(pic[None, ...]).permute(0, 3, 1, 2).type('torch.FloatTensor')) print(np.transpose(zz.detach().numpy(), (0, 2, 3, 1)).shape) (1, 329, 496, 1) If I plot it, I don't have the colors anymore. z = np.transpose(zz.detach().numpy(), (0, 2, 3, 1))[0, :, :, 0] f, axarr = plt.subplots() axarr.imshow(z) plt.show() tl;dr: How do I pass a 3D picture through a convolutional layer and return an image with 3 channels?
The issue is that you are not repeating the channels enough. Since you have 3 input and output channels, the Conv weight matrix would be 3x3x5x5. Since you had set it to 1x3x5x5, it was able to output only 1 channel. You need to make the following change hz.weight.data = horizontal_filter.type('torch.FloatTensor').repeat(3, 3, 1, 1) Because of your filter, your output would have a max value of ~3700. So to view, divide by the max using z = z/np.max(z) and then you get
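Putting this together, a minimal sketch of the corrected layer using the same names as the question; the weight assignment is wrapped in torch.no_grad() to avoid autograd complaints:
hz = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=5, stride=1, bias=None)
with torch.no_grad():
    hz.weight.copy_(horizontal_filter.float().repeat(3, 3, 1, 1))   # weight shape [3, 3, 5, 5]

x = torch.tensor(pic[None, ...]).permute(0, 3, 1, 2).float()
zz = hz(x)                                        # output now keeps 3 channels
z = zz.detach().permute(0, 2, 3, 1).numpy()[0]
z = z / z.max()                                   # rescale into [0, 1] for plotting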
https://stackoverflow.com/questions/59592682/
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
As I've read through some previous questions, I'm getting this error that probably has something to do with a discrepancy between dimensions of tensors but since this is my very first attempt of running PyTorch so I'm coming here because I have far to little intuition about it. Wanted to run a non-standard dataset (which I'm pretty sure I'm loading fine) on a basic MNIST setup to mess with it and see what moves what. Traceback (most recent call last): File "C:/Users/Administrator/Desktop/pytong/proj/pytorch_cnnv2.py", line 110, in <module> loss = loss_func(output, b_y) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\loss.py", line 916, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1995, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py", line 1316, in log_softmax ret = input.log_softmax(dim) IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) The line making the fuss is the loss function: loss = loss_func(output, b_y) The rest of the code, sans imports and load: class CNNModel(nn.Module): def __init__(self): super(CNNModel, self).__init__() # Convolution 1 self.cnn1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1, padding=0) self.relu1 = nn.ReLU() # Max pool 1 self.maxpool1 = nn.MaxPool2d(kernel_size=2) # Convolution 2 self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=0) self.relu2 = nn.ReLU() # Max pool 2 self.maxpool2 = nn.MaxPool2d(kernel_size=2) # Fully connected self.fc1 = nn.Linear(338 * 4 * 4, 5) def forward(self, x): # Convolution 1 out = self.cnn1(x) print(out.shape) out = self.relu1(out) print(out.shape) # Max pool 1 out = self.maxpool1(out) print(out.shape) # Convolution 2 out = self.cnn2(out) print(out.shape) out = self.relu2(out) print(out.shape) # Max pool 2 out = self.maxpool2(out) print('++++++++++++++ out') print(out.shape) # out = out.reshape(-1, 169 * 4 * 4) out = out.view(out.size(0), -1) print(out.shape) print('-----------------------') # Linear function (readout) out = self.fc1(out) print(out.shape) print('=======================') return out if __name__ == '__main__': print("Number of train samples: ", len(train_data)) print("Number of test samples: ", len(test_data)) print("Detected Classes are: ", train_data.class_to_idx) # classes are detected by folder structure model = CNNModel() optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE) loss_func = nn.CrossEntropyLoss() # Training and Testing for epoch in range(EPOCHS): print(enumerate(train_data_loader)) for step, (x, y) in enumerate(train_data_loader): b_x = Variable(x) # batch x (image) b_y = Variable(y) # batch y (target) # print('============ b_x') # print(len(b_x)) # print(b_x.data) # print('============ b_y') # print(len(b_y)) # print(b_y.data) output = model(b_x)[0] loss = loss_func(output, b_y) optimizer.zero_grad() loss.backward() optimizer.step() if step % 50 == 0: test_x = Variable(test_data_loader) test_output, last_layer = model(test_x) pred_y = torch.max(test_output, 
1)[1].data.squeeze() accuracy = sum(pred_y == test_y) / float(test_y.size(0)) print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy) Plus the output of my prints that I've tried to use for diagnostics: torch.Size([100, 16, 60, 60]) torch.Size([100, 16, 60, 60]) torch.Size([100, 16, 30, 30]) torch.Size([100, 32, 26, 26]) torch.Size([100, 32, 26, 26]) ++++++++++++++ out torch.Size([100, 32, 13, 13]) torch.Size([100, 5408]) ----------------------- torch.Size([100, 5]) =======================
The problem is in this line: output = model(b_x)[0] The [0] changes the shape from [100, 5] to [5], and the loss expects the other way. Just remove it: output = model(b_x)
https://stackoverflow.com/questions/59593807/
How to find a good LR?
I'm very new to PyTorch and working on a project of mine to solve a Sudoku board. What I'm doing is giving the network a Tensor that has the board (9x9) and another 2 values, the first being the row and the second being the column. My network: class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(9 * 9 + 2, 32) # board + row and col self.fc2 = nn.Linear(32, 32) self.fc3 = nn.Linear(32, 9) # the number in that spot def forward(self, x): x = f.relu(self.fc1(x)) x = f.relu(self.fc2(x)) x = self.fc3(x) return f.log_softmax(x, dim=1) I have a for loop that iterates over each board, and then iterate on each block in the 9x9 grid and input the board with the row and column to the network. optimizer = optim.Adam(net.parameters(), lr=0.1) scheduler = lr_scheduler.CosineAnnealingLR(optimizer, len(quizzes), eta_min=0) for i, board in enumerate(quizzes): new_board = [[val for val in row] for row in board] # Don't affect original board for row in range(9): for col in range(9): if new_board[row][col] != 0: continue row_col[0] = row # the row value row_col[1] = col # the col value final_tensor = board_tensor.view(-1, 9 * 9 + 2) output = net(final_tensor) # type: Tensor solution_num = solutions[i][row][col] solution_tensor = torch.tensor(solution_num - 1, dtype=torch.long).reshape(-1) # do -1 because it needs to match the node. optimizer.zero_grad() loss = f.nll_loss(output, solution_tensor) loss.backward() optimizer.step() scheduler.step(epoch=epoch) new_board[row][col] = solution_num # Add the new value into the board. avg_loss += loss.item() count += 1 if i % 100 == 0: print(f"Loss: {round(avg_loss / count, 3)}. {i} / {len(quizzes)}. {epoch} / {EPOCHS}") avg_loss = 0 count = 0 Now, my issue is that the network is just guessing the same number over and over. Every time I reset the network it guesses a different number, but after a bit it just stays constant and doesn't guess any other number. This of course makes the accuracy 11.111% (1/9) and I don't know how to get past this. I tried using MSELoss instead of NLL_loss but it didn't change any result, and switched between optim.Adam to optim.SGE. I am quite new to this whole topic so I don't know which functions I should be using (log_softmax, Adam, SGE, and all those types of functions for loss / optimization). Does anyone know where I messed up? I tried changing the learning rate and adding a weight decay too, but that didn't help
I don't suspect your issue is with parameter tuning. I suspect that the model is unable to learn how to solve this problem in this way. To start: given a single cell on an unsolved (but let's assume solvable) Sudoku board, it's likely that other cells need to be solved in order to be able to know the correct value for the current cell. Asking your network to implicitly solve the entire puzzle to write down the answer for a single cell doesn't really make sense. Further, I'm not sure this task is something ML can solve in the approach you're using. If you really have your heart set on solving this using ML, add a tenth "I don't know the answer" class. Then alter your dataset to be aware of when a given cell is not yet knowable. This will still end up with a model that is limited to the skill level of the person creating the dataset though. This will also be a non-trivial amount of extra work and I think there are better ways for you to spend your time learning about ML. Side note: Sudoku is a graph colouring problem. Or depending on how you want to deal with it, a constrained integer program. Kinda weird to use ML to solve a solved problem IMHO.
https://stackoverflow.com/questions/59595270/
How to save model architecture in PyTorch?
I know I can save a model with torch.save(model.state_dict(), FILE) or torch.save(model, FILE). But neither of them saves the architecture of the model. So how can we save the architecture of a model in PyTorch, like creating a .pb file in TensorFlow? I want to apply different tweaks to my model. Do I have any better way than copying the whole class definition every time and creating a new class if I can't save the architecture of a model?
You can refer to this article to understand how to save the classifier. To make tweaks to a model, what you can do is create a new model which is a child of the existing model. class newModel(oldModelClass): def __init__(self): super(newModel, self).__init__() With this setup, newModel has all the layers as well as the forward function of oldModelClass. If you need to make tweaks, you can define new layers in the __init__ function and then write a new forward function that describes the new behaviour.
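If the goal is a single self-contained file comparable to TensorFlow's .pb (architecture plus weights, loadable without the original class definition), TorchScript is one option; a minimal sketch, assuming the model can be scripted or traced:
import torch

scripted = torch.jit.script(model)        # or torch.jit.trace(model, example_input)
scripted.save('model_scripted.pt')        # stores the architecture together with the weights

# later, no class definition is needed:
loaded = torch.jit.load('model_scripted.pt')
loaded.eval()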
https://stackoverflow.com/questions/59596075/
CNN PyTorch: how are parameters selected and how do they flow between layers?
I'm pretty new to CNN and have been following the below code. I'm not able to understand how and why have we selected the each argument of Conv2d() and nn.Linear () as they are i.e. the output, filter, channels, weights,padding and stride. I do understand the meaning of each though. Can someone very succinctly explain the flow for each layer? (Input Image Size is 32*32*3) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 16, 3, padding=1) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) self.conv3 = nn.Conv2d(32, 64, 3, padding=1) self.pool = nn.MaxPool2d(2, 2) self.fc1 = nn.Linear(64 * 4 * 4, 500) self.fc2 = nn.Linear(500, 10) self.dropout = nn.Dropout(0.25) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) x = x.view(-1, 64 * 4 * 4) x = self.dropout(x) x = F.relu(self.fc1(x)) x = self.dropout(x) x = self.fc2(x) return x
I think you'll find receptive field arithmetics useful for your understanding. Your net has 3 convolution layers each with kernel size of 3x3 and padding of 1 pixels, which means that the spatial output of your convolution layers is the same as their input. Each conv layer is followed by a max pooling with stride 2, that is, it reduces the spatial dimensions by a factor of 2. So, in the spatial domain, you have an input of size 32x32 after first conv and pool its dimensions are 16x16, after the second conv and pool it is 8x8 and after the third conv+pool it is 4x4. As for the "feature"/"channel" dimension: the input has 3 channels. The first conv layer has 16 filters ("out_channels=16") then 32 and finally 64. Thus, after three conv layers your feature map has 64 channels (per spatial location). Overall, an input of size 3x32x32 becomes 64x4x4 after the three conv+pooling layers defined by your network. a nn.Linear layer does not assign "spatial" meaning to its inputs and expects a 1D input (per entry in a minibatch), thus your forward function "eliminates" the spatial dimensions and converts x to a 1D vector using the x.view(-1, 64 * 4 * 4) command.
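A quick way to verify this arithmetic yourself is to push a dummy batch through the layers and print the shapes; a minimal sketch using the Net from the question:
import torch

net = Net()
x = torch.randn(1, 3, 32, 32)                              # one 32x32x3 input
x = net.pool(torch.relu(net.conv1(x))); print(x.shape)     # torch.Size([1, 16, 16, 16])
x = net.pool(torch.relu(net.conv2(x))); print(x.shape)     # torch.Size([1, 32, 8, 8])
x = net.pool(torch.relu(net.conv3(x))); print(x.shape)     # torch.Size([1, 64, 4, 4])
print(x.view(1, -1).shape)                                 # torch.Size([1, 1024]) == 64 * 4 * 4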
https://stackoverflow.com/questions/59598012/
Correct dimensions to convert a wide data frame to Pytorch tensor object?
I have this time-series data frame that has 56 columns and 36508 samples; 55 are predictors and the last one is the output. I'm trying to fit a LSTM neural network and while I'll be able to fit the model, I'm having some hard time converting the features to Pytorch tensor objects. Currently I have already normalised the data between 0 and 1 and also split the data into train and test sets. import torch import torch.nn as nn print(x_train.shape) (27380, 55) print(y_train.shape) (27380,) print(x_test.shape) (9128, 55) print(y_test.shape) (9128,) I've had no problems converting the target to a tensor object, since the series is only 1D like so: y_train = torch.FloatTensor(y_train).view(-1) print(y_train[:5]) tensor([0.7637, 0.6220, 0.6566, 0.6922, 0.6774]) But when it comes to converting the features, then I'm unable to figure out the dimensions that need to be specified. I've tried this: x_train = torch.FloatTensor(x_train).view(-1, 55) ValueError: could not determine the shape of object type 'DataFrame' How do I properly convert the features dataset to a tensor object? Sadly the documentation seems vague.
Try converting to numpy and then to tensors: x_train = torch.from_numpy(np.array(x_train).astype(np.float32))
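Alternatively, you can go through the DataFrame's underlying numpy array directly; a minimal sketch, assuming x_train is a pandas DataFrame and y_train a Series:
import torch

x_train_t = torch.tensor(x_train.values, dtype=torch.float32)          # (27380, 55) float tensor
y_train_t = torch.tensor(y_train.values, dtype=torch.float32).view(-1)
print(x_train_t.shape, y_train_t.shape)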
https://stackoverflow.com/questions/59598574/
What is temperature in Self Attention Terminology?
I was reading the paper's code "Attention is All You Need" . Code linked at here. I found this term called temperature. How is it related to the Q,K,V formula for Attention. My understanding of Self Attention is Attention = Softmax(matmul(Q,K.T),dim=-1) Output = Attention.V Looking for something to correct or augment my understanding.
In the paper they define "Scaled Dot-Product Attention" as : Attention(Q, K, V) = matmul(softmax(matmul(Q,K.T) / sqrt(dk)), V) where dk is the dimension of queries (Q) and keys(K) In the implementation, temperature seems to be the square root of dk, as it's called from the init part of MultiHeadAttention class : self.attention = ScaledDotProductAttention(temperature=d_k ** 0.5) and it's used in ScaledDotProductAttention class which implements the formula above: attn = torch.matmul(q / self.temperature, k.transpose(2, 3)) ScaledDotProductAttention class : https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/Modules.py#L7
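A minimal self-contained sketch of that formula, with temperature playing the role of sqrt(dk) (the shapes are illustrative):
import torch
import torch.nn.functional as F

batch, heads, seq_len, d_k = 2, 8, 10, 64
q = torch.randn(batch, heads, seq_len, d_k)
k = torch.randn(batch, heads, seq_len, d_k)
v = torch.randn(batch, heads, seq_len, d_k)

temperature = d_k ** 0.5
attn = torch.matmul(q / temperature, k.transpose(-2, -1))   # scaled attention scores
attn = F.softmax(attn, dim=-1)
output = torch.matmul(attn, v)                              # weighted sum of values
print(output.shape)                                         # torch.Size([2, 8, 10, 64])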
https://stackoverflow.com/questions/59604092/
Multilingual BERT sentence vector captures language used more than meaning - working as intended?
Playing around with BERT, I downloaded the Huggingface Multilingual Bert and entered three sentences, saving their sentence vectors (the embedding of [CLS]), then translated them via Google Translate, passed them through the model and saved their sentence vectors. I then compared the results using cosine similarity. I was surprised to see that each sentence vector was pretty far from the one generated from the sentence translated from it (0.15-0.27 cosine distance) while different sentences from the same language were quite close indeed (0.02-0.04 cosine distance). So instead of having sentences of similar meaning (but different languages) grouped together (in 768 dimensional space ;) ), dissimilar sentences of the same language are closer. To my understanding the whole point of Multilingual Bert is inter-language transfer learning - for example training a model (say, and FC net) on representations in one language and having that model be readily used in other languages. How can that work if sentences (of different languages) of the exact meaning are mapped to be more apart than dissimilar sentences of the same language? My code: import torch import transformers from transformers import AutoModel,AutoTokenizer bert_name="bert-base-multilingual-cased" tokenizer = AutoTokenizer.from_pretrained(bert_name) MBERT = AutoModel.from_pretrained(bert_name) #Some silly sentences eng1='A cat jumped from the trees and startled the tourists' e=tokenizer.encode(eng1, add_special_tokens=True) ans_eng1=MBERT(torch.tensor([e])) eng2='A small snake whispered secrets to large cats' t=tokenizer.tokenize(eng2) e=tokenizer.encode(eng2, add_special_tokens=True) ans_eng2=MBERT(torch.tensor([e])) eng3='A tiger sprinted from the bushes and frightened the guests' e=tokenizer.encode(eng3, add_special_tokens=True) ans_eng3=MBERT(torch.tensor([e])) # Translated to Hebrew with Google Translate heb1='Χ—ΧͺΧ•Χœ Χ§Χ€Χ₯ ΧžΧ”Χ’Χ₯ Χ•Χ”Χ‘Χ”Χ™Χœ אΧͺ Χ”Χͺיירים' e=tokenizer.encode(heb1, add_special_tokens=True) ans_heb1=MBERT(torch.tensor([e])) heb2='Χ Χ—Χ© קטן ΧœΧ—Χ© Χ‘Χ•Χ“Χ•Χͺ ΧœΧ—ΧͺΧ•ΧœΧ™Χ Χ’Χ“Χ•ΧœΧ™Χ' e=tokenizer.encode(heb2, add_special_tokens=True) ans_heb2=MBERT(torch.tensor([e])) heb3='נמר Χ¨Χ₯ ΧžΧ”Χ©Χ™Χ—Χ™Χ Χ•Χ”Χ€Χ—Χ™Χ“ אΧͺ האורחים' e=tokenizer.encode(heb3, add_special_tokens=True) ans_heb3=MBERT(torch.tensor([e])) from scipy import spatial import numpy as np # Compare Sentence Embeddings result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_heb1[1].data.numpy()) print ('Eng1-Heb1 - Translated sentences',result) result = spatial.distance.cosine(ans_eng2[1].data.numpy(), ans_heb2[1].data.numpy()) print ('Eng2-Heb2 - Translated sentences',result) result = spatial.distance.cosine(ans_eng3[1].data.numpy(), ans_heb3[1].data.numpy()) print ('Eng3-Heb3 - Translated sentences',result) print ("\n---\n") result = spatial.distance.cosine(ans_heb1[1].data.numpy(), ans_heb2[1].data.numpy()) print ('Heb1-Heb2 - Different sentences',result) result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng2[1].data.numpy()) print ('Heb1-Heb3 - Similiar sentences',result) print ("\n---\n") result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng2[1].data.numpy()) print ('Eng1-Eng2 - Different sentences',result) result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng3[1].data.numpy()) print ('Eng1-Eng3 - Similiar sentences',result) #Output: """ Eng1-Heb1 - Translated sentences 0.2074061632156372 Eng2-Heb2 - Translated sentences 0.15557605028152466 Eng3-Heb3 - Translated sentences 0.275478720664978 --- 
Heb1-Heb2 - Different sentences 0.044616520404815674 Heb1-Heb3 - Similar sentences 0.027982771396636963 --- Eng1-Eng2 - Different sentences 0.027982771396636963 Eng1-Eng3 - Similar sentences 0.024596810340881348 """ P.S. At least the Heb1 was closer to Heb3 than to Heb2. This was also observed for the English equivalents, but less so.
The [CLS] token somehow represents the input sequence, but how exactly it does so is difficult to say. The language is of course an important characteristic of a sentence, probably more so than meaning. BERT is a pretrained model which tries to model such characteristics as meaning, structure and also language. If you want a model that helps you identify whether two sentences of different languages mean the same thing, I can think of two different approaches: First approach: you can train a classifier (SVM, logistic regression or even some neural nets such as a CNN) on that task. Inputs: two [CLS] tokens; output: same meaning or not same meaning. As training data, you could choose [CLS]-token pairs of sentences in different languages which are either of the same meaning or not. To get meaningful results, you would need a lot of such sentence pairs. Luckily you can either generate them via Google Translate, or use parallel texts such as the Bible, which exists in a lot of languages, and extract sentence pairs from there. Second approach: fine-tune the BERT model on exactly that task. As in the previous approach, you need a lot of training data. A sample input to the BERT model would look like this: A cat jumped from the trees and startled the tourists [SEP] Χ—ΧͺΧ•Χœ Χ§Χ€Χ₯ ΧžΧ”Χ’Χ₯ Χ•Χ”Χ‘Χ”Χ™Χœ אΧͺ Χ”Χͺיירים To classify whether those sentences are of the same meaning, you would add a classification layer on top of the [CLS] token and fine-tune the whole model on that task. Note: I have never worked with a multilingual BERT model; those approaches are just what comes to my mind to accomplish the mentioned task. If you try them, I would be interested to know how they perform.
https://stackoverflow.com/questions/59619760/
What is a buffer in Pytorch?
I understand what register_buffer does and the difference between register_buffer and register_parameters. But what is the precise definition of a buffer in PyTorch?
This can be answered looking at the implementation: def register_buffer(self, name, tensor): if '_buffers' not in self.__dict__: raise AttributeError( "cannot assign buffer before Module.__init__() call") elif not isinstance(name, torch._six.string_classes): raise TypeError("buffer name should be a string. " "Got {}".format(torch.typename(name))) elif '.' in name: raise KeyError("buffer name can't contain \".\"") elif name == '': raise KeyError("buffer name can't be empty string \"\"") elif hasattr(self, name) and name not in self._buffers: raise KeyError("attribute '{}' already exists".format(name)) elif tensor is not None and not isinstance(tensor, torch.Tensor): raise TypeError("cannot assign '{}' object to buffer '{}' " "(torch Tensor or None required)" .format(torch.typename(tensor), name)) else: self._buffers[name] = tensor That is, the buffer's name: must be a string: not isinstance(name, torch._six.string_classes) cannot contain a . (dot): '.' in name cannot be an empty string: name == '' cannot be an attribute of the Module: hasattr(self, name) should be unique: name not in self._buffers and the tensor (guess what?): should be a Tensor: isinstance(tensor, torch.Tensor) So, the buffer is just a tensor with these properties, registered in the _buffers attribute of a Module;
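In practical terms, a buffer is part of the module's state (it moves with .to(device) and is saved in state_dict()) but it is not a Parameter, so the optimizer never updates it; a minimal sketch:
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('running_sum', torch.zeros(3))   # buffer: state, not trained
        self.weight = nn.Parameter(torch.randn(3))             # parameter: trained

m = MyModule()
print([name for name, _ in m.named_parameters()])   # only 'weight'
print(list(m.state_dict().keys()))                  # includes both 'weight' and 'running_sum'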
https://stackoverflow.com/questions/59620431/
Replace all indices in tensor within a range with 1s
def generate_mask(data : list, max_seq_len : int): """ Generates a mask for data where each element is expected to be max_seq_len length after padding Args: data : The data being forwarded through LSTM after being converted to a tensor max_seq_len : The length of the names after being padded """ batch_sz = len(data) ret = torch.zeros(1,batch_sz, max_seq_len, dtype=torch.bool) for i in range(batch_sz): name = data[i] for letter_idx in range(len(name)): ret[0][i][letter_idx] = 1 return ret I have this code for generating a mask and I really hate how I'm doing it. Essentially as you can see I'm just going through every name and turning each index from 0 to name length to 1, I'd prefer a more elegant way to do this.
Well, you can simplify to something like this: # [...] for i in range(batch_sz): ret[0, i, :len(data[i])] = 1
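If you want to drop the Python loops entirely, a broadcasting comparison against the sequence lengths builds the same mask; a minimal sketch with the same arguments as the original function:
lengths = torch.tensor([len(name) for name in data])                   # [batch_sz]
ret = torch.arange(max_seq_len)[None, :] < lengths[:, None]            # [batch_sz, max_seq_len], dtype=torch.bool
ret = ret.unsqueeze(0)                                                 # [1, batch_sz, max_seq_len] to match the original shape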
https://stackoverflow.com/questions/59620634/
Despite installing the torchvision PyTorch library, I am getting an error saying that there is no module named torchvision
The error that I am getting when I use import torchvision is this: Error Message "*Traceback (most recent call last): File "/Users/gokulsrin/Desktop/torch_basics/data.py", line 4, in <module> import torchvision ModuleNotFoundError: No module named 'torchvision'*" I don't know what to do. I have tried changing the version of python from the native one to the one downloaded through anaconda. I am using anaconda as a package manager and have installed torch vision through anaconda as well as through pip commands.
From the PyTorch installation docs you should follow these steps: In Anaconda use this command: conda install pytorch torchvision cpuonly -c pytorch In pip use this command: pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html Note: if you have a CUDA-enabled card you can change the cpuonly option to cudatoolkit=10.1 or cudatoolkit=9.2 After successfully installing the package you can import it with the command import torchvision; the import should complete without any error. Otherwise, something went wrong when downloading the package from the Internet.
https://stackoverflow.com/questions/59621736/
classification using 4-channel images in pytorch
I have some grayscale and color images with labels. I want to combine these grayscale and color images into 4-channel images and run transfer learning using the 4-channel images. How can I do that?
If I understand the question correctly you want to combine 1-channel images and 3-channel images to get a 4-channel image and use this as your input. If this is what you want to do you can just use torch.cat(). Some example code of loading two images and combining them along the channel dimension: import numpy as np import torch from PIL import Image image_rgb = Image.open(path_to_rgb_image) image_rgb_tensor = torch.from_numpy(np.array(image_rgb)) image_rgb.close() image_grayscale = Image.open(path_to_grayscale_image) image_grayscale_tensor = torch.from_numpy(np.array(image_grayscale)) image_grayscale.close() image_input = torch.cat([image_rgb_tensor, image_grayscale_tensor], dim=2) I assumed that the grayscale image you want to use translates to a tensor with the shape [..., ..., 1] and the RGB image to [..., ..., 3].
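One extra caveat for the transfer-learning part: most pretrained backbones expect 3-channel input, so the first convolution has to be replaced with a 4-channel one. A minimal sketch with a ResNet-18 backbone, purely as an assumed example (mean-initialising the fourth channel is just one reasonable choice):
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)
old_conv = model.conv1                           # Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    model.conv1.weight[:, :3] = old_conv.weight                               # reuse the pretrained RGB filters
    model.conv1.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)     # initialise the extra channel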
https://stackoverflow.com/questions/59622376/
How to use a PyTorch Tensor in OpenCV without converting it to a numpy array?
I’m trying to develop a text detection application with Pytorch and Opencv in Python. I can use Pytorch tensor with Opencv like below. val = y[0,:,:,0].data.cpu().numpy() cv2.threshold(val , 0.4, 1, 0) But it takes a lot of time. I need to do this operation by using the tensor object. How can I do that?
Given that the last 0 in your threshold call means cv.THRESH_BINARY, it applies the rule dst = maxval if src > thresh else 0. As your maxval is set to 1, you can replace this threshold call with something like this: (y[0,:,:,0] > 0.4).float() I am casting to float, but you can change that as you wish, of course. Or even to something like: (y[0,:,:,0] > 0.4).to(dtype=y.dtype) so that it keeps the same data type.
https://stackoverflow.com/questions/59624518/
Incorrect predictions on extracted images from text
I trained a model in PyTorch on the EMNIST data set - and got about 85% accuracy on the test set. Now, I have an image of handwritten text from which I have extracted individual letters, but I'm getting very poor accuracy on the images that I have extracted. One hot mappings that I'm using - letters_EMNIST = {0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5', 6: '6', 7: '7', 8: '8', 9: '9', 10: 'A', 11: 'B', 12: 'C', 13: 'D', 14: 'E', 15: 'F', 16: 'G', 17: 'H', 18: 'I', 19: 'J', 20: 'K', 21: 'L', 22: 'M', 23: 'N', 24: 'O', 25: 'P', 26: 'Q', 27: 'R', 28: 'S', 29: 'T', 30: 'U', 31: 'V', 32: 'W', 33: 'X', 34: 'Y', 35: 'Z', 36: 'a', 37: 'b', 38: 'd', 39: 'e', 40: 'f', 41: 'g', 42: 'h', 43: 'n', 44: 'q', 45: 'r', 46: 't'} For reference, this is an example of the image used for testing data - And this is an example of the image I extracted - How can I debug this?
First of all, you should check your extraction technique and whether it works correctly. Rest of the answer assumes this step is done. Distribution of EMNIST and distribution of your extracted data is probably quite different, hence it might be hard to obtain good results. There are some steps you may do to improve the score though. Additional data If you have some way to extract more images of letters and ciphers and label them appropriately you should use it during neural net training. The more of those you get, the better your results probably be (provided data is of quite high quality, e.g. not many false positives). Data augmentation You could do this without too much work I think. You have to remember though that data augmentation has to preserve labels. So no things like flipping (it would be fine for number 8 but u flipped could become n). Augmentations which should be fine: Small rotations (up to 20 degrees or so) Small Gaussian noise or similar CutOut with small patches (black rectangulars of size 3x3 pixels or similar being zeroed out on the image) Gentle spatial transformations (rescale, shift, linear transformations) MixUp (you mix linearly two images with different labels (e.g. image of A multiplied by 0.6 and cipher 2 multiplied by 0.4 and try to classify it as 0.6 A and 0.4 2). Remember labels dont have to be solely binary. This should help your network not to be overconfident with it's predictions You can find all of those in albumentations third party library. Model augmentation For your model you could employ thing like dropout (be careful with it's integration with batch norm though), shake shake, Stochastic Depth etc. Final You can use all of those, remember to test how it performs. I tried to list them with the most promising approach on top. One possibility would be to make the model more robust to variance via augmentation.
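As an illustration of the augmentation step, a minimal torchvision pipeline covering the label-preserving transforms listed above (albumentations offers equivalents for each, and RandomErasing needs a recent torchvision); the exact magnitudes are assumptions you would tune:
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(20),                                          # small rotations
    transforms.RandomAffine(0, translate=(0.1, 0.1), scale=(0.9, 1.1)),     # gentle spatial transforms
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.01, 0.05)),                    # CutOut-style patches on the tensor
])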
https://stackoverflow.com/questions/59629985/
When I run deep learning training code on Google Colab, do the resulting weights and biases get saved somewhere?
I am training some deep learning code from this repository on a Google Colab notebook. The training is ongoing and seems like it is going to take a day or two. I am new to deep learning, but my question: Once the Google Colab notebook has finished running the training script, does this mean that the resulting weights and biases will be hard written to a model somewhere (in the repository folder that I have on my Google Drive), and therefore I can then run the code on any test data I like at any point in the future? Or, once I close the Google Colab notebook, do I lose the weight and bias information and would have to run the training script again if I wanted to use the neural network? I realise that this might depend on the details of the script (again, the repository is here), but I thought that there might be a general way that these things work also. Any help in understanding would be greatly appreciated.
No; Colab comes with no built-in checkpointing; any saving must be done by the user - so unless the repository code does so, it's up to you. Note that the repo would need to figure out how to connect to a remote server (or connect to your local device) for data transfer; skimming through its train.py, there's no such thing. How to save model? See this SO; for a minimal version - the most common, and a reliable option is to "mount" your Google Drive onto Colab, and point save/load paths to direct from google.colab import drive drive.mount('/content/drive') # this should trigger an authentication prompt %cd '/content/drive/My Drive/' # alternatively, %cd '/content/drive/My Drive/my_folder/' Once cd'd into, for example, DL Code in your My Drive (see below), you can simply do model.save("model0.h5"), and this will create model0.h5 in DL Code, containing entire model architecture & its optimizer. For just weights, use model.save_weights().
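Since the repository in question is PyTorch code, the save call itself would typically be torch.save rather than model.save; a minimal sketch after mounting Drive (the checkpoint path is an assumption):
import os
import torch

ckpt_dir = '/content/drive/My Drive/checkpoints'
os.makedirs(ckpt_dir, exist_ok=True)
torch.save(model.state_dict(), os.path.join(ckpt_dir, 'model0.pth'))     # persists to Drive

# in a later session, after mounting Drive again:
model.load_state_dict(torch.load(os.path.join(ckpt_dir, 'model0.pth')))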
https://stackoverflow.com/questions/59631255/
How to add a Validation and Test Set in Pytorch Model?
I have Non Linear Regression Model ANN( X = [1000,3] , Y = [1000,8] ) with One hidden Layer(Nh = 6). How to add a Validation(10% Dataset) and Test Set(10% Dataset) in this model ? Model : N, D_in, H, D_out = x.shape[0], x.shape[1], 6, y.shape[1] model = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(D_in, H)), #('Sig', nn.Sigmoid()), ('ISRU', ISRU()), # Add ISRU ('fc2', nn.Linear(H, D_out))])) # Error ----- loss_fn = torch.nn.L1Loss(reduction='mean') # Train ----- optimizer = torch.optim.Adam(model.parameters(), lr=1,eps=2**(-EPS)) epoch = 250 for t in range(epoch): # Forward pass: compute predicted y by passing x to the model. clear_output(wait=True) y_pred = model(X) # Compute and print loss. loss = loss_fn(y_pred, Y) if t % 100 == 99: print(t, loss.item()) optimizer.zero_grad() ; loss.backward() ; optimizer.step() ; if loss.item() < diff : lista = np.vstack((lista, [loss.item(),2,EPS])) ; diff = loss.item()
There are many ways to do this. You can use what @Shai suggested; I want to add what I would do. I often use train_test_split to split my data into train and test sets and then convert the train and test data to TensorDataset. If you would like an easier solution, you can take a look at skorch, a scikit-learn wrapper for PyTorch. I find it easy to use and closer to the Keras API; skorch will perform a train/test split automatically when you start training, but you can also pass your own custom train and test sets to it.
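A minimal sketch of an 80/10/10 split with plain PyTorch utilities, assuming x and y are already tensors of the shapes described in the question:
from torch.utils.data import TensorDataset, DataLoader, random_split

dataset = TensorDataset(x, y)                        # x: [1000, 3], y: [1000, 8]
n_val = n_test = int(0.1 * len(dataset))
n_train = len(dataset) - n_val - n_test
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)
test_loader = DataLoader(test_set, batch_size=32)
After each training epoch you would run the model on val_loader inside torch.no_grad() to monitor validation loss, and touch test_loader only once at the very end.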
https://stackoverflow.com/questions/59631404/
Understanding batch_size in CNNs
Say that I have a CNN model in Pytorch and 2 inputs of the following sizes: input_1: [2, 1, 28, 28] input_2: [10, 1, 28, 28] Notes: To reiterate, input_1 is batch_size == 2 and input_2 is batch_size == 10. Input_2 is a superset of input_1. That is, input_2 contains the 2 images in input_1 in the same position. My question is: how does the CNN process the images in both inputs? I.e. does the CNN process every image in the batch sequentially? Or does it concatenate all of the images in the batch size and then perform convolutions per usual? The reason I ask is because: The output of CNN(input_1) != CNN(input_2)[:2] That is, the difference in batch_size results in slightly different CNN outputs for both inputs for the same positions.
CNN is a general term for convolutional neural networks. Depending on the particular architecture it may do different things. The main building blocks of CNNs are convolutions which do not cause any "crosstalk" between items in batch and pointwise nonlinearities like ReLU which do not either. However, most architectures also involve other operations, such as normalization layers - arguably the most popular is batch norm which does introduce crosstalk. Many models will also use dropout which behaves stochastically outside of eval mode (by default models are in train mode). Both above effects could lead to the observed outcome above, as well as other custom operations which could cause cross-talk across the batch. Aside from that, because of numeric precision issues, your code may not give exactly the same results, even if it doesn't feature any cross-batch operations. This error is very minor but sufficient to manifest itself when checking with CNN(input_1) == CNN(input_2)[:2]. It is better to use allclose instead, with a suitable epsilon.
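A minimal sketch of how to check this in practice: switch to eval mode (which freezes batch-norm statistics and disables dropout) and compare with a tolerance instead of exact equality:
import torch

model.eval()                             # deterministic: no dropout, running batch-norm stats
with torch.no_grad():
    out1 = model(input_1)                # batch of 2
    out2 = model(input_2)                # batch of 10 whose first two items equal input_1
print(torch.allclose(out1, out2[:2], atol=1e-6))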
https://stackoverflow.com/questions/59633685/
Changing back to CPU with Google Colab
I am running a model on Google Colab. The final step I would like to do is print an image, and show the top 5 classification predictions of the model. Here is the code: image = process_image(imgpath) index = 17 plot = imshow(image, ax = plt) plot.axis('off') plot.title(cat_to_name[str(index)]) plot.show() axes = predict(image, model) yaxis = [cat_to_name[str(i)] for i in np.array(axes[1][0])] y_pos = np.arange(len(yaxis)) xaxis = np.array(axes[0][0]) plt.barh(y_pos, xaxis) plt.xlabel('probability') plt.yticks(y_pos, yaxis) plt.title('probability of flower classification') plt.show() I am getting this error when I run this cell: TypeError Traceback (most recent call last) <ipython-input-19-d0bb6f461eec> in <module>() 11 axes = predict(image, model) 12 ---> 13 yaxis = [cat_to_name[str(i)] for i in np.array(axes[1][0])] 14 y_pos = np.arange(len(yaxis)) 15 xaxis = np.array(axes[0][0]) /usr/local/lib/python3.6/dist-packages/torch/tensor.py in __array__(self, dtype) 447 def __array__(self, dtype=None): 448 if dtype is None: --> 449 return self.numpy() 450 else: 451 return self.numpy().astype(dtype, copy=False) TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. Is there a way to temporarily use CPU on Google Colab and in this particular step? I don't really need to switch back to GPU because this is the final step in my code.
Try the following: yaxis = [cat_to_name[str(i)] for i in axes[1][0].cpu()] xaxis = axes[0][0].cpu().numpy()
https://stackoverflow.com/questions/59635097/
How to fit custom data into Pytorch DataLoader?
I have pre-processed and normalized my data, and split into training set and testing set. I have the following dimensions for my x_train and y_train: Shape of X_Train: (708, 256, 3) Shape of Y_Train: (708, 4) As you can see, x_train is 3-D. How can I go about inputting it into the pytorch dataloader? What do I put for the class block? class training_set(Dataset): def __init__(self,X,Y): def __len__(self): return def __getitem__(self, idx): return training_set = torch.utils.data.TensorDataset(x_train, y_train) train_loader = torch.utils.data.DataLoader(training_set, batch_size=50, shuffle=True)
x_train, y_train = torch.rand((708, 256, 3)), torch.rand((708, 4)) # data class training_set(Dataset): def __init__(self,X,Y): self.X = X # set data self.Y = Y # set labels def __len__(self): return len(self.X) # return length def __getitem__(self, idx): return [self.X[idx], self.Y[idx]] # return one sample as [data, label] training_dataset = training_set(x_train, y_train) train_loader = torch.utils.data.DataLoader(training_dataset, batch_size=50, shuffle=True) Actually, you don't need a custom dataset class here, because yours is a simple case: you can use TensorDataset directly, like this: training_dataset = torch.utils.data.TensorDataset(x_train, y_train) train_loader = torch.utils.data.DataLoader(training_dataset, batch_size=50, shuffle=True) Both will return the same results.
https://stackoverflow.com/questions/59637308/
how to vectorize the scatter-matmul operation
I have many matrices w1, w2, w3...wn with shapes (k*n1, k*n2, k*n3...k*nn) and x1, x2, x3...xn with shapes (n1*m, n2*m, n3*m...nn*m). I want to get w1@x1, w2@x2, w3@x3 ... respectively. The result is multiple k*m matrices that can be concatenated into a large matrix with shape (k*n)*m. Multiplying them one by one will be slow. How can I vectorize this operation? Note: the input can be a k*(n1+n2+n3+...+nn) matrix and a (n1+n2+n3+...+nn)*m matrix, and we may use a batch index to indicate those submatrices. This operation is related to the scatter operations implemented in pytorch_scatter, so I refer to it as "scatter_matmul".
You can vectorize your operation by creating a large block-diagonal matrix W of shape n*kx(n1+..+nn) where the w_i matrices are the blocks on the diagonal. Then you can vertically stack all x matrices into an X matrix of shape (n1+..+nn)xm. Multiplying the block diagonal W with the vertical stack of all x matrices, X: Y = W @ X results with Y of shape (k*n)xm which is exactly the concatenated large matrix you are seeking. If the shape of the block diagonal matrix W is too large to fit into memory, you may consider making W sparse and compute the product using torch.sparse.mm.
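A minimal dense sketch using torch.block_diag (available in recent PyTorch releases); for a large number of blocks the sparse route mentioned above is the practical choice:
import torch

k, m = 4, 6
sizes = [3, 5, 2]                                    # n1, n2, n3
ws = [torch.randn(k, n) for n in sizes]              # the w_i matrices
xs = [torch.randn(n, m) for n in sizes]              # the x_i matrices

W = torch.block_diag(*ws)                            # [k * 3, n1 + n2 + n3]
X = torch.cat(xs, dim=0)                             # [n1 + n2 + n3, m]
Y = W @ X                                            # [k * 3, m], row block i equals ws[i] @ xs[i]

Y_ref = torch.cat([w @ x for w, x in zip(ws, xs)], dim=0)
print(torch.allclose(Y, Y_ref))                      # True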
https://stackoverflow.com/questions/59640574/
What to define as entrypoint when initializing a pytorch estimator with a custom docker image for training on AWS Sagemaker?
So I created a docker image for training. In the dockerfile I have an entrypoint defined such that when docker run is executed, it will start running my python code. To use this on aws sagemaker in my understanding I need to create a pytorch estimator in a jupyter notebook in sagemaker. I tried something like this: import sagemaker from sagemaker.pytorch import PyTorch sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() estimator = PyTorch(entry_point='train.py', role=role, framework_version='1.3.1', image_name='xxx.ecr.eu-west-1.amazonaws.com/xxx:latest', train_instance_count=1, train_instance_type='ml.p3.xlarge', hyperparameters={}) estimator.fit({}) In the documentation I found that as image name I can specify the link the my docker image on aws ecr. When I try to execute this it keeps complaining [Errno 2] No such file or directory: 'train.py' It complains immidiatly, so surely I am doing something completely wrong. I would expect that first my docker image should run, and than it could find out that the entry point does not exist. But besides this, why do I need to specify an entry point, as in, should it not be clear that the entry to my training is simply docker run? For maybe better understanding. The entrypoint python file in my docker image looks like this: if __name__=='__main__': parser = argparse.ArgumentParser() # Hyperparameters sent by the client are passed as command-line arguments to the script. parser.add_argument('--epochs', type=int, default=5) parser.add_argument('--batch_size', type=int, default=16) parser.add_argument('--learning_rate', type=float, default=0.0001) # Data and output directories parser.add_argument('--output_data_dir', type=str, default=os.environ['OUTPUT_DATA_DIR']) parser.add_argument('--train_data_path', type=str, default=os.environ['CHANNEL_TRAIN']) parser.add_argument('--valid_data_path', type=str, default=os.environ['CHANNEL_VALID']) # Start training ... Later I would like to specify the hyperparameters and data channels. But for now I simply do not understand what to put as entry point. In the documentation it says that the entrypoint is required and it should be a local/global path to the entrypoint...
If you really would like to use a completely separate, self-built docker image, you should create an Amazon SageMaker Algorithm (which is one of the options in the SageMaker menu). There you have to specify a link to your docker image on Amazon ECR as well as the input parameters, data channels etc. When choosing this option, you should not use the PyTorch estimator but the Algorithm estimator. This way you indeed don't have to specify an entry point, because it simply runs the docker image when training and the default entrypoint can be defined in your dockerfile. The PyTorch estimator can be used when you have your own model code but would like to run it in an off-the-shelf SageMaker PyTorch docker image. This is why you have to, for example, specify the PyTorch framework version. In this case the entry point file by default should be placed next to where your Jupyter notebook is stored (just upload the file by clicking on the upload button). The PyTorch estimator inherits all options from the framework estimator, whose documentation explains where to place the entry point and model code, for example via source_dir.
https://stackoverflow.com/questions/59648275/
Pytorch tensor, how to switch channel position - Runtime error
I have my training dataset as below, where X_train is 3D with 3 channels Shape of X_Train: (708, 256, 3) Shape of Y_Train: (708, 4) Then I convert them into a tensor and input into the dataloader: X_train=torch.from_numpy(X_data) y_train=torch.from_numpy(y_data) training_dataset = torch.utils.data.TensorDataset(X_train, y_train) train_loader = torch.utils.data.DataLoader(training_dataset, batch_size=50, shuffle=False) However when training the model, I get the following error: RuntimeError: Given groups=1, weight of size 24 3 5, expected input[708, 256, 3] to have 3 channels, but got 256 channels instead I suppose this is due to the position of the channel? In Tensorflow, the channel position is at the end, but in PyTorch the format is "Batch Size x Channel x Height x Width"? So how do I swap the positions in the x_train tensor to match the expected format in the dataloader? class TwoLayerNet(torch.nn.Module): def __init__(self): super(TwoLayerNet,self).__init__() self.conv1 = nn.Sequential( nn.Conv1d(3, 3*8, kernel_size=5, stride=1), nn.Sigmoid(), nn.AvgPool1d(kernel_size=2, stride=0)) self.conv2 = nn.Sequential( nn.Conv1d(3*8, 12, kernel_size=5, stride=1), nn.Sigmoid(), nn.AvgPool1d(kernel_size=2, stride = 0)) #self.drop_out = nn.Dropout() self.fc1 = nn.Linear(708, 732) self.fc2 = nn.Linear(732, 4) def forward(self, x): out = self.conv1(x) out = self.conv2(out) out = out.reshape(out.size(0), -1) out = self.drop_out(out) out = self.fc1(out) out = self.fc2(out) return out
Use permute. X_train = torch.rand(708, 256, 3) X_train = X_train.permute(2, 0, 1) X_train.shape # => torch.Size([3, 708, 256])
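Note that for the Conv1d model in the question the batch dimension should stay first: Conv1d expects input shaped (batch, channels, length). Assuming X_train is (batch, length, channels) as described, the swap would be:

import torch

X_train = torch.rand(708, 256, 3)    # (batch, length, channels)
X_train = X_train.permute(0, 2, 1)
X_train.shape  # => torch.Size([708, 3, 256]), i.e. (batch, channels, length)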
https://stackoverflow.com/questions/59648324/
Unable to import TorchVision after installation Mac OSX
I've installed Pytorch and Torchvision in the way suggested on their website via pip within a virtual environment (env), and whilst no errors occur during installation when I go to import torchvision in my python code the following error occurs. Traceback (most recent call last): File "demo.py", line 2, in <module> import torchvision File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/__init__.py", line 2, in <module> from torchvision import datasets File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/datasets/__init__.py", line 9, in <module> from .fakedata import FakeData File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/datasets/fakedata.py", line 3, in <module> from .. import transforms File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/transforms/__init__.py", line 1, in <module> from .transforms import * File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 17, in <module> from . import functional as F File "/Users/QuinceyBee/env/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 5, in <module> from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION ImportError: cannot import name 'PILLOW_VERSION' from 'PIL' (/Users/QuinceyBee/env/lib/python3.7/site-packages/PIL/__init__.py) I have tried creating new virtual environments to rebuild from scratch, tried to install via conda within a conda environment, however, neither of these resolved this issue. I apologise for any format issues, this is the first time posting on here and also I'm relatively new to using python. Any assistance would be greatly appreciated.
Pillow 7.0.0 removed PILLOW_VERSION; you should use __version__ in your own code instead. https://pillow.readthedocs.io/en/stable/deprecations.html#pillow-version-constant If using Torchvision, there is a release planned this week (week 2, 2020) to fix it: https://github.com/pytorch/vision/issues/1712#issuecomment-570286349 The options are: wait for the new torchvision release; use the master version of torchvision (pip install -U git+https://github.com/pytorch/vision); install torchvision from a nightly, which also requires a pytorch nightly; or install Pillow<7 (pip install "pillow<7").
https://stackoverflow.com/questions/59654271/
Should this boxes for my custom Object Detector be smaller?
So I'm trying to make an Object Detector for this company's forms, and we have labelled the images as shown in the example image I uploaded. My question is: should we make more accurate boxes, or are they OK as they are, since the written part that we are trying to detect could be bigger? So, what I'm asking is: in the example image, the "Descripcion" (Description) part has just 2 lines of text, but it could be more. Should we make the box select just the Description title + the 2 lines, or do we stick to what we are doing now: title + the 2 lines + all the blank space that could have been filled with lines?
It depends on what you really want to do with the detected boxes. What are the next steps? Can the next step (e.g. extracting the text) handle all the free space, or would it be better to detect just the part where something is actually written? Besides that, right now in your example I find that most boxes are too big. The form is more or less already split into boxes, and it could be better to make the boxes smaller and more accurate, e.g. the box around IMPORTE and some amount in €: I would label this closer, so the box only contains the information you actually want and nothing else. But as I said, it really depends on the next step the boxes will be used for.
https://stackoverflow.com/questions/59662395/
coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?
I have a PyTorch network and I want to deploy it to iOS devices. In short, I fail to add flexibility to the input tensor shape in CoreML. The network is a convnet that takes an RGB image (stored as a tensor) as an input and returns an RGB image of the same size. Using PyTorch, I can input images of any size I want, for instance a tensor of size (1, 3, 300, 300) for a 300x300 image. To convert the PyTorch model to a CoreML model, I first convert it to an ONNX model using torch.onnx.export. This function requires to pass a dummy input so that it can execute the graph. So I did using: input = torch.rand(1, 3, 300, 300) My guess is that the ONNX model only accepts images / tensors of size (1, 3, 300, 300). Now, I can use the onnx_coreml.convert function to convert the ONNX model to a CoreML model. By printing the CoreML model's spec description using Python, I get something like: input { name: "my_image" type { multiArrayType { shape: 1 shape: 3 shape: 300 shape: 300 dataType: FLOAT32 } } } output { name: "my_output" type { multiArrayType { shape: 1 shape: 3 shape: 300 shape: 300 dataType: FLOAT32 } } } metadata { userDefined { key: "coremltoolsVersion" value: "3.1" } } The model's input must be a multiArrayType of size (1, 3, 300, 300). By copying this model to XCode, I can see while inspecting the model that my_name is listed under the "Inputs" section and it is expected to be a MultiArray (Float32 1 x 3 x 300 x 300). So far, everything is coherent. My problem is to add flexibility to the input shape. I tried to use coremltools with no luck. This is my problem. Here is my code: import coremltools from coremltools.models.neural_network import flexible_shape_utils spec = coremltools.utils.load_spec('my_model.mlmodel') shape_range = flexible_shape_utils.NeuralNetworkMultiArrayShapeRange() shape_range.add_channel_range((3,3)) shape_range.add_height_range((64, 5000)) shape_range.add_width_range((64, 5000)) flexible_shape_utils.update_multiarray_shape_range(spec, feature_name='my_image', shape_range=shape_range) coremltools.models.utils.save_spec(spec, 'my_flexible_model.mlmodel') I get the following spec description using Python: input { name: "my_image" type { multiArrayType { shape: 1 shape: 3 shape: 300 shape: 300 dataType: FLOAT32 shapeRange { sizeRanges { lowerBound: 3 upperBound: 3 } sizeRanges { lowerBound: 64 upperBound: 5000 } sizeRanges { lowerBound: 64 upperBound: 5000 } } } } } Only 3 ranges as specified, which makes sense since I only defined a range for channel, height and width, but not for the batch size. In XCode, I get the following error when inspecting the flexible CoreML model: There was a problem decoding this CoreML document validator error: Description of multiarray feature 'my_image' has a default 4-d shape but a 3-d shape range I'm pretty sure it was working on another project when I was on macOS X Mojave, but at this point I'm sure of nothing. I'm using: macOS X Catalina conda 4.7.12 python 3.7.5 pytorch 1.3.1 onnx 1.6.0 onnx-coreml 1.1 coremltools 3.1 Thanks for the help
Easiest thing to do is to remove that shape:1. Something like this: del spec.description.input[0].shape[0] Now the default shape should also have 3 dimensions. However, I would suggest changing the type of the input from multi-array to an actual image. Since you're going to be using it with images anyway. That will let you pass in the image as a CVPixelBuffer or CGImage object instead of an MLMultiArray.
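For reference, in the spec protobuf the default shape usually lives under type.multiArrayType rather than directly on the feature description, so (assuming that layout) the deletion would look like:

import coremltools

spec = coremltools.utils.load_spec('my_flexible_model.mlmodel')
del spec.description.input[0].type.multiArrayType.shape[0]   # drop the leading batch dimension
coremltools.models.utils.save_spec(spec, 'my_flexible_model.mlmodel')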
https://stackoverflow.com/questions/59662399/
In the Torch C++ API, how can I write to the internal data of a tensor quickly?
I am using torch C++ frontend and want to have a tensor with specified value in it. To achieve this one may allocate memory and set value by hand, then use torch::from_blob to build a tensor on the memory block, but it seems not clean enough for me. In the very bottom of this document I found out that I can use subscript to directly access and modify the data. However, this approach has a big running time overhead, likely because the subscript access will treat the element of tensor as a 0-d tensor. The following code will cost more than 2 seconds on my machine (-O3 optimization level), which is unreasonably long for modern CPU. torch::Tensor tensor = torch::empty({1000, 1000}); for(int i=0; i < 1000; i++) { for(int j=0 ; j < 1000; j++) { tensor[i][j] = calc_tensor_data(i,j); } } Is there a clean and fast way to achieve this goal?
After hours of fruitless searching on the Internet, I formed a hypothesis and decided to give it a shot. It turns out that the accessor mentioned in the same document works well as an lvalue too, although this feature is not mentioned by the document at all. The following code is just fine, and it is as fast as manipulating a raw pointer directly. torch::Tensor tensor = torch::empty({1000, 1000}); auto accessor = tensor.accessor<float,2>(); for(int i=0; i < 1000; i++) { for(int j=0 ; j < 1000; j++) { accessor[i][j] = calc_tensor_data(i,j); } }
https://stackoverflow.com/questions/59676983/
Pytorch : AttributeError: 'function' object has no attribute 'cuda'
import torch import models model_names = sorted(name for name in models.__dict__ if name.islower() and not name.startswith("__") and callable(models.__dict__[name])) model = models.__dict__['resnet18'] model = torch.nn.DataParallel(model,device_ids = [0]) #PROBLEM CAUSING LINE model.to('cuda:0') To run this code you need to clone this repository : https://github.com/SoftwareGift/FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019.git Please run this piece of code inside the root folder of the cloned directory. I am getting the follow error AttributeError: 'function' object has no attribute 'cuda' I have tried using torch.device object as well for the same function and it results in the same error. Please ask for any other details that are required. PyTorch newbie here python:3.7 pytorch:1.3.1
Replace model = torch.nn.DataParallel(model,device_ids = [0]) with model = torch.nn.DataParallel(model(), device_ids=[0]) (notice the () after model inside DataParallel). The difference is simple: the models module contains classes/functions which create models and not instances of models. If you trace the imports, you'll find that models.__dict__['resnet18'] resolves to this function. Since DataParallel wraps an instance, not a class itself, it is incompatible. The () calls this model building function/class constructor to create an instance of this model. A much simpler example of this would be the following: class MyNet(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(4, 4) def forward(self, x): return self.linear(x) model = nn.DataParallel(MyNet) # this is what you're doing model = nn.DataParallel(MyNet()) # this is what you should be doing Your error message complains that function (since model without the () is of type function) has no attribute cuda, which is a method of nn.Module instances.
https://stackoverflow.com/questions/59678247/
How to Debug Saving Model TypeError: can't pickle SwigPyObject objects?
I'm trying to save a model, which is an object of a class that inherits from nn.Module. It trains fine without problem, but when I try running the code: torch.save( obj=model, f=os.path.join(tensorboard_writer.get_logdir(), 'model.ckpt')) I receive the error: TypeError: can't pickle SwigPyObject objects I have no idea what a SwigPyObject object is. How do I debug this error to save my model? Full Traceback: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 149, in _with_file_like return body(f) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 224, in <lambda> return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol)) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 296, in _save pickler.dump(obj) TypeError: can't pickle SwigPyObject objects In case it helps, my model is an object of the following class: class RecurrentModel(nn.Module): def __init__(self, core_str, core_kwargs, tensorboard_writer=None, input_size=1, hidden_size=32, output_size=2): super(RecurrentModel, self).__init__() self.tensorboard_writer = tensorboard_writer self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.core = self._create_core(core_str=core_str, core_kwargs=core_kwargs) self.core_hidden = None self.linear = nn.Linear( in_features=hidden_size, out_features=output_size, bias=True) self.softmax = nn.Softmax(dim=-1) # converts all weights into doubles i.e. float64 # this prevents PyTorch from breaking when multiplying float32 * flaot64 self.double() # TODO figure out why writing the model to tensorboard doesn't work # dummy_input = torch.zeros(size=(10, 1, 1), dtype=torch.double) # tensorboard_writer.add_graph( # model=self, # input_to_model=dict(stimulus=dummy_input)) def _create_core(self, core_str, core_kwargs): if core_str == 'lstm': core_constructor = nn.LSTM elif core_str == 'rnn': core_constructor = nn.RNN elif core_str == 'gru': core_constructor = nn.GRU else: raise ValueError('Unknown core string') core = core_constructor( input_size=self.input_size, hidden_size=self.hidden_size, batch_first=True, **core_kwargs) return core def forward(self, model_input): if self.core_hidden is None: core_output, self.core_hidden = self.core( model_input['stimulus']) else: core_output, self.core_hidden = self.core( model_input['stimulus'], self.core_hidden) linear_output = self.linear(core_output) softmax_output = self.softmax(linear_output) forward_output = dict( core_output=core_output, core_hidden=self.core_hidden, linear_output=linear_output, softmax_output=softmax_output) return forward_output def reset_core_hidden(self): self.core_hidden = None I then construct and save the model: tensorboard_writer = SummaryWriter() model = RecurrentModel( core_str='rnn', core_kwargs={}, tensorboard_writer=tensorboard_writer) torch.save( obj=model, f=os.path.join(tensorboard_writer.get_logdir(), 'model.ckpt') )
Someone on the PyTorch forum clarified: SummaryWriter holds a file handle, which typically cannot be serialized. The problematic line is: self.tensorboard_writer = tensorboard_writer https://github.com/pytorch/pytorch/issues/32046#issuecomment-573151743
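A common workaround, sketched here under the assumption that you only need the weights back: save the state_dict instead of the whole module, so the writer never has to be pickled.

# save only the learnable state
torch.save(model.state_dict(),
           os.path.join(tensorboard_writer.get_logdir(), 'model.ckpt'))

# later: rebuild the model without storing a writer on it, then load the weights
model = RecurrentModel(core_str='rnn', core_kwargs={}, tensorboard_writer=None)
model.load_state_dict(torch.load(os.path.join(tensorboard_writer.get_logdir(), 'model.ckpt')))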
https://stackoverflow.com/questions/59685649/
Testing and Confidence score of Network trained with nn.CrossEntropyLoss()
I have trained a network with the following structure: Intent_LSTM( (attention): Attention() (embedding): Embedding(34601, 400) (lstm): LSTM(400, 512, num_layers=2, batch_first=True, dropout=0.5) (dropout): Dropout(p=0.5, inplace=False) (fc): Linear(in_features=512, out_features=3, bias=True) ) Now I want to test this trained network and also get the confidence score of classification. Here is my current implementation of test function: output = model_current(inputs) pred = torch.round(output.squeeze()) pred = pred.argmax(dim=1, keepdim=True) Now my question isas follows. Here pred is just the output from a Fully connected layer from my network without softmax (as required by a loss function). Is this(pred = pred.argmax(dim=1, keepdim=True)) the right way to get the predictions? Or should I pass the output from the network to a softmax layer and then do argmax? How do I get the confidence score? Should I pass the output from the network to a softmax layer and select the argmax as the confidence of the class?
It doesn't really matter if you pick argmax before or after doing softmax. Because whatever maximizes softmax will also maximize the logits (pre-softmax) values. So you should get similar values. Softmax will give you scores or probabilities for each class. Hence, the values after doing softmax can be used as confidence scores.
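As a sketch of that second point, the per-sample confidence can be read off the softmax output directly (assuming output is the raw (batch, classes) logits from the fc layer):

probs = torch.softmax(output, dim=1)   # per-class probabilities, each row sums to 1
conf, pred = probs.max(dim=1)          # predicted class index and its probability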
https://stackoverflow.com/questions/59687382/
autograd differentiation example in PyTorch - should be 9/8?
In the example for the Torch tutorial for Python, they use the following graph: x = [[1, 1], [1, 1]] y = x + 2 z = 3y^2 o = mean( z ) # 1/4 * x.sum() Thus, the forward pass gets us this: x_i = 1, y_i = 3, z_i = 27, o = 27 In code this looks like: import torch # define graph x = torch.ones(2, 2, requires_grad=True) y = x + 2 z = y * y * 3 out = z.mean() # if we don't do this, torch will only retain gradients for leaf nodes, ie: x y.retain_grad() z.retain_grad() # does a forward pass print(z, out) however, I get confused at the gradients computed: # now let's run our backward prop & get gradients out.backward() print(f'do/dz = {z.grad[0,0]}') which outputs: do/dx = 4.5 By chain rule, do/dx = do/dz * dz/dy * dy/dx, where: dy/dx = 1 dz/dy = 9/2 given x_i=1 do/dz = 1/4 given x_i=1 which means: do/dx = 1/4 * 9/2 * 1 = 9/8 However this doesn't match the gradients returned by Torch (9/2 = 4.5). Perhaps I have a math error (something with the do/dz = 1/4 term?), or I don't understand autograd in Torch. Any pointers?
do/dz = 1/4, dz/dy = 6y = 6 * 3 = 18, dy/dx = 1; therefore, do/dx = 1/4 * 18 * 1 = 9/2. The slip in the question is the dz/dy term: since z = 3y^2, dz/dy = 6y (which is 18 at y_i = 3), not 9/2.
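You can sanity-check each factor with the retained gradients from the question's own graph; the printed values match the chain rule above:

import torch

x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = y * y * 3
y.retain_grad(); z.retain_grad()
out = z.mean()
out.backward()

print(z.grad)  # 0.25 everywhere -> do/dz
print(y.grad)  # 4.5 everywhere  -> 0.25 * 6 * 3
print(x.grad)  # 4.5 everywhere  -> times dy/dx = 1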
https://stackoverflow.com/questions/59692600/
Loss for binary sparsity
I have binary images (as the one below) at the output of my net. I need the '1's to be further from each other (not connected), so that they would form a sparse binary image (without white blobs). Something like salt-and-pepper noise. I am looking for a way to define a loss (in pytorch) that would punish based on the density of the '1's. Thanks. I
It depends on how you're generating said image. Since neural networks have to be trained by backpropagation, I'm rather sure your binary image is not the direct output of your neural network (ie not the thing you're applying loss to), because gradient can't flow through binary (discrete) variables. I suspect you do something like pixel-wise binary cross entropy or similar and then threshold. I assume your code works like that: you densely regress real-valued numbers and then apply thresholding, likely using sigmoid to map from [-inf, inf] to [0, 1]. If it is so, you can do the following. Build a convolution kernel which is 0 in the center and 1 elsewhere, of size related to how big you want your "sparsity gaps" to be. kernel = [ [1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 0, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1] ] Then you apply sigmoid to your real-valued output to squash it to [0, 1]: squashed = torch.sigmoid(nn_output) then you convolve squashed with kernel, which gives you the relaxed number of non-zero neighbors. neighborhood = nn.functional.conv2d(squashed, kernel, padding=2) and your loss will be the product of each pixel's value in squashed with the corresponding value in neighborhood: sparsity_loss = (squashed * neighborhood).mean() If you think of this loss applied to your binary image, for a given pixel p it will be 1 if and only if both p and at least one of its neighbors have values 1, and 0 otherwise. Since we apply it to non-binary numbers in the [0, 1] range, it will be the differentiable approximation of that. Please note that I left out some of the details from the code above (like correctly reshaping kernel to work with nn.functional.conv2d).
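Pulling those pieces together into a runnable sketch (nn_output here is a made-up stand-in for the real-valued, pre-threshold map your network produces, shaped (batch, 1, H, W)):

import torch
import torch.nn.functional as F

kernel = torch.ones(1, 1, 5, 5)   # (out_channels, in_channels, kH, kW)
kernel[0, 0, 2, 2] = 0.           # zero in the center

def sparsity_loss(nn_output):
    squashed = torch.sigmoid(nn_output)                    # values in [0, 1]
    neighborhood = F.conv2d(squashed, kernel, padding=2)   # relaxed count of active neighbors
    return (squashed * neighborhood).mean()

loss = sparsity_loss(torch.randn(4, 1, 64, 64))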
https://stackoverflow.com/questions/59694122/
RuntimeError: leaf variable has been moved into the graph interior
I am trying to use pytorch for automatic differentiation. While testing it, I got the error in the title. My code is as below: w11 = torch.rand((100,2), requires_grad=True) w12 = torch.rand((100,2), requires_grad=True) w12[:,1] = w12[:,1] + 1 w13 = torch.rand((100,2), requires_grad=True) w13[:,1] = w13[:,1] + 2 out1=(w11-w12)**2 out2=out1.mean() out2.backward(retain_graph=True)
Use with torch.no_grad() when you want to replace something in tensors with requires_grad=True, w11 = torch.rand((100,2), requires_grad=True) w12 = torch.rand((100,2), requires_grad=True) w13 = torch.rand((100,2), requires_grad=True) with torch.no_grad(): w12[:,1] = w12[:,1] + 1 w13[:,1] = w13[:,1] + 2 out1=(w11-w12)**2 out2=out1.mean() out2.backward(retain_graph=True) All will go well.
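The in-place version under torch.no_grad() works here because the +1/+2 shift is a constant that needs no gradient of its own. If you prefer to keep everything out of place, an alternative sketch (continuing from the tensors above) builds shifted copies instead of modifying the leaves:

w12_shifted = w12 + torch.tensor([0., 1.])   # broadcasts: adds 1 to the second column only
w13_shifted = w13 + torch.tensor([0., 2.])

out1 = (w11 - w12_shifted)**2
out2 = out1.mean()
out2.backward(retain_graph=True)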
https://stackoverflow.com/questions/59696406/
What is a pytorch snapshot and a Dlib model? (Running Hopenet)
I am trying to make Hopenet run on my computer, using the code on GitHub. This is my forked code, with test_hopenet.py updated to Python 3. I installed all required libraries, using pip install "pillow<7" because of some old requirements. python code/test_on_video_dlib.py --snapshot PATH_OF_SNAPSHOT --face_model PATH_OF_DLIB_MODEL --video PATH_OF_VIDEO --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO It looks like I am missing some basic understanding which is supposed to be obvious: What is a SNAPSHOT? Where do I get one? What is a Dlib model? Where do I get the right one? Again - I just want to make this code run, but I can't understand the instructions.
For the dlib model you can use: https://github.com/davisking/dlib-models/mmod_human_face_detector.dat.bz2 For the snapshot path you can use hopenet_robust_alpha1.pkl, downloadable from the link "300W-LP, alpha 1, robust to image quality" under the Pretrained Models section of Nathanial Ruiz's README.md
https://stackoverflow.com/questions/59697062/
BERT tokenizer & model download
I'm a beginner. I'm working with BERT. However, due to the security of the company network, the following code cannot download the BERT model directly. tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False) model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased", num_labels=2) So I think I have to download these files and enter the location manually. But I'm new to this, and I'm wondering if it's as simple as downloading the files from GitHub and putting them in a location. I'm currently using the BERT model implemented in Hugging Face's PyTorch library, and the address of the source I found is: https://github.com/huggingface/transformers Please let me know if the method I described is correct, and if so, which files to get. Thanks in advance for the comment.
As described here, what you need to do are download pre_train and configs, then putting them in the same folder. Every model has a pair of links, you might want to take a look at lib code. For instance import torch from transformers import * model = BertModel.from_pretrained('/Users/yourname/workplace/berts/') with /Users/yourname/workplace/berts/ refer to your folder Below are what I found at src/transformers/configuration_bert.py there are a list of models' configs BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json", "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json", "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json", "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-config.json", "bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-config.json", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-config.json", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-config.json", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/config.json", "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/config.json", } and at src/transformers/modeling_bert.py there are links to pre_trains BERT_PRETRAINED_MODEL_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin", "bert-large-uncased": 
"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-pytorch_model.bin", "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-pytorch_model.bin", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-pytorch_model.bin", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-pytorch_model.bin", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-pytorch_model.bin", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-pytorch_model.bin", "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-pytorch_model.bin", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-pytorch_model.bin", "bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-pytorch_model.bin", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-pytorch_model.bin", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-pytorch_model.bin", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/pytorch_model.bin", "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/pytorch_model.bin", }
https://stackoverflow.com/questions/59701981/
what does dim=-1 or -2 mean in torch.sum()?
let me take a 2D matrix as example: mat = torch.arange(9).view(3, -1) tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) torch.sum(mat, dim=-2) tensor([ 9, 12, 15]) I find the result of torch.sum(mat, dim=-2) is equal to torch.sum(mat, dim=0) and dim=-1 equal to dim=1. My question is how to understand the negative dimension here. What if the input matrix has 3 or more dimensions?
The minus essentially means you go backwards through the dimensions. Let A be an n-dimensional tensor. Then dim=-1 is the same as dim=n-1, dim=-2 the same as dim=n-2, ..., dim=-(n-1) the same as dim=1, and dim=-n the same as dim=0. See the numpy docs for more information, as pytorch indexing is heavily based on numpy.
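A quick check of that equivalence on a 3-dimensional tensor:

import torch

t = torch.arange(24).view(2, 3, 4)
torch.equal(t.sum(dim=-1), t.sum(dim=2))   # True: last dimension
torch.equal(t.sum(dim=-2), t.sum(dim=1))   # True
torch.equal(t.sum(dim=-3), t.sum(dim=0))   # True: first dimension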
https://stackoverflow.com/questions/59702785/
What is a dimensional range of [-1,0] in Pytorch?
So I'm struggling to understand some terminology about collections in Pytorch. I keep running into the same kinds of errors about the range of my tensors being incorrect, and when I try to Google for a solution often the explanations are further confusing. Here is an example: m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([0.3300, 0.3937, -0.3113, -0.2880]) output = m(input) I don't see anything wrong with the above code, and I've defined my LogSoftmax to accept a 1 dimensional input. So according to my experience with other programming languages the collection [0.3300, 0.3937, -0.3113, -0.2880] is a single dimension. The above triggers the following error for m(input): IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) What does that mean? I passed in a one dimensional tensor, but then it tells me that it was expecting a range of [-1, 0], but got 1. A range of what? Why is the error comparing a dimension of 1 to [-1, 0]? What do the two numbers [-1, 0] mean? I searched for an explanation for this error, and I find things like this link which make no sense to me as a programmer: https://github.com/pytorch/pytorch/issues/5554#issuecomment-370456868 So I was able to fix the above code by adding another dimension to my tensor data. m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([[-0.3300, 0.3937, -0.3113, -0.2880]]) output = m(input) So that works, but I don't understand how [-1,0] explains a nested collection. Further experiments showed that the following also works: m = torch.nn.LogSoftmax(dim=1) input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]]) output = m(input) So dim=1 means a collection of collections, but I don't understand how that means [-1, 0]. When I try using LogSoftmax(dim=2) m = torch.nn.LogSoftmax(dim=2) input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]]) output = m(input) The above gives me the following error: IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2) Confusion again that dim=2 equals [-2, 1], because where did the 1 value come from? I can fix the error above by nesting collections another level, but at this point I don't understand what values LogSoftmax is expecting. m = torch.nn.LogSoftmax(dim=2) input = torch.tensor([[[0.0, 0.1]], [[1.0, 0.1]], [[2.0, 0.1]]]) output = m(input) I am super confused by this terminology [-1, 0] and [-2, 1]? If the first value is the nested depth, then why is it negative and what could the second number mean? There is no error code associated with this error. So it's been difficult to find documentation on the subject. It appears to be an extremely common error people get confused by and nothing that I can find in the Pytorch documentation that talks specifically about it.
When specifying a tensor's dimension as an argument for a function (e.g. m = torch.nn.LogSoftmax(dim=1)) you can either use positive dimension indexing starting with 0 for the first dimension, 1 for the second etc. Alternatively, you can use negative dimension indexing to start from the last dimension to the first: -1 indicates the last dimension, -2 the second from last etc. Example: If you have a 4D tensor of dimensions b-by-c-by-h-by-w then The "batch" dimension (the first) can be accessed as either dim=0 or dim=-4. The "channel" dimension (the second) can be accessed as either dim=1 or dim=-3. The "height"/"vertical" dimension (the third) can be accessed as either dim=2 or dim=-2. The "width"/"horizontal" dimension (the fourth) can be accessed as either dim=3 or dim=-1. Therefore, if you have a 4D tensor the dim argument can take values in the range [-4, 3]. In your case you have a 1D tensor and therefore the dim argument can be either 0 or -1 (which in this degenerate case amounts to the same dimension).
https://stackoverflow.com/questions/59704538/
Pytorch AttributeError: module 'torch' has no attribute 'as_tensor'
$ python main.py --hetero Created directory results/ACMRaw_2020-01-13_01-20-26 Traceback (most recent call last): File "main.py", line 101, in <module> main(args) File "main.py", line 30, in main val_mask, test_mask = load_data(args['dataset']) File "/home/cnudi1/wook/dgl/examples/pytorch/han/utils.py", line 225, in load_data return load_acm_raw(remove_self_loop) File "/home/cnudi1/wook/dgl/examples/pytorch/han/utils.py", line 189, in load_acm_raw pa = dgl.bipartite(p_vs_a, 'paper', 'pa', 'author') File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/convert.py", line 260, in bipartite return create_from_scipy(data, utype, etype, vtype) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/convert.py", line 823, in create_from_scipy indptr = utils.toindex(spmat.indptr) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/utils.py", line 242, in toindex return data if isinstance(data, Index) else Index(data) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/utils.py", line 15, in __init__ self._initialize_data(data) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/utils.py", line 22, in _initialize_data self._dispatch(data) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/utils.py", line 75, in _dispatch self._user_tensor_data[F.cpu()] = F.zerocopy_from_numpy(self._pydata) File "/home/cnudi1/.conda/envs/lcr_env/lib/python3.6/site-packages/dgl-0.4-py3.6-linux-ppc64le.egg/dgl/backend/pytorch/tensor.py", line 276, in zerocopy_from_numpy return th.as_tensor(np_array) AttributeError: module 'torch' has no attribute 'as_tensor' I got an error when I try to run the code (https://github.com/dmlc/dgl/blob/master/examples/pytorch/han/main.py) from the DGL (https://github.com/dmlc/dgl) It requires CUDA and Pytorch so I managed to install it. But I got an error and couldn't find the solution with Google/Stackoverflow search My environment is Linux minsky 3.10.0-957.5.1.el7.ppc64le CentOS Python 3.6.9 Conda 4.5.11 CUDA 10.1 NVCC 10.1 Pytorch 0.4.0 Torchvision 0.2.1 Pytorch works fine in Python >>> import torch >>> print (torch.__version__) 0.4.0 >>> import torchvision >>> print (torchvision.__version__) 0.2.1 Please could you help me out? * DGL is installed from the source code ** Pytorch is installed with conda from channel:engility(How to install pytorch on Power 8 or PPC64 machine?) conda install -c engility pytorch because other ways(default conda, pip, install from the source code) never works for ppc64le
tl;dr Upgrade to PyTorch 0.4.1 Notice that DGL requires PyTorch 0.4.1 and you are using PyTorch 0.4.0. If you take a closer look, you'll see that as_tensor was proposed on 30 Apr 2018 and merged on 1 May 2018. You'll also see that PyTorch 0.4.0 was released before that, on 24 Apr 2018, whereas PyTorch 0.4.1 was released after, on 26 Jul 2018. In fact, if you take a look at the changelog of the 0.4.1 version, you'll notice a new operator being announced: torch.as_tensor :)
https://stackoverflow.com/questions/59705922/
Is there any essential difference between * and + for Pytorch autograd?
I was trying to understand the autograd mechanism in more depth. To test my understanding, I tried to write the following code which I expected to produce an error (i.e., Trying to backward through the graph a second time). b = torch.Tensor([0.5]) for i in range(5): b.data.zero_().add_(0.5) b = b + a c = b*a c.backward() Apparently, it should report an error when c.backward() is called for the second time in the for loop, because the history of b has been freed, however, nothing happens. But when I tried to change b + a to b * a as follows, b = torch.Tensor([0.5]) for i in range(5): b.data.zero_().add_(0.5) b = b * a c = b*a c.backward() It did report the error I was expecting. This looks pretty weird to me. I don't understand why there is no error evoked for the former case, and why it makes a difference to change from + to *.
The difference is that adding a constant doesn't change the gradient (the gradient of an addition does not depend on its operands), but multiplying by a constant does. It seems autograd is aware of this and optimizes out 'b = b + a'.
https://stackoverflow.com/questions/59709282/
How can I cast a tensor to the complex type in Pytorch?
I want to do some quantum mechanics calculations with Pytorch, where the quantities are sometimes complex. I would like to know how can I cast an existing real tensor to the complex type.
PyTorch does have complex number support. Try this: import torch a = torch.tensor([1.0, 2.0], dtype=torch.double) b = a.type(torch.complex64)
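On newer PyTorch versions (1.6 and later, where complex dtypes are fully supported) you can also build a complex tensor explicitly; a small sketch:

import torch

real = torch.tensor([1.0, 2.0])
c1 = real.to(torch.complex64)                      # cast, imaginary part is 0
c2 = torch.complex(real, torch.zeros_like(real))   # explicit real/imaginary parts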
https://stackoverflow.com/questions/59717519/
What are C classes for a NLLLoss loss function in Pytorch?
I'm asking about C classes for a NLLLoss loss function. The documentation states: The negative log likelihood loss. It is useful to train a classification problem with C classes. Basically everything after that point depends upon you knowing what a C class is, and I thought I knew what a C class was but the documentation doesn't make much sense to me. Especially when it describes the expected inputs of (N, C) where C = number of classes. That's where I'm confused, because I thought a C class refers to the output only. My understanding was that the C class was a one hot vector of classifications. I've often found in tutorials that the NLLLoss was often paired with a LogSoftmax to solve classification problems. I was expecting to use NLLLoss in the following example: # Some random training data input = torch.randn(5, requires_grad=True) print(input) # tensor([-1.3533, -1.3074, -1.7906, 0.3113, 0.7982], requires_grad=True) # Build my NN (here it's just a LogSoftmax) m = nn.LogSoftmax(dim=0) # Train my NN with the data output = m(input) print(output) # tensor([-2.8079, -2.7619, -3.2451, -1.1432, -0.6564], grad_fn=<LogSoftmaxBackward>) loss = nn.NLLLoss() print(loss(output, torch.tensor([1, 0, 0]))) The above raises the following error on the last line: ValueError: Expected 2 or more dimensions (got 1) We can ignore the error, because clearly I don't understand what I'm doing. Here I'll explain my intentions of the above source code. input = torch.randn(5, requires_grad=True) Random 1D array to pair with one hot vector of [1, 0, 0] for training. I'm trying to do a binary bits to one hot vector of decimal numbers. m = nn.LogSoftmax(dim=0) The documentation for LogSoftmax says that the output will be the same shape as the input, but I've only seen examples of LogSoftmax(dim=1) and therefore I've been stuck trying to make this work because I can't find a relative example. print(loss(output, torch.tensor([1, 0, 0]))) So now I have the output of the NN, and I want to know the loss from my classification [1, 0, 0]. It doesn't really matter in this example what any of the data is. I just want a loss for a one hot vector that represents classification. At this point I get stuck trying to resolve errors from the loss function relating to expected output and input structures. I've tried using view(...) on the output and input to fix the shape, but that just gets me other errors. So this goes back to my original question and I'll show the example from the documentation to explain my confusion: m = nn.LogSoftmax(dim=1) loss = nn.NLLLoss() input = torch.randn(3, 5, requires_grad=True) train = torch.tensor([1, 0, 4]) print('input', input) # input tensor([[...],[...],[...]], requires_grad=True) output = m(input) print('train', output, train) # tensor([[...],[...],[...]],grad_fn=<LogSoftmaxBackward>) tensor([1, 0, 4]) x = loss(output, train) Again, we have dim=1 on LogSoftmax which confuses me now, because look at the input data. It's a 3x5 tensor and I'm lost. Here's the documentation on the first input for the NLLLoss function: Input: (N, C)(N,C) where C = number of classes The inputs are grouped by the number of classes? So each row of the tensor input is associated with each element of the training tensor? If I change the second dimension of the input tensor, then nothing breaks and I don't understand what is going on. input = torch.randn(3, 100, requires_grad=True) # 3 x 100 still works? 
So I don't understand what a C class is here, and I thought a C class was a classification (like a label) and meaningful only on the outputs of the NN. I hope you understand my confusion, because shouldn't the shape of the inputs for the NN be independent from the shape of the one hot vector used for classification? Both the code examples and documentations say that the shape of the inputs is defined by the number of classifications, and I don't really understand why. I have tried to study the documentations and tutorials to understand what I'm missing, but after several days of not being able to get past this point I've decided to ask this question. It's been humbling because I thought this was going to be one of the easier things to learn.
Basically you are missing the concept of a batch. Long story short, every input to the loss (and the one passed through the network) requires a batch dimension (i.e. how many samples are used). Breaking it up, step by step: Your example vs documentation Each step is compared to make it clearer (documentation on top, your example below) Inputs input = torch.randn(3, 5, requires_grad=True) input = torch.randn(5, requires_grad=True) In the first case (docs), input with 5 features is created and 3 samples are used. In your case there is only a batch dimension (5 samples); you have no features, which are required. If you meant to have one sample with 5 features you should do: input = torch.randn(1, 5, requires_grad=True) LogSoftmax LogSoftmax is done across the features dimension; you are doing it across the batch. m = nn.LogSoftmax(dim=1) # apply over features m = nn.LogSoftmax(dim=0) # apply over batch It usually makes no sense for this operation, as samples are independent of each other. Targets As this is multiclass classification and each element in the vector represents a sample, one can pass as many numbers as one wants (as long as each is smaller than the number of classes, which in the documentation example is 5, hence [0-4] is fine). train = torch.tensor([1, 0, 4]) train = torch.tensor([1, 0, 0]) I assume you wanted to pass a one-hot vector as the target as well. PyTorch doesn't work that way, as it's memory inefficient (why store everything as one-hot encoded when you can just pinpoint exactly the class, in your case it would be 0). Only the outputs of the neural network are one-hot encoded, in order to backpropagate error through all output nodes; it's not needed for targets. Final You shouldn't use torch.nn.LogSoftmax at all for this task. Just use torch.nn.Linear as the last layer and use torch.nn.CrossEntropyLoss with your targets.
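A minimal end-to-end sketch of that final recommendation, with made-up sizes (3 samples, 5 input features, 4 classes):

import torch
import torch.nn as nn

x = torch.randn(3, 5)               # (batch, features)
targets = torch.tensor([1, 0, 3])   # one class index per sample, each in [0, C-1]

model = nn.Linear(5, 4)             # last layer outputs raw scores of shape (batch, C)
loss = nn.CrossEntropyLoss()(model(x), targets)   # LogSoftmax + NLLLoss in one step
loss.backward()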
https://stackoverflow.com/questions/59718130/
Using CNN with Dataset that has different depths between volumes
I am working with medical images: I have 130 patient volumes, and each volume consists of N DICOM images/slices. The problem is that the number of slices N varies between volumes. The majority (50%) of the volumes have 20 slices; the rest vary by 3 or 4 slices, some even by more than 10 slices (so much so that interpolating to make the number of slices equal between volumes is not possible). I am able to use Conv3d for volumes where the depth N (number of slices) is the same between volumes, but I have to make use of the entire data set for the classification task. So how do I incorporate the entire dataset and feed it to my network model?
If I understand your question, you have 130 3-dimensional images, which you need to feed into a 3D ConvNet. I'll assume your batches, if N was the same for all of your data, would be tensors of shape (batch_size, channels, N, H, W), and your problem is that your N varies between different data samples. So there are two problems. First, there's the problem of your model needing to handle data with different values of N. Second, there's the more implementation-related problem of batching data of different lengths. Both problems come up in video classification models. For the first, I don't think there's a way of getting around having to interpolate SOMEWHERE in your model (unless you're willing to pad/cut/sample) -- if you're doing any kind of classification task, you pretty much need a constant-sized layer at your classification head. However, the interpolation doesn't have to happen right at the beginning. For example, if for an input tensor of size (batch, 3, 20, 256, 256), your network conv-pools down to (batch, 1024, 4, 1, 1), then you can perform an adaptive pool (e.g. https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool3d) right before the output to downsample everything larger to that size before prediction. The other option is padding and/or truncating and/or resampling the images so that all of your data is the same length. For videos, sometimes people pad by looping the frames, or you could pad with zeros. What's valid depends on whether your length axis represents time, or something else. For the second problem, batching: If you're familiar with pytorch's dataloader/dataset pipeline, you'll need to write a custom collate_fn which takes a list of outputs of your dataset object and stacks them together into a batch tensor. In this function, you can decide whether to pad or truncate or whatever, so that you end up with a tensor of the correct shape. Different batches can then have different values of N. A simple example of implementing this pipeline is here: https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/03-advanced/image_captioning/data_loader.py Something else that might help with batching is putting your data into buckets depending on their N dimension. That way, you might be able to avoid lots of unnecessary padding.
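A minimal sketch of such a collate_fn, under the assumption that each dataset item is a (volume, label) pair with volume shaped (channels, N_i, H, W) and that zero-padding along the slice axis is acceptable:

import torch

def pad_collate(batch):
    volumes, labels = zip(*batch)
    max_n = max(v.shape[1] for v in volumes)
    padded = []
    for v in volumes:
        pad = torch.zeros(v.shape[0], max_n - v.shape[1], v.shape[2], v.shape[3])
        padded.append(torch.cat([v, pad], dim=1))   # zero-pad along the slice axis
    return torch.stack(padded), torch.tensor(labels)

# loader = torch.utils.data.DataLoader(dataset, batch_size=4, collate_fn=pad_collate)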
https://stackoverflow.com/questions/59721052/
Installing OpenCV into PyCharm
Idk if this is a stackoverflow-appropriate post so forgive if the question is misplaced. I'm trying to install OpenCV into my Pycharm IDE through the conda virtual environment. I typed conda install -c conda-forge opencv inside the PyCharm terminal and it has been doing this for 11 hours and God knows how many more to go. Pycharm did this with PyTorch as well. Am I doing something wrong or is this normal?
While you can install packages directly in PyCharm by going to File -> Settings, selecting Project Interpreter and clicking on the '+' icon on the top right (see image), I would recommend creating a requirements.txt file in the root of your project and writing down all your required packages. When a package is missing, PyCharm will automatically suggest installing the package for you. E.g. for installing opencv you can add the following to your requirements.txt: opencv-python Or even specify the version that your project needs: opencv-python==4.1.2 edit: the advantage of using a requirements.txt is that you can more easily port the project to another machine and re-install the packages if needed.
https://stackoverflow.com/questions/59721119/
Finding parameters with backpropagation and gradient descent in PyTorch
I am experimenting with PyTorch, autodifferentiation and gradient descent. To that end I would like to estimate the parameters that would produce a certain value for an arbitrary function that is linear in the parameters. My code is here: import torch X = X.astype(float) X = np.array([[3.], [4.], [5.]]) X = torch.from_numpy(X) X.requires_grad = True W = np.random.randn(3,3) W = np.triu(W, k=0) W = torch.from_numpy(W) W.requires_grad = True out = 10 - (X@torch.transpose(X, 1,0) * W).sum() out is : My objective is to make out close to 0 (within an interval of [-.00001 , 0.0001]) by adjusting W using the gradient of W. How should I proceed from here to achieve this end with pytorch? Update @Umang: this is what I get when I run the code you propose: In fact the algorithm diverges.
# your code as it is import torch import numpy as np X = np.array([[3.], [4.], [5.]]) X = torch.from_numpy(X) X.requires_grad = True W = np.random.randn(3,3) W = np.triu(W, k=0) W = torch.from_numpy(W) W.requires_grad = True # helper reconstructed from the question's expression def compute_out(X, W): return 10 - (X@torch.transpose(X, 1,0) * W).sum() # define parameters for gradient descent max_iter=100 lr_rate = 1e-3 # we will do gradient descent for max_iter iterations, or until the convergence criterion is met. i=0 out = compute_out(X,W) while (i<max_iter) and (torch.abs(out)>0.01): loss = (out-0)**2 W = W - lr_rate*torch.autograd.grad(loss, W)[0] i+=1 print(f"{i}: {out}") out = compute_out(X,W) print(W) We define a loss function such that its minima is at the desired point and run gradient descent. Here, I have used squared-error but you may use other loss functions too with desired minima.
https://stackoverflow.com/questions/59727904/
Loading ResNet50 on RTX2070 - Out of Memory
I'm trying to load ResNext50, and on top of it CenterNet, I'm able to do it with Google Colab or Kaggle's GPU. But, Would love to know how much GPU Memory (VRAM) does this network need? When using RTX 2070 with free 5.5GB VRAM left on it (out of 8GB), I'm not able to load it. Batch size is 1, #of workers is 1, everything is set to minimum values. OS: Ubuntu 18.04 (Using PyTorch) In TensorFlow, I know that I can restrict the amount of VRAM (which enables me to load and run networks although I don't have enough VRAM), but in PyTorch I didn't find this functionality yet. Any ideas how to solve this?
Using third party dependency You could get the size of the model in bytes using the third party library torchfunc (disclaimer: I'm the author). import torchfunc # Assuming model is loaded print(torchfunc.sizeof(model)) No dependencies This function is pretty simple and short; you could just copy it, see the source code. Exact size That is just the size of your model; there is additional VRAM usage during forward and backward, and it depends on the size of your batch. You could try the pytorch_modelsize library for estimations of those (not sure whether it will work for your networks though). Clear cache You should clear your cache before running the network (sometimes restarting the workstation helped in my case), as you definitely have enough memory. Killing processes using the GPU should help as well. Once again torchfunc could help, issue the following command: import torchfunc torchfunc.cuda.reset()
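To see how much of that VRAM is actually taken at runtime you can also query PyTorch's own counters; a sketch (memory_reserved exists on recent versions, older ones call it memory_cached):

import torch

print(torch.cuda.memory_allocated() / 1024**2, "MiB held by tensors")
print(torch.cuda.memory_reserved() / 1024**2, "MiB reserved by the caching allocator")
torch.cuda.empty_cache()   # returns cached, unused blocks to the driver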
https://stackoverflow.com/questions/59728088/
I want to show accuracy under each picture output shown in Image Classification with Transfer Learning in PyTorch
I'm following this link: https://stackabuse.com/image-classification-with-transfer-learning-and-pytorch/#settingupapretrainedmodel but I'm new to coding. Please tell me how to show the accuracy value under each image.
The model used in that example returns a tensor of logits of shape (batch size, classes). Assuming what you mean by "accuracy value" is the predicted probability of the class with the largest probability, what you need to do is first compute your probabilities by taking the SoftMax of the output from the model, which gives the predicted probabilities for each image in your batch. Their visualize_model function would look something like the following, though I haven't tested it. def visualize_model(model, num_images=6): was_training = model.training model.eval() images_handeled = 0 fig = plt.figure() with torch.no_grad(): for i, (inputs, labels) in enumerate(dataloaders['val']): inputs = inputs.to(device) labels = labels.to(device) outputs = model(inputs) probabilities = nn.functional.softmax(outputs, dim=-1) # compute probabilities _, preds = torch.max(outputs, 1) for j in range(inputs.size()[0]): images_handeled += 1 ax = plt.subplot(num_images//2, 2, images_handeled) ax.axis('off') ax.set_title('predicted: {}, probability: {}'.format(class_names[preds[j]], probabilities[j, preds[j]])) # probability of sample j's predicted class imshow(inputs.cpu().data[j]) if images_handeled == num_images: model.train(mode=was_training) return model.train(mode=was_training) Or do you mean overall classification accuracy?
https://stackoverflow.com/questions/59731860/
Using `DataParallel` when network needs a shared (constant) `Tensor`
I would like to use DataParallel to distribute my computations across multiple GPUs along the batch dimension. My network requires a Tensor (let's call it A) internally, which is constant and doesn't change through the optimization. It seems that DataParallel does not automatically copy this Tensor to all the GPUs in question, and the network will thus complain that the chunk of the input data x that it sees resides on a different GPU than A. Is there a way DataParallel can handle this situation automatically? Alternatively, is there a way to copy a Tensor to all GPUs? Or should I just keep one Tensor for each GPU and manually figure out which copy to use depending on where the chunk seen by forward resides?
You should wrap your tensor in torch.nn.Parameter and set requires_grad=False during its creation. torch.nn.Parameter does not mean the tensor has to be trainable. It merely means it is part of the model and should be transferred if needed (e.g. to multiple GPUs). If that wasn't the case, there would be no way for torch to know which tensors inside __init__ are part of the model (you could do some operations on tensors and add them to self just to get something done). I don't see a need for another function to do just that, though the name might be a little bit confusing.
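A sketch of what that looks like inside a module (the forward here is a made-up placeholder; register_buffer is an alternative with the same moving/replication behaviour, shown as a comment):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, A):
        super().__init__()
        # constant tensor, moved/replicated together with the module by .to() / DataParallel
        self.A = nn.Parameter(A, requires_grad=False)
        # alternative: self.register_buffer('A', A)

    def forward(self, x):
        return x @ self.A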
https://stackoverflow.com/questions/59732129/
Image clustering - allocating memory on GPU
I've written this code for image classification with a pretrained GoogLeNet: gnet = models.googlenet(pretrained=True).cuda() transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(32), transforms.ToTensor()]) images = {} resultDist = {} i = 1 for f in glob.iglob("/data/home/student/HW3/trainData/train2014/*"): print(i) i = i + 1 image = Image.open(f) # transform, create batch and get gnet weights img_t = transform(image).cuda() batch_t = torch.unsqueeze(img_t, 0).cuda() try: gnet.eval() out = gnet(batch_t) resultDist[f[-10:-4]] = out del out except: print(img_t.shape) del img_t del batch_t image.close() torch.cuda.empty_cache() i = i + 1 torch.save(resultDist, '/data/home/student/HW3/googlenetOutput1.pkl') I deleted all the possible tensors from the GPU after using them, but after about 8000 images from my dataset the GPU is full. I found the problem to be in: resultDist[f[-10:-4]] = out The dictionary is taking a lot of space and I can't delete it because I want to save my data to a pkl file.
Since you're not doing backprop, wrap your whole loop in a with torch.no_grad(): statement. Otherwise a computation graph is created and intermediate results may be stored on the GPU for a later backward pass, which takes a fair amount of space. Also, you probably want to save out.cpu() so your results aren't left on the GPU.

...
with torch.no_grad():
    for f in glob.iglob("/data/home/student/HW3/trainData/train2014/*"):
        ...
        resultDist[f[-10:-4]] = out.cpu()
        ...
    torch.save(resultDist, '/data/home/student/HW3/googlenetOutput1.pkl')
https://stackoverflow.com/questions/59741210/
no reference to Module.parameters() after using more than once
I have a class that inherits from torch.nn.Module. Now when I run this code:

d = net.parameters()
print(len(list(d)))
print(len(list(d)))
print(len(list(d)))

the output is:

10
0
0

So I have access to net.parameters() only once -- why is that? Then it apparently disappears. I ran into this while trying to write my own optimizer: I pass net.parameters() as a parameter to my new class, and apparently I couldn't use it because of this odd behaviour.
This is working as expected. Module.parameters() returns an iterator, more specifically, a Python generator. One thing about them is that you cannot rewind a generator. So, in the first list(d) call, you are actually "consuming" all the generator. Then, if you try to do that again, it will be empty.

If you're wondering, the .parameters() implementation can be seen here, and it is very simple:

def parameters(self, recurse=True):
    for name, param in self.named_parameters(recurse=recurse):
        yield param

Perhaps it is easier to wrap your mind around it with this toy example:

def g():
    for x in [0, 1, 2, 3, 4]:
        yield x

d = g()
print(list(d))  # prints: [0, 1, 2, 3, 4]
print(list(d))  # prints: []
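If you need to traverse the parameters more than once (for example inside a custom optimizer), one simple option, shown here as a sketch, is to materialize the generator into a list once and reuse that list:

params = list(net.parameters())  # consume the generator a single time
print(len(params))  # 10
print(len(params))  # still 10 -- a list can be iterated over repeatedly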
https://stackoverflow.com/questions/59743624/
Tensor Entry Selection Logic Divergence in PyTorch & Numpy
Description

I'm setting up a torch.Tensor for masking purposes. When attempting to select entries by indices, it turns out that the behavior differs depending on whether a numpy.ndarray or a torch.Tensor holds the index data. I would like to understand the design in both frameworks and find related documents that explain the difference.

Steps to replicate

Environment: PyTorch 1.3 in a container from the official release: pytorch/pytorch:1.3-cuda10.1-cudnn7-devel

Example: Say I need to set up mask as a torch.Tensor object with shape [3,3,3] and set the values at entries (0,0,1) & (1,2,0) to 1. The code below shows the difference.

mask = torch.zeros([3,3,3])
indices = torch.tensor([[0, 1],
                        [0, 2],
                        [1, 0]])
mask[indices.numpy()] = 1  # Works
# mask[indices] = 1        # Incorrect result

I noticed that indexing with mask[indices.numpy()] returns a new torch.Tensor of shape [2], while mask[indices] returns a new torch.Tensor of shape [3, 2, 3, 3], which suggests a difference in tensor slicing logic.
You get different results because that's how indexing is implemented in PyTorch. If you pass an array as the index, it gets "unpacked". For example:

indices = torch.tensor([[0, 1], [0, 2], [1, 0]])
mask = torch.arange(1,28).reshape(3,3,3)
# tensor([[[ 1,  2,  3],
#          [ 4,  5,  6],
#          [ 7,  8,  9]],
#         [[10, 11, 12],
#          [13, 14, 15],
#          [16, 17, 18]],
#         [[19, 20, 21],
#          [22, 23, 24],
#          [25, 26, 27]]])

mask[indices.numpy()] is equivalent to mask[[0, 1], [0, 2], [1, 0]], i.e. the elements of the i-th row of indices.numpy() are used to select elements of mask along the i-th axis. So it returns tensor([mask[0,0,1], mask[1,2,0]]), i.e. tensor([2, 16]).

On the other hand, when passing a tensor as the index (I don't know the exact reason for this differentiation between arrays and tensors for indexing), it is not "unpacked" like an array, and the elements of the i-th row of the indices tensor are used for selecting the elements of mask along axis 0. That is, mask[indices] is equivalent to mask[[[0, 1], [0, 2], [1, 0]], :, :]

>>> mask[indices]
tensor([[[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],
         [[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]]],
        [[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],
         [[19, 20, 21],
          [22, 23, 24],
          [25, 26, 27]]],
        [[[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]],
         [[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]]]])

which is basically torch.stack((mask[[0,1], :, :], mask[[0,2], :, :], mask[[1,0], :, :])) and has shape indices.shape + mask[0,:,:].shape == (3,2,3,3). So whole "sheets" are selected and stacked into new dimensions. Note that mask[indices] on the right-hand side produces a new tensor rather than a view, but indexing on the left-hand side of an assignment writes directly into mask. Therefore if you assign mask[indices] = 1 with this particular indices, all the elements of mask will become 1.
https://stackoverflow.com/questions/59744467/
How to specify pytorch / cuda version in pipenv
I am trying to install a specific version of pytorch that is compatible with a specific cuda driver version with pipenv. The pytorch website shows how to to this with pip: pip3 install torch==1.3.1+cu92 torchvision==0.4.2+cu92 -f https://download.pytorch.org/whl/torch_stable.html I tried to convert this into an entry in my Pipfile like this: [[source]] name = "pytorch" url = "https://download.pytorch.org/whl/torch_stable.html" verify_ssl = false pytorch = {version="==1.3.1+cu92", index="pytorch"} torchvision = {version="==0.4.2+cu92", index="pytorch"} However, this does not work. The dependency with this version can not be resolved. I am not sure if the url that is listed with the -f parameter in the pip3 command is even a valid source for pipenv. I could install both libraries by just passing the command through to pip like this: pipenv run pip install torch==1.3.1+cu92 torchvision==0.4.2+cu92 -f https://download.pytorch.org/whl/torch_stable.html but I am not really satisfied with that solution since the dependencies are not in the Pipfile and I have to manually document the usage of this command.
The problem with the approach above lies in the structure of https://download.pytorch.org/whl/torch_stable.html. Pipenv can only find torch versions 0.1 to 0.4.1, because all others have the cuda (or cpu) version as a prefix, e.g. cu92/torch-0.4.1-cp27-cp27m-linux_x86_64.whl. But the cuda version is a subdirectory. So if you change the url of the source to the cuda version and only specify the torch version in the dependencies, it works.

[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu92"
verify_ssl = false

[packages]
torch = {index = "pytorch", version = "==1.4.0"}

The only problem I encountered is that numpy is not recognized as a dependency of pytorch 1.4.0. But this seems to be a problem of that specific pytorch wheel. With version 1.3.1 or 1.5.1 and a recent pipenv version it works. So if, after the installation with pipenv install, the command pipenv run python -c "import torch" throws an error, numpy must be added manually.
https://stackoverflow.com/questions/59752559/
Error while downloading pre-trained GAN model in Pytorch : 'memory' file not found
I was following the steps given in https://modelzoo.co/model/pytorch-cyclegan-and-pix2pix to download a pre-trained model. These were the first 3 commands given there:

git clone https://github.com/pytorch/vision
cd vision
python setup.py install

However, when I ran the third line, I got an error:

fatal error: 'memory' file not found
#include <memory>
error: command 'gcc' failed with exit status 1

If anyone has some idea of how to overcome this error, it would be really helpful.
Try installing the build toolchain:

apt-get install build-essential

The missing <memory> header is part of the C++ standard library, which comes with the g++ toolchain that build-essential pulls in.
https://stackoverflow.com/questions/59754529/
Repeating a pytorch tensor without copying memory
Does pytorch support repeating a tensor without allocating significantly more memory? Assume we have a tensor

t = torch.ones((1,1000,1000))
t10 = t.repeat(10,1,1)

Repeating t 10 times will take 10x the memory. Is there a way I can create a tensor t10 without allocating significantly more memory? Here is a related question, but without answers.
You can use torch.expand:

t = torch.ones((1, 1000, 1000))
t10 = t.expand(10, 1000, 1000)

Keep in mind that t10 is just a reference to t. So, for example, a change to t10[0,0,0] will result in the same change in t[0,0,0] and in every member of t10[:,0,0]. Other than direct access, most operations performed on t10 will cause memory to be copied, which will break the reference and cause more memory to be used. For example: changing the device (.cpu(), .to(device=...), .cuda()), changing the datatype (.float(), .long(), .to(dtype=...)), or using .contiguous().
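A quick way to convince yourself that no extra memory is allocated (a small check, not part of the original answer):

t = torch.ones((1, 1000, 1000))
t10 = t.expand(10, 1000, 1000)
print(t.data_ptr() == t10.data_ptr())  # True: both tensors share the same underlying storage
print(t10.stride())                    # (0, 1000, 1): the expanded dim has stride 0, so nothing is copied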
https://stackoverflow.com/questions/59757933/
How does torch.distributed.barrier() work
I've read all the documentations I could find about torch.distributed.barrier(), but still having trouble understanding how it's being used in this script and would really appreciate some help. So the official doc of torch.distributed.barrier says it "Synchronizes all processes.This collective blocks processes until the whole group enters this function, if async_op is False, or if async work handle is called on wait()." It's used in two places in the script: First place if args.local_rank not in [-1, 0] and not evaluate: torch.distributed.barrier() # Make sure only the first process in distributed training process the dataset, and the others will use the cache ... (preprocesses the data and save the preprocessed data) if args.local_rank == 0 and not evaluate: torch.distributed.barrier() Second place if args.local_rank not in [-1, 0]: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab ... (loads the model and the vocabulary) if args.local_rank == 0: torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab I'm having trouble relating the comment in the code to the functionality of this function stated in the official doc. How does it make sure only the first process executes the code between the two calls of torch.distributed.barrier() and why it only checks whether the local rank is 0 before the second call? Thanks in advance!
First you need to understand the ranks. To be brief: in a multiprocessing context we typically assume that rank 0 is the first, or base, process. The other processes are then ranked differently, e.g. 1, 2, 3, for a total of four processes.

Some operations do not need to be done in parallel, or you just need one process to do some preprocessing or caching so that the other processes can use that data.

In your example, if the first if statement is entered by the non-base processes (ranks 1, 2, 3), they will block (or "wait") because they run into the barrier. They wait there, because barrier() blocks until all processes have reached a barrier, but the base process has not reached a barrier yet. So at this point the non-base processes (1, 2, 3) are blocked, but the base process (0) continues.

The base process will do some operations (preprocess and cache the data, in this case) until it reaches the second if statement. There, the base process will run into a barrier. At this point, all processes have stopped at a barrier, meaning that all current barriers can be lifted and all processes can continue. Because the base process prepared the data, the other processes can now use that data.

Perhaps the most important things to understand are:

when a process encounters a barrier it will block

the position of the barrier is not important (not all processes have to enter the same if statement, for instance)

a process is blocked by a barrier until all processes have encountered a barrier, upon which those barriers are lifted for all processes
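A minimal sketch of the pattern (the function names prepare_cache and load_cache are made-up placeholders for the preprocessing done in the script):

import torch.distributed as dist

def load_and_cache(local_rank):
    if local_rank not in [-1, 0]:
        dist.barrier()      # non-base processes wait here

    if local_rank in [-1, 0]:
        prepare_cache()     # hypothetical expensive work, done once by the base process

    if local_rank == 0:
        dist.barrier()      # the base process reaches a barrier too; everyone is released

    return load_cache()     # every process can now read the cached data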
https://stackoverflow.com/questions/59760328/
Is there a way to remove 1-2 (or more) specific neuron connections between the layers in the NN using Tensorflow (with Keras) or PyTorch?
I am in the process of making a GUI for creating different unique Neural Networks. I am down for using TensorFlow 2.0 (with Keras API) or PyTorch as the back-end. But I am lacking information on those topics, and would be super grateful if anyone can answer these: Language: Python 1) How to remove specific neuron connections between the layers in NN using any of these frameworks? 2) How to set the specific learning rule for some of the neurons in the layer? 3) How to set the specific activating function for some of the neurons in the layer? Huge thanks, please don't hesitate to answer, any information is useful. P.S. If anyone wants to contribute to the project, It would be awesome.
The way these libraries work is that the connections/weights for a layer are represented as a "tensor" (i.e. a multi-dimensional array or matrix). This makes the application of a layer behave as a linear algebra operation (a matrix multiplication). PyTorch/TensorFlow don't represent the individual neural connections as distinct objects in the code in a way that makes sense to think about them as something to be operated on or deleted individually.

1) How to remove specific neuron connections between the layers in NN using any of these frameworks?

You could set one of the weights to zero, i.e. layer.weights[x, y] = 0, though that doesn't actually "remove" it or prevent it from later being changed to non-zero. Maybe you could use a sparse tensor instead of a dense one, which is a coordinate format containing a list of all non-zero indices and values. Sparse is only going to be more efficient if you have a low percentage of non-zero values.

2) How to set the specific learning rule for some of the neurons in the layer?

By learning rule do you mean optimizers? You can find other posts about multiple optimizers, which would probably work out to be similar to (3) below. E.g. https://discuss.pytorch.org/t/two-optimizers-for-one-model/11085

3) How to set the specific activating function for some of the neurons in the layer?

Operators and activation functions are typically implemented to efficiently operate on a full tensor. You could split a layer into two separate smaller layers, and run them next to each other (at the same level in the network). E.g. if you had layer1 = torch.nn.Linear(10, 10) but instead of just torch.relu(layer1(input)) you wanted to apply relu to some outputs and, say, sigmoid to others, you could just:

layer1a = torch.nn.Linear(10, 5)
layer1b = torch.nn.Linear(10, 5)

and then

torch.cat( (torch.relu(layer1a(x)), torch.sigmoid(layer1b(x)) ), 0)

Similarly you can split any tensor into pieces, apply various functions to different ranges/values, and stitch the results back together with torch.cat.
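Putting the pieces above together, here is a small hypothetical sketch (not from the question) of a module that "removes" chosen connections with a fixed 0/1 mask on the weight and applies different activations to different output slices; it concatenates along the last dimension so it also works for batched input:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedSplitLayer(nn.Module):
    def __init__(self, in_features=10, out_features=10):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        mask = torch.ones(out_features, in_features)
        mask[0, 3] = 0  # "remove" the connection from input 3 to output neuron 0
        self.register_buffer("mask", mask)  # fixed, not trained, moves with the model

    def forward(self, x):
        out = F.linear(x, self.linear.weight * self.mask, self.linear.bias)
        half = out.shape[-1] // 2
        # relu on the first half of the outputs, sigmoid on the second half
        return torch.cat((torch.relu(out[..., :half]),
                          torch.sigmoid(out[..., half:])), dim=-1)

y = MaskedSplitLayer()(torch.randn(4, 10))  # -> shape (4, 10)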
https://stackoverflow.com/questions/59763208/
How to select correct value of learning rate multiplier?
I want to manually select the correct learning rate in an image classification problem, using PyTorch, by running the model for a few epochs. I have used an LR scheduler to decay the learning rate and have also manipulated the learning rate in the optimizer's parameter group, but I am unable to see any change in the loss.
Adjusting the learning rate and finding "the one" can be very tedious and time-consuming. Luckily for you, you are not the first one to be bothered by this issue and there are several approaches to adjust the learning rate in a more systematic way. To name just two of these methods:

Hyper-gradient descent - treating the learning rate as a parameter and "learning" it as well.

Cyclical learning rates - searching for an "optimal" learning rate within an interval.
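If you want to try the cyclical-learning-rate approach in PyTorch, a minimal sketch could look like this (the model, optimizer settings and LR bounds are arbitrary placeholders):

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-1, step_size_up=200)

for step in range(1000):
    optimizer.zero_grad()
    # loss = criterion(model(x), y); loss.backward()   # your training step goes here
    optimizer.step()
    scheduler.step()  # the learning rate cycles between base_lr and max_lr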
https://stackoverflow.com/questions/59769309/
How to implement LadderNet (2 U-Nets) in Keras? (With available PyTorch script as reference)
I am trying to implement the architecture of LadderNet (https://arxiv.org/abs/1810.07810) in Keras, with only the PyTorch version available as reference. The architecture in the paper is comprised of 2 U-Nets: The codes for the PyTorch implementation of LadderNet's architecture (obtained from https://github.com/juntang-zhuang/LadderNet/blob/master/src/LadderNetv65.py) and Keras' implementation of U-Net (obtained from https://github.com/zhixuhao/unet/blob/master/model.py) are respectively: drop = 0.25 def conv3x3(in_planes, out_planes, stride=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=True) class BasicBlock(nn.Module): expansion = 1 def __init__(self, inplanes, planes, stride=1, downsample=None): super(BasicBlock, self).__init__() if inplanes!= planes: self.conv0 = conv3x3(inplanes,planes) self.inplanes = inplanes self.planes = planes self.conv1 = conv3x3(planes, planes, stride) #self.bn1 = nn.BatchNorm2d(planes) self.relu = nn.ReLU(inplace=True) #self.conv2 = conv3x3(planes, planes) #self.bn2 = nn.BatchNorm2d(planes) self.downsample = downsample self.stride = stride self.drop = nn.Dropout2d(p=drop) def forward(self, x): if self.inplanes != self.planes: x = self.conv0(x) x = F.relu(x) out = self.conv1(x) #out = self.bn1(out) out = self.relu(out) out = self.drop(out) out1 = self.conv1(out) #out1 = self.relu(out1) out2 = out1 + x return F.relu(out2) class Bottleneck(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1, downsample=None): super(Bottleneck, self).__init__() self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(planes) self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm2d(planes * self.expansion) self.relu = nn.ReLU(inplace=True) self.downsample = downsample self.stride = stride def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) if self.downsample is not None: residual = self.downsample(x) out += residual out = self.relu(out) return out class Initial_LadderBlock(nn.Module): def __init__(self,planes,layers,kernel=3,block=BasicBlock,inplanes = 3): super().__init__() self.planes = planes self.layers = layers self.kernel = kernel self.padding = int((kernel-1)/2) self.inconv = nn.Conv2d(in_channels=inplanes,out_channels=planes, kernel_size=3,stride=1,padding=1,bias=True) # create module list for down branch self.down_module_list = nn.ModuleList() for i in range(0,layers): self.down_module_list.append(block(planes*(2**i),planes*(2**i))) # use strided conv instead of poooling self.down_conv_list = nn.ModuleList() for i in range(0,layers): self.down_conv_list.append(nn.Conv2d(planes*2**i,planes*2**(i+1),stride=2,kernel_size=kernel,padding=self.padding)) # create module for bottom block self.bottom = block(planes*(2**layers),planes*(2**layers)) # create module list for up branch self.up_conv_list = nn.ModuleList() self.up_dense_list = nn.ModuleList() for i in range(0, layers): self.up_conv_list.append(nn.ConvTranspose2d(in_channels=planes*2**(layers-i), out_channels=planes*2**max(0,layers-i-1), kernel_size=3, stride=2,padding=1,output_padding=1,bias=True)) 
self.up_dense_list.append(block(planes*2**max(0,layers-i-1),planes*2**max(0,layers-i-1))) def forward(self, x): out = self.inconv(x) out = F.relu(out) down_out = [] # down branch for i in range(0,self.layers): out = self.down_module_list[i](out) down_out.append(out) out = self.down_conv_list[i](out) out = F.relu(out) # bottom branch out = self.bottom(out) bottom = out # up branch up_out = [] up_out.append(bottom) for j in range(0,self.layers): out = self.up_conv_list[j](out) + down_out[self.layers-j-1] #out = F.relu(out) out = self.up_dense_list[j](out) up_out.append(out) return up_out class LadderBlock(nn.Module): def __init__(self,planes,layers,kernel=3,block=BasicBlock,inplanes = 3): super().__init__() self.planes = planes self.layers = layers self.kernel = kernel self.padding = int((kernel-1)/2) self.inconv = block(planes,planes) # create module list for down branch self.down_module_list = nn.ModuleList() for i in range(0,layers): self.down_module_list.append(block(planes*(2**i),planes*(2**i))) # use strided conv instead of poooling self.down_conv_list = nn.ModuleList() for i in range(0,layers): self.down_conv_list.append(nn.Conv2d(planes*2**i,planes*2**(i+1),stride=2,kernel_size=kernel,padding=self.padding)) # create module for bottom block self.bottom = block(planes*(2**layers),planes*(2**layers)) # create module list for up branch self.up_conv_list = nn.ModuleList() self.up_dense_list = nn.ModuleList() for i in range(0, layers): self.up_conv_list.append(nn.ConvTranspose2d(planes*2**(layers-i), planes*2**max(0,layers-i-1), kernel_size=3, stride=2,padding=1,output_padding=1,bias=True)) self.up_dense_list.append(block(planes*2**max(0,layers-i-1),planes*2**max(0,layers-i-1))) def forward(self, x): out = self.inconv(x[-1]) down_out = [] # down branch for i in range(0,self.layers): out = out + x[-i-1] out = self.down_module_list[i](out) down_out.append(out) out = self.down_conv_list[i](out) out = F.relu(out) # bottom branch out = self.bottom(out) bottom = out # up branch up_out = [] up_out.append(bottom) for j in range(0,self.layers): out = self.up_conv_list[j](out) + down_out[self.layers-j-1] #out = F.relu(out) out = self.up_dense_list[j](out) up_out.append(out) return up_out class Final_LadderBlock(nn.Module): def __init__(self,planes,layers,kernel=3,block=BasicBlock,inplanes = 3): super().__init__() self.block = LadderBlock(planes,layers,kernel=kernel,block=block) def forward(self, x): out = self.block(x) return out[-1] class LadderNetv6(nn.Module): def __init__(self,layers=3,filters=16,num_classes=2,inplanes=3): super().__init__() self.initial_block = Initial_LadderBlock(planes=filters,layers=layers,inplanes=inplanes) #self.middle_block = LadderBlock(planes=filters,layers=layers) self.final_block = Final_LadderBlock(planes=filters,layers=layers) self.final = nn.Conv2d(in_channels=filters,out_channels=num_classes,kernel_size=1) def forward(self,x): out = self.initial_block(x) #out = self.middle_block(out) out = self.final_block(out) out = self.final(out) #out = F.relu(out) out = F.log_softmax(out,dim=1) return out and def unet(pretrained_weights = None,input_size = (256,256,1)): inputs = Input(input_size) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1) conv2 = Conv2D(128, 3, activation 
= 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) drop4 = Dropout(0.5)(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) drop5 = Dropout(0.5)(conv5) up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5)) merge6 = concatenate([drop4,up6], axis = 3) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6)) merge7 = concatenate([conv3,up7], axis = 3) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7)) merge8 = concatenate([conv2,up8], axis = 3) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8)) merge9 = concatenate([conv1,up9], axis = 3) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9) model = Model(input = inputs, output = conv10) model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) #model.summary() if(pretrained_weights): model.load_weights(pretrained_weights) return model I'm very new to PyTorch, and I am still familiarizing myself with the transition between Keras and PyTorch, and I'm also hoping that the above can help in this transition of mine. 
With regards to the implementation in Keras for LadderNet, if I understood the paper correctly, is it simply just 2 U-Nets superimposed side-by-side (named LaddderNetKeras) as follows: def LadderNetKeras(pretrained_weights = None,input_size = (256,256,1)): inputs = Input(input_size) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) drop4 = Dropout(0.5)(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) drop5 = Dropout(0.5)(conv5) up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5)) merge6 = concatenate([drop4,up6], axis = 3) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6)) merge7 = concatenate([conv3,up7], axis = 3) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7)) merge8 = concatenate([conv2,up8], axis = 3) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8)) merge9 = concatenate([conv1,up9], axis = 3) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9) # SECOND U-NET conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv10) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', 
kernel_initializer = 'he_normal')(pool1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) drop4 = Dropout(0.5)(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) drop5 = Dropout(0.5)(conv5) up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5)) merge6 = concatenate([drop4,up6], axis = 3) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6)) merge7 = concatenate([conv3,up7], axis = 3) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7)) merge8 = concatenate([conv2,up8], axis = 3) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8)) merge9 = concatenate([conv1,up9], axis = 3) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9) model = Model(input = inputs, output = conv10) model.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) #model.summary() if(pretrained_weights): model.load_weights(pretrained_weights) return model Thank you and some insights will be deeply appreciated!
There is an implementation of LadderNet in Keras available here: https://github.com/divamgupta/ladder_network_keras/blob/master/ladder_net.py. Consider this as a starting point; I have successfully used this repository at one point.
https://stackoverflow.com/questions/59782403/
Why use relu before maxpooling?
In this PyTorch neural network tutorial (tutorial link), I'm confused about why we need to use relu before max pooling. Aren't the pixel values in the image already positive? I don't know why relu, max(0, x), is needed. Can anybody give me some advice on this issue?

class Net(nn.Module):
    ...(init function)
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # Max pooling over a (2, 2) window
The weights (and biases) of the neural net can be negative, so even though the input pixels are positive, the output of a convolution can contain negative activations. Applying relu zeroes those out, so only the nodes with a positive response are passed on.
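A tiny check (not from the tutorial) that illustrates this, and also shows that applying relu before or after max pooling gives the same result, since relu is monotonic:

import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(1, 4, 3)
x = torch.rand(1, 1, 8, 8)  # all-positive "pixels"
y = conv(x)
print((y < 0).any())  # usually tensor(True): negative weights/biases produce negative activations
print(torch.equal(F.max_pool2d(F.relu(y), 2), F.relu(F.max_pool2d(y, 2))))  # True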
https://stackoverflow.com/questions/59782904/
how to add tensor size pytorch
I have a problem while working in PyTorch. I'm trying to increase a tensor's size from [1,97,1] to [1,97,2]. How can I add 1 to the tensor size?
Use pad:

import torch.nn.functional as F

a = torch.empty((1,97,1))
a = F.pad(input=a, pad=(0,1))  # pad = (padding_left, padding_right)
print(a.shape)
>>> torch.Size([1, 97, 2])

It pads with the constant 0 by default, on the right side.
https://stackoverflow.com/questions/59783956/
Pytorch - Apply pooling on specific dimension
I have a 3-dimensional tensor. I would like to perform a 1d max pool on the second dimension. According to the documentation of pytorch, the pooling is always performed on the last dimension. https://pytorch.org/docs/stable/nn.html#maxpool1d For example:

>>> x = torch.rand(5, 64, 32)
>>> pool = nn.MaxPool1d(2, 2)
>>> pool(x).shape
torch.Size([5, 64, 16])

My desired output:

torch.Size([5, 32, 32])

How can I do that?
You can simply permute the dimensions:

x = torch.rand(5, 64, 32)
pool = nn.MaxPool1d(2, 2)
pool(x.permute(0,2,1)).permute(0,2,1)  # shape (5, 64, 32) -> (5, 32, 32)
https://stackoverflow.com/questions/59788096/
How do I make a mask of diagonal matrix, but starting from the 2nd column?
So here is what I can get with torch.eye(3,4) now. The matrix I get:

[[1, 0, 0, 0],
 [0, 1, 0, 0],
 [0, 0, 1, 0]]

Is there any (easy) way to transform it, or make such a mask in this format?

The matrix I want:

[[0, 1, 0, 0],
 [0, 0, 1, 0],
 [0, 0, 0, 1]]
You can do it with torch.diag by specifying the diagonal you want:

>>> torch.diag(torch.tensor([1,1,1]), diagonal=1)[:-1]
tensor([[0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]])

From the documentation:

If diagonal = 0, it is the main diagonal.
If diagonal > 0, it is above the main diagonal.
If diagonal < 0, it is below the main diagonal.
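If you'd rather start from an identity matrix as in the question, slicing a larger identity also works (just an alternative, and it gives a float result):

>>> torch.eye(4)[1:]
tensor([[0., 1., 0., 0.],
        [0., 0., 1., 0.],
        [0., 0., 0., 1.]])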
https://stackoverflow.com/questions/59794662/
Why does renewing an optimizer give a bad result?
I tried to change my optimizer, but first of all I want to check whether the following two codes give the same results:

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

If I re-create the same optimizer between the 'for' loops,

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

optimizer = optim.Adam(params, lr)
for epoch in range(500):
    ....
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

the results become bad. Why does this happen? Doesn't the optimizer just receive gradients from the loss and perform gradient-descent-like steps?
Different optimizers may have some "memory". For instance, Adam's update rule tracks the first and second moments of the gradient of each parameter and uses them to calculate the step size for each parameter. Therefore, if you re-initialize your optimizer you erase this information and consequently make the optimizer "less informed", resulting in suboptimal choices of step sizes.
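You can see this "memory" directly in the optimizer's state; a small check (the parameter below is a placeholder, not from the question's code):

import torch

params = [torch.nn.Parameter(torch.randn(3))]
opt = torch.optim.Adam(params, lr=1e-3)

loss = (params[0] ** 2).sum()
loss.backward()
opt.step()
print(len(opt.state))  # 1: running moment estimates are now stored for the parameter

opt = torch.optim.Adam(params, lr=1e-3)  # re-creating the optimizer...
print(len(opt.state))  # 0: ...throws that memory away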
https://stackoverflow.com/questions/59805592/
PyTorch model to C++
I have trained the detection algorithm and saved my best model. Now I want to convert my (pretrained) model to C++ and use it in my app. I wanted to know: what are the possible ways to convert a PyTorch model to C++? Thanks!
You can use the TorchScript intermediate representation of a PyTorch model, obtained through tracing or scripting, which can be run in a C++ environment. For this, you'll probably have to modify the model itself in order for it to be traced or scripted.

You can use ONNX (Open Neural Network Exchange), through which you can export your model and load it in another C++ framework such as Caffe2. It comes with its own implications though.

The easiest is to try Embedding Python, through which you can run your python (pytorch) model in a C++ environment. Note that the model will still run in python, but only through C++, so there won't be any speed gains that you might be expecting in C++.

Also, with the release of torchvision 0.5, all models in torchvision have native support for TorchScript and ONNX.
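For the TorchScript route, a minimal tracing sketch might look like this (using a torchvision model as a stand-in for your detection model; models with data-dependent control flow may need torch.jit.script instead of tracing):

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()  # stand-in for your trained model
example = torch.rand(1, 3, 224, 224)

traced = torch.jit.trace(model, example)  # record the operations for this example input
traced.save("model.pt")                   # load in C++ with torch::jit::load("model.pt")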
https://stackoverflow.com/questions/59806553/