Where is torch.tensor.item() defined on GitHub?
I am a torch newbie. While learning from a torch tutorial like the one below, I was curious where tensor.item() is defined. import torch a = torch.tensor([1]) print( a.item() ) # it works without problem. To find out, I first tried VSCode's go-to-definition, but it did not take me to the implementation. Second, I searched "def item()" on the torch GitHub, but I could not find it. Could you tell me where tensor.item() is defined on GitHub? Or, more generally, where the member functions (methods) of the torch.Tensor class are defined?
Short answer: torch/csrc/autograd/python_variable_indexing.cpp:268 Long answer: I hope you like C++. ;) The first thing to know is that item() isn't (usually) a method in most Python classes. Instead, Python converts calls to item() into other underlying methods like __getitem__() as a convenience. Knowing that: class Tensor is defined at torch/tensor.py:40. Most of Torch's underlying, compute-intensive functionality is implemented in C and C++, including Tensor. "class Tensor" is based on torch._C.TensorBase, which is provided to Python via a C API from torch/csrc/autograd/python_variable.cpp:812. THPVariableType is a map given to Python describing the C++ functions available on the object from Python. It's defined in torch/csrc/autograd/python_variable.cpp. The part relevant to you is the tp_as_mapping entry (line 752), which provides functions for objects implementing the mapping protocol - basically Python array-like objects (Python documentation). The THPVariable_as_mapping structure at line 725 provides the mapping methods. The second member provides the subscript function used to get items by index (Python documentation). Therefore, the C++ function THPVariable_getitem provides the implementation of torch.Tensor.item(), and it is defined at torch/csrc/autograd/python_variable_indexing.cpp:268.
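If you just want to confirm from the Python side that item() comes from the C extension rather than from Python source, a minimal introspection sketch (the exact output varies across PyTorch versions):

import inspect
import torch

print(torch.Tensor.item)                  # a method of the C-level base class
try:
    print(inspect.getsourcefile(torch.Tensor.item))
except TypeError as err:
    print(err)                            # built-in (C-implemented) methods have no Python source file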
https://stackoverflow.com/questions/65445621/
PyTorch: How to apply the same random transformation to multiple images?
I am writing a simple transformation for a dataset which contains many pairs of images. As a data augmentation, I want to apply some random transformation to each pair, but the images in that pair should be transformed in the same way. For example, given a pair of two images A and B, if A is flipped horizontally, B must be flipped horizontally as A is. Then the next pair C and D should be transformed differently from A and B, but C and D are transformed in the same way. I am trying it in the way below: import random import numpy as np import torchvision.transforms as transforms from PIL import Image img_a = Image.open("sample_a.jpg") # note that the two images have the same size img_b = Image.open("sample_b.png") img_c, img_d = Image.open("sample_c.jpg"), Image.open("sample_d.png") transform = transforms.RandomChoice( [transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip()] ) random.seed(0) display(transform(img_a)) display(transform(img_b)) random.seed(1) display(transform(img_c)) display(transform(img_d)) Yet, the above code does not choose the same transformation and, as I tested, it is dependent on the number of times transform is called. Is there any way to force transforms.RandomChoice to use the same transform when specified?
Usually a workaround is to apply the transform on the first image, retrieve the parameters of that transform, then apply a deterministic transform with those parameters on the remaining images. However, here RandomChoice does not provide an API to get the parameters of the applied transform, since it involves a variable number of transforms. In those cases, I usually override the original function. Looking at the torchvision implementation, it's as simple as: class RandomChoice(RandomTransforms): def __call__(self, img): t = random.choice(self.transforms) return t(img) Here are two possible solutions. You can either sample from the transform list on __init__ instead of on __call__: import random import torch import torchvision.transforms as T class RandomChoice(torch.nn.Module): def __init__(self, transforms): super().__init__() self.transforms = transforms self.t = random.choice(self.transforms) def __call__(self, img): return self.t(img) So you can do: transform = T.RandomChoice([ T.RandomHorizontalFlip(), T.RandomVerticalFlip() ]) display(transform(img_a)) # both img_a and img_b will display(transform(img_b)) # have the same transform transform = T.RandomChoice([ T.RandomHorizontalFlip(), T.RandomVerticalFlip() ]) display(transform(img_c)) # both img_c and img_d will display(transform(img_d)) # have the same transform Or better yet, transform the images in batch: import random import torch import torchvision.transforms as T class RandomChoice(torch.nn.Module): def __init__(self, transforms): super().__init__() self.transforms = transforms def __call__(self, imgs): t = random.choice(self.transforms) return [t(img) for img in imgs] Which allows you to do: transform = T.RandomChoice([ T.RandomHorizontalFlip(), T.RandomVerticalFlip() ]) img_at, img_bt = transform([img_a, img_b]) display(img_at) # both img_a and img_b will display(img_bt) # have the same transform img_ct, img_dt = transform([img_c, img_d]) display(img_ct) # both img_c and img_d will display(img_dt) # have the same transform
https://stackoverflow.com/questions/65447992/
How to handle this case in pytorch?
I have the following kinds of two corelated tensors data and mask. The size of data is 1x2x24x2 and the size of mask is 1x2x24. In mask, the True means the corresponding data in data is efficient and should be gradient back-propagated. data: tensor([[[[ 1.0663e+03, 5.5993e+02], [ 1.0612e+03, 7.2023e+02], [ 1.0831e+03, 7.2179e+02], [ 1.0945e+03, 5.6083e+02], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10]], [[ 6.9314e+02, 1.9700e+02], [ 6.3300e+02, 2.6924e+02], [ 6.3300e+02, 3.4165e+02], [ 7.7515e+02, 4.6000e+02], [ 8.2805e+02, 4.6000e+02], [ 9.0900e+02, 3.6276e+02], [ 9.0900e+02, 2.9035e+02], [ 7.9688e+02, 1.9700e+02], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10], [-1.0000e+10, 1.0000e+10]]]]) torch.Size([1, 2, 24, 2]) mask: tensor([[[ True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], [ True, True, True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]]]) torch.Size([1, 2, 24]) Based on the data and mask, I need further processing. For example, I need to do this kind of for loop: B, N = data.shape[0:2] result = torch.zeros((B, N), dtype=torch.float32) for b in range(B): for n in range(N): tmp_data = data[b,n] # 24 x 2 tmp_mask = mask[b,n] # 24 selected = tmp_data[tmp_mask] selected = torch.cat([selected, selected[0][None]], dim=0) total = selected[0:-1, 0] * selected[1:, 1] - selected[0:-1, 1] * selected[1:, 0] # do cross product total = torch.sum(total, dim=0) result[b,n] = torch.abs(total) / 2 Then, the result tensor is fed into subsequent processing, e.g. loss computation. My question is, is there any speed up method to get rid of the for-loop? The main difficulty lies in that for each b and n, the count of True in mask is not equal, sometimes it is 3, sometimes it is 5, but the maximum count of True in each sample is 8. So, in the for-loop, it is necessary to handle each case one by one. Is there any vectorized way for this? Another difficulty lies in the gradient back-propagation, which I need further investigation to check if this routine can work correctly for gradient back-propagation. I'm looking forward to an elegant way to handle this kind of tensor processing in pytorch. Thanks in advance! UPDATED: The data and mask were sorted by the previous routine, for example: sorted_inds = ... 
# sorted_inds: B x N x 24 data = torch.gather(data_original, dim=2, index=sorted_inds.unsqueeze(-1).repeat(1,1,1,2)) # B x N x 24 x 2 mask = torch.gather(mask_original, dim=2, index=sorted_inds) # B x N x 24 # then the data and mask are obtained # continue processing as described in this post ...
You are looking to vectorize the two for-loops with tensor operations. Generally speaking this is possible by just ignoring the first two axes and performing operations on the last two. However, in this particular case I don't think this is achievable. The reason lies with the masking that takes place on data with mask (via indexing). This operation will only return a flattened tensor: data[mask] tensor([[1066.3000, 559.9300], [1061.2000, 720.2300], [1083.1000, 721.7900], [1094.5000, 560.8300], [ 693.1400, 197.0000], [ 633.0000, 269.2400], [ 633.0000, 341.6500], [ 775.1500, 460.0000], [ 828.0500, 460.0000], [ 909.0000, 362.7600], [ 909.0000, 290.3500], [ 796.8800, 197.0000]]) As you can see, the correct values are retained but not the desired shape. That's where the problem lies. What we would have wished for (in order to proceed with a vectorized form) is the following, but this is not possible, since all tensors along a dimension must have the same length, namely the size of that dimension: tensor([[[1066.3000, 559.9300], [1061.2000, 720.2300], [1083.1000, 721.7900], [1094.5000, 560.8300]], [[ 693.1400, 197.0000], [ 633.0000, 269.2400], [ 633.0000, 341.6500], [ 775.1500, 460.0000], [ 828.0500, 460.0000], [ 909.0000, 362.7600], [ 909.0000, 290.3500], [ 796.8800, 197.0000]]]) Concerning the cross product: PyTorch provides a function for that, torch.cross, but it only works on vectors with three components, so it cannot be used here. I think your implementation is fine. Some suggestions - these are not groundbreaking improvements but can help: There's a trick with indexing to avoid having to do x.unsqueeze(0) (or, as you did, x[None]) on one-element tensors: slicing with x[:1]. For .sum() you don't need the dim argument, since you are looking for the total sum and not a per-axis sum; by default it sums over all axes. You could flatten the first two axes, perform a single loop, and reshape at the end. Here, I have taken the liberty of tweaking your code (note torch.stack instead of torch.tensor at the end, so gradients can still flow through result): B, N, *_ = data.shape data_ = data.reshape(B*N, -1, 2) mask_ = mask.reshape(B*N, -1) result = [] for i in range(B*N): selected = data_[i][mask_[i]] looped = torch.cat([selected, selected[:1]]) cross = looped[:-1, 0]*looped[1:, 1] - looped[:-1, 1]*looped[1:, 0] result.append(cross.sum().abs() / 2) torch.stack(result).reshape(B, N)
https://stackoverflow.com/questions/65449180/
BigBird, or Sparse self-attention: How to implement a sparse matrix?
This question is related to the new paper: Big Bird: Transformers for Longer Sequences. Mainly, it is about the implementation of the Sparse Attention (that is specified in the supplemental material, part D). Currently, I am trying to implement it in PyTorch. They suggest a new way to speed up the computation by blocking the original query and key matrices (see below). When you do the matrix multiplication in step (b), you end up with something like the block result pictured. So I was wondering: how would you go from that representation (image above) to a sparse matrix (using PyTorch, see below)? In the paper, they just say: "simply reshape the result", and I do not know any easy way to do so (especially when I have multiple blocks in different positions; see step (c) on the first image). RESOLUTION: Huggingface has an implementation of BigBird in pytorch.
I ended up following the guidelines in the paper. When it comes to unpacking the result, I use torch.sparse_coo_tensor. EDIT: Sparse tensors are still memory-hungry! The more efficient solution is described here.
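For reference, a minimal sketch of torch.sparse_coo_tensor usage; the indices and values below are made up and do not reflect the actual BigBird block layout:

import torch

# nnz entries at (row, col) positions (0, 2), (1, 0), (1, 2)
indices = torch.tensor([[0, 1, 1],
                        [2, 0, 2]])       # shape (2, nnz): row indices, then column indices
values = torch.tensor([3.0, 4.0, 5.0])    # one value per coordinate
sparse = torch.sparse_coo_tensor(indices, values, size=(2, 3))
print(sparse.to_dense())                  # tensor([[0., 0., 3.], [4., 0., 5.]])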
https://stackoverflow.com/questions/65450215/
What is the low-opacity graph in TensorBoard?
I am new to TensorBoard. This is the output I get from this line: self.logger.experiment.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch) I can only hover over the dark blue line, which is probably the loss I logged, but not the light blue line. What is the light blue line? Why is the dark blue line not strictly continuous?
I asked the same question on GitHub. The light curve is the actual data, and the bold curve is the smoothed version. Use the smoothing slider in the UI to adjust it.
https://stackoverflow.com/questions/65450774/
PyTorch: Why is batch the second dimension in the default LSTM?
In the PyTorch LSTM documentation it is written: batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False I'm wondering why they chose the default batch dimension to be the second one and not the first one. For me, it is easier to imagine my data as [batch, seq, feature] than as [seq, batch, feature]. The first one is more intuitive for me and the second one is counterintuitive. I'm asking here to find out if there is any reason behind this and if you can help me gain some understanding of it.
As far as I know, there is no heavily justified answer. Nowadays it differs from other frameworks where, as you say, the shape is more intuitive, such as Keras, but it stays this way for compatibility with older versions: changing a default parameter that modifies the dimensions of a vector would probably break half of the models out there if their maintainers updated to newer PyTorch versions. Probably the idea, in the beginning, was to put the temporal dimension first to simplify iterating over time: you can just do for t, out_t in enumerate(my_tensor) instead of less visual accesses such as my_tensor[:, i] while iterating over range(time).
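To make the two layouts concrete, here is a small sketch (the sizes are arbitrary):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16)                       # default: (seq, batch, feature)
lstm_bf = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)  # (batch, seq, feature)

x = torch.randn(5, 3, 8)                 # seq_len=5, batch=3, features=8
out, _ = lstm(x)                         # out: (5, 3, 16)
out_bf, _ = lstm_bf(x.transpose(0, 1))   # input (3, 5, 8) -> out_bf: (3, 5, 16)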
https://stackoverflow.com/questions/65451265/
Why isn't the PyTorch linear model using a sigmoid function?
I'm expecting that the linear model in PyTorch yields sigmoid(WX+b). But I see it is only returning WX+b. Why is this the case? In Udacity "Intro to deep learning with pytorch" -> Lesson 2: Introduction to Neural Networks, they say that the output is a sigmoid: ŷ = σ(w1x1 + w2x2 + b) From the code below I was expecting y cap to be 0.38391371665752183, but the model returns just the value of WX+b, which I confirmed by computing it manually. Why is there this discrepancy? import torch from torch import nn import numpy as np torch.manual_seed(0) model = nn.Linear(2,1) w1 = model.weight.detach().numpy() b1 = model.bias.detach().numpy() print (f'model.weight = {w1}, model.bias={b1}') x = torch.tensor([[0.2877, 0.2914]]) print(f'model predicted {model(x)}') z = x.numpy()[0][0] * w1[0][0] + x.numpy()[0][1] * w1[0][1] + b1[0] print(f'manual multiplication yielded {z}') ycap = 1/(1+ np.exp(-z)) print(f'y cap is {ycap}') Output: model.weight = [[-0.00529398 0.3793229 ]], model.bias=[-0.58198076] model predicted tensor([[-0.4730]], grad_fn=<AddmmBackward>) manual multiplication yielded -0.4729691743850708 y cap is 0.38391371665752183
The nn.Linear layer is a linear, fully connected layer. It corresponds to WX+b, not sigmoid(WX+b). As the name implies, it's a linear function. You can see it as a matrix multiplication (with or without a bias). Therefore it does not have an activation function (i.e. a nonlinearity) attached. If you want to append an activation function to it, you can do so by defining a sequential model: model = nn.Sequential( nn.Linear(2, 1), nn.Sigmoid() ) Edit - if you want to make sure: x = torch.tensor([[0.2877, 0.2914]]) model = nn.Linear(2,1) m1 = nn.Sequential(model, nn.Sigmoid()) m1(x)[0].item(), torch.sigmoid(model(x))[0].item()
https://stackoverflow.com/questions/65451589/
Expected input batch_size (18) to match target batch_size (6)
Is an RNN for image classification available only for grayscale images? The following program works for grayscale image classification. If RGB images are used, I get this error: Expected input batch_size (18) to match target batch_size (6) at this line loss = criterion(outputs, labels). My data loading for train, valid and test is as follows. input_size = 300 inputH = 300 inputW = 300 #Data transform (normalization & data augmentation) stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) train_resize_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) train_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.RandomHorizontalFlip(), tt.ToTensor(), tt.Normalize(*stats)]) valid_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) test_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) #Create dataset train_ds = ImageFolder('./data/train', train_tfms) valid_ds = ImageFolder('./data/valid', valid_tfms) test_ds = ImageFolder('./data/test', test_tfms) from torch.utils.data.dataloader import DataLoader batch_size = 6 #Training data loader train_dl = DataLoader(train_ds, batch_size, shuffle = True, num_workers = 8, pin_memory=True) #Validation data loader valid_dl = DataLoader(valid_ds, batch_size, shuffle = True, num_workers = 8, pin_memory=True) #Test data loader test_dl = DataLoader(test_ds, 1, shuffle = False, num_workers = 1, pin_memory=True) My model is as follows. num_steps = 300 hidden_size = 256 #size of hidden layers num_classes = 5 num_epochs = 20 learning_rate = 0.001 # Fully connected neural network with one hidden layer num_layers = 2 # 2 RNN layers are stacked class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(RNN, self).__init__() self.num_layers = num_layers self.hidden_size = hidden_size self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True, dropout=0.2) #batch must be the first dimension #our input needs to have shape #x -> (batch_size, seq, input_size) self.fc = nn.Linear(hidden_size, num_classes) #this fc comes after the RNN, so it needs the last hidden size of the RNN def forward(self, x): #according to the documentation of RNN in pytorch #rnn needs input and h_0 as inputs (h_0 is the initial hidden state) #the following one is the initial hidden state h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) #first dim is number of layers and second one is batch size #output has two outputs.
The first tensor contains the output features of the last hidden layer for all time steps #the second one is the hidden state out, _ = self.rnn(x, h0) #output has batch_size, num_steps, hidden size #we only need to decode the hidden state of the last time step #out (N, 30, 128) #Since we need only the last time step #Out (N, 128) out = out[:, -1, :] #-1 for last time step, take all for N and 128 out = self.fc(out) return out stacked_rnn_model = RNN(input_size, hidden_size, num_layers, num_classes).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() #cross entropy has softmax at the output #optimizer = torch.optim.Adam(stacked_rnn_model.parameters(), lr=learning_rate) #optimizer does gradient optimization using Adam optimizer = torch.optim.SGD(stacked_rnn_model.parameters(), lr=learning_rate) # Train the model n_total_steps = len(train_dl) for epoch in range(num_epochs): t_losses=[] for i, (images, labels) in enumerate(train_dl): # origin shape: [6, 3, 300, 300] # resized: [6, 300, 300] images = images.reshape(-1, num_steps, input_size).to(device) print('images shape') print(images.shape) labels = labels.to(device) # Forward pass outputs = stacked_rnn_model(images) print('outputs shape') print(outputs.shape) loss = criterion(outputs, labels) t_losses.append(loss) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() The printed images and outputs shapes are: images shape torch.Size([18, 300, 300]) outputs shape torch.Size([18, 5]) Where is the mistake?
Tl;dr: You are flattening the first two axes, namely batch and channels. I am not sure you are taking the right approach, but I will write about that later. In any case, let's look at the issue you are facing. You have a data loader that produces (6, 3, 300, 300), i.e. batches of 6 three-channel 300x300 images. By the look of it, you are looking to reshape each batch element (3, 300, 300) into (step_size=300, -1). However, instead of that you are affecting the first axis - which you shouldn't - with images.reshape(-1, num_steps, input_size). This would have the desired effect when working with single-channel images, since dim=1 wouldn't be the "channel axis". In your case you have 3 channels; therefore, the resulting shape is (6*3*300*300//300//300, 300, 300), which is (18, 300, 300) since num_steps=300 and input_size=300. As a result you are left with 18 batch elements instead of 6. Instead, what you want is to reshape with (batch_size, num_steps, -1), leaving the last axis (a.k.a. seq_length) of variable size. This will result in a shape of (6, 300, 900). Here is a corrected and reduced snippet: import torch import torch.nn as nn from torch.utils.data import TensorDataset, DataLoader batch_size = 6 channels = 3 inputH, inputW = 300, 300 train_ds = TensorDataset(torch.rand(100, 3, inputH, inputW), torch.rand(100, 5)) train_dl = DataLoader(train_ds, batch_size) class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(RNN, self).__init__() # (batch_size, seq, input_size) self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) # (batch_size, hidden_size) self.fc = nn.Linear(hidden_size, num_classes) # (batch_size, num_classes) def forward(self, x): out, _ = self.rnn(x) out = out[:, -1, :] out = self.fc(out) return out num_steps = 300 input_size = inputH*inputW*channels//num_steps hidden_size = 256 num_classes = 5 num_layers = 2 rnn = RNN(input_size, hidden_size, num_layers, num_classes) for x, y in train_dl: print(x.shape, y.shape) images = x.reshape(batch_size, num_steps, -1) print(images.shape) outputs = rnn(images) print(outputs.shape) break As I said in the beginning, I am a bit wary about this approach because you are essentially feeding your RNN an RGB 300x300 image in the form of a sequence of 300 flattened vectors... I can't say if that makes sense in terms of training and whether the model will be able to learn from that. I could be wrong!
https://stackoverflow.com/questions/65454550/
How can I run both TensorFlow and Torch on Mac M1 MacBook Pro?
I encounter some problems on my MacBook Pro M1. I thought it would be easier to start coding on it; apparently it's not an ML beast yet... I need to use both PyTorch and TensorFlow in Python. I have installed TensorFlow 2.0 for macOS. The problem is: TensorFlow won't work when you use an x86_64 terminal (so it doesn't work with PyCharm). However, I can import TensorFlow 2.0 from an ARM terminal. Paradoxically, PyTorch won't install in an ARM terminal, only in an x86_64 terminal. So, in the same Python terminal, I'm not able to import both torch and TensorFlow 2.0. Since HuggingFace transformers is crucial for me, and transformers needs both TensorFlow 2.0 and PyTorch, I need to go back to my old computer to code. I'm very disappointed! Has anyone successfully imported both PyTorch and TensorFlow on a Mac M1 device? And does anyone know if there is a way to force PyCharm to use an ARM terminal, so I can use TensorFlow 2.0 in PyCharm on my M1 MBP? Thank you!
After some research, I found this answer: https://github.com/pytorch/pytorch/issues/48145 . So if someone tries to run both TensorFlow and Torch on a Mac M1 with the PyCharm Apple Silicon version, here's how to proceed: Create a new virtual env with TensorFlow for macOS. From a terminal (not from PyCharm, where I got an error), with sudo rights, install the whl file for torch from the GitHub issue: https://github.com/wizyoung/AppleSiliconSelfBuilds/blob/main/builds/torch-1.8.0a0-cp39-cp39-macosx_11_0_arm64.whl Now you can open a PyCharm project with your freshly created virtual environment, and you'll be able to import both TensorFlow and Torch. However, a lot of libraries will be tricky to install, like PyTorch was...
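If you're unsure which architecture a given Python interpreter is running under (native ARM vs. x86_64 under Rosetta), a quick check:

import platform
print(platform.machine())   # 'arm64' on native Apple Silicon, 'x86_64' under Rosetta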
https://stackoverflow.com/questions/65454946/
What does if self.transforms mean?
What does if self.transforms: data = self.transforms(data) do? I don't understand the logic behind this line - what is the condition the line is using? I'm reading an article on creating a custom dataset with PyTorch, based on the below implementation: #custom dataset class MNISTDataset(Dataset): def __init__(self, images, labels=None, transforms=None): self.X = images self.y = labels self.transforms = transforms def __len__(self): return (len(self.X)) def __getitem__(self, i): data = self.X.iloc[i, :] data = np.asarray(data).astype(np.uint8).reshape(28, 28, 1) if self.transforms: data = self.transforms(data) if self.y is not None: return (data, self.y[i]) else: return data train_data = MNISTDataset(train_images, train_labels, transform) test_data = MNISTDataset(test_images, test_labels, transform) # dataloaders trainloader = DataLoader(train_data, batch_size=128, shuffle=True) testloader = DataLoader(test_data, batch_size=128, shuffle=True) Thank you! I'm basically trying to understand why it works and how it applies transforms to the data.
The dataset MNISTDataset can optionally be initialized with a transform function. If such a transform function is given, it will be saved in self.transforms; otherwise it will keep its default value, None. When retrieving an item with __getitem__, the code first checks whether self.transforms is truthy, i.e. whether it can be coerced to True, which is the case for a callable object. Otherwise it means self.transforms wasn't provided in the first place and no transform function is applied to data. Here's a general example, outside of a torch/torchvision context: def do(x, callback=None): if callback: # will be True if callback is a function/lambda return callback(x) return x do(2) # returns 2 do(2, callback=lambda x: 2*x) # returns 4
https://stackoverflow.com/questions/65455986/
How to use an embedding layer as a linear layer in PyTorch?
I'm currently working on a personal reimplementation of the Transformer paper and had a question. On page 5, in section "3.4 Embeddings and Softmax," it states: In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation. I've currently implemented my model to use just one embedding layer for both source and target tensors, but I'm wondering if there is a way I could use the weights of the embedding layer as a linear layer. What I've currently done is something like: output = previous_layer(previous_input) final_output = torch.matmul(output, embedding_layer.embedding.weight.transpose(1, 0)) I've transposed the weight matrix before the matrix multiplication because it's of shape (vocab_size, embedding_dim) and the shape of output is (batch_size, seq_len, embedding_dim). Is this the proper way to use an embedding layer as a linear layer? If not, I'd like some tips on what I should be doing. Thanks. Edit: Specific line for weight sharing from the link in the answer.
You could define an nn.Linear layer and replace its weights by assigning it the weights from the nn.Embedding, which ties the two parameters together: trg_emb = nn.Embedding(trg_enc_dim, embedding_dim) src_emb = nn.Embedding(src_enc_dim, embedding_dim) trg_projection = nn.Linear(embedding_dim, trg_enc_dim, bias=False) trg_projection.weight = trg_emb.weight # tie with nn.Linear You can also do the same for the source embedding, so your two embedding layers share the same weights as well: src_emb.weight = trg_emb.weight This would mean the source embedding would end up having the same size as the target embedding, i.e. trg_enc_dim x embedding_dim. Here's a possible PyTorch implementation for inspiration.
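A quick sanity check of the shared projection, reusing the names from the snippet above (the dimensions here are made up):

import torch
import torch.nn as nn

trg_enc_dim, embedding_dim = 1000, 512
trg_emb = nn.Embedding(trg_enc_dim, embedding_dim)
trg_projection = nn.Linear(embedding_dim, trg_enc_dim, bias=False)
trg_projection.weight = trg_emb.weight            # tied: same Parameter object

decoder_out = torch.randn(2, 7, embedding_dim)    # (batch, seq_len, embedding_dim)
logits = trg_projection(decoder_out)              # (2, 7, trg_enc_dim)
# identical to the manual matmul against the transposed embedding matrix
assert torch.allclose(logits, decoder_out @ trg_emb.weight.T, atol=1e-6)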
https://stackoverflow.com/questions/65456174/
PyTorch equivalent to tf.nn.softmax_cross_entropy_with_logits and tf.nn.sigmoid_cross_entropy_with_logits
I found the post here. Here, we try to find an equivalent of tf.nn.softmax_cross_entropy_with_logits in PyTorch. The answer is still confusing to me. Here is the TensorFlow 2 code: import tensorflow as tf import numpy as np # here we assume a batch size of 2 with 5 classes preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]]) labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]]) tf_preds = tf.convert_to_tensor(preds, dtype=tf.float32) tf_labels = tf.convert_to_tensor(labels, dtype=tf.float32) loss = tf.nn.softmax_cross_entropy_with_logits(logits=tf_preds, labels=tf_labels) It gives me the loss as <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.2427604, 1.0636061], dtype=float32)> Here is the PyTorch code: import torch import numpy as np preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]]) labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]]) torch_preds = torch.tensor(preds).float() torch_labels = torch.tensor(labels).float() loss = torch.nn.functional.cross_entropy(torch_preds, torch_labels) However, it raises: RuntimeError: 1D target tensor expected, multi-target not supported It seems that the problem is still unsolved. How can I implement tf.nn.softmax_cross_entropy_with_logits in PyTorch? What about tf.nn.sigmoid_cross_entropy_with_logits?
- tf.nn.softmax_cross_entropy_with_logits Edit: This is actually not equivalent to F.cross_entropy. The latter can only handle the single-class classification setting, not the more general case where the label can comprise multiple classes. Indeed, F.cross_entropy takes a unique class id as target (per instance), not a probability distribution over classes as tf.nn.softmax_cross_entropy_with_logits expects to receive. >>> logits = torch.tensor([[4.0, 2.0, 1.0], [0.0, 5.0, 1.0]]) >>> labels = torch.tensor([[1.0, 0.0, 0.0], [0.0, 0.8, 0.2]]) In order to get the desired result, apply a log-softmax to your logits, then take the negative log-likelihood: >>> -torch.sum(F.log_softmax(logits, dim=1) * labels, dim=1) tensor([0.1698, 0.8247]) - tf.nn.sigmoid_cross_entropy_with_logits For this one you can apply F.binary_cross_entropy_with_logits. >>> F.binary_cross_entropy_with_logits(logits, labels, reduction='none') tensor([[0.0181, 2.1269, 1.3133], [0.6931, 1.0067, 1.1133]]) It is equivalent to applying a sigmoid and then the negative log-likelihood, considering each class as a binary classification task: >>> labels*-torch.log(torch.sigmoid(logits)) + (1-labels)*-torch.log(1-torch.sigmoid(logits)) tensor([[0.0181, 2.1269, 1.3133], [0.6931, 1.0067, 1.1133]]) (These snippets assume torch.nn.functional has been imported as F.)
https://stackoverflow.com/questions/65458736/
Changing batch,height,width,alpha to batch,alpha,height,width for pytorch
I have a batch of images with shape torch.Size([10, 512, 512, 3]). I can loop over the images and see all 10 of them. But to feed this batch to a PyTorch model, I have to convert it to torch.Size([10, 3, 512, 512]). I tried a lot of ways but was unable to find a solution. How can we do that?
Use permute: import torch x = torch.rand(10, 512, 512, 3) y = x.permute(0, 3, 1, 2) x.shape: torch.Size([10, 512, 512, 3]) y.shape: torch.Size([10, 3, 512, 512])
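One caveat worth knowing, continuing from the snippet above: permute returns a view with rearranged strides rather than a copy, so some downstream operations need the memory layout materialized first.

y = x.permute(0, 3, 1, 2)
print(y.is_contiguous())    # False - only the strides changed
y = y.contiguous()          # copies into the new layout (needed e.g. before .view)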
https://stackoverflow.com/questions/65463967/
Tensor manipulation - creating a positional tensor from a given tensor
I have an input tensor which has zero padding at the start and then a sequence of values. So, something like: x = torch.tensor([[0, 2, 8, 12], [0, 0, 6, 3]]) What I need is another tensor with the same shape, retaining 0's for the padding and an increasing sequence for the rest of the numbers. So my output tensor should be: y = ([[0, 1, 2, 3], [0, 0, 1, 2]]) I tried something like: MAX_SEQ=4 seq_start = np.nonzero(x) start = seq_start[0][0] pos_id = torch.cat((torch.from_numpy(np.zeros(start, dtype=int)).to(device), torch.arange(1, MAX_SEQ-start+1).to(device)), 0) print(pos_id) This works if the tensor is 1-dimensional, but it needs additional logic to handle 2-D shapes. This could be done since np.nonzero returns a tuple, and we could probably loop through it updating a counter or something. However, I am sure there must be a simple tensor operation which would do this in 1-2 lines of code and perhaps more efficiently. Help appreciated.
A possible solution in three small steps: Find the index of the first non-zero element for each row. This can be done with a trick explained here (adapted here for non-binary tensors). > idx = torch.arange(x.shape[1], 0, -1) tensor([4, 3, 2, 1]) > xbin = torch.where(x == 0, 0, 1) tensor([[0, 1, 1, 1], [0, 0, 1, 1]]) > xbin*idx tensor([[0, 3, 2, 1], [0, 0, 2, 1]]) > indices = torch.argmax(xbin*idx, dim=1, keepdim=True) tensor([[1], [2]]) Create an index sequence for the resulting tensor (without padding). This can be done by applying torch.repeat and torch.view on a torch.arange call: > rows, cols = x.shape > seq = torch.arange(1, cols+1).repeat(1, rows).view(-1, cols) tensor([[1, 2, 3, 4], [1, 2, 3, 4]]) Lastly - here's the trick! - we subtract the index of the first non-zero element from that sequence, for each row. Then we mask the padding values and replace them with zeros: > pos_id = seq - indices tensor([[ 0, 1, 2, 3], [-1, 0, 1, 2]]) > mask = indices > seq - 1 tensor([[ True, False, False, False], [ True, True, False, False]]) > pos_id[mask] = 0 tensor([[0, 1, 2, 3], [0, 0, 1, 2]])
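Gathering the three steps into a single helper (a sketch; the function name is mine):

import torch

def position_ids(x):
    rows, cols = x.shape
    idx = torch.arange(cols, 0, -1)                          # [cols, ..., 1]
    indices = torch.argmax(torch.where(x == 0, 0, 1) * idx,  # index of first non-zero per row
                           dim=1, keepdim=True)
    seq = torch.arange(1, cols + 1).repeat(1, rows).view(-1, cols)
    pos_id = seq - indices
    pos_id[indices > seq - 1] = 0                            # zero out the padding positions
    return pos_id

x = torch.tensor([[0, 2, 8, 12], [0, 0, 6, 3]])
print(position_ids(x))   # tensor([[0, 1, 2, 3], [0, 0, 1, 2]])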
https://stackoverflow.com/questions/65464515/
Convert 5D tensor to 4D tensor in PyTorch
In PyTorch I have a 5D tensor X of dimensions B x 9 x C x H x W. I want to convert it into a 4D tensor Y with dimensions B x 9C x H x W such that the concatenation happens channel-wise. To illustrate, let a = X[1,0,:,:,:] b = X[1,1,:,:,:] c = X[1,2,:,:,:] ... i = X[1,8,:,:,:] Then in the tensor Y, a to i should be concatenated channel-wise.
Try: import torch x = torch.rand(3, 4, 3, 2, 6) print(x.shape) y=x.flatten(start_dim=1, end_dim=2) print(y.shape) torch.Size([3, 4, 3, 2, 6]) torch.Size([3, 12, 2, 6])
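A quick way to convince yourself that flatten concatenates channel-wise in the intended order (continuing the example above):

# the second 3-channel block of batch element 1 lands at channels 3:6
assert torch.equal(y[1, 3:6], x[1, 1])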
https://stackoverflow.com/questions/65465646/
transpose in PyTorch: IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
I want to transpose my data using transpose, but I am encountering the error above. My data and the related processing are uploaded to GitHub: https://github.com/nurkbts/error/blob/main/error.ipynb
When using torch.bmm (batch matrix multiplication), both tensors must have three dimensions (the first being the batch). Please read the documentation for details. Since your tensors only have two dimensions, instead of bmm you should just use the @ operator (equivalent to applying torch.matmul). Also, don't forget to transpose. This will give you a shape of (64, 64): _scores = q @ k.T / np.sqrt(64)
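For context, a sketch of the shape difference between the two approaches (the names q and k are assumptions; the original snippet's variable names were mangled by the scraper):

import numpy as np
import torch

q = torch.randn(64, 512)   # 2-D: no batch dimension
k = torch.randn(64, 512)

scores = q @ k.T / np.sqrt(64)   # (64, 64)
# torch.bmm would require an explicit batch dimension on both operands:
scores_bmm = torch.bmm(q.unsqueeze(0), k.unsqueeze(0).transpose(1, 2)).squeeze(0)
assert torch.allclose(scores * np.sqrt(64), scores_bmm, atol=1e-4)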
https://stackoverflow.com/questions/65466114/
ValueError: Expected target size (128, 44), got torch.Size([128, 100]), LSTM Pytorch
I want to build a model that predicts the next character based on the previous characters. I have split the text into sequences of integers with length = 100 (using a dataset and a dataloader). The dimensions of my input and target variables are: inputs dimension: (batch_size, sequence_length), in my case (128, 100); targets dimension: (batch_size, sequence_length), in my case (128, 100). After the forward pass I get predictions of dimension (batch_size, sequence_length, vocabulary_size), which in my case is (128, 100, 44), but when I calculate the loss using the nn.CrossEntropyLoss() function: batch_size = 128 sequence_length = 100 number_of_classes = 44 # creates random tensor of your output shape output = torch.rand(batch_size,sequence_length, number_of_classes) # creates tensor with random targets target = torch.randint(number_of_classes, (batch_size,sequence_length)).long() # define loss function and calculate loss criterion = nn.CrossEntropyLoss() loss = criterion(output, target) print(loss) I get an error: ValueError: Expected target size (128, 44), got torch.Size([128, 100]) The question is: how should I handle the loss calculation for many-to-many LSTM prediction, especially the sequence dimension? According to nn.CrossEntropyLoss, the dimensions must be (N, C, d1, d2, ..., dK), where N is the batch size and C the number of classes. But what is D? Is it related to the sequence length?
As a general comment, let me just say that you have asked many different questions, which makes it difficult for someone to answer. I suggest asking just one question per StackOverflow post, even if that means making several posts. I will answer just the main question that I think you are asking: "why is my code crashing and how do I fix it?" and hopefully that will clear up your other questions. Per your code, the output of your model has dimensions (128, 100, 44) = (N, D, C). Here N is the minibatch size, C is the number of classes, and D is the dimensionality of your input. The cross entropy loss you are using expects the output to have dimension (N, C, D) and the target to have dimension (N, D). To clear up the documentation that says (N, C, D1, D2, ..., Dk), remember that your input can be an arbitrary tensor of any dimensionality. In your case inputs have length 100, but nothing stops someone from making a model with, say, a 100x100 image as input. (In that case the loss would expect output to have dimension (N, C, 100, 100).) But in your case, your input is one-dimensional, so you have just a single D=100 for the length of your input. Now we see the error: outputs should be (N, C, D), but yours is (N, D, C). Your targets have the correct dimensions of (N, D). You have two paths to fix the issue. The first is to change the structure of your network so that its output is (N, C, D); this may or may not be easy or what you want in the context of your model. The second option is to transpose your axes at the time of loss computation using torch.transpose https://pytorch.org/docs/stable/generated/torch.transpose.html batch_size = 128 sequence_length = 100 number_of_classes = 44 # creates random tensor of your output shape (N, D, C) output = torch.rand(batch_size,sequence_length, number_of_classes) # transposes dimensionality to (N, C, D) transposed_output = torch.transpose(output, 1, 2) # creates tensor with random targets target = torch.randint(number_of_classes, (batch_size,sequence_length)).long() # define loss function and calculate loss criterion = nn.CrossEntropyLoss() loss = criterion(transposed_output, target) print(loss)
https://stackoverflow.com/questions/65470212/
How to add a new dimension to a PyTorch tensor?
In NumPy, I would do a = np.zeros((4, 5, 6)) a = a[:, :, np.newaxis, :] assert a.shape == (4, 5, 1, 6) How to do the same in PyTorch?
a = torch.zeros(4, 5, 6) a = a[:, :, None, :] assert a.shape == (4, 5, 1, 6)
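Equivalently, unsqueeze inserts a size-1 axis at the given position:

a = torch.zeros(4, 5, 6)
a = a.unsqueeze(2)   # same result as a[:, :, None, :]
assert a.shape == (4, 5, 1, 6)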
https://stackoverflow.com/questions/65470807/
When I import torchvision, I get an error that the cuda version of pytorch and torchvision are different
I am getting the following error when importing torchvision: Detected that PyTorch and torchvision were compiled with different CUDA versions. PyTorch has CUDA Version=11.0 and torchvision has CUDA Version=10.1. Please reinstall the torchvision that matches your PyTorch install. How can I change the CUDA version of PyTorch to 10.1? I get the same error even when I run 'conda install pytorch torchvision cudatoolkit=10.1 -c pytorch' from the Anaconda prompt. I am using Windows 10 with Python 3.7 in the virtual environment of a Jupyter notebook.
Uninstalling and then reinstalling might work: !conda uninstall pytorch torchvision followed by: !conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
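After reinstalling, a quick way to verify that the two builds agree on a CUDA version:

import torch, torchvision
print(torch.__version__, torch.version.cuda)   # the CUDA version PyTorch was built with
print(torchvision.__version__)
print(torch.cuda.is_available())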
https://stackoverflow.com/questions/65475421/
LSTM for predicting characters: cell state and hidden state in the training loop
My goal is to build a model that predicts the next character. I have built a model and here is my training loop: model = Model(input_size = 30,hidden_size = 256,output_size = len(dataset.vocab)) EPOCH = 10 criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) init_states = None for epoch in range(EPOCH): loss_overall = 0.0 for i, (inputs,targets) in enumerate(dataloader): optimizer.zero_grad() pred = model.forward(inputs) loss = criterion(pred, targets) loss.backward() optimizer.step() As you can see, I return only the predictions of the model, but not cell_state and hidden_state. So an alternative is: pred,cell_state,hidden_state = model.forward(inputs) My question is: should I do this for the character-prediction task? Why/why not? And in general: when should I return my hidden and cell state?
To understand hidden states, here's an excellent diagram by @nnnmmm from this other StackOverflow post. The hidden states are (h_n, c_n), i.e. the hidden states at the last timestep. Notice how you can't access the previous states for timesteps < t and all hidden layers. Retrieving those final hidden states would be useful if you need to access hidden states for a bigger RNN comprised of multiple hidden layers. However, usually you would just use a single nn.LSTM module and set its num_layers to the desired value. You don't need to use hidden states. If you want to read more, there is this thread from the PyTorch forum. Back to your other question, let's take this model as an example: rnn = nn.LSTM(input_size=10, hidden_size=256, num_layers=2, batch_first=True) This means an input sequence has seq_length elements of size input_size. Considering the batch on the first dimension, its shape turns out to be (batch, seq_len, input_size). out, (h, c) = rnn(x) If you are looking to build a character prediction model, I see two options. You could evaluate a loss at every timestep. Consider an input sequence x, its target y, and the RNN output out. This means for every timestep t you will compute loss(out[t], y[t]), and the total loss on this input sequence would be averaged over all timesteps. Otherwise, just consider the prediction on the last timestep and compute the loss: loss(out[-1], y), where y is the target which only contains the seq_length+1-th character of the sequence. If you're using nn.CrossEntropyLoss, both approaches will only require a single function call, as explained in your last thread.
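A sketch of the two loss options with nn.CrossEntropyLoss (the shapes are assumed, with a made-up vocabulary size):

import torch
import torch.nn as nn

vocab_size, batch, seq_len = 44, 8, 100
out = torch.randn(batch, seq_len, vocab_size)        # model predictions
y = torch.randint(vocab_size, (batch, seq_len))      # target character ids

criterion = nn.CrossEntropyLoss()
loss_every_step = criterion(out.transpose(1, 2), y)  # option 1: expects (N, C, D) vs (N, D)
loss_last_step = criterion(out[:, -1, :], y[:, -1])  # option 2: final timestep only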
https://stackoverflow.com/questions/65479097/
How to conda install CUDA enabled PyTorch in a Docker container?
I am trying to build a Docker container on a server within which a conda environment is built. All the other requirements are satisfied except for CUDA enabled PyTorch (I can get PyTorch working without CUDA however, no problem). How do I make sure PyTorch is using CUDA? This is the Dockerfile : # Use nvidia/cuda image FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 # set bash as current shell RUN chsh -s /bin/bash # install anaconda RUN apt-get update RUN apt-get install -y wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion && \ apt-get clean RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh -O ~/anaconda.sh && \ /bin/bash ~/anaconda.sh -b -p /opt/conda && \ rm ~/anaconda.sh && \ ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \ echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \ find /opt/conda/ -follow -type f -name '*.a' -delete && \ find /opt/conda/ -follow -type f -name '*.js.map' -delete && \ /opt/conda/bin/conda clean -afy # set path to conda ENV PATH /opt/conda/bin:$PATH # setup conda virtual environment COPY ./requirements.yaml /tmp/requirements.yaml RUN conda update conda \ && conda env create --name camera-seg -f /tmp/requirements.yaml \ && conda install -y -c conda-forge -n camera-seg flake8 # From the pythonspeed tutorial; Make RUN commands use the new environment SHELL ["conda", "run", "-n", "camera-seg", "/bin/bash", "-c"] # PyTorch with CUDA 10.2 RUN conda activate camera-seg && conda install pytorch torchvision cudatoolkit=10.2 -c pytorch RUN echo "conda activate camera-seg" > ~/.bashrc ENV PATH /opt/conda/envs/camera-seg/bin:$PATH This gives me the following error when I try to build this container ( docker build -t camera-seg . ): ..... Step 10/12 : RUN conda activate camera-seg && conda install pytorch torchvision cudatoolkit=10.2 -c pytorch ---> Running in e0dd3e648f7b ERROR conda.cli.main_run:execute(34): Subprocess for 'conda run ['/bin/bash', '-c', 'conda activate camera-seg && conda install pytorch torchvision cudatoolkit=10.2 -c pytorch']' command failed. (See above for error) CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init <SHELL_NAME> Currently supported shells are: - bash - fish - tcsh - xonsh - zsh - powershell See 'conda init --help' for more information and options. IMPORTANT: You may need to close and restart your shell after running 'conda init'. The command 'conda run -n camera-seg /bin/bash -c conda activate camera-seg && conda install pytorch torchvision cudatoolkit=10.2 -c pytorch' returned a non-zero code: 1 This is the requirements.yaml: name: camera-seg channels: - defaults - conda-forge dependencies: - python=3.6 - numpy - pillow - yaml - pyyaml - matplotlib - jupyter - notebook - tensorboardx - tensorboard - protobuf - tqdm When I put pytorch, torchvision and cudatoolkit=10.2 within the requirements.yaml, then PyTorch is successfully installed but it cannot recognize CUDA ( torch.cuda.is_available() returns False ). I have tried various solutions, for example, this, this and this and some different combinations of them but all to no avail. Any help is much appreciated. Thanks.
I got it working after many, many tries. Posting the answer here in case it helps anyone. Basically, I installed pytorch and torchvision through pip (from within the conda environment) and the rest of the dependencies through conda as usual. This is how the final Dockerfile looks: # Use nvidia/cuda image FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 # set bash as current shell RUN chsh -s /bin/bash SHELL ["/bin/bash", "-c"] # install anaconda RUN apt-get update RUN apt-get install -y wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion && \ apt-get clean RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh -O ~/anaconda.sh && \ /bin/bash ~/anaconda.sh -b -p /opt/conda && \ rm ~/anaconda.sh && \ ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \ echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \ find /opt/conda/ -follow -type f -name '*.a' -delete && \ find /opt/conda/ -follow -type f -name '*.js.map' -delete && \ /opt/conda/bin/conda clean -afy # set path to conda ENV PATH /opt/conda/bin:$PATH # setup conda virtual environment COPY ./requirements.yaml /tmp/requirements.yaml RUN conda update conda \ && conda env create --name camera-seg -f /tmp/requirements.yaml RUN echo "conda activate camera-seg" >> ~/.bashrc ENV PATH /opt/conda/envs/camera-seg/bin:$PATH ENV CONDA_DEFAULT_ENV camera-seg And this is how the requirements.yaml looks: name: camera-seg channels: - defaults - conda-forge dependencies: - python=3.6 - pip - numpy - pillow - yaml - pyyaml - matplotlib - jupyter - notebook - tensorboardx - tensorboard - protobuf - tqdm - pip: - torch - torchvision Then I build the container using the command docker build -t camera-seg . and PyTorch is now able to recognize CUDA.
https://stackoverflow.com/questions/65492490/
Pytorch batch row-wise application of function
I would like to figure out a way to apply a function which calculates pairwise distances, let's call it dists(A, B), row-wise for every input element in a batch, meaning: (100, 16, 3) -- the input, where 100 is the batch size (so 100 instances), 16 is, let's say, the image size, and 3 the number of filters (as for a Conv2D input); (5, 3) -- the tensor for which I want to calculate the row-wise distance (assume it's A in dists(A, B) and is fixed). Now, for every instance I am supposed to get back a matrix of shape (5, 16). Naturally, I could use a for-loop over the batch and get my final (100, 5, 16) result. However, I would love to know if there is an easier way to apply my function row-wise, in parallel, using the GPU. Thank you very much for your time.
Suppose we are using the L1 distance: import torch # data and target a = torch.randn(100, 16, 3) b = torch.randn(5, 3) # Reshape the tensors a = a.unsqueeze(1) b = b.unsqueeze(0).unsqueeze(2) print(a.shape, b.shape) # Compute distance dist = (a-b).abs().sum(3) print(dist.shape)
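As a cross-check, torch.cdist with p=1 computes the same L1 distances directly; choosing the argument order below yields the (100, 5, 16) layout:

import torch

a = torch.randn(100, 16, 3)
b = torch.randn(5, 3)

dist = (a.unsqueeze(1) - b.unsqueeze(0).unsqueeze(2)).abs().sum(3)  # (100, 5, 16)
ref = torch.cdist(b.expand(100, 5, 3), a, p=1)                      # also (100, 5, 16)
assert torch.allclose(dist, ref, atol=1e-5)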
https://stackoverflow.com/questions/65494167/
Pytorch tutorial code error: "NameError: name 'net' is not defined"
The code comes from a tutorial for PyTorch. I'm using a Google Colab notebook to run the code. import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 6, 3) self.conv2 = nn.Conv2d(6, 16, 3) self.fc1 = nn.Linear(16 * 6 * 6, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2,2)) x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, self.num_flat_features(x)) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] num_features = 1 for s in size: num_features *= s return num_features net = Net() print(net) # The code works up until here. It's the following chunk that returns an error. params = list(net.parameters()) print(len(params)) print(params[0].size()) The error is: NameError Traceback (most recent call last) <ipython-input-17-ad79a1eff4f3> in <module>() 32 print(net) 33 ---> 34 params = list(net.parameters()) 35 print(len(params)) 36 print(params[0].size()) NameError: name 'net' is not defined The tutorial says the output should be this instead: 10 torch.Size([6, 1, 3, 3]) It looks to me like net is defined, so I'm unclear why this error is happening. I'm not an expert with Python to begin with, so maybe there is something obvious I'm missing.
Your indentation implies that these lines: net = Net() print(net) are part of the Net class because they are in the same scope as the class definition. Move them outside of that class definition (ie, remove the whitespace indentation for those lines) and it should work. I'd also suggest moving to indentations with four spaces, not two, to make Python's whitespace easier to scan.
https://stackoverflow.com/questions/65494534/
Cannot export PyTorch model to ONNX
I am trying to convert a pre-trained torch model to ONNX, but receive the following error: RuntimeError: step!=1 is currently not supported I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization Here is the code I ran in Google Colab: !git clone https://github.com/richzhang/colorization.git cd colorization/ import torch import colorizers model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval() input_names = [ "input" ] output_names = [ "output" ] dummy_input = torch.randn(1, 1, 256, 256, device='cpu') torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True, input_names=input_names, output_names=output_names) I appreciate any help :) UPDATE 1: @Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX to TensorRT. I get the following error: [TensorRT] ERROR: Network must have at least one output Here is the code I used: import torch import pycuda.driver as cuda import pycuda.autoinit import tensorrt as trt import onnx TRT_LOGGER = trt.Logger() def build_engine(onnx_file_path): # initialize TensorRT engine and parse ONNX model builder = trt.Builder(TRT_LOGGER) builder.max_workspace_size = 1 << 25 builder.max_batch_size = 1 if builder.platform_has_fast_fp16: builder.fp16_mode = True network = builder.create_network() parser = trt.OnnxParser(network, TRT_LOGGER) # parse ONNX with open(onnx_file_path, 'rb') as model: print('Beginning ONNX file parsing') parser.parse(model.read()) print('Completed parsing of ONNX file') # generate TensorRT engine optimized for the target platform print('Building an engine...') engine = builder.build_cuda_engine(network) context = engine.create_execution_context() print("Completed creating Engine") return engine, context ONNX_FILE_PATH = 'siggraph17.onnx' # Exported using the code above engine,_ = build_engine(ONNX_FILE_PATH) I tried to force the build_engine function to use the output of the network with: network.mark_output(network.get_layer(network.num_layers-1).get_output(0)) but it did not work. I appreciate any help!
As I mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model: self.model2(conv1_2[:,:,::2,::2]) Your only option as of now is to rewrite the slicing as other ops. You can do it by using range and reshape to obtain the proper indices. Consider the following "step-less-arange" function (I hope it is generic enough for anyone with a similar problem): def sla(x, step): diff = x % step x += (diff > 0)*(step - diff) # add length to be able to reshape properly return torch.arange(x).reshape((-1, step))[:, 0] Usage: >> sla(11, 3) tensor([0, 3, 6, 9]) Now you can replace every slice like this: conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)]) NOTE: you should optimize this. The indices are calculated on every call, so it might be wise to pre-compute them. I have tested it with my fork of the repo and I was able to save the model: https://github.com/prokotg/colorization
https://stackoverflow.com/questions/65496349/
How to dump confusion matrix using TensorBoard logger in pytorch-lightning?
The official doc only states >>> from pytorch_lightning.metrics import ConfusionMatrix >>> target = torch.tensor([1, 1, 0, 0]) >>> preds = torch.tensor([0, 1, 0, 0]) >>> confmat = ConfusionMatrix(num_classes=2) >>> confmat(preds, target) This doesn't show how to use the metric with the framework. My attempt (methods are not complete and only show relevant parts): def __init__(...): self.val_confusion = pl.metrics.classification.ConfusionMatrix(num_classes=self._config.n_clusters) def validation_step(self, batch, batch_index): ... log_probs = self.forward(orig_batch) loss = self._criterion(log_probs, label_batch) self.val_confusion.update(log_probs, label_batch) self.log('validation_confusion_step', self.val_confusion, on_step=True, on_epoch=False) def validation_step_end(self, outputs): return outputs def validation_epoch_end(self, outs): self.log('validation_confusion_epoch', self.val_confusion.compute()) After the 0th epoch, this gives Traceback (most recent call last): File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 521, in train self.train_loop.run_training_epoch() File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 588, in run_training_epoch self.trainer.run_evaluation(test_mode=False) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 613, in run_evaluation self.evaluation_loop.log_evaluation_step_metrics(output, batch_idx) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\evaluation_loop.py", line 346, in log_evaluation_step_metrics self.__log_result_step_metrics(step_log_metrics, step_pbar_metrics, batch_idx) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\evaluation_loop.py", line 350, in __log_result_step_metrics cached_batch_pbar_metrics, cached_batch_log_metrics = cached_results.update_logger_connector() File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 378, in update_logger_connector batch_log_metrics = self.get_latest_batch_log_metrics() File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 418, in get_latest_batch_log_metrics batch_log_metrics = self.run_batch_from_func_name("get_batch_log_metrics") File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 414, in run_batch_from_func_name results = [func(include_forked_originals=False) for func in results] File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 414, in <listcomp> results = [func(include_forked_originals=False) for func in results] File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 122, in get_batch_log_metrics return self.run_latest_batch_metrics_with_func_name("get_batch_log_metrics", *args, **kwargs) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 115, in run_latest_batch_metrics_with_func_name for dl_idx in range(self.num_dataloaders) File 
"C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 115, in <listcomp> for dl_idx in range(self.num_dataloaders) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\epoch_result_store.py", line 100, in get_latest_from_func_name results.update(func(*args, add_dataloader_idx=add_dataloader_idx, **kwargs)) File "C:\code\EPMD\Kodex\Templates\Testing\venv\lib\site-packages\pytorch_lightning\core\step_result.py", line 298, in get_batch_log_metrics result[dl_key] = self[k]._forward_cache.detach() AttributeError: 'NoneType' object has no attribute 'detach' It does pass the sanity validation check before training. The failure happens on the return in validation_step_end. Makes little sense to me. The exact same method of using mertics works fine with accuracy. How to get a correct confusion matrix?
Updated answer, August 2022 class IntHandler: def legend_artist(self, legend, orig_handle, fontsize, handlebox): x0, y0 = handlebox.xdescent, handlebox.ydescent text = plt.matplotlib.text.Text(x0, y0, str(orig_handle)) handlebox.add_artist(text) return text class LightningClassifier(LightningModule): ... def _common_step(self, batch, batch_nb, stage: str): assert stage in ("train", "val", "test") logger = self._logger augmented_image, labels = batch outputs, aux_outputs = self(augmented_image) loss = self._criterion(outputs, labels) return outputs, labels, loss def validation_step(self, batch, batch_nb): stage = "val" outputs, labels, loss = self._common_step(batch, batch_nb, stage=stage) self._common_log(loss, stage=stage) return {"loss": loss, "outputs": outputs, "labels": labels} def validation_epoch_end(self, outs): # see https://github.com/Lightning-AI/metrics/blob/ff61c482e5157b43e647565fa0020a4ead6e9d61/docs/source/pages/lightning.rst - logging the metric object from validation_step updates it at each forward pass, thus leading to wrong accumulation. In practice do the following: tb = self.logger.experiment # noqa outputs = torch.cat([tmp['outputs'] for tmp in outs]) labels = torch.cat([tmp['labels'] for tmp in outs]) confusion = torchmetrics.ConfusionMatrix(num_classes=self.n_labels).to(outputs.get_device()) confusion(outputs, labels) computed_confusion = confusion.compute().detach().cpu().numpy().astype(int) # confusion matrix df_cm = pd.DataFrame( computed_confusion, index=self._label_ind_by_names.values(), columns=self._label_ind_by_names.values(), ) fig, ax = plt.subplots(figsize=(10, 5)) fig.subplots_adjust(left=0.05, right=.65) sn.set(font_scale=1.2) sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='d', ax=ax) ax.legend( self._label_ind_by_names.values(), self._label_ind_by_names.keys(), handler_map={int: IntHandler()}, loc='upper left', bbox_to_anchor=(1.2, 1) ) buf = io.BytesIO() plt.savefig(buf, format='jpeg', bbox_inches='tight') buf.seek(0) im = Image.open(buf) im = torchvision.transforms.ToTensor()(im) tb.add_image("val_confusion_matrix", im, global_step=self.current_epoch) The logged output is a confusion-matrix heatmap image in TensorBoard, under "val_confusion_matrix".
https://stackoverflow.com/questions/65498782/
Error in getting accuracy using test set with PyTorch
I am trying to find the accuracy of my model that I created with PyTorch, but I get an error. Originally I had a different error, which is fixed, but now I get this error. I use this to get my test set: testset = torchvision.datasets.FashionMNIST(MNIST_DIR, train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1), torchvision.transforms.ToTensor(), # image to Tensor torchvision.transforms.Normalize((0.1307,), (0.3081,)) # image, label ])) testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False) When I try to access the test set I created, it tries to retrain the model for some reason, then proceeds to error out. This is the code that gets the accuracy and calls the test set: correct = 0 total = 0 with torch.no_grad(): print("entered here") for (x, y_gt) in testloader: x = x.to(device) y_gt = y_gt.to(device) outputs = teacher_model(x) _, predicted = torch.max(outputs.data, 1) total += y_gt.size(0) correct += (predicted == y_gt).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total)) This is the error I am getting: Traceback (most recent call last): File "[path]/train_teacher_1.py", line 134, in <module> outputs = teacher_model(x) File "[path]\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "[path]\models.py", line 17, in forward x = F.relu(self.layer1(x)) File "[path]\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "[path]\anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "[path]\anaconda3\lib\site-packages\torch\nn\functional.py", line 1692, in linear output = input.matmul(weight.t()) RuntimeError: mat1 dim 1 must match mat2 dim 0 Please let me know if you would like the rest of the code for training the model. I left it out because the post got too long. I am new to PyTorch and any help is appreciated. Thanks in advance.
I was able to figure out the issue: I needed to check the size of x. The model's first layer is fully connected (see self.layer1 in the traceback), so each image batch has to be flattened before the forward pass. I added this to the for loop to fix it: x = torch.flatten(x, start_dim=1, end_dim=-1)
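For context, a minimal sketch of where the flatten call goes in the evaluation loop (the model, loader, and device names are the ones from the question; 28x28 is FashionMNIST's default image size):

correct = 0
total = 0
with torch.no_grad():
    for (x, y_gt) in testloader:
        x = x.to(device)
        y_gt = y_gt.to(device)
        # batches arrive as (N, 1, 28, 28); a Linear first layer expects (N, features)
        x = torch.flatten(x, start_dim=1, end_dim=-1)  # -> (N, 784)
        outputs = teacher_model(x)
        _, predicted = torch.max(outputs.data, 1)
        total += y_gt.size(0)
        correct += (predicted == y_gt).sum().item()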
https://stackoverflow.com/questions/65499901/
Confused Regarding PyTorch GRU Docs
This may be too basic of a question, but what do the docs mean by the input to the GRU needs to be 3 dimensional? The GRU docs for PyTorch state: input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details. https://pytorch.org/docs/stable/generated/torch.nn.GRU.html Let us say I am trying to predict the next # in a sequence and have the following dataset: n, label 1, 2 2, 3 3, 6 4, 10 ... If I window the data using the prior 2 inputs for consideration when guessing the next, the dataset becomes: t-2, t-1, t, label na, na, 1, 2 na, 1, 2, 3 1, 2, 3, 6 2, 3, 4, 10 ... where t-x just represents using an input value from a prior time step. So, when creating a sequential loader, it should create the following tensor for the line 1,2,3,6: inputs: tensor([[1,2,3]]) #shape(1,3) labels: tensor([[6]]) #shape(1,1) I currently understand the input shape as (# batches, # features per batch) and the output shape as (# batches, # output features per batch) My question is, should that input tensor actually look like: tensor([[[1],[2],[3]]]) Which represents (# batches, # prior inputs to consider, # features per input) I guess what I am really trying to understand is why the input to a GRU has 3 dimensions in PyTorch. What does that 3rd dimension fundamentally represent? And if I have a transformed dataset like above, how do I properly pass it to the model? Edit: So the pattern present is: 1 + 1 = 2 2 + 1 = 3 3 + 2 + 1 = 6 4 + 3 + 2 + 1 = 10 I want it where t-2, t-1, and t represent the features at each time step used to help guess. For example, at every point in time there could be 2 features. The dimensions would be (1 batch size, 3 timesteps, 2 features). My question is whether the GRU takes a flattened input: (1 batch size, 3 time steps * 2 features per time step) or the unflattened input: (1 batch size, 3 time steps, 2 features per timestep) I am currently under the impression that it is the 2nd input, but would like to check my understanding.
I figured it out. Essentially, the sequence length of 3 means that the input to the system needs to be: [[[1],[2],[3]], [[2], [3], [4]]] for a batch size of 2, sequence length of 3, and feature input per time step of 1. Essentially each sequence is an input at some time t to consider.
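As a quick sketch of how those dimensions line up with nn.GRU (the hidden size here is made up; batch_first=True puts the batch dimension first):

import torch
import torch.nn as nn

gru = nn.GRU(input_size=1, hidden_size=16, batch_first=True)

# (batch, seq_len, features_per_step) = (2, 3, 1)
x = torch.tensor([[[1.], [2.], [3.]],
                  [[2.], [3.], [4.]]])

out, h = gru(x)
print(out.shape)  # torch.Size([2, 3, 16]) - one hidden vector per time step
print(h.shape)    # torch.Size([1, 2, 16]) - final hidden state per layer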
https://stackoverflow.com/questions/65505820/
Pytorch, standard layer to convert sequential output to binary?
I am working on a new Pytorch model which takes sequential data as input and I need to output just a single value, which I will then use a binary cross-entropy function to evaluate as a probability of 1 or 0. To be more concrete, let's say my sequence is 1000 time steps and only 2 dimensions, like a 2-dimensional sine wave, so the data shape would be 1000 x 2. I have done something like this before using an RNN, for which there is a lot of content online. Because of the recurrent structure of the RNN, in order to do this we just look at the final output of the RNN after processing the sequence. In this way the final step output would be 2 dimensions, then we can apply a linear layer to convert 2 -> 1 dimension, et voila, it's done. MY PROBLEM: What I am attempting to do now is not using a recurrent network, but instead an encoder with attention (Transformer). So the output of the encoder is now still 1000 steps long and whatever my embedded dimension is, like say 8. So the output of the sequential encoder is shape 1000 x 8. So my issue is that I need to convert this output to a single value, to which I can apply the binary cross-entropy function. I am not finding an obvious way to do this. IDEAS: Traditionally with this kind of sequential model, the encoder feeds into a decoder and the decoder can then output a variable length sequence (this is used for language translation problems). My problem is different in that I don't want to output another sequence but just a single value. Maybe I need to convert the decoder in such a way that this works? The decoder usually takes a target value as well as the output from the encoder as input. The output from the decoder then has the same shape as this target value. An idea would be to use the traditional decoder and give a length-1 target; I would then get a length-1 output and I could use a traditional linear layer to convert this to my desired output. However this doesn't seem entirely logical because I really am not interested in outputting a sequence but just 1 value. Anyway, just looking for some more ideas from the community, if you have any. Thanks!
I think this paper does what you wanted :) (Probably not the first paper that does this, but it is the one that I recently read.) Prepend an extra token to your sequence. The token can have a learnable embedding. After the transformer, discard (or don't compute) the output at the other positions. We only take the output from the first position, and transform it to the target that you need. (The paper includes a figure illustrating this setup.)
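A minimal sketch of that idea, with sizes mirroring the question (sequences of length 1000, embedded dimension 8); all class and variable names are made up, and batch_first on TransformerEncoderLayer assumes a reasonably recent PyTorch (1.9+):

import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, d_model=8, nhead=2, num_layers=2):
        super().__init__()
        # learnable "summary" token prepended to every sequence
        self.cls_token = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)   # single logit for BCEWithLogitsLoss

    def forward(self, x):                   # x: (batch, seq_len, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)      # (batch, 1 + seq_len, d_model)
        x = self.encoder(x)
        return self.head(x[:, 0])           # keep only the prepended token's output

model = SequenceClassifier()
logit = model(torch.randn(4, 1000, 8))      # -> shape (4, 1)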
https://stackoverflow.com/questions/65507784/
Pytorch normalize 2D tensor
For more robustness of my model I want to normalize my feature tensor. I tried doing it the way that is, to the best of my knowledge, standard for pictures: class Dataset(torch.utils.data.Dataset): 'Characterizes a dataset for PyTorch' def __init__(self, input_tensor, transform=transforms.Normalize(mean=0.5, std=0.5)): self.labels = input_tensor[:,:,-1] self.features = input_tensor[:,:,:-1] self.transform = transform def __len__(self): return self.labels.shape[0] def __getitem__(self, index): # Load data and get label X = self.features[index] y = self.labels[index] if self.transform: X = self.transform(X) return X, y But I receive this error message: ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 25]). Everywhere I looked people suggest that one should use .view to generate the third dimension in order to comply with the standard shape of pictures, but this seems very odd to me. Is there maybe a cleaner way to do this? Also, where should I best place the normalization? Just for the batch or for the entire train dataset?
You are asking two different questions, I will try to answer both. Indeed, you should first reshape to (c, h, w) where c is the channel dimension. In most cases, you will need that extra dimension because most 'image' layers are built to receive 3-dimensional tensors - not counting the batch dimension - such as nn.Conv2d, BatchNorm2d, etc... I don't believe there's any way around it, and skipping it would restrict you to single-channel image datasets. You can reshape to the desired shape with torch.reshape or Tensor.view: X = X.reshape(1, *X.shape) Or by adding an additional dimension using torch.unsqueeze: X.unsqueeze(0) About normalization. Batch-normalization and dataset-normalization are two different approaches. The former is a technique that can achieve improved performance in convolution networks. This kind of operation can be implemented using a nn.BatchNorm2d layer and is done using learnable parameters: a scale factor (~ std) and a bias (~ mean). This type of normalization is applied when the model is called and is applied per-batch. The latter is a pre-processing technique which allows making different features have the same scale. This normalization can be applied inside the dataset per-element. It requires you to measure the mean and standard deviation of your training set.
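To illustrate the dataset-normalization route, a sketch with made-up data shaped like the question's (8, 25) features, computing the statistics once over the training split:

import torch
from torchvision import transforms

train_features = torch.rand(100, 8, 25)          # hypothetical (N, H, W) training features
mean, std = train_features.mean(), train_features.std()

normalize = transforms.Normalize(mean=[mean.item()], std=[std.item()])

x = train_features[0].unsqueeze(0)               # add the channel dim -> (1, 8, 25)
x = normalize(x)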
https://stackoverflow.com/questions/65508577/
Pytorch Resnet CNN only works when test data contains all classes
Hoping to get a hand with a weird CNN training issue. I am training a Resnet classifier to predict 4 classes of image from a ~10k image dataset. The code is pretty simple. Here's the Resnet/CNN setup part: #################################### ########### LOAD RESNET ############ #################################### device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = models.resnet50(pretrained=True) # for param in model.parameters(): param.requires_grad = False # model.fc = nn.Sequential(nn.Linear(2048, 512), nn.ReLU(), #nn.Dropout(0.2), nn.Linear(512, 10), nn.LogSoftmax(dim=1)) # criterion = nn.NLLLoss() # optimizer = optim.Adam(model.fc.parameters(), lr=0.003) # move model to gpu model.to(device) And here is the training stage (it batches the data in 500 images and shuffles the test datasets) and some accuracy results after some epochs: trainloader, testloader, n_batches = make_trainloader(all_data, vals, batch_size=500, randomize=True) ... for inputs, labels in trainloader: ... inputs, labels = inputs.to(device), labels.to(device) ... # PREDICT; outputs = model(inputs) ... epoch #: 12 Loss: 0.1689 Acc: 0.9400 labels: tensor([0, 0, 1, 0, 3, 0, 0, 2, 1, 2], device='cuda:0') predictions: tensor([0, 0, 1, 0, 3, 0, 0, 2, 1, 2], device='cuda:0') So the weird thing is that I can't seem to predict well on single images but only on large batches of data with mixed classes. For example, if I provide 500 images from class 1, the prediction is random, but if I provide 500 images mixed from the 4 classes (much like during training), the prediction is great (just like during training). It seems that I'm confused about how to use the ResNet classifier on single images even though it does seem to learn to predict the individual labels of the input data (see labels and prediction output above). Or that my classifier isn't learning single images, but groups of images, not sure. Any help or direction is appreciated (I can provide more code, but didn't want to make too long of a message). Here's the prediction code: # Predict randomize = False # load data from above inputs = test_data[:2000] vals_inputs = test_vals[:2000] print ("test data size: ", vals_inputs.shape) trainloader, testloader, n_batches = make_trainloader(inputs, vals_inputs, batch_size=500, randomize=randomize) for inputs, labels in trainloader: # load to device inputs, labels = inputs.to(device), labels.to(device) # PREDICT; outputs = model(inputs) _, preds = torch.max(outputs, 1) print ("prediction: ", preds[:10]) print ("labels: ", labels[:10]) ... test data size: torch.Size([2000]) prediction: tensor([1, 1, 2, 1, 2, 3, 2, 3, 2, 3], device='cuda:0') labels: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0') 0 Loss: 3.2936 Acc: 0.1420 prediction: tensor([1, 3, 3, 3, 3, 1, 2, 1, 1, 2], device='cuda:0') labels: tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cuda:0') 0 Loss: 2.1462 Acc: 0.2780 prediction: tensor([3, 3, 1, 2, 0, 1, 2, 1, 3, 2], device='cuda:0') labels: tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2], device='cuda:0') 0 Loss: 2.1975 Acc: 0.2560 Versus when I simply shuffle the data the accuracy is very high: # Predict randomize = True ... 
test data size: torch.Size([2000]) prediction: tensor([0, 0, 3, 2, 0, 2, 0, 3, 0, 2], device='cuda:0') labels: tensor([0, 0, 3, 2, 0, 2, 0, 3, 0, 2], device='cuda:0') 0 Loss: 0.1500 Acc: 0.9580 prediction: tensor([0, 3, 3, 3, 0, 0, 3, 2, 3, 3], device='cuda:0') labels: tensor([0, 2, 3, 0, 0, 0, 3, 2, 0, 3], device='cuda:0') 0 Loss: 0.1714 Acc: 0.9340 prediction: tensor([3, 3, 2, 2, 3, 1, 3, 0, 2, 2], device='cuda:0') labels: tensor([3, 3, 2, 2, 3, 1, 3, 0, 2, 2], device='cuda:0') 0 Loss: 0.1655 Acc: 0.9400
You need to call model.eval() before testing. (And change it back by calling model.train() before training.) In training mode, BatchNorm will normalize your features by the batch's mean and variance. And you can expect that a batch with all classes being 1 will have very different statistics than a batch with a mixture of classes.
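A sketch of where the mode switches go, reusing the names from the question:

model.eval()                      # BatchNorm now uses its running statistics
with torch.no_grad():
    for inputs, labels in trainloader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)

model.train()                     # switch back before the next training epoch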
https://stackoverflow.com/questions/65511054/
How to find intersection of two sets of 2D tensors (points on a 2D plane) in Pytorch
I have two lists of Pytorch 2D tensors, which are points on a plane: ListA = tensor([ [1.0,2.0], [1.0,3.0], [4.0,8.0] ], device='cuda:0') ListB = tensor([ [5.0,7.0], [1.0,2.0], [4.0,8.0] ], device='cuda:0') How do I compute the following? Desired output = tensor([ [1.0,2.0] , [4.0,8.0] ], device='cuda:0') I would like to find the intersection between the two lists ListA and ListB. Note: computation should be carried out only on CUDA.
There is no direct way in PyTorch to accomplish this (i.e., through a function). However, a workaround can be the following. Flattening both tensors: combined = torch.cat((ListA.view(-1), ListB.view(-1))) combined Out[52]: tensor([1., 2., 1., 3., 4., 8., 5., 7., 1., 2., 4., 8.], device='cuda:0') Finding unique elements: unique, counts = combined.unique(return_counts=True) intersection = unique[counts > 1].reshape(-1, ListA.shape[1]) intersection Out[55]: tensor([[1., 2.], [4., 8.]], device='cuda:0') (A caveat: this matches individual scalar values after flattening, so it assumes values don't repeat in ways that create false pairs; it works for inputs like the example above.) Benchmarks: def find_intersection_two_tensors(A: tensor, B: tensor): combined = torch.cat((A.view(-1), B.view(-1))) unique, counts = combined.unique(return_counts=True) return unique[counts > 1].reshape(-1, A.shape[1]) Timing it %timeit find_intersection_two_tensors(ListA, ListB) 207 µs ± 2.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) If you are ok with moving to CPU, numpy could be a better solution with regard to performance: def find_intersection_two_ndarray(AGPU: tensor, BGPU: tensor): A = AGPU.view(-1).cpu().numpy() B = BGPU.view(-1).cpu().numpy() C = np.intersect1d(A, B) return torch.from_numpy(C).cuda('cuda:0') Timing it %timeit find_intersection_two_ndarray(ListA, ListB) 85.4 µs ± 1.57 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
https://stackoverflow.com/questions/65515708/
Can you use a different image size during transfer learning?
I have made a switch from TensorFlow to PyTorch recently. I use a famous Github repo for training on EfficientNets. I wrote the model initiation class as follows: class CustomEfficientNet(nn.Module): def __init__(self, config: type, pretrained: bool=True): super().__init__() self.config = config self.model = geffnet.create_model( model_name='EfficientNetB5', pretrained=pretrained) n_features = self.model.classifier.in_features self.model.classifier = nn.Linear(n_features, 5) # 5 classes def forward(self, input_neurons): output_predictions = self.model(input_neurons) return output_predictions In addition, in my transforms, I tend to use Resize((img_size, img_size)) with img_size = 512 for my training on certain image classification tasks (mostly Kaggle competitions). So the question here is: the official input size for EfficientNetB5 is 456x456, but I used 512x512 or even 256x256 and get very decent results. Is this normal? Or did I miss something in the source code where the author will resize into the native resolution for you? PS: This seems to be the norm in all the PyTorch tutorials I saw on Kaggle. My full code can be seen in this notebook here; I like to not leave logic gaps and therefore this question popped up.
Yes, you can use different input sizes when it comes to transfer learning; after all, the model that you load is just a set of weights for a fixed sequence of layers with fixed convolution kernel sizes. But I believe that there is some sort of minimum size that the model needs to work efficiently. You would still need to re-train (fine-tune) the model, but it will converge quite quickly. You would have to check the official implementation for the minimum input size of the model, like in VGG16, where they specify that the width and height need to be at least 32.
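One reason this often works in practice, sketched below: many modern classification backbones (EfficientNet included) end in a global average pool, so the classifier sees a fixed-length vector regardless of the spatial input size. A quick check, assuming torchvision is available:

import torch
from torchvision import models

model = models.resnet18(pretrained=True).eval()

with torch.no_grad():
    for size in (224, 256, 512):
        out = model(torch.randn(1, 3, size, size))
        print(size, out.shape)    # every size yields torch.Size([1, 1000])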
https://stackoverflow.com/questions/65516526/
Unable to make predictions due to incompatible matrix shape/size
I am trying to make a model to predict insurance cost based on the individual. And this is the code for it. import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader, TensorDataset import numpy as np import pandas as pd from LSR import ListSearchReplace as LSR csv = pd.read_csv("main.csv") partialInputs = csv[["age", "bmi", "children"]] smoker, sex = list(csv["smoker"]), list(csv["sex"]) L1 = LSR(smoker) L1.replace("yes", 1, True) L1.replace("no", 0, True) L2 = LSR(sex) L2.replace("female", 1, True) L2.replace("male", 0, True) pdReadySmoker = pd.DataFrame({"smoker": smoker}) pdReadySex = pd.DataFrame({"sex": sex}) SmokerAndSex = pd.merge(pdReadySmoker, pdReadySex, how="outer", left_index=True, right_index=True) INPUTS = pd.merge(partialInputs, SmokerAndSex, how="outer", left_index=True, right_index=True) TARGETS = csv["charges"] INPUTS = torch.from_numpy(np.array(INPUTS, dtype='float32')) TARGETS = torch.from_numpy(np.array(TARGETS, dtype='float32')) print(INPUTS.shape, TARGETS.shape) loss_fn = F.mse_loss model = nn.Linear(5, 3) # <-- changing this, changes the error message. opt = torch.optim.SGD(model.parameters(), lr=1e-5) trainDataset = TensorDataset(INPUTS, TARGETS) BATCH_SIZE = 5 trainDataloader = DataLoader(trainDataset, BATCH_SIZE, shuffle=True) def fit(numEpochs, model, loss_fn, opt, trainDataloader): for epoch in range(numEpochs): for inputBatch, targetBatch in trainDataloader: preds = model(inputBatch) loss = loss_fn(preds, targetBatch) loss.backward() opt.step() opt.zero_grad() e = epoch + 1 if e % 10 == 0: print(f"Epoch: {e/numEpochs}, loss: {loss.item():.4f}") fit(100, model, loss_fn, opt, trainDataloader) <-- error Error produced: <ipython-input-7-b7028a3d94fd>:5: UserWarning: Using a target size (torch.Size([5])) that is different to the input size (torch.Size([5, 3])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
loss = loss_fn(preds, targetBatch) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-d8f5bcdc847d> in <module> ----> 1 fit(100, model, loss_fn, opt, trainDataloader) <ipython-input-7-b7028a3d94fd> in fit(numEpochs, model, loss_fn, opt, trainDataloader) 3 for inputBatch, targetBatch in trainDataloader: 4 preds = model(inputBatch) ----> 5 loss = loss_fn(preds, targetBatch) 6 loss.backward() 7 D:\coding\machine-learning\env-ml\lib\site-packages\torch\nn\functional.py in mse_loss(input, target, size_average, reduce, reduction) 2657 reduction = _Reduction.legacy_get_string(size_average, reduce) 2658 -> 2659 expanded_input, expanded_target = torch.broadcast_tensors(input, target) 2660 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) 2661 D:\coding\machine-learning\env-ml\lib\site-packages\torch\functional.py in broadcast_tensors(*tensors) 69 if any(type(t) is not Tensor for t in tensors) and has_torch_function(tensors): 70 return handle_torch_function(broadcast_tensors, tensors, *tensors) ---> 71 return _VF.broadcast_tensors(tensors) # type: ignore 72 73 RuntimeError: The size of tensor a (3) must match the size of tensor b (5) at non-singleton dimension 1 I've tried changing the dimensions of the model, and these are a few of the changes made and the associated errors: model = nn.Linear(5, 1338) Error: RuntimeError: The size of tensor a (1338) must match the size of tensor b (5) at non-singleton dimension 1 model = nn.Linear(1338, 1338) Error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (5x5 and 1338x1338) Sometimes this error will make me change the matrix to the correct shape, but that results in the previous error regarding the non-singleton dimension.
This should be quite straight-forward: you only have a single layer, so it is a matter of sorting the shapes right. You are feeding a nn.Linear layer an input with shape input_shape. This type of layer takes two arguments: in_features, the number of features in the input vector, and out_features, the number of features in the resulting vector. Since you are using the F.mse_loss, your target vector needs to have the same shape as your prediction. Bear in mind the first dimension is the batch dimension. In summary, your input tensor has shape (batch, input_size), your dense layer is defined as nn.Linear(input_size, out_size) and your target tensor has shape (batch, output_size). Coming back to your case, your TARGETS tensor is of shape (1338) so you either mean to: have a single prediction with 1338 components, which would match a nn.Linear(?, 1338) and would actually correspond to (1, 1338) (a single element in the batch). This can be fixed with TARGETS = TARGETS.unsqueeze(0). or, there are actually 1338 predictions of one element each, which would match a nn.Linear(?, 1) and the appropriate target shape would be (1338, 1). This can be fixed with TARGETS = TARGETS.unsqueeze(-1) (adds an additional axis after the last dimension).
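Putting that together for this dataset (5 input features - age, bmi, children, smoker, sex - and one predicted charge per row), a sketch of the second option with stand-in data:

import torch
import torch.nn as nn

model = nn.Linear(5, 1)                # 5 features in, 1 predicted charge out

TARGETS = torch.randn(1338)            # stand-in for the real charges column
TARGETS = TARGETS.unsqueeze(-1)        # (1338,) -> (1338, 1), matching the model output

batch = torch.randn(5, 5)              # one mini-batch of 5 rows
print(model(batch).shape)              # torch.Size([5, 1])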
https://stackoverflow.com/questions/65517495/
Create a knn adjacent matrix in Pytorch
In Pytorch, say I have a top-k indexing matrix P(B,N,k), a weight matrix W(B,N,N) and a target matrix A(B,N,N). I want to get an adjacency matrix that is filled as in the following loops: for i in range(B): for ii in range(N): for j in range(k): if weighted: A[i][ii][P[i][ii][j]] = W[i][ii][P[i][ii][j]] else: A[i][ii][P[i][ii][j]] = 1 How can I implement this more efficiently and concisely in Pytorch?
I think you are looking for torch.scatter_. Note that scatter_ reads src at position [i][ii][j] rather than at the scattered index, so for the weighted version the weights first have to be gathered at P: A.scatter_(dim=2, index=P, src=W.gather(dim=2, index=P)) # for the weighted version A.scatter_(dim=2, index=P, src=torch.ones_like(W)) # for the un-weighted version
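A quick sanity check of the scatter call against the loop from the question, with small random shapes:

import torch

B, N, k = 2, 4, 3
W = torch.rand(B, N, N)
P = torch.randint(0, N, (B, N, k))

A_loop = torch.zeros(B, N, N)          # loop version
for i in range(B):
    for ii in range(N):
        for j in range(k):
            A_loop[i][ii][P[i][ii][j]] = W[i][ii][P[i][ii][j]]

A_vec = torch.zeros(B, N, N)           # vectorized version
A_vec.scatter_(dim=2, index=P, src=W.gather(dim=2, index=P))

print(torch.allclose(A_loop, A_vec))   # True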
https://stackoverflow.com/questions/65517572/
Pytorch Loss Function for making embeddings similar
I am working on an embedding model, where there is a BERT model, which takes in text inputs and outputs a multidimensional vector. The goal of the model is to find similar embeddings (high cosine similarity) for texts which are similar and different embeddings (low cosine similarity) for texts that are dissimilar. When training in mini-batch mode, the BERT model gives an N*D dimensional output where N is the batch size and D is the output dimension of the BERT model. Also, I have a target matrix of dimension N*N, which contains 1 in the [i, j]-th position if sentence[i] and sentence[j] are similar in sense and -1 if not. What I want to do is find the loss/error for the entire batch by finding the cosine similarity of all embeddings in the BERT output and comparing it to the target matrix. What I did was simply multiply the tensor with its transpose and then take the elementwise sigmoid. scores = torch.matmul(document_embedding, torch.transpose(document_embedding, 0, 1)) scores = torch.sigmoid(scores) loss = self.bceloss(scores, targets) But this does not seem to work. Is there any other way to do this? P.S. What I want to do is similar to the method described in this paper.
To calculate the cosine similarity between two vectors you would have used nn.CosineSimilarity. However, I don't think this allows you to get the pair-similarity from a set of n vectors. Fortunately enough, you can implement it yourself with some tensor manipulation. Let us call x your document_embedding of shape (n, d) where d is the embedding size. We'll take n=3 and d=5. So x is made up of [x1, x2, x3].T. >>> x = torch.rand(n, d) tensor([[0.8620, 0.9322, 0.4220, 0.0280, 0.3789], [0.2747, 0.4047, 0.6418, 0.7147, 0.3409], [0.6573, 0.3432, 0.5663, 0.2512, 0.0582]]) The x@x.T matrix multiplication will give you the pairwise dot product: which contains: ||x1||², ⟨x1, x2⟩, ⟨x1, x3⟩, ⟨x2, x1⟩, ||x2||², etc... >>> sim = x@x.T tensor([[1.9343, 1.0340, 1.1545], [1.0340, 1.2782, 0.8822], [1.1545, 0.8822, 0.9370]]) To normalize take the vector of all norms: ||x1||, ||x2||, and ||x3||: >>> norm = x.norm(dim=1) tensor([1.3908, 1.1306, 0.9680]) Construct the matrix containing the normalization factors: ||x1||², ||x1||.||x2||, ||x1||.||x3||, ||x2||.||x1||, ||x2||², etc... >>> factor = norm*norm.unsqueeze(1) tensor([[1.9343, 1.5724, 1.3462], [1.5724, 1.2782, 1.0944], [1.3462, 1.0944, 0.9370]]) Then normalize: >>> sim /= factor tensor([[1.0000, 0.6576, 0.8576], [0.6576, 1.0000, 0.8062], [0.8576, 0.8062, 1.0000]]) Alternatively, a quicker way which avoids having to create the norm matrix, is to normalize before multiplying: >>> x /= x.norm(dim=1, keepdim=True) >>> sim = x@x.T tensor([[1.0000, 0.6576, 0.8576], [0.6576, 1.0000, 0.8062], [0.8576, 0.8062, 1.0000]]) For the loss function I would apply nn.CrossEntropyLoss straight away between the predicted similarity matrix and the target matrix, instead of applying sigmoid + BCE. Note: nn.CrossEntropyLoss includes nn.LogSoftmax.
https://stackoverflow.com/questions/65521840/
Why am I getting a low error before I did any optimization?
I am using a model training program I have built for a toy example and trying to use it on another example. The only difference is this model was used for regression, hence I was using MSE as the error criterion, and now it is used for binary classification, hence I am using BCEWithLogitsLoss. The model is very simple: class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc1 = nn.Sequential( nn.Linear(input_size, 8*input_size), nn.PReLU() #parametric relu - same as leaky relu except the slope is learned ) self.fc2 = nn.Sequential( nn.Linear(8*input_size, 80*input_size), nn.PReLU() ) self.fc3 = nn.Sequential( nn.Linear(80*input_size, 32*input_size), nn.PReLU() ) self.fc4 = nn.Sequential( nn.Linear(32*input_size, 4*input_size), nn.PReLU() ) self.fc = nn.Sequential( nn.Linear(4*input_size, output_size), nn.PReLU() ) def forward(self, x, dropout=dropout, batchnorm=batchnorm): x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) x = self.fc4(x) x = self.fc(x) return x And this is where I run it: model = Model(input_size, output_size) if (loss == 'MSE'): criterion = nn.MSELoss() if (loss == 'BCELoss'): criterion = nn.BCEWithLogitsLoss() optimizer = torch.optim.SGD(model.parameters(), lr = lr) model.train() for epoch in range(num_epochs): # Forward pass and loss train_predictions = model(train_features) print(train_predictions) print(train_targets) loss = criterion(train_predictions, train_targets) # Backward pass and update loss.backward() optimizer.step() # zero grad before new step optimizer.zero_grad() train_size = len(train_features) train_loss = criterion(train_predictions, train_targets).item() pred = train_predictions.max(1, keepdim=True)[1] correct = pred.eq(train_targets.view_as(pred)).sum().item() #train_loss /= train_size accuracy = correct / train_size print('\nTrain set: Loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( train_loss, correct, train_size, 100. * accuracy)) However, when I print the loss, for some reason the loss already starts very low (around 0.6) before I have done any backward pass! It remains this low all subsequent epochs. The prediction vector, however, looks like random garbage... tensor([[-0.0447], [-0.0640], [-0.0564], ..., [-0.0924], [-0.0113], [-0.0774]], grad_fn=<PreluBackward>) tensor([[0.], [0.], [0.], ..., [0.], [0.], [1.]]) epoch: 1, loss = 0.6842 I have no clue why it is doing that, and would appreciate any help. Thanks! EDIT: I added the params in case they help anyone figure this out: if (dataset == 'adult_train.csv'): input_size=9 print_every = 1 output_size = 1 lr = 0.001 num_epochs = 10 loss='BCELoss' EDIT2: Added accuracy calculation in the middle block
BCE loss is not the same thing as error. The entropy of a Bernoulli distribution with p=0.5 is -ln(0.5) = 0.693. This is the loss you would expect if: your data is evenly distributed, your network is guessing randomly, or your network always predicts a uniform distribution. Your model is in the second case. The network is currently guessing slightly negative logits for every prediction. Those will be interpreted as class-0 predictions. Since it seems your data is imbalanced towards 0 labels, your accuracy will be the same as that of a model that always predicts 0. This is just an artifact of random weight initialization. If you keep reinitializing your model you'll find that sometimes it will always predict 1 too.
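A quick check of that number: with all-zero logits (p=0.5 after the sigmoid), BCE gives exactly -ln(0.5) no matter what the targets are:

import torch
import torch.nn as nn

logits = torch.zeros(1000, 1)                  # a network with no preference
targets = torch.randint(0, 2, (1000, 1)).float()

loss = nn.BCEWithLogitsLoss()(logits, targets)
print(loss.item())                             # 0.6931... = -ln(0.5)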
https://stackoverflow.com/questions/65522548/
How do I load the CelebA dataset on Google Colab, using torch vision, without running out of memory?
I am following a tutorial on DCGAN. Whenever I try to load the CelebA dataset, torchvision uses up all my run-time's memory (12 GB) and the runtime crashes. I am looking for ways to load and apply transformations to the dataset without hogging my run-time's resources. To Reproduce Here is the part of the code that is causing issues. # Root directory for the dataset data_root = 'data/celeba' # Spatial size of training images, images are resized to this size. image_size = 64 celeba_data = datasets.CelebA(data_root, download=True, transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) ])) The full notebook can be found here Environment PyTorch version: 1.7.1+cu101 Is debug build: False CUDA used to build PyTorch: 10.1 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final) CMake version: version 3.12.0 Python version: 3.6 (64-bit runtime) Is CUDA available: True CUDA runtime version: 10.1.243 GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 418.67 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.4 [pip3] torch==1.7.1+cu101 [pip3] torchaudio==0.7.2 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.3.1 [pip3] torchvision==0.8.2+cu101 [conda] Could not collect Additional Context Some of the things I have tried are: Downloading and loading the dataset on seperate lines. e.g: # Download the dataset only datasets.CelebA(data_root, download=True) # Load the dataset here celeba_data = datasets.CelebA(data_root, download=False, transforms=...) Using the ImageFolder dataset class instead of the CelebA class. e.g: # Download the dataset only datasets.CelebA(data_root, download=True) # Load the dataset using the ImageFolder class celeba_data = datasets.ImageFolder(data_root, transforms=...) The memory problem is still persistent in either of the cases.
I did not manage to find a solution to the memory problem. However, I came up with a workaround: a custom dataset. Here is my implementation: import os import zipfile import gdown import torch from natsort import natsorted from PIL import Image from torch.utils.data import Dataset from torchvision import transforms ## Setup # Number of gpus available ngpu = 1 device = torch.device('cuda:0' if ( torch.cuda.is_available() and ngpu > 0) else 'cpu') ## Fetch data from Google Drive # Root directory for the dataset data_root = 'data/celeba' # Path to folder with the dataset dataset_folder = f'{data_root}/img_align_celeba' # URL for the CelebA dataset url = 'https://drive.google.com/uc?id=1cNIac61PSA_LqDFYFUeyaQYekYPc75NH' # Path to download the dataset to download_path = f'{data_root}/img_align_celeba.zip' # Create required directories if not os.path.exists(data_root): os.makedirs(data_root) os.makedirs(dataset_folder) # Download the dataset from google drive gdown.download(url, download_path, quiet=False) # Unzip the downloaded file with zipfile.ZipFile(download_path, 'r') as ziphandler: ziphandler.extractall(dataset_folder) ## Create a custom Dataset class class CelebADataset(Dataset): def __init__(self, root_dir, transform=None): """ Args: root_dir (string): Directory with all the images transform (callable, optional): transform to be applied to each image sample """ # Read names of images in the root directory image_names = os.listdir(root_dir) self.root_dir = root_dir self.transform = transform self.image_names = natsorted(image_names) def __len__(self): return len(self.image_names) def __getitem__(self, idx): # Get the path to the image img_path = os.path.join(self.root_dir, self.image_names[idx]) # Load image and convert it to RGB img = Image.open(img_path).convert('RGB') # Apply transformations to the image if self.transform: img = self.transform(img) return img ## Load the dataset # Path to directory with all the images img_folder = f'{dataset_folder}/img_align_celeba' # Spatial size of training images, images are resized to this size. image_size = 64 # Transformations to be applied to each individual image sample transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) ]) # Load the dataset from file and apply transformations celeba_dataset = CelebADataset(img_folder, transform) ## Create a dataloader # Batch size during training batch_size = 128 # Number of workers for the dataloader num_workers = 0 if device.type == 'cuda' else 2 # Whether to put fetched data tensors to pinned memory pin_memory = True if device.type == 'cuda' else False celeba_dataloader = torch.utils.data.DataLoader(celeba_dataset, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory, shuffle=True) This implementation is memory efficient and works for my use case; even during training, the memory used averages around 4 GB. I would, however, appreciate further intuition as to what might be causing the memory problems.
https://stackoverflow.com/questions/65528568/
Display number of images per class using Pytorch
I am using Pytorch with the FashionMNIST dataset. I would like to display 8 image samples from each of the 10 classes. However, I did not figure out how to split the training set into train_labels, since I need to loop over the labels (classes) and print 8 of each class. Any idea how I can achieve this? classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot') # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), # transforms.Lambda(lambda x: x.repeat(3,1,1)), transforms.Normalize((0.5, ), (0.5,))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=True) print('Training set size:', len(trainset)) print('Test set size:',len(testset))
If I understand you correctly you want to group your dataset by labels then display them. You can start by constructing a dictionary to store examples by label: examples = {i: [] for i in range(len(classes))} Then iterate over the trainset and append to the list using the label's index: for x, i in trainset: examples[i].append(x) However, this will go over the whole set. If you'd like to early stop and avoid gathering more than 8 per class, you can do so by adding conditions: n_examples = 8 for x, i in trainset: if all(len(ex) == n_examples for ex in examples.values()): break if len(examples[i]) < n_examples: examples[i].append(x) The only thing left is to display with torchvision.transforms.ToPILImage: transforms.ToPILImage()(examples[3][0]) If you want to show more than one, you could use two consecutive torch.cat, one on dim=1 (by rows) then on dim=2 (by columns) to create a grid. grid = torch.cat([torch.cat(examples[i], dim=1) for i in range(len(classes))], dim=2) transforms.ToPILImage()(grid)
https://stackoverflow.com/questions/65528954/
Huggingface Transformer - GPT2 resume training from saved checkpoint
Resuming the GPT2 finetuning, implemented from run_clm.py. Does the Hugging Face GPT2 implementation have a parameter to resume the training from a saved checkpoint, instead of training again from the beginning? Suppose the python notebook crashes while training; the checkpoints will be saved, but when I train the model again it still starts the training from the beginning. Source: here finetuning code: !python3 run_clm.py \ --train_file source.txt \ --do_train \ --output_dir gpt-finetuned \ --overwrite_output_dir \ --per_device_train_batch_size 2 \ --model_name_or_path=gpt2 \ --save_steps 100 \ --num_train_epochs=1 \ --block_size=200 \ --tokenizer_name=gpt2 From the above code, run_clm.py is a script provided by huggingface to finetune gpt2 on a customized dataset.
To resume training from checkpoint you use the --model_name_or_path parameter. So instead of giving the default gpt2 you direct this to your latest checkpoint folder. So your command becomes: !python3 run_clm.py \ --train_file source.txt \ --do_train \ --output_dir gpt-finetuned \ --overwrite_output_dir \ --per_device_train_batch_size 2 \ --model_name_or_path=/content/models/checkpoint-5000 \ --save_steps 100 \ --num_train_epochs=1 \ --block_size=200 \ --tokenizer_name=gpt2
https://stackoverflow.com/questions/65529156/
Expected hidden[0] size (2, 8, 256), got [8, 256]
The hidden state h0 has the correct shape when printed, as shown below. print(h0.shape) print(x.shape) torch.Size([2, 8, 256]) torch.Size([8, 300, 300]) But I still get the error Expected hidden[0] size (2, 8, 256), got [8, 256] What could be wrong? The whole code is as follows. import torch import torch.nn as nn import torchvision import matplotlib.pyplot as plt import torchvision.transforms as tt from torchvision.datasets import ImageFolder from PIL import Image import numpy as np from torch.autograd import Variable seq_len = input_size hidden_size = 256 #size of hidden layers num_classes = 5 num_epochs = 20 batch_size = 8 learning_rate = 0.001 # Fully connected neural network with one hidden layer num_layers = 2 # 2 RNN layers are stacked class LSTM(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(LSTM, self).__init__() self.num_layers = num_layers self.hidden_size = hidden_size self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)# batch must be the first dimension # our input needs to have shape #x -> (batch_size, seq, input_size) self.fc = nn.Linear(hidden_size, num_classes)#this fc is after RNN. So needs the last hidden size of RNN def forward(self, x): # according to the documentation of RNN in pytorch #rnn needs input, h_0 for inputs at RNN (h_0 is initial hidden state) #the following one is initial hidden layer h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)#first one is number of layers and second one is batch size #output has two outputs. The first tensor contains the output features of the hidden last layer for all time steps # the second one is the hidden state print(h0.shape) print(x.shape) out, _ = self.lstm(x, h0) print(out.shape) #output has batch_size, seq_len, hidden size #we need to decode hidden state only the last time step #out (N, 30, 128) #Since we need only the last time step #Out (N, 128) out = out[:, -1, :] #-1 for last time step, take all for N and 128 out = self.fc(out) return out stacked_lstm_model = LSTM(input_size, hidden_size, num_layers, num_classes).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss()#cross entropy has softmax at output optimizer = torch.optim.Adam(stacked_lstm_model.parameters(), lr=learning_rate) #optimizer used gradient optimization using Adam # Train the model n_total_steps = len(train_dl) for epoch in range(num_epochs): t_losses=[] for i, (images, labels) in enumerate(train_dl): # origin shape: [8, 1, 300, 300] # resized: [8, 300, 300] images = images.reshape(-1, seq_len, input_size).to(device) labels = labels.to(device) # Forward pass outputs = stacked_lstm_model(images) loss = criterion(outputs, labels) t_losses.append(loss) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (i+1) % 100 == 0: print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}') avgd_trainloss = sum(t_losses)/len(t_losses) acc=0 v_losses=[] with torch.no_grad(): n_correct = 0 n_samples = 0 for v_images, v_labels in valid_dl: v_images = v_images.reshape(-1, seq_len, input_size).to(device) v_labels = v_labels.to(device) v_outputs = stacked_lstm_model(v_images) v_loss = criterion(v_outputs, v_labels) v_losses.append(v_loss) # max returns (value ,index) _, v_predicted = torch.max(v_outputs.data, 1) n_samples += v_labels.size(0) n_correct += (v_predicted == v_labels).sum().item() acc = 100.0 * n_correct / n_samples avgd_validloss = sum(v_losses)/len(v_losses) print (f'Epoch [{epoch+1}/{num_epochs}], Train loss:
{avgd_trainloss.item():.4f}, Valid loss: {avgd_validloss.item():.4f}, Valid accu: {acc:.2f}') # Test the model # In test phase, we don't need to compute gradients (for memory efficiency) with torch.no_grad(): n_correct = 0 n_samples = 0 for images, labels in test_dl: images = images.reshape(-1, seq_len, input_size).to(device) labels = labels.to(device) outputs = stacked_lstm_model(images) # max returns (value ,index) _, predicted = torch.max(outputs.data, 1) n_samples += labels.size(0) n_correct += (predicted == labels).sum().item() acc = 100.0 * n_correct / n_samples print(f'Accuracy of the network on test images: {acc} %')
The LSTM requires two states, not one: the hidden state and the cell state. So instead of h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) use h0 = (torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device), torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)) So you need to pass the two states as a tuple.
https://stackoverflow.com/questions/65530554/
RNN - RuntimeError: input must have 3 dimensions, got 2
I'm getting the following error: RuntimeError: input must have 3 dimensions, got 2 I have a single feature column that I am trying to feed into a GRU neural net. Below are my data loader and neural net. I have also included the output of my data loader when I retrieve a batch of data. What am I doing wrong??? def batch_data(feature1, sequence_length, batch_size): """ Batch the neural network data using DataLoader :param feature1: the single feature column :param sequence_length: The sequence length of each batch :param batch_size: The size of each batch; the number of sequences in a batch :return: DataLoader with batched data """ # total number of batches we can make n_batches = len(feature1)//batch_size # Keep only enough characters to make full batches feature1 = feature1[:n_batches * batch_size] y_len = len(feature1) - sequence_length x, y = [], [] for idx in range(0, y_len): idx_end = sequence_length + idx x_batch = feature1[idx:idx_end] x.append(x_batch) # only making predictions after the last item in the batch batch_y = feature1[idx_end] y.append(batch_y) # create tensor datasets data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y))) data_loader = DataLoader(data, shuffle=False, batch_size=batch_size) # return a dataloader return data_loader # test dataloader on subset of actual data test_text = data_subset_b t_loader = batch_data(test_text, sequence_length=5, batch_size=10) data_iter = iter(t_loader) sample_x, sample_y = data_iter.next() print(sample_x.shape) print(sample_x) print() print(sample_y.shape) print(sample_y) When I pass in data, the following batch is generated... torch.Size([10, 5]) tensor([[ 0.0045, 0.0040, -0.0008, 0.0005, -0.0012], [ 0.0040, -0.0008, 0.0005, -0.0012, 0.0000], [-0.0008, 0.0005, -0.0012, 0.0000, -0.0015], [ 0.0005, -0.0012, 0.0000, -0.0015, 0.0008], [-0.0012, 0.0000, -0.0015, 0.0008, 0.0000], [ 0.0000, -0.0015, 0.0008, 0.0000, 0.0000], [-0.0015, 0.0008, 0.0000, 0.0000, -0.0008], [ 0.0008, 0.0000, 0.0000, -0.0008, -0.0039], [ 0.0000, 0.0000, -0.0008, -0.0039, -0.0026], [ 0.0000, -0.0008, -0.0039, -0.0026, -0.0082]], dtype=torch.float64) torch.Size([10]) tensor([ 0.0000, -0.0015, 0.0008, 0.0000, 0.0000, -0.0008, -0.0039, -0.0026, -0.0082, 0.0078], dtype=torch.float64)
As suggested by the error you got, the input tensor shape expected by the GRU is three dimensional with shape (batch_size, seq_len, input_size) (assuming the GRU was constructed with batch_first=True). But you are feeding a tensor of shape (10, 5). You said your input has one feature value, so you should add a dimension for input_size of size 1. This can be done like this: sample_x.unsqueeze(-1)
https://stackoverflow.com/questions/65530822/
Adding noise when using embedding layer in pytorch
I'm building a generator g, that receives a latent-code (vector of shape 100) and outputs an image. Specifically, I have 1000 MNIST images, and I want the network to learn a latent code z_i for each image x_i, such that g(z_i)=x_i (this approach is known as Generative Latent Optimization). So I've used nn.Embedding(1000, embedding_dim=100) and a standard generator architecture that receives the code from the embedding and outputs an image. As for the loss, I combine reconstruction loss with regularization on the embedding-vector weights. My problem is: I'd like to add noise to the latent-code vector before it is inserted into the generator (in order to make the latent code compact). However I'm a beginner, and I don't know whether I should call detach() when adding the noise or not. I'm not sure of my approach entirely. I don't want to learn the scale of the noise or anything. Here's my attempt: class net(nn.Module): def __init__(self): super().__init__() self.embed = nn.Embedding(1000, embedding_dim=100) self.generator = nn.Sequential( nn.Linear(100, 84), .... ) def forward(self, batch_indices): batch_codes = self.embed(batch_indices) noise = torch.randn_like(batch_codes) * sigma noisy_batch_codes = batch_codes + noise # SHOULD THIS BE batch_codes.detach() + noise ?? return self.generator(noisy_batch_codes) g = net() optim = SGD(g.parameters(), lr=0.01) for epoch in range(num_epochs): for orig_images, orig_images_idx in trainloader: optim.zero_grad() output = g(orig_images_idx) reconstruction_loss = nn.MSELoss()(output, orig_images) embed_vector_weights = g.embed.weight[orig_images_idx] reg_loss = torch.norm(embed_vector_weights) * reg_coeff loss = reconstruction_loss + reg_loss loss.backward() optim.step()
If you detach before adding noise the gradients won't propagate to your encoder (the emedding layer in this case) so your encoder weights will never be updated. Therefore you should probably not detach if you want the encoder to learn.
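A tiny sketch of the difference, checking whether the embedding table receives a gradient (the shapes here are made up):

import torch
import torch.nn as nn

embed = nn.Embedding(10, 4)

# without detach: gradients flow back into the embedding weights
codes = embed(torch.tensor([1, 2]))
noisy = codes + torch.randn_like(codes) * 0.1
noisy.sum().backward()
print(embed.weight.grad is None)   # False - the codes can still be optimized

# with detach, the graph is cut before the noise is added, so
# embed.weight.grad would never be populated (and backward() would
# actually raise here, since nothing in the result requires grad).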
https://stackoverflow.com/questions/65530928/
Lack of gradient when creating tensor from numpy
Can someone please explain to me the following behavior? import torch import numpy as np z = torch.tensor(np.array([1., 1.]), requires_grad=True).float() def pre_main(z): return z * 3.0 x = pre_main(z) x.backward(torch.tensor([1., 1.])) print(z.grad) prints: None Meanwhile: import torch import numpy as np z = torch.tensor([1., 1.], requires_grad=True).float() def pre_main(z): return z * 3.0 x = pre_main(z) x.backward(torch.tensor([1., 1.])) print(z.grad) prints: tensor([3., 3.]) Why are my gradients being destroyed when constructing from a numpy array? How do I fix this?
Your gradient is not destroyed: grad returns None because it has never been saved on the grad attribute. This is because non-leaf tensors don't have their gradients stored during backpropagation. Hence the warning message you received when running your first snippet: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). This is the case for your z tensor when it is defined as: >>> z = torch.tensor(np.array([1., 1.]), requires_grad=True).float() >>> z.is_leaf False Compared to: >>> z = torch.tensor([1., 1.], requires_grad=True).float() >>> z.is_leaf True which means the latter will have its gradient value in z.grad (there, .float() is a no-op because torch.tensor([1., 1.]) is already float32, so z stays a leaf). But notice that: >>> z = torch.tensor(np.array([1., 1.]), requires_grad=True) >>> z.is_leaf True To further explain this: when a tensor is first initialized it is a leaf node (.is_leaf returns True). As soon as you apply a function on it (here .float(), which creates a new tensor when the dtype changes) it is not a leaf anymore since it has parents in the computational graph. So really, there's nothing to fix... What you can do though is make sure the gradient is saved on z.grad when the backward pass is called. So, the second question comes down to how to store/access the gradient on a non-leaf node? Now, if you would like to store the gradient on the .backward() call, you could use retain_grad() as explained in the warning message: z = torch.tensor(np.array([1., 1.]), requires_grad=True).float() z.retain_grad() Or, since we expected it to be a leaf node, solve it by using FloatTensor to convert the numpy.array to a torch.Tensor: z = torch.FloatTensor(np.array([1., 1.])) z.requires_grad=True Alternatively, you could stick with torch.tensor and supply a dtype: z = torch.tensor(np.array([1., 1.]), dtype=torch.float64, requires_grad=True)
https://stackoverflow.com/questions/65532022/
Is there a depthwise constant convolutional layer option in PyTorch?
I'm interested in applying a convolutional kernel that's only got HxW parameters where (H, W) is kernel size. The kernel would still have dimensions CxHxW like a normal convolution, but the parameters are constant in the channel dimension. Is there an inbuilt option for this in PyTorch?
That would be equivalent to a convolution kernel with a single-channel (summed) input. You can verify that mathematically (just factor out the weight). We can also verify it with code, so you can use this if you really want to do that. import torch import torch.nn as nn # Normal conv normal_conv = nn.Conv2d(1, 2, kernel_size=1) # We can artificially repeat the weight along the channel dimension -> constant depthwise repeated_conv = nn.Conv2d(6, 2, kernel_size=1) repeated_conv.weight.data = normal_conv.weight.data.expand(-1, 6, -1, -1) repeated_conv.bias.data = normal_conv.bias.data data = torch.randn(1, 6, 3, 3) # same result print(repeated_conv(data)) print(normal_conv(data.sum(1, keepdim=True))) So, you don't need a custom layer. Just create a convolution with the number of input channels = 1, and sum the input in the channel dimension before you feed it into the layer. UPDATE: Backward pass testing: data1 = torch.randn(1, 6, 3, 3) data2 = data1.clone() data1.requires_grad = True data2.requires_grad = True repeated_conv(data1).mean().backward() normal_conv(data2.sum(1, keepdim=True)).mean().backward() print(data1.grad, repeated_conv.weight.grad.sum(1)) print(data2.grad, normal_conv.weight.grad)
https://stackoverflow.com/questions/65532319/
ValueError: Target size (torch.Size([1000])) must be the same as input size (torch.Size([1000, 1]))
I am trying to train my first neural net in pyTorch (I'm not a programmer, just a confused chemist). The net itself is supposed to take 1064 element vectors and rate them with a float number. So far I have encountered a variety of errors, ranging from 'float instead of long' to 'Target 1 is out of bounds'. Thus I have redefined the dtypes, corrected the dimensions of the input vector, changed the loss function, and now I am stuck in the situation where correcting the current error sets me back to the previous ones. Which is: ValueError: Target size (torch.Size([1000])) must be the same as input size (torch.Size([1000, 1])) at the 'loss=loss_calc(outputs, target)' line. I tried unsqueezing the label during the DataSet class definition, but this solution sets me back. When I tried label = label.view(1,1), the resulting error changed to Target size (torch.Size([1000, 1, 1])) must be the same as input size (torch.Size([1000, 1])) Could anyone please help me figure this out? import pandas as pd import numpy as np import rdkit from rdkit import Chem from rdkit.Chem import AllChem import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import Dataset, DataLoader class dataset(Dataset): def __init__(self, path, transform=None): self.data = pd.read_excel(path) self.transform = transform def __len__(self): return len(self.data) def __getitem__(self, index): smiles=self.data.at[index, 'smiles'] mol=Chem.MolFromSmiles(smiles) morgan = torch.tensor(list(AllChem.GetMorganFingerprintAsBitVect(mol, 1, nBits=1064)), dtype=torch.float) label=torch.tensor(self.data.at[index, 'score'], dtype=torch.long) if self.transform is not None: morgan=self.transform(morgan) return morgan, label class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(1064, 1064) self.fc2 = nn.Linear(1064, 1) self.act = nn.Tanh() def forward(self, x): x = self.act(self.fc1(x)) x = self.fc2(x) return x trainSet=dataset(r'C:\Users\BajMic\Peptides\trainingSet.xlsx') testSet=dataset(r'C:\Users\BajMic\Peptides\testSet.xlsx') net = Net() loss_calc = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(2): running_loss=0.0 for i, data in enumerate(DataLoader(trainSet, batch_size=1000, shuffle=True)): inputs, target = data optimizer.zero_grad() outputs = net(inputs) print(outputs) loss = loss_calc(outputs, target) loss.backward() optimizer.step() # print statistics running_loss += loss.item() print('[%d, %5d] loss: %.3f' %(epoch + 1, i + 1, running_loss)) running_loss = 0.0 print('Finished Training')
When working with a loss function and having problems with shapes you will usually get an error message of this type:

Target size (torch.Size([...])) must be the same as input size (torch.Size([...]))

'Target' refers to the label, the ground truth, while 'input' refers to your model's output. In this case, the output is 1000 predictions (you set batch_size=1000) of 1 scalar value each, therefore its shape is (1000, 1). This last axis is the one bothering you, since the target is just a 1D tensor containing 1000 scalars, i.e. of shape (1000). To solve this you can expand your target tensor with an extra dimension, with torch.unsqueeze():

target = target.unsqueeze(-1)  # -1 stands for the last axis, equivalent to dim=1 here
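A minimal sketch that reproduces the mismatch and the fix. The loss actually in use when this error appeared isn't shown above, so this assumes a BCE-style loss, which is what emits this particular message:

import torch
import torch.nn as nn

output = torch.randn(1000, 1)       # model output: one scalar per sample
target = torch.rand(1000)           # labels: plain 1D tensor

criterion = nn.BCEWithLogitsLoss()
# criterion(output, target)         # ValueError: Target size (torch.Size([1000]))
                                    # must be the same as input size (torch.Size([1000, 1]))
loss = criterion(output, target.unsqueeze(-1))  # both shapes are now (1000, 1)
print(loss)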
https://stackoverflow.com/questions/65537991/
How to reduce the inference time of Helsinki-NLP/opus-mt-es-en (translation model) from transformer
Currently the Helsinki-NLP/opus-mt-es-en model takes around 1.5 sec for inference with transformers. How can that be reduced? Also, when trying to convert it to ONNX Runtime I get this error: ValueError: Unrecognized configuration class <class 'transformers.models.marian.configuration_marian.MarianConfig'> for this kind of AutoModel: AutoModel. Model type should be one of RetriBertConfig, MT5Config, T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, LayoutLMConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, FSMTConfig, XLMConfig, CTRLConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, BertGenerationConfig, DebertaConfig, DPRConfig, XLMProphetNetConfig, ProphetNetConfig, MPNetConfig, TapasConfig. Is it possible to convert this model to ONNX Runtime?
The OPUS models are originally trained with Marian, which is a highly optimized toolkit for machine translation written fully in C++. Unlike PyTorch, it does not have the ambition to be a general deep learning toolkit, so it can focus on MT efficiency. The Marian configurations and instructions on how to download the models are at https://github.com/Helsinki-NLP/OPUS-MT. The OPUS-MT models for Huggingface's Transformers are converted from the original Marian models and are meant more for prototyping and analyzing the models rather than for using them for translation in a production-like setup. Running the models in Marian will certainly be much faster than in Python and it is certainly much easier than hacking Transformers to run with ONNX Runtime. Marian also offers further tricks to speed up the translation, e.g., model quantization, which is however at the expense of the translation quality. With both Marian and Transformers, you can speed things up if you use a GPU or if you narrow the beam width during decoding (attribute num_beams in the generate method in Transformers).
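On the Transformers side, a minimal sketch of narrowing the beam width (num_beams=1 falls back to greedy decoding, usually the fastest setting; the quality trade-off has to be checked on your own data, and this assumes a reasonably recent transformers version with Marian support):

from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-es-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
model.eval()

batch = tokenizer(["Hola, ¿cómo estás?"], return_tensors="pt")
# num_beams=1 disables beam search entirely; larger values translate better but slower
out = model.generate(**batch, num_beams=1)
print(tokenizer.decode(out[0], skip_special_tokens=True))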
https://stackoverflow.com/questions/65541788/
How to speed up the 'Adding visible gpu devices' process in tensorflow with a 30 series card?
I get stuck with that for ~2 minutes every time I run the code. Many people on the Internet said that it would only take a long time on the first run, but that's not the case for me. Although it doesn't make anything go wrong, it's pretty annoying. When I'm stuck, the system is under pretty low usage, including the CPU, system RAM, GPU, and video memory. I'm using an Nvidia Geforce RTX 3070 on Windows 10 x64 20H2. Here's my environment:
# Name Version Build Channel
blas 1.0 mkl defaults
boto3 1.16.47 pypi_0 pypi
botocore 1.19.47 pypi_0 pypi
ca-certificates 2020.12.8 haa95532_0 defaults
certifi 2020.12.5 py38haa95532_0 defaults
click 7.1.2 pypi_0 pypi
cudatoolkit 11.0.221 h74a9793_0 defaults
freetype 2.10.4 hd328e21_0 defaults
intel-openmp 2020.2 254 defaults
jmespath 0.10.0 pypi_0 pypi
joblib 1.0.0 pypi_0 pypi
jpeg 9b hb83a4c4_2 defaults
keras 2.4.3 pypi_0 pypi
libpng 1.6.37 h2a8f88b_0 defaults
libtiff 4.1.0 h56a325e_1 defaults
libuv 1.40.0 he774522_0 defaults
lz4-c 1.9.2 hf4a77e7_3 defaults
mkl 2020.2 256 defaults
mkl-service 2.3.0 py38h196d8e1_0 defaults
mkl_fft 1.2.0 py38h45dec08_0 defaults
mkl_random 1.1.1 py38h47e9c7a_0 defaults
ninja 1.10.2 py38h6d14046_0 defaults
numpy 1.19.2 py38hadc3359_0 defaults
numpy-base 1.19.2 py38ha3acd2a_0 defaults
olefile 0.46 py_0 defaults
openssl 1.1.1i h2bbff1b_0 defaults
pillow 8.0.1 py38h4fa10fc_0 defaults
pip 20.3.3 py38haa95532_0 defaults
python 3.8.5 h5fd99cc_1 defaults
pytorch 1.7.1 py3.8_cuda110_cudnn8_0 pytorch
regex 2020.11.13 pypi_0 pypi
s3transfer 0.3.3 pypi_0 pypi
sacremoses 0.0.43 pypi_0 pypi
scikit-learn 0.24.0 pypi_0 pypi
scipy 1.6.0 pypi_0 pypi
sentencepiece 0.1.94 pypi_0 pypi
setuptools 51.0.0 py38haa95532_2 defaults
six 1.15.0 py38haa95532_0 defaults
sklearn 0.0 pypi_0 pypi
sqlite 3.33.0 h2a8f88b_0 defaults
tb-nightly 2.5.0a20210101 pypi_0 pypi
threadpoolctl 2.1.0 pypi_0 pypi
thulac 0.2.1 pypi_0 pypi
tk 8.6.10 he774522_0 defaults
torchaudio 0.7.2 py38 pytorch
torchvision 0.8.2 py38_cu110 pytorch
transformers 2.1.1 pypi_0 pypi
typing_extensions 3.7.4.3 py_0 defaults
vc 14.2 h21ff451_1 defaults
vs2015_runtime 14.27.29016 h5e58377_2 defaults
wheel 0.36.2 pyhd3eb1b0_0 defaults
wincertstore 0.2 py38_0 defaults
xz 5.2.5 h62dcd97_0 defaults
zlib 1.2.11 h62dcd97_4 defaults
zstd 1.4.5 h04227a9_0 defaults
Although I'm using PyTorch, it's tensorflow rather than PyTorch that is to blame (according to the logs).
I got the same issue with pure tensorflow 2.3 2021-01-03 01:17:50.516100: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll 2021-01-03 01:17:52.622054: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library nvcuda.dll 2021-01-03 01:17:52.645796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:0a:00.0 name: GeForce RTX 3070 computeCapability: 8.6 coreClock: 1.725GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s 2021-01-03 01:17:52.645998: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll 2021-01-03 01:17:52.649575: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll 2021-01-03 01:17:52.649707: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll 2021-01-03 01:17:52.649827: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll 2021-01-03 01:17:52.649928: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll 2021-01-03 01:17:52.651954: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll 2021-01-03 01:17:52.660165: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll 2021-01-03 01:17:52.660416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2021-01-03 01:17:52.660971: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-01-03 01:17:52.668967: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x19659fe67d0 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:
2021-01-03 01:17:52.669132: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-01-03 01:17:52.669395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:0a:00.0 name: GeForce RTX 3070 computeCapability: 8.6 coreClock: 1.725GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2021-01-03 01:17:52.669576: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
2021-01-03 01:17:52.669683: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cublas64_10.dll
2021-01-03 01:17:52.669790: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cufft64_10.dll
2021-01-03 01:17:52.669896: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library curand64_10.dll
2021-01-03 01:17:52.670072: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusolver64_10.dll
2021-01-03 01:17:52.670201: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cusparse64_10.dll
2021-01-03 01:17:52.670365: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudnn64_7.dll
2021-01-03 01:17:52.670542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-01-03 01:18:37.097681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-03 01:18:37.097876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-01-03 01:18:37.098025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-01-03 01:18:37.098301: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6591 MB memory) -> physical GPU (device: 0, name: GeForce RTX 3070, pci bus id: 0000:0a:00.0, compute capability: 8.6)
2021-01-03 01:18:37.101296: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1960330d0d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-01-03 01:18:37.101474: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce RTX 3070, Compute Capability 8.6
args: Namespace(articles_per_title=10, device='0,1,2,3', length=1000, model_config='config/model_config_small.json', model_path='model/final_model', no_wordpiece=False, repetition_penalty=1.0, save_path='generated/', segment=False, temperature=2.0, titles='我', titles_file='', tokenizer_path='cache/vocab_small.txt', topk=10, topp=0)
I noticed that the tensorflow installation guide for GPU users says that GPUs with the Ampere architecture may encounter this issue, and that it can be solved by using export CUDA_CACHE_MAXSIZE=2147483648 to expand the default JIT cache. That doesn't work on Windows. I searched my environment variables and none of them is named CUDA_CACHE_MAXSIZE. I tried adding it on my own, but it still takes a long time to get past Adding visible gpu devices: 0. What should I do?
Just go to Windows Environment Variables and set CUDA_CACHE_MAXSIZE=2147483648 under system variables. Then you need a REBOOT, and everything will be fine. You are lucky enough to have gotten an Ampere card, since they're out of stock everywhere.
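If you would rather not touch the system settings, a hedged alternative sketch is to set the variable from inside the script itself. This only has a chance of working if it runs before TensorFlow (or anything else that initializes CUDA) is imported, and the JIT cache still has to be filled once before later startups become fast:

import os
# must run before the first import of tensorflow / torch in this process
os.environ["CUDA_CACHE_MAXSIZE"] = "2147483648"

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))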
https://stackoverflow.com/questions/65542317/
Fairseq Transform model not working (Float can't be cast to long)
I've installed python 3.8, pytorch 1.7, and fairseq 0.10.1, on a new machine, then copied in a script and model from a machine with python 3.6, pytorch 1.4 and fairseq 0.9.0, where it is working. The model is loaded and prepared with: model = TransformerModel.from_pretrained(...) model.eval() model.cuda() Then used with: inputs = [model.binarize(encode(src, str)) for str in texts] batched_hypos = model.generate(inputs, beam) inputs looks like e.g. [tensor([ 116, 1864, 181, 6, 2]), tensor([ 5, 432, 7, 2])] It asserts, with the last bit of the call stack being: ... batched_hypos = model.generate(inputs, beam) File "/path/to/fairseq/hub_utils.py", line 125, in generate sample = self._build_sample(tokens) File "/path/to/fairseq/hub_utils.py", line 196, in _build_sample assert torch.is_tensor(src_tokens) If instead I use fairseq-interactive from the commandline it fails with RuntimeError: result type Float can't be cast to the desired output type Long. (Full stack trace below.) As using the cli also fails, my hunch is that my model built with fairseq 0.9.x cannot be used with fairseq 0.10.x. If so, is there a way to update the model (i.e. without having to retrain it). And if not, what could the problem be, and how do I fix it? BTW, exactly the same error if I add --cpu to the commandline args, so the GPU or cuda version can be eliminated as a possible cause. $ fairseq-interactive path/to/dicts --path models/big.pt --source-lang ja --target-lang en --remove-bpe sentencepiece File "/path/to/bin/fairseq-interactive", line 11, in <module> sys.exit(cli_main()) File "/path/to/lib/python3.8/site-packages/fairseq_cli/interactive.py", line 190, in cli_main main(args) File "/path/to/lib/python3.8/site-packages/fairseq_cli/interactive.py", line 149, in main translations = task.inference_step(generator, models, sample) File "/path/to/lib/python3.8/site-packages/fairseq/tasks/fairseq_task.py", line 265, in inference_step return generator.generate(models, sample, prefix_tokens=prefix_tokens) File "/path/to/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/path/to/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 113, in generate return self._generate(model, sample, **kwargs) File "/path/to/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/path/to/lib/python3.8/site-packages/fairseq/sequence_generator.py", line 376, in _generate cand_scores, cand_indices, cand_beams = self.search.step( File "/path/to/lib/python3.8/site-packages/fairseq/search.py", line 81, in step torch.div(self.indices_buf, vocab_size, out=self.beams_buf) RuntimeError: result type Float can't be cast to the desired output type Long
(UPDATE: the below instructions install pytorch without GPU support. Going back to using the pytorch channel gets GPU support but fairseq breaks again. I've not yet cracked the secret code to get everything working together.) Solved this by wiping conda and starting again; I've decided to self-answer, rather than delete the question, as those error messages turned out to be useless (to put it politely) so maybe it will help someone else when they google. First: I actually had fairseq 0.9.0 installed. Even though 0.10.1 was listed first on conda-forge. This obviously meant my hunch was wrong, and something more obscure was at work. I then couldn't get uninstall or upgrade to work. Hence my decision to wipe anaconda completely and start again. Second, I noticed something deep in the conda documentation saying to install everything in one go, to avoid conflicts. Not my definition of how a package manager should work, but anyway. Third, I created a "test" conda environment, rather than using the "base" default. I suspect this had nothing to do with getting it to work, but I mention it just in case. So, my successful install command was: conda install -c conda-forge pytorch cudatoolkit=11.0 nvidia-apex fairseq==0.10.1 sentencepiece This gives me python 3.7.9 (not the 3.8.5 the OS has installed), pytorch 1.7.1, fairseq 0.10.1, and sentencepiece 0.1.92.
https://stackoverflow.com/questions/65543178/
AttributeError: 'tuple' object has no attribute 'size'
UPDATE: after looking back on this question, most of the code was unnecessary. In summary, the hidden layer of a Pytorch RNN needs to be a torch tensor. When I posted the question, the hidden layer was a tuple. Below is my data loader. from torch.utils.data import TensorDataset, DataLoader def batch_data(log_returns, sequence_length, batch_size): """ Batch the neural network data using DataLoader :param log_returns: asset's daily log returns :param sequence_length: The sequence length of each batch :param batch_size: The size of each batch; the number of sequences in a batch :return: DataLoader with batched data """ # total number of batches we can make n_batches = len(log_returns)//batch_size # Keep only enough characters to make full batches log_returns = log_returns[:n_batches * batch_size] y_len = len(log_returns) - sequence_length x, y = [], [] for idx in range(0, y_len): idx_end = sequence_length + idx x_batch = log_returns[idx:idx_end] x.append(x_batch) # only making predictions after the last word in the batch batch_y = log_returns[idx_end] y.append(batch_y) # create tensor datasets x_tensor = torch.from_numpy(np.asarray(x)) y_tensor = torch.from_numpy(np.asarray(y)) # make x_tensor 3-d instead of 2-d x_tensor = x_tensor.unsqueeze(-1) data = TensorDataset(x_tensor, y_tensor) data_loader = DataLoader(data, shuffle=False, batch_size=batch_size) # return a dataloader return data_loader def init_hidden(self, batch_size): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x batch_size x n_hidden, # initialized to zero, for hidden state and cell state of LSTM weight = next(self.parameters()).data if (train_on_gpu): hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(), weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda()) else: hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(), weight.new(self.n_layers, batch_size, self.n_hidden).zero_()) return hidden I don't know what is wrong. When I try to start training the model, I am getting the error message: AttributeError: 'tuple' object has no attribute 'size'
The issue comes from the fact that hidden (in the forward definition) isn't a torch.Tensor. Therefore, r_output, hidden = self.gru(nn_input, hidden) raises a rather confusing error without specifying exactly what's wrong in the arguments. Although you can see it's raised inside a nn.RNN function named check_hidden_size()...
I was confused at first, thinking that the second argument of nn.RNN, h0, was a tuple containing (hidden_state, cell_state). Same can be said of the second element returned by that call: hn. That's not the case: h0 and hn are both torch.Tensors. Interestingly enough though, you are able to unpack stacked tensors:

>>> z = torch.stack([torch.Tensor([1,2,3]), torch.Tensor([4,5,6])])
>>> a, b = z
>>> a, b
(tensor([1., 2., 3.]), tensor([4., 5., 6.]))

You are supposed to provide a tensor as the second argument of a nn.GRU __call__.
Edit - After further inspection of your code I found out that you are converting hidden back again to a tuple... In cell [14] you have hidden = tuple([each.data for each in hidden]). Which basically overwrites the modification you did in init_hidden with torch.stack.
Take a step back and look at the source code for RNNBase, the base class for RNN modules. If the hidden state is not given to the forward, it will default to:

if hx is None:
    num_directions = 2 if self.bidirectional else 1
    hx = torch.zeros(self.num_layers * num_directions,
                     max_batch_size, self.hidden_size,
                     dtype=input.dtype, device=input.device)

This is essentially the exact init as the one you are trying to implement. Granted, you only want to reset the hidden states on every epoch (I don't see why...). Anyhow, a basic alternative would be to set hidden to None at the start of an epoch, pass it as is to self.forward_back_prop, then to rnn, then to self.rnn, which will in turn default-initialize it for you. Then overwrite hidden with the hidden state returned by that RNN forward call.
To summarize, I've only kept the relevant parts of the code. Remove the init_hidden function from AssetGRU and make those modifications:

def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    ...
    if hidden is not None:
        hidden = hidden.detach()
    ...
    output, hidden = rnn(inp, hidden)
    ...
    return loss.item(), hidden


def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches):
    ...
    for epoch_i in range(1, n_epochs + 1):
        hidden = None
        for batch_i, (inputs, labels) in enumerate(train_loader, 1):
            loss, hidden = forward_back_prop(rnn, optimizer, criterion,
                                             inputs, labels, hidden)
            ...
    ...
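A minimal sketch of the default-initialization behaviour described above (the sizes are made up for illustration):

import torch
import torch.nn as nn

rnn = nn.GRU(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
x = torch.randn(4, 10, 8)        # (batch, seq_len, features)

out, hn = rnn(x)                 # hidden omitted -> defaults to zeros
print(type(hn), hn.shape)        # a plain tensor of shape (2, 4, 16), not a tuple

out, hn = rnn(x, hn.detach())    # feed the (detached) hidden state back in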
https://stackoverflow.com/questions/65543423/
Dimension mismatch in loading data
I have this code:

import torch.nn.functional as f
train_on_gpu=True

class CnnLstm(nn.Module):
    def __init__(self):
        super(CnnLstm, self).__init__()
        self.cnn = CNN()
        self.rnn = nn.LSTM(
            input_size=180000,
            hidden_size=256,
            num_layers=2,
            batch_first=True)
        self.linear = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        print('before forward ')
        print(x.shape)
        batch_size, time_steps, channels, height, width = x.size()
        c_in = x.view(batch_size * time_steps, channels, height, width)
        _, c_out = self.cnn(c_in)
        r_in = c_out.view(batch_size, time_steps, -1)
        r_out, (_, _) = self.rnn(r_in)
        r_out2 = self.linear(r_out[:, -1, :])
        return f.log_softmax(r_out2, dim=1)

cnnlstm_model = CnnLstm().to(device)
optimizer = torch.optim.Adam(cnnlstm_model.parameters(), lr=learning_rate)
#optimizer = torch.optim.SGD(cnnlstm_model.parameters(), lr=learning_rate)
#criterion = nn.functional.nll_loss()
criterion = nn.CrossEntropyLoss()

# Train the model
n_total_steps = len(train_dl)
num_epochs = 20
for epoch in range(num_epochs):
    t_losses=[]
    for i, (images, labels) in enumerate(train_dl):
        # origin shape: [5, 3, 300, 300]
        # resized: [5, 300, 300]
        print('load data '+str(images.shape))
        images = np.expand_dims(images, axis=1)
        print('after expand ')
        print(images.shape)
        images = torch.FloatTensor(images)
        images, labels = images.cuda(), labels.cuda()
        images, labels = Variable(images), Variable(labels)
        optimizer.zero_grad()
        outputs = cnnlstm_model(images)
        loss = criterion(outputs, labels)
        t_losses.append(loss)
        loss.backward()
        optimizer.step()

Three places print output:
(1) print('load data '+str(images.shape))
(2) print('after expand ') print(images.shape)
(3) print('before forward ') print(x.shape)
I have a batch size of 5 images. I am loading 2629 batches and only the last batch has issues. The earlier batches load with no issues, like this:

load data torch.Size([5, 3, 300, 300])
after expand
(5, 1, 3, 300, 300)
before forward
torch.Size([5, 1, 3, 300, 300])
load data torch.Size([5, 3, 300, 300])
after expand
(5, 1, 3, 300, 300)
before forward
torch.Size([5, 1, 3, 300, 300])
.
.
.
load data torch.Size([5, 3, 300, 300])
after expand
(5, 1, 3, 300, 300)
before forward
torch.Size([5, 1, 3, 300, 300])
load data torch.Size([5, 3, 300, 300])
after expand
(5, 1, 3, 300, 300)
before forward
torch.Size([5, 1, 3, 300, 300])

At the last batch:

load data torch.Size([5, 3, 300, 300])
after expand
(5, 1, 3, 300, 300)
before forward
torch.Size([5, 1, 3, 300, 300])
before forward
torch.Size([15, 300, 300])

Why do I have the 'before forward' log printed twice? Furthermore, it's not the same shape. What could be wrong?
EDIT: This is the code for loading data.
inputH = input_size inputW = input_size #Data transform (normalization & data augmentation) stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) train_resize_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) train_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.RandomHorizontalFlip(), tt.ToTensor(), tt.Normalize(*stats)]) valid_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) test_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize(*stats)]) #Create dataset train_ds = ImageFolder('./data/train', train_tfms) valid_ds = ImageFolder('./data/valid', valid_tfms) test_ds = ImageFolder('./data/test', test_tfms) from torch.utils.data.dataloader import DataLoader batch_size = 5 #Training data loader train_dl = DataLoader(train_ds, batch_size, shuffle = True, num_workers = 8, pin_memory=True) #Validation data loader valid_dl = DataLoader(valid_ds, batch_size, shuffle = True, num_workers = 8, pin_memory=True) #Test data loader test_dl = DataLoader(test_ds, 1, shuffle = False, num_workers = 1, pin_memory=True)
I made some changes to data loader and finally it worked. class DataLoader: stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) @staticmethod def get_train_data(batch_size): train_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.RandomHorizontalFlip(), tt.ToTensor(), tt.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]) train_ds = ImageFolder('./data/train', train_tfms) return torch.utils.data.DataLoader( train_ds, batch_size=batch_size, shuffle=True, num_workers = 8, pin_memory=True) @staticmethod def get_validate_data(valid_batch_size): valid_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]) valid_ds = ImageFolder('./data/valid', valid_tfms) return torch.utils.data.DataLoader( valid_ds, batch_size=valid_batch_size, shuffle=True, num_workers = 8, pin_memory=True) @staticmethod def get_test_data(test_batch_size): test_tfms = tt.Compose([tt.Resize((inputH, inputW), interpolation=2), tt.ToTensor(), tt.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))]) test_ds = ImageFolder('./data/test', test_tfms) return torch.utils.data.DataLoader( test_ds, batch_size=test_batch_size, shuffle=False, num_workers = 1, pin_memory=True)
https://stackoverflow.com/questions/65548312/
Using PyTorch nn.Sequential() to define a network in a flexible way but with results beyond expectation
I tried to define a network in a more flexible way using nn.Sequential, so that I can define its number of layers according to layernum:

seed = 0
torch.manual_seed(seed)

# ====== net_a =====
layers = [
    nn.Linear(7, 64),
    nn.Tanh()]
for i in range(layernum-1):   # layernum = 3
    layers.append(nn.Linear(64, 64))
    layers.append(nn.Tanh())
layers.append(nn.Linear(64, 8))
net_x = nn.Sequential(*layers)
net_y = nn.Sequential(*layers)
net_z = nn.Sequential(*layers)

# ====== net_b =====
net_x = nn.Sequential(
    nn.Linear(7, 64),
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 8),
)
net_y = nn.Sequential(
    #... same as net_x
)
net_z = nn.Sequential(
    #... same as net_x
)

# print(net_x)
# print(net_x[0].weight)

I use both of them individually, i.e. they are in the same .py file, but I use either one and comment the other out. Both of them consist of 3 networks with respect to the 3 dimensions (x, y, and z). I expected them to be the same network with the same training performance. The structure seems to be the same according to print(net_x):

# Sequential(
#   (0): Linear(in_features=7, out_features=64, bias=True)
#   (1): Tanh()
#   (2): Linear(in_features=64, out_features=64, bias=True)
#   (3): Tanh()
#   (4): Linear(in_features=64, out_features=64, bias=True)
#   (5): Tanh()
#   (6): Linear(in_features=64, out_features=8, bias=True)
# )

But their initial weights are different according to print(net_x[0].weight):

print(net_x[0].weight)   # net_a
# tensor([[-0.0028, 0.2028, -0.3111, -0.2782, -0.1456, 0.1014, -0.0075],
#         [ 0.2997, -0.0335, 0.1000, -0.1142, -0.0743, -0.3611, -0.2503],
# ......

print(net_x[0].weight)   # net_b
# tensor([[ 0.2813, 0.2968, 0.0078, 0.1518, 0.3776, -0.3247, 0.0071],
#         [ 0.3448, -0.0988, -0.2798, 0.3347, 0.3581, 0.2229, 0.2841],
# ......
======ADDED=====
I trained the network like this:

def train_on_batch(x, y, net, stepsize=innerstepsize):
    x = totorch(x)
    y = totorch(y)
    if(use_cuda): x,y = x.cuda(),y.cuda()
    net.zero_grad()
    ypred = net(x)
    loss = (ypred - y).pow(2).mean()
    loss.backward()
    for param in net.parameters():
        param.data -= stepsize * param.grad.data

iteration = 100
for iter in range(iteration):   # TRAIN
    PrepareSample()   # get in_support
    for i in range(tnum_support):
        out_x = trajectory_support_x[i,1:9]
        out_y = trajectory_support_y[i,1:9]
        out_z = trajectory_support_z[i,1:9]
        # Do SGD on this task
        for _ in range(innerepochs):   # SGD 1 times
            train_on_batch(in_support[i], out_x, net_x)
            train_on_batch(in_support[i], out_y, net_y)
            train_on_batch(in_support[i], out_z, net_z)

    # TEST
    if iter==0 or (iter+1) % 10 == 0:
        ind = [0,1,2,3,4,5,6,7,8,9]
        loss = [0,0,0,0,0,0]
        for i in range(tnum_test):
            inputs = in_test[i]
            outputs_x = trajectory_test_x[i].tolist()
            x_test = trajectory_test_x[i,[0,9]]
            y_test = trajectory_test_x[i,1:9]
            pred_x = np.hstack((x_test[0],predict(inputs, net_x),x_test[1]))
            loss[i] = np.square(predict(inputs, net_x) - y_test).mean()   # mse

            inputs = in_test[i]
            outputs_y = trajectory_test_y[i].tolist()
            x_test = trajectory_test_y[i,[0,9]]
            y_test = trajectory_test_y[i,1:9]
            pred_y = np.hstack((x_test[0],predict(inputs, net_y),x_test[1]))
            loss[i+2] = np.square(predict(inputs, net_y) - y_test).mean()   # mse

            inputs = in_test[i]
            outputs_z = trajectory_test_z[i].tolist()
            x_test = trajectory_test_z[i,[0,9]]
            y_test = trajectory_test_z[i,1:9]
            pred_z = np.hstack((x_test[0],predict(inputs, net_z),x_test[1]))
            loss[i+4] = np.square(predict(inputs, net_z) - y_test).mean()   # mse
        iterNum.append(iter+1)
        avgloss.append(np.mean(loss))

Both of them are trained with exactly the same data (they are in the same .py file and of course use the same data).
=====This is avgloss of net_a:
=====This is avgloss of net_a with torch.manual_seed(seed) before every network definition:
=====This is avgloss of net_b:
=====This is avgloss of net_b with torch.manual_seed(seed) before every network definition:
The training of net_a is weird: the MSE is high initially and doesn't decrease. On the contrary, the training of net_b looks normal: the MSE is relatively low at first and decreases to a smaller value after 100 iterations. Does anyone know how to fix this? I would like to sweep over different layer numbers, layer sizes, and activation functions of the network, and I don't want to write out a separate network for every set of hyper-parameters.
The random state is different after torch initializes the weights in the first network. You need to reset the random state to keep the same initialization, by calling torch.manual_seed(seed) after the definition of the first network and before the second one. The real problem, however, lies in net_x/y/z: it would be perfectly fine if it were just net_x. When you use nn.Sequential, it does not create new modules but instead stores a reference to the given module. So, in your first definition, you only have one copy of the layers, meaning that net_x/y/z all share their weights. In your second definition they have independent weights, which is naturally what we are after. You could define it like this instead:

def get_net():
    layers = [
        nn.Linear(7, 64),
        nn.Tanh()]
    for i in range(layernum-1):   # layernum = 3
        layers.append(nn.Linear(64, 64))
        layers.append(nn.Tanh())
    layers.append(nn.Linear(64, 8))
    return layers

net_x = nn.Sequential(*get_net())
net_y = nn.Sequential(*get_net())
net_z = nn.Sequential(*get_net())

Each time get_net is called, it creates a new copy of the layers.
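A quick sketch to see the sharing for yourself:

import torch.nn as nn

layers = [nn.Linear(7, 64), nn.Tanh(), nn.Linear(64, 8)]
net_x = nn.Sequential(*layers)
net_y = nn.Sequential(*layers)

# both Sequentials wrap the very same module objects, so the weights are shared
print(net_x[0] is net_y[0])                 # True
print(net_x[0].weight is net_y[0].weight)   # True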
https://stackoverflow.com/questions/65551280/
output.grad None even after loss.backward()
I'm confused... My model output:

tensor([[0.0000,0.1537],...],grad_fn=<ReluBackward0>)

If I use print(output.grad) it gives me None, and even after gradient computation with loss.backward() I get the same result, which again is None... Even with a with torch.set_grad_enabled(True): block added, it is still the same. I've now tried this with multiple model variants, always with the same result. I was achieving good results with my model and there seemed to be no problem, but now I see this and I'm not sure anymore whether there might be a major flaw I didn't recognize so far. But my model is learning, it improves, so I guess it has to be fine? Why do I get None instead of an actual value?
You are getting None because the gradient is only stored on the .grad property for leaf tensors. Those are tensors that don't have parents in the computational graph. You can check whether a tensor is a leaf or not with is_leaf:

>>> x = torch.FloatTensor([1,2,3])
>>> x.requires_grad = True
>>> x.sum().backward()   # backward pass
>>> x.is_leaf
True
>>> x.grad
tensor([1., 1., 1.])

The tensor you printed shows grad_fn=<ReluBackward0>, indicating it's the result of a ReLU layer and therefore not a leaf tensor. Here is an example of a non-leaf tensor:

>>> x = torch.FloatTensor([1,2,3])
>>> x.requires_grad=True
>>> z = x.sum()
>>> z.backward()
>>> z.is_leaf
False
>>> z.grad
None

Notice that z will show as tensor(6., grad_fn=<SumBackward0>). Actually accessing .grad will give a warning:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead.
And if you want to access the gradient on non-leaf tensors, follow the warning message and call retain_grad() before the backward pass:

>>> z = x.sum()
>>> z.retain_grad()
>>> z.backward()
>>> z.is_leaf
False
>>> z.grad
tensor(1.)
https://stackoverflow.com/questions/65552333/
Understanding convolutional layers shapes
I've been reading about convolutional nets and I've programmed a few models myself. When I see visual diagrams of other models it shows each layer being smaller and deeper than the last ones. Layers have three dimensions like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.
TLDR; 256x256x32 refers to the layer's output shape rather than the layer itself.
There are many articles and posts out there explaining how convolution layers work. I'll try to answer your question without going into too many details, just focusing on shapes.
Assuming you are working with 2D convolution layers, your input and output will both be three-dimensional. That is, without considering the batch, which would correspond to a 4th axis... Therefore, the shape of a convolution layer input will be (c, h, w) (or (h, w, c) depending on the framework) where c is the number of channels, h is the height of the input and w the width. You can see it as a c-channel hxw image.
The most intuitive example of such an input is the input of the first convolution layer of your convolutional neural network: most likely an image of size hxw with c channels, for example c=1 for greyscale or c=3 for RGB...
What's important is that for all pixels of that input, the values on each channel give additional information on that pixel. Having three channels will give each pixel ('pixel' as in position in the 2D input space) a richer content than having a single one, since each pixel will be encoded with three values (three channels) vs. a single one (one channel). This kind of intuition about what channels represent can be extrapolated to a higher number of channels. As we said, an input can have c channels.
Now going back to convolution layers, here is a good way to visualize it. Imagine having a 5x5 1-channel input and a convolution layer consisting of a single 3x3 filter (i.e. kernel_size=3). The input has shape (1, 5, 5), the filter has shape (3, 3), and the convolution output has shape (3, 3).
Now keep in mind the dimension of the output will depend on the stride and padding of the convolution layer. Here the shape of the output happens to be the same as the shape of the filter, but it does not necessarily have to be! Take an input shape of (1, 6, 6): with the same convolution settings, you would end up with a shape of (4, 4) (which is different from the filter shape (3, 3)).
Also, something to note is that if the input had more than one channel: shape (c, h, w), the filter would have to have the same number of channels. Each channel of the input would convolve with the corresponding channel of the filter and the results would be summed into a single 2D feature map. So you would have an intermediate output of (c, 3, 3), which after summing over the channels would leave us with (1, 3, 3)=(3, 3). As a result, considering a convolution with a single filter, however many input channels there are, the output will always have a single channel.
From there what you can do is assemble multiple filters on the same layer. This means you define your layer as having k 3x3 filters. So a layer consists of k filters. For the computation of the output, the idea is simple: one filter gives a (3, 3) feature map, so k filters will give k (3, 3) feature maps. These maps are then stacked into what will be the channel dimension. Ultimately, you're left with an output shape of... (k, 3, 3).
Let k_h and k_w be the kernel height and kernel width respectively, and h', w' the height and width of one outputted feature map. Then: the input has shape (c, h, w), a c-channel hxw feature map; the layer has shape (k, c, k_h, k_w), i.e. k filters of shape (c, k_h, k_w); and the output has shape (k, h', w'), a k-channel h'xw' feature map.
Back to your question:
Layers have 3 dimensions like 256x256x32. What is this third number? I assume the first two numbers are the number of nodes but I don't know what the depth is.
A convolution layer's weight tensor has four dimensions, but one of them is imposed by your input channel count. You can choose the size of your convolution kernel and the number of filters. This number determines the number of channels of the output.
256x256 seems extremely high and most likely corresponds to the spatial size of the output feature map. On the other hand, 32 would be the number of channels of the output, which... as I tried to explain, is the number of filters in that layer. Usually speaking, the dimensions represented in visual diagrams for convolution networks correspond to the intermediate output shapes, not the layer shapes.
As an example, take the VGG neural network:
Very Deep Convolutional Networks for Large-Scale Image Recognition
The input shape for VGG is (3, 224, 224); knowing that the result of the first convolution has shape (64, 224, 224), you can determine there is a total of 64 filters in that layer. As it turns out, the kernel size in VGG is 3x3. So, here is a question for you: knowing there is a single bias parameter per filter, how many total parameters are in VGG's first convolution layer?
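If you want to check your answer to that closing question, here is a quick sketch: each of the 64 filters carries 3*3*3 weights plus one bias, so 64 * (27 + 1) = 1792 parameters:

import torch.nn as nn

# VGG's first layer: 3 input channels, 64 filters, 3x3 kernels
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
print(conv.weight.shape)                           # torch.Size([64, 3, 3, 3])
print(sum(p.numel() for p in conv.parameters()))   # 1792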
https://stackoverflow.com/questions/65554032/
How to shuffle the batches themselves in pytorch?
How to keep the sequences in each batch unshuffled, while shuffling the batches? Inspired by the question asked here.
While this is not a direct answer to your question, I want to address an issue with the answer you posted yourself. In my opinion, doing the following is a very bad idea:

dataloader = random.sample(list(dataloader), len(dataloader))

This defeats the whole purpose of creating a dataset and a data loader in the first place, because as soon as you call list(dataloader) you end up compiling down your dataset into a single list of tensors. In other words, it will call __getitem__ for each index in the dataset. A data loader is designed to load data batch by batch (or more depending on the number of workers), avoiding loading the entire dataset into memory at once. This is even more important when working with images, which require loading from the file system. This is critical and I believe you shouldn't be doing this at all. Take a look here, with a dummy dataset:

class DS(Dataset):
    def __getitem__(self, _):
        return torch.rand(100)

    def __len__(self):
        return 10000

dl = DataLoader(DS(), batch_size=16)
x = list(dl)

Here x will contain 10,000 tensors of size 100, which your computer can handle. Now imagine having a dataset made up of 10,000 512x512 RGB images: you just can't hold that much in memory! Furthermore, something I haven't even mentioned is data augmentation, which is only possible when retaining a data loader (i.e. a generator). The transformations are then computed on the input data at runtime vs. at compile-time (if you will) when using list(dataloader).
I would instead suggest you make your Dataset generate that unshuffled sequence for each item, then make a DataLoader out of it with shuffle=True, as sketched below. This feels a lot more natural than generating a DataLoader only to compile it down. Use your dataset class as it's supposed to be used. It should be the one constructing each sequence (i.e. datapoint), or as @Prune puts it, a "single observation object".
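A minimal sketch of that suggestion, assuming a 1D series sliced into fixed-length windows (all names and shapes here are made up for illustration):

import torch
from torch.utils.data import Dataset, DataLoader

class SequenceDataset(Dataset):
    """Each item is one ordered window of the series; shuffle=True then
    shuffles which windows end up in a batch, never the order inside one."""
    def __init__(self, series, seq_len):
        self.series = series
        self.seq_len = seq_len

    def __len__(self):
        return len(self.series) - self.seq_len

    def __getitem__(self, i):
        return self.series[i : i + self.seq_len]

loader = DataLoader(SequenceDataset(torch.arange(100.), seq_len=5),
                    batch_size=4, shuffle=True)
for batch in loader:
    print(batch)   # each row is a consecutive run; batch composition is shuffled
    break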
https://stackoverflow.com/questions/65554711/
Multi Layer Perceptron Deep Learning in Python using Pytorch
I am having errors in executing the train function of my code in MLP. This is the error: mat1 and mat2 shapes cannot be multiplied (128x10 and 48x10) My code for the train function is this: class net(nn.Module): def __init__(self, input_dim2, hidden_dim2, output_dim2): super(net, self).__init__() self.input_dim2 = input_dim2 self.fc1 = nn.Linear(input_dim2, hidden_dim2) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_dim2, hidden_dim2) self.fc3 = nn.Linear(hidden_dim2, output_dim2) def forward(self, x): x = self.fc1(x) x = self.relu(x) x = self.fc2(x) x = self.relu(x) x = self.fc3(x) x = F.softmax(self.fc3(x)) return x model = net(input_dim2, hidden_dim2, output_dim2) #create the network criterion = nn.CrossEntropyLoss() optimizer = torch.optim.RMSprop(model.parameters(), lr = learning_rate2) def train(num_epochs2): for i in range(num_epochs2): tmp_loss = [] for (x,y) in train_loader: print(y.shape) print(x.shape) outputs = model(x) #forward pass print(outputs.shape) loss = criterion(outputs, y) #loss computation tmp_loss.append(loss.item()) #recording the loss optimizer.zero_grad() #all the accumulated gradient loss.backward() #auto-differentiaton - accumulation of gradient optimizer.step() # a gradient step print("Loss at {}th epoch: {}".format(i, np.mean(tmp_loss))) I don't know where I'm wrong. My code seems to work okay.
From the limited error message, my guess is that the place where you went wrong is the following snippet:

x = self.fc3(x)
x = F.softmax(self.fc3(x))

Try replacing it with:

x = self.fc3(x)
x = F.softmax(x, dim=1)

A good question should include the error backtrace and a complete toy example that reproduces the error!
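To see why applying fc3 twice fails, here is a small sketch; the sizes hidden_dim2=48, output_dim2=10 and batch size 128 are assumptions inferred from the error message:

import torch
import torch.nn as nn

fc3 = nn.Linear(48, 10)
x = torch.randn(128, 48)

y = fc3(x)   # fine: (128, 48) -> (128, 10)
# fc3(y)     # RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x10 and 48x10)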
https://stackoverflow.com/questions/65557065/
Backtransforming a PyTorch Tensor
I have trained a WGAN on the CelebA dataset in PyTorch following this youtube video. Since I do this on Google Cloud Platform, where TensorBoard is not available, I save one figure of images generated by the GAN every epoch to see how the GAN is actually doing. Now, the saved pdf files look something like this: generated images. Unfortunately, this is not really readable, and I suspect this has to do with the preprocessing I do:

trafo = transforms.Compose(
    [transforms.Resize(size = (64, 64)),
     transforms.ToTensor(),
     transforms.Normalize( mean = (0.5,), std = (0.5,))])

Is there any way to kind of undo this transformation when I save the image? Currently, I save the image every epoch as follows:

visualization = torchvision.utils.make_grid(
    tensor = gen(fixed_noise),
    nrow = 8,
    normalize = False)
plt.savefig("generated_WGAN_" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".pdf")

Also, I should probably mention that in the Jupyter notebook, I get the following warning: "Clipping input data to the valid range for imshow with RGB data ([0..1]) for floats or [0..255] for integers)."
It seems like your output pixel values are in range [-1, 1] (please verify this). Therefore, when you save the images, the negative part is being clipped (as the error message you got suggests). Try: visualization = torchvision.utils.make_grid( tensor = torch.clamp(gen(fixed_noise), -1, 1) * 0.5 + 0.5, # from [-1, 1] -> [0, 1] nrow = 8, normalize = False) plt.savefig("generated_WGAN_" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".pdf")
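An alternative sketch is to let torchvision do the rescaling when saving, instead of undoing the normalization by hand (this reuses gen and fixed_noise from the question and assumes the generator output is in [-1, 1]; note that recent torchvision versions call the argument value_range while older ones call it range):

import torchvision

fake = gen(fixed_noise).detach().cpu()   # assumed to be in [-1, 1]
torchvision.utils.save_image(fake, "generated.png", nrow=8,
                             normalize=True, value_range=(-1, 1))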
https://stackoverflow.com/questions/65561828/
How to map element in pytorch tensor to id?
Given a tensor:

A = torch.tensor([2., 3., 4., 5., 6., 7.])

Then, give each element in A an id:

id = torch.arange(A.shape[0], dtype = torch.int)   # tensor([0,1,2,3,4,5])

In other words, the id of 2. in A is 0 and the id of 3. in A is 1:

2. -> 0
3. -> 1
4. -> 2
5. -> 3
6. -> 4
7. -> 5

Then, I have a new tensor:

B = torch.tensor([3., 6., 6., 5., 4., 4., 4.])

Is there any way in PyTorch to map each element in B to its id? In other words, I want to obtain tensor([1, 4, 4, 3, 2, 2, 2]), in which each element is the id of the corresponding element of B.
I don't think there is such a function in PyTorch to map a tensor. It seems quite unreasonable to solve this by comparing each value from B to values from A. Here are two possible solutions to this problem.

Using a dictionary as a map
You can use a dictionary. Not so much of a pure-PyTorch solution, but it will most probably be the fastest and safest way... Just create a dict to map each element to an id, then use it to map B:

>>> map = {x.item(): i for i, x in enumerate(A)}
>>> torch.tensor([map[x.item()] for x in B])
tensor([1, 4, 4, 3, 2, 2, 2])

Change of basis approach
An alternative only using torch.Tensors. This will require the values you want to map - the content of A - to be integers, because they will be used to index a tensor.
Encode the content of A into one-hot encodings:

>>> A_enc = torch.zeros((int(A.max())+1,)*2)
>>> A_enc[A, torch.arange(A.shape[0])] = 1
>>> A_enc
tensor([[0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0., 0., 0., 0.],
        [0., 0., 1., 0., 0., 0., 0., 0.],
        [0., 0., 0., 1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0.]])

We'll use A_enc as our basis to map integers:

>>> v = torch.argmax(A_enc, dim=0)
tensor([0, 0, 0, 1, 2, 3, 4, 5])

Now, given an integer, for instance x=3, we can encode it into a one-hot encoding: x_enc = [0, 0, 0, 1, 0, 0, 0, 0]. Then, use v to map it. With a simple dot product you can get the mapping of x_enc: here <v, x_enc> gives 1, which is the desired result (the first element of mapped-B). But instead of giving x_enc, we will compute the matrix multiplication between v and encoded-B. First encode B, then compute the matrix multiplication v@B_enc:

>>> B_enc = torch.zeros(A_enc.shape[0], B.shape[0])
>>> B_enc[B, torch.arange(B.shape[0])] = 1
>>> B_enc
tensor([[0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1., 1., 1.],
        [0., 0., 0., 1., 0., 0., 0.],
        [0., 1., 1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0.]])

>>> v@B_enc.long()
tensor([1, 4, 4, 3, 2, 2, 2])

Note - you will have to define your tensors with Long type.
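Since A happens to be sorted in your example, one more option worth mentioning is torch.searchsorted (available from PyTorch 1.6): it returns, for every value of B, the index at which it would be inserted into A, which is exactly its id, provided A is sorted and every value of B actually occurs in A:

import torch

A = torch.tensor([2., 3., 4., 5., 6., 7.])
B = torch.tensor([3., 6., 6., 5., 4., 4., 4.])
print(torch.searchsorted(A, B))   # tensor([1, 4, 4, 3, 2, 2, 2])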
https://stackoverflow.com/questions/65565461/
How to access Spark DataFrame data in GPU from ML Libraries such as PyTorch or Tensorflow
Currently I am studying the usage of Apache Spark 3.0 with Rapids GPU Acceleration. In the official spark-rapids docs I came across this page which states: There are cases where you may want to get access to the raw data on the GPU, preferably without copying it. One use case for this is exporting the data to an ML framework after doing feature extraction. To me this sounds as if one could make data that is already available on the GPU from some upstream Spark ETL process directly available to a framework such as Tensorflow or PyTorch. If this is the case how can I access the data from within any of these frameworks? If I am misunderstanding something here, what is the quote exactly referring to?
The link you reference really only allows you to get access to the data still sitting on the GPU, but using that data in another framework, like Tensorflow or PyTorch, is not that simple.
TL;DR; Unless you have a library explicitly set up to work with the RAPIDS accelerator you probably want to run your ETL with RAPIDS, then save it, and launch a new job to train your models using that data.
There are still a number of issues that you would need to solve. We have worked on these in the case of XGBoost, but it has not been something that we have tried to tackle for Tensorflow or PyTorch yet.
The big issues are:
Getting the data to the correct process. Even if the data is on the GPU, because of security, it is tied to a given user process. PyTorch and Tensorflow generally run as python processes and not in the same JVM that Spark is running in. This means that the data has to be sent to the other process. There are several ways to do this, but it is non-trivial to try and do it as a zero-copy operation.
The format of the data is not what Tensorflow or PyTorch want. The data for RAPIDS is in an Arrow-compatible format. Tensorflow and PyTorch have APIs for importing data in standard formats from the CPU, but it might take a bit of work to get the data into a format that the frameworks want and to find an API that lets you pull it in directly from the GPU.
Sharing GPU resources. Spark only recently added support for scheduling GPUs. Prior to that, people would just launch a single spark task per executor and a single python process, so that the python process would own the entire GPU when doing training or inference. With the RAPIDS accelerator the GPU is not free any more and you need a way to share the resources. RMM provides some of this if both libraries are updated to use it and they are in the same process, but in the case of PyTorch and Tensorflow they are typically in python processes, so figuring out how to share the GPU is hard.
https://stackoverflow.com/questions/65565760/
How does PyTorch Tensor.index_select() evaluates tensor output?
I am not able to understand how complex indexing - non contiguous indexing of a tensor works. Here is a sample code and its output import torch def describe(x): print("Type: {}".format(x.type())) print("Shape/size: {}".format(x.shape)) print("Values: \n{}".format(x)) indices = torch.LongTensor([0,2]) x = torch.arange(6).view(2,3) describe(torch.index_select(x, dim=1, index=indices)) Returns output as Type: torch.LongTensor Shape/size: torch.Size([2, 2]) Values: tensor([[0, 2], [3, 5]]) Can someone explain how did it arrive to this output tensor? Thanks!
You are selecting the first (indices[0] is 0) and third (indices[1] is 2) columns from x on the second axis (dim=1). Essentially, torch.index_select with dim=1 works the same as doing a direct indexing on the second axis with x[:, indices].

>>> x
tensor([[0, 1, 2],
        [3, 4, 5]])

So it selects columns (since you're looking at dim=1 and not dim=0) whose indices are in indices. Imagine having a simple list [0, 2] as indices:

>>> indices = [0, 2]
>>> x[:, indices[0]]   # same as x[:, 0]
tensor([0, 3])

>>> x[:, indices[1]]   # same as x[:, 2]
tensor([2, 5])

So passing the indices as a torch.Tensor allows you to index on all elements of indices directly, i.e. columns 0 and 2. Similar to how NumPy's indexing works.

>>> x[:, indices]
tensor([[0, 2],
        [3, 5]])

Here's another example to help you see how it works. With x defined as x = torch.arange(9).view(3, 3), we have 3 rows (a.k.a. dim=0) and 3 columns (a.k.a. dim=1).

>>> indices
tensor([0, 2])   # namely 'first' and 'third'

>>> x = torch.arange(9).view(3, 3)
tensor([[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]])

>>> x.index_select(0, indices)   # select first and third rows
tensor([[0, 1, 2],
        [6, 7, 8]])

>>> x.index_select(1, indices)   # select first and third columns
tensor([[0, 2],
        [3, 5],
        [6, 8]])

Note: torch.index_select(x, dim, indices) is equivalent to x.index_select(dim, indices)
https://stackoverflow.com/questions/65566874/
Add blocks of values to a tensor at specific locations in PyTorch
I have a list of indices: indx = torch.LongTensor([ [ 0, 2, 0], [ 0, 2, 4], [ 0, 4, 0], [ 0, 10, 14], [ 1, 4, 0], [ 1, 8, 2], [ 1, 12, 0] ]) And I have a tensor of 2x2 blocks: blocks = torch.FloatTensor([ [[1.5818, 2.3108], [2.6742, 3.0024]], [[2.0472, 1.6651], [3.2807, 2.7413]], [[1.5587, 2.1905], [1.9231, 3.5083]], [[1.6007, 2.1426], [2.4802, 3.0610]], [[1.9087, 2.1021], [2.7781, 3.2282]], [[1.5127, 2.6322], [2.4233, 3.6836]], [[1.9645, 2.3831], [2.8675, 3.3770]] ]) What I want to do is to add each block at an index position to another tensor (i.e. so that it starts at that index). Let's assume that I want to add it to the following tensor: a = torch.ones([2,18,18]) Is there any efficient way to do so? So far I came up only with: i = 0 for b, x, y in indx: a[b, x:x+2, y:y+2] += blocks[i] i += 1 It is quite inefficient, I also tried to use index_add, but it did not work properly.
You are looking to index on three different dimensions at the same time. I had a look around in the documentation: torch.index_add will only receive a vector as index. My hopes were on torch.scatter but it doesn't fit well to this problem. As it turns out you can achieve this pretty easily with a little work; the most difficult parts are the setup and teardown. Please hang on tight. I'll use a simplified example here, but the same can be applied with larger tensors.

>>> indx
tensor([[ 0,  2,  0],
        [ 0,  2,  4],
        [ 0,  4,  0]])

>>> blocks
tensor([[[1.5818, 2.3108],
         [2.6742, 3.0024]],

        [[2.0472, 1.6651],
         [3.2807, 2.7413]],

        [[1.5587, 2.1905],
         [1.9231, 3.5083]]])

>>> a
tensor([[[0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.],
         [0., 0., 0., 0., 0., 0.]]])

The main issue here is that you are looking to index with slicing. That's not possible in a vectorized form. To counter that, though, you can convert your a tensor into 2x2 chunks. This will be particularly handy since we will be able to access sub-tensors such as a[0, 2:4, 4:6] with just a[0, 1, 2]: the 2:4 slice on dim=1 is grouped together on index=1 while the 4:6 slice on dim=2 is grouped on index=2.
First we will convert a to a tensor made up of 2x2 chunks. Then we will update with blocks. Finally, we will stitch back the resulting tensor into the original shape.

1. Converting a to a 2x2-chunks tensor
You can use a combination of torch.chunk and torch.cat (not torch.dog) twice: on dim=1 and dim=2. The shape of a is (1, h, w), so we're looking for a result of shape (1, h//2, w//2, 2, 2). To do so we will unsqueeze two axes on a:

>>> a_ = a[:, None, :, None, :]
>>> a_.shape
torch.Size([1, 1, 6, 1, 6])

Then make 3 chunks on dim=2, then concatenate on dim=1:

>>> a_row_chunks = torch.cat(torch.chunk(a_, 3, dim=2), dim=1)
>>> a_row_chunks.shape
torch.Size([1, 3, 2, 1, 6])

And make 3 chunks on dim=4, then concatenate on dim=3:

>>> a_col_chunks = torch.cat(torch.chunk(a_row_chunks, 3, dim=4), dim=3)
>>> a_col_chunks.shape
torch.Size([1, 3, 2, 3, 2])

Finally, reshape everything:

>>> a_chunks = a_col_chunks.reshape(1, 3, 3, 2, 2)

Create a new index with adjusted values for our new tensor. Essentially we divide all values by 2 except for the first column, which is the index of dim=0 in a and stays unchanged. There's some fiddling around with the types (in short: it has to be a float in order to divide by 2 but needs to be cast back to a long in order for the indexing to work):

>>> indx_ = indx.clone().float()
>>> indx_[:, 1:] /= 2
>>> indx_ = indx_.long()
tensor([[0, 1, 0],
        [0, 1, 2],
        [0, 2, 0]])

2. Updating with blocks
We will simply index and accumulate with:

>>> a_chunks[indx_[:, 0], indx_[:, 1], indx_[:, 2]] += blocks

3. Putting it back together
I thought that was it, but actually converting a_chunks back to a 6x6 tensor is way trickier than it seems. Apparently torch.cat can only receive a tuple. I won't go into too much detail: tuple() will only consider the first axis; as a workaround you can use torch.permute to switch the axes.
This combined with two torch.cat calls will do:

>>> a_row_cat = torch.cat(tuple(a_chunks.permute(1, 0, 2, 3, 4)), dim=2)
>>> a_row_cat.shape
torch.Size([1, 3, 6, 2])

>>> A = torch.cat(tuple(a_row_cat.permute(1, 0, 2, 3)), dim=2)
>>> A.shape
torch.Size([1, 6, 6])

>>> A
tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
         [1.5818, 2.3108, 0.0000, 0.0000, 2.0472, 1.6651],
         [2.6742, 3.0024, 0.0000, 0.0000, 3.2807, 2.7413],
         [1.5587, 2.1905, 0.0000, 0.0000, 0.0000, 0.0000],
         [1.9231, 3.5083, 0.0000, 0.0000, 0.0000, 0.0000]]])

Et voilà.
If you didn't quite get how the chunks work, run this:

for x in range(0, 6, 2):
    for y in range(0, 6, 2):
        a *= 0
        a[:, x:x+2, y:y+2] = 1
        print(a)

And see for yourself: each 2x2 block of 1s corresponds to a chunk in a_chunks. So you can do the same with:

for x in range(3):
    for y in range(3):
        a_chunks *= 0
        a_chunks[:, x, y] = 1
        print(a_chunks)
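For completeness, a more compact alternative sketch using index_put_ with accumulate=True. It spells out the four target positions of every 2x2 block explicitly, and accumulate=True takes care of overlapping blocks (this assumes the blocks are always 2x2, as in the question, and reuses a, indx and blocks from above):

import torch

# offsets of the four cells inside a 2x2 block, in row-major order
offs = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]])

b = indx[:, 0].repeat_interleave(4)               # batch index of each cell
r = (indx[:, 1, None] + offs[:, 0]).reshape(-1)   # row index of each cell
c = (indx[:, 2, None] + offs[:, 1]).reshape(-1)   # column index of each cell

# blocks.reshape(-1) flattens each block in the same row-major order as offs
a.index_put_((b, r, c), blocks.reshape(-1), accumulate=True)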
https://stackoverflow.com/questions/65571114/
Use of PyTorch permute in RCNN
I am looking at an implementation of RCNN for text classification using PyTorch. Full Code. There are two points where the dimensions of tensors are permuted using the permute function. The first is after the LSTM layer and before tanh. The second is after a linear layer and before a max pooling layer. Could you please explain why the permutation is necessary or useful? Relevant Code def forward(self, x): # x.shape = (seq_len, batch_size) embedded_sent = self.embeddings(x) # embedded_sent.shape = (seq_len, batch_size, embed_size) lstm_out, (h_n,c_n) = self.lstm(embedded_sent) # lstm_out.shape = (seq_len, batch_size, 2 * hidden_size) input_features = torch.cat([lstm_out,embedded_sent], 2).permute(1,0,2) # final_features.shape = (batch_size, seq_len, embed_size + 2*hidden_size) linear_output = self.tanh( self.W(input_features) ) # linear_output.shape = (batch_size, seq_len, hidden_size_linear) linear_output = linear_output.permute(0,2,1) # Reshaping fot max_pool max_out_features = F.max_pool1d(linear_output, linear_output.shape[2]).squeeze(2) # max_out_features.shape = (batch_size, hidden_size_linear) max_out_features = self.dropout(max_out_features) final_out = self.fc(max_out_features) return self.softmax(final_out) Similar Code in other Repositories Similar implementations of RCNN use permute or transpose. Here are examples: https://github.com/prakashpandey9/Text-Classification-Pytorch/blob/master/models/RCNN.py https://github.com/jungwhank/rcnn-text-classification-pytorch/blob/master/model.py
What the permute function does is rearrange the original tensor's axes according to the desired ordering. Note that permute is different from the reshape function: when applying permute, the elements in the tensor follow the axis ordering you provide, whereas in reshape they do not.

Example code:

import torch

var = torch.randn(2, 4)
pe_var = var.permute(1, 0)
re_var = torch.reshape(var, (4, 2))

print("Original size:\n{}\nOriginal var:\n{}\n".format(var.size(), var) +
      "Permute size:\n{}\nPermute var:\n{}\n".format(pe_var.size(), pe_var) +
      "Reshape size:\n{}\nReshape var:\n{}\n".format(re_var.size(), re_var))

Outputs:

Original size:
torch.Size([2, 4])
Original var:
tensor([[ 0.8250, -0.1984,  0.5567, -0.7123],
        [-1.0503,  0.0470, -1.9473,  0.9925]])
Permute size:
torch.Size([4, 2])
Permute var:
tensor([[ 0.8250, -1.0503],
        [-0.1984,  0.0470],
        [ 0.5567, -1.9473],
        [-0.7123,  0.9925]])
Reshape size:
torch.Size([4, 2])
Reshape var:
tensor([[ 0.8250, -0.1984],
        [ 0.5567, -0.7123],
        [-1.0503,  0.0470],
        [-1.9473,  0.9925]])

With the role of permute in mind, we can see that the first permute reorders the concatenated tensor so it fits the input format of self.W, i.e. with batch as the first dimension; and the second permute does a similar thing because we want to max-pool linear_output along the sequence, and F.max_pool1d pools along the last dimension.
https://stackoverflow.com/questions/65571264/
logistic regression model with L1 regularisation
I am trying to apply L1 regularization on a logistic model

class LogisticRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        x = x.reshape(-1, 784)
        output = self.linear(x)
        return output

    def training_step(self, batch):
        images, labels = batch
        output = self(images)
        loss = F.cross_entropy(output, labels)
        acc = accuracy(output, labels)
        return {'Training_loss': loss, 'Training_acc': acc}

    def training_epoch_end(self, outputs):
        batch_losses = [x['Training_loss'] for x in outputs]
        epoch_loss = torch.stack(batch_losses).mean()
        batch_accs = [x['Training_acc'] for x in outputs]
        epoch_acc = torch.stack(batch_accs).mean()
        return {'Training_loss': epoch_loss.item(), 'Training_acc': epoch_acc.item()}

    def epoch_end(self, epoch, result):
        print("Epoch [{}], Training_loss: {:.4f}, Training_acc: {:.4f}".format(epoch, result['Training_loss'], result['Training_acc']))

model = LogisticRegression()

But I think I am doing it wrong; the accuracy did not change.

L1 = 0.2

def evaluate(model_b, trainloader):
    outputs = [model_b.training_step(batch) for batch in trainloader]
    return model_b.training_epoch_end(outputs)

def fit(epochs, lr, model_b, trainloader, opt_func=torch.optim.SGD):
    history = []
    optimizer = opt_func(model_b.parameters(), lr)
    for epoch in range(epochs):
        ##### Training Phase
        for batch in trainloader:
            loss = model_b.training_step(batch)['Training_loss']
            loss_Lasso = loss + 0.5 * L1  # L1 reg
            loss_Lasso.backward()
            optimizer.step()
            optimizer.zero_grad()
        result = evaluate(model_b, trainloader)
        model_b.epoch_end(epoch, result)
        history.append(result)
    return history

Can anyone help me with what I am missing and how I can really apply L1 regularization? Also, is L1 regularization called lasso?
I believe the l1-norm is a type of Lasso regularization, yes, but there are others.

In your snippet, L1 is set as a constant; instead, you should measure the l1-norm of your model's parameters. Then sum it with your network's loss, as you did. In your example there is a single layer, so you will only need self.linear's parameters. First gather all parameters, then measure the total norm with torch.norm. You could also use nn.L1Loss.

params = torch.cat([x.view(-1) for x in model.linear.parameters()])
L1 = lamb*torch.norm(params, p=1)

Where lamb is your lambda regularization parameter and model is initialized from the LogisticRegression class.
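For reference, here is a minimal sketch of how this could be wired into the fit loop from the question; lamb = 0.2 just reuses the constant from the original snippet and should be tuned as a hyperparameter:

lamb = 0.2  # regularization strength (hyperparameter)

for batch in trainloader:
    loss = model_b.training_step(batch)['Training_loss']
    # recompute the l1-norm of the linear layer's parameters at every step
    params = torch.cat([x.view(-1) for x in model_b.linear.parameters()])
    loss_Lasso = loss + lamb * torch.norm(params, p=1)
    loss_Lasso.backward()
    optimizer.step()
    optimizer.zero_grad()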
https://stackoverflow.com/questions/65575371/
Conversion from PyTorch to Core ML for element-wise maximum operation
I tried to convert a PyTorch model to Core ML with an element-wise maximum operation, based on coremltools.

With the torch.max operation, I got

ValueError: node input.2 (max) got 2 input(s), expected [3]

With the torch.maximum operation,

RuntimeError: PyTorch convert function for op 'maximum' not implemented.

Any ideas on how to solve this issue?
I encountered the same issue when converting PyTorch element-wise operations to a Core ML model, but solved it by adding support for torch.maximum and torch.minimum to the MIL converter's torch frontend.

# imports needed for the op registration (coremltools 4.x module layout)
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def maximum(context, node):
    inputs = _get_inputs(context, node)
    x = inputs[0]
    y = inputs[1]
    out = mb.maximum(x=x, y=y, name=node.name)
    context.add(out)
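For context, the registration just needs to run before the conversion is invoked; here is a minimal usage sketch (my_model and example_input are placeholders, not names from the original setup):

import torch
import coremltools as ct

traced = torch.jit.trace(my_model, example_input)  # my_model / example_input are placeholders
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
)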
https://stackoverflow.com/questions/65576885/
torch.nn.CrossEntropyLoss().ignore_index is crashing when importing transformers library
I am using the layoutlm github repo, which requires Python 3.6 and transformers 2.9.0. I created a conda env:

name: env_test
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.6
  - pip=20.3.3
  - pytorch=1.4.0
  - cudatoolkit=10.1
  - pip:
    - transformers==2.9.0

I have the following test.py code to reproduce the issue:

import sys
import torch
from torch.nn import CrossEntropyLoss
from transformers import (
    BertConfig,
    __version__
)

print(sys.version)
print(torch.__version__)
print(__version__)

CrossEntropyLoss().ignore_index
print("success!")

Importing the transformers library results in a segmentation fault (core dumped) when calling CrossEntropyLoss().ignore_index:

$python test.py
3.6.12 |Anaconda, Inc.| (default, Sep  8 2020, 23:10:56) [GCC 7.3.0]
1.4.0
2.9.0
Segmentation fault (core dumped)

I tried to investigate a bit but I don't really see where the problem is coming from:

gdb python
GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...done.
(gdb) r test.py
Starting program: /home/jupyter/.conda-env/env_test/bin/python test.py
warning: Error disabling address space randomization: Operation not permitted
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
3.6.12 |Anaconda, Inc.| (default, Sep  8 2020, 23:10:56) [GCC 7.3.0]
1.4.0
2.9.0

Program received signal SIGSEGV, Segmentation fault.
0x00007f97000055fb in ?? ()
(gdb) where
#0  0x00007f97000055fb in ??
() #1 0x00007f97f4755729 in void pybind11::cpp_function::initialize<void (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), void, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, pybind11::name, pybind11::scope, pybind11::sibling>(void (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) () from /home/jupyter/.conda-env/env_test/lib/python3.6/site-packages/torch/lib/libtorch_python.so #2 0x00007f97f436bca6 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /home/jupyter/.conda-env/env_test/lib/python3.6/site-packages/torch/lib/libtorch_python.so #3 0x000055fbadd73a14 in _PyCFunction_FastCallDict () at /tmp/build/80754af9/python_1599604603603/work/Objects/methodobject.c:231 #4 0x000055fbaddfba5c in call_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4851 #5 0x000055fbade1e25a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:3335 #6 0x000055fbaddf5c1b in _PyFunction_FastCall (globals=<optimized out>, nargs=1, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4933 #7 fast_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4968 #8 0x000055fbaddfbb35 in call_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4872 #9 0x000055fbade1e25a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:3335 #10 0x000055fbaddf5166 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4166 #11 0x000055fbaddf5e51 in fast_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4992 #12 0x000055fbaddfbb35 in call_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4872 #13 0x000055fbade1e25a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:3335 #14 0x000055fbaddf5166 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4166 #15 0x000055fbaddf5e51 in fast_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4992 #16 0x000055fbaddfbb35 in call_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4872 #17 0x000055fbade1e25a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:3335 #18 0x000055fbaddf5166 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4166 #19 0x000055fbaddf632c in _PyFunction_FastCallDict () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:5084 #20 0x000055fbadd73ddf in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1599604603603/work/Objects/abstract.c:2310 #21 0x000055fbadd78873 in _PyObject_Call_Prepend () at /tmp/build/80754af9/python_1599604603603/work/Objects/abstract.c:2373 #22 0x000055fbadd7381e in PyObject_Call () at /tmp/build/80754af9/python_1599604603603/work/Objects/abstract.c:2261 #23 0x000055fbaddcc88b in slot_tp_init () at /tmp/build/80754af9/python_1599604603603/work/Objects/typeobject.c:6420 #24 0x000055fbaddfbd97 in type_call () at 
/tmp/build/80754af9/python_1599604603603/work/Objects/typeobject.c:915 #25 0x000055fbadd73bfb in _PyObject_FastCallDict () at /tmp/build/80754af9/python_1599604603603/work/Objects/abstract.c:2331 #26 0x000055fbaddfbbae in call_function () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4875 #27 0x000055fbade1e25a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:3335 #28 0x000055fbaddf6969 in _PyEval_EvalCodeWithName (qualname=0x0, name=<optimized out>, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=<optimized out>, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0, locals=0x7f98035bf1f8, globals=0x7f98035bf1f8, _co=0x7f980357aae0) at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4166 #29 PyEval_EvalCodeEx () at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:4187 #30 0x000055fbaddf770c in PyEval_EvalCode (co=co@entry=0x7f980357aae0, globals=globals@entry=0x7f98035bf1f8, locals=locals@entry=0x7f98035bf1f8) at /tmp/build/80754af9/python_1599604603603/work/Python/ceval.c:731 #31 0x000055fbade77574 in run_mod () at /tmp/build/80754af9/python_1599604603603/work/Python/pythonrun.c:1025 #32 0x000055fbade77971 in PyRun_FileExFlags () at /tmp/build/80754af9/python_1599604603603/work/Python/pythonrun.c:978 #33 0x000055fbade77b73 in PyRun_SimpleFileExFlags () at /tmp/build/80754af9/python_1599604603603/work/Python/pythonrun.c:419 #34 0x000055fbade77c7d in PyRun_AnyFileExFlags () at /tmp/build/80754af9/python_1599604603603/work/Python/pythonrun.c:81 #35 0x000055fbade7b663 in run_file (p_cf=0x7fff210dc16c, filename=0x55fbaefa6dc0 L"test.py", fp=0x55fbaefda800) at /tmp/build/80754af9/python_1599604603603/work/Modules/main.c:340 #36 Py_Main () at /tmp/build/80754af9/python_1599604603603/work/Modules/main.c:811 #37 0x000055fbadd4543e in main () at /tmp/build/80754af9/python_1599604603603/work/Programs/python.c:69 #38 0x00007f9803fd6bf7 in __libc_start_main (main=0x55fbadd45350 <main>, argc=2, argv=0x7fff210dc378, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff210dc368) at ../csu/libc-start.c:310 #39 0x000055fbade24d0b in _start () at ../sysdeps/x86_64/elf/start.S:103 (gdb I am the following list of packages: _libgcc_mutex 0.1 main defaults _pytorch_select 0.2 gpu_0 defaults blas 1.0 mkl defaults ca-certificates 2020.12.8 h06a4308_0 defaults certifi 2020.12.5 py36h06a4308_0 defaults cffi 1.14.4 py36h261ae71_0 defaults chardet 4.0.0 pypi_0 pypi click 7.1.2 pypi_0 pypi cudatoolkit 10.1.243 h6bb024c_0 defaults cudnn 7.6.5 cuda10.1_0 defaults dataclasses 0.8 pypi_0 pypi filelock 3.0.12 pypi_0 pypi idna 2.10 pypi_0 pypi intel-openmp 2020.2 254 defaults joblib 1.0.0 pypi_0 pypi ld_impl_linux-64 2.33.1 h53a641e_7 defaults libedit 3.1.20191231 h14c3975_1 defaults libffi 3.3 he6710b0_2 defaults libgcc-ng 9.1.0 hdf63c60_0 defaults libstdcxx-ng 9.1.0 hdf63c60_0 defaults mkl 2020.2 256 defaults mkl-service 2.3.0 py36he8ac12f_0 defaults mkl_fft 1.2.0 py36h23d657b_0 defaults mkl_random 1.1.1 py36h0573a6f_0 defaults ncurses 6.2 he6710b0_1 defaults ninja 1.10.2 py36hff7bd54_0 defaults numpy 1.19.2 py36h54aff64_0 defaults numpy-base 1.19.2 py36hfa32c7d_0 defaults openssl 1.1.1i h27cfd23_0 defaults pip 20.3.3 py36h06a4308_0 defaults pycparser 2.20 py_2 defaults python 3.6.12 hcff3b4d_2 defaults pytorch 1.4.0 cuda101py36h02f0884_0 defaults readline 8.0 h7b6447c_0 defaults regex 2020.11.13 pypi_0 pypi requests 2.25.1 pypi_0 pypi sacremoses 0.0.43 pypi_0 pypi sentencepiece 0.1.94 
pypi_0 pypi
setuptools 51.0.0 py36h06a4308_2 defaults
six 1.15.0 py36h06a4308_0 defaults
sqlite 3.33.0 h62c20be_0 defaults
tk 8.6.10 hbc83047_0 defaults
tokenizers 0.7.0 pypi_0 pypi
tqdm 4.55.1 pypi_0 pypi
transformers 2.9.0 pypi_0 pypi
urllib3 1.26.2 pypi_0 pypi
wheel 0.36.2 pyhd3eb1b0_0 defaults
xz 5.2.5 h7b6447c_0 defaults
zlib 1.2.11 h7b6447c_3 defaults

What is responsible for this core dump (I have a VM with 30 GB of memory)? It seems to be related to transformers. Some dependency issue not caught by conda?

This piece of code seems to work with the latest version of transformers (4.1.1), but that is not compatible with layoutlm. Any suggestions?
It seems something was broken in layoutlm with PyTorch 1.4 (related issue). Switching to PyTorch 1.6 fixed the core dump, and the layoutlm code ran without any modification.
https://stackoverflow.com/questions/65582498/
Add an index-selected tensor to another tensor with overlapping indices in pytorch
This is a follow-up question to this question. I want to do exactly the same thing in PyTorch. Is it possible to do this? If yes, how?

import torch

image = torch.tensor([[246,  50, 101],
                      [116,   1, 113],
                      [187, 110,  64]])

iy = torch.tensor([[1, 0, 2],
                   [1, 0, 2],
                   [2, 2, 2]])

ix = torch.tensor([[0, 2, 1],
                   [1, 2, 0],
                   [0, 1, 2]])

warped_image = torch.zeros(size=image.shape)

I need something like torch.add.at(warped_image, (iy, ix), image) that gives the output as

[[  0.   0.  51.]
 [246. 116.   0.]
 [300. 211.  64.]]

Note that the indices at (0,1) and (1,1) point to the same location (0,2). So, I want warped_image[0,2] = image[0,1] + image[1,1] = 51.
What you are looking for is torch.Tensor.index_put_ with the accumulate argument set to True:

>>> warped_image = torch.zeros_like(image)
>>> warped_image.index_put_((iy, ix), image, accumulate=True)
tensor([[  0,   0,  51],
        [246, 116,   0],
        [300, 211,  64]])

Or, using the out-of-place version torch.index_put:

>>> torch.index_put(torch.zeros_like(image), (iy, ix), image, accumulate=True)
tensor([[  0,   0,  51],
        [246, 116,   0],
        [300, 211,  64]])
https://stackoverflow.com/questions/65584330/
How to convert PyTorch graph to ONNX and then inference from OpenCV?
I'm attempting to convert a PyTorch graph to ONNX via the torch.onnx.export function and then use the OpenCV functions blobFromImage, setInput, and forward to inference the converted graph. I think I'm on the right track but I keep running into errors and there are very few helpful examples of how to do this that I could find. I realize the general stack overflow policy is to post only relevant portions of code, however with the errors I'm getting it seems this is a case where the devil is in the details so I suspect I'll have to post a full example to make the cause of the errors clear. Here is my training net (pretty standard for MNIST): # MnistNet.py # Net Layout: # batchSize x 1 x 28 x 28 # conv1 Conv2d(1, 6, 5) # batchSize x 6 x 24 x 24 # relu(x) # max_pool2d(x, kernel_size=2) # batchSize x 6 x 12 x 12 # conv2 Conv2d(6, 16, 5) # batchSize x 16 x 8 x 8 # relu(x) # max_pool2d(x, kernel_size=2) # batchSize x 16 x 4 x 4 # view(-1, 16 * 4 * 4) Note: 16 * 4 * 4 = 256 # batchSize x 1 x 256 # fc1 Linear(256, 120) # relu(x) # batchSize x 1 x 120 # fc2 Linear(120, 84) # relu(x) # batchSize x 1 x 84 # fc3 Linear(84, 10) # batchSize x 1 x 10 import torch import torch.nn as nn import torch.nn.functional as F import torchvision class MnistNet(nn.Module): TRANSFORM = torchvision.transforms.Compose([ torchvision.transforms.Resize((28, 28)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize([0.5], [0.5]) ]) def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 6, 5) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(256, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) # end function def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), kernel_size=2) x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=2) x = x.view(-1, 256) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x # end function # end class Here is my training script (again, pretty standard for MNIST): # 1_train.py from MnistNet import MnistNet import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader import torchvision from termcolor import colored BATCH_SIZE = 64 NUM_EPOCHS = 10 GRAPH_NAME = 'MNIST.pt' def main(): trainDataset = torchvision.datasets.MNIST('built_in_mnist_download', train=True, transform=MnistNet.TRANSFORM, download=True) trainDataLoader = DataLoader(trainDataset, batch_size=BATCH_SIZE, shuffle=True) # declare net, loss function, and optimizer mnistNet = MnistNet() lossFunction = nn.CrossEntropyLoss() optimizer = optim.Adam(mnistNet.parameters()) # get device (cuda or cpu) if torch.cuda.is_available(): device = torch.device('cuda') print(colored('using cuda', 'green')) else: device = torch.device('cpu') print(colored('GPU does not seem to be available, using CPU', 'red')) # end if # set network to device mnistNet.to(device) # set network to train mode mnistNet.train() print('beginning training . . .') # for each epoch . . . for epoch in range(1, NUM_EPOCHS+1): # variables to calculate loss and accuracy within the epoch epochLosses = [] epochAccuracies = [] # for each batch . . . 
for i, element in enumerate(trainDataLoader): # break out the input images and labels, note these are Tensors inputImages, labels = element inputImages = inputImages.to(device) labels = labels.to(device) # clear gradients from the previous step optimizer.zero_grad() # get net output outputs = mnistNet(inputImages) # calculate loss loss = lossFunction(outputs, labels) # call backward() to compute gradients loss.backward() # update parameters using gradients optimizer.step() # append the current classification loss to the list of epoch losses epochLosses.append(loss.item()) # calculate current classification accuracy # get the highest scoring classification for each prediction _, predictions = torch.max(outputs.data, 1) # number of labels and predictions should always be the same, log an error if this is not the case if labels.size(0) != predictions.size(0): print(colored('ERROR: labels.size(0) != predictions.size(0)', 'red')) # end if # determine the number of correct predictions for the current batch correctPredictions = 0 for j in range(len(labels)): if predictions[j].item() == labels[j].item(): correctPredictions += 1 # end if # end for # append the current batch accuracy to the list of accuracies epochAccuracies.append(correctPredictions / labels.size(0)) # end for # calculate epoch loss and accuracy from the respective lists epochLoss = sum(epochLosses) / len(epochLosses) epochAccuracy = sum(epochAccuracies) / len(epochAccuracies) print('epoch ' + str(epoch) + ', epochLoss = ' + '{:.4f}'.format(epochLoss) + ', epochAccuracy = ' + '{:.4f}'.format(epochAccuracy * 100) + '%') # end for print('finished training') # save the model torch.save(mnistNet.state_dict(), GRAPH_NAME) print('saved graph as ' + str(GRAPH_NAME)) # end function if __name__ == '__main__': main() Here is my best attempt so far at a script to convert a saved graph from PyTorch to ONNX (I'm not sure if this is correct, I can at least say it runs without error): # 3_convert_graph_to_onnx.py from MnistNet import MnistNet import torch GRAPH_NAME = 'MNIST.pt' ONNX_GRAPH_NAME = 'MNIST.onnx' def main(): net = MnistNet() net.load_state_dict(torch.load(GRAPH_NAME)) net.eval() # make a dummy input with a batch size of 1, 1 channel, 28 x 28 dummyInput = torch.randn(10, 1, 28, 28) torch.onnx.export(net, dummyInput, ONNX_GRAPH_NAME, verbose=True) # end function if __name__ == '__main__': main() Here is my attempt to inference the ONNX graph with OpenCV (Note that PyTorch is included, but is only used to load the test MNIST dataset, and the images are converted to OpenCV format before inferencing): # 4_onnx_opencv_inf.py from MnistNet import MnistNet import torchvision import cv2 import numpy as np from termcolor import colored ONNX_GRAPH_NAME = 'MNIST.onnx' def main(): testDataset = torchvision.datasets.MNIST('built_in_mnist_download', train=False, transform=MnistNet.TRANSFORM, download=True) labels = [ '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' ] net = cv2.dnn.readNetFromONNX(ONNX_GRAPH_NAME) # test on 3 images for i in range(3): # get PyTorch tensor image and ground truth index from dataset ptImage, gndTrIdx = testDataset[i] # convert to PIL image pilImage = torchvision.transforms.ToPILImage()(ptImage) # convert to OpenCV image, would convert RGB to BGR here if image was color openCvImage = np.array(pilImage) gndTr = labels[gndTrIdx] # can show OpenCV image here if desired # cv2.imshow('openCvImage', openCvImage) # cv2.waitKey() blob = cv2.dnn.blobFromImage(image=openCvImage, scalefactor=1.0/255.0, size=(64, 64)) 
net.setInput(blob) preds = net.forward() predIdx = np.array(preds)[0].argmax() prediction = str(predIdx) if prediction == gndTr: print(colored('i = ' + str(i) + ', predIdx = ' + str(predIdx) + ', gndTrIdx = ' + str(gndTrIdx) + ', correct answer', 'green')) else: print(colored('i = ' + str(i) + ', predIdx = ' + str(predIdx) + ', gndTrIdx = ' + str(gndTrIdx) + ', incorrect answer', 'red')) # end if # end for # end function if __name__ == '__main__': main() Currently this final script crashes with this error: $ python3 4_onnx_opencv_inf.py [ERROR:0] global /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/dnn.cpp (3441) getLayerShapesRecursively OPENCV/DNN: [Reshape]:(18): getMemoryShapes() throws exception. inputs=1 outputs=1/1 blobs=0 [ERROR:0] global /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/dnn.cpp (3447) getLayerShapesRecursively input[0] = [ 1 16 13 13 ] [ERROR:0] global /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/dnn.cpp (3451) getLayerShapesRecursively output[0] = [ 1 256 ] [ERROR:0] global /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/dnn.cpp (3457) getLayerShapesRecursively Exception message: OpenCV(4.4.0) /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/layers/reshape_layer.cpp:154: error: (-1:Backtrace) Can't infer a dim denoted by -1 in function 'computeShapeByReshapeMask' Traceback (most recent call last): File "4_onnx_opencv_inf.py", line 54, in <module> main() File "4_onnx_opencv_inf.py", line 38, in main preds = net.forward() cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-99ib2vsi/opencv/modules/dnn/src/layers/reshape_layer.cpp:154: error: (-1:Backtrace) Can't infer a dim denoted by -1 in function 'computeShapeByReshapeMask' I'm not really sure what to do next based on this error, can anybody please advise on this? I suspect that I'm at least doing the procedure generally correctly and missing a few small details.
I was using the wrong size in the ONNX inference script. In 4_onnx_opencv_inf.py, changing:

blob = cv2.dnn.blobFromImage(image=openCvImage, scalefactor=1.0/255.0, size=(64, 64))

to

blob = cv2.dnn.blobFromImage(image=openCvImage, scalefactor=1.0/255.0, size=(28, 28))

makes it run (I'm using Ubuntu 20.04 and PyTorch 1.7.0); however, the accuracy is worse. With regular PyTorch inferencing as above (2nd script) I'm getting 98.5% accuracy; with the OpenCV ONNX version I'm getting 95% accuracy. I suspect the difference is due to the parameters in cv2.dnn.blobFromImage not being set to handle the normalization correctly, but that is a different post entirely.
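As a follow-up to that last remark: the training pipeline normalizes with mean 0.5 and std 0.5 on [0, 1] inputs, which is equivalent to (x - 127.5) / 127.5 on raw pixel values. Since blobFromImage subtracts the mean before applying the scalefactor, a sketch that should reproduce the same preprocessing (untested here) is:

blob = cv2.dnn.blobFromImage(image=openCvImage,
                             scalefactor=1.0/127.5,  # 1 / (std * 255)
                             mean=127.5,             # mean * 255
                             size=(28, 28))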
https://stackoverflow.com/questions/65587336/
Pytorch, how to extend a tensor
I want to extend a tensor in PyTorch in the following way:

Let C be a 3x4 tensor with requires_grad = True. I want to have a new C which is a 3x5 tensor and C = [C, ones(3,1)] (the last column is a ones-vector, the others are the old C).

Moreover, I need requires_grad = True for the new C.

What is an efficient way to do this?
You could do something like this - c = torch.rand((3,4), requires_grad=True) # a random matrix c of shape 3x4 ones = torch.ones(c.shape[0], 1) # creating a vector of 1's using shape of c # merging vector of ones with 'c' along dimension 1 (columns) c = torch.cat((c, ones), 1) # c.requires_grad will still return True...
https://stackoverflow.com/questions/65590589/
Cannot import "BasicBlock" from torchvision.models.resnet
I am trying to import the class BasicBlock from torchvision.models.resnet by doing this

from torchvision.models.resnet import *

It is giving no error, but when I am trying to use the BasicBlock class in my code (which is supposed to be imported already), I am getting the error that

NameError: name 'BasicBlock' is not defined

even though BasicBlock is present in torchvision.models.resnet. But it gives no error when I import it like this

from torchvision.models.resnet import BasicBlock

and then use it in my code. Why am I getting this error?
BasicBlock is indeed defined; however, it is not exported by the module: see here the definition of __all__. So torchvision/models/resnet.py only exports these: ResNet, resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d, resnext101_32x8d, wide_resnet50_2, and wide_resnet101_2.
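To illustrate the mechanism with a toy module (hypothetical file name, just for demonstration):

# mymodule.py
__all__ = ['foo']

def foo(): ...
def bar(): ...

# elsewhere
from mymodule import *    # only foo is brought into scope
from mymodule import bar  # explicit imports bypass __all__, so this works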
https://stackoverflow.com/questions/65591623/
Why is the output of the neural network the same for all samples
I'm using Pytorch to make a regression model (neural network). Just train and test it on the same sample (because i'm just learning how to build a neural network). The data is the fish market dataset from kaggle: https://www.kaggle.com/aungpyaeap/fish-market Small sample from the training Tensor: tensor([[ 0.0000, 23.2000, 25.4000, 30.0000, 11.5200, 4.0200], [ 0.0000, 24.0000, 26.3000, 31.2000, 12.4800, 4.3056], [ 0.0000, 23.9000, 26.5000, 31.1000, 12.3778, 4.6961], [ 0.0000, 26.3000, 29.0000, 33.5000, 12.7300, 4.4555], [ 0.0000, 26.5000, 29.0000, 34.0000, 12.4440, 5.1340], [ 0.0000, 26.8000, 29.7000, 34.7000, 13.6024, 4.9274], [ 0.0000, 26.8000, 29.7000, 34.5000, 14.1795, 5.2785], [ 0.0000, 27.6000, 30.0000, 35.0000, 12.6700, 4.6900], [ 0.0000, 27.6000, 30.0000, 35.1000, 14.0049, 4.8438], [ 0.0000, 28.5000, 30.7000, 36.2000, 14.2266, 4.9594], [ 0.0000, 28.4000, 31.0000, 36.2000, 14.2628, 5.1042], [ 0.0000, 28.7000, 31.0000, 36.2000, 14.3714, 4.8146], [ 0.0000, 29.1000, 31.5000, 36.4000, 13.7592, 4.3680], Small Sample from the target tensor: tensor([[ 242.0000], [ 290.0000], [ 340.0000], [ 363.0000], [ 430.0000], [ 450.0000], [ 500.0000], [ 390.0000], [ 450.0000], [ 500.0000], [ 475.0000], [ 500.0000], [ 500.0000], [ 340.0000], [ 600.0000], [ 600.0000], [ 700.0000], [ 700.0000], The neural network: class NeuralNetFish(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(6, 10) self.tanh1 = nn.Tanh() self.fc2 = nn.Linear(10, 10) self.tanh2 = nn.Tanh() self.fc3 = nn.Linear(10, 1) def forward(self, x): out = self.fc1(x) out = self.tanh1(out) out = self.fc2(out) out = self.tanh2(out) out = self.fc3(out) return out model = NeuralNetFish() criteron = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.02) for i in range(10000): output = model(fish_tensor) loss = criteron(output, weight_tensor) optimizer.zero_grad() loss.backward() optimizer.step() if i % 1000 == 0: print("Epoch:", i, " --- Loss:", loss.item()) OUTPUT: Epoch: 0 --- Loss: 286071.78125 Epoch: 1000 --- Loss: 127342.515625 Epoch: 2000 --- Loss: 127342.515625 Epoch: 3000 --- Loss: 127342.515625 Epoch: 4000 --- Loss: 127342.515625 Epoch: 5000 --- Loss: 127342.515625 Epoch: 6000 --- Loss: 127342.515625 Epoch: 7000 --- Loss: 127342.515625 Epoch: 8000 --- Loss: 127342.515625 Epoch: 9000 --- Loss: 127342.515625 And i get the same output for all the samples.
Scale your inputs. Your inputs are roughly in the range [0, 40]; you take linear combinations of these numbers and then apply the tanh function. Most likely your outputs will be in the "saturated" region of the tanh, and thus the gradients are roughly zero. Your neurons are dead. To overcome this issue:

Scale your inputs: compute the mean and std over the training set (mean and std are 6-vectors in your case). Subtract the mean and divide by the std to have your inputs distributed roughly as N(0, 1). A minimal sketch of this step is shown below.
Replace the tanh activations with nn.PReLU.
You should tune your learning-rate to be in accordance with the batch size (in your case batch size = entire dataset).
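The sketch for the scaling step, assuming fish_tensor holds the training features as in the question:

mu = fish_tensor.mean(dim=0, keepdim=True)   # per-feature mean, shape (1, 6)
std = fish_tensor.std(dim=0, keepdim=True)   # per-feature std, shape (1, 6)
fish_tensor = (fish_tensor - mu) / std       # inputs now roughly N(0, 1) per feature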
https://stackoverflow.com/questions/65592649/
How to convert dataset of images to tensor?
I've got a dataset of images that looks like this:

array([[[[0.35980392, 0.26078431, 0.14313725],
         [0.38137255, 0.26470588, 0.15196078],
         [0.51960784, 0.3745098 , 0.26176471],
         ...,
         [0.34313725, 0.22352941, 0.15      ],
         [0.30784314, 0.2254902 , 0.15686275],
         [0.28823529, 0.22843137, 0.16862745]],

        [[0.38627451, 0.28235294, 0.16764706],
         [0.45098039, 0.32843137, 0.21666667],
         [0.62254902, 0.47254902, 0.36470588],
         ...,
         [0.34607843, 0.22745098, 0.15490196],
         [0.30686275, 0.2245098 , 0.15588235],
         [0.27843137, 0.21960784, 0.16176471]],

        [[0.41568627, 0.30098039, 0.18431373],
         [0.51862745, 0.38529412, 0.27352941],
         [0.67745098, 0.52058824, 0.40980392],
         ...,
         [0.34901961, 0.22941176, 0.15588235],
         [0.29901961, 0.21666667, 0.14901961],
         [0.26078431, 0.20098039, 0.14313725]],

        ...,

What I need is to convert it to a tensor so that I can pass it to a CNN. I'm trying to do it like this:

from torchvision import transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

How can I apply this transform to my dataset? Thanks for any help.
You probably want to create a dataloader. You will need a class which iterates over your dataset; you can do that like this:

import json
import torch
import torchvision.transforms as transforms

class YourDataset(torch.utils.data.Dataset):
    def __init__(self):
        # load your dataset (however you want; this example has the dataset stored in a json file)
        with open(<dataset-path>, "r") as f:
            self.dataset = json.load(f)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        data, label = sample[0], sample[1]
        transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        return transform(data), torch.tensor(label)

    def __len__(self):
        return len(self.dataset)

Now you can create a dataloader:

train_set = YourDataset()
train_dataloader = torch.utils.data.DataLoader(
    train_set,
    batch_size=64,
    num_workers=1,
    shuffle=True,
)

And now you can iterate over the dataloader in your train loop:

for samples, labels in train_dataloader:
    .
    .
    .
    # samples will hold N samples of your dataset where N is the batchsize

If you need more explanation, take a look at PyTorch's documentation on this topic.
https://stackoverflow.com/questions/65594383/
How can I install Pytorch version 1.1 on Windows x64?
I need to install an older version of PyTorch, version 1.1, on my Windows 10 x64 machine. The instructions here tell me to download the wheel and install it. However, pip refuses to install this wheel, claiming it is not supported on my platform. I suppose that is because, judging by the name of the wheel (torch-1.1.0-cp37-cp37m-win_amd64.whl), it is meant for AMD, and I'm on an Intel i7. I found no better matching wheel on the PyTorch site. What is the easiest way to install PyTorch 1.1 on my machine?
You can use conda:

conda install pytorch=1.1.0

I've checked the existing versions with conda search -f pytorch, and 1.1.0 (and many others too) is available.
https://stackoverflow.com/questions/65595279/
How to calculate KL Divergence between two batches of distributions in PyTorch?
Given a batch of distributions, which is represented as a pytorch tensor:

A = torch.tensor([[0., 0., 0., 0., 1., 5., 1., 2.],
                  [0., 0., 1., 0., 4., 2., 1., 1.],
                  [0., 0., 1., 1., 0., 5., 1., 1.],
                  [0., 1., 1., 0., 2., 3., 1., 1.],
                  [0., 0., 2., 1., 3., 1., 1., 0.],
                  [0., 0., 2., 0., 5., 0., 1., 0.],
                  [0., 2., 1., 4., 0., 0., 1., 0.],
                  [0., 0., 2., 4., 1., 0., 1., 0.]], device='cuda:0')

A is a batch of distributions, which consists of eight distributions. Now, given another batch of distributions B:

B = torch.tensor([[0., 0., 1., 4., 2., 1., 1., 0.],
                  [0., 0., 0., 5., 1., 2., 1., 0.],
                  [0., 0., 0., 4., 2., 3., 0., 0.],
                  [0., 0., 1., 7., 0., 0., 1., 0.],
                  [0., 0., 1., 2., 4., 0., 1., 1.],
                  [0., 0., 1., 3., 1., 3., 0., 0.],
                  [0., 0., 1., 4., 1., 0., 2., 0.],
                  [1., 0., 1., 5., 0., 1., 0., 0.],
                  [0., 1., 5., 1., 0., 0., 1., 0.],
                  [0., 0., 3., 2., 2., 0., 1., 0.],
                  [0., 2., 4., 0., 1., 0., 1., 0.],
                  [1., 0., 4., 1., 1., 1., 0., 0.]], device='cuda:0')

B has 12 distributions. I want to calculate the KL divergence between each distribution in A and each distribution in B, and then obtain a KL distance matrix whose shape is 8*12. I know how to use a loop structure and torch.nn.functional.kl_div() to achieve this. Are there any other methods in pytorch to implement it without using a for-loop?

Here is my implementation using a for-loop:

p_1 = F.softmax(A, dim = -1)
p_2 = F.softmax(B, dim = -1)
C = torch.empty(size = (A.shape[0], B.shape[0]), dtype = torch.float)
for i,a in enumerate(p_1):
    for j,b in enumerate(p_2):
        C[i][j] = torch.nn.functional.kl_div(a.log(), b)
print(C)

Output is:

tensor([[0.4704, 0.5431, 0.3422, 0.6284, 0.3985, 0.2003, 0.4925, 0.5739, 0.5793,
         0.3992, 0.5007, 0.4934],
        [0.3416, 0.4518, 0.2950, 0.5263, 0.0218, 0.2254, 0.3786, 0.4747, 0.3626,
         0.1823, 0.2960, 0.2937],
        [0.3845, 0.4306, 0.2722, 0.5022, 0.4769, 0.1500, 0.3964, 0.4556, 0.4609,
         0.3396, 0.4076, 0.3933],
        [0.2862, 0.3752, 0.2116, 0.4520, 0.1307, 0.1116, 0.3102, 0.3990, 0.2869,
         0.1464, 0.2164, 0.2225],
        [0.1829, 0.2674, 0.1763, 0.3227, 0.0244, 0.1481, 0.2067, 0.2809, 0.1675,
         0.0482, 0.1271, 0.1210],
        [0.4359, 0.5615, 0.4427, 0.6268, 0.0325, 0.4160, 0.4749, 0.5774, 0.3492,
         0.2093, 0.3015, 0.3014],
        [0.0235, 0.0184, 0.0772, 0.0286, 0.3462, 0.1461, 0.0142, 0.0162, 0.3524,
         0.1824, 0.2844, 0.2988],
        [0.0097, 0.0171, 0.0680, 0.0284, 0.2517, 0.1374, 0.0082, 0.0148, 0.2403,
         0.1058, 0.2100, 0.1978]], device='cuda:0')
Looking at nn.KLDivLoss, the formula for computing the KL divergence is

kl = torch.mean(b * (torch.log(b) - a))

where a is expected to be in log-space, matching the a.log() in your loop. We can use broadcasting to compute the KL efficiently:

# avoid NaNs from log(0)
lB = B.clone()
lB[B==0] = 1.

# do the computation efficiently
C = (B[None, ...] * (torch.log(lB[None, ...]) - A[:, None, :])).mean(dim=-1)

Come to think of it, I'm not sure what you are asking makes much sense. Your A and B tensors are filled with numbers, but they do not represent distributions (they do not sum to 1). Please consider carefully what you are trying to do here.
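As a sanity check, the broadcasted result can be compared against the nested loop from the question. Note that here the first argument is put in log-space (log of the softmax), since F.kl_div expects log-probabilities as input:

import torch
import torch.nn.functional as F

p_1 = F.softmax(A, dim=-1)
p_2 = F.softmax(B, dim=-1)

# broadcasted: shape (8, 12), entry [i, j] matches F.kl_div(p_1[i].log(), p_2[j])
C_vec = (p_2[None, ...] * (torch.log(p_2[None, ...]) - torch.log(p_1)[:, None, :])).mean(dim=-1)

C_loop = torch.stack([torch.stack([F.kl_div(a.log(), b) for b in p_2]) for a in p_1])
assert torch.allclose(C_vec, C_loop, atol=1e-6)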
https://stackoverflow.com/questions/65596026/
LSTM for time-series prediction failing to learn (PyTorch)
I'm currently working on building an LSTM network to forecast time-series data using PyTorch. I tried to share all the code pieces that I thought would be helpful, but please feel free to let me know if there's anything further I can provide. I added some comments at the end of the post regarding what the underlying issue might be. From the univariate time-series data indexed by date, I created 3 date features and split the data into training and validation sets as below. # X_train weekday monthday hour timestamp 2015-01-08 17:00:00 3 8 17 2015-01-12 19:30:00 0 12 19 2014-12-01 15:30:00 0 1 15 2014-07-26 09:00:00 5 26 9 2014-10-17 20:30:00 4 17 20 ... ... ... ... 2014-08-29 06:30:00 4 29 6 2014-10-13 14:30:00 0 13 14 2015-01-03 02:00:00 5 3 2 2014-12-06 16:00:00 5 6 16 2015-01-06 20:30:00 1 6 20 8256 rows × 3 columns # y_train value timestamp 2015-01-08 17:00:00 17871 2015-01-12 19:30:00 20321 2014-12-01 15:30:00 16870 2014-07-26 09:00:00 11209 2014-10-17 20:30:00 26144 ... ... 2014-08-29 06:30:00 9008 2014-10-13 14:30:00 17698 2015-01-03 02:00:00 12850 2014-12-06 16:00:00 18277 2015-01-06 20:30:00 19640 8256 rows × 1 columns # X_val weekday monthday hour timestamp 2015-01-08 07:00:00 3 8 7 2014-10-13 22:00:00 0 13 22 2014-12-07 01:30:00 6 7 1 2014-10-14 17:30:00 1 14 17 2014-10-25 09:30:00 5 25 9 ... ... ... ... 2014-09-26 12:30:00 4 26 12 2014-10-08 16:00:00 2 8 16 2014-12-03 01:30:00 2 3 1 2014-09-11 08:00:00 3 11 8 2015-01-15 10:00:00 3 15 10 2064 rows × 3 columns # y_val value timestamp 2014-09-13 13:00:00 21345 2014-10-28 20:30:00 23210 2015-01-21 17:00:00 17001 2014-07-20 10:30:00 13936 2015-01-29 02:00:00 3604 ... ... 2014-11-17 11:00:00 15247 2015-01-14 00:00:00 10584 2014-09-02 13:00:00 17698 2014-08-31 13:00:00 16652 2014-08-30 12:30:00 15775 2064 rows × 1 columns Then, I transformed the values in the datasets by using MinMaxScaler from the sklearn library. scaler = MinMaxScaler() X_train_arr = scaler.fit_transform(X_train) X_val_arr = scaler.transform(X_val) y_train_arr = scaler.fit_transform(y_train) y_val_arr = scaler.transform(y_val) After converting these NumPy arrays into PyTorch Tensors, I created iterable datasets using TensorDataset and DataLoader classes provided by PyTorch. 
from torch.utils.data import TensorDataset, DataLoader from torch.autograd import Variable train_features = torch.Tensor(X_train_arr) train_targets = torch.Tensor(y_train_arr) val_features = torch.Tensor(X_val_arr) val_targets = torch.Tensor(y_val_arr) train = TensorDataset(train_features, train_targets) train_loader = DataLoader(train, batch_size=64, shuffle=False) val = TensorDataset(val_features, val_targets) val_loader = DataLoader(train, batch_size=64, shuffle=False) Then, I defined my LSTM Model and train_step functions as follows: class LSTMModel(nn.Module): def __init__(self, input_dim, hidden_dim, layer_dim, output_dim): super(LSTMModel, self).__init__() # Hidden dimensions self.hidden_dim = hidden_dim # Number of hidden layers self.layer_dim = layer_dim # Building your LSTM # batch_first=True causes input/output tensors to be of shape # (batch_dim, seq_dim, feature_dim) self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True) # Readout layer self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x): # Initialize hidden state with zeros h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # Initialize cell state c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_() # We need to detach as we are doing truncated backpropagation through time (BPTT) # If we don't, we'll backprop all the way to the start even after going through another batch out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) # Index hidden state of last time step out = self.fc(out[:, -1, :]) return out def make_train_step(model, loss_fn, optimizer): # Builds function that performs a step in the train loop def train_step(x, y): # Sets model to TRAIN mode model.train() # Makes predictions yhat = model(x) # Computes loss loss = loss_fn(y, yhat) # Computes gradients loss.backward() # Updates parameters and zeroes gradients optimizer.step() optimizer.zero_grad() # Returns the loss return loss.item() # Returns the function that will be called inside the train loop return train_step Finally, I start training my LSTM model in mini-batches with AdamOptimizer for 20 epochs, which is already long enough to see the model is not learning. 
import torch.optim as optim input_dim = n_features hidden_dim = 64 layer_dim = 3 output_dim = 1 model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim) criterion = nn.MSELoss(reduction='mean') optimizer = optim.Adam(model.parameters(), lr=1e-2) train_losses = [] val_losses = [] train_step = make_train_step(model, criterion, optimizer) n_epochs = 20 device = 'cuda' if torch.cuda.is_available() else 'cpu' for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.unsqueeze(dim=0).to(device) y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.unsqueeze(dim=0).to(device) y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") And this is the output: C:\Users\VS32XI\Anaconda3\lib\site-packages\torch\nn\modules\loss.py:446: UserWarning: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([64, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) [1] Training loss: 0.0505 Validation loss: 0.0315 [2] Training loss: 0.0317 Validation loss: 0.0315 [3] Training loss: 0.0317 Validation loss: 0.0315 [4] Training loss: 0.0317 Validation loss: 0.0315 [5] Training loss: 0.0317 Validation loss: 0.0315 [6] Training loss: 0.0317 Validation loss: 0.0315 [7] Training loss: 0.0317 Validation loss: 0.0315 [8] Training loss: 0.0317 Validation loss: 0.0315 [9] Training loss: 0.0317 Validation loss: 0.0315 [10] Training loss: 0.0317 Validation loss: 0.0315 [11] Training loss: 0.0317 Validation loss: 0.0315 [12] Training loss: 0.0317 Validation loss: 0.0315 [13] Training loss: 0.0317 Validation loss: 0.0315 [14] Training loss: 0.0317 Validation loss: 0.0315 [15] Training loss: 0.0317 Validation loss: 0.0315 [16] Training loss: 0.0317 Validation loss: 0.0315 [17] Training loss: 0.0317 Validation loss: 0.0315 [18] Training loss: 0.0317 Validation loss: 0.0315 [19] Training loss: 0.0317 Validation loss: 0.0315 [20] Training loss: 0.0317 Validation loss: 0.0315 Note 1: Looking at the warning given, I'm not sure if that's the real reason why the model is not learning well. After all, I'm trying to predict the future values in the time-series data; therefore, 1 would be a plausible output dimension. Note 2: To train the model in mini-batches, I relied on the class DataLoader. When iterating over the X and Y batches in both train and validation DataLoaders, the dimensions of x_batches were 2, while the model expected 3. So, I used PyTorch's unsqueeze function to match the expected dimension as in x_batch.unsqueeze(dim=0) . I'm not sure if this is how I should have gone about it, which could also be the issue.
Once I used Tensor View to reshape the mini-batches for the features in the training and the validation set, the issue was resolved. As a side note, view() enables fast and memory-efficient reshaping, slicing, and element-wise operations by avoiding an explicit data copy.

It turned out that in the earlier implementation torch.unsqueeze() did not reshape the batches into tensors with the dimensions (batch size, timesteps, number of features). Instead, the function unsqueeze(dim=0) returns a new tensor with a singleton dimension inserted at the 0th index.

So, the mini-batches for the feature sets are shaped as follows:

x_batch = x_batch.view([batch_size, -1, n_features]).to(device)

Then, the new training loop becomes:

for epoch in range(n_epochs):
    batch_losses = []
    for x_batch, y_batch in train_loader:
        x_batch = x_batch.view([batch_size, -1, n_features]).to(device)  # <---
        y_batch = y_batch.to(device)
        loss = train_step(x_batch, y_batch)
        batch_losses.append(loss)
    training_loss = np.mean(batch_losses)
    train_losses.append(training_loss)

    with torch.no_grad():
        batch_val_losses = []
        for x_val, y_val in val_loader:
            x_val = x_val.view([batch_size, -1, n_features]).to(device)  # <---
            y_val = y_val.to(device)
            model.eval()
            yhat = model(x_val)
            val_loss = criterion(y_val, yhat).item()
            batch_val_losses.append(val_loss)
        validation_loss = np.mean(batch_val_losses)
        val_losses.append(validation_loss)

    print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}")

Here's the output:

[1] Training loss: 0.0235    Validation loss: 0.0173
[2] Training loss: 0.0149    Validation loss: 0.0086
[3] Training loss: 0.0083    Validation loss: 0.0074
[4] Training loss: 0.0079    Validation loss: 0.0069
[5] Training loss: 0.0076    Validation loss: 0.0069
...
[96] Training loss: 0.0025   Validation loss: 0.0028
[97] Training loss: 0.0024   Validation loss: 0.0027
[98] Training loss: 0.0027   Validation loss: 0.0033
[99] Training loss: 0.0027   Validation loss: 0.0030
[100] Training loss: 0.0023  Validation loss: 0.0028

Recently, I've decided to put together the things I had learned and the things I would have liked to know earlier. If you'd like to have a look, you can find the links down below. I hope you'll find it useful. Feel free to comment or reach out to me if you agree or disagree with any of the remarks I made above.

Building RNN, LSTM, and GRU for time series using PyTorch
Predicting future values with RNN, LSTM, and GRU using PyTorch
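A quick shape comparison illustrating the difference that caused the bug:

x = torch.rand(64, 3)          # one mini-batch of 64 samples with 3 features
x.unsqueeze(dim=0).shape       # torch.Size([1, 64, 3]) -> a batch of 1 with 64 "timesteps"
x.view([64, -1, 3]).shape      # torch.Size([64, 1, 3]) -> a batch of 64 with seq_len 1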
https://stackoverflow.com/questions/65596522/
mask list/tensor with multiple conditions?
The following code masks fine:

mask = targets >= 0
targets = targets[mask]

However, when I try masking with two conditions, it gives an error of

RuntimeError: Boolean value of Tensor with more than one value is ambiguous

mask = (targets >= 0 and targets <= 5)
targets = targets[mask]

Is there a way to do this?
You are making a mistake with the brackets. Put brackets around each of the conditions so that each comparison is evaluated as an individual boolean array.

targets = np.random.randint(0,10,(10,))
mask = (targets>=0) & (targets<=5) #<----------
print(mask)
targets[mask]

[ True False  True False False  True  True  True False  True]
array([4, 1, 3, 1, 5, 3])

You can create some complex logic using multiple masks and then directly index an array with them. Example - XNOR can be written as ~(mask1 ^ mask2)
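The same fix works directly with PyTorch tensors, matching the code in the question:

targets = torch.randint(0, 10, (10,))
mask = (targets >= 0) & (targets <= 5)
targets = targets[mask]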
https://stackoverflow.com/questions/65605321/
Is there a way to force any function to not be verbose in Python?
While loading a pretrained model with pytorch via model.load_state_dict(torch.load(MODEL_PATH)) the console is flooded with output containing information about the model (which is annoying). Afaik there is no verbosity option, neither in model.load_state_dict nor in torch.load. Please let me know if I overlooked this parameter. However, this led me to the question if there is any general way to force a function to be non verbose. Maybe something like: with os.nonverbose(): model.load_state_dict(torch.load(MODEL_PATH)) Any ideas?
As @triplee commented, most libraries would use Python logging, which can be modified extensively. I haven't worked with PyTorch before, but from https://github.com/pytorch/vision/issues/330 it looks like it's actually using print (which is horrible if it is, but okay).

In general, you can, however, suppress the stdout output of anything by redirecting stdout. Here's a good link https://wrongsideofmemphis.com/2010/03/01/store-standard-output-on-a-variable-in-python/

In your question you ask if this can be done with a context manager. I don't see why not, and it seems appropriate given that you want to reset stdout after the function call. Something like this:

from io import StringIO  # Python3
import sys

class SilencedStdOut:
    def __enter__(self):
        self.old_stdout = sys.stdout
        self.result = StringIO()
        sys.stdout = self.result

    def __exit__(self, *args, **kwargs):
        sys.stdout = self.old_stdout
        result_string = self.result.getvalue()  # use if you want or discard.

If you only ever want to suppress a single function and never a block of code, however, then a decorator should work fine too.
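Usage with the loading call from the question would then look like the sketch below. Note that the standard library also ships contextlib.redirect_stdout, which does essentially the same thing:

with SilencedStdOut():
    model.load_state_dict(torch.load(MODEL_PATH))

# or, with the standard library:
import contextlib, io

with contextlib.redirect_stdout(io.StringIO()):
    model.load_state_dict(torch.load(MODEL_PATH))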
https://stackoverflow.com/questions/65608502/
Overriding stem method in video classification model to change filter channel
I am trying to use torchvision's video classification models (R3D, R(2+1)D, MC18), but my data is single-channel (grayscale video), and these models use 3-channel input. In that case I am trying to override the stem class; can someone please confirm if what I am doing is correct?

For R3D18 and MC18, stem=BasicStem

class BasicStemModified(nn.Sequential):
    def __init__(self):
        super(BasicStemModified, self).__init__(
            nn.Conv3d(1, 45, kernel_size=(7, 7, 1),  # changing filter to 1 channel input
                      stride=(2, 2, 1), padding=(3, 3, 0),
                      bias=False),
            nn.BatchNorm3d(45),
            nn.ReLU(inplace=True),
            nn.Conv3d(45, 64, kernel_size=(1, 1, 3),
                      stride=(1, 1, 1), padding=(0, 0, 1),
                      bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True))

model = torchvision.models.video.mc3_18(pretrained=False)
model.stem = BasicStemModified()  # here assigning the modified stem
model.fc = nn.Sequential(
    nn.Dropout(0.3),
    nn.Linear(model.fc.in_features, num_classes)
)
model.to('cuda:0')

For R(2+1)D:

# For R(2+1)D model `stem=R2Plus1dStem`
class R2Plus1dStemModified(nn.Sequential):
    """R(2+1)D stem is different than the default one as it uses separated 3D convolution
    """
    def __init__(self):
        super(R2Plus1dStemModified, self).__init__(
            nn.Conv3d(3, 45, kernel_size=(1, 7, 7),  # changing filter to 1 channel input
                      stride=(1, 2, 2), padding=(0, 3, 3),
                      bias=False),
            nn.BatchNorm3d(45),
            nn.ReLU(inplace=True),
            nn.Conv3d(45, 64, kernel_size=(3, 1, 1),
                      stride=(1, 1, 1), padding=(1, 0, 0),
                      bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True))

model = torchvision.models.video.mc3_18(pretrained=False)
model.stem = R2Plus1dStemModified()  # here assigning the modified stem
model.fc = nn.Sequential(
    nn.Dropout(0.3),
    nn.Linear(model.fc.in_features, num_classes)
)
model.to('cuda:0')
When switching from RGB to gray, the simplest way to go is to change the DATA and not the model: if you have an input frame with only one channel (gray), you can simply expand the singleton channel dimension to span three channels. This is trivial and allows you to use pre-trained models as-is.

If you insist on modifying the model - you can do so while preserving most of the pre-trained weights:

model = torchvision.models.video.mc3_18(pretrained=True)  # get the pretrained

# modify only the first conv layer
origc = model.stem[0]  # the orig conv layer

# build a new layer only with one input channel
c1 = torch.nn.Conv3d(1, origc.out_channels, kernel_size=origc.kernel_size,
                     stride=origc.stride, padding=origc.padding, bias=origc.bias)

# this is the nice part - init the new weights using the original ones
with torch.no_grad():
    c1.weight.data = origc.weight.data.sum(dim=1, keepdim=True)
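For the data-side option, a minimal sketch; the shapes assume the usual (batch, channel, time, height, width) layout of these video models:

x = torch.rand(8, 1, 16, 112, 112)  # a batch of grayscale clips
x3 = x.expand(-1, 3, -1, -1, -1)    # view the single channel as three, no data copy
out = model(x3)                     # works with the unmodified pretrained model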
https://stackoverflow.com/questions/65609363/
Why does torchvision.utils.make_grid() return copies of the wanted grid?
In the below coding example I can not understand why the output tensor , grid has a shape of 3,28,280. I understand why its 28 in height and 280 in width, but not the 3. It seems from running plt.imshow() on all 3 28x280 arrays along axis 0 that they are identical copies since printing any 1 of these gives me the image I want. Also I do not understand why I can pass grid as an argument to plt.imshow() given that it is supposed to take in a 2D array, not a 3D one as grid clearly is. import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np train_set = torchvision.datasets.FashionMNIST( root = './pytorch_obj_classifier/data/FashionMNIST', train = True, download = True, transform = transforms.Compose([ transforms.ToTensor() ]) ) sample = next(iter(train_loader)) image,label = sample print(image.shape) grid = torchvision.utils.make_grid(image,padding=0, nrow=10) print(grid.shape) plt.figure(figsize=(15,15)) grid = np.transpose(grid,(1,2,0)) grid1 = grid[:,:,0] grid2 = grid[:,:,1] grid3 = grid[:,:,2] plt.imshow(grid1,cmap = 'gray') plt.imshow(grid2,cmap = 'gray') plt.imshow(grid3,cmap = 'gray') plt.imshow(grid,cmap = 'gray')
The MNIST dataset consists of grayscale images. If you look at the implementation detail of torchvision.utils.make_grid, single-channel images get their channel copied three times:

if tensor.dim() == 4 and tensor.size(1) == 1:  # single-channel images
    tensor = torch.cat((tensor, tensor, tensor), 1)

As for matplotlib.pyplot.imshow, it can take 2D or 3D inputs. The supported array shapes for the image data are:

(M, N): an image with scalar data. The data is visualized using a colormap.
(M, N, 3): an image with RGB values (0-1 float or 0-255 int).
(M, N, 4): an image with RGBA values (0-1 float or 0-255 int), i.e. including transparency.

Generally speaking, we wouldn't refer to dimensions but rather describe tensors by their shape (the size on each of their axes). In PyTorch, images always have three axes and have a shape that follows (channel, height, width). Even for single-channel images: consider them as 3D tensors (1, height, width) instead of 2D tensors (height, width). This is to be consistent with cases where you have more than one channel, which is very often the case (cf. convolutional neural networks).
https://stackoverflow.com/questions/65616837/
How to create a tensor of given shape and interval?
I am going through PyTorch and want to create a random tensor of shape 5x3 in the interval [3, 7).

torch.rand(5, 3) will return a random tensor of shape 5x3, but I could not figure out how to set the given interval. Please guide.
You can map U ~ [0, 1) to U ~ [a, b) with u -> (b - a)*u + a:

(b - a)*torch.rand(5, 3) + a
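For the concrete interval [3, 7) and shape 5x3 from the question:

t = (7 - 3) * torch.rand(5, 3) + 3
assert t.min() >= 3 and t.max() < 7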
https://stackoverflow.com/questions/65617507/
How to vectorize indexing and computation when indexed tensors are different dimensions?
I'm trying to vectorize the following for-loop in PyTorch. I'd be happy with just vectorizing the inner for-loop, but doing the whole batch would also be awesome.

# B: the batch size
# N: the number of training examples
# dim: the dimension of each feature vector
# K: the number of discrete labels. each vector has a single label
# delta: margin for hinge loss

batch_data = torch.tensor(...)    # Tensor of shape [B x N x d]
batch_labels = torch.tensor(...)  # Tensor of shape [B x N x 1], each element is one of K labels (ints)

batch_losses = []     # Ultimately should be [B x 1]
batch_centroids = []  # Ultimately should be [B x K_i x dim]
for i in range(B):
    # Take the i-th item's examples and labels from the batch.
    data = batch_data[i]
    labels = batch_labels[i]

    centroids = []  # Keep track of the means for each class.
    classes = torch.unique(labels)  # Get the unique labels for the classes.

    # NOTE: The number of classes K for each item in the batch might actually
    # be different. This may complicate batch-level operations.

    total_loss = 0

    # For each class independently. This is the part I want to vectorize.
    for cl in classes:
        # Take the subset of training examples with that label.
        subset = data[torch.where(labels == cl)]

        # Find the centroid of that subset.
        centroid = subset.mean(dim=0)
        centroids.append(centroid)

        # Get the distance between each point in the subset and the centroid.
        dists = subset - centroid
        norm = torch.linalg.norm(dists, dim=1)

        # The loss is the mean of the hinge loss across the subset.
        margin = norm - delta
        hinge = torch.clamp(margin, min=0.0) ** 2
        total_loss += hinge.mean()

    # Keep track of everything. If it's too hard to keep track of centroids, that's also OK.
    loss = total_loss.mean()
    batch_losses.append(loss)
    batch_centroids.append(centroids)

I've been scratching my head on how to deal with the irregularly sized tensors. The number of classes in each batch K_i is different, and the size of each subset is different.
It turns out it actually is possible to vectorize across ragged arrays. I'll use numpy, but the code should be directly translatable to torch. The key technique is to:

Sort by ragged array membership
Perform an accumulation
Find boundary indices, compute adjacent differences

For a single (non-batch) input of an n x d matrix X and an n-length array label, the following returns the k x d centroids and the n-length (squared) distances to the respective centroids:

import numpy as np

def inverse_permutation(p):
    # helper: returns s such that s[p[i]] = i, i.e. undoes an argsort
    s = np.empty_like(p)
    s[p] = np.arange(len(p))
    return s

def vcentroids(X, label):
    """ Vectorized version of centroids. """
    # order points by cluster label
    ix = np.argsort(label)
    label = label[ix]
    Xz = X[ix]

    # compute pos where pos[i]:pos[i+1] is span of cluster i
    d = np.diff(label, prepend=0)  # binary mask where labels change
    pos = np.flatnonzero(d)        # indices where labels change
    pos = np.repeat(pos, d[pos])   # repeat for 0-length clusters
    pos = np.append(np.insert(pos, 0, 0), len(X))

    Xz = np.concatenate((np.zeros_like(Xz[0:1]), Xz), axis=0)
    Xsums = np.cumsum(Xz, axis=0)
    Xsums = np.diff(Xsums[pos], axis=0)
    counts = np.diff(pos)
    c = Xsums / np.maximum(counts, 1)[:, np.newaxis]

    repeated_centroids = np.repeat(c, counts, axis=0)
    aligned_centroids = repeated_centroids[inverse_permutation(ix)]
    dist = np.sum((X - aligned_centroids) ** 2, axis=1)

    return c, dist

Batching requires little special handling. For an input B x n x d array batch_X, with B x n batch labels batch_labels, create unique labels for each batch:

batch_k = batch_labels.max(axis=1) + 1
batch_k[1:] = batch_k[:-1]
batch_k[0] = 0
base = np.cumsum(batch_k)
batch_labels += base[:, np.newaxis]

So now each batch element has a unique contiguous range of labels. I.e., the first batch element will have n labels in some range [0, k0) where k0 is the number of classes in the first batch element, the second will have range [k0, k0 + k1) where k1 is the number of classes in the second, etc.

Then just flatten the B x n x d input to B*n x d and call the same vectorized method. Your loss function is derivable using the final distances and the same position-array based reduction technique.

For a detailed explanation of how the vectorization works, see my blog post.
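A quick way to sanity-check the single-batch function against a plain loop on toy data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
label = np.repeat(np.arange(5), 20)  # 5 clusters, 20 points each
rng.shuffle(label)

c, dist = vcentroids(X, label)

# reference: naive per-cluster means
for k in range(5):
    assert np.allclose(c[k], X[label == k].mean(axis=0))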
https://stackoverflow.com/questions/65623906/
PyTorch convolutional block - CIFAR10 - RuntimeError
I am using PyTorch 1.7 and Python 3.8 with the CIFAR-10 dataset. I am trying to create a block with: conv -> conv -> pool -> fc. The fully connected layer (fc) has 256 neurons. The code for this is as follows:

# Testing-
conv1 = nn.Conv2d(
    in_channels = 3, out_channels = 64,
    kernel_size = 3, stride = 1,
    padding = 1, bias = True
)

conv2 = nn.Conv2d(
    in_channels = 64, out_channels = 64,
    kernel_size = 3, stride = 1,
    padding = 1, bias = True
)

pool = nn.MaxPool2d(
    kernel_size = 2, stride = 2
)

fc1 = nn.Linear(
    in_features = 64 * 16 * 16,
    out_features = 256,
    bias = True
)

images.shape
# torch.Size([32, 3, 32, 32])

x = conv1(images)
x.shape
# torch.Size([32, 64, 32, 32])

x = conv2(x)
x.shape
# torch.Size([32, 64, 32, 32])

x = pool(x)
x.shape
# torch.Size([32, 64, 16, 16])

# This line of code gives error-
x = fc1(x)

RuntimeError: mat1 and mat2 shapes cannot be multiplied (32768x16 and 16384x256)

What is going wrong?
You are nearly there! As you will have noticed, nn.MaxPool2d returns a tensor of shape (32, 64, 16, 16), which is incompatible with nn.Linear's expected input: a 2D tensor of shape (batch, in_features). You need to flatten it to (batch, 64*16*16).

I would recommend using a nn.Flatten layer rather than reshaping by hand. It will act as x.view(x.size(0), -1) but is clearer. By default it preserves the first dimension:

conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
conv2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1)
pool = nn.MaxPool2d(kernel_size=2, stride=2)
flatten = nn.Flatten()
fc1 = nn.Linear(in_features=64*16*16, out_features=256)

x = conv1(images)
x = conv2(x)
x = pool(x)
x = flatten(x)
x = fc1(x)

Alternatively, you could use the functional alternative torch.flatten, where you will have to provide the start_dim as 1: x = torch.flatten(x, start_dim=1).

When you're done debugging, you could assemble your layers with nn.Sequential:

model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1),
    nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Flatten(),
    nn.Linear(in_features=64*16*16, out_features=256)
)

x = model(images)
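As a side note, if you'd rather not work out in_features by hand, newer PyTorch releases (1.8+) ship nn.LazyLinear, which infers it on the first forward pass. A small sketch, reusing the layers defined above:

fc1 = nn.LazyLinear(out_features=256)   # in_features inferred (here 64*16*16) on first call
x = fc1(flatten(pool(conv2(conv1(images)))))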
https://stackoverflow.com/questions/65624700/
How to find the (Most important) responsible Words/ Tokens/ embeddings responsible for the label result of a text classification model in PyTorch
Let us suppose I have a model like: class BERT_Subject_Classifier(nn.Module): def __init__(self,out_classes,hidden1=128,hidden2=32,dropout_val=0.2): super(BERT_Subject_Classifier, self).__init__() self.hidden1 = hidden1 self.hidden2 = hidden2 self.dropout_val = dropout_val self.logits = logit self.bert = AutoModel.from_pretrained('bert-base-uncased') self.out_classes = out_classes self.unfreeze_n = unfreeze_n # make the last n layers trainable self.dropout = nn.Dropout(self.dropout_val) self.relu = nn.ReLU() self.fc1 = nn.Linear(768,self.hidden1) self.fc2 = nn.Linear(self.hidden1,self.hidden2) self.fc3 = nn.Linear(self.hidden2,self.out_classes) def forward(self, sent_id, mask): _, cls_hs = self.bert(sent_id, attention_mask=mask) x = self.fc1(cls_hs) x = self.relu(x) x = self.dropout(x) x = self.fc2(x) x = self.dropout(x) return self.fc3(x) I train my model and for a new data point x = ['My Name is Slim Shady'], I get my label result as 3. My Question is that how can I check which of the words in the sentence were responsible for the the classification? I mean it could be any collection of words. Is there a library or way to check the functionality? Just like shown in the paper and Tensorflow Implementation of show Attend and Tell, you can get the areas of images where the model is paying attention to. How can I do it for the Text?
Absolutely. One way to demonstrate which words have the greatest impact is to use integrated gradients methods.

For PyTorch, one package you can use is Captum. I would check out this page for a good example: https://captum.ai/tutorials/IMDB_TorchText_Interpret

For Tensorflow, one package you can use is Alibi (from Seldon). I would check out this page for a good example: https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html
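To give a flavour of what the Captum route looks like with the model from the question, a rough sketch only (ref_sent_id, a baseline such as all-[PAD] token ids, is a placeholder you would build yourself):

from captum.attr import LayerIntegratedGradients

def forward_func(sent_id, mask):
    return model(sent_id, mask)                        # logits of shape (batch, out_classes)

lig = LayerIntegratedGradients(forward_func, model.bert.embeddings)
attributions = lig.attribute(inputs=sent_id, baselines=ref_sent_id,
                             additional_forward_args=(mask,), target=3)
token_scores = attributions.sum(dim=-1).squeeze(0)     # one relevance score per token

The tokens with the largest scores are the ones that pushed the prediction towards label 3.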
https://stackoverflow.com/questions/65625130/
TypeError: hook() takes 3 positional arguments but 4 were given
I am trying to extract features from an intermediate of ResNet18 in PyTorch using forward hooks class CCLModel(nn.Module): def __init__(self,output_layer,*args): self.output_layer = output_layer super().__init__(*args) self.output_layer = output_layer #PRETRAINED MODEL self.pretrained = models.resnet18(pretrained=True) #TAKING OUTPUT FROM AN INTERMEDIATE LAYER #self._layers = [] for l in list(self.pretrained._modules.keys()): #self._layers.append(l) if l == self.output_layer: handle = getattr(self.pretrained,l).register_forward_hook(self.hook) def hook(self,input,output): return output def _forward_impl(self, x): x = self.pretrained(x) return x def forward(self, x): return self._forward_impl(x) I also want the predictions alongside the feature outputs from layer 4 But I am getting the TypeError: hook() takes 3 positional arguments but 4 were given The full error message is this TypeError Traceback (most recent call last) <ipython-input-66-18c4a0f917f2> in <module>() ----> 1 out = model(x.to('cuda:0').float()) 6 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), <ipython-input-61-71fe0d1420a6> in forward(self, x) 78 79 def forward(self, x): ---> 80 return self._forward_impl(x) 81 82 '''def forward(self,x): <ipython-input-61-71fe0d1420a6> in _forward_impl(self, x) 73 #x = torch.flatten(x, 1) 74 #x = self.fc(x) ---> 75 x = self.pretrained(x) 76 77 return x /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.6/dist-packages/torchvision/models/resnet.py in forward(self, x) 218 219 def forward(self, x): --> 220 return self._forward_impl(x) 221 222 /usr/local/lib/python3.6/dist-packages/torchvision/models/resnet.py in _forward_impl(self, x) 209 x = self.layer2(x) 210 x = self.layer3(x) --> 211 x = self.layer4(x) 212 213 x = self.avgpool(x) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 729 _global_forward_hooks.values(), 730 self._forward_hooks.values()): --> 731 hook_result = hook(self, input, result) 732 if hook_result is not None: 733 result = hook_result TypeError: hook() takes 3 positional arguments but 4 were given Why is the hook not working, although in various forums I see that this is the way to do it?
Here's a simple example of a forward hook; it must have three parameters: module, input, and output:

m = models.resnet18(pretrained=False)

def hook(module, input, output):
    print(output.detach().shape)

m.fc.register_forward_hook(hook)

Try it with dummy data:

>>> m(torch.rand(1, 3, 224, 224))
torch.Size([1, 1000])
<<< tensor(...)

To combine it with your nn.Module, you need to implement hook with an extra argument self:

class CCLModel(nn.Module):
    def __init__(self, output_layer, *args):
        super(CCLModel, self).__init__()
        self.pretrained = models.resnet18(pretrained=True)
        self.output_layer = output_layer
        self.output_layer.register_forward_hook(self.hook)

    def hook(self, module, input, output):
        print(output.shape)

    def forward(self, x):
        x = self.pretrained(x)
        x = self.output_layer(x)
        return x

Note - self corresponds to the CCLModel instance while module is the layer we're hooked on, i.e. nn.Linear.

Here's an example:

>>> m = CCLModel(nn.Linear(1000, 100))
>>> m(torch.rand(1, 3, 224, 224))
torch.Size([1, 100])
<<< tensor(...)
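One detail worth knowing: register_forward_hook returns a handle, so you can detach the hook once you no longer need it:

handle = m.fc.register_forward_hook(hook)
m(torch.rand(1, 3, 224, 224))   # prints torch.Size([1, 1000])
handle.remove()                 # detach the hook
m(torch.rand(1, 3, 224, 224))   # nothing printed anymore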
https://stackoverflow.com/questions/65627300/
Custom small CNN has better accuracy than the pretrained classifiers
I have a dataset of laser welding images of size 300*300 which contains two class of bad and good weld seam. I have followed Pytorch fine-tuning tutorial for an inception-v3 classifier. on the other hand, I also build a custom CNN with 3 conv layer and 3 fc. What I observed is that the fine tuning showed lots of variation on validation accuracy. basically, I see different maximum accuracy every time I train my model. Plus, my accuracy in fine-tuning is much less than my custom CNN!! for example the accuracy for my synthetic images from a GAN is 86% with inception-v3, while it is 94% with my custom CNN. The real data for both network shows almost similar behaviour and accuracy, however accuracy in custom CNN is about 2% more. I trained with different training scales of 200, 500 and 1000 train-set images (half of them for each class like for 200 images we have 100 good and 100 bad). I also include a resize transform of 224 in my train_loader; in fine tuning tutorial, this resize is automatically done to 299 for inception-v3. for each trial, the validation-size and its content is constant. Do you know what cause this behavior? Is it because my dataset is so different from the pretrained model classes? am I not supposed to get better results with fine-tuning? My custom CNN: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.pool = nn.MaxPool2d(2, 2) self.conv3 = nn.Conv2d(16, 24, 5) self.fc1 = nn.Linear(13824, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 2) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = self.pool(F.relu(self.conv3(x))) #x = x.view(-1, 16 * 5 * 5) x = x.view(x.size(0),-1) #print(x.shape) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) #x = F.softmax(x, dim=1) return x model = Net() criterion = nn.CrossEntropyLoss() #optimizer = optim.Adam(model.parameters(), lr=0.001) optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=5e-4) model.to(device) with training loop of: epochs = 15 steps = 0 running_loss = 0 print_every = 10 train_losses, test_losses = [], [] train_acc, test_acc = [], [] for epoch in range(epochs): for inputs, labels in trainloader: steps += 1 inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() logps = model.forward(inputs) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in testloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() train_losses.append(running_loss/len(trainloader)) test_losses.append(test_loss/len(testloader)) #train_acc.append(running_loss/len(trainloader)) test_acc.append(accuracy/len(testloader)) print(f"Epoch {epoch+1}/{epochs}.. " f"Train loss: {running_loss/print_every:.3f}.. " f"Test loss: {test_loss/len(testloader):.3f}.. " f"Test accuracy: {accuracy/len(testloader):.3f}") running_loss = 0 model.train()
Here is my theory: Pre-training is useful when you want to leverage already existing data to help the model train on similar data for which you have few instances. At least this was the reasoning behind the Unet architecture in medical image segmentation.

Now, to me the key is in the notion of "similar". If your network has been pre-trained on cats and dogs, and you want to extrapolate to weld seams, there's a chance your pre-training is not helping or is even getting in the way of the model training properly.

Why? When training your CNN from scratch you get randomly initialized weights, whereas using a pre-trained network you get pre-trained weights. If the features you are extracting are similar across datasets, then you get a head start by having the network already attuned to these features.

For example, cats and dogs share similar spatial features visually (eye position, nose, ears...). So there's a chance that you converge to a local minimum faster during training, since you are already starting from a good base that just needs to adapt to the specifics of your new data.

Conclusions: If the similarity assumption does not hold, it means your model would have to "unlearn" what it already learned to adapt to the new specifics of your dataset, and I guess that would be the reason why training is more difficult and does not give as good results as a blank-slate CNN (especially if you don't have that much data).

PS: I'd be curious to see if your pre-trained model ends up catching up with your CNN if you give it more epochs to train.
https://stackoverflow.com/questions/65627620/
Does adding a forward hook to a layer of ensure that the gradient of the loss calculated using the layer's output will be calculated automatically?
I have a model

class NewModel(nn.Module):
    def __init__(self,output_layer,*args):
        self.output_layer = output_layer
        super().__init__(*args)
        self.output_layer = output_layer
        self.selected_out = None
        #PRETRAINED MODEL
        self.pretrained = models.resnet18(pretrained=True)

        #TAKING OUTPUT FROM AN INTERMEDIATE LAYER
        #self._layers = []
        for l in list(self.pretrained._modules.keys()):
            #self._layers.append(l)
            if l == self.output_layer:
                handle = getattr(self.pretrained,l).register_forward_hook(self.hook)

    def hook(self,module, input,output):
        self.selected_out = output

    def forward(self, x):
        x = self.pretrained(x)
        return x

I have two target outputs, one which is the same as any label of an image and the second one is of the same dimensions as the output obtained from self.output_layer, called target_feature

out = model(img)
layerout = model.selected_out

Now, if I want to calculate the loss of layerout with the target feature map, can it be done like the line written below?

loss = criterion(y_true, out) + feature_criterion(layerout, target_feature)

Or do I need to add backward_hooks?

In this Kaggle notebook https://www.kaggle.com/sironghuang/understanding-pytorch-hooks it is written that loss.backward() cannot be used when using backward_hooks. Quoting the author:

# backprop once to get the backward hook results
out.backward(torch.tensor([1,1],dtype=torch.float),retain_graph=True)
#! loss.backward(retain_graph=True)  # doesn't work with backward hooks,
#! since it's not a network layer but an aggregated result from the outputs of last layer vs target

Then how can the gradient be calculated based on the loss function?
If I understand you correctly, you want to get two outputs from your model, calculate two losses, then combine them and backpropagate. I imagine you come from Tensorflow & Keras from the way you tried implementing it. In Pytorch, it's actually fairly straightforward; you can do this very easily because of its purely functional aspect.

This is just an example:

class NewModel(nn.Module):
    def __init__(self, output_layer, *args):
        super(NewModel, self).__init__()
        self.pretrained = models.resnet18(pretrained=True)
        self.output_layer = output_layer

    def forward(self, x):
        out = self.pretrained(x)
        features = self.output_layer(out)
        return out, features

At inference, you will get two results per call:

>>> m = NewModel(nn.Linear(1000, 10))
>>> x = torch.rand(16, 3, 224, 224)
>>> y_pred, y_feature = m(x)

Call your loss functions:

>>> loss = criterion(y_pred, y_true) + feature_criterion(y_feature, target_feature)

Then, backpropagate with loss.backward(). So no need for hooks, nor a complicated gradient argument on your .backward call!

Edit - If you wish to extract an intermediate layer output, keep the hook, that's fine, and just modify the forward definition:

def forward(self, x):
    out = self.pretrained(x)
    return out, self.selected_out

For example:

>>> m = NewModel(output_layer='layer1')
>>> x = torch.rand(16, 3, 224, 224)
>>> y_pred, y_feature = m(x)

>>> y_pred.shape, y_feature.shape
(torch.Size([16, 1000]), torch.Size([16, 64, 56, 56]))

Also, what I said above about the loss still stands. Compute your loss, then call loss.backward().
https://stackoverflow.com/questions/65628690/
Torchtext TabularDataset() reads in Datafields incorrectly
Goal: I want to create a text classifier based upon my custom Dataset, similar to (and following) This (now deleted) Tutorial from mlexplained.

What happened

I successfully formatted my data, created a training, validation and test dataset, and formatted it so that it matches the "toxic tweet" dataset they are using (with a column for each tag, with 1/0 for True/not True). Most of the other parts also worked just as intended, but when it came to iterating I got an error.

The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
The `device` argument should be set by using `torch.device` or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
  0%|          | 0/25517 [00:01<?, ?it/s]
Traceback (most recent call last):
... (trace back messages)
AttributeError: 'Example' object has no attribute 'text'

The lines the Traceback indicated:

opt = optim.Adam(model.parameters(), lr=1e-2)
loss_func = nn.BCEWithLogitsLoss()

epochs = 2

for epoch in range(1, epochs + 1):
    running_loss = 0.0
    running_corrects = 0
    model.train()  # turn on training mode
    for x, y in tqdm.tqdm(train_dl):  # **THIS LINE CONTAINS THE ERROR**
        opt.zero_grad()

        preds = model(x)
        loss = loss_func(y, preds)
        loss.backward()
        opt.step()

        running_loss += loss.data[0] * x.size(0)

    epoch_loss = running_loss / len(trn)

    # calculate the validation loss for this epoch
    val_loss = 0.0
    model.eval()  # turn on evaluation mode
    for x, y in valid_dl:
        preds = model(x)
        loss = loss_func(y, preds)
        val_loss += loss.data[0] * x.size(0)

    val_loss /= len(vld)
    print('Epoch: {}, Training Loss: {:.4f}, Validation Loss: {:.4f}'.format(epoch, epoch_loss, val_loss))

Attempts already made to solve the problem, and what I think was the reason:

I know that this problem has occurred to others; there are even 2 questions about it on here, but both had the problem of skipping either columns or rows in the dataset (I checked for empty lines/columns and found none). Another solution was that the parameters given to the model had to be in the same order (with none missing) as the columns in the .csv file.
However, the relevant code (the loading and creating of the tst, trn and vld sets)

def createTestTrain():
    # Create a Tokenizer
    tokenize = lambda x: x.split()

    # Defining Tag and Text
    TEXT = Field(sequential=True, tokenize=tokenize, lower=True)
    LABEL = Field(sequential=False, use_vocab=False)

    # Our Datafield
    tv_datafields = [("ID", None), ("text", TEXT)]

    # Loading our Additional columns we added earlier
    with open(PATH + 'columnList.pickle', 'rb') as handle:
        addColumns = pickle.load(handle)

    # Adding the extra columns, no way we are defining 1000 tags by hand
    for column in addColumns:
        tv_datafields.append((column, LABEL))
    #tv_datafields.append(("split", None))

    # Loading Train/Test Split we created
    trn = TabularDataset(
        path=PATH+'train.csv',
        format='csv',
        skip_header=True,
        fields=tv_datafields)

    vld = TabularDataset(
        path=PATH+'train.csv',
        format='csv',
        skip_header=True,
        fields=tv_datafields)

    # Creating Test Datafield
    tst_datafields = [("id", None), ("text", TEXT)]
    # Using TabularDataset, as we want to Analyse Text on it
    tst = TabularDataset(
        path=PATH+"test.csv",  # the file path
        format='csv',
        skip_header=True,
        fields=tst_datafields)

    return(trn, vld, tst)

uses the same list and order as my csv does. tv_datafields is structured exactly like the file. Furthermore, as Datafield objects are just dicts with datapoints, I read out the keys of the dictionary, like the tutorial also did, via:

trn[0].__dict__.keys()

What should have happened: The behaviour of the example was like this

trn[0]
torchtext.data.example.Example at 0x10d3ed3c8

trn[0].__dict__.keys()
dict_keys(['comment_text', 'toxic', 'severe_toxic', 'threat', 'obscene', 'insult', 'identity_hate'])

My result:

trn[0].__dict__.keys()
Out[19]: dict_keys([])

trn[1].__dict__.keys()
Out[20]: dict_keys([])

trn[2].__dict__.keys()
Out[21]: dict_keys([])

trn[3].__dict__.keys()
Out[22]: dict_keys(['text'])

While trn[0] contains nothing, the content is instead spread from trn[3] to trn[15]; the number of columns that should normally be there is far greater than that.

Now I am at a loss as to where I went wrong. The data fits, the function obviously works, but TabularDataset() seems to read in my columns the wrong way (if at all). Did I define

# Defining Tag and Text
TEXT = Field(sequential=True, tokenize=tokenize, lower=True)
LABEL = Field(sequential=False, use_vocab=False)

the wrong way? At least that is what my debugging seems to indicate. With the meager documentation on Torchtext I have problems finding that out, but when I'm looking at the definitions of Data or Fields I can't see anything wrong with it.

Thank you for your help.
I found out where my problem was: apparently Torchtext only accepts data in quotes and only with "," as the separator. My data was not within quotes and used ";" as the separator.
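One way to fix the file itself, a sketch assuming pandas is available and PATH is the same directory used in the question, is to re-save it with comma separators and quoted fields:

import csv
import pandas as pd

df = pd.read_csv(PATH + 'train.csv', sep=';')            # original file used ';'
df.to_csv(PATH + 'train.csv', index=False, sep=',',
          quoting=csv.QUOTE_NONNUMERIC)                  # quote all non-numeric fields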
https://stackoverflow.com/questions/65629186/
AttributeError: type object 'TFLiteConverterV2' has no attribute 'from_frozen_gragh'
I was trying to convert my PyTorch model into TensorFlow lite for mobile. My model was pre-trained DenseNet 169 so I did this:- import sys import os import torch import torch.nn as nn import torch.nn.functional as F import onnx from collections import OrderedDict import tensorflow as tf from torch.autograd import Variable from onnx_tf.backend import prepare dummy_input = Variable(torch.randn(32, 3, 224, 224)) torch.onnx.export(trained_model, dummy_input, "mymodel.onnx") model = onnx.load("mymodel.onnx") tf_rep = prepare(model) print('inputs:', tf_rep.inputs) # Output nodes from the model print('outputs:', tf_rep.outputs) # All nodes in the model print('tensor_dict:') print(tf_rep.tensor_dict) tf_rep.export_graph("mymodel.pb") converter = tf.lite.TFLiteConverter.from_frozen_gragh("mymodel.pb/saved_model.pb", tf_rep.inputs, tf_rep.outputs) # **ERROR HERE** tflite_model = converter.convert() open("mymodel.tflite", "wb").write(tflite_model) HERE IS MY ERROR AttributeError Traceback (most recent call last) <ipython-input-37-0abbde392f91> in <module>() ----> 1 converter = tf.lite.TFLiteConverter.from_frozen_gragh("flowers.pb/saved_model.pb", tf_rep.inputs, tf_rep.outputs) 2 tflite_model = converter.convert() 3 open("flowers.tflite", "wb").write(tflite_model) AttributeError: type object 'TFLiteConverterV2' has no attribute 'from_frozen_gragh' When I tried with compat.v1 I got the same error but instead of TFLiteConverterV2 I got TFLiteConverter Thanks, in Advance. EDIT So I tried with compat.v1 and fixed the typo in 'from_frozen_gragh' and got this ugly error --------------------------------------------------------------------------- DecodeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in from_frozen_graph(cls, graph_def_file, input_arrays, output_arrays, input_shapes) 1804 graph_def = _graph_pb2.GraphDef() -> 1805 graph_def.ParseFromString(file_content) 1806 except (_text_format.ParseError, DecodeError): DecodeError: Error parsing message During handling of the above exception, another exception occurred: UnicodeDecodeError Traceback (most recent call last) 2 frames <ipython-input-32-46dac4006b0d> in <module>() ----> 1 tflitconverter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph("flowers.pb/saved_model.pb", tf_rep.inputs, tf_rep.outputs) 2 e_model = converter.convert() 3 open("flowers.tflite", "wb").write(tflite_model) /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in from_frozen_graph(cls, graph_def_file, input_arrays, output_arrays, input_shapes) 1812 file_content = six.ensure_binary(file_content, "utf-8") 1813 else: -> 1814 file_content = six.ensure_text(file_content, "utf-8") 1815 graph_def = _graph_pb2.GraphDef() 1816 _text_format.Merge(file_content, graph_def) /usr/local/lib/python3.6/dist-packages/six.py in ensure_text(s, encoding, errors) 933 """ 934 if isinstance(s, binary_type): --> 935 return s.decode(encoding, errors) 936 elif isinstance(s, text_type): 937 return s UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb7 in position 3: invalid start byte PLEASE HELP
it is "from_frozen_graph" not "from_frozen_gragh" You need to use compat.v1 since from_frozen_graph is not available in TF 2.x
https://stackoverflow.com/questions/65631170/
Segmentation to one-hot encoding
I have a batch of segmented images of size

seg --> [batch, channels, imgsize, imgsize] --> [16, 6, 50, 50]

Each scalar in this tensor specifies one of the segmentation classes. We have 2000 total segmentation classes. Now the goal is to convert

[16, 6, 50, 50] --> [16, 2000, 50, 50]

where each class is encoded in one-hot fashion. How do I do it with the PyTorch API? I can only think of a ridiculously inefficient looping construction.

Example

Here we will have only 2 initial channels (instead of 6), 4 labels (instead of 2000), batch size 1 (instead of 16) and a 4x4 image instead of 50x50.

0, 0, 1, 1
0, 0, 0, 1
1, 1, 1, 1
1, 1, 1, 1

3, 3, 2, 2
3, 3, 2, 2
3, 3, 2, 2
3, 3, 2, 2

Now this turns into a 4 channel output (one channel per label, in the order 0, 1, 2, 3):

1, 1, 0, 0
1, 1, 1, 0
0, 0, 0, 0
0, 0, 0, 0

0, 0, 1, 1
0, 0, 0, 1
1, 1, 1, 1
1, 1, 1, 1

0, 0, 1, 1
0, 0, 1, 1
0, 0, 1, 1
0, 0, 1, 1

1, 1, 0, 0
1, 1, 0, 0
1, 1, 0, 0
1, 1, 0, 0

The key observation is that a particular label appears only on a single input channel.
I think you can achieve this without too much trouble. Construct as many masks as there are labels, then stack those masks together, sum over the channel layer and convert to floats:

>>> x
tensor([[[0, 0, 1, 1],
         [0, 0, 0, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1]],

        [[3, 3, 2, 2],
         [3, 3, 2, 2],
         [3, 3, 2, 2],
         [3, 3, 2, 2]]])

>>> y = torch.stack([x==i for i in range(x.max()+1)], dim=0).sum(dim=1).float()
tensor([[[1., 1., 0., 0.],
         [1., 1., 1., 0.],
         [0., 0., 0., 0.],
         [0., 0., 0., 0.]],

        [[0., 0., 1., 1.],
         [0., 0., 0., 1.],
         [1., 1., 1., 1.],
         [1., 1., 1., 1.]],

        [[0., 0., 1., 1.],
         [0., 0., 1., 1.],
         [0., 0., 1., 1.],
         [0., 0., 1., 1.]],

        [[1., 1., 0., 0.],
         [1., 1., 0., 0.],
         [1., 1., 0., 0.],
         [1., 1., 0., 0.]]])

For your batched [16, 6, 50, 50] input, stack at dim=1 and sum over dim=2 (the channel axis after stacking) instead, which yields [16, 2000, 50, 50].
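If you prefer a built-in, torch.nn.functional.one_hot gets there too. A short sketch, relying on the same observation that each label lives on exactly one input channel (x must be an integer/long tensor; K would be 2000 in the real setting):

import torch.nn.functional as F

K = int(x.max()) + 1                    # number of classes
y = F.one_hot(x, num_classes=K)         # (batch, channels, H, W, K)
y = y.sum(dim=1)                        # merge channels -> (batch, H, W, K)
y = y.permute(0, 3, 1, 2).float()       # (batch, K, H, W)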
https://stackoverflow.com/questions/65637015/
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported
I am using this github repo: https://github.com/vchoutas/smplify-x and I am running a script. I get the following error. How can I fix it? I understand I might have to convert - to ~ however not sure where it exactly is. I don't want to downgrade my PyTorch. $ cat fit.sh export CUDA_VISIBLE_DEVICES=0 python smplifyx/main.py --config cfg_files/fit_smplx.yaml --data_folder ../../data/smplify-x/DATA_FOLDER --output_folder ../../data/smplify-x/RESULTS --visualize="False" --model_folder ../../data/smplify-x/models_smplx_v1_1/models/smplx/SMPLX_NEUTRAL.npz --vposer_ckpt ../../data/smplify-x/vposer_v1_0 --part_segm_fn ../../data/smplify-x/smplx_parts_segm.pkl -------------------------------------- (smplifyx) mona@ubuntu:~/mona/code/smplify-x$ ./fit.sh Processing: ../../data/smplify-x/DATA_FOLDER/images/woman_bike.jpg Found Trained Model: ../../data/smplify-x/vposer_v1_0/snapshots/TR00_E096.pt Traceback (most recent call last): File "smplifyx/main.py", line 272, in <module> main(**args) File "smplifyx/main.py", line 262, in main **args) File "~/mona/code/smplify-x/smplifyx/fit_single_frame.py", line 274, in fit_single_frame focal_length=focal_length, dtype=dtype) File "/home/mona/venv/smplifyx/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "~/mona/code/smplify-x/smplifyx/fitting.py", line 75, in guess_init pose_embedding, output_type='aa').view(1, -1) if use_vposer else None File "../../data/smplify-x/vposer_v1_0/vposer_smpl.py", line 114, in decode if output_type == 'aa': return VPoser.matrot2aa(Xout) File "../../data/smplify-x/vposer_v1_0/vposer_smpl.py", line 152, in matrot2aa pose = tgm.rotation_matrix_to_angle_axis(homogen_matrot).view(batch_size, 1, -1, 3).contiguous() File "/home/mona/venv/smplifyx/lib/python3.6/site-packages/torchgeometry/core/conversions.py", line 233, in rotation_matrix_to_angle_axis quaternion = rotation_matrix_to_quaternion(rotation_matrix) File "/home/mona/venv/smplifyx/lib/python3.6/site-packages/torchgeometry/core/conversions.py", line 302, in rotation_matrix_to_quaternion mask_c1 = mask_d2 * (1 - mask_d0_d1) File "/home/mona/venv/smplifyx/lib/python3.6/site-packages/torch/tensor.py", line 511, in __rsub__ return _C._VariableFunctions.rsub(self, other) RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead. I also have: (smplifyx) mona@ubuntu:~/mona/code/smplify-x$ python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.__version__ '1.7.1' >>> import torchgeometry >>> torchgeometry.__version__ '0.1.2'
$ vi /home/mona/venv/smplifyx/lib/python3.6/site-packages/torchgeometry/core/conversions.py

and comment out each old line and add its ~ counterpart, for boolean-mask compatibility in PyTorch 1.7.1:

mask_c0 = mask_d2 * mask_d0_d1
#mask_c1 = mask_d2 * (1 - mask_d0_d1)
mask_c1 = mask_d2 * ~(mask_d0_d1)
#mask_c2 = (1 - mask_d2) * mask_d0_nd1
mask_c2 = ~(mask_d2) * mask_d0_nd1
#mask_c3 = (1 - mask_d2) * (1 - mask_d0_nd1)
mask_c3 = ~(mask_d2) * ~(mask_d0_nd1)
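The underlying rule in two lines: on the PyTorch version from the question (1.7.1), arithmetic minus is rejected on bool tensors, while ~ (or logical_not()) is the supported inversion:

m = torch.tensor([True, False])
# 1 - m           # RuntimeError on PyTorch 1.7.1
~m                # tensor([False,  True])
m.logical_not()   # same result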
https://stackoverflow.com/questions/65637222/
create a ones tensor according to another lengths tensor
I have one tensor of size [batch_size, 1] where each entry, per sample, is an integer smaller than 5000. I'd like to create a new tensor of size [batch_size, 5000] where, for each sample, the first entries are ones, as many as the first tensor indicates, and the rest are zeros. For example:

t1 = [[3], [5]]

should result in

t2 = [[1,1,1,0,0,...],[1,1,1,1,1,0,0,...]]
This is quite tricky since you want to fill values based on indices and not on the values themselves... Yet you can still manage it, but you have to get creative. We need some way for indices to be reflected in the values themselves. We will keep batch_size=2 and a vector size of 10:

>>> t1 = torch.LongTensor([[3], [5]])
>>> t2 = torch.zeros(len(t1), 10)
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

Here is the interesting part: arange a tensor with t2's shape and subtract t1.

>>> torch.arange(10).repeat(2, 1) - t1
>>> tensor([[-3, -2, -1,  0,  1,  2,  3,  4,  5,  6],
            [-5, -4, -3, -2, -1,  0,  1,  2,  3,  4]])

Notice how values are negative before the breakpoint, and non-negative after. That's our mask:

>>> mask = torch.arange(10).repeat(2, 1) - t1 < 0
tensor([[ True,  True,  True, False, False, False, False, False, False, False],
        [ True,  True,  True,  True,  True, False, False, False, False, False]])

To finish it off:

>>> t2[mask] = 1
tensor([[1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
        [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.]])
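The same idea collapses into a one-liner via broadcasting, since comparing the (batch_size, 1) tensor against a 1-D arange yields the (batch_size, N) mask directly:

>>> t1 = torch.LongTensor([[3], [5]])
>>> (torch.arange(10) < t1).float()
tensor([[1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
        [1., 1., 1., 1., 1., 0., 0., 0., 0., 0.]])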
https://stackoverflow.com/questions/65637465/
How do I make Input type and weight type same?
I am getting a runtime error that says inputs and weights must be on the same device. However, I made sure that my model and input are on the same device, yet I cannot get rid of the error. As far as I can tell, my input data is not on the GPU. Since the image is the input in this case, I tried img = torch.from_numpy(img).to(device) and pred = model(img)[0].to(device), but no luck. Please let me know what can be done. Here's the code:

source = '0'
webcam = source == '0'
image_size = 640
imgsz = check_img_size(image_size)

# Load the model
filepath = 'weights/mask.pt'
# device = torch.device('cpu')
device = select_device()
# half = device.type != 'cpu'
model = attempt_load(filepath, map_location = device)
model.to(device).eval()
# if half:
#     model.half()

# Second stage classifier
classify = False
if classify:
    modelc = torch_utils.load_classifier(name = 'resnet101', n = 2)
    modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location = device)['modelc'])  ###########
    modelc.to(device).eval()

vid_path, vid_writer = None, None
if webcam:
    view_img = True
    cudnn.benchmark = True
    dataset = LoadStreams(source, img_size = imgsz)

names = model.module.names if hasattr(model, 'module') else model.names
print(names)

def process_image(image):
    h, w = image.shape[:2]
    desired_size = 416
    ratio = desired_size/w
    print("Ratio",ratio)
    img = cv2.resize(image, (0, 0), fx = ratio, fy = ratio)
    h, w = img.shape[:2]
    img = cv2.copyMakeBorder(img, int((416-h)/2), int((416-h)/2), 0, 0, cv2.BORDER_CONSTANT)
    img = img[:, :, ::-1].transpose(2, 0, 1)
    img = np.ascontiguousarray(img)
    img = torch.from_numpy(img).to(device)
    img = img.float()
    img /=255.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)
    return img

def classify(image):
    # device = torch.device("cpu")
    #image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    #im = Image.fromarray(image)
    img = process_image(image)
    print('Image processed')
    # img = image.unsqueeze_(0)
    # img = image.float()

    pred = model(img)[0]
    # Apply NMS
    pred = non_max_suppression(pred, 0.4, 0.5, classes = [0, 1, 2], agnostic = None )
    if classify:
        pred = apply_classifier(pred, modelc, img, im0s)
    print("1 ", pred)
    model.eval()
    model.cpu()
    classification = torch.cat(pred)[:, -1]
    if len(classification) == 0:
        return None
    index = int(classification[0])
    print(names[index])
    return names[index]

def detect(frame):
    source = '0'
    webcam = source == '0'
    image_size = 640
    imgsz = check_img_size(image_size)

    # Load model
    file_path = 'weights/yolov5s.pt'
    #device = torch.device('cpu')
    device = select_device()
    # half = device.type != 'cpu'
    model = attempt_load(file_path, map_location = device)
    model.to(device).eval()
    # if half:
    #     model.half()
    names = model.module.names if hasattr(model, 'module') else model.names
    colors = [[75, 125, 2]]

    img = process_image(frame)
    pred = model(img)[0]
    pred = non_max_suppression(pred, 0.4, 0.5, classes = [0], agnostic = None)
    if classify:
        pred = apply_classifier(pred, modelc, img, im0s)
    gn = torch.tensor(frame.shape)[[1,0,1,0]]

    for i, det in enumerate(pred):
        det[:,:4] = scale_coords(img.shape[2:], det[:,:4], frame.shape).round()
        for *xyxy, conf, cls in reversed(det):
            xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
            label = '%s %.2f' % (names[int(cls)], conf)
            if label is not None:
                if (label.split())[0] == 'person':
                    plot_one_box(xyxy, frame, label = label, color = colors[0], line_thickness = 1)

# utils.general

Here's the main code:

with tf.Graph().as_default():
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.6)
    sess =
tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False)) with sess.as_default(): pnet, rnet, onet = detect_face.create_mtcnn(sess, './models/') minsize = 20 # minimum size of face threshold = [0.6, 0.7, 0.7] # three steps's threshold factor = 0.709 # scale factor margin = 44 frame_interval = 3 batch_size = 1000 image_size = 182 input_image_size = 160 print('Loading feature extraction model') modeldir = './models/' facenet.load_model(modeldir) images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0") embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0") phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0") embedding_size = embeddings.get_shape()[1] classifier_filename = './myclassifier/my_classifier.pkl' classifier_filename_exp = os.path.expanduser(classifier_filename) with open(classifier_filename_exp, 'rb') as infile: (model, class_names) = pickle.load(infile) print('load classifier file-> %s' % type(class_names)) HumanNames = class_names video_capture = cv2.VideoCapture(0) c = 0 print('Start!') prevTime = 0 while True: ret, frame = video_capture.read() # frame = cv2.resize(frame, (0,0), fx=0.5, fy=0.5) #resize frame (optional) curTime = time.time() # calc fps timeF = frame_interval if (c % timeF == 0): find_results = [] if frame.ndim == 2: frame = facenet.to_rgb(frame) frame = frame[:, :, 0:3] bounding_boxes, _ = detect_face.detect_face(frame, minsize, pnet, rnet, onet, threshold, factor) nrof_faces = bounding_boxes.shape[0] # print('Bounding Boxes: ', bounding_boxes, 'Shape: ', bounding_boxes.shape, 'nrof_faces:: ', nrof_faces) # print('Detected_FaceNum: %d' % nrof_faces) if nrof_faces > 0: detect(frame) label = classify(frame) if label == "a": det = bounding_boxes[:, 0:4] img_size = np.asarray(frame.shape)[0:2] cropped = [] scaled = [] scaled_reshape = [] bb = np.zeros((nrof_faces,4), dtype=np.int32) for i in range(nrof_faces): emb_array = np.zeros((1, embedding_size)) # print("Embeddinigs:::::") # print(emb_array) # print("Embeddinigs:::::") bb[i][0] = det[i][0] bb[i][1] = det[i][1] bb[i][2] = det[i][2] bb[i][3] = det[i][3] if bb[i][0] <= 0 or bb[i][1] <= 0 or bb[i][2] >= len(frame[0]) or bb[i][3] >= len(frame): print('face is inner of range!') continue cropped.append(frame[bb[i][1]:bb[i][3], bb[i][0]:bb[i][2], :]) cropped[0] = facenet.flip(cropped[0], False) scaled.append(misc.imresize(cropped[0], (image_size, image_size), interp='bilinear')) scaled[0] = cv2.resize(scaled[0], (input_image_size,input_image_size), interpolation=cv2.INTER_CUBIC) scaled[0] = facenet.prewhiten(scaled[0]) scaled_reshape.append(scaled[0].reshape(-1,input_image_size,input_image_size,3)) feed_dict = {images_placeholder: scaled_reshape[0], phase_train_placeholder: False} emb_array[0, :] = sess.run(embeddings, feed_dict=feed_dict) predictions = model.predict_proba(emb_array) best_class_indices = np.argmax(predictions, axis=1) best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices] cv2.rectangle(frame, (bb[i][0], bb[i][1]), (bb[i][2], bb[i][3]), (0, 255, 0), 2) text_x = bb[i][0] text_y = bb[i][3] + 20 (Edit) Error: Traceback (most recent call last): File "realtime.py", line 105, in <module> label = classify(frame) File "yolov5-master\myutils.py", line 117, in classify pred = model(img)[0] File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "yolov5-master\models\yolo.py", line 
122, in forward return self.forward_once(x, profile) # single-scale inference, train File "yolov5-master\models\yolo.py", line 138, in forward_once x = m(x) # run File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "yolov5-master\models\common.py", line 94, in forward return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "yolov5-master\models\common.py", line 38, in fuseforward return self.act(self.conv(x)) File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\conv.py", line 419, in forward return self._conv_forward(input, self.weight) File "C:\Users\Anuj\anaconda3\envs\py36\lib\site-packages\torch\nn\modules\conv.py", line 416, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
You need to send the input tensor to your device, not its result:

pred = model(img.to(device))[0]

As a side note, I would point out that calling x.to(device) as a bare expression has no effect on the location of the tensor. Instead, reassign the tensor with x = x.to(device):

>>> x = torch.ones(1)
>>> x.to(device)
tensor([1], device='cuda:0')

>>> x.is_cuda
False

This is not true for nn.Module: calling model.to(device) will suffice.

Edit - in classify you are sending the model back to the CPU with model.cpu(). Since you're calling classify in a loop, the first forward will work, while the following calls won't.
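Putting it together, a trimmed sketch of classify with the stray model.cpu() removed (all names taken from the question's code):

def classify(image):
    img = process_image(image)   # process_image already moves img to device
    with torch.no_grad():
        pred = model(img)[0]     # keep the model on the GPU; no model.cpu() here
    pred = non_max_suppression(pred, 0.4, 0.5, classes = [0, 1, 2], agnostic = None)
    classification = torch.cat(pred)[:, -1]
    if len(classification) == 0:
        return None
    return names[int(classification[0])]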
https://stackoverflow.com/questions/65642468/
Inputs to the nn.MultiheadAttention?
I have n-vectors which need to be influenced by each other and output n vectors with same dimensionality d. I believe this is what torch.nn.MultiheadAttention does. But the forward function expects query, key and value as inputs. According to this blog, I need to initialize a random weight matrix of shape (d x d) for each of q, k and v and multiply each of my vectors with these weight matrices and get 3 (n x d) matrices. Now are the q, k and v expected by torch.nn.MultiheadAttention just these three matrices or do I have it mistaken?
When you want to use self attention, just pass your input vector into torch.nn.MultiheadAttention for the query, key and value. attention = torch.nn.MultiheadAttention(<input-size>, <num-heads>) x, _ = attention(x, x, x) The pytorch class returns the output states (same shape as input) and the weights used in the attention process.
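To make the shapes concrete for the n-vectors case from the question, a small sketch with arbitrarily chosen n=10 and d=64 (note the default layout is (seq_len, batch, embed_dim), and num_heads must divide d):

import torch
import torch.nn as nn

n, d = 10, 64
x = torch.rand(n, 1, d)                  # n vectors, batch of 1, dimension d
attention = nn.MultiheadAttention(embed_dim=d, num_heads=8)
out, weights = attention(x, x, x)        # out: (10, 1, 64), weights: (1, 10, 10)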
https://stackoverflow.com/questions/65642832/
PyTorch - Shapes Don't Match
Unfortunately, I get the following RuntimeError: This error happens at epoch 1 during the last batch (so all other batches run through), and I don't know what causes the error in my code. Here is a code snippet of my function:

def gradient_penalty(critic, real, fake, device):
    BATCH_SIZE, C, H, W = real.shape
    epsilon = torch.rand(size = (BATCH_SIZE, 1, 1, 1)).repeat(1, C, H, W).to(device)

    # generate tensor filled only with ones
    x = torch.ones(size = (BATCH_SIZE, C, H, W), dtype = int)

    # interpolate images
    interpolated_images = real * epsilon + fake * (x - epsilon)

The variable real stands for images and has the shape (128, 3, 64, 64). I have to admit I don't fully understand the error message, i.e. where exactly do the shapes of the tensors not coincide? Any help would be appreciated!
You can discard incomplete batches when instantiating a DataLoader with the drop_last argument:

torch.utils.data.DataLoader(trainset, batch_size=128, drop_last=True)

However, this seems a bit of a drastic measure, since up to batch_size - 1 elements (here, up to 127) from your dataset will go to waste.
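If you'd rather not throw samples away, an alternative (a sketch, not part of the original answer) is to size the constant tensors from the batch actually received, e.g. inside gradient_penalty use torch.ones_like so the ones tensor always matches the current batch, even the last, smaller one:

x = torch.ones_like(real)   # same shape, dtype and device as the incoming batch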
https://stackoverflow.com/questions/65645645/
ResNet object has no attribute 'predict'
I have trained a CNN model in PyTorch to detect skin diseases in 6 different classes. My model came out with an accuracy of 92% and I saved it in a .pth file. I wish to use this model for predictions but I don't know how to do so. If anyone can aid me in the necessary steps, I will be grateful. I have tried just taking the image input straight from the folder, resizing it, and then running it through the model for predictions. The error I face is a ModuleAttributeError which says there is no attribute named predict. Now I do not understand where I went wrong and I know this is a simple task for most but I was hoping for some guidance in this regard. The dataset I used is the Skin Cancer MNIST: HAM10000 dataset from Kaggle and I trained it on ResNet18. If anyone has any pointers on fine-tuning the model, I would greatly appreciate it.

TLDR: I get an error called ModuleAttributeError that says the 'ResNet' module has no attribute 'predict'.

The image is preprocessed here as follows:

import os, cv2,itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pickle
from tqdm import tqdm
from glob import glob
from PIL import Image

# pytorch libraries
import torch
from torch import optim,nn
from torch.autograd import Variable
from torch.utils.data import DataLoader,Dataset
from torchvision import models,transforms

# sklearn libraries
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

np.random.seed(10)
torch.manual_seed(10)
torch.cuda.manual_seed(10)

print(os.listdir("/content/drive/My Drive/input"))

from google.colab import drive
drive.mount('/content/drive')

"""**Data analysis and preprocessing**"""

data_dir = '/content/drive/My Drive/input'
all_image_path = glob(os.path.join(data_dir, '*', '*.jpg'))
imageid_path_dict = {os.path.splitext(os.path.basename(x))[0]: x for x in all_image_path}
lesion_type_dict = {
    'nv': 'Melanocytic nevi',
    'mel': 'Melanoma',
    'bkl': 'Benign keratosis-like lesions ',
    'bcc': 'Basal cell carcinoma',
    'akiec': 'Actinic keratoses',
    'vasc': 'Vascular lesions',
    'df': 'Dermatofibroma'
}

def compute_img_mean_std(image_paths):
    """
        computing the mean and std of three channel on the whole dataset,
        first we should normalize the image from 0-255 to 0-1
    """

    img_h, img_w = 224, 224
    imgs = []
    means, stdevs = [], []

    for i in tqdm(range(len(image_paths))):
        img = cv2.imread(image_paths[i])
        img = cv2.resize(img, (img_h, img_w))
        imgs.append(img)

    imgs = np.stack(imgs, axis=3)
    print(imgs.shape)

    imgs = imgs.astype(np.float32) / 255.
for i in range(3): pixels = imgs[:, :, i, :].ravel() # resize to one row means.append(np.mean(pixels)) stdevs.append(np.std(pixels)) means.reverse() # BGR --> RGB stdevs.reverse() print("normMean = {}".format(means)) print("normStd = {}".format(stdevs)) return means,stdevs # norm_mean,norm_std = compute_img_mean_std(all_image_path) norm_mean = (0.763035, 0.54564625, 0.5700399) norm_std = (0.1409281, 0.15261264, 0.16997051) df_original = pd.read_csv(os.path.join(data_dir, 'HAM10000_metadata.csv')) df_original['path'] = df_original['image_id'].map(imageid_path_dict.get) df_original['cell_type'] = df_original['dx'].map(lesion_type_dict.get) df_original['cell_type_idx'] = pd.Categorical(df_original['cell_type']).codes df_original.head() # this will tell us how many images are associated with each lesion_id df_undup = df_original.groupby('lesion_id').count() # now we filter out lesion_id's that have only one image associated with it df_undup = df_undup[df_undup['image_id'] == 1] df_undup.reset_index(inplace=True) df_undup.head() # here we identify lesion_id's that have duplicate images and those that have only one image. def get_duplicates(x): unique_list = list(df_undup['lesion_id']) if x in unique_list: return 'unduplicated' else: return 'duplicated' # create a new colum that is a copy of the lesion_id column df_original['duplicates'] = df_original['lesion_id'] # apply the function to this new column df_original['duplicates'] = df_original['duplicates'].apply(get_duplicates) df_original.head() df_original['duplicates'].value_counts() # now we filter out images that don't have duplicates df_undup = df_original[df_original['duplicates'] == 'unduplicated'] df_undup.shape # now we create a val set using df because we are sure that none of these images have augmented duplicates in the train set y = df_undup['cell_type_idx'] _, df_val = train_test_split(df_undup, test_size=0.2, random_state=101, stratify=y) df_val.shape df_val['cell_type_idx'].value_counts() # This set will be df_original excluding all rows that are in the val set # This function identifies if an image is part of the train or val set. def get_val_rows(x): # create a list of all the lesion_id's in the val set val_list = list(df_val['image_id']) if str(x) in val_list: return 'val' else: return 'train' # identify train and val rows # create a new colum that is a copy of the image_id column df_original['train_or_val'] = df_original['image_id'] # apply the function to this new column df_original['train_or_val'] = df_original['train_or_val'].apply(get_val_rows) # filter out train rows df_train = df_original[df_original['train_or_val'] == 'train'] print(len(df_train)) print(len(df_val)) df_train['cell_type_idx'].value_counts() df_val['cell_type'].value_counts() # Copy fewer class to balance the number of 7 classes data_aug_rate = [15,10,5,50,0,40,5] for i in range(7): if data_aug_rate[i]: df_train=df_train.append([df_train.loc[df_train['cell_type_idx'] == i,:]]*(data_aug_rate[i]-1), ignore_index=True) df_train['cell_type'].value_counts() # # We can split the test set again in a validation set and a true test set: # df_val, df_test = train_test_split(df_val, test_size=0.5) df_train = df_train.reset_index() df_val = df_val.reset_index() # df_test = df_test.reset_index() Here is where I build the model: # feature_extract is a boolean that defines if we are finetuning or feature extracting. # If feature_extract = False, the model is finetuned and all model parameters are updated. 
# If feature_extract = True, only the last layer parameters are updated, the others remain fixed. def set_parameter_requires_grad(model, feature_extracting): if feature_extracting: for param in model.parameters(): param.requires_grad = False def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True): # Initialize these variables which will be set in this if statement. Each of these # variables is model specific. model_ft = None input_size = 0 if model_name == "resnet": """ Resnet18, resnet34, resnet50, resnet101 """ model_ft = models.resnet18(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, num_classes) input_size = 224 elif model_name == "vgg": """ VGG11_bn """ model_ft = models.vgg11_bn(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.classifier[6].in_features model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes) input_size = 224 elif model_name == "densenet": """ Densenet121 """ model_ft = models.densenet121(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.classifier.in_features model_ft.classifier = nn.Linear(num_ftrs, num_classes) input_size = 224 elif model_name == "inception": """ Inception v3 """ model_ft = models.inception_v3(pretrained=use_pretrained) set_parameter_requires_grad(model_ft, feature_extract) # Handle the auxilary net num_ftrs = model_ft.AuxLogits.fc.in_features model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes) # Handle the primary net num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs,num_classes) input_size = 299 else: print("Invalid model name, exiting...") exit() return model_ft, input_size # resnet,vgg,densenet,inception model_name = 'resnet' num_classes = 7 feature_extract = False # Initialize the model for this run model_ft, input_size = initialize_model(model_name, num_classes, feature_extract, use_pretrained=True) # Define the device: device = torch.device('cuda:0') # Put the model on the device: model = model_ft.to(device) # norm_mean = (0.49139968, 0.48215827, 0.44653124) # norm_std = (0.24703233, 0.24348505, 0.26158768) # define the transformation of the train images. train_transform = transforms.Compose([transforms.Resize((input_size,input_size)),transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(),transforms.RandomRotation(20), transforms.ColorJitter(brightness=0.1, contrast=0.1, hue=0.1), transforms.ToTensor(), transforms.Normalize(norm_mean, norm_std)]) # define the transformation of the val images. 
val_transform = transforms.Compose([transforms.Resize((input_size,input_size)), transforms.ToTensor(), transforms.Normalize(norm_mean, norm_std)]) # Define a pytorch dataloader for this dataset class HAM10000(Dataset): def __init__(self, df, transform=None): self.df = df self.transform = transform def __len__(self): return len(self.df) def __getitem__(self, index): # Load data and get label X = Image.open(self.df['path'][index]) y = torch.tensor(int(self.df['cell_type_idx'][index])) if self.transform: X = self.transform(X) return X, y # Define the training set using the table train_df and using our defined transitions (train_transform) training_set = HAM10000(df_train, transform=train_transform) train_loader = DataLoader(training_set, batch_size=64, shuffle=True, num_workers=4) # Same for the validation set: validation_set = HAM10000(df_val, transform=train_transform) val_loader = DataLoader(validation_set, batch_size=64, shuffle=False, num_workers=4) # we use Adam optimizer, use cross entropy loss as our loss function optimizer = optim.Adam(model.parameters(), lr=1e-5) criterion = nn.CrossEntropyLoss().to(device) Lastly, is the training process with a prediction function: # this function is used during training process, to calculation the loss and accuracy class AverageMeter(object): def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count total_loss_train, total_acc_train = [],[] def train(train_loader, model, criterion, optimizer, epoch): model.train() train_loss = AverageMeter() train_acc = AverageMeter() curr_iter = (epoch - 1) * len(train_loader) for i, data in enumerate(train_loader): images, labels = data N = images.size(0) # print('image shape:',images.size(0), 'label shape',labels.size(0)) images = Variable(images).to(device) labels = Variable(labels).to(device) optimizer.zero_grad() outputs = model(images) loss = criterion(outputs, labels) loss.backward() optimizer.step() prediction = outputs.max(1, keepdim=True)[1] train_acc.update(prediction.eq(labels.view_as(prediction)).sum().item()/N) train_loss.update(loss.item()) curr_iter += 1 if (i + 1) % 100 == 0: print('[epoch %d], [iter %d / %d], [train loss %.5f], [train acc %.5f]' % ( epoch, i + 1, len(train_loader), train_loss.avg, train_acc.avg)) total_loss_train.append(train_loss.avg) total_acc_train.append(train_acc.avg) return train_loss.avg, train_acc.avg def validate(val_loader, model, criterion, optimizer, epoch): model.eval() val_loss = AverageMeter() val_acc = AverageMeter() with torch.no_grad(): for i, data in enumerate(val_loader): images, labels = data N = images.size(0) images = Variable(images).to(device) labels = Variable(labels).to(device) outputs = model(images) prediction = outputs.max(1, keepdim=True)[1] val_acc.update(prediction.eq(labels.view_as(prediction)).sum().item()/N) val_loss.update(criterion(outputs, labels).item()) print('------------------------------------------------------------') print('[epoch %d], [val loss %.5f], [val acc %.5f]' % (epoch, val_loss.avg, val_acc.avg)) print('------------------------------------------------------------') return val_loss.avg, val_acc.avg import cv2 from PIL import Image, ImageOps import numpy as np model = model_ft model.load_state_dict(torch.load("/content/drive/MyDrive/input/trainbest.pth")) model.eval() def import_and_predict(image_data, model): size = (224, 224) image = ImageOps.fit(image_data, size, 
Image.ANTIALIAS) img = np.asarray(image) image_reshape = img[np.newaxis,...] prediction = model.predict(img_reshape) return prediction image = Image.open('/content/0365-0596-abd-88-05-0712-gf03.jpg') # st.image(image, use_column_width = True) predictions = import_and_predict(image, model) class_names = ["Melanocytic nevi", "dermatofibroma", "Benign keratosis-like lesions", "Basal cell carcinoma", "Actinic keratoses", "Vascular lesions", "Dermatofibroma"] string = "It is: " + class_names[np.argmax(predictions)] print(string) Here is the error that comes immediately after this is executed. --------------------------------------------------------------------------- ModuleAttributeError Traceback (most recent call last) <ipython-input-219-d563271b78c6> in <module>() 32 image = Image.open('/content/0365-0596-abd-88-05-0712-gf03.jpg') 33 # st.image(image, use_column_width = True) ---> 34 predictions = import_and_predict(image, model) 35 class_names = ["Melanocytic nevi", "dermatofibroma", "Benign keratosis-like lesions", "Basal cell carcinoma", "Actinic keratoses", "Vascular lesions", "Dermatofibroma"] 36 string = "It is: " + class_names[np.argmax(predictions)] 1 frames <ipython-input-219-d563271b78c6> in import_and_predict(image_data, model) 27 img = np.asarray(image) 28 image_reshape = img[np.newaxis,...] ---> 29 prediction = model.predict(img_reshape) 30 return prediction 31 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 777 return modules[name] 778 raise ModuleAttributeError("'{}' object has no attribute '{}'".format( --> 779 type(self).__name__, name)) 780 781 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: ModuleAttributeError: 'ResNet' object has no attribute 'predict' If anyone can help me fix the issue and get this to work as a classifier for skin diseases, I would be ever so thankful.
nn.Module doesn't have a predict function; just call the object for inference: prediction = model(img_reshape) This will call the object's __call__ method which, in turn, calls the model's forward function. (Note that your code assigns image_reshape but passes img_reshape; the variable names need to match.)
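Even with the call fixed, the model expects a normalized torch tensor rather than a raw numpy array. A minimal sketch of the full inference path, reusing the val_transform and device from your code (whether .convert('RGB') is needed depends on your images):
import torch
from PIL import Image

def import_and_predict(image_data, model):
    # reuse the validation preprocessing, then add a batch dimension
    x = val_transform(image_data).unsqueeze(0).to(device)  # shape (1, C, H, W)
    model.eval()
    with torch.no_grad():
        prediction = model(x)  # forward pass via __call__
    return prediction.cpu().numpy()

image = Image.open('/content/0365-0596-abd-88-05-0712-gf03.jpg').convert('RGB')
predictions = import_and_predict(image, model)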
https://stackoverflow.com/questions/65647833/
Different Matrix multiplication behaviour between Keras and Pytorch
I was trying to understand how matrix multiplication works over 2 dimensions in DL frameworks and I stumbled upon an article here. The author used Keras to explain it, and it works for him. But when I try to reproduce the same code in PyTorch, it fails with the error shown in the output of the following code. Pytorch Code: a = torch.ones((2,3,4)) b = torch.ones((7,4,5)) c = torch.matmul(a,b) print(c.shape) Output: RuntimeError: The size of tensor a (2) must match the size of tensor b (7) at non-singleton dimension 0 Keras Code: a = K.ones((2,3,4)) b = K.ones((7,4,5)) c = K.dot(a,b) print(c.shape) Output:(2, 3, 7, 5) Can somebody explain what it is that I'm doing wrong?
Matrix multiplication (aka matrix dot product) is a well-defined algebraic operation taking two 2D matrices. Deep-learning frameworks (e.g., tensorflow, keras, pytorch) are tuned to operate on batches of matrices, hence they usually implement batched matrix multiplication, that is, applying matrix dot product to a batch of 2D matrices. The examples you linked to show how matmul processes a batch of matrices: a = tf.ones((9, 8, 7, 4, 2)) b = tf.ones((9, 8, 7, 2, 5)) c = tf.matmul(a, b) Note how all but the last two dimensions are identical ((9, 8, 7)). This is NOT the case in your example - the leading ("batch") dimensions are different, hence the error. Using identical leading dimensions in pytorch: a = torch.ones((2,3,4)) b = torch.ones((2,4,5)) c = torch.matmul(a,b) print(c.shape) results in torch.Size([2, 3, 5]) If you insist on dot products with different batch dimensions, you will have to explicitly define how to multiply the two tensors. You can do that using the very flexible torch.einsum: a = torch.ones((2,3,4)) b = torch.ones((7,4,5)) c = torch.einsum('ijk,lkm->ijlm', a, b) print(c.shape) Resulting in: torch.Size([2, 3, 7, 5])
https://stackoverflow.com/questions/65650760/
CrossEntropyLoss PyTorch: Target size does not match torch.Size
I want to use the CrossEntropyLoss of PyTorch, but somehow my code only works with batch size 2, so I am assuming there is something wrong with the shapes of target and output. I get the following error: ValueError: Expected target size (50, 2), got torch.Size([50, 3]) My target size is (N=50, batch_size=3) and the output of my model is (N=50, batch_size=3, number of classes=2). Before the output layer my shape is (N=50, batch_size=3, dimensions=64). How do I need to change the shapes so that the CrossEntropyLoss works?
Without further information about your model, here's what I would do. You have a many-to-many RNN which outputs (seq_len, batch_size, nb_classes) and the target is (seq_len, batch_size). The nn.CrossEntropyLoss module can take additional dimensions (batch_size, nb_classes, d1, d2, ..., dK) as an input. You could make it work by permuting the axes, such that the output tensor is of shape (batch_size, nb_classes, seq_len). This should make it happen: output = output.permute(1, 2, 0) Additionally, your target will also have to change to be (batch_size, seq_len): target = target.permute(1, 0)
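As a sanity check with the shapes from the question (seq_len=50, batch_size=3, nb_classes=2; the names are illustrative):
import torch
import torch.nn as nn

seq_len, batch_size, nb_classes = 50, 3, 2
output = torch.randn(seq_len, batch_size, nb_classes)      # model output
target = torch.randint(nb_classes, (seq_len, batch_size))  # class indices

criterion = nn.CrossEntropyLoss()
loss = criterion(output.permute(1, 2, 0),  # -> (batch_size, nb_classes, seq_len)
                 target.permute(1, 0))     # -> (batch_size, seq_len)
print(loss)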
https://stackoverflow.com/questions/65652731/
IndexError: index 2047 is out of bounds for axis 0 with size 1638
I want to train my dataset with training data and validation data. Total data is 2048, train data is 1638, and validation data is 410 (20% of total). Here are my codes loading data (org: total training data) org_x = train_csv.drop(['id', 'digit', 'letter'], axis=1).values org_x = org_x.reshape(-1, 28, 28, 1) org_x = org_x/255 org_x = np.array(org_x) org_x = org_x.reshape(-1, 1, 28, 28) org_x = torch.Tensor(org_x) x_test = test_csv.drop(['id','letter'], axis=1).values x_test = x_test.reshape(-1, 28, 28, 1) x_test = x_test/255 x_test = np.array(x_test) x_test = x_test.reshape(-1, 1, 28, 28) x_test = torch.Tensor(x_test) y = train_csv['digit'] y = list(y) print(len(y)) org_y = np.zeros([len(y), 1]) for i in range(len(y)): org_y[i] = y[i] org1 = np.array(org_y, dtype=object) splitting data (org: total training data) from sklearn.model_selection import train_test_split x_train, x_valid, y_train, y_valid = train_test_split( org, org1, test_size=0.2, random_state=42) transform transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor(), transforms.Normalize((0.5, ), (0.5, )) ]) dataset class kmnistDataset(data.Dataset): def __init__(self, images, labels=None, transforms=None): self.x = images self.y = labels self.transforms = transforms def __len__(self): return (len(self.x)) def __getitem__(self, idx): data = np.asarray(self.x[idx][0:]).astype(np.uint8) if self.transforms: data = self.transforms(data) if self.y is not None: return (data, self.y[i]) else: return data train_data = kmnistDataset(x_train, y_train, transform) valid_data = kmnistDataset(x_valid, y_valid, transform) train_loader = DataLoader(train_data, batch_size=16, shuffle=True) valid_loader = DataLoader(valid_data, batch_size=16, shuffle = False) I'll skip the model structure. training(Here, I got the error message) n_epochs = 30 valid_loss_min = np.Inf for epoch in range(1, n_epochs+1): train_loss = 0 valid_loss = 0 ################### # train the model # ################### model.train() for data in train_loader: inputs, labels = data[0], data[1] optimizer.zero_grad() output = model(inputs) loss = criterion(output, labels) loss.backward() optimizer.step() train_loss += loss.item()*data.size(0) ##################### # validate the model# ##################### model.eval() for data in valid_loader: inputs, labels = data[0], data[1] output = model(inputs) loss = criterion(output, labels) valid_loss += loss.item()*data.size(0) train_loss = train_loss/ len(train_loader.dataset) valid_loss = valid_loss / len(valid_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch, train_loss, valid_loss)) Although I checked the data size, I got the error message below. 
index 2047 is out of bounds for axis 0 with size 1638 --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-42-b8783819421f> in <module> 11 ################### 12 model.train() ---> 13 for data in train_loader: 14 inputs, labels = data[0], data[1] 15 optimizer.zero_grad() /opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 433 if self._sampler_iter is None: 434 self._reset() --> 435 data = self._next_data() 436 self._num_yielded += 1 437 if self._dataset_kind == _DatasetKind.Iterable and \ /opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in_next_data(self) 473 def _next_data(self): 474 index = self._next_index() # may raise StopIteration --> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 476 if self._pin_memory: 477 data = _utils.pin_memory.pin_memory(data) /opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /opt/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] <ipython-input-38-e5c87dd8a7ff> in __getitem__(self, idx) 17 18 if self.y is not None: ---> 19 return (data, self.y[i]) 20 else: 21 return data IndexError: index 2047 is out of bounds for axis 0 with size 1638 Can you explain why and how to solve it?
At first glance, you are using incorrect shapes: org_x = org_x.reshape(-1, 28, 28, 1). The channel axis should be the second one (unlike in TensorFlow), as (batch_size, channels, height, width): org_x = org_x.reshape(-1, 1, 28, 28) Same with x_test: x_test = x_test.reshape(-1, 1, 28, 28) Also, you are accessing a list out of bounds: you accessed self.y with i. Seems to me you should be returning (data, self.y[idx]) instead.
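Concretely, the fixed __getitem__ would look like this (the rest of the class unchanged):
def __getitem__(self, idx):
    data = np.asarray(self.x[idx][0:]).astype(np.uint8)
    if self.transforms:
        data = self.transforms(data)
    if self.y is not None:
        return (data, self.y[idx])  # idx, not the stale loop variable i
    else:
        return data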
https://stackoverflow.com/questions/65653192/
pytorch - visualization of epoch versus error
I'm trying to learn pytorch, moving over from tensorflow. Following a tutorial that used this code import torch import torch.nn as nn import numpy as np import matplotlib as plt from sklearn import datasets from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split # 0) Prepare data bc = datasets.load_breast_cancer() X, y = bc.data, bc.target n_samples, n_features = X.shape X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234) # scale sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) X_train = torch.from_numpy(X_train.astype(np.float32)) X_test = torch.from_numpy(X_test.astype(np.float32)) y_train = torch.from_numpy(y_train.astype(np.float32)) y_test = torch.from_numpy(y_test.astype(np.float32)) y_train = y_train.view(y_train.shape[0], 1) y_test = y_test.view(y_test.shape[0], 1) # 1) Model # Linear model f = wx + b , sigmoid at the end class Model(nn.Module): def __init__(self, n_input_features): super(Model, self).__init__() self.linear = nn.Linear(n_input_features, 1) def forward(self, x): y_pred = torch.sigmoid(self.linear(x)) return y_pred model = Model(n_features) # 2) Loss and optimizer num_epochs = 100 learning_rate = 0.01 criterion = nn.BCELoss() optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) # 3) Training loop for epoch in range(num_epochs): # Forward pass and loss y_pred = model(X_train) loss = criterion(y_pred, y_train) # Backward pass and update loss.backward() optimizer.step() # zero grad before new step optimizer.zero_grad() if (epoch+1) % 10 == 0: print(f'epoch: {epoch+1}, loss = {loss.item():.4f}') with torch.no_grad(): y_predicted = model(X_test) y_predicted_cls = y_predicted.round() acc = y_predicted_cls.eq(y_test).sum() / float(y_test.shape[0]) print(f'accuracy: {acc.item():.4f}') Trying to visualize the outputted model, and to visualize how similar the data input to output is, I'm having a bit of problems simply using matploblib, predicted = model(X_test).detach().numpy() plt.plot(X_test, y_train, 'ro') plt.plot(X_test, y_predicted, 'b') plt.show() What is a good way to visualize this data? Thanks in advance, using Ivan's answer - trying to add the train_acc to the function, train_loss, valid_loss, valid_acc, train_acc = [], [], [],[] for epoch in range(num_epochs): # Forward pass and loss y_pred = model(X_train) loss = criterion(y_pred, y_train) # Backward pass and update loss.backward() optimizer.step() # zero grad before new step optimizer.zero_grad() train_loss.append(loss.item()) # acc_init = model(X_test).round().eq(y_train).sum() / float(y_train.shape[0]) train_acc.append(acc_init) with torch.no_grad(): y_pred = model(X_test) loss = criterion(y_pred, y_test) y_pred_cls = y_pred.round() acc = y_pred_cls.eq(y_test).sum() / float(y_test.shape[0]) valid_loss.append(loss.item()) valid_acc.append(acc.item()) if (epoch+1) % 10 == 0: print(f'epoch: {epoch+1}, loss = {loss.item():.4f}') print(f'valid loss: {loss.item():.4f} valid acc: {acc.item():.4f}')
Simply collect the losses over epochs, then plot the values. You could compute the validation accuracy and its loss on every epoch, after you've updated the model. Something like: train_loss, valid_loss, valid_acc = [], [], [] for epoch in range(num_epochs): # Forward pass and loss y_pred = model(X_train) loss = criterion(y_pred, y_train) # Backward pass and update loss.backward() optimizer.step() # zero grad before new step optimizer.zero_grad() train_loss.append(loss.item()) with torch.no_grad(): y_pred = model(X_test) loss = criterion(y_pred, y_test) y_pred_cls = y_pred.round() acc = y_pred_cls.eq(y_test).sum() / float(y_test.shape[0]) valid_loss.append(loss.item()) valid_acc.append(acc.item()) if (epoch+1) % 10 == 0: print(f'epoch: {epoch+1}, loss = {loss.item():.4f}') print(f'valid loss: {loss.item():.4f} valid acc: {acc.item():.4f}') Then plot with matplotlib or any other library: plt.plot(train_loss, label='train_loss') plt.plot(valid_loss, label='valid_loss') plt.title('loss over epochs') plt.legend() Might as well compute the training accuracy!: plt.plot(train_acc, label='train_acc') plt.plot(valid_acc, label='valid_acc') plt.title('accuracy over epochs') plt.legend()
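If you also want the train_acc from your edit, compute it from the training predictions and labels (not from X_test); a minimal addition inside the epoch loop, right after the forward pass on X_train:
# after: y_pred = model(X_train)
with torch.no_grad():
    acc = y_pred.round().eq(y_train).sum() / float(y_train.shape[0])
    train_acc.append(acc.item())
Also note that the plotting requires import matplotlib.pyplot as plt; your code imports matplotlib as plt, which has no plot function.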
https://stackoverflow.com/questions/65653931/
How can I express this custom loss function in tensorflow?
I've got a loss function that fulfills my needs, but is only in PyTorch. I need to implement it into my TensorFlow code, but while most of it can trivially be "translated" I am stuck with a particular line: y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max # to be "1" after sigmoid You can see the whole code in following and it is indeed pretty straight forward except for that line: def get_loss(y_hat, y): # No loss on diagonal B, N, _ = y_hat.shape y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max # to be "1" after sigmoid # calc loss loss = F.binary_cross_entropy_with_logits(y_hat, y) # cross entropy y_hat = torch.sigmoid(y_hat) tp = (y_hat * y).sum(dim=(1, 2)) fn = ((1. - y_hat) * y).sum(dim=(1, 2)) fp = (y_hat * (1. - y)).sum(dim=(1, 2)) loss = loss - ((2 * tp) / (2 * tp + fp + fn + 1e-10)).sum() # fscore return loss So far I came up with following: def get_loss(y_hat, y): loss = tf.keras.losses.BinaryCrossentropy()(y_hat,y) # cross entropy (but no logits) y_hat = tf.math.sigmoid(y_hat) tp = tf.math.reduce_sum(tf.multiply(y_hat, y),[1,2]) fn = tf.math.reduce_sum((y - tf.multiply(y_hat, y)),[1,2]) fp = tf.math.reduce_sum((y_hat -tf.multiply(y_hat,y)),[1,2]) loss = loss - ((2 * tp) / tf.math.reduce_sum((2 * tp + fp + fn + 1e-10))) # fscore return loss so my questions boil down to: What does torch.finfo() do and how to express it in TensorFlow? Does y_hat.dtype just return the data type?
1. What does torch.finfo() do and how to express it in TensorFlow? .finfo() provides a neat way to get machine limits for floating-point types. This function is available in Numpy, Torch as well as Tensorflow experimental. .finfo().max returns the largest possible number representable as that dtype. NOTE: There is also a .iinfo() for integer types. Here are a few examples of finfo and iinfo in action. print('FLOATS') print('float16',torch.finfo(torch.float16).max) print('float32',torch.finfo(torch.float32).max) print('float64',torch.finfo(torch.float64).max) print('') print('INTEGERS') print('int16',torch.iinfo(torch.int16).max) print('int32',torch.iinfo(torch.int32).max) print('int64',torch.iinfo(torch.int64).max) FLOATS float16 65504.0 float32 3.4028234663852886e+38 float64 1.7976931348623157e+308 INTEGERS int16 32767 int32 2147483647 int64 9223372036854775807 If you want to implement this in tensorflow, you can use tf.experimental.numpy.finfo to solve this. print(tf.experimental.numpy.finfo(tf.float32)) print('Max ->',tf.experimental.numpy.finfo(tf.float32).max) #<---- THIS IS WHAT YOU WANT Machine parameters for float32 --------------------------------------------------------------- precision = 6 resolution = 1.0000000e-06 machep = -23 eps = 1.1920929e-07 negep = -24 epsneg = 5.9604645e-08 minexp = -126 tiny = 1.1754944e-38 maxexp = 128 max = 3.4028235e+38 nexp = 8 min = -max --------------------------------------------------------------- Max -> 3.4028235e+38 2. Does y_hat.dtype just return the data type? YES. In torch, it would return torch.float32 or something like that. In Tensorflow it would return tf.float32 or something like that.
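As for the line you were stuck on: TensorFlow tensors do not support item assignment, but since those indices select the diagonal of each (N, N) matrix in the batch, you could express it with tf.linalg.set_diag. A sketch, assuming y_hat has shape (B, N, N):
import tensorflow as tf

def fill_diagonal_with_max(y_hat):
    # equivalent of: y_hat[:, arange(N), arange(N)] = finfo(y_hat.dtype).max
    max_val = tf.experimental.numpy.finfo(y_hat.dtype).max
    diag = tf.fill(tf.shape(y_hat)[:-1], tf.cast(max_val, y_hat.dtype))  # (B, N)
    return tf.linalg.set_diag(y_hat, diag)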
https://stackoverflow.com/questions/65657086/
Pytorch CNN not learning
I am currently attempting to train a PyTorch CNN to classify demented and non-demented individuals based on MRI scans. However, during training, the loss of the model remains constant and the accuracy, while attempting to differentiate 3 classes, remains at .333. I have tried many of the suggestions offered by respondents to similar questions, yet none of them have worked for my specific task. These pieces of advice included changing the amount of convolutional units in the model, trying different loss functions, training the model on the raw dataset and then scaling up to a larger, augmented set of images, and altering parameters such as learning rate and batch size. I have attached my code and examples of input imagery below. Image Examples Healthy Brain Mild Cognitive Impairment Brain Alzheimer's Brain Preprocessing Code torch.cuda.set_device(0) g = True if g == True: for f in final_MRI_data: path = os.path.join(final_MRI_dir, f) matrix = nib.load(path) matrix.get_fdata() matrix = matrix.get_fdata() matrix.shape slice_ = matrix[90, :, :] img = Image.fromarray(slice_) img = img.crop((left, top, right, bottom)) img = ImageOps.grayscale(img) data_matrices.append(img) postda_data = [] for image in data_matrices: for i in range(30): transformed_img = transforms(image) transformed_img = np.asarray(transformed_img) postda_data.append(transformed_img) final_MRI_labels = list(itertools.chain.from_iterable(itertools.repeat(x, 30) for x in final_MRI_labels)) X = torch.Tensor(np.asarray([i for i in postda_data])).view(-1, 145, 200) print(X.size()) y = torch.Tensor([i for i in final_MRI_labels]) #Target labels for cross entropy loss function z = [] for val in final_MRI_labels: z.append(np.eye(3)[val]) z = torch.Tensor(np.asarray(z)) #Target one-hot encoded matrices for model testing function Network Class class Hl_Model(nn.Module): torch.cuda.set_device(0) def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 3, stride=2) self.conv2 = nn.Conv2d(32, 64, 3, stride=2) self.conv3 = nn.Conv2d(64, 128, 3, stride=2) self.conv4 = nn.Conv2d(128, 256, 3, stride=2) x = torch.randn(145,200).view(-1,1,145,200) self._to_linear = None self.convs(x) self.fc1 = nn.Linear(self._to_linear, 128, bias=True) self.fc2 = nn.Linear(128, 3) def convs(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.conv3(x)) x = F.max_pool2d(F.relu(self.conv4(x)), (2, 2), stride=2) if self._to_linear is None: self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2] return x def forward(self, x): x = self.convs(x) x = x.view(-1, self._to_linear) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.softmax(x, dim=1) Training Function def train(net, train_fold_x, train_fold_y): optimizer = optim.Adam(net.parameters(), lr=0.05) BATCH_SIZE = 5 EPOCHS = 50 for epoch in range(EPOCHS): for i in tqdm(range(0, len(train_fold_x), BATCH_SIZE)): batch_x = train_fold_x[i:i+BATCH_SIZE].view(-1, 1, 145, 200) batch_y = train_fold_y[i:i+BATCH_SIZE] batch_x, batch_y = batch_x.to(device), batch_y.to(device) optimizer.zero_grad() outputs = net(batch_x) batch_y = batch_y.long() loss = loss_func(outputs, batch_y) loss.backward() optimizer.step() print(f"Epoch: {epoch} Loss: {loss}") Testing Function def test(net, test_fold_x, test_fold_y): test_fold_x.to(device) test_fold_y.to(device) correct = 0 total = 0 with torch.no_grad(): for i in tqdm(range(len(test_fold_x))): real_class = torch.argmax(test_fold_y[i]).to(device) net_out = net(test_fold_x[i].view(-1, 1, 145, 200).to(device)) pred_class = torch.argmax(net_out) if 
pred_class == real_class: correct += 1 total +=1 Cross-Validation Loop for i in range(6): result = next(skf.split(X, y)) X_train = X[result[0]] X_test = X[result[1]] y_train = y[result[0]] y_test = z[result[1]] train(hl_model, X_train, y_train) test(hl_model, X_test, y_test) Output During Training: Epoch: 0 Loss: 1.5514447689056396 Epoch: 1 Loss: 1.5514447689056396 Epoch: 2 Loss: 1.5514447689056396 This output repeats all the way to "Epoch: 49 Loss: 
1.5514447689056396" Thanks in advance for any advice.
It seems that the problem is due to the softmax activation at the last step of the model's forward, combined with your loss function, loss_func = nn.CrossEntropyLoss(), which actually expects raw logits instead. Please check the official documentation: class torch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean') and This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. The input is expected to contain raw, unnormalized scores for each class.
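Concretely, a minimal fix is to return the raw logits from forward and apply softmax only when you need probabilities at inference time (a sketch against the model in the question):
def forward(self, x):
    x = self.convs(x)
    x = x.view(-1, self._to_linear)
    x = F.relu(self.fc1(x))
    return self.fc2(x)  # raw logits, as nn.CrossEntropyLoss expects

# at inference time, if class probabilities are needed:
# probs = F.softmax(model(batch_x), dim=1)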
https://stackoverflow.com/questions/65659888/
How many epochs for training 1k images?
I am doing training for detecting objects using YOLOv3, but I face a problem: when I set batch_size > 1 it causes a CUDA out-of-memory error. I searched on Google for another solution and found it depends on my GPU (GTX 1070 8G). Maybe the number of epochs is high and needs to be optimized; should the epoch number be decreased? I am training on 1k images with 200 pictures for validation. What is the best number of epochs to set to avoid overfitting?
Your model's overfitting won't depend on the number of epochs alone. Since you have made a validation split in your data, make sure that your train loss vs. val loss (or train accuracy vs. val accuracy) stay nearly the same; this will assure that your model is not overfitting.
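In practice, rather than guessing a fixed epoch count, you can let the validation metric decide when to stop. A generic early-stopping sketch (train_one_epoch, evaluate and save_checkpoint are placeholders for your own training and evaluation code, not YOLOv3 functions):
best_val_loss, patience, bad_epochs = float('inf'), 5, 0
for epoch in range(max_epochs):
    train_one_epoch(model)        # placeholder: one pass over the 1k train images
    val_loss = evaluate(model)    # placeholder: loss on the 200 validation images
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
        save_checkpoint(model)    # placeholder: keep the best weights so far
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # validation stopped improving, so stop
            break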
https://stackoverflow.com/questions/65672181/