Why doesn't setting random seed give same performance across runs?
I am training some deep learning models using PyTorch, which also involves NumPy. Since the randomisation is not truly random but pseudo-random, why aren't the numbers (accuracy, loss, etc.) the same across different runs? I mean, even if I do not set a random seed, there should be some default seed according to which my code runs and gives the same results across runs. Is there something more to it?
I don't think the truly random vs. pseudo-random discussion is relevant here. Different numbers may be generated depending on factors like date and time, which is why you should set a seed. If you involve PyTorch and CUDA, things get a little more complicated. Here is an article talking about randomness and reproducibility. In short, you need to set seeds for numpy + PyTorch and also set the backend to deterministic operations.
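In code, the full recipe might look like this (a minimal sketch; the helper name set_seed is my own, not a library function):

import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)                  # Python's built-in RNG
    np.random.seed(seed)               # NumPy RNG
    torch.manual_seed(seed)            # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)   # PyTorch RNGs on all GPUs
    torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable non-deterministic autotuning

Call set_seed(...) once at the start of the script, before building the model or the data loaders.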
https://stackoverflow.com/questions/64964537/
Is there a way to define a 'heterogeneous' kernel design to incorporate linear operators into the regression for GPflow (or GPytorch/GPy/...)?
I'm trying to perform a GP regression with linear operators as described in, for example, this paper by Särkkä: https://users.aalto.fi/~ssarkka/pub/spde.pdf In this example we can see from equation (8) that I need a different kernel function for the four covariance blocks (of training and test data) in the complete covariance matrix. This is definitely possible and valid, but I would like to include this in a kernel definition of (preferably) GPflow, or GPyTorch, GPy or the like. However, in the documentation for kernel design in GPflow, the only possibility is to define a covariance function that acts on all covariance blocks. In principle, the method above should be straightforward to add myself (the kernel function expressions can be derived analytically), but I don't see any way of incorporating the 'heterogeneous' kernel functions into the regression or kernel classes. I tried to consult other packages such as GPyTorch and GPy, but again, the kernel design does not seem to allow this. Maybe I'm missing something here, maybe I'm not familiar enough with the underlying implementation to assess this, but if someone has done this before or sees the (what should be reasonably straightforward?) implementation possibility, I would be happy to find out. Thank you very much in advance for your answer! Kind regards
This should be reasonably straightforward, though it requires building a custom kernel. Basically, you need a kernel that knows, for each input, what the linear operator for the corresponding output is (whether this is a function observation/identity operator, integral observation, derivative observation, etc.). You can achieve this by including an extra column in your input matrix X, similar to how it's done for the gpflow.kernels.Coregion kernel (see this notebook). You would then need to define a new kernel with K and K_diag methods that, for each linear operator type, find the corresponding rows in the input matrix and pass them to the appropriate covariance function (using tf.dynamic_partition and tf.dynamic_stitch; this is used in a very similar way in GPflow's SwitchedLikelihood class). The full implementation would probably take half a day or so, which is beyond what I can do here, but I hope this is a useful starting pointer, and you're very welcome to join the GPflow slack (invite link in the GPflow README) and discuss it in more detail there!
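For illustration only, a rough skeleton of such a kernel in GPflow 2.x, using a simple masked sum over blocks rather than the more efficient dynamic_partition/dynamic_stitch route described above; the class name, the block_fns container, and the operator-code-in-the-last-column convention are all assumptions of this sketch, not GPflow API:

import tensorflow as tf
import gpflow

class HeterogeneousKernel(gpflow.kernels.Kernel):
    """Sketch: the last column of X holds an integer code for the linear operator type."""
    def __init__(self, block_fns):
        # block_fns[i][j](X1, X2) -> covariance block between operator types i and j
        super().__init__()
        self.block_fns = block_fns
        self.n_types = len(block_fns)

    def K(self, X, X2=None):
        if X2 is None:
            X2 = X
        t1 = tf.cast(X[:, -1], tf.int32)    # operator code per row of X
        t2 = tf.cast(X2[:, -1], tf.int32)
        x1, x2 = X[:, :-1], X2[:, :-1]      # the actual inputs
        out = tf.zeros((tf.shape(X)[0], tf.shape(X2)[0]), dtype=X.dtype)
        for i in range(self.n_types):
            for j in range(self.n_types):
                mask = tf.cast(t1[:, None] == i, X.dtype) * tf.cast(t2[None, :] == j, X.dtype)
                out += mask * self.block_fns[i][j](x1, x2)  # zero outside block (i, j)
        return out

    def K_diag(self, X):
        return tf.linalg.diag_part(self.K(X))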
https://stackoverflow.com/questions/64967921/
How to resize a batch of images for use with Pytorch Linear Regression?
I am trying to create a simple linear regression neural net for use with batches of images. The input dimensions are [BatchSize, 3, Width, Height] with the second dimension representing the RGB channels of the input image. Here is my (broken) attempt at that regression model: class LinearNet(torch.nn.Module): def __init__(self, Chn, W,H, nHidden): """ Input: A [BatchSize x Channels x Width x Height] set of images Output: A fitted regression model with weights dimension : [Width x Height] """ super(LinearNet, self).__init__() self.Chn = Chn self.W = W self.H = H self.hidden = torch.nn.Linear(Chn*W*H,nHidden) # hidden layer self.predict = torch.nn.Linear(nHidden, Chn*W*H) # output layer def forward(self, x): torch.reshape(x, (-1,self.Chn*self.W*self.H)) # FAILS here # x = x.resize(-1,self.Chn*self.W*self.H) x = F.relu(self.hidden(x)) # activation function for hidden layer x = self.predict(x) # linear output x = x.resize(-1,self.Chn, self.W,self.H) return x When sending in a batch of images with dimensions [128 x 3 x 96 x 128] this fails on the indicated line: RuntimeError: mat1 and mat2 shapes cannot be multiplied (36864x128 and 36864x256) How should the matrix dimensions be properly manipulated to use these pytorch functions? Update Based on a (since deleted) comment I have updated the code to use torch.reshape.
Solution 1 As a possible solution, you can get the batch size from the input x with x.shape[0] and use it in the reshape later: import torch batch = torch.zeros([128, 3, 96, 128], dtype=torch.float32) # -1 will compute the last dimension automatically batch_upd = torch.reshape(batch, (batch.shape[0], -1)) print(batch_upd.shape) The output of this code is torch.Size([128, 36864]). Solution 2 As another possible solution you can use flatten: batch_upd = batch.flatten(start_dim=1) which will result in the same output. As for your next problem, consider going through the modified forward code: def forward(self, x): x = x.flatten(1) # shape: [B, C, W, H] -> [B, C*W*H] x = F.relu(self.hidden(x)) # activation function for hidden layer x = self.predict(x) # linear output x = x.reshape((-1, self.Chn, self.W, self.H)) # shape: [B, C*W*H] -> [B, C, W, H] return x Here is a successful usage example: ln = LinearNet(3, 96, 128, 256) batch = torch.zeros((128, 3, 96, 128)) res = ln(batch) print(res.shape) # torch.Size([128, 3, 96, 128])
https://stackoverflow.com/questions/64971976/
OpenCV and PyTorch inverse transform not working
I have a transforms class which only does: if transform is None: transform = transforms.Compose([ transforms.Resize((256, 256)), transforms.ToTensor() ]) root = os.path.join(PROJECT_ROOT_DIR, "data") super(AttributesDataset, self).__init__() self.data = torchvision.datasets.CelebA( root=root, split=split, target_type='attr', download=True, transform=transform ) From the documentation, I understand that this implies just a scale-down of values in the range 0,1 ie all pixel values shall lie between [0,1] (I have verified this as well). I want to visualize some of the outputs coming from the model. As such, I created a simple method which does:- for img, label in dataloader: img.squeeze_(0) # permute the channels. cv2 expects image in format (h, w, c) unscaled_img = img.permute(1, 2, 0) # move images to cpu and convert to numpy as required by cv2 library unscaled_img = torch.round(unscaled_img * 255) unscaled_img = unscaled_img.to(torch.uint8) # unscaled_img = np.rint(unscaled_img * 255).astype(np.uint8) unscaled_img = cv2.cvtColor(unscaled_img, cv2.COLOR_RGB2BGR) cv2.imshow(unscaled_img.numpy()) However, all the images that are created have an unusually blue shade. For instance, Can someone please tell me what exactly am I doing wrong here? Your help would be highly appreciated
Solved by @LajosArpad's comment. The culprit was the line unscaled_img = cv2.cvtColor(unscaled_img, cv2.COLOR_RGB2BGR) Removing it resulted in correct values.
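For reference, a sketch of the corrected visualization loop (the window name "preview" and the cv2.waitKey call are my additions; note that cv2.imshow needs a window-name argument and a NumPy array, which the call in the question omitted):

for img, label in dataloader:
    img.squeeze_(0)                      # drop the batch dimension
    unscaled_img = img.permute(1, 2, 0)  # (c, h, w) -> (h, w, c), as cv2 expects
    unscaled_img = torch.round(unscaled_img * 255).to(torch.uint8)
    # no cv2.cvtColor call here -- that was producing the blue tint
    cv2.imshow("preview", unscaled_img.numpy())
    cv2.waitKey(0)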
https://stackoverflow.com/questions/64973768/
neural networks in pytorch
I am using PyTorch and need to implement this as part of a neural network. Is there a particular way to code the layers shown in purple in the attached figure? def forward(self, images: torch.Tensor) -> torch.Tensor: x = self.fc1(x) return x
Combining all my comments into the answer. To split the output vector from the first layer, which has shape [batch_size, 4608], you can use the torch.split function as follows: batch = torch.zeros((10, 4608)) sub_batch_1, sub_batch_2 = torch.split(batch, 2304, dim=1) print(sub_batch_1.shape, sub_batch_2.shape) This code results in two tensors (torch.Size([10, 2304]), torch.Size([10, 2304])) I am not sure about the MAX logic you need. If it is about taking the maximum element at each position across the two tensors obtained from the split, then the torch.maximum function can be used to do so.
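Continuing the snippet above, that elementwise max would look like this (the name max_batch is my own):

max_batch = torch.maximum(sub_batch_1, sub_batch_2)
print(max_batch.shape)  # torch.Size([10, 2304]) -- elementwise max of the two halves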
https://stackoverflow.com/questions/64974317/
Translate Keras functional API to PyTorch nn.Module - Conv2d
I'm trying to translate the following Inception code from tutorial in Keras functional API (link) to PyTorch nn.Module: def conv_module(x, K, kX, kY, stride, chanDim, padding="same"): # define a CONV => BN => RELU pattern x = Conv2D(K, (kX, kY), strides=stride, padding=padding)(x) x = BatchNormalization(axis=chanDim)(x) x = Activation("relu")(x) # return the block return x def inception_module(x, numK1x1, numK3x3, chanDim): # define two CONV modules, then concatenate across the # channel dimension conv_1x1 = conv_module(x, numK1x1, 1, 1, (1, 1), chanDim) conv_3x3 = conv_module(x, numK3x3, 3, 3, (1, 1), chanDim) x = concatenate([conv_1x1, conv_3x3], axis=chanDim) # return the block return x I'm having trouble translating the Conv2D. If I understand correctly: There is no in_features in Keras - how should I represent it in PyTorch? Keras filters is PyTorch out_features kernel_size, stride and padding are the same (maybe a few options for padding are called differently) Do I understand this correctly? If so, what should I do with in_features? My code so far: class BasicConv2d(nn.Module): def __init__( self, in_channels: int, out_channels: int, kernel_size: int, stride: int ) -> None: super().__init__() self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride) self.bn = nn.BatchNorm2d(out_channels, eps=0.001) self.relu = nn.ReLU() def forward(self, x: Tensor) -> Tensor: x = self.conv(x) x = self.bn(x) x = self.relu(x) return x class Inception(nn.Module): def __init__( self, in_channels: int, num_1x1_filters: int, num_3x3_filters: int, ) -> None: super().__init__() # how to fill this further? self.conv_1d = BasicConv2d( num_1x1_filters, )
You're correct for the most part. The in_channels parameter in Conv2d corresponds to the number of output channels from the previous layer. If Conv2d is the first layer, in_channels corresponds to the number of channels in your image: 1 for a grayscale image and 3 for an RGB image. But I'm not sure how you could concat the two BasicConv2d outputs. Fixing batch_size as 1, assume that the image size is 256*256 and out_channels for conv1x1 is 64. This would output a tensor of shape torch.Size([1, 64, 256, 256]). Assuming out_channels of the conv3x3 is 32, this layer would output a tensor of shape torch.Size([1, 32, 254, 254]). We will not be able to concat these two tensors without some trick, such as using padding=1 for the conv3x3 alone, as this would produce an output of shape torch.Size([1, 32, 256, 256]) and therefore we would be able to concat.
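To make this concrete, here is one hedged way the question's Inception module might be completed, following the padding suggestion above; note that a padding argument is added to BasicConv2d (the question's version lacked one) so the 3x3 branch keeps the spatial size and the two branches can be concatenated:

import torch
import torch.nn as nn
from torch import Tensor

class BasicConv2d(nn.Module):
    def __init__(self, in_channels: int, out_channels: int,
                 kernel_size: int, stride: int, padding: int = 0) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
        self.relu = nn.ReLU()

    def forward(self, x: Tensor) -> Tensor:
        return self.relu(self.bn(self.conv(x)))

class Inception(nn.Module):
    def __init__(self, in_channels: int, num_1x1_filters: int, num_3x3_filters: int) -> None:
        super().__init__()
        self.conv_1x1 = BasicConv2d(in_channels, num_1x1_filters, kernel_size=1, stride=1)
        self.conv_3x3 = BasicConv2d(in_channels, num_3x3_filters, kernel_size=3, stride=1, padding=1)

    def forward(self, x: Tensor) -> Tensor:
        # concatenate along the channel dimension, like Keras' concatenate(axis=chanDim)
        return torch.cat([self.conv_1x1(x), self.conv_3x3(x)], dim=1)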
https://stackoverflow.com/questions/64974605/
input for torch.nn.functional.gumbel_softmax
Say I have a tensor named attn_weights of size [1,a], entries of which indicate the attention weights between the given query and |a| keys. I want to select the largest one using torch.nn.functional.gumbel_softmax. I find the docs describe the parameter as logits - […, num_features] unnormalized log probabilities. I wonder whether I should take the log of attn_weights before passing it into gumbel_softmax? And I find Wikipedia defines logit = log(p/(1-p)), which is different from simply taking the logarithm. I wonder which one I should pass to the function? Further, I wonder how to choose tau in gumbel_softmax, any guidelines?
I wonder whether should I take log of attn_weights before passing it into gumbel_softmax? If attn_weights are probabilities (sum to 1; e.g., output of a softmax), then yes. Otherwise, no. I wonder how to choose tau in gumbel_softmax, any guidelines? Usually, it requires tuning. The references provided in the docs can help you with that. From Categorical Reparameterizaion with Gumbel-Softmax: Figure 1, caption: ... (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories. Section 2.2, 2nd paragraph (emphasis mine): While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature. Lastly, they remind the reader that tau can be learned: If τ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process.
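As a minimal sketch of the first point (assuming attn_weights really are probabilities, e.g. the output of a softmax):

import torch
import torch.nn.functional as F

attn_weights = torch.softmax(torch.randn(1, 5), dim=-1)   # probabilities over 5 keys
logits = torch.log(attn_weights)                          # valid "unnormalized log probabilities"
selection = F.gumbel_softmax(logits, tau=1.0, hard=True)  # hard=True returns a one-hot vector
print(selection)                                          # e.g. tensor([[0., 0., 1., 0., 0.]])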
https://stackoverflow.com/questions/64980330/
comparing torch.nn.CrossEntropyLoss with label as int and prob tensor produces an error
I have this criterion: criterion = nn.CrossEntropyLoss() And during training phase i have: label = tensor([0.]) outputs = tensor([[0.0035, 0.0468]], grad_fn=<AddmmBackward>) When i try to compare them: criterion(outputs, label) I get this error: ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
nn.CrossEntropyLoss expects its label input to be of type torch.Long and not torch.Float. Note that this behavior is the opposite of nn.BCELoss, where the target is expected to be of the same type as the input. If you simply remove the . from your label: label = torch.tensor([0]) # no . after 0 - now it is an integer pytorch will automatically create label as torch.Long type, and you are okay: In [*]: criterion(outputs, torch.tensor([0])) Out[*]: tensor(0.7150) Commenting on the other answers by planet_pluto and Craig.Li: A more general way of casting an existing tensor is to use .to(...): label = torch.tensor([0.]).to(dtype=torch.long) However, creating-and-casting is not a very efficient way of doing things: think about it, you make pytorch create a torch.float tensor and then cast it to torch.long. Alternatively, you can explicitly define the desired dtype upon creation of the tensor: label = torch.tensor([0.], dtype=torch.long) This way pytorch creates label with the desired dtype and no second phase of casting is required.
https://stackoverflow.com/questions/64982109/
What exactly does the forward function output in Pytorch?
This example is taken verbatim from the PyTorch Documentation. Now I do have some background on Deep Learning in general and know that it should be obvious that the forward call represents a forward pass, passing through different layers and finally reaching the end, with 10 outputs in this case, then you take the output of the forward pass and compute the loss using the loss function one defined. Now, I forgot what exactly the output from the forward() pass yields me in this scenario. I thought that the last layer in a Neural Network should be some sort of activation function like sigmoid() or softmax(), but I did not see these being defined anywhere, furthermore, when I was doing a project now, I found out that softmax() is called later on. So I just want to clarify what exactly is the outputs = net(inputs) giving me, from this link, it seems to me by default the output of a PyTorch model's forward pass is logits? transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) print(outputs) break loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training')
it seems to me by default the output of a PyTorch model's forward pass is logits As I can see from the forward pass, yes, your function is passing the raw output def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x So, where is softmax? Right here: criterion = nn.CrossEntropyLoss() It's a bit masked, but inside this function the softmax computation is handled, which, of course, works with the raw output of your last layer. This is the softmax calculation: softmax(z_i) = exp(z_i) / Σ_j exp(z_j), where z_i are the raw outputs of the neural network. So, in conclusion, there is no activation function after your last layer because it's handled by the nn.CrossEntropyLoss class. Answering what the raw output that comes from nn.Linear is: the raw output of a neural network layer is the linear combination of the values that come from the neurons of the previous layer.
https://stackoverflow.com/questions/64987430/
Getting the output's grad with respect to the input
I'm currently trying to implement an ODE solver with PyTorch; my solution requires computing the gradient of each output wrt its input. y = model(x) for i in range(len(y)): # compute output grad wrt input y[i].backward(retain_graph=True) ydx = x.grad I was wondering if there is a more elegant way to compute the gradients for each output in the batch, since the code gets messy for higher order ODEs and PDEs. I tried using: torch.autograd.backward(x, y, retain_graph=True) without much success.
Try torch.autograd.functional.jacobian if your version of PyTorch has the API implemented. I am doing the same for Burgers' equation, and posted this thread on the same subject: PyTorch how to compute second order jacobian? Solving PDEs with DL is a hot topic right now.
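A minimal usage sketch, with a stand-in nn.Linear in place of the ODE model:

import torch
from torch.autograd.functional import jacobian

model = torch.nn.Linear(3, 2)   # stand-in for the ODE network
x = torch.randn(3)
J = jacobian(model, x)          # J[i, j] = d output_i / d input_j
print(J.shape)                  # torch.Size([2, 3])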
https://stackoverflow.com/questions/64988010/
Getting an error when inverting an integer type matrix in torch
I was attempting to find the inverse of a 3x3 matrix that is composed of random integers using: torch.randint(). However, when doing so, I was getting the error: "inverse_cpu" not implemented for 'Long' The code: A = torch.randint(0, 10, (3, 3)) A_inv = A.inverse() print(A @ A_inv, "\n", A_inv @ A) I believe that A.inverse() is expecting the inverse of matrix A to also be of type integer, but it's not. Maybe we can have it such that matrix A is of type float like torch.Tensor(), or have A_inv invert it regardless. Though I'm not quite sure how to do either. Thanks for your assistance!
Okay I figured out 2 ways to do this: 1.) A = torch.randint(0, 10, (3, 3), dtype=torch.float32) 2.) A = torch.Tensor(np.random.randint(0, 10, (3, 3))) And then inverting either gives no errors, as both are of type float32 now.
https://stackoverflow.com/questions/64990401/
What is the difference between these backward training methods in Pytorch?
I am a 3-month DL freshman who is doing small NLP projects with PyTorch. Recently I am trying to reproduce a GAN network introduced by a paper, using my own text data, to generate some specific kinds of question sentences. Here is some background... If you have no time or interest in it, just kindly read the following question. As that paper says, the generator is first trained normally with normal question data so that the output at least looks like a real question. Then, using an auxiliary classifier's result (of classifying the outputs), the generator is trained again to generate only the specific (several unique categories) questions. However, as the paper does not reveal its code, I have to write the code all myself. I have these three training thoughts, but I do not know their differences, could you kindly tell me about it? If they have almost the same effect, could you tell me which is more recommended in PyTorch's grammar? Thank you very much! Suppose the discriminator loss to generator is loss_G_D, the classifier loss to generator is loss_G_C, and loss_G_D and loss_G_C have the same shape, i.e. [batch_size, loss value], then what is the difference? 1. optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_C = loss_func2(classifier(generated_data)) loss = loss_G_D + loss_G_C loss.backward() optimizer.step() 2. optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_D.backward() loss_G_C = loss_func2(classifier(generated_data)) loss_G_C.backward() optimizer.step() 3. optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_D.backward() optimizer.step() optimizer.zero_grad() loss_G_C = loss_func2(classifier(generated_data)) loss_G_C.backward() optimizer.step() Additional info: I observed that the classifier's classification loss is always very big compared with the generator's loss, like -300 vs 3. So maybe the third one is better?
First of all: loss.backward() backpropagates the error and assigns a gradient for every parameter along the way that has requires_grad=True. optimizer.step() updates the model parameters using their stored gradients optimizer.zero_grad() sets the gradients to 0, so that you can backpropagate your loss and update your model parameters for each batch without interfering with other batches. 1 and 2 are quite similar, but if your model uses batch statistics or you have an adaptive optimizer they will probably perform differently. However, for instance, if your model doesn't use batch statistics and you have a plain old SGD optimizer, they will produce the same result, even though 1 would be faster since you do the backprop only once. 3 is a completely different case, since you update your model parameters with loss_G_D.backward() and optimizer.step() before processing and backpropagating loss_G_C. Given all of these, it's up to you which one to choose depending on your application.
https://stackoverflow.com/questions/64997947/
How to prevent memory use growth when updating weights and biases in a Pytorch model
I'm trying to build a VGG16 model to make an ONNX export using PyTorch. I want to force the model to use my own set of weights and biases. But in this process my computer quickly runs out of memory. Here is how I want to do it (this is only a test; in the real version I read the weights and biases from a set of files). This example only forces all values to 0.5. # Create empty VGG16 model (random weights) from torchvision import models from torchsummary import summary vgg16 = models.vgg16() # the structure is: vgg16.__dict__ summary(vgg16, (3, 224, 224)) # convolutional layers for layer in vgg16.features: print() print(layer) if (hasattr(layer,'weight')): dim = layer.weight.shape print(dim) print(str(dim[0]*(dim[1]*dim[2]*dim[3]+1))+' params') # replace the weights and biases for i in range (dim[0]): layer.bias[i] = 0.5 for j in range (dim[1]): for k in range (dim[2]): for l in range (dim[3]): layer.weight[i][j][k][l] = 0.5 # dense layers for layer in vgg16.classifier: print() print(layer) if (hasattr(layer,'weight')): dim = layer.weight.shape print(str(dim)+' --> '+str(dim[0]*(dim[1]+1))+' params') for i in range(dim[0]): layer.bias[i] = 0.5 for j in range(dim[1]): layer.weight[i][j] = 0.5 When I look at the memory usage of the computer, it grows linearly and saturates the 16GB RAM during the first dense layer processing. Then python crashes... Is there another better way to do this, keeping in mind that I want to ONNX-export the model afterwards? Thanks for your help.
The memory growth is caused by the need to adjust the gradient for every weight and bias change. Try setting the .requires_grad attribute to False before the update and restoring it after the update. Example: for layer in vgg16.features: print() print(layer) if (hasattr(layer,'weight')): # suppress .requires_grad layer.bias.requires_grad = False layer.weight.requires_grad = False dim = layer.weight.shape print(dim) print(str(dim[0]*(dim[1]*dim[2]*dim[3]+1))+' params') # replace the weights and biases for i in range (dim[0]): layer.bias[i] = 0.5 for j in range (dim[1]): for k in range (dim[2]): for l in range (dim[3]): layer.weight[i][j][k][l] = 0.5 # restore .requires_grad layer.bias.requires_grad = True layer.weight.requires_grad = True
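An alternative sketch (my variation, not part of the original answer): wrap the updates in torch.no_grad() so autograd records nothing, and use in-place fill_ instead of the per-element Python loops, which is also far faster:

import torch
from torchvision import models

vgg16 = models.vgg16()
with torch.no_grad():  # nothing inside this block is tracked by autograd
    for layer in list(vgg16.features) + list(vgg16.classifier):
        if hasattr(layer, 'weight'):
            layer.weight.fill_(0.5)  # in-place fill, no Python-level loops
            layer.bias.fill_(0.5)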
https://stackoverflow.com/questions/65000517/
Where should I put the input image dimensions in the following architecture in PyTorch?
class Discriminator(nn.Module): def __init__(self, channels=3): super(Discriminator, self).__init__() self.channels = channels def convlayer(n_input, n_output, k_size=4, stride=2, padding=0, bn=False): block = [nn.Conv2d(n_input, n_output, kernel_size=k_size, stride=stride, padding=padding, bias=False)] if bn: block.append(nn.BatchNorm2d(n_output)) block.append(nn.LeakyReLU(0.2, inplace=True)) return block self.model = nn.Sequential( *convlayer(self.channels, 32, 4, 2, 1), *convlayer(32, 64, 4, 2, 1), *convlayer(64, 128, 4, 2, 1, bn=True), *convlayer(128, 256, 4, 2, 1, bn=True), nn.Conv2d(256, 1, 4, 1, 0, bias=False), # FC with Conv. ) def forward(self, imgs): logits = self.model(imgs) out = torch.sigmoid(logits) return out.view(-1,1) The above architecture is of Discriminator of GAN model, i am little confused as in the first layer *convlayer(self.channels, 32, 4, 2, 1) self.channels ,which is 3 (colored image), is passed , I have an input image of 64 * 64 * 3. My first question is where the dimensions of input image are taken care in the above architecture? I have got this confusion because when i saw the generator architecture , class Generator(nn.Module): def __init__(self, nz=128, channels=3): super(Generator, self).__init__() self.nz = nz self.channels = channels def convlayer(n_input, n_output, k_size=4, stride=2, padding=0): block = [ nn.ConvTranspose2d(n_input, n_output, kernel_size=k_size, stride=stride, padding=padding, bias=False), nn.BatchNorm2d(n_output), nn.ReLU(inplace=True), ] return block self.model = nn.Sequential( *convlayer(self.nz, 1024, 4, 1, 0), # Fully connected layer via convolution. *convlayer(1024, 512, 4, 2, 1), *convlayer(512, 256, 4, 2, 1), *convlayer(256, 128, 4, 2, 1), *convlayer(128, 64, 4, 2, 1), nn.ConvTranspose2d(64, self.channels, 3, 1, 1), nn.Tanh() ) def forward(self, z): z = z.view(-1, self.nz, 1, 1) img = self.model(z) return img In the first layer *convlayer(self.nz, 1024, 4, 1, 0) they are passing self.nz ,which is 128 random latent points required to generate image of 64 * 64 * 3, as opposed to the above discriminator model where the channels are passed. My second question is, if i have an image of 300 * 300 * 3, what should i change in my architecture of Discriminator to process the image? P.S. I am new to Pytorch.
The dimensions of an input image are not required at all in convolutions. All you're going to do is perform kernel convolutions (with or without strides) across the image. You just have to ensure that the input to a convolutional layer is larger than the kernel of that layer. For example, you cannot apply a 3x3 kernel to a 2x2 image. Of course, you can get around this issue with padding, but in general it's not possible. The discriminator is going to take a sample from your dataset or one generated by the Generator and evaluate whether it's true or fake. Since this is a CNN and not a Linear Layer Network, you do not need to specify the size of the input image. The generator is going to sample from the latent points and then generate an image. If you have a 300x300 image, you don't need to change anything with the discriminator.
https://stackoverflow.com/questions/65005201/
torch.matmul gives RuntimeError
I have two tensors t1 = torch.Size([400, 32, 400]) t2 = torch.Size([400, 32, 32]) When I execute torch.matmul(t1, t2) I get this error RuntimeError: Expected tensor to have size 400 at dimension 1, but got size 32 for argument #2 'batch2' (while checking arguments for bmm) Any help will be much appreciated
You get the error because the order of matrix multiplication is wrong. It should be: a = torch.randn(400, 32, 400) b = torch.randn(400, 32, 32) out = torch.matmul(b, a) # You performed torch.matmul(a, b) # You can also do a simpler version of the matrix multiplication using the below code out = b @ a
https://stackoverflow.com/questions/65007755/
Understanding backward hooks
I wrote this snippet below to try and understand what's going on with these hooks. class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(10,5) self.fc2 = nn.Linear(5,1) self.fc1.register_forward_hook(self._forward_hook) self.fc1.register_backward_hook(self._backward_hook) def forward(self, inp): return self.fc2(self.fc1(inp)) def _forward_hook(self, module, input, output): print(type(input)) print(len(input)) print(type(output)) print(input[0].shape) print(output.shape) print() def _backward_hook(self, module, grad_input, grad_output): print(type(grad_input)) print(len(grad_input)) print(type(grad_output)) print(len(grad_output)) print(grad_input[0].shape) print(grad_input[1].shape) print(grad_output[0].shape) print() model = Model() out = model(torch.tensor(np.arange(10).reshape(1,1,10), dtype=torch.float32)) out.backward() Produces output <class 'tuple'> 1 <class 'torch.Tensor'> torch.Size([1, 1, 10]) torch.Size([1, 1, 5]) <class 'tuple'> 2 <class 'tuple'> 1 torch.Size([1, 1, 5]) torch.Size([5]) torch.Size([1, 1, 5]) You can also follow the CNN example here. In fact, it's needed to understand the rest of my question. I have a few questions: I would normally think that grad_input (backward hook) should be the same shape as output (forward hook) because when we go backwards, the direction is reversed. But the CNN example seems to indicate otherwise. I'm still a bit confused. Which way around is it? Why are grad_input[0] and grad_output[0] the same shape on my Linear layer here? Regardless of the answer to my question 1, at least one of them should be torch.Size([1, 1, 10]) right? What's with the second element of the tuple grad_input? In the CNN case I copy pasted the example and did print(grad_input[1].size()) with output torch.Size([20, 10, 5, 5]). So I presume it's the gradients of the weights. I also ran print(grad_input[2].size()) and got torch.Size([20]). So it seemed clear I was looking at the gradients of the biases. But then in my Linear example grad_input is length 2, so I can only access up to grad_input[1], which seems to be giving me the gradients of the biases. So then where are the gradients of the weights? In summary, there are two apparent contradictions between the behaviour of the backwards hook in the cases of Conv2d and `Linear' modules. This has left me totally confused about what to expect with this hook. Thanks for your help!
I would normally think that grad_input (backward hook) should be the same shape as output grad_input contains gradient (of whatever tensor the backward has been called on; normally it is the loss tensor when doing machine learning, for you it is just the output of the Model) wrt input of the layer. So it is the same shape as input. Similarly grad_output is the same shape as output of the layer. This is also true for the CNN example you have cited. Why are grad_input[0] and grad_output[0] the same shape on my Linear layer here? Regardless of the answer to my question 1, at least one of them should be torch.Size([1, 1, 10]) right? Ideally the grad_input should contain the gradients wrt the input of the layer and wrt the weights and the biases of the layer. That is the behaviour you see if you use the following backward hook for the CNN example: def _backward_hook(module, grad_input, grad_output): for i, inp in enumerate(grad_input): print("Input #", i, inp.shape) However this does not happen with the Linear layer. This is because of a bug. Top comment reads: module hooks are actually registered on the last function that the module has created So what really might be happening in the backend (my guess) is that it is calculating Y=((W^TX)+b). You can see that it is the adding of bias that is the last operation. So for that operation there is one input of shape (1,1,5) and the bias term has shape (5). These two (gradient wrt these actually) form your tuple grad_input. The result of the addition (gradient wrt it actually) is stored in grad_output which is of shape (1,1,5) What's with the second element of the tuple grad_input As answered above, it is just the gradient wrt whatever "layer parameters" gradients is calculated on; normally the weights/biases (whatever applicable) of that last operation.
https://stackoverflow.com/questions/65011884/
Tensor power and multiplication in pytorch
I have a matrix A and a tensor b of size (1,3) - so a vector of size 3. I want to compute C = b1 * A + b2 * A^2 + b3 * A^3 where ^n is the n-th power of A. At the end, C should have the same shape as A. How can I do this efficiently?
Let's try: A = torch.ones(1,2,3) b_vals = torch.tensor([2,3,4]) powers = torch.tensor([1,2,3]) C = ((A[...,None]**powers) * b_vals).sum(-1) Output: tensor([[[9., 9., 9.], [9., 9., 9.]]]) Note that this computes the element-wise powers of A; for true matrix powers of a square matrix you would use torch.matrix_power instead.
https://stackoverflow.com/questions/65012728/
PyTorch: 'CrossEntropyLoss" object has no attribute 'item'
Currently deploying a CNN model. model = CNN(height=96, width=96, channels=3) and looking to observe its cross entropy loss. criterion = nn.CrossEntropyLoss() The Trainer class is given as follows, class Trainer: def __init__( self, model: nn.Module, train_loader: DataLoader, val_loader: DataLoader, criterion: nn.Module, optimizer: Optimizer, summary_writer: SummaryWriter, device: torch.device, ): self.model = model.to(device) self.device = device self.train_loader = train_loader self.val_loader = val_loader self.criterion = criterion self.optimizer = optimizer self.summary_writer = summary_writer self.step = 0 def train( self, epochs: int, val_frequency: int, print_frequency: int = 20, log_frequency: int = 5, start_epoch: int = 0 ): self.model.train() for epoch in range(start_epoch, epochs): self.model.train() data_load_start_time = time.time() for batch, labels in self.train_loader: batch = batch.to(self.device) labels = labels.to(self.device) data_load_end_time = time.time() loss=self.criterion logits=self.model.forward(batch) with torch.no_grad(): preds = logits accuracy = compute_accuracy(labels, preds) data_load_time = data_load_end_time - data_load_start_time step_time = time.time() - data_load_end_time if ((self.step + 1) % log_frequency) == 0: self.log_metrics(epoch, accuracy, loss, data_load_time, step_time) if ((self.step + 1) % print_frequency) == 0: self.print_metrics(epoch, accuracy, loss, data_load_time, step_time) self.step += 1 data_load_start_time = time.time() self.summary_writer.add_scalar("epoch", epoch, self.step) if ((epoch + 1) % val_frequency) == 0: self.validate() self.model.train() The function to log the loss is, def log_metrics(self, epoch, accuracy, loss, data_load_time, step_time): self.summary_writer.add_scalar("epoch", epoch, self.step) self.summary_writer.add_scalars( "accuracy", {"train": accuracy}, self.step ) self.summary_writer.add_scalars( "loss", {"train": float(loss.item())}, self.step ) self.summary_writer.add_scalar( "time/data", data_load_time, self.step ) self.summary_writer.add_scalar( "time/data", step_time, self.step ) I have been receiving an attribute error "' CrossEntropyLoss' object has no attribute 'item'". I have tried removing several ways such as removing "item()" from different parts of code and trying different types of loss functions like MSELoss etc. Any solution or direction would be highly appreciated. Thank you. Edit-1: Here is the error traceback Traceback (most recent call last): File "/Users/xyz/main.py", line 316, in <module> main(parser.parse_args()) File "/Users/xyz/main.py", line 128, in main log_frequency=args.log_frequency, File "/Users/xyz/main.py", line 198, in train self.log_metrics(epoch, accuracy, loss, data_load_time, step_time) File "/Users/xyz/main.py", line 232, in log_metrics {"train": float(loss.item)}, File "/Users/xyz/main.py", line 585, in __getattr__ type(self).__name__, name)) AttributeError: 'CrossEntropyLoss' object has no attribute 'item'
It looks like the loss in the call self.log_metrics(epoch, accuracy, loss, data_load_time, step_time) is the criterion itself (a CrossEntropyLoss object), not the result of calling it. Your training loop needs to call the criterion to compute the loss; I don't see that in the code you provided.
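Concretely, the fix inside the question's train method would look something like this (a sketch using the question's own variable names):

logits = self.model(batch)             # forward pass
loss = self.criterion(logits, labels)  # call the criterion -> a tensor, so loss.item() works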
https://stackoverflow.com/questions/65012948/
Create a tensor with ones where another tensor has non-zero elements in Pytorch
Say I have a tensor A with any shape. It has a number k of non-zero elements. I want to build another tensor B, with 1s where A is non zero and 0s where A is zero. For example: A = [[1,2,0], [0,3,0], [0,0,5]] then B will be : B = [[1,1,0], [0,1,0], [0,0,1]] Is there a simple way to implement this in Pytorch?
I believe it's: B = (A!=0).int() Also: B = A.bool().int()
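A quick check with the example from the question:

import torch

A = torch.tensor([[1, 2, 0],
                  [0, 3, 0],
                  [0, 0, 5]])
B = (A != 0).int()
print(B)
# tensor([[1, 1, 0],
#         [0, 1, 0],
#         [0, 0, 1]], dtype=torch.int32)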
https://stackoverflow.com/questions/65013313/
Will adding a constant positive and a constant negative loss cause gradient vanishing?
I want to ask a question about using two losses to train one model. I am going to generate some specific kinds of question sentences. To achieve it, I use (1) a normal GAN to generate the normal question space. Then (2) an auxiliary classifier to let the generator focus on generating that kind of questions. By pre-experiment, as I use BCELoss() on Generator-Discriminator loss (loss_G_D), the loss value is around 3. And as I use -Entropy on Generator-Classifier loss (loss_G_C), the loss value is always negative, and very big, e.g. -300. To not let them affect each other's training procedure, I used this training method. optimizer.zero_grad() loss_G_D = BCELoss(discriminator(generated_data)) loss_G_D.backward() optimizer.step() optimizer.zero_grad() loss_G_C = -Entropy(classifier(generated_data)) loss_G_C.backward() optimizer.step() However, this training procedure is a little bit slow and seems like they put the network training 'back and forth'. So a pal suggested me this method: optimizer.zero_grad() loss_G_D = BCELoss(discriminator(generated_data)) loss_G_C = -Entropy(classifier(generated_data)) loss = loss_G_D+loss_G_C # if you worry about the scale, give some weight, like # loss = loss_G_D+0.01*loss_G_C loss.backward() optimizer.step() I thought it makes sense. However, like the loss_G_D is 3, loss_G_C is -300, wouldn't it cause gradient vanishing by add them up? As using loss = loss_G_D+0.01*loss_G_C=0 ? Or can I say because they are different type of loss considering always positive and negative, we should not add them up? (PS. I think maybe map the -Entropy loss into an always positive activation function, then add the two loss up will work?)
No it won't. In your case it is the total loss that is "vanishing", not the gradient. The gradient is simply the sum of the two separate gradients calculated from two losses. Since one loss is only enforced on classifier and the other one is only enforced on discriminator, back propagating the former should only assign gradients to classifier and back propagating the latter should only assign gradients to discriminator. So they won't affect each other. Let me also explain with a simple example. Say you have only two parameters x and y. And you want to make x as small as possible by enforcing a loss L_x = abs(x). Meanwhile, you also want to make y as large as possible by enforcing L_y=-abs(y). So the total loss is actually L=abs(x)-abs(y). Suppose initially we have x=y=1. Then the gradient is (dL/dx,dL/dy)=(1,-1), while the loss is L=0. More specifically, at (x,y)=(1,1), L_x=1 gives a gradient of (d(L_x)/dx,d(L_x)/y)=(1,0) and L_y=-1 gives a gradient of (d(L_y)/dx,d(L_y)/y)=(0,-1). You see that even though L_x and L_y cancel each other, their gradients don't.
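The toy example can be verified directly in PyTorch:

import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
L = x.abs() - y.abs()   # total loss L = L_x + L_y = abs(x) - abs(y)
L.backward()
print(L.item())         # 0.0  -- the summed loss "vanishes"...
print(x.grad, y.grad)   # tensor(1.) tensor(-1.) -- ...but the gradients do not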
https://stackoverflow.com/questions/65014715/
How to input a numpy array to a neural network in pytorch?
This is the neural network that I defined class generator(nn.Module): def __init__(self, n_dim, io_dim): super().__init__() self.gen = nn.Sequential( nn.Linear(n_dim,64), nn.LeakyReLU(.01), nn.Linear(64, io_dim), ) def forward(self, x): return self.gen(x) #The input x is: x = numpy.random.dirichlet([10,6,3],3) Now I want the neural network to take dirichlet distributed samples (sampled using numpy.random.dirichlet([10,6,3],10) ) as an input. How to do that?
Instead of using numpy to sample from a dirichlet distribution, use pytorch. Here is the code: y = torch.Tensor([[10,6,3]]) m = torch.distributions.dirichlet.Dirichlet(y) z=m.sample() gen = generator(3,3) gen(z)
https://stackoverflow.com/questions/65017261/
Train the last n% of layers of BERT in PyTorch using the HuggingFace library (train the last 5 BertLayers out of 12)
Bert has an Architecture something like encoder -> 12 BertLayer -> Pooling. I want to train the last 40% layers of Bert Model. I can freeze all the layers as: # freeze parameters bert = AutoModel.from_pretrained('bert-base-uncased') for param in bert.parameters(): param.requires_grad = False But I want to Train last 40% layers. When I do len(list(bert.parameters())), it gives me 199. So let us suppose 79 is the 40% of parameters. Can I do something like: for param in list(bert.parameters())[-79:]: # total trainable 199 Params: 79 is 40% param.requires_grad = False I think it will freeze first 60% layers. Also, can someone tell me that which layers it will freeze according to architecture?
You are probably looking for named_parameters. for name, param in bert.named_parameters(): print(name) Output: embeddings.word_embeddings.weight embeddings.position_embeddings.weight embeddings.token_type_embeddings.weight embeddings.LayerNorm.weight embeddings.LayerNorm.bias encoder.layer.0.attention.self.query.weight encoder.layer.0.attention.self.query.bias encoder.layer.0.attention.self.key.weight ... named_parameters will also show you that you have not frozen the first 60% but the last 40%: for name, param in bert.named_parameters(): if param.requires_grad == True: print(name) Output: embeddings.word_embeddings.weight embeddings.position_embeddings.weight embeddings.token_type_embeddings.weight embeddings.LayerNorm.weight embeddings.LayerNorm.bias encoder.layer.0.attention.self.query.weight encoder.layer.0.attention.self.query.bias encoder.layer.0.attention.self.key.weight encoder.layer.0.attention.self.key.bias encoder.layer.0.attention.self.value.weight ... You can freeze the first 60% with: for name, param in list(bert.named_parameters())[:-79]: print('I will be frozen: {}'.format(name)) param.requires_grad = False
https://stackoverflow.com/questions/65017564/
Can YOLO pictures have a bounding box covering the whole picture?
I wonder why YOLO pictures need to have a bounding box. Assume that we are using Darknet. Each image needs to have a corresponding .txt file with the same name as the image file, and inside the .txt file there needs to be a line of the form below. It's the same for all YOLO frameworks that use bounding boxes for labeling. <object-class> <x> <y> <width> <height> Where x, y, width, and height are relative to the image's width and height. For example, if we go to this page, press the YOLO Darknet TXT button, download the .zip file and then go to the train folder, we can see these files IMG_0074_jpg.rf.64efe06bcd723dc66b0d071bfb47948a.jpg IMG_0074_jpg.rf.64efe06bcd723dc66b0d071bfb47948a.txt where the .txt file looks like this 0 0.7055288461538461 0.6538461538461539 0.11658653846153846 0.4110576923076923 1 0.5913461538461539 0.3545673076923077 0.17307692307692307 0.6538461538461539 Every image has the size 416x416. My idea is that every image should have one class. Only one class. And the image should be taken with a camera like this: Take a camera snap Cut the camera snap into the desired size Upscale it to a 416x416 square And then every .txt file that corresponds to an image should look like this: <object-class> 0 0 1 1 Question Is this possible for e.g. Darknet or other frameworks that use bounding boxes to label the classes? Instead of letting the software, e.g. Darknet, upscale the bounding boxes to 416x416 for every class object, I would do it myself and change the .txt file to x = 0, y = 0, width = 1, height = 1 for every image that has only one class object. Is it possible for me to create a training set in that way and train with it?
A little disclaimer: I have to say that I am not an expert on this; I am part of a project where we are using Darknet, so I have had some time to experiment. So if I understand it right, you want to train with cropped single-class images that have full-image-sized bounding boxes. It is possible to do it, and I am using something like that, but it is most likely not what you want. Let me tell you about the problems and unexpected behaviour this method creates. When you train with images that have full-image-sized bounding boxes, YOLO cannot make proper detections because while training it also learns the backgrounds and empty spaces of your dataset. More specifically, objects in your training dataset have to be in the same context as in your real-life usage. If you train it with dog images in the jungle, it won't do a good job of predicting dogs in a house. If you are only going to use it for classification, you can still train it like this; it still classifies fine, but the images you predict on should also be like your training dataset. So, looking at your example, if you train with images like this cropped dog picture, your model won't be able to classify the dog in the first image. For a better example, in my case detection wasn't required. I am working with food images and I only predict the meal on the plate, so I trained with full-image-sized bboxes since every food has one class. It perfectly classifies the food, but the bboxes are always predicted as the full image. So my understanding of the theory part of this: if you feed the network only full-image bboxes, it learns that making the box as big as possible results in a lower error rate, so it optimizes that way. This kind of wastes half of the algorithm, but it works for me. Also, your images don't need to be 416x416; whatever size you give it gets resized to that, and you can also change it in the cfg file. I have a code that makes full-sized bboxes for all images in a directory if you want to try it fast. (It overrides existing annotations, so be careful.) Finally, boxes should be like this for them to be centered and full size; x and y are the center of the bbox, so they should be the center/half of the image. <object-class> 0.5 0.5 1 1 from imagepreprocessing.darknet_functions import create_training_data_yolo, auto_annotation_by_random_points import os main_dir = "datasets/my_dataset" # auto annotating all images by their center points (x,y,w,h) folders = sorted(os.listdir(main_dir)) for index, folder in enumerate(folders): auto_annotation_by_random_points(os.path.join(main_dir, folder), index, annotation_points=((0.5,0.5), (0.5,0.5), (1.0,1.0), (1.0,1.0))) # creating required files create_training_data_yolo(main_dir)
https://stackoverflow.com/questions/65020378/
What should be the architecture of the generator and discriminator model of the GAN for generating 300 * 300 * 3 images?
I have usually seen people generating images of 28*28, 64*64, etc. To create images of this size, they usually start with a number of filters like 512, 256, 128, ... decreasing for the generator, and the reverse for the discriminator. Usually they keep the same number of layers in the discriminator and generator. My first question is: what should the architecture of the discriminator and generator model be to create 300*300 images? My second question is: is it mandatory to have an equal number of layers in both the discriminator and the generator? What if I have more layers in my discriminator than in the generator? My third question depends on the second question only: can I use the feature-extractor part of famous models like ResNet, VGG, etc. for the discriminator? P.S. If you are writing architecture code, please write it in PyTorch or Keras.
The architecture of the Generator is completely dependent on the resolution of the image that you desire. If you need to output a higher resolution image, you need to modify the kernel_size, stride, and padding of the ConvTranspose2d layers accordingly. See the below example: # 64 * 64 * 3 # Assuming a latent dimension of 128, you will perform the following sequence to generate a 64*64*3 image. latent = torch.randn(1, 128, 1, 1) out = nn.ConvTranspose2d(128, 512, 4, 1)(latent) out = nn.ConvTranspose2d(512, 256, 4, 2, 1)(out) out = nn.ConvTranspose2d(256, 128, 4, 2, 1)(out) out = nn.ConvTranspose2d(128, 64, 4, 2, 1)(out) out = nn.ConvTranspose2d(64, 3, 4, 2, 1)(out) print(out.shape) # torch.Size([1, 3, 64, 64]) # Note the values of the kernel_size, stride, and padding. # 284 * 284 * 3 # Assuming the same latent dimension of 128, you will perform the following sequence to generate a 284*284*3 image. latent = torch.randn(1, 128, 1, 1) out = nn.ConvTranspose2d(128, 512, 4, 1)(latent) out = nn.ConvTranspose2d(512, 256, 4, 3, 1)(out) out = nn.ConvTranspose2d(256, 128, 4, 3, 1)(out) out = nn.ConvTranspose2d(128, 64, 4, 3, 1)(out) out = nn.ConvTranspose2d(64, 3, 4, 3, 1)(out) print(out.shape) # torch.Size([1, 3, 284, 284]) # I have only increased the stride from 2 to 3 and you could see the difference in the output size. You can play with the values to get 300*300*3. If you would like to generate a higher sized output, take a look at progressive GANs. The general idea behind using symmetric layers in both the Generator and the Discriminator is that you want both the networks to be equally powerful. They compete against themselves and learn over time. Having asymmetric layers could cause imbalance while training. Yes. You can use any feature extractor instead of the basic Conv and ConvTranspose layers. You can use the ResidualBlock as part of the encoder and ResidualBlockUp as part of the decoder.
https://stackoverflow.com/questions/65022564/
RuntimeError: The size of tensor a (4000) must match the size of tensor b (512) at non-singleton dimension 1
I'm trying to build a model for document classification. I'm using BERT with PyTorch. I got the bert model with the code below. bert = AutoModel.from_pretrained('bert-base-uncased') This is the code for training. for epoch in range(epochs): print('\n Epoch {:} / {:}'.format(epoch + 1, epochs)) #train model train_loss, _ = modhelper.train(proc.train_dataloader) #evaluate model valid_loss, _ = modhelper.evaluate() #save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(modhelper.model.state_dict(), 'saved_weights.pt') # append training and validation loss train_losses.append(train_loss) valid_losses.append(valid_loss) print(f'\nTraining Loss: {train_loss:.3f}') print(f'Validation Loss: {valid_loss:.3f}') This is my train method, accessible with the object modhelper. def train(self, train_dataloader): self.model.train() total_loss, total_accuracy = 0, 0 # empty list to save model predictions total_preds=[] # iterate over batches for step, batch in enumerate(train_dataloader): # progress update after every 50 batches. if step % 50 == 0 and not step == 0: print(' Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader))) # push the batch to gpu #batch = [r.to(device) for r in batch] sent_id, mask, labels = batch # clear previously calculated gradients self.model.zero_grad() print(sent_id.size(), mask.size()) # get model predictions for the current batch preds = self.model(sent_id, mask) #This line throws the error # compute the loss between actual and predicted values self.loss = self.cross_entropy(preds, labels) # add on to the total loss total_loss = total_loss + self.loss.item() # backward pass to calculate the gradients self.loss.backward() # clip the gradients to 1.0. It helps in preventing the exploding gradient problem torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0) # update parameters self.optimizer.step() # model predictions are stored on GPU. So, push it to CPU #preds=preds.detach().cpu().numpy() # append the model predictions total_preds.append(preds) # compute the training loss of the epoch avg_loss = total_loss / len(train_dataloader) # predictions are in the form of (no. of batches, size of batch, no. of classes). # reshape the predictions in form of (number of samples, no. of classes) total_preds = np.concatenate(total_preds, axis=0) #returns the loss and predictions return avg_loss, total_preds preds = self.model(sent_id, mask) This line throws the following error (full traceback below).
Epoch 1 / 1 torch.Size([32, 4000]) torch.Size([32, 4000]) Traceback (most recent call last): File "<ipython-input-39-17211d5a107c>", line 8, in <module> train_loss, _ = modhelper.train(proc.train_dataloader) File "E:\BertTorch\model.py", line 71, in train preds = self.model(sent_id, mask) File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\model.py", line 181, in forward #pass the inputs to the model File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 837, in forward embedding_output = self.embeddings( File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 201, in forward embeddings = inputs_embeds + position_embeddings + token_type_embeddings RuntimeError: The size of tensor a (4000) must match the size of tensor b (512) at non-singleton dimension 1 If you observe I've printed the torch size in the code. print(sent_id.size(), mask.size()) The output of that line of code is torch.Size([32, 4000]) torch.Size([32, 4000]). as we can see that size is the same but it throws the error. Please put your thoughts. Really appreciate it. please comment if you need further information. I'll be quick to add whatever is required.
The issue is BERT's sequence-length limitation. I passed sequences of 4000 tokens where the maximum supported is 512 (two of which are reserved for '[CLS]' and '[SEP]' at the beginning and end of the sequence, so effectively 510). Reduce the token count, or use some other model for your problem, something like Longformer as suggested by @cronoik in the comments above. Thanks.
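If truncating the documents is acceptable, a HuggingFace tokenizer can cap the input at 512 tokens; a sketch (not the asker's actual tokenization code):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
text = "some very long document ..."  # placeholder
enc = tokenizer(text, max_length=512, truncation=True, return_tensors='pt')
print(enc['input_ids'].shape)  # at most torch.Size([1, 512])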
https://stackoverflow.com/questions/65023526/
Custom loss function error: tensor does not have a grad_fn
Trying to utilize a custom loss function and getting error ‘RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn’. Error occurs during loss.backward() I’m aware that all computations must be done in tensors with ‘require_grad = True’. I’m having trouble implementing that as my code requires a nested for loop. I believe it could be the for loop. Is there a way to create an empty tensor and append it? Below is my code. def Gaussian_Kernal(x, mu, sigma): p = (1./(math.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((Variable(x)**2) - mu)/(2. * (sigma**2)))) return p class MEE(torch.nn.Module): def __init__(self): super(MEE,self).__init__() def forward(self,output, target, mu, variance): error = torch.subtract(Variable(output),Variable(target)) error_diff = [] for i in range(0, error.size(0)): for j in range(0, error.size(0)): error_diff.append(error[i] - error[j]) error_diff = torch.cat(error_diff) torch.tensor(error_diff,requires_grad=True) loss = (1./(target.size(0)**2)) * torch.sum(Gaussian_Kernal(Variable(error_diff), mu, variance*(2**0.5))) loss = Variable(loss) return loss
As long as you operate on Tensors and apply PyTorch functions and basic operators, it should work. Therefore no need to wrap your variables with torch.tensor or Variable. The latter has been deprecated (since v0.4, I believe). The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. PyTorch docs I'm assuming output and target are tensors and mu and variance are reals and not tensors? Then, the first dimension of output and target would be the batch. def Gaussian_Kernel(x, mu, sigma): p = (1./(math.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((x**2) - mu)/(2. * (sigma**2)))) return p class MEE(torch.nn.Module): def __init__(self): super(MEE, self).__init__() def forward(self, output, target, mu, variance): error = output - target error_diff = [] for i in range(0, error.size(0)): for j in range(0, error.size(0)): error_diff.append(error[i] - error[j]) # Assuming that's the desired operation error_diff = torch.cat(error_diff) kernel = Gaussian_Kernel(error_diff, mu, variance*(2**0.5)) loss = (1./(target.size(0)**2))*torch.sum(kernel) return loss
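A quick sanity check of the rewritten loss (a sketch; it assumes the Gaussian_Kernel/MEE definitions above plus import math are in scope, the mu and variance values are placeholders, and output is kept 2-D so that error[i] - error[j] stays concatenable):

import torch

criterion = MEE()
output = torch.randn(8, 1, requires_grad=True)
target = torch.randn(8, 1)
loss = criterion(output, target, mu=0.0, variance=1.0)
loss.backward()            # should now run without the grad_fn error
print(output.grad.shape)   # torch.Size([8, 1])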
https://stackoverflow.com/questions/65029210/
Expected input batch_size (500) to match target batch_size (1000)
I am trying to train a CNN in PyTorch on MNIST data. However, I am getting ValueError: Expected input batch_size (500) to match target batch_size (1000). This occurs when I run the test() command in the code below. I have looked up solutions to this problem but none of them help fix this issue. My code is as follows: n_epochs = 20 batch_size_train = 64 batch_size_test = 1000 learning_rate = 1e-4 log_interval = 50 class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 64, kernel_size=5) self.conv2 = nn.Conv2d(64, 128, kernel_size=1) self.fc1 = nn.Linear(9216, 100) self.fc2 = nn.Linear(100, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2(x), 2)) x = x.view(-1, 9216) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) def loss_function(self, out, target): return F.cross_entropy(out, target) def init_weights(m): if type(m) == nn.Linear or type(m) == nn.Conv2d: torch.nn.init.xavier_uniform_(m.weight) m.bias.data.fill_(0.01) network = Net() network.apply(init_weights) network.cuda() optimizer = optim.Adam(network.parameters(), lr=1e-4) def train(epoch): network.train() for batch_idx, (data, target) in enumerate(train_loader): data = data.cuda() target = target.cuda() optimizer.zero_grad() output = network(data) loss = network.loss_function(output, target) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) def test(): network.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: data = data.cuda() target = target.cuda() target = target.view(batch_size_test) output = network(data) test_loss += network.loss_function(output, target).item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() test_loss /= len(test_loader.dataset) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) test() for epoch in range(1, n_epochs + 1): train(epoch) test() Full error log: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-ef6e122ea50c> in <module>() ----> 1 test() 2 for epoch in range(1, n_epochs + 1): 3 train(epoch) 4 test() 3 frames <ipython-input-9-23a4b65d1ae9> in test() 9 target = target.view(batch_size_test) 10 output = network(data) ---> 11 test_loss += network.loss_function(output, target).item() 12 pred = output.data.max(1, keepdim=True)[1] 13 correct += pred.eq(target.data.view_as(pred)).sum() <ipython-input-5-d97bf44ef6f0> in loss_function(self, out, target) 91 92 def loss_function(self, out, target): ---> 93 return F.cross_entropy(out, target) /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2466 if size_average is not None or reduce is not None: 2467 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2469 2470 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2260 if input.size(0) != target.size(0): 2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' -> 2262 .format(input.size(0), target.size(0))) 2263 if dim == 2: 2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) ValueError: Expected input batch_size (500) to match target batch_size (1000). Please let me know how to fix this. Thanks, Vinny
Your data has the following shape: [batch_size, c=1, h=28, w=28]. batch_size equals 64 for the train set and 1000 for the test set, but that doesn't make any difference; we shouldn't deal with the first dim. To use F.cross_entropy, you must provide a tensor of size [batch_size, nb_classes], here nb_classes is 10. So the last layer of your model should have a total of 10 neurons. As a side note, when using this criterion you shouldn't use F.log_softmax on the model's output (see here). This criterion combines log_softmax and nll_loss in a single function. This is not the issue though. The problem is your model doesn't output [batch_size, 10] tensors. The problem is your use of view: the tensor goes from torch.Size([64, 128, 6, 6]) to torch.Size([32, 9216]). You've basically said "squash everything into rows of 9216 values on dim=1 and let whatever remains (here 32, since 64*128*6*6 / 9216 = 32) stay on dim=0". This is not desired, since you're messing up the batches. It's easier to use a Flatten layer in this particular instance after your CNN layers. This will flatten all values from each channel. Make sure to preserve the first dimension though with start_dim=1. Here's an example, meant to illustrate the point; the layers are arbitrary, but the code runs. You should tweak the kernel sizes, number of channels etc... to your liking! class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, kernel_size=4) self.conv2 = nn.Conv2d(32, 32, kernel_size=8) self.fc1 = nn.Linear(128, 100) self.fc2 = nn.Linear(100, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2(x), 2)) x = torch.flatten(x, start_dim=1) x = F.relu(self.fc1(x)) x = self.fc2(x) return x
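A quick shape check of the example network above (a sketch, assuming the Net class with the usual torch/nn/F imports and a fake MNIST batch):

import torch

net = Net()
x = torch.randn(64, 1, 28, 28)   # fake MNIST batch: [batch, channels, h, w]
out = net(x)
print(out.shape)                 # torch.Size([64, 10]) -- batch dim preserved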
https://stackoverflow.com/questions/65029639/
converting einsum notation to for loops
I am trying to convert the following einsum notation into for loops as I cannot fully understand what is happening to the matrices and am unable to use loops to replicate the results myself: np.einsum('bijc,bijd->bcd', x, x) Any help is appreciated.
Let's call x_1 and x_2 the two inputs: np.einsum('bijc,bijd->bcd', x_1, x_2) bijc,bijd->bcd boils down to ijc,ijd->cd since the first dimension is not used. Imagine you have c channels of ixj on one hand, and d channels of ixj on the other. The result we're looking for is a cxd matrix. Combining each ixj layer from x_1 (there are c in total) to each ixj layer from x_2 (there are d in total) makes a total of c*d values, this is what we're looking for. It's actually the sum of what's called the Hadamard product between the two ixj layers. Since c is first, c will be in the first dim (the rows) while d will be the number of columns. Here's an idea: b_s, i_s, j_s, c_s = x_1.shape d_s = x_2.shape[3] y = np.zeros((b_s, c_s, d_s)) for b in range(b_s): for i in range(i_s): for j in range(j_s): for c in range(c_s): for d in range(d_s): y[b, c, d] += x_1[b, i, j, c]*x_2[b, i, j, d] This post might give you a better idea Also try the following to see what happens in a simple case with i=2, j=2, c=1 and d=1: a = [[[1], [0]], [[0], [1]]]; b = [[[4], [1]], [[2], [2]]] np.einsum('ijc,ijd->cd', a, b) The result is a cxd matrix of size... 1x1 (since c and d are both equal to 1). Here the result is [6]
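To convince yourself the loops match, here is a small verification sketch with random inputs:

import numpy as np

x_1 = np.random.rand(2, 3, 4, 5)   # b=2, i=3, j=4, c=5
x_2 = np.random.rand(2, 3, 4, 6)   # b=2, i=3, j=4, d=6
expected = np.einsum('bijc,bijd->bcd', x_1, x_2)

b_s, i_s, j_s, c_s = x_1.shape
d_s = x_2.shape[3]
y = np.zeros((b_s, c_s, d_s))
for b in range(b_s):
    for i in range(i_s):
        for j in range(j_s):
            for c in range(c_s):
                for d in range(d_s):
                    y[b, c, d] += x_1[b, i, j, c] * x_2[b, i, j, d]

print(np.allclose(y, expected))    # True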
https://stackoverflow.com/questions/65030112/
ML model not loading full batch
I tried to build a machine learning model using CIFAR 10 dataset, but I am encountering a bug that my model stops training past i = 78 (looped 78 times, see code for more). import torch import torchvision.transforms as transforms from torchvision.datasets import CIFAR10 from torchvision.transforms import ToTensor from torch.utils.data.dataloader import DataLoader transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) classes = ('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck') train_dataset = CIFAR10(root = './data', train = True, download = True, transform = transform) train_loader = DataLoader(train_dataset, batch_size = 4, shuffle = True, num_workers = 2) test_dataset = CIFAR10(root = './data', train = False, download = True, transform = transform) test_loader = DataLoader(test_dataset, batch_size = 128, shuffle = False, num_workers = 2) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() optimiser = torch.optim.SGD(model.parameters(), lr = 0.001, momentum=0.9) loss_fn = nn.CrossEntropyLoss() for epoch in range(2): running_loss = 0 for i, data in enumerate(test_loader, 0): images, labels = data outputs = model(images) loss = loss_fn(outputs, labels) optimiser.zero_grad() loss.backward() optimiser.step() running_loss += loss.item() print(i) if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0 Sorry, I had to post the entire code because I cannot spot the mistake I made. Moreover, since I could not make it work, I tried copying the tutorial's exact code, and it works as intended! 
I am posting that code too below, import torch import torchvision import torchvision.transforms as transforms transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') Please help me find the bug!
Look at your main loop: you'll notice you are using test_loader instead of train_loader. This: for epoch in range(2): running_loss = 0 for i, data in enumerate(test_loader, 0): images, labels = data outputs = model(images) should look like this: for epoch in range(2): running_loss = 0 for i, data in enumerate(train_loader, 0): images, labels = data outputs = model(images)
https://stackoverflow.com/questions/65036814/
Add a index selected numpy array to another numpy array with overlapping indices
I have two numpy arrays image and warped_image and indices arrays ix,iy. I need to add image to warped_image such that image[i,j] is added to warped_image[iy[i,j],ix[i,j]]. The below code works if the pairs (iy[i,j], ix[i,j]) are unique for all i,j. But when they are not unique i.e. when 2 elements from image need to be added to the same element in warped_image, only one of them gets added. How can I add both elements from image to the same element in warped_image? Note that I don't want to use any for loops. I want to keep this vectorized. I'm planning to convert the code to TensorFlow or PyTorch in the future to use GPU capabilities for this. That's because I have hundreds of such images and each image is of full HD resolution. import numpy image = numpy.array([[246, 50, 101], [116, 1, 113], [187, 110, 64]]) iy = numpy.array([[1, 0, 2], [1, 1, 0], [2, 0, 2]]) ix = numpy.array([[0, 2, 1], [1, 2, 0], [0, 1, 2]]) warped_image = numpy.zeros(shape=image.shape) warped_image[iy, ix] += image >> warped_image Out[31]: array([[ 113., 110., 50.], [246., 116., 1.], [187., 101., 64.]]) For the above case, indices are unique and hence the output is as expected. import numpy image = numpy.array([[246, 50, 101], [116, 1, 113], [187, 110, 64]]) iy = numpy.array([[1, 0, 2], [1, 0, 2], [2, 2, 2]]) ix = numpy.array([[0, 2, 1], [1, 2, 0], [0, 1, 2]]) warped_image = numpy.zeros(shape=image.shape) warped_image[iy, ix] += image >> warped_image Out[32]: array([[ 0., 0., 1.], [246., 116., 0.], [187., 110., 64.]]) Expected Output: array([[ 0., 0., 51.], [246., 116., 0.], [300., 211., 64.]]) In this case, there are 3 pairs of indices which overlap and hence it fails. E.g. image[0,1] and image[1,1] should get added to warped_image[0,2] to give a value 51. However only one of them (image[1,1]) gets added to give a value 1. Context: I'm trying to warp an image from view1 to view2. I've computed which pixel has to go where. In case of overlapping pixels, I need to take a weighted average of them. So, I need to achieve the above. More details here
Use numpy.add.at: import numpy image = numpy.array([[246, 50, 101], [116, 1, 113], [187, 110, 64]]) iy = numpy.array([[1, 0, 2], [1, 0, 2], [2, 2, 2]]) ix = numpy.array([[0, 2, 1], [1, 2, 0], [0, 1, 2]]) warped_image = numpy.zeros(shape=image.shape) np.add.at(warped_image, (iy, ix), image) print(warped_image) Output [[ 0. 0. 51.] [246. 116. 0.] [300. 211. 64.]]
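Since the question mentions moving to PyTorch for GPU support later, the same accumulating scatter can be done there with Tensor.index_put_ and accumulate=True (a sketch reusing the arrays above):

import torch

image_t = torch.as_tensor(image, dtype=torch.float64)
iy_t = torch.as_tensor(iy)
ix_t = torch.as_tensor(ix)
warped_t = torch.zeros_like(image_t)
warped_t.index_put_((iy_t, ix_t), image_t, accumulate=True)  # duplicate indices are summed
print(warped_t)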
https://stackoverflow.com/questions/65038757/
How to resize all 4 dimensions (NCHW) in PyTorch with F.interpolate?
I've been trying to figure out how to resize the Batch, Channels, Height, and Width dimensions in a tensor. Currently I am able to resize the Channels, Height, and Width dimensions, but the Batch dimension remains the same. x = torch.ones(3,4,64,64) x = F.interpolate(x.unsqueeze(0), size=(3,4,4), mode="trilinear").squeeze(0) x.size() # (3,3,4,4) # batch dimension has not been resized. # I need x to be resized so that it has a size of: (1,3,4,4) # Is this a good idea? x = x.permute(1,0,2,3) x = F.interpolate(x.unsqueeze(0), size=(1, x.size(2), x.size(3)), mode="trilinear").squeeze(0) x = x.permute(1,0,2,3) x.size() # (1,3,4,4) Should I permute the tensor to resize the batch dimension? Or iterate through it in some way?
This seems to let me resize the batch dimension: x = x.permute(1,0,2,3) x = F.interpolate(x.unsqueeze(0), size=(1, x.size(2), x.size(3)), mode="trilinear").squeeze(0) x = x.permute(1,0,2,3)
https://stackoverflow.com/questions/65044130/
PyTorch is giving me a different value for a scalar
When I create a tensor from float using PyTorch, then cast it back to a float, it produces a different result. Why is this, and how can I fix it to return the same value? num = 0.9 float(torch.tensor(num)) Output: 0.8999999761581421
This is a floating-point "issue" and you can read more about how Python 3 handles those here. Essentially, not even num is actually storing 0.9. Anyway, the print issue in your case comes from the fact that num is actually double-precision and torch.tensor uses single-precision by default. If you try: num = 0.9 float(torch.tensor(num, dtype=torch.float64)) you'll get 0.9.
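A side-by-side illustration of the two precisions (a sketch):

import torch

num = 0.9
print(float(torch.tensor(num, dtype=torch.float32)))  # 0.8999999761581421
print(float(torch.tensor(num, dtype=torch.float64)))  # 0.9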
https://stackoverflow.com/questions/65044179/
Training results are different for Classification using Pytorch APIs and Fast-ai
I have two training python scripts. One using Pytorch's API for classification training and another one is using Fast-ai. Using Fast-ai has much better results. Training outcomes are as follows. Fastai epoch train_loss valid_loss accuracy time 0 0.205338 2.318084 0.466482 23:02 1 0.182328 0.041315 0.993334 22:51 2 0.112462 0.064061 0.988932 22:47 3 0.052034 0.044727 0.986920 22:45 4 0.178388 0.081247 0.980883 22:45 5 0.009298 0.011817 0.996730 22:44 6 0.004008 0.003211 0.999748 22:43 Using Pytorch Epoch [1/10], train_loss : 31.0000 , val_loss : 1.6594, accuracy: 0.3568 Epoch [2/10], train_loss : 7.0000 , val_loss : 1.7065, accuracy: 0.3723 Epoch [3/10], train_loss : 4.0000 , val_loss : 1.6878, accuracy: 0.3889 Epoch [4/10], train_loss : 3.0000 , val_loss : 1.7054, accuracy: 0.4066 Epoch [5/10], train_loss : 2.0000 , val_loss : 1.7154, accuracy: 0.4106 Epoch [6/10], train_loss : 2.0000 , val_loss : 1.7232, accuracy: 0.4144 Epoch [7/10], train_loss : 2.0000 , val_loss : 1.7125, accuracy: 0.4295 Epoch [8/10], train_loss : 1.0000 , val_loss : 1.7372, accuracy: 0.4343 Epoch [9/10], train_loss : 1.0000 , val_loss : 1.6871, accuracy: 0.4441 Epoch [10/10], train_loss : 1.0000 , val_loss : 1.7384, accuracy: 0.4552 Using Pytorch is not converging. I used the same network (Wideresnet22) and both are trained from scratch without pretrained model. The network is here. Training using Pytorch is here. Using Fastai is as follows. from fastai.basic_data import DataBunch from fastai.train import Learner from fastai.metrics import accuracy #DataBunch takes data and internall create data loader data = DataBunch.create(train_ds, valid_ds, bs=batch_size, path='./data') #Learner uses Adam as default for learning learner = Learner(data, model, loss_func=F.cross_entropy, metrics=[accuracy]) #Gradient is clipped learner.clip = 0.1 #learner finds its learning rate learner.lr_find() learner.recorder.plot() #Weight decay helps to lower down weight. Learn in https://towardsdatascience.com/ learner.fit_one_cycle(5, 5e-3, wd=1e-4) What could be wrong in my training algorithm using Pytorch?
fastai is using a lot of tricks under the hood. A quick rundown of what they're doing that you're not. Those are in the order that I think matters most; especially the first two should improve your scores. TLDR Use some scheduler (torch.optim.lr_scheduler.CyclicLR preferably) and AdamW instead of SGD. Longer version fit_one_cycle The 1cycle policy by Leslie Smith is used in fastai. In PyTorch one can create a similar routine using torch.optim.lr_scheduler.CyclicLR, but that would require some manual setup. Basically it starts with a lower learning rate, gradually increases up to 5e-3 in your case, and comes back to a lower learning rate again (making a cycle). You can adjust how the lr should rise and fall (in fastai it does so using cosine annealing, IIRC). Your learning rate is too high at the beginning; some scheduler should help, so test it out first of all. Optimizer In the provided code snippet you use torch.optim.SGD (as optim_fn is None and the default is set), which is harder to set up correctly (usually). On the other hand, if you manage to set it up manually correctly, you might generalize better. Also fastai does not use Adam by default! It uses AdamW if true_wd is set (I think it will be the default in your case anyway, see the source code). AdamW decouples weight decay from the adaptive learning rate, which should improve convergence (read here or the original paper). Number of epochs Set the same number of epochs if you want to compare both approaches; currently it's apples to oranges. Gradient clipping You do not clip gradients (it is commented out); it might help or not depending on the task. Would not focus on that one for now tbh. Other tricks Read about Learner and fit_one_cycle and try to set up something similar in PyTorch (rough guidelines described above). Also you might use some form of data augmentation to improve the scores even further, but that's out of the question's scope I suppose.
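A rough PyTorch approximation of the two biggest points above -- AdamW plus a 1cycle-style schedule (a sketch: OneCycleLR implements the policy directly, the hyperparameters are placeholders rather than tuned values, and model/train_loader/epochs are assumed from the question's code):

import torch
import torch.nn.functional as F

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-3, total_steps=len(train_loader) * epochs
)

for epoch in range(epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()   # step once per batch for the 1cycle schedule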
https://stackoverflow.com/questions/65049435/
RuntimeError: Given groups=1, weight of size [32, 1, 3, 3], expected input[1, 3, 6, 7] to have 1 channels, but got 3 channels instead
There is a 6x7 numpy array: <class 'numpy.ndarray'> [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]]] The model trains normally when it is passed to this network: class Net(BaseFeaturesExtractor): def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256): super(Net, self).__init__(observation_space, features_dim) # We assume CxHxW images (channels first) # Re-ordering will be done by pre-preprocessing or wrapper # n_input_channels = observation_space.shape[0] n_input_channels = 1 print("Input channels:", n_input_channels) self.cnn = nn.Sequential( nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=0), nn.ReLU(), nn.Flatten(), ) # Compute shape by doing one forward pass with th.no_grad(): n_flatten = self.cnn( th.as_tensor(observation_space.sample()[None]).float() ).shape[1] self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU()) def forward(self, observations: th.Tensor) -> th.Tensor: return self.linear(self.cnn(observations)) The 6x7 numpy array is modified to a 3x6x7 numpy array: <class 'numpy.ndarray'> [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [1 1 1 1 1 1 1]]] After modifying the array, it is giving this error: RuntimeError: Given groups=1, weight of size [32, 1, 3, 3], expected input[1, 3, 6, 7] to have 1 channels, but got 3 channels instead In order to solve this problem, I have tried to change the number of channels: n_input_channels = 3 However, now it is showing this error: RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 1, 6, 7] to have 3 channels, but got 1 channels instead How can I make the network accept a 3x6x7 array? Update: I provide more code to make my case clear: 6x7 input array case: ... board = np.array(self.obs['board']).reshape(1, self.rows, self.columns) # board = board_3layers(self.obs.mark, board) print(type(board)) print(board) return board Output: <class 'numpy.ndarray'> [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]]] Number of channels is 1: n_input_channels = 1 It is working. I am trying to modify the array to 3x6x7: board = np.array(self.obs['board']).reshape(1, self.rows, self.columns) board = board_3layers(self.obs.mark, board) print(type(board)) print(board) return board Output: <class 'numpy.ndarray'> [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [1 1 1 1 1 1 1]]] Number of channels is 3: n_input_channels = 3 I do not understand why it is showing this error: RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 1, 6, 7] to have 3 channels, but got 1 channels instead
Your model can work with either 1-channel input or 3-channel input, but not both. If you set n_input_channels=1, you can work with 1x6x7 input arrays. If you set n_input_channels=3, you can work with 3x6x7 input arrays. You must pick one of the options - you cannot have them both simultaneously.
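A minimal sketch of the constraint: a Conv2d fixes in_channels at construction time, so the input must always match it.

import torch
import torch.nn as nn

conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)   # expects 1-channel boards
conv3 = nn.Conv2d(3, 32, kernel_size=3, padding=1)   # expects 3-channel boards

x1 = torch.zeros(1, 1, 6, 7)
x3 = torch.zeros(1, 3, 6, 7)

conv1(x1)  # fine
conv3(x3)  # fine
# conv1(x3) or conv3(x1) raises exactly the RuntimeError from the question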
https://stackoverflow.com/questions/65057069/
Pytorch Subclass of nn.Module has no attribute 'parameters'
Python Version: Python 3.8.5 Pytorch Version: '1.6.0' I am defining LSTM, a subclass of nn.Module. I am trying to create an optimizer but I am getting the following error: torch.nn.modules.module.ModuleAttributeError: 'LSTM' object has no attribute 'paramters' I have two code files, train.py and lstm_class.py (contain the LSTM class). I will try to produce a minimum working example, let me know if any other information is helpful. The code in lstm_class.py: import torch.nn as nn class LSTM(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.2): super(LSTM, self).__init__() # network size parameters self.n_layers = n_layers self.hidden_dim = hidden_dim self.vocab_size = vocab_size self.embedding_dim = embedding_dim # the layers of the network self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim) self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers, dropout=drop_prob, batch_first=True) self.dropout = nn.Dropout(drop_prob) self.fc = nn.Linear(self.hidden_dim, self.vocab_size) def forward(self, input, hidden): # Defines forward pass, probably isn't relevant def init_hidden(self, batch_size): #Initializes hidden state, probably isn't relevant The code in train.py import torch import torch.optim import torch.nn as nn import lstm_class vocab_size = 1000 embedding_dim = 256 hidden_dim = 256 n_layers = 2 net = lstm_class.LSTM(vocab_size, embedding_dim, hidden_dim, n_layers) optimizer = torch.optim.Adam(net.paramters(), lr=learning_rate) I am getting the error on the last line written above. The full error message: Traceback (most recent call last): File "train.py", line 58, in <module> optimizer = torch.optim.Adam(net.paramters(), lr=learning_rate) File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 771, in __getattr__ raise ModuleAttributeError("'{}' object has no attribute '{}'".format( torch.nn.modules.module.ModuleAttributeError: 'LSTM' object has no attribute 'paramters' Any tips for how to fix this would be appreciated. Also as written above, let me know if anything else would be relevant. Thanks
It's not net.paramters(), it's net.parameters() :)
https://stackoverflow.com/questions/65066059/
Weights&Biases Sweep - Why might runs be overwriting each other?
I am new to ML and W&B, and I am trying to use W&B to do a hyperparameter sweep. I created a few sweeps and when I run them I get a bunch of new runs in my project (as I would expect): Image: New runs being created However, all of the new runs say "no metrics logged yet" (Image) and are instead all of their metrics are going into one run (the one with the green dot in the photo above). This makes it not useable, of course, since all the metrics and images and graph data for many different runs are all being crammed into one run. Is there anyone that has some experience in W&B? I feel like this is an issue that should be relatively straightforward to solve - like something in the W&B config that I need to change. Any help would be appreciated. I didn't give too many details because I am hoping this is relatively straightforward, but if there are any specific questions I'd be happy to provide more info. The basics: Using Google Colab for training Project is a PyTorch-YOLOv3 object detection model that is based on this: https://github.com/ultralytics/yolov3 Thanks!
Update: I think I figured it out. I was using the train.py code from the repository I linked in the question, and part of that code specifies the id of the run (used for resuming). I removed the part where it specifies the ID, and it is now working :) Old code: wandb_run = wandb.init(config=opt, resume="allow", project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem, name=save_dir.stem, id=ckpt.get('wandb_id') if 'ckpt' in locals() else None) New code: wandb_run = wandb.init(config=opt, resume="allow", project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem, name=save_dir.stem)
https://stackoverflow.com/questions/65066167/
Pytorch ValueError: Expected target size (2, 13), got torch.Size([2]) when calling CrossEntropyLoss
I am trying to train a PyTorch LSTM network, but I'm getting ValueError: Expected target size (2, 13), got torch.Size([2]) when I try to calculate CrossEntropyLoss. I think I need to change the shape somewhere, but I can't figure out where. Here is my network definition: class LSTM(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.2): super(LSTM, self).__init__() # network size parameters self.n_layers = n_layers self.hidden_dim = hidden_dim self.vocab_size = vocab_size self.embedding_dim = embedding_dim # the layers of the network self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim) self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers, dropout=drop_prob, batch_first=True) self.dropout = nn.Dropout(drop_prob) self.fc = nn.Linear(self.hidden_dim, self.vocab_size) def forward(self, input, hidden): # Perform a forward pass of the model on some input and hidden state. batch_size = input.size(0) print(f'batch_size: {batch_size}') print(f'Input shape: {input.shape}') # pass through embeddings layer embeddings_out = self.embedding(input) print(f'Shape after Embedding: {embeddings_out.shape}') # pass through LSTM layers lstm_out, hidden = self.lstm(embeddings_out, hidden) print(f'Shape after LSTM: {lstm_out.shape}') # pass through dropout layer dropout_out = self.dropout(lstm_out) print(f'Shape after Dropout: {dropout_out.shape}') #pass through fully connected layer fc_out = self.fc(dropout_out) print(f'Shape after FC: {fc_out.shape}') # return output and hidden state return fc_out, hidden def init_hidden(self, batch_size): #Initializes hidden state # Create two new tensors with sizes n_layers x batch_size x hidden_dim, # initialized to zero, for hidden state and cell state of LSTM hidden = (torch.zeros(self.n_layers, batch_size, self.hidden_dim), torch.zeros(self.n_layers, batch_size, self.hidden_dim)) return hidden I added comments stating the shape of the network at each spot. My data is in a TensorDataset called training_dataset with two attributes, features and labels. Features has shape torch.Size([97, 3]), and labels has shape: torch.Size([97]).
This is the code for the network training: # Size parameters vocab_size = 13 embedding_dim = 256 hidden_dim = 256 n_layers = 2 # Training parameters epochs = 3 learning_rate = 0.001 clip = 1 batch_size = 2 training_loader = DataLoader(training_dataset, batch_size=batch_size, drop_last=True, shuffle=True) net = LSTM(vocab_size, embedding_dim, hidden_dim, n_layers) optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate) loss_func = torch.nn.CrossEntropyLoss() net.train() for e in range(epochs): print(f'Epoch {e}') print(batch_size) hidden = net.init_hidden(batch_size) # loops through each batch for features, labels in training_loader: # resets training history hidden = tuple([each.data for each in hidden]) net.zero_grad() # computes gradient of loss from backprop output, hidden = net.forward(features, hidden) loss = loss_func(output, labels) loss.backward() # using clipping to avoid exploding gradient nn.utils.clip_grad_norm_(net.parameters(), clip) optimizer.step() When I try to run the training I get the following error: Traceback (most recent call last): File "train.py", line 75, in <module> loss = loss_func(output, labels) File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 947, in forward return F.cross_entropy(input, target, weight=self.weight, File "/usr/local/lib/python3.8/site-packages/torch/nn/functional.py", line 2422, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/usr/local/lib/python3.8/site-packages/torch/nn/functional.py", line 2227, in nll_loss raise ValueError('Expected target size {}, got {}'.format( ValueError: Expected target size (2, 13), got torch.Size([2]) Also here is the result of the print statements: batch_size: 2 Input shape: torch.Size([2, 3]) Shape after Embedding: torch.Size([2, 3, 256]) Shape after LSTM: torch.Size([2, 3, 256]) Shape after Dropout: torch.Size([2, 3, 256]) Shape after FC: torch.Size([2, 3, 13]) There is some kind of shape error happening, but I can't figure out where. Any help would be appreciated. If relevant I'm using Python 3.8.5 and Pytorch 1.6.0.
To anyone coming across this in the future, I asked this same question on the PyTorch forums and got a great answer thanks to ptrblock, found here. The issue is that my LSTM layer had batch_first=True, which means that it returns the outputs of every member of the input sequence (size of (batch_size, sequence_size, vocab_size)). But, I only want the output of the last member of the input sequence (size of (batch_size, vocab_size)). So, in my forward function, instead of # pass through LSTM layers lstm_out, hidden = self.lstm(embeddings_out, hidden) it should be # pass through LSTM layers lstm_out, hidden = self.lstm(embeddings_out, hidden) # slice lstm_out to just get output of last element of the input sequence lstm_out = lstm_out[:, -1] This solved the shape issue. The error message was kind of misleading since it said that the target was the wrong shape, when really the output was the wrong shape.
https://stackoverflow.com/questions/65066981/
Make sure BERT model does not load pretrained weights?
I want to make sure my BertModel does not load pre-trained weights. I am using the auto classes (Hugging Face), which load the model automatically. My question is: how do I load the BERT model without pretrained weights?
Use AutoConfig instead of AutoModel: from transformers import AutoConfig config = AutoConfig.from_pretrained('bert-base-uncased') model = AutoModel.from_config(config) This should set up the model without loading the weights. Documentation here and here
https://stackoverflow.com/questions/65072694/
How to change Pytorch model to work with 3d input instead 2d input?
I am trying to train an agent to play Connect4 game. I found an example of how it can be trained. Representation of board is 1x6x7 array: [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 2] [0 0 0 0 0 0 1]]] This neural network architecture is used: class Net(BaseFeaturesExtractor): def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256): super(Net, self).__init__(observation_space, features_dim) # We assume CxHxW images (channels first) # Re-ordering will be done by pre-preprocessing or wrapper n_input_channels = observation_space.shape[0] self.cnn = nn.Sequential( nn.Conv2d(n_input_channels, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=0), nn.ReLU(), nn.Flatten(), ) # Compute shape by doing one forward pass with th.no_grad(): n_flatten = self.cnn(th.as_tensor(observation_space.sample()[None]).float()).shape[1] self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU()) def forward(self, observations: th.Tensor) -> th.Tensor: return self.linear(self.cnn(observations)) And it scored not so bad on a game with an agent 2 which moves randomly: Agent 1 Win Percentage: 0.59 Agent 2 Win Percentage: 0.38 Number of Invalid Plays by Agent 1: 3 Number of Invalid Plays by Agent 2: 0 Number of Draws (in 100 game rounds): 0 Here 3 layers representation was suggested as one of the ways how an agent can be improved: I have tried to implement it and this is the example of the new 3 layer representation of board: [[[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 1]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 1] [0 0 0 0 0 0 0]] [[0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 0] [0 0 0 0 0 0 1] [0 0 0 0 0 0 0] [1 1 1 1 1 1 0]]] When I run this with the current neural network architecture, the agent is not able to train appropriately: Agent 1 Win Percentage: 0.0 Agent 2 Win Percentage: 0.0 Number of Invalid Plays by Agent 1: 100 Number of Invalid Plays by Agent 2: 0 Number of Draws (in 100 game rounds): 0 Here you can see my code. As you can see now I have 3 layers instead of one. That's why I have tried to use Conv3d: class Net(BaseFeaturesExtractor): def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 256): super(Net, self).__init__(observation_space, features_dim) # We assume CxHxW images (channels first) # Re-ordering will be done by pre-preprocessing or wrapper n_input_channels = observation_space.shape[0] self.cnn = nn.Sequential( nn.Conv3d(n_input_channels, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv3d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(), nn.Conv3d(64, 128, kernel_size=3, stride=1, padding=0), nn.ReLU(), nn.Flatten(), ) # Compute shape by doing one forward pass with th.no_grad(): n_flatten = self.cnn(th.as_tensor(observation_space.sample()[None]).float()).shape[1] self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU()) When I try to run this code, it is showing this error: RuntimeError: Expected 5-dimensional input for 5-dimensional weight [32, 1, 3, 3, 3], but got 4-dimensional input of size [1, 3, 6, 7] instead My question: how can I use Conv3D layer with 3x6x7 shaped input?
The comment from Shai is correct. You do not need to use a Conv3D layer here. The shape of your Conv3D filters would violate the calculation of size after application of a convolutional filter by reducing at least 1 dimension to less than 1, which is why you are getting your error (you can't multiply with a value that does not exist). Simply using the original model implementation should work for you. Similar to images with 3 color bands, these are typically not processed with Conv3d (maybe a different case with hyperspectral images, but that is not relevant here). There is some discussion about how to treat each of the color bands, and you can affect this in a variety of ways. For example, adjusting the groups argument of the Conv2D layer at instantiation will change the connections between in_channels and out_channels of the layer and which are convolved to which, as per their documentation: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html. You may be able to optimize this in your model, or otherwise experiment with it. In any case, simply using the existing implementation with Conv2D should be fine for you. Conv3D is typically utilized in the case of 3 spatial dimensions, or sometimes 2 spatial and 1 temporal dimension. While your case is kind of like a limited version of 3 spatial dimensions, it is not necessarily the same as, say, a 3D vector-field of fluid flow in regards to how each "pixel" has some spatial relevance/correlation to its neighboring "pixels". Your "spatial pixels" have a somewhat different kind of relevance or correlation mapping than this.
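As a quick sanity check (a sketch), the original Conv2d stack handles the 3-channel 6x7 board once n_input_channels is set to 3 and the observation really has that shape:

import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=0),
    nn.ReLU(),
    nn.Flatten(),
)
board = torch.zeros(1, 3, 6, 7)   # batch of one 3x6x7 observation
print(cnn(board).shape)           # torch.Size([1, 2560]) = 128 * 4 * 5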
https://stackoverflow.com/questions/65082648/
How to compute mean/max of HuggingFace Transformers BERT token embeddings with attention mask?
I'm using the HuggingFace Transformers BERT model, and I want to compute a summary vector (a.k.a. embedding) over the tokens in a sentence, using either the mean or max function. The complication is that some tokens are [PAD], so I want to ignore the vectors for those tokens when computing the average or max. Here's an example. I initially instantiate a BertTokenizer and a BertModel: import torch import transformers from transformers import AutoTokenizer, AutoModel transformer_name = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(transformer_name, use_fast=True) model = AutoModel.from_pretrained(transformer_name) I then input some sentences into the tokenizer and get out input_ids and attention_mask. Notably, an attention_mask value of 0 means that the token was a [PAD] that I can ignore. sentences = ['Deep learning is difficult yet very rewarding.', 'Deep learning is not easy.', 'But is rewarding if done right.'] tokenizer_result = tokenizer(sentences, max_length=32, padding=True, return_attention_mask=True, return_tensors='pt') input_ids = tokenizer_result.input_ids attention_mask = tokenizer_result.attention_mask print(input_ids.shape) # torch.Size([3, 11]) print(input_ids) # tensor([[ 101, 2784, 4083, 2003, 3697, 2664, 2200, 10377, 2075, 1012, 102], # [ 101, 2784, 4083, 2003, 2025, 3733, 1012, 102, 0, 0, 0], # [ 101, 2021, 2003, 10377, 2075, 2065, 2589, 2157, 1012, 102, 0]]) print(attention_mask.shape) # torch.Size([3, 11]) print(attention_mask) # tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0], # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]]) Now, I call the BERT model to get the 768-D token embeddings (the top-layer hidden states). model_result = model(input_ids, attention_mask=attention_mask, return_dict=True) token_embeddings = model_result.last_hidden_state print(token_embeddings.shape) # torch.Size([3, 11, 768]) So at this point, I have: token embeddings in a [3, 11, 768] matrix: 3 sentences, 11 tokens, 768-D vector for each token. attention mask in a [3, 11] matrix: 3 sentences, 11 tokens. A 1 value indicates non-[PAD]. How do I compute the mean / max over the vectors for the valid, non-[PAD] tokens? I tried using the attention mask as a mask and then called torch.max(), but I don't get the right dimensions: masked_token_embeddings = token_embeddings[attention_mask==1] print(masked_token_embeddings.shape) # torch.Size([29, 768] <-- WRONG. SHOULD BE [3, 11, 768] pooled = torch.max(masked_token_embeddings, 1) print(pooled.values.shape) # torch.Size([29]) <-- WRONG. SHOULD BE [3, 768] What I really want is a tensor of shape [3, 768]. That is, a 768-D vector for each of the 3 sentences.
For max, you can multiply with attention_mask: pooled = torch.max((token_embeddings * attention_mask.unsqueeze(-1)), axis=1) For mean, you can sum along the axis and divide by attention_mask along that axis: mean_pooled = token_embeddings.sum(axis=1) / attention_mask.sum(axis=-1).unsqueeze(-1)
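A hedged refinement of the snippets above: torch.max returns a (values, indices) namedtuple, and, as an extra safety assumption, the masked positions are pushed to -inf so that negative embeddings at real tokens can never lose to the zeroed-out pads:

mask = attention_mask.unsqueeze(-1)                                   # [3, 11, 1]
masked = token_embeddings.masked_fill(mask == 0, float('-inf'))
max_pooled = masked.max(dim=1).values                                 # [3, 768]
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # [3, 768]
print(max_pooled.shape, mean_pooled.shape)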
https://stackoverflow.com/questions/65083581/
Mask R-CNN optimizer and learning rate scheduler in Pytorch
In the Mask R-CNN paper here the optimizer is described as follows for training on the MS COCO 2014/2015 dataset for instance segmentation (I believe this is the dataset, correct me if this is wrong) We train on 8 GPUs (so effective minibatch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt [45], we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. I'm trying to write an optimizer and learning rate scheduler in Pytorch for a similar application, to match this description. For the optimizer I have: def get_Mask_RCNN_Optimizer(model, learning_rate=0.02): optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=0.0001) return optimizer For the learning rate scheduler I have: def get_MASK_RCNN_LR_Scheduler(optimizer, step_size): scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=0.1, verbose=True) return scheduler When the authors say "decreased by 10" do they mean divide by 10? Or do they literally mean subtract 10, in which case we have a negative learning rate, which seems odd/wrong. Any insights appreciated.
The authors mean divide by 10, as pointed out in the comments.
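A tiny illustration (a sketch with a toy step_size) that gamma=0.1 divides the learning rate by 10 at each step boundary:

import torch

opt = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.02)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=3, gamma=0.1)
for epoch in range(6):
    print(epoch, opt.param_groups[0]['lr'])
    opt.step()
    sched.step()
# epochs 0-2 run at 0.02, epochs 3-5 at 0.002 -- divided by 10, not minus 10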
https://stackoverflow.com/questions/65084779/
What is meant by the (AttributeError: 'NoneType' object has no attribute '__array_interface__') error?
I am trying to build an ML model to detect landmarks on a cartoon image face. When I split the image dataset into training and validation sets I got the following error. Here I am using PyTorch to build the model. So what is meant by this error? This is how I split the dataset. # split the dataset into validation and test sets len_valid_set = int(0.2*len(dataset)) len_train_set = len(dataset) - len_valid_set print("The length of Train set is {}".format(len_train_set)) print("The length of Valid set is {}".format(len_valid_set)) train_dataset, valid_dataset = torch.utils.data.random_split(dataset, [len_train_set, len_valid_set]) # shuffle and batch the datasets train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4) valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=8, shuffle=True, num_workers=4) images, landmarks = next(iter(train_loader)) This is the error I got. The length of Train set is 105 The length of Valid set is 26 AttributeError Traceback (most recent call last) <ipython-input-61-ffb86a628e37> in <module>() ----> 1 images, landmarks = next(iter(train_loader)) 2 3 print(images.shape) 4 print(landmarks.shape) 3 frames /usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self) 426 # have message field 427 raise self.exc_type(message=msg) --> 428 raise self.exc_type(msg) 429 430 AttributeError: Caught AttributeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 272, in __getitem__ return self.dataset[self.indices[idx]] File "<ipython-input-12-5595ac89d75d>", line 38, in __getitem__ image, landmarks = self.transform(image, landmarks, self.crops[index]) File "<ipython-input-9-e38df55ee0d4>", line 46, in __call__ image = Image.fromarray(image) File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 2670, in fromarray arr = obj.__array_interface__ AttributeError: 'NoneType' object has no attribute '__array_interface__'
Basically it says that when executing the line image = Image.fromarray(image), the Image.fromarray function expects image to be an array-like object that implements the __array_interface__ attribute, which lets it be read as an array. However, during execution image is actually None (the Python object type for nothing). Surely you can't turn None into an image. There could be something wrong with your data. I'd suggest skipping the random split at first and checking that each item in the dataset is not None.
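A quick way to locate the offending item(s) before splitting (a sketch, assuming dataset[i] triggers the same loading/transform path that fails in the loader):

bad_indices = []
for i in range(len(dataset)):
    try:
        dataset[i]
    except AttributeError:
        bad_indices.append(i)   # the image at this index failed to load
print('broken items:', bad_indices)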
https://stackoverflow.com/questions/65091965/
Tensorflow to PyTorch
I'm transferring a TensorFlow code to a PyTorch code. The lines below are the problem I couldn't solve yet. I'm not familiar with PyTorch, so it's not easy for me to find the matching methods in the PyTorch library. Can anyone help me? p.s. The shape of alpha is (batch, N). alpha_cumsum = tf.cumsum(alpha, axis = 1) len_batch = tf.shape(alpha_cumsum)[0] rand_prob = tf.random_uniform(shape = [len_batch, 1], minval = 0., maxval = 1.) alpha_relu = tf.nn.relu(rand_prob - alpha_cumsum) alpha_index = tf.count_nonzero(alpha_relu, 1) alpha_hard = tf.one_hot(alpha_index, len(a))
I've put each of your functions followed by the corresponding PyTorch function. Most have the same name and can be found in the PyTorch docs (https://pytorch.org/docs/stable/index.html) tf.cumsum(alpha, axis = 1) torch.cumsum(alpha, dim=1) tf.shape(alpha_cumsum)[0] alpha_cumsum.shape[0] tf.random_uniform(shape = [len_batch, 1], minval = 0., maxval = 1.) torch.rand([len_batch,1]) tf.nn.relu(rand_prob - alpha_cumsum) torch.nn.functional.relu(rand_prob - alpha_cumsum) tf.count_nonzero(alpha_relu, 1) torch.count_nonzero(alpha_relu, dim=1) tf.one_hot(alpha_index, len(a)) torch.nn.functional.one_hot(alpha_index, len(a)) # assuming len(a) is number of classes
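Putting the mappings together, the whole block could look like this (a sketch; alpha and a come from the original code and are assumed to be a (batch, N) tensor and a sequence):

import torch
import torch.nn.functional as F

alpha_cumsum = torch.cumsum(alpha, dim=1)
len_batch = alpha_cumsum.shape[0]
rand_prob = torch.rand(len_batch, 1)
alpha_relu = F.relu(rand_prob - alpha_cumsum)
alpha_index = torch.count_nonzero(alpha_relu, dim=1)
alpha_hard = F.one_hot(alpha_index, len(a))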
https://stackoverflow.com/questions/65092587/
How do I reshape pytorch tensor from [a, b*c] to [b,a,c]?
I'm getting this code error, and I am unsure on how to reshape my tensor. RuntimeError: Expected 3-dimensional input for 3-dimensional weight [32, 35, 2], but got 2-dimensional input of size [35, 64] instead
You can use reshape function from torch as follows: a = torch.rand((35,64)) b = torch.reshape(a, (32,35,2))
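Note that a plain reshape just refills the new shape in row-major order. If the intent is that each row of the [35, 64] tensor holds b*c = 32*2 values that should stay grouped per (b, c) pair, a reshape followed by a permute preserves that grouping (a sketch of that reading):

import torch

a = torch.rand(35, 64)                     # [a, b*c]
b = a.reshape(35, 32, 2).permute(1, 0, 2)  # -> torch.Size([32, 35, 2]) = [b, a, c]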
https://stackoverflow.com/questions/65092624/
Can't install pytorch: no matching distribution found for torch==1.7.0+cpu
When trying to install PyTorch, I can't progress past these errors: ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.7.0+cpu
The issue likely has to do with your Python version. PyTorch will not work with the 32-bit version of Python, so if you're using 32-bit it will give that error. I had this happen to me before; switching to 64-bit should fix the error.
https://stackoverflow.com/questions/65093180/
PyTorch - TypeError: forward() takes 1 positional argument but 2 were given
I do not understand where this error comes from, the number of arguments to the model seems correct, below is my model: class MancalaModel(nn.Module): def __init__(self, n_inputs=16, n_outputs=16): super().__init__() n_neurons = 256 def create_block(n_in, n_out): block = nn.ModuleList() block.append(nn.Linear(n_in, n_out)) block.append(nn.ReLU()) return block self.blocks = nn.ModuleList() self.blocks.append(create_block(n_inputs, n_neurons)) for _ in range(6): self.blocks.append(create_block(n_neurons, n_neurons)) self.actor_block = nn.ModuleList() self.critic_block = nn.ModuleList() for _ in range(2): self.actor_block.append(create_block(n_neurons, n_neurons)) self.critic_block.append(create_block(n_neurons, n_neurons)) self.actor_block.append(create_block(n_neurons, n_outputs)) self.critic_block.append(create_block(n_neurons, 1)) self.apply(init_weights) def forward(self, x): x = self.blocks(x) actor = F.softmax(self.actor_block(x)) critics = self.critic_block(x) return actor, critics Then I create an instance and make a forward pass with random number model = MancalaModel() x = model(torch.rand(1, 16)) Then I got the TypeError saying the number of arguments is not correct: 2 model = MancalaModel() ----> 3 x = model(torch.rand(1, 16)) 4 # summary(model, (16,), device='cpu') 5 d:\environments\python\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), D:\UOM\Year3\AI & Games\KalahPlayer\agents\model_agent.py in forward(self, x) 54 55 def forward(self, x): ---> 56 x = self.blocks(x) 57 actor = F.softmax(self.actor_block(x)) 58 critics = self.critic_block(x) d:\environments\python\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), TypeError: forward() takes 1 positional argument but 2 were given Any help is appreciated, thanks!
TL;DR You are trying to forward through nn.ModuleList - this is not defined. You need to convert self.blocks to nn.Sequential: def create_block(n_in, n_out): # do not work with ModuleList here either. block = nn.Sequential( nn.Linear(n_in, n_out), nn.ReLU() ) return block blocks = [] # simple list - not a member of self, for temporal use only. blocks.append(create_block(n_inputs, n_neurons)) for _ in range(6): blocks.append(create_block(n_neurons, n_neurons)) self.blocks = nn.Sequential(*blocks) # convert the simple list to nn.Sequential I was expecting you to get NotImplementedError, and not this TypeError, because your self.blocks is of type nn.ModuleList and its forward method throws NotImplementedError. I just made a pull request to fix this confusing issue. Update (April 22nd, 2021): the PR was merged. In future versions you should see NotImplementedError when calling nn.ModuleList or nn.ModuleDict.
https://stackoverflow.com/questions/65096679/
creating a train and a test dataloader
I actually have a directory RealPhotos containing 17000 jpg photos. I would be interested in creating a train dataloader and a test dataloader. ls RealPhotos/ 2007_000027.jpg 2008_007119.jpg 2010_001501.jpg 2011_002987.jpg 2007_000032.jpg 2008_007120.jpg 2010_001502.jpg 2011_002988.jpg 2007_000033.jpg 2008_007123.jpg 2010_001503.jpg 2011_002992.jpg 2007_000039.jpg 2008_007124.jpg 2010_001505.jpg 2011_002993.jpg 2007_000042.jpg 2008_007129.jpg 2010_001511.jpg 2011_002994.jpg 2007_000061.jpg 2008_007130.jpg 2010_001514.jpg 2011_002996.jpg 2007_000063.jpg 2008_007131.jpg 2010_001515.jpg 2011_002997.jpg 2007_000068.jpg 2008_007133.jpg 2010_001516.jpg 2011_002999.jpg 2007_000121.jpg 2008_007134.jpg 2010_001518.jpg 2011_003002.jpg 2007_000123.jpg 2008_007138.jpg 2010_001520.jpg 2011_003003.jpg ... I know I can subclass TensorDataset to make it compatible with unlabeled data with class UnlabeledTensorDataset(TensorDataset): """Dataset wrapping unlabeled data tensors. Each sample will be retrieved by indexing tensors along the first dimension. Arguments: data_tensor (Tensor): contains sample data. """ def __init__(self, data_tensor): self.data_tensor = data_tensor def __getitem__(self, index): return self.data_tensor[index] And something along these lines for training the autoencoder X_train = rnd.random((300,100)) train = UnlabeledTensorDataset(torch.from_numpy(X_train).float()) train_loader = data_utils.DataLoader(train, batch_size=1) for epoch in range(50): for batch in train_loader: data = Variable(batch) optimizer.zero_grad() output = model(data) loss = criterion(output, data)
You first need to define a Dataset (torch.utils.data.Dataset), then you can use a DataLoader on it. There is no difference between your train and test dataset; you can define a generic dataset that will look into a particular directory and map each index to a unique file. class MyDataset(Dataset): def __init__(self, directory): # keep full paths so Image.open works from any working directory self.files = [os.path.join(directory, f) for f in os.listdir(directory)] def __len__(self): # needed by random_split and DataLoader return len(self.files) def __getitem__(self, index): img = Image.open(self.files[index]).convert('RGB') return T.ToTensor()(img) Where T refers to torchvision.transforms and Image is imported from PIL. You can then instantiate a dataset with data_set = MyDataset('./RealPhotos') From there you can use torch.utils.data.random_split to perform the split: train_len = int(len(data_set)*0.7) train_set, test_set = random_split(data_set, [train_len, len(data_set) - train_len]) Then use torch.utils.data.DataLoader as you did: train_loader = DataLoader(train_set, batch_size=1, shuffle=True) test_loader = DataLoader(test_set, batch_size=16, shuffle=False)
https://stackoverflow.com/questions/65097733/
Codes worked fine one week ago, but keep getting error since yesterday: Fine-tuning Bert model training via PyTorch on Colab
I am new to Bert. Two weeks ago I successfully ran a fine-tuning Bert model on a nlp classification task though the outcome was not brilliant. Yesterday, however, when I tried to run the same code and data, an AttributeError was always there, which says: 'str' object has no attribute 'dim'. Please know everything is on Colab and via PyTorch Transformers. What should I do to fix it? Here is one thing I tried when I installed transformers but turned out it did not work: instead of !pip install transformers , I tried to use previous transformers version: !pip install --target lib --upgrade transformers==3.5.0 Any feedback will be greatly appreciated! Please see the code and the error message as below: Code: train definition # function to train the model def train(): model.train() total_loss, total_accuracy = 0, 0 # empty list to save model predictions total_preds=[] # iterate over batches for step,batch in enumerate(train_dataloader): # progress update after every 50 batches. if step % 200 == 0 and not step == 0: print(' Batch {:>5,} of {:>5,}.'.format(step, len(train_dataloader))) # push the batch to gpu batch = [r.to(device) for r in batch] sent_id, mask, labels = batch # clear previously calculated gradients model.zero_grad() # get model predictions for the current batch preds = model(sent_id, mask) # compute the loss between actual and predicted values loss = cross_entropy(preds, labels) # add on to the total loss total_loss = total_loss + loss.item() # backward pass to calculate the gradients loss.backward() # clip the the gradients to 1.0. It helps in preventing the exploding gradient problem torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # update parameters optimizer.step() # update learning rate schedule # scheduler.step() # model predictions are stored on GPU. So, push it to CPU preds=preds.detach().cpu().numpy() # append the model predictions total_preds.append(preds) # compute the training loss of the epoch avg_loss = total_loss / len(train_dataloader) # predictions are in the form of (no. of batches, size of batch, no. of classes). # reshape the predictions in form of (number of samples, no. 
of classes) total_preds = np.concatenate(total_preds, axis=0) #returns the loss and predictions return avg_loss, total_preds training process # set initial loss to infinite best_valid_loss = float('inf') # empty lists to store training and validation loss of each epoch train_losses=[] valid_losses=[] #for each epoch for epoch in range(epochs): print('\n Epoch {:} / {:}'.format(epoch + 1, epochs)) #train model train_loss, _ = train() #evaluate model valid_loss, _ = evaluate() #save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'saved_weights.pt') # append training and validation loss train_losses.append(train_loss) valid_losses.append(valid_loss) print(f'\nTraining Loss: {train_loss:.3f}') print(f'Validation Loss: {valid_loss:.3f}') Error message: Epoch 1 / 10 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-41-c5138ddf6b25> in <module>() 12 13 #train model ---> 14 train_loss, _ = train() 15 16 #evaluate model 5 frames /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1686 if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): 1687 return handle_torch_function(linear, tens_ops, input, weight, bias=bias) -> 1688 if input.dim() == 2 and bias is not None: 1689 # fused op is marginally faster 1690 ret = torch.addmm(bias, input, weight.t()) AttributeError: 'str' object has no attribute 'dim'
As far as I remember, there was an older transformers version preinstalled in Colab, something like 2.11.0. Try: !pip install transformers~=2.11.0 Change the version number until it works.
https://stackoverflow.com/questions/65099753/
what does it mean when kernel depth=1 in conv3d pytorch
I would like to understand the difference between conv2d and conv3d in PyTorch. What is the difference between: conv3d(in, out, kernel_size(1,3,3)) and conv2d(in,out,kernel_size(3,3)) I checked the official documentation but I couldn't quite understand the difference between the two. Should conv3d in this case be the same as conv2d since the depth is 1? Any help would be appreciated.
Well, to give an intuitive understanding of how a kernel works, I would recommend looking at how the cells (a.k.a. neurons) of a single feature layer are obtained. For the 2D convolution you have the height and width of the kernel layer (KL) that goes over a single feature layer (FL) from a 2D convolutional layer (CL). Because a CL can have numerous FLs, multiple KLs are created in parallel and placed over the different FLs, so it can process them in parallel. This parallel processing is usually illustrated as a "stacked" structure of KLs, which is usually named just the Kernel. Not only can the singular name "Kernel" for this plural structure lead to confusion, but also the fact that the parallel processing of KLs is commonly illustrated as a stacking of KLs, creating an illusion of a depth dimension. So it is important to remember: this is not a third dimension, this is just parallel processing of KLs over FLs. For a 3D convolution you now need a 3D KL to process the 3D FL. This is now a true third dimension. The number of 3D KLs that will be "stacked" will be automatically adjusted so it can process the multiple 3D FLs of your network in parallel. Regarding your question: indeed, a 3D kernel with depth 1 is the same as a 2D kernel; however, the functions are built differently for the different cases. That is, 2D for images with two dimensions (height, width), and 3D for images with three dimensions (height, width, depth).
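To make the equivalence concrete, here is a small sketch (my own illustration, not part of the original answer) showing that a Conv3d with kernel_size=(1, 3, 3) computes the same thing as a Conv2d with kernel_size=(3, 3) once the weights are shared:
import torch
import torch.nn as nn

conv2d = nn.Conv2d(4, 8, kernel_size=(3, 3), padding=1)
conv3d = nn.Conv3d(4, 8, kernel_size=(1, 3, 3), padding=(0, 1, 1))

# copy the 2D weights into the 3D kernel (the extra depth dim has size 1)
with torch.no_grad():
    conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
    conv3d.bias.copy_(conv2d.bias)

x = torch.randn(1, 4, 16, 16)              # (N, C, H, W)
out2d = conv2d(x)
out3d = conv3d(x.unsqueeze(2)).squeeze(2)  # add/remove a depth dim of size 1
print(torch.allclose(out2d, out3d, atol=1e-6))  # True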
https://stackoverflow.com/questions/65103822/
Pytorch model training CPU Memory leak issue
When I trained my PyTorch model on a GPU device, my Python script was killed out of the blue. Diving into the OS log files, I found the script was killed by the OOM killer because my CPU ran out of memory. It's very strange that I trained my model on the GPU device yet ran out of CPU memory. Snapshot of OOM killer log file In order to debug this issue, I installed a Python memory profiler. Viewing the log file from the memory profiler, I found that when the column-wise -= operation occurred, my CPU memory gradually increased until the OOM killer killed my program. Snapshot of Python memory profiler It's very strange. I tried many ways to solve this issue. Finally, I found that if I detach the Tensor before the assignment operation, it amazingly solves the issue. But I don't clearly understand why it works. Here is my original function code. def GeneralizedNabla(self, image): pad_size = 2 affinity = torch.zeros(image.shape[0], self.window_size**2, self.h, self.w).to(self.device) h = self.h+pad_size w = self.w+pad_size #pad = nn.ZeroPad2d(pad_size) image_pad = self.pad(image) for i in range(0, self.window_size**2): affinity[:, i, :, :] = image[:, :, :].detach() # initialization dy = int(i/5)-2 dx = int(i % 5)-2 h_start = pad_size+dy h_end = h+dy # if 0 <= dy else h+dy w_start = pad_size+dx w_end = w+dx # if 0 <= dx else w+dx affinity[:, i, :, :] -= image_pad[:, h_start:h_end, w_start:w_end].detach() self.Nabla=affinity return If anyone has any ideas, I will appreciate it very much. Thank you.
Previously, when you did not use .detach() on your tensor, you were also accumulating the computation graph, and as you went on you kept accumulating more and more until you ended up exhausting your memory to the point it crashed. When you do a detach(), you are effectively getting the data without the previously entangled history that's needed for computing the gradients.
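A minimal sketch of the mechanism (my own illustration; the names are made up): repeatedly writing a grad-tracking tensor into a buffer keeps every iteration's graph alive, while .detach() copies only the values:
import torch

x = torch.ones(8, requires_grad=True)
buffer = torch.zeros(8)

for _ in range(10000):
    y = x * 2
    # buffer += y           # keeps a reference to each iteration's graph,
    #                       # so host memory grows on every step
    buffer += y.detach()    # copies values only; no graph is retained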
https://stackoverflow.com/questions/65107933/
How to add L2 regularization to the cost function in PyTorch?
I'm trying to add L1 and L2 regularization to my loss function, but I fail. My code: criterion = nn.NLLLoss() + nn.L1Loss() Ideally it would look something like this: criterion = nn.NLLLoss() + _lambda * nn.L1Loss() How can I do it?
You need to first instantiate them both and then add the results. They both expect two arguments: nll_loss = nn.NLLLoss() l1_loss = nn.L1Loss() loss = nll_loss(x, y) + _lambda * l1_loss(x, y)
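If by L2 regularization you mean penalizing the model weights themselves (the usual sense of weight decay), here is a hedged sketch (my own addition), assuming model is your network:
# add a weight penalty term to the loss
l2_penalty = sum(p.pow(2).sum() for p in model.parameters())
loss = nll_loss(x, y) + _lambda * l2_penalty

# equivalently, most optimizers support this directly:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=_lambda)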
https://stackoverflow.com/questions/65108218/
Filtering rows in pytorch tensor
If I have a tensor which holds information on detected images in the following shape: [[595.00000, 179.62500, 628.00000, 283.00000, 0.89062, 0.00000], [142.87500, 167.62500, 201.62500, 324.00000, 0.88086, 0.00000], [311.75000, 170.50000, 368.75000, 320.50000, 0.87549, 0.00000], [555.50000, 173.75000, 593.50000, 280.50000, 0.85791, 0.00000], [398.50000, 179.00000, 425.50000, 265.00000, 0.84180, 0.00000], [445.75000, 177.75000, 479.25000, 270.75000, 0.82129, 0.00000]] where each row represents an image with the following parameters: [ top, bottom, left, right, confidence, class ] What is the most effective way of dropping images (rows) whose size is smaller than some user-defined input for height, which is top-bottom? Naturally I would iterate over rows and drop each row where top-bottom < someValue with some list comprehension, but I suspect there could be a better way of doing this.
How about I drop some benchmarks (if that would be interesting to you)? Benchmarks PyTorch-ic way: In[2]: import torch ...: a = torch.Tensor( ...: [[595.00000, 179.62500, 628.00000, 283.00000, 0.89062, 0.00000], ...: [142.87500, 167.62500, 201.62500, 324.00000, 0.88086, 0.00000], ...: [311.75000, 170.50000, 368.75000, 320.50000, 0.87549, 0.00000], ...: [555.50000, 173.75000, 593.50000, 280.50000, 0.85791, 0.00000], ...: [398.50000, 179.00000, 425.50000, 265.00000, 0.84180, 0.00000], ...: [445.75000, 177.75000, 479.25000, 270.75000, 0.82129, 0.00000]]) In[3]: %timeit a[a[:, 0] - a[:, 1] > 300] Out[3]: 24.5 µs ± 904 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) In Numpy terms: In[4]: import numpy as np ...: np_arr = a.numpy() In[5]: %timeit np_arr[np.where(np_arr[:, 0] - np_arr[:, 1] > 300)] Out[5]: 4.75 µs ± 713 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In[6]: np_arr[np.where(np_arr[:, 0] - np_arr[:, 1] > 300)] Equality In[7]: torch.equal(torch.from_numpy(np_arr[np.where(np_arr[:, 0] - np_arr[:, 1] > 300)]), a[a[:, 0] - a[:, 1] > 300]) Out[7]: True The conclusion is that using numpy for your comparisons would be way faster than PyTorch.
https://stackoverflow.com/questions/65108606/
Name of Modules to compute sparsity
I'm writing a function that computes the sparsity of the weight matrices of the following fully connected network: class FCN(nn.Module): def __init__(self): super(FCN, self).__init__() self.fc1 = nn.Linear(input_dim, hidden_dim) self.relu1 = nn.ReLU() self.fc2 = nn.Linear(hidden_dim, hidden_dim) self.relu2 = nn.ReLU() self.fc3 = nn.Linear(hidden_dim, hidden_dim) self.relu3 = nn.ReLU() self.fc4 = nn.Linear(hidden_dim, output_dim) def forward(self, x): out = self.fc1(x) out = self.relu1(out) out = self.fc2(out) out = self.relu2(out) out = self.fc3(out) out = self.relu3(out) out = self.fc4(out) return out The function I have written is the following: def print_layer_sparsity(model): for name,module in model.named_modules(): if 'fc' in name: zeros = 100. * float(torch.sum(model.name.weight == 0)) tot = float(model.name.weight.nelement()) print("Sparsity in {}.weight: {:.2f}%".format(name, zeros/tot)) But it gives me the following error: torch.nn.modules.module.ModuleAttributeError: 'FCN' object has no attribute 'name' It works fine when I manually enter the name of the layers (e.g., (model.fc1.weight == 0) (model.fc2.weight == 0) (model.fc3.weight == 0) .... but I'd like to make it independent from the network. In other words, I'd like to adapt my function in a way that, given any sparse network, it prints the sparsity of every layer. Any suggestions? Thanks!!
Try: getattr(model, name).weight in place of model.name.weight Your print_layer_sparsity function becomes: def print_layer_sparsity(model): for name,module in model.named_modules(): if 'fc' in name: zeros = 100. * float(torch.sum(getattr(model, name).weight == 0)) tot = float(getattr(model, name).weight.nelement()) print("Sparsity in {}.weight: {:.2f}%".format(name, zeros/tot)) You can't do model.name because name is a str. The built-in getattr function allows you to get the member variables / attributes of an object using its name as a string. For more information, check out this answer.
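As an aside (my own addition, not part of the original answer): since named_modules already yields the module objects, you can skip getattr entirely and make the function independent of layer naming conventions:
def print_layer_sparsity(model):
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):  # works for any naming scheme
            zeros = 100. * float(torch.sum(module.weight == 0))
            tot = float(module.weight.nelement())
            print("Sparsity in {}.weight: {:.2f}%".format(name, zeros / tot))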
https://stackoverflow.com/questions/65109583/
mat1 dim 1 must match mat2 dim 0 - PyTorch
I'm new to PyTorch and I keep getting the error mat1 dim 1 must match mat2 dim 0. This is my code for the network: class Net(Module): def __init__(self): super(Net, self).__init__() self.cnn_layers = Sequential( Conv1d(4,4,kernel_size=2, stride=1, padding=1), BatchNorm1d(4), ReLU(inplace=True), MaxPool1d(kernel_size=2,stride=1), ) self.linear_layers = Sequential( Linear(8267*4,2) ) def forward(self, x): print(x.shape) x = self.cnn_layers(x) print(x.shape) x = self.linear_layers(x) return x And where the print statements are: torch.Size([8267, 4, 1]) torch.Size([8267, 4, 1]) Any help/advice?
Assuming 8267 is your batch size: the output of your CNN is 8267x4x1, so you first need to flatten dim=1 and dim=2 into a single dimension to get shape 8267x4. The following (dense) layer will then require 4 input features (Flatten here is nn.Flatten). self.cnn_layers = Sequential( Conv1d(4, 4, kernel_size=2, stride=1, padding=1), BatchNorm1d(4), ReLU(inplace=True), MaxPool1d(kernel_size=2, stride=1)) self.linear_layers = Sequential( Flatten(), Linear(4, 2))
https://stackoverflow.com/questions/65111622/
I've 2 folders, one image in each folder. I have to compare the two images and find the dissimilarity
I've 2 folders. One image is in one folder and the other is in the other folder. I have to compare the two images and find the dissimilarity, but the code picks the images at random. class InferenceSiameseNetworkDataset(Dataset): def __init__(self,imageFolderDataset,transform=None,should_invert=True): self.imageFolderDataset = imageFolderDataset self.transform = transform self.should_invert = should_invert def __getitem__(self,index): img0_tuple = random.choice(self.imageFolderDataset.imgs) img1_tuple = random.choice(self.imageFolderDataset.imgs) #we need to make sure approx 50% of images are in the same class should_get_same_class = random.randint(0,1) if should_get_same_class: while True: #keep looping till the same class image is found img1_tuple = random.choice(self.imageFolderDataset.imgs) if img0_tuple[1]==img1_tuple[1]: break else: while True: #keep looping till a different class image is found img1_tuple = random.choice(self.imageFolderDataset.imgs) if img0_tuple[1] !=img1_tuple[1]: break img0 = Image.open(img0_tuple[0]) img1 = Image.open(img1_tuple[0]) img0 = img0.convert("L") img1 = img1.convert("L") if self.should_invert: img0 = PIL.ImageOps.invert(img0) img1 = PIL.ImageOps.invert(img1) if self.transform is not None: img0 = self.transform(img0) img1 = self.transform(img1) return img0, img1 , torch.from_numpy(np.array([int(img1_tuple[1]!=img0_tuple[1])],dtype=np.float32)) def __len__(self): return len(self.imageFolderDataset.imgs) I took this code from GitHub, and when I try to compare the dissimilarity of the two images it chooses the images randomly. There are 2 input folders: one image should be in one folder and the other image should be in the other folder. When I'm testing, it sometimes tests on the same image; I mean, it's not checking against the image in the other folder. testing_dir1 = '/content/drive/My Drive/Signature Dissimilarity/Forged_Signature_Verification/processed_dataset/training1/' folder_dataset_test = dset.ImageFolder(root=testing_dir1) siamese_dataset = InferenceSiameseNetworkDataset(imageFolderDataset=folder_dataset_test, transform=transforms.Compose([transforms.Resize((100,100)), transforms.ToTensor() ]) ,should_invert=False) test_dataloader = DataLoader(siamese_dataset,num_workers=6,batch_size=1,shuffle=False) dataiter = iter(test_dataloader) x0,_,_ = next(dataiter) for i in range(2): _,x1,label2 = next(dataiter) concatenated = torch.cat((x0,x1),0) output1,output2 = net(Variable(x0).cuda(),Variable(x1).cuda()) euclidean_distance = F.pairwise_distance(output1, output2) imshow(torchvision.utils.make_grid(concatenated),'Dissimilarity: {:.2f}'.format(euclidean_distance.item())) dis = 'Dissimilarity: {:.2f}'.format(euclidean_distance.item()) dis1 = dis dis1 = dis1.replace("Dissimilarity:", "").replace(" ", "") print(dis) if float(dis1) < 0.5: print("It's Same Signature") else: print("It's Forged Signature")
Just by assigning should_get_same_class=0 in the __getitem__ function of your custom dataset class InferenceSiameseNetworkDataset, you can ensure that the two images belong to different classes/folders. Secondly, you should not concatenate samples from two different batches, as they may not satisfy your condition. You should use x0,x1,label2 = next(dataiter) inside the scope of the loop, followed by the concatenation.
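A hedged sketch of the adjusted test loop (my own illustration, reusing the variable names from your snippet):
dataiter = iter(test_dataloader)
for i in range(2):
    # draw both images from the same batch, inside the loop
    x0, x1, label2 = next(dataiter)
    concatenated = torch.cat((x0, x1), 0)
    output1, output2 = net(Variable(x0).cuda(), Variable(x1).cuda())
    euclidean_distance = F.pairwise_distance(output1, output2)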
https://stackoverflow.com/questions/65112063/
How do I add PyTorch w/ CUDA to Dask Helm Chart
I tried to install PyTorch compiled for CUDA into the Dask helm chart, and it failed. I installed PyTorch for CUDA per the instructions on pytorch.org (see image below), but the Dask helm chart example fails: - name: EXTRA_CONDA_PACKAGES value: "pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch"
You may want to check out the RAPIDS helm chart, which is an extension of the Dask helm chart but with additional GPU support. Install at runtime The RAPIDS Docker images also support the same EXTRA_PIP_PACKAGES, EXTRA_CONDA_PACKAGES and EXTRA_APT_PACKAGES that the Dask Docker images do. # config.yaml dask: scheduler: image: repository: rapidsai/rapidsai tag: cuda11.0-runtime-ubuntu18.04-py3.8 worker: image: repository: rapidsai/rapidsai tag: cuda11.0-runtime-ubuntu18.04-py3.8 env: - name: EXTRA_CONDA_PACKAGES value: "-c pytorch pytorch torchvision torchaudio" # If you're using the bundled Jupyter Lab instance you probably want to install these here too jupyter: image: repository: rapidsai/rapidsai tag: cuda11.0-runtime-ubuntu18.04-py3.8 env: - name: EXTRA_CONDA_PACKAGES value: "-c pytorch pytorch torchvision torchaudio" $ helm install rapidstest rapidsai/rapidsai -f config.yaml Install ahead of time The above approach means the dependencies will be installed every time a worker starts. Therefore you may prefer to create your own custom Docker image with these dependencies already included. # Dockerfile FROM rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8 RUN conda install -n rapids -c pytorch pytorch torchvision torchaudio $ docker build -t jacobtomlinson/customrapids:latest . $ docker push jacobtomlinson/customrapids:latest # config.yaml dask: scheduler: image: repository: jacobtomlinson/customrapids tag: latest worker: image: repository: jacobtomlinson/customrapids tag: latest # If you're using the bundled Jupyter Lab instance you probably want to install these here too jupyter: image: repository: jacobtomlinson/customrapids tag: latest $ helm install rapidstest rapidsai/rapidsai -f config.yaml
https://stackoverflow.com/questions/65112341/
Pytorch tensor representing a 3D grid with color values
Given a list of density values (scalars) that represent the density of an X,Y,Z coordinate on a 3D grid, how would I create a single tensor that can store this information? i.e. a tensor that has dimensions of 1x20x20x20 for example would represent a 20x20x20 grid such that: print(tensor[:,x1,y1,z1]) 0.6 print(tensor[:,x2,y2,z2]) 0.4
Based on your comments, you would like to turn an (8000,) 1D tensor into a (1x20x20x20) 4D tensor. Assuming you have your initial tensor laid out as something like: X = torch.tensor([111,112,113,121,122,123,131,132,133,211,212,213,221,222,223,231,232,233,311,312,313,321,322,323,331,332,333]) this is as simple as using view on it: xyz = X.view(3, 3, 3, 1) And, xyz[0, 1, 2] will give you [123] as expected.
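Applied to the shape from the question (a hedged sketch, my own addition; it assumes your density list is ordered so that z varies fastest, then y, then x):
import torch

densities = torch.rand(20 * 20 * 20)   # your 8000 scalar density values
grid = densities.view(1, 20, 20, 20)   # (channel, x, y, z)
print(grid[:, 5, 5, 5])                # density at coordinate (5, 5, 5)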
https://stackoverflow.com/questions/65117507/
Pytorch CNN NotImplementedError
Here is my network class: class network(nn.Module): def __init__(self): super(network, self).__init__() self.conv1 = nn.Conv2d(3, 32, 3, stride=1, padding=1, padding_mode='zeros') self.conv2 = nn.Conv2d(32, 32, 3, stride=1, padding=1, padding_mode='zeros') self.conv3 = nn.Conv2d(32, 64, 3, stride=1, padding=1, padding_mode='zeros') self.conv4 = nn.Conv2d(64, 64, 3, stride=1, padding=1, padding_mode='zeros') self.maxpool1 = nn.MaxPool2d(2) # 14 * 14 self.conv5 = nn.Conv2d(64, 128, 3, stride=1, padding=1, padding_mode='zeros') self.conv6 = nn.Conv2d(128, 128, 3, stride=1, padding=1, padding_mode='zeros') self.conv7 = nn.Conv2d(128, 256, 3, stride=1, padding=1, padding_mode='zeros') self.conv8 = nn.Conv2d(256, 256, 3, stride=1, padding=1, padding_mode='zeros') self.maxpool2 = nn.MaxPool2d(2) # 7 * 7 self.gap = nn.AvgPool2d(7) self.fc1 = nn.Linear(256 * 1 * 1, 256) self.fc2 = nn.Linear(256, len(classes)) def foward(self, x): x = torch.nn.ReLU(self.conv1(x)) x = torch.nn.ReLU(self.conv2(x)) x = torch.nn.ReLU(self.conv3(x)) x = torch.nn.ReLU(self.conv4(x)) x = self.maxpool1(x) x = torch.nn.ReLU(self.conv5(x)) x = torch.nn.ReLU(self.conv6(x)) x = torch.nn.ReLU(self.conv7(x)) x = torch.nn.ReLU(self.conv8(x)) x = self.maxpool2(x) x = self.gap(x) x = torch.nn.ReLU(self.fc1(x)) x = torch.nn.Softmax(self.fc2(x)) print(f"output shape: {x.shape}") return x Main code network = network() print(network) loss = nn.CrossEntropyLoss().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) for epoch in range(nb_epochs + 1): iter = -1 for batch_index, sample in enumerate(dataloader): x_train, y_train = sample # Upload Dataset in GPU device X = x_train.to(device, dtype=torch.float) Y = y_train.to(device, dtype=torch.long) optimizer.zero_grad() output = network(X) cost = loss(output, torch.max(Y, 1)[1]) cost.backward() optimizer.step() iter = iter + 1 if iter % 100 == 99 or iter == dataloader.__len__(): print(f"Epoch {epoch + 1}/{nb_epochs} " f"Iteration {iter + 1}/{dataloader.__len__()} " f"Loss {str(float(cost))[0:7]}") print('Finished Training') PATH = './cifar10_vgg.pth' torch.save(model.state_dict(), PATH) And Error displayed: Traceback (most recent call last): File "D:/Files/works/0+Development/Python/0+DNN/0+torch_dc/tutorial_train_save.py", line 198, in <module> output = network(X) File "C:\Users\bolero\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\bolero\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 175, in _forward_unimplemented raise NotImplementedError NotImplementedError I just implemented a very simple Network with Pytorch, but it doesn't work. I wrote the "init" function and the forward function separately. I watched the pytorch tutorial, and I wrote code similarly in that tutorial. What I am aware of is that when writing a network class, I should inherit nn.Module and create init and forward functions. I know that the init function defines the layer type, and the forward function defines the actual operation part. If anyone knows about this issue, please comment.
Note the typo in def foward(self, x):, it should be def forward(self, x):. Because of the typo, your class never overrides nn.Module.forward, so calling network(X) falls back to the base class's stub implementation, which simply raises NotImplementedError.
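As a side note (my own addition, beyond the original answer): once renamed, the body will still misbehave, because torch.nn.ReLU(...) and torch.nn.Softmax(...) construct modules rather than applying the functions, and the (N, 256, 1, 1) output of the average pool needs flattening before the linear layers. A sketch of the usual pattern:
def forward(self, x):
    x = torch.relu(self.conv1(x))   # apply the function directly
    x = torch.relu(self.conv2(x))
    x = torch.relu(self.conv3(x))
    x = torch.relu(self.conv4(x))
    x = self.maxpool1(x)
    x = torch.relu(self.conv5(x))
    x = torch.relu(self.conv6(x))
    x = torch.relu(self.conv7(x))
    x = torch.relu(self.conv8(x))
    x = self.maxpool2(x)
    x = self.gap(x)
    x = torch.flatten(x, 1)         # (N, 256, 1, 1) -> (N, 256)
    x = torch.relu(self.fc1(x))
    return self.fc2(x)              # CrossEntropyLoss expects raw logits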
https://stackoverflow.com/questions/65119889/
How do I deal with torchvision datasets
I am using a torchvision dataset called MNIST. I run the code in my Python project to download it. Do I need to delete the code I wrote, or can I just continue writing my AI? Would the download code disturb it or re-download the whole dataset? The code I use to download it: train = datasets.MNIST("", train=True, download=True, transform = transforms.Compose([transforms.ToTensor()])) test = datasets.MNIST("", train=False, download=True, transform = transforms.Compose([transforms.ToTensor()])) The guy from the tutorial is using a Jupyter notebook, but I am using PyCharm. What do I need to do?
Just leave that in there; you need it to define your datasets. The download=True flag means that the dataset is only downloaded if it doesn't already exist in the specified folder: https://pytorch.org/docs/stable/torchvision/datasets.html#mnist
https://stackoverflow.com/questions/65121613/
PyTorch: How to multiply via broadcasting of two tensors with different shapes
I have the following two PyTorch tensors A and B. A = torch.tensor(np.array([40, 42, 38]), dtype = torch.float64) tensor([40., 42., 38.], dtype=torch.float64) B = torch.tensor(np.array([[[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5],[1,2,3,4,5]], [[4,5,6,7,8],[4,5,6,7,8],[4,5,6,7,8],[4,5,6,7,8],[4,5,6,7,8]], [[7,8,9,10,11],[7,8,9,10,11],[7,8,9,10,11],[7,8,9,10,11],[7,8,9,10,11]]]), dtype = torch.float64) tensor([[[ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.]], [[ 4., 5., 6., 7., 8.], [ 4., 5., 6., 7., 8.], [ 4., 5., 6., 7., 8.], [ 4., 5., 6., 7., 8.], [ 4., 5., 6., 7., 8.]], [[ 7., 8., 9., 10., 11.], [ 7., 8., 9., 10., 11.], [ 7., 8., 9., 10., 11.], [ 7., 8., 9., 10., 11.], [ 7., 8., 9., 10., 11.]]], dtype=torch.float64) Tensor A is of shape: torch.Size([3]) Tensor B is of shape: torch.Size([3, 5, 5]) How do I multiply tensor A with tensor B (using broadcasting) in such a way for eg. the first value in tensor A (ie. 40.) is multiplied with all the values in the first 'nested' tensor in tensor B, ie. tensor([[[ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.], [ 1., 2., 3., 4., 5.]], and so on for the other 2 values in tensor A and the other two nested tensors in tensor B, respectively. I could do this multiplication (via broadcasting) with numpy arrays if A and B are arrays of both shape (3,) - ie. A*B - but I can't seem to figure out a counterpart of this with PyTorch tensors. Any help would really be appreciated.
When applying broadcasting in PyTorch (as well as in NumPy) you need to start at the last dimension (check out https://pytorch.org/docs/stable/notes/broadcasting.html). If they do not match, you need to reshape your tensor. In your case they can't directly be broadcast: [3] # the trailing dimensions (3 and 5) do not match and neither is 1 [3, 5, 5] Instead you can redefine A = A[:, None, None] before multiplying, such that you get shapes [3, 1, 1] [3, 5, 5] which satisfy the conditions for broadcasting.
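For completeness, a quick sketch of the full operation (my own addition):
A = A[:, None, None]   # shape [3, 1, 1]
C = A * B              # broadcasts to shape [3, 5, 5]
print(C.shape)         # torch.Size([3, 5, 5])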
https://stackoverflow.com/questions/65121614/
PyTorch import broken, unable to pip install
So all of a sudden my code broke. Error: Unable to import torch, No module named torch. So I attempt to install torch; error, ModuleNotFoundError: No module named 'tools.nnwrap'. I delete my venv and re-create it, same thing. I try it outside the venv, same issue. I look up the issue; apparently I should go here. I do, and try several of the install commands, with CUDA, without, all broken. What do I do? I'm on Arch Linux if that's relevant; maybe I broke something.
https://github.com/pytorch/pytorch/issues/47116 It's a recent issue with Python 3.9, as expected: it's sadly not compatible with PyTorch as of right now (04/12/2020), and the error message isn't explicit about it. The issue is currently still open and pending, and all you can do is revert to 3.8 for the time being.
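In the meantime, a hedged sketch of pinning an older interpreter (my own addition; it assumes a python3.8 binary is available on your system):
# create a virtual environment on the older interpreter and install there
python3.8 -m venv ~/.venvs/torch38
source ~/.venvs/torch38/bin/activate
pip install torch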
https://stackoverflow.com/questions/65123557/
PyTorch: How to multiply elements in a list containing tuples of integer and tensor
I have the following list ABC_lst containing tuples of an integer and a tensor. Code: import numpy as np import torch A_int = 40 A_tsr = torch.tensor(np.array([[1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5]])) A_tpl = (A_int, A_tsr) B_int = 42 B_tsr = torch.tensor(np.array([[4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8]])) B_tpl = (B_int, B_tsr) C_int = 38 C_tsr = torch.tensor(np.array([[7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11]])) C_tpl = (C_int, C_tsr) ABC_lst = [A_tpl, B_tpl, C_tpl] ABC_lst Output: [(40, tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])), (42, tensor([[4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8]])), (38, tensor([[ 7, 8, 9, 10, 11], [ 7, 8, 9, 10, 11], [ 7, 8, 9, 10, 11], [ 7, 8, 9, 10, 11], [ 7, 8, 9, 10, 11]]))] How do I multiply the integer with the corresponding tensor, for eg. multiply 40 with tensor([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]) multiply 42 with tensor([[4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8], [4, 5, 6, 7, 8]]) and so on. The returned object should be a tensor, which looks like this: tensor([[[ 40., 80., 120., 160., 200.], [ 40., 80., 120., 160., 200.], [ 40., 80., 120., 160., 200.], [ 40., 80., 120., 160., 200.], [ 40., 80., 120., 160., 200.]], [[168., 210., 252., 294., 336.], [168., 210., 252., 294., 336.], [168., 210., 252., 294., 336.], [168., 210., 252., 294., 336.], [168., 210., 252., 294., 336.]], [[266., 304., 342., 380., 418.], [266., 304., 342., 380., 418.], [266., 304., 342., 380., 418.], [266., 304., 342., 380., 418.], [266., 304., 342., 380., 418.]]]) In the above eg., I have 3 "sets" of integer and tensor. How do I generalize a code for the multiplication above for any arbitrary "sets" of integer and tensor? Would really appreciate it if anyone could help. EDIT: I need to do all the above in GPU, so need to work with tensors.
Starting from two lists: I, the list of integers, and X, the list of tensors: I = [torch.tensor(40), torch.tensor(42), torch.tensor(38)] X = [ torch.tensor([[1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5], [1,2,3,4,5]]), torch.tensor([[4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8], [4,5,6,7,8]]), torch.tensor([[7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11], [7,8,9,10,11]]), ] You can zip both and create a list containing all multiplication results, then stack this list into a single tensor, like so: torch.stack([i*x for i, x in zip(I, X)]) You can, of course, add more elements to your lists.
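If you'd rather keep your original ABC_lst of (int, tensor) tuples instead of two separate lists, a hedged equivalent sketch (my own addition):
result = torch.stack([n * t for n, t in ABC_lst])
print(result.shape)        # torch.Size([3, 5, 5])
# move to GPU if needed: result = result.cuda()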
https://stackoverflow.com/questions/65126106/
Examples or explanations of pytorch dataloaders?
I am fairly new to Pytorch (and have never done advanced coding). I am trying to learn the basics of deep learning using the d2l.ai textbook but am having trouble with understanding the logic behind the code for dataloaders. I read the torch.utils.data docs and am not sure what the DataLoader class is meant for, and when for example I am supposed to use the torch.utils.data.TensorDataset class in combination with it. For example, d2l defines a function: def load_array(data_arrays, batch_size, is_train=True): """Construct a PyTorch data iterator.""" dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle=is_train) I assume this is supposed to return an iterable that iterates over different batches. However, I don't understand what the data.TensorDataset part does (seems like there are a lot of options listed on the docs page). Also, the documents say that there are two types of datasets: iterable and map style. When describing the former type, it says "This type of datasets is particularly suitable for cases where random reads are expensive or even improbable, and where the batch size depends on the fetched data." What does it mean for "a random read to be expensive or improbable" and for the batch_size to depend on the fetched data? Can anyone give an example of this? If there is any source where a CompSci noob like me can learn these basics, I'd really appreciate tips! Thanks very much!
I'll give you an example of how to use dataloaders and will explain the steps: Dataloaders are iterables over the dataset. So when you iterate over one, it will return B randomly collected samples from the dataset (including the data sample and the target/label), where B is the batch size. To create such a dataloader you will first need a class which inherits from the Dataset PyTorch class. There is a standard implementation of this class in PyTorch, which should be TensorDataset. But the standard way is to create your own. Here is an example for image classification: import os import numpy as np import torch from PIL import Image class YourImageDataset(torch.utils.data.Dataset): def __init__(self, image_folder): self.image_folder = image_folder self.images = os.listdir(image_folder) # get sample def __getitem__(self, idx): image_file = self.images[idx] image = Image.open(os.path.join(self.image_folder, image_file)) image = np.array(image) # normalize image image = image / 255 # convert to tensor image = torch.Tensor(image).reshape(3, 512, 512) # get the label, in this case the label was noted in the name of the image file, ie: 1_image_28457.png where 1 is the label and the number at the end is just the id or something target = int(image_file.split("_")[0]) target = torch.tensor(target) # note: torch.tensor, not torch.Tensor, for a scalar label return image, target def __len__(self): return len(self.images) To get an example image you can call the class and pass some random index into the getitem function. It will then return the tensor of the image matrix and the tensor of the label at that index. For example: dataset = YourImageDataset("/path/to/image/folder") data, sample = dataset.__getitem__(0) # get data at index 0 Alright, so now you have created the class which preprocesses and returns ONE sample and its label. Now we have to create the dataloader, which "wraps" around this class and can then return whole batches of samples from your dataset class. Let's create three dataloaders, one which iterates over the train set, one for the test set and one for the validation set: dataset = YourImageDataset("/path/to/image/folder") # let's split the dataset into three parts (train 70%, test 15%, validation 15%) test_size = 0.15 val_size = 0.15 test_amount, val_amount = int(dataset.__len__() * test_size), int(dataset.__len__() * val_size) # this function will automatically randomly split your dataset but you could also implement the split yourself train_set, val_set, test_set = torch.utils.data.random_split(dataset, [ (dataset.__len__() - (test_amount + val_amount)), test_amount, val_amount ]) # B is your batch-size, ie. 128 train_dataloader = torch.utils.data.DataLoader( train_set, batch_size=B, shuffle=True, ) val_dataloader = torch.utils.data.DataLoader( val_set, batch_size=B, shuffle=True, ) test_dataloader = torch.utils.data.DataLoader( test_set, batch_size=B, shuffle=True, ) Now you have created your dataloaders and are ready to train! For example like this: for epoch in range(epochs): for images, targets in train_dataloader: # now 'images' is a batch containing B samples # and 'targets' is a batch containing the B targets (of the images in 'images', with the same index) optimizer.zero_grad() images, targets = images.cuda(), targets.cuda() predictions = model.train()(images) . . . Normally you would create a separate file for the "YourImageDataset" class and then import it in the file in which you want to create the dataloaders. I hope I could make clear what the role of the dataloader and the Dataset class is and how to use them! 
I don't know much about iter-style datasets, but from what I understood: the method I showed you above is the map style. You use that if your dataset is stored in a .csv, .json or whatever kind of file, so you can iterate through all rows or entries of the dataset. Iter-style will take your dataset, or a part of the dataset, and convert it into an iterable. For example, if your dataset is a list, this is what an iterable of the list would look like: dataset = [1,2,3,4] dataset = iter(dataset) print(next(dataset)) print(next(dataset)) print(next(dataset)) print(next(dataset)) # output: # >>> 1 # >>> 2 # >>> 3 # >>> 4 So next will give you the next item of the list. Using this together with a PyTorch DataLoader is probably more efficient and faster. Normally the map-style dataloader is fast enough and common to use, but the documentation suggests that when you are loading data batches from a database (which can be slower), an iter-style dataset would be more efficient. This explanation of iter-style is a bit vague, but I hope it makes you understand what I understood. I would recommend you use the map style first, as I explained in my original answer.
https://stackoverflow.com/questions/65138643/
Pytorch CrossEntropyLoss Tensorflow Equivalent
I am currently translating PyTorch code into TensorFlow. At one point I am aggregating 3 losses in a TensorFlow custom loop, and I get an error that I am passing a two-dimensional array vs a one-dimensional one into TensorFlow's CategoricalCrossentropy, which is very legit and I understand why this happens... but in the PyTorch code I am passing the same shapes and it works perfectly with CrossEntropyLoss. Does anybody know what I have to do to transfer this into TF? The shapes being passed in are (17000,100) vs (17000).
Try using the loss: loss = tf.keras.losses.sparse_categorical_crossentropy
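To spell out the correspondence (my own addition): PyTorch's nn.CrossEntropyLoss takes raw logits of shape (N, C) and integer targets of shape (N,), which in TensorFlow matches sparse categorical cross-entropy with from_logits=True:
import tensorflow as tf

logits = tf.random.normal((17000, 100))                  # (N, C) raw scores
targets = tf.random.uniform((17000,), 0, 100, tf.int64)  # (N,) class indices
loss = tf.keras.losses.sparse_categorical_crossentropy(
    targets, logits, from_logits=True)                   # per-sample losses
loss = tf.reduce_mean(loss)                              # PyTorch's default reduction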
https://stackoverflow.com/questions/65143404/
Feeding Multiple Inputs to LSTM for Time-Series Forecasting using PyTorch
I'm currently working on building an LSTM network to forecast time-series data using PyTorch. Following Roman's blog post, I implemented a simple LSTM for univariate time-series data, please see the class definitions below. However, it's been a few days since I ground to a halt on adding more features to the input data, say an hour of the day, day of the week, week of the year, and sorts. class Model(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(Model, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.lstm = nn.LSTMCell(self.input_size, self.hidden_size) self.linear = nn.Linear(self.hidden_size, self.output_size) def forward(self, input, future=0, y=None): outputs = [] # reset the state of LSTM # the state is kept till the end of the sequence h_t = torch.zeros(input.size(0), self.hidden_size, dtype=torch.float32) c_t = torch.zeros(input.size(0), self.hidden_size, dtype=torch.float32) for i, input_t in enumerate(input.chunk(input.size(1), dim=1)): h_t, c_t = self.lstm(input_t, (h_t, c_t)) output = self.linear(h_t) outputs += [output] for i in range(future): if y is not None and random.random() > 0.5: output = y[:, [i]] # teacher forcing h_t, c_t = self.lstm(output, (h_t, c_t)) output = self.linear(h_t) outputs += [output] outputs = torch.stack(outputs, 1).squeeze(2) return outputs class Optimization: "A helper class to train, test and diagnose the LSTM" def __init__(self, model, loss_fn, optimizer, scheduler): self.model = model self.loss_fn = loss_fn self.optimizer = optimizer self.scheduler = scheduler self.train_losses = [] self.val_losses = [] self.futures = [] @staticmethod def generate_batch_data(x, y, batch_size): for batch, i in enumerate(range(0, len(x) - batch_size, batch_size)): x_batch = x[i : i + batch_size] y_batch = y[i : i + batch_size] yield x_batch, y_batch, batch def train( self, x_train, y_train, x_val=None, y_val=None, batch_size=100, n_epochs=20, dropout=0.2, do_teacher_forcing=None, ): seq_len = x_train.shape[1] for epoch in range(n_epochs): start_time = time.time() self.futures = [] train_loss = 0 for x_batch, y_batch, batch in self.generate_batch_data(x_train, y_train, batch_size): y_pred = self._predict(x_batch, y_batch, seq_len, do_teacher_forcing) self.optimizer.zero_grad() loss = self.loss_fn(y_pred, y_batch) loss.backward() self.optimizer.step() train_loss += loss.item() self.scheduler.step() train_loss /= batch self.train_losses.append(train_loss) self._validation(x_val, y_val, batch_size) elapsed = time.time() - start_time print( "Epoch %d Train loss: %.2f. Validation loss: %.2f. Avg future: %.2f. Elapsed time: %.2fs." 
% (epoch + 1, train_loss, self.val_losses[-1], np.average(self.futures), elapsed) ) def _predict(self, x_batch, y_batch, seq_len, do_teacher_forcing): if do_teacher_forcing: future = random.randint(1, int(seq_len) / 2) limit = x_batch.size(1) - future y_pred = self.model(x_batch[:, :limit], future=future, y=y_batch[:, limit:]) else: future = 0 y_pred = self.model(x_batch) self.futures.append(future) return y_pred def _validation(self, x_val, y_val, batch_size): if x_val is None or y_val is None: return with torch.no_grad(): val_loss = 0 batch = 1 for x_batch, y_batch, batch in self.generate_batch_data(x_val, y_val, batch_size): y_pred = self.model(x_batch) loss = self.loss_fn(y_pred, y_batch) val_loss += loss.item() val_loss /= batch self.val_losses.append(val_loss) def evaluate(self, x_test, y_test, batch_size, future=1): with torch.no_grad(): test_loss = 0 actual, predicted = [], [] for x_batch, y_batch, batch in self.generate_batch_data(x_test, y_test, batch_size): y_pred = self.model(x_batch, future=future) y_pred = ( y_pred[:, -len(y_batch) :] if y_pred.shape[1] > y_batch.shape[1] else y_pred ) loss = self.loss_fn(y_pred, y_batch) test_loss += loss.item() actual += torch.squeeze(y_batch[:, -1]).data.cpu().numpy().tolist() predicted += torch.squeeze(y_pred[:, -1]).data.cpu().numpy().tolist() test_loss /= batch return actual, predicted, test_loss def plot_losses(self): plt.plot(self.train_losses, label="Training loss") plt.plot(self.val_losses, label="Validation loss") plt.legend() plt.title("Losses") You can find some of the helper functions that help me split and format data before feeding it to my LSTM network. def to_dataframe(actual, predicted): return pd.DataFrame({"value": actual, "prediction": predicted}) def inverse_transform(scaler, df, columns): for col in columns: df[col] = scaler.inverse_transform(df[col]) return df def split_sequences(sequences, n_steps): X, y = list(), list() for i in range(len(sequences)): # find the end of this pattern end_ix = i + n_steps # check if we are beyond the dataset if end_ix > len(sequences): break # gather input and output parts of the pattern seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1] X.append(seq_x) y.append(seq_y) return array(X), array(y) def train_val_test_split_new(df, test_ratio=0.2, seq_len = 100): y = df['value'] X = df.drop(columns = ['value']) tarin_ratio = 1 - test_ratio val_ratio = 1 - ((train_ratio - test_ratio) / train_ratio) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_ratio) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=val_ratio) return X_train, y_train, X_val, y_val, X_test, y_test I use the following data frames to train my model. # df_train value weekday monthday hour timestamp 2014-07-01 00:00:00 10844 1 1 0 2014-07-01 00:30:00 8127 1 1 0 2014-07-01 01:00:00 6210 1 1 1 2014-07-01 01:30:00 4656 1 1 1 2014-07-01 02:00:00 3820 1 1 2 ... ... ... ... ... 2015-01-31 21:30:00 24670 5 31 21 2015-01-31 22:00:00 25721 5 31 22 2015-01-31 22:30:00 27309 5 31 22 2015-01-31 23:00:00 26591 5 31 23 2015-01-31 23:30:00 26288 5 31 23 10320 rows × 4 columns # x_train weekday monthday hour timestamp 2014-08-26 16:30:00 1 26 16 2014-08-18 16:30:00 0 18 16 2014-10-22 20:00:00 2 22 20 2014-12-10 08:00:00 2 10 8 2014-07-27 22:00:00 6 27 22 ... ... ... ... 
2014-08-24 05:30:00 6 24 5 2014-11-24 12:00:00 0 24 12 2014-12-18 06:00:00 3 18 6 2014-07-27 17:00:00 6 27 17 2014-12-05 21:00:00 4 5 21 6192 rows × 3 columns # y_train timestamp 2014-08-26 16:30:00 14083 2014-08-18 16:30:00 14465 2014-10-22 20:00:00 25195 2014-12-10 08:00:00 21348 2014-07-27 22:00:00 16356 ... 2014-08-24 05:30:00 2948 2014-11-24 12:00:00 16292 2014-12-18 06:00:00 7029 2014-07-27 17:00:00 18883 2014-12-05 21:00:00 26284 Name: value, Length: 6192, dtype: int64 After transforming and splitting time-series data into smaller batches, the training data set for X and y becomes as follows: X_data shape is (6093, 100, 3) y_data shape is (6093,) tensor([[[-1.0097, 1.1510, 0.6508], [-1.5126, 0.2492, 0.6508], [-0.5069, 0.7001, 1.2238], ..., [ 1.5044, -1.4417, -1.6413], [ 1.0016, -0.0890, 0.7941], [ 1.5044, -0.9908, -0.2087]], [[-1.5126, 0.2492, 0.6508], [-0.5069, 0.7001, 1.2238], [-0.5069, -0.6526, -0.4952], ..., [ 1.0016, -0.0890, 0.7941], [ 1.5044, -0.9908, -0.2087], [ 0.4988, 0.5874, 0.5076]], [[-0.5069, 0.7001, 1.2238], [-0.5069, -0.6526, -0.4952], [ 1.5044, 1.2637, 1.5104], ..., [ 1.5044, -0.9908, -0.2087], [ 0.4988, 0.5874, 0.5076], [ 0.4988, 0.5874, -0.6385]], ..., [[ 1.0016, 0.9255, -1.2115], [-1.0097, -0.9908, 1.0806], [-0.0041, 0.8128, 0.3643], ..., [ 1.5044, 0.9255, -0.9250], [-1.5126, 0.9255, 0.0778], [-0.0041, 0.2492, -0.7818]], [[-1.0097, -0.9908, 1.0806], [-0.0041, 0.8128, 0.3643], [-0.5069, 1.3765, -0.0655], ..., [-1.5126, 0.9255, 0.0778], [-0.0041, 0.2492, -0.7818], [ 1.5044, 1.2637, 0.7941]], [[-0.0041, 0.8128, 0.3643], [-0.5069, 1.3765, -0.0655], [-0.0041, -1.6672, -0.4952], ..., [-0.0041, 0.2492, -0.7818], [ 1.5044, 1.2637, 0.7941], [ 0.4988, -1.2163, 1.3671]]]) tensor([ 0.4424, 0.1169, 0.0148, ..., -1.1653, 0.5394, 1.6037]) Finally, just to check if the dimensions of all these training, validation, and test datasets are correct, I print out their shapes. train shape is: torch.Size([6093, 100, 3]) train label shape is: torch.Size([6093]) val shape is: torch.Size([1965, 100, 3]) val label shape is: torch.Size([1965]) test shape is: torch.Size([1965, 100, 3]) test label shape is: torch.Size([1965]) When I try to build the model as follows, I end up getting a RuntimeError pointing at inconsistent input sizes. 
model_params = {'train_ratio': 0.8, 'validation_ratio': 0.2, 'sequence_length': 100, 'teacher_forcing': False, 'dropout_rate': 0.2, 'batch_size': 100, 'num_of_epochs': 5, 'hidden_size': 24, 'n_features': 3, 'learning_rate': 1e-3 } train_ratio = model_params['train_ratio'] val_ratio = model_params['validation_ratio'] seq_len = model_params['sequence_length'] teacher_forcing = model_params['teacher_forcing'] dropout_rate = model_params['dropout_rate'] batch_size = model_params['batch_size'] n_epochs = model_params['num_of_epochs'] hidden_size = model_params['hidden_size'] n_features = model_params['n_features'] lr = model_params['learning_rate'] model = Model(input_size=n_features, hidden_size=hidden_size, output_size=1) loss_fn = nn.MSELoss() optimizer = optim.Adam(model.parameters(), lr=lr) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) optimization = Optimization(model, loss_fn, optimizer, scheduler) start_time = datetime.now() optimization.train(x_train, y_train, x_val, y_val, batch_size=batch_size, n_epochs=n_epochs, dropout=dropout_rate, do_teacher_forcing=teacher_forcing) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-192-6fc406c0113d> in <module> 6 7 start_time = datetime.now() ----> 8 optimization.train(x_train, y_train, x_val, y_val, 9 batch_size=batch_size, 10 n_epochs=n_epochs, <ipython-input-189-c18d20430910> in train(self, x_train, y_train, x_val, y_val, batch_size, n_epochs, dropout, do_teacher_forcing) 68 train_loss = 0 69 for x_batch, y_batch, batch in self.generate_batch_data(x_train, y_train, batch_size): ---> 70 y_pred = self._predict(x_batch, y_batch, seq_len, do_teacher_forcing) 71 self.optimizer.zero_grad() 72 loss = self.loss_fn(y_pred, y_batch) <ipython-input-189-c18d20430910> in _predict(self, x_batch, y_batch, seq_len, do_teacher_forcing) 93 else: 94 future = 0 ---> 95 y_pred = self.model(x_batch) 96 self.futures.append(future) 97 return y_pred ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), <ipython-input-189-c18d20430910> in forward(self, input, future, y) 17 18 for i, input_t in enumerate(input.chunk(input.size(1), dim=1)): ---> 19 h_t, c_t = self.lstm(input_t, (h_t, c_t)) 20 output = self.linear(h_t) 21 outputs += [output] ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~\Anaconda3\lib\site-packages\torch\nn\modules\rnn.py in forward(self, input, hx) 963 964 def forward(self, input: Tensor, hx: Optional[Tuple[Tensor, Tensor]] = None) -> Tuple[Tensor, Tensor]: --> 965 self.check_forward_input(input) 966 if hx is None: 967 zeros = torch.zeros(input.size(0), self.hidden_size, dtype=input.dtype, device=input.device) ~\Anaconda3\lib\site-packages\torch\nn\modules\rnn.py in check_forward_input(self, input) 789 def check_forward_input(self, input: Tensor) -> None: 790 if input.size(1) != self.input_size: --> 791 raise RuntimeError( 792 "input has inconsistent input_size: got {}, expected {}".format( 793 input.size(1), self.input_size)) RuntimeError: input has inconsistent input_size: got 1, expected 3 I suspect my 
current LSTM model class does not support data with multiple features, and I've been trying out different approaches lately with no luck so far. Feel free to share your thoughts or point me in the right direction that could help me solve this problem. As suggested by @stackoverflowuser2010, I printed out the shapes of the tensors input_t, h_t and c_t that is fed into the forward step before the error is thrown. input_t torch.Size([100, 1, 3]) h_t torch.Size([100, 24]) c_t torch.Size([100, 24])
After muddling through for a couple of weeks, I solved the issue. This has been a fruitful journey for me, so I'd like to share what I have discovered. If you'd like to have a look at the complete walk-through with code, please check out my Medium post on the matter. Just as in Pandas, I found that things tend to work faster and smoother when I stick to the PyTorch way. Both libraries rely on NumPy, and I'm sure one can do pretty much all the table and matrix operations explicitly with NumPy arrays and functions. However, doing so does eliminate all the nice abstractions and performance improvements these libraries provide and turn each step into a CS exercise. It's fun until it isn't. Rather than shaping all the training and validation sets manually to pass them to the model, PyTorch's TensorDataset and DataLoaders classes have immensely helped me. Scaling the feature and target sets for training and validation, we then have NumPy arrays. We can transform these arrays into Tensors and use these Tensors to create our TensorDataset, or a custom Dataset depending on your requirements. Finally, DataLoaders allow us to iterate over such datasets with much less hassle than otherwise as they already provide built-in batching, shuffling, and dropping the last batch options. train_features = torch.Tensor(X_train_arr) train_targets = torch.Tensor(y_train_arr) val_features = torch.Tensor(X_val_arr) val_targets = torch.Tensor(y_val_arr) train = TensorDataset(train_features, train_targets) train_loader = DataLoader(train, batch_size=64, shuffle=False, drop_last=True) val = TensorDataset(val_features, val_targets) val_loader = DataLoader(val, batch_size=64, shuffle=False, drop_last=True) After transforming our data into iterable datasets, they can later be used to do mini-batch training. Instead of explicitly defining batches or wrestling with matrix operations, we can easily iterate over them via DataLoaders as follows. model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim) criterion = nn.MSELoss(reduction='mean') optimizer = optim.Adam(model.parameters(), lr=1e-2) train_losses = [] val_losses = [] train_step = make_train_step(model, criterion, optimizer) device = 'cuda' if torch.cuda.is_available() else 'cpu' for epoch in range(n_epochs): batch_losses = [] for x_batch, y_batch in train_loader: x_batch = x_batch.view([batch_size, -1, n_features]).to(device) y_batch = y_batch.to(device) loss = train_step(x_batch, y_batch) batch_losses.append(loss) training_loss = np.mean(batch_losses) train_losses.append(training_loss) with torch.no_grad(): batch_val_losses = [] for x_val, y_val in val_loader: x_val = x_val.view([batch_size, -1, n_features]).to(device) y_val = y_val.to(device) model.eval() yhat = model(x_val) val_loss = criterion(y_val, yhat).item() batch_val_losses.append(val_loss) validation_loss = np.mean(batch_val_losses) val_losses.append(validation_loss) print(f"[{epoch+1}] Training loss: {training_loss:.4f}\t Validation loss: {validation_loss:.4f}") Another cool feature that PyTorch provides is the view() function, which allows faster and memory-efficient reshaping of tensors. Since I earlier defined my LSTM model with batch_first = True, the batch tensor for the feature set must have the shape of (batch size, time steps, number of features). The line in the code above x_batch = x_batch.view([batch_size, -1, n_features]).to(device) just does that. I hope this answer helps those dealing with similar problems or at least gives an idea of which direction to take. 
I changed a lot in the code shared in the original post, but I won't put it all here for the sake of simplicity. Feel free to check out the rest of it in my other SO post here.
https://stackoverflow.com/questions/65144346/
Efficiently filling torch.Tensor at equal index positions
I have a 6 dimensional all-zero pytorch tensor lrel_w that I want to fill with 1s at positions where the indices of the first three dimensions and the indices of the last three dimensions match. I'm currently solving this trivially using 3 nested for loops: lrel_w = torch.zeros( input_size[0], input_size[1], input_size[2], input_size[0], input_size[1], input_size[2] ) for c in range(input_size[0]): for x in range(input_size[1]): for y in range(input_size[2]): lrel_w[c,x,y,c,x,y] = 1 I'm sure there must be a more efficient way of doing this, however I have not been able to figure it out.
You can try this one: import torch c, m, n = input_size[0], input_size[1], input_size[2] t = torch.zeros(c, m, n, c, m, n) i, j, k = torch.meshgrid(torch.arange(c), torch.arange(m), torch.arange(n)) i = i.flatten() j = j.flatten() k = k.flatten() t[i, j, k, i, j, k] = 1 Here is how meshgrid works, in case you need a reference.
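An equivalent one-liner (my own addition): the condition "first three indices equal the last three" is exactly an identity matrix over the flattened grid, so you can also write:
n_cells = c * m * n
t = torch.eye(n_cells).view(c, m, n, c, m, n)  # 1 where (c,x,y) == (c',x',y')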
https://stackoverflow.com/questions/65145069/
only one element tensors can be converted to python scalars
def Log(A): ''' theta = arccos((tr(A)-1)/2) K=1/(2sin(theta))(A-A^T) log(A)=theta K ''' theta=torch.acos(torch.tensor((torch.trace(A)-1)/2)) K=(1/(2*torch.sin(theta)))*(torch.add(A,-torch.transpose(A,0,1))) return theta*K def tensor_Log(A): blah=[[Log(A[i,j]) for j in range(A.shape[1])] for i in range(A.shape[0])] new=torch.tensor(blah) return new ValueError: only one element tensors can be converted to Python scalars During training, when getting the outputs of my network, the above function produces the error shown. It is called inside a custom layer, and I do not know what the error is referencing. Any thoughts?
The issue is in your list comprehension: blah is a list of lists of tensors, which torch.tensor cannot convert. I would create a flat list of tensors by looping over A.shape[0] and A.shape[1], and then stack everything into a single tensor: R = torch.stack([Log(A[i,j]) for i in range(A.shape[0]) for j in range(A.shape[1])]) You can then recover the desired format with reshape or view: R.reshape(A.shape)
https://stackoverflow.com/questions/65149455/
Calculate Batch Pairwise Sinkhorn Distance in PyTorch
I have two tensors, both of the same shape. I want to calculate the pairwise Sinkhorn distance using GeomLoss. What I have tried: import torch import geomloss # pip install git+https://github.com/jeanfeydy/geomloss a = torch.rand((8,4)) b = torch.rand((8,4)) geomloss.SamplesLoss('sinkhorn')(a,b) # ^ input shape [batch, feature_dim] # will return a scalar value geomloss.SamplesLoss('sinkhorn')(a.unsqueeze(1),b.unsqueeze(1)) # ^ input shape [batch, n_points, feature_dim] # will return a tensor of size [batch] of distances between a[i] and b[i] for each i However, I would like to compute a pairwise distance where the resulting tensor should be of size [batch, batch]. To achieve this, I tried the following to use broadcasting: geomloss.SamplesLoss('sinkhorn')(a.unsqueeze(0), b.unsqueeze(1)) But I got this error message: ValueError: Samples x and y should have the same batchsize.
Since the documentation doesn't give examples of how to use the distance's forward function, here's a way to do it, which will require you to call the distance function batch times. We will construct the distance matrix line by line. Line i corresponds to the distances a[i]<->b[0], a[i]<->b[1], through to a[i]<->b[batch-1]. To do so we need to construct, for each line i, an (8x4) repeated version of tensor a[i]. This will do: a_i = torch.stack(8*[a[i]], dim=0) Then we calculate the distance between a[i] and each batch element in b: dist(a_i.unsqueeze(1), b.unsqueeze(1)) Having a total of batch lines, we can construct our final tensor stack. Here's the complete code: batch = a.shape[0] dist = geomloss.SamplesLoss('sinkhorn') distances = [dist(torch.stack(batch*[a[i]]).unsqueeze(1), b.unsqueeze(1)) for i in range(batch)] D = torch.stack(distances)
https://stackoverflow.com/questions/65150672/
Importing JPG files from a Folder for PyTorch transformation (Another error)?
I just recently posted a question about a file path error which I thought I had resolved... the folder that I have contains .jpg files. For some reason, I'm getting a new error: RuntimeError: Found 0 files in subfolders of: C:\Users\Lyn\Desktop\UTKFaceDataSet\Train_Set Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif,.tiff,.webp Here is my modified code: data_dir = "C:\\Users\\Lyn\\Desktop\\UTKFaceDataSet" train_dir = data_dir + '\\Train_Set' test_dir = data_dir + '\\Test_Set' training_transforms = transforms.Compose([transforms.Resize(100), transforms.ToTensor()]) testing_transforms = transforms.Compose([transforms.Resize(256), transforms.ToTensor()]) #Load the datasets with ImageFolder training_dataset = datasets.ImageFolder(train_dir, transform=training_transforms) testing_dataset = datasets.ImageFolder(test_dir, transform=testing_transforms)
You need your directory structure in the following format to use datasets.ImageFolder(): dataset train_dataset dogs a.png, b.png cats c.png, d.png valid_dataset dogs e.png, f.png cats g.png, h.png train_dir = 'dataset/train_dataset/' training_dataset = datasets.ImageFolder(train_dir, transform=training_transforms) I used two categories (dogs and cats), but it could also be just one.
https://stackoverflow.com/questions/65151126/
Computing intermediate gradients using backward method in Pytorch
I am having trouble understanding the backward method in PyTorch: x1 = tensor(2.).requires_grad_() x2 = tensor(3.).requires_grad_() # or x2 = tensor(3.) x3 = x1 + x2 l = (x3**2).sum() l.backward() print(x1) print(x3) print(x1.grad) print(x3.grad) The results are: tensor(2., requires_grad=True) tensor(5., grad_fn=<AddBackward0>) tensor(10.) None Why is x3.grad still None? Shouldn't it be tensor(10.)? When I run the following lines of code, x3.grad is evaluated to tensor(10.): x3 = tensor(5.).requires_grad_() l = (x3**2).mean() l.backward() print(x3.grad)
If you print x3.grad on your first example you might notice torch outputs a warning: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See here for more informations. To save memory, the gradients of non-leaf tensors (tensors not created directly by the user) are not buffered. If you wish to see those gradients, though, you can retain the gradient on x3 by calling .retain_grad() before calling .backward(): x3.retain_grad() l.backward() print(x3.grad) will indeed output tensor(10.)
https://stackoverflow.com/questions/65151627/
Error " 'Softmax' object has no attribute 'log_softmax' " while training Neural Network with PyTorch
I am working on a classifier for MNIST dataset. When I am running the code below, I am getting the error " 'Softmax' object has no attribute 'log_softmax' " at line loss = loss_function(output, y). I have not managed to find a solution to the problem. I will appreciate if you can advise on how the issue can be resolved. Thank you. import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import DataLoader, Dataset, TensorDataset import torchvision import torchvision.transforms as transforms import numpy as np device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") batch_size = 512 # Image transformations of Torchvision will convert to the images to tensor and normalise with mean and standard deviation transformer = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=(0.1307,), std=(0.3081,))]) data_train = DataLoader(torchvision.datasets.MNIST('Data/data/mnist', download=True, train=True, transform=transformer), batch_size=batch_size, drop_last=False, shuffle=True) data_test = DataLoader(torchvision.datasets.MNIST('Data/data/mnist', download=True, train=False, transform=transformer), batch_size=batch_size, drop_last=False, shuffle=True) class neural_nw(nn.Module): def __init__(self): super(neural_nw, self).__init__() self.fc1 = nn.Linear(784, 128, True) self.fc2 = nn.Linear(128, 128, True) self.fc3 = nn.Linear(128, 10, True) def forward(self, x): output = torch.sigmoid(self.fc1(x)) output = torch.sigmoid(self.fc2(output)) output = nn.Softmax(self.fc3(output)) return output MLP = neural_nw() loss_function = nn.CrossEntropyLoss() optimiser = optim.Adam(MLP.parameters(), lr = 0.01) Epochs = 50 for epoch in range(Epochs): for X, y in data_train: X = X.view(X.shape[0], -1) optimiser.zero_grad() output = MLP.forward(X) loss = loss_function(output, y) loss.backward() optimiser.step()
nn.Softmax defines a module; nn.Modules are defined as Python classes and have attributes, e.g., an nn.LSTM module will have some internal attributes like self.hidden_size. On the other hand, F.softmax defines the operation and needs all arguments to be passed at call time (for parameterized layers, that includes the weights and bias). Implicitly, the modules will usually call their functional counterpart somewhere in the forward method. In your forward, nn.Softmax(self.fc3(output)) constructs a Softmax module (with the tensor mistakenly passed as its dim argument) and returns the module itself rather than a tensor, which is why nn.CrossEntropyLoss later fails when it tries to call log_softmax on it. This explains why F.softmax instead of nn.Softmax resolves your issue.
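A minimal sketch of a corrected forward method. Note that nn.CrossEntropyLoss applies log_softmax internally, so returning the raw logits is enough here; the explicit softmax is only needed if you want probabilities:

def forward(self, x):
    output = torch.sigmoid(self.fc1(x))
    output = torch.sigmoid(self.fc2(output))
    # return raw logits for nn.CrossEntropyLoss;
    # use F.softmax(self.fc3(output), dim=1) if probabilities are required
    return self.fc3(output)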
https://stackoverflow.com/questions/65152114/
clang in neovim giving pp_file_not_found error for c++/pytorch basic example
I'm following this very basic c++/pytorch example: pytorch_installing And I can walk through this example with no errors. However, when creating the example-app.cpp file (or editing it at any point in time) with neovim, clang throws an error 'torch/torch.h' file not found [clang: pp_file_not_found]. My CMakeLists.txt file looks like this: cmake_minimum_required(VERSION 3.0 FATAL_ERROR) project(example-app) find_package(Torch REQUIRED) include_directories(SYSTEM /home/username/Downloads/libtorch/) set(CMAKE_PREFIX_PATH "/home/username/Downloads/libtorch/share/cmake/Torch") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") add_executable(example-app example-app.cpp) target_link_libraries(example-app "${TORCH_LIBRARIES}") set_property(TARGET example-app PROPERTY CXX_STANDARD 14) This isn't a big issue for a project this small, but I will be working on a larger project with libtorch and would like clang to recognize <torch/torch.h>. There were a couple of similar Stack Overflow questions, but no answers. Update: I believe this is happening because clang is not seeing torch/torch.h, since it is not part of the include paths. I printed the include paths for clang and libtorch was not listed. So I tried adding the include paths of libtorch to /usr/include/, but then it had issues seeing other header files referenced by the header files I added. So I added the libtorch/include/torch/csrc/api/include/torch/ directory to /usr/include so that it can read #include <torch/torch.h>. But then other header files are referenced in those header files with specified directory paths outside of the libtorch/include/torch/csrc/api/include/torch/ path. For example, libtorch/include/torch/csrc/ contains WindowsTorchApiMacro.h. There are other paths within the libtorch directory that contain more header files. I tried to add all header files to /usr/include but still received an error. I'm sure the package as a whole needs to be used. The original cmake file looks like this: cmake_minimum_required(VERSION 3.0 FATAL_ERROR) project(example-app) find_package(Torch REQUIRED) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") add_executable(example-app example-app.cpp) target_link_libraries(example-app "${TORCH_LIBRARIES}") set_property(TARGET example-app PROPERTY CXX_STANDARD 14) So cmake is able to find and reference all of those header files properly, and I know you can compile it at the command line with clang, but how can clang within neovim (during editing) see this package without error? I thought about using a package manager like vcpkg, but they do not have libtorch available, unfortunately. I can include output from a verbose and successful cmake run if needed.
Since the compilation succeeds with cmake-make, it is possible to ask CMake to generate the compilation database (typically a compile_commands.json file) with cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=1. Once the file is available, it can be read by the autocompletion compiler (clang in this case), which prevents the autocompleter from being lost in the filesystem, because all flags and headers are specified for all compilation units. The compilation database just needs to be put where the autocompletion plugin expects it to be.
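For example, assuming an out-of-source build directory as in the libtorch tutorial (the paths here are illustrative), the commands could look like this:

cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DCMAKE_PREFIX_PATH=/home/username/Downloads/libtorch ..
cd ..
# expose the database at the project root, where most plugins look for it
ln -s build/compile_commands.json .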
https://stackoverflow.com/questions/65153317/
Implement SeparableConv2D in Pytorch
Main objective PyTorch equivalent for SeparableConv2D with padding = 'same': from tensorflow.keras.layers import SeparableConv2D x = SeparableConv2D(64, (1, 16), use_bias = False, padding = 'same')(x) What is the PyTorch equivalent for SeparableConv2D? This source says: If groups = nInputPlane, kernel=(K, 1), (and before is a Conv2d layer with groups=1 and kernel=(1, K)), then it is separable. While this source says: Its core idea is to break down a complete convolution into a two-step calculation: depthwise convolution and pointwise convolution. This is my attempt: class SeparableConv2d(nn.Module): def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False): super(SeparableConv2d, self).__init__() self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias) self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias) def forward(self, x): out = self.depthwise(x) out = self.pointwise(out) return out Is this correct? Is this equivalent to tensorflow.keras.layers.SeparableConv2D? What about padding = 'same'? How can I ensure that my input and output size stay the same while doing this? My attempt: x = F.pad(x, (8, 7, 0, 0), ) Because the kernel size is (1,16), I added left and right padding of 8 and 7 respectively. Is this the right (and best) way to achieve padding = 'same'? How can I place this inside my SeparableConv2d class, and calculate it on the fly given the input data dimension size? All together: class SeparableConv2d(nn.Module): def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False): super(SeparableConv2d, self).__init__() self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias) self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias) def forward(self, x): out = self.depthwise(x) out = self.pointwise(out) return out class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.separable_conv = SeparableConv2d( in_channels=32, out_channels=64, depth=1, kernel_size=(1,16) ) def forward(self, x): x = F.pad(x, (8, 7, 0, 0), ) x = self.separable_conv(x) return x Any problems with this code?
The linked definitions generally agree; the best one is in the article. "Depthwise" (not a very intuitive name, since depth is not involved) is a series of regular 2d convolutions, just applied to layers of the data separately. "Pointwise" is the same as Conv2d with a 1x1 kernel. I suggest a few corrections to your SeparableConv2d class: there is no need for the depth parameter - it is the same as out_channels. I set padding to 1 to ensure the same output size with kernel=(3,3); if the kernel size is different, adjust the padding accordingly, using the same principles as with regular Conv2d. Your example class Net() is no longer needed - the padding is done in SeparableConv2d. This is the updated code, which should be similar to the tf.keras.layers.SeparableConv2D implementation: class SeparableConv2d(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, bias=False): super(SeparableConv2d, self).__init__() self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=kernel_size, groups=in_channels, bias=bias, padding=1) self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=bias) def forward(self, x): out = self.depthwise(x) out = self.pointwise(out) return out
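A quick shape check of the updated class, on hypothetical dimensions, to confirm the 'same'-padding behaviour for a (3,3) kernel:

import torch
import torch.nn as nn  # SeparableConv2d defined as above

m = SeparableConv2d(in_channels=32, out_channels=64, kernel_size=(3, 3))
x = torch.randn(1, 32, 28, 28)
print(m(x).shape)  # torch.Size([1, 64, 28, 28]) - spatial size preserved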
https://stackoverflow.com/questions/65154182/
multiplying each element of a matrix by a vector (or array)
Say I have an input array of size (64,100): t = torch.randn((64,100)) Now say I want to multiply each of the 6400 elements of t with 6400 separate vectors, each of size 256, to produce a tensor of size [64, 100, 256]. This is what I am doing currently - import copy def clones(module, N): "Produce N identical layers." return nn.ModuleList([copy.deepcopy(module) for _ in range(N)]) linears = clones(nn.Linear(1,256, bias=False), 6400) idx = 0 t_final = [] for i in range(64): t_bs = [] for j in range(100): t1 = t[i, j] * linears[idx].weight.view(-1) idx += 1 t_bs.append(t1) t_bs = torch.cat(t_bs).view(1, 100, 256) t_final.append(t_bs) t_final = torch.cat(t_final) print(t_final.shape) Output: torch.Size([64, 100, 256]) Is there a faster and cleaner way of doing the same thing? I tried torch.matmul and torch.dot but couldn't do any better.
It seems broadcasting is what you are looking for. t = torch.randn((64,100)).view(6400, 1) weights = torch.randn((6400, 256)) output = (t * weights).view(64, 100, 256)
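A small check, under the assumption that weights[k] plays the role of linears[k].weight.view(-1) from the question, confirming the broadcasting version matches the elementwise definition:

import torch

t = torch.randn(64, 100)
weights = torch.randn(6400, 256)
output = (t.reshape(6400, 1) * weights).view(64, 100, 256)
# entry (i, j) should be t[i, j] times the (i*100 + j)-th vector
i, j = 3, 7
assert torch.allclose(output[i, j], t[i, j] * weights[i * 100 + j])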
https://stackoverflow.com/questions/65155494/
why autoencoder tutorial of pytorch changes the view of embedding layer output?
As shown here in the PyTorch tutorials, the code for an autoencoder model is like this: class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size): super(EncoderRNN, self).__init__() self.hidden_size = hidden_size self.embedding = nn.Embedding(input_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size) def forward(self, input, hidden): embedded = self.embedding(input).view(1, 1, -1) output = embedded output, hidden = self.gru(output, hidden) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) My question is: what is the reason for the use of the view function on the output of the embedding layer?
The view function added extra dimension to given input shape to match expected input shape. In the function initHidden the hidden shape is initialized to (1, 1, 256). def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) Based on documentation, GRU input shape must have 3 dimensions, input of shape (seq_len, batch, input_size). https://pytorch.org/docs/stable/generated/torch.nn.GRU.html The shape of self.embedding(input) is (1, 256) and a sample output is, tensor([[ 0.1421, 0.4135, -1.0619, 0.0149, 0.0673, -0.3770, 0.4231, 2.2803, -1.6939, -0.0071, 1.1131, -1.0019, 0.6593, 0.1366, 1.1033, -0.8804, 1.3676, 0.4115, -0.5671, 0.3314, -0.2599, -0.3082, 1.3644, 0.5788, -0.1929, -2.0505, 0.4518, 0.8757, -0.2360, -0.4099, -0.5697, -1.5973, -0.6638, -1.1523, 1.4425, 1.3651, 1.9371, 0.5698, -0.3541, -1.3883, -0.0195, -1.0757, -1.4324, -1.6226, -2.4267, 0.3874, -0.7529, 1.4938, -2.5773, -1.1962, 0.3759, -0.6143, -1.0444, -0.6443, -0.8130, -1.7283, 1.4167, 1.3945, -1.2695, 0.7289, 0.7777, -0.0094, -1.8108, 0.2126, -0.2018, -0.4055, -0.7779, -0.8523, 0.0162, 0.2463, 0.5588, -0.7250, -0.0128, 0.6272, -0.7729, 0.4259, 0.7596, -1.9500, 0.5853, 0.3764, -0.1112, 0.7274, -2.8535, -0.0445, 0.4225, 1.2179, 0.2219, -0.7064, -0.9654, 1.0501, 1.7142, 0.5312, -0.8180, -1.5697, 1.3062, -0.9321, -0.1652, -1.5298, -0.3575, -1.2046, -0.6571, -0.7689, -0.7032, 1.0727, -1.3259, 0.1200, 1.9357, -0.2519, -0.3717, 0.8054, 0.1180, -0.6921, 1.0245, -1.5500, -0.5280, -0.7462, 0.7924, 2.2701, -1.5094, -0.1973, -1.5919, 0.4869, 0.6739, -0.5242, 0.2559, -0.0149, -0.5332, -1.8313, 0.3598, 0.0804, -0.0780, -0.2930, -0.2844, -0.4752, -0.9919, 0.1809, 0.7622, -2.5069, -0.7724, -0.9441, 1.6101, 0.6461, -0.8932, 0.0600, 0.6911, 0.5191, -0.1719, -0.5829, -0.9168, 1.5282, 1.4399, 0.3264, -0.8894, 0.2880, -0.0697, 0.8977, -0.5004, 0.3844, 0.0925, 0.5592, -0.1664, 0.8575, -1.0348, 0.7326, -0.2124, 0.7533, 0.6270, -0.9559, -1.4159, 0.6788, 0.6163, -0.5951, -0.1403, -1.6088, -0.7731, 0.3876, 1.0429, -2.0960, 0.1726, 1.7446, -0.3963, 0.0785, -0.4701, 1.0074, 0.3319, -2.2675, -1.6163, -0.4003, -0.5468, 0.0452, -2.5586, 0.4747, -0.0271, -1.2161, 1.2121, 1.8738, -1.2207, -0.9218, -0.1430, 0.2512, -0.5236, -0.2544, -0.5868, -0.7086, -1.3328, -0.0243, 0.4759, 1.4125, 0.4947, 0.5054, 1.6253, 0.4198, -0.9150, 0.6374, 0.4581, 1.1527, 1.4440, -0.0590, -0.4601, 0.2490, -0.5739, 0.6798, -0.2156, -1.1386, -0.5011, -0.7411, 0.2825, -0.2595, 0.8070, 0.5270, 0.2595, -0.1089, 0.4221, -0.7851, 0.7112, -0.3038, 0.6169, -0.1513, -0.5872, 0.3974, 0.2431, 0.4934, -0.9406, -0.9372, 1.4525, 0.1376, 0.2558, 0.0661, 0.3509, 2.1667, 2.8428, 0.9429, -0.6143, -1.0969, 0.0955, 0.0914]], device='cuda:0', grad_fn=<EmbeddingBackward>) The shape of self.embedding(input).view(1, 1, -1) is (1, 1, 256) and a sample output is, tensor([[[ 0.1421, 0.4135, -1.0619, 0.0149, 0.0673, -0.3770, 0.4231, 2.2803, -1.6939, -0.0071, 1.1131, -1.0019, 0.6593, 0.1366, 1.1033, -0.8804, 1.3676, 0.4115, -0.5671, 0.3314, -0.2599, -0.3082, 1.3644, 0.5788, -0.1929, -2.0505, 0.4518, 0.8757, -0.2360, -0.4099, -0.5697, -1.5973, -0.6638, -1.1523, 1.4425, 1.3651, 1.9371, 0.5698, -0.3541, -1.3883, -0.0195, -1.0757, -1.4324, -1.6226, -2.4267, 0.3874, -0.7529, 1.4938, -2.5773, -1.1962, 0.3759, -0.6143, -1.0444, -0.6443, -0.8130, -1.7283, 1.4167, 1.3945, -1.2695, 0.7289, 0.7777, -0.0094, -1.8108, 0.2126, -0.2018, -0.4055, -0.7779, -0.8523, 0.0162, 0.2463, 0.5588, -0.7250, -0.0128, 0.6272, -0.7729, 0.4259, 0.7596, -1.9500, 0.5853, 0.3764, -0.1112, 0.7274, -2.8535, -0.0445, 0.4225, 
1.2179, 0.2219, -0.7064, -0.9654, 1.0501, 1.7142, 0.5312, -0.8180, -1.5697, 1.3062, -0.9321, -0.1652, -1.5298, -0.3575, -1.2046, -0.6571, -0.7689, -0.7032, 1.0727, -1.3259, 0.1200, 1.9357, -0.2519, -0.3717, 0.8054, 0.1180, -0.6921, 1.0245, -1.5500, -0.5280, -0.7462, 0.7924, 2.2701, -1.5094, -0.1973, -1.5919, 0.4869, 0.6739, -0.5242, 0.2559, -0.0149, -0.5332, -1.8313, 0.3598, 0.0804, -0.0780, -0.2930, -0.2844, -0.4752, -0.9919, 0.1809, 0.7622, -2.5069, -0.7724, -0.9441, 1.6101, 0.6461, -0.8932, 0.0600, 0.6911, 0.5191, -0.1719, -0.5829, -0.9168, 1.5282, 1.4399, 0.3264, -0.8894, 0.2880, -0.0697, 0.8977, -0.5004, 0.3844, 0.0925, 0.5592, -0.1664, 0.8575, -1.0348, 0.7326, -0.2124, 0.7533, 0.6270, -0.9559, -1.4159, 0.6788, 0.6163, -0.5951, -0.1403, -1.6088, -0.7731, 0.3876, 1.0429, -2.0960, 0.1726, 1.7446, -0.3963, 0.0785, -0.4701, 1.0074, 0.3319, -2.2675, -1.6163, -0.4003, -0.5468, 0.0452, -2.5586, 0.4747, -0.0271, -1.2161, 1.2121, 1.8738, -1.2207, -0.9218, -0.1430, 0.2512, -0.5236, -0.2544, -0.5868, -0.7086, -1.3328, -0.0243, 0.4759, 1.4125, 0.4947, 0.5054, 1.6253, 0.4198, -0.9150, 0.6374, 0.4581, 1.1527, 1.4440, -0.0590, -0.4601, 0.2490, -0.5739, 0.6798, -0.2156, -1.1386, -0.5011, -0.7411, 0.2825, -0.2595, 0.8070, 0.5270, 0.2595, -0.1089, 0.4221, -0.7851, 0.7112, -0.3038, 0.6169, -0.1513, -0.5872, 0.3974, 0.2431, 0.4934, -0.9406, -0.9372, 1.4525, 0.1376, 0.2558, 0.0661, 0.3509, 2.1667, 2.8428, 0.9429, -0.6143, -1.0969, 0.0955, 0.0914]]], device='cuda:0', grad_fn=<ViewBackward>) Code This code works, rnn1 = nn.GRU(256, 128, 1) input1 = torch.randn(100, 2, 256) h01 = torch.randn(1, 2, 128) output1, hn1 = rnn1(input1, h01) print(input1.shape, h01.shape) print(output1.shape, hn1.shape) Output torch.Size([100, 2, 256]) torch.Size([1, 2, 128]) torch.Size([100, 2, 128]) torch.Size([1, 2, 128]) Code This code also works, rnn1 = nn.GRU(256, 256) input1 = torch.randn(1, 1, 256) h01 = torch.randn(1, 1, 256) output1, hn1 = rnn1(input1, h01) print(input1.shape, h01.shape) print(output1.shape, hn1.shape) Output torch.Size([1, 1, 256]) torch.Size([1, 1, 256]) torch.Size([1, 1, 256]) torch.Size([1, 1, 256]) Code This does not work, rnn1 = nn.GRU(256, 256) input1 = torch.randn(1, 256) #input1 = input1.view(1, 1, -1) h01 = torch.randn(1, 1, 256) output1, hn1 = rnn1(input1, h01) print(input1.shape, h01.shape) print(output1.shape, hn1.shape) Output RuntimeError: input must have 3 dimensions, got 2
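As a small aside, unsqueeze can be used instead of view to add the missing dimension in the failing case above:

import torch

input1 = torch.randn(1, 256)
input1 = input1.view(1, 1, -1)        # (1, 1, 256), as in the tutorial
# or equivalently:
# input1 = torch.randn(1, 256).unsqueeze(0)   # also (1, 1, 256)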
https://stackoverflow.com/questions/65155928/
Equivalent of embeddings_regularizer in pyTorch
tf.keras.layers.Embedding has the parameter embeddings_regularizer. What would be the equivalent of this in PyTorch's nn.Embedding?
There is no direct equivalent in PyTorch, as PyTorch only supports L2 regularization on parameters via torch.optim optimizers. For example, torch.optim.SGD has a weight_decay parameter. If you set it up and optimize your nn.Embedding, it will be regularized by L2 with the specified strength (you can apply weight_decay to the nn.Embedding parameters only; see the per-parameter options of optimizers). If you wish to use L1 regularization you would have to: code it on your own, or use available third-party solutions. Coding on your own Usually we add the L1 penalty to the loss and backpropagate, but this is an inefficient approach. It is better to populate the gradients of our parameters (there are some edge cases, though) with the derivative of the regularizer directly (for L1 it is the sign of the value). Something along those lines: import torch # Do this in your optimization loop AT THE TOP embedding = torch.nn.Embedding(150, 100) embedding.weight.grad = torch.sign(embedding.weight) # Do the rest of optimization AND clear gradients! ... Though it is a little harder to make this work in general (things like batch accumulation etc.), and it is pretty unclear IMO. You could also apply L2 on top of that. torchlayers third-party library Disclaimer: I'm the author of this project. You can install torchlayers-nightly and get per-layer L1 and L2 regularization. Install via pip: pip install -U torchlayers-nightly In your code you could do: import torchlayers as tl import torch embedding = torch.nn.Embedding(150, 100) regularized_embedding = tl.L1(embedding) # Do the rest as per usual This feature is experimental for now, but it should work and I've used it with success previously. You should also be able to use tl.L2 the same way; see the docstrings about the usage of this particular layer. For more info check the github repository and read the documentation here.
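A sketch of the per-parameter weight_decay setup mentioned above, so that L2 is applied to the embedding only (the second parameter group and its layer are hypothetical):

import torch

embedding = torch.nn.Embedding(150, 100)
head = torch.nn.Linear(100, 10)

optimizer = torch.optim.SGD(
    [
        {"params": embedding.parameters(), "weight_decay": 1e-4},  # L2 on embedding
        {"params": head.parameters()},                             # no weight decay
    ],
    lr=0.01,
)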
https://stackoverflow.com/questions/65163143/
How does the nn.Embedding module relate intuitively to the idea of an embedding in general?
So, I'm having a hard time understanding nn.Embedding. Specifically, I can't connect the dots between what I understand about embeddings as a concept and what this specific implementation is doing. My understanding of an embedding is that it is a smaller-dimension representation of some larger-dimension data point. So it maps data in N-d to an M-d latent/embedding space such that M < N. As I understand it, this mapping is achieved through the learning process, as in an auto-encoder: the encoder learns the optimal embedding so that the decoder can reconstruct the original input. So my question is, how does this relate to the nn.Embedding module: A simple lookup table that stores embeddings of a fixed dictionary and size. This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings. Does this layer "learn" a lower-dimensional representation of a larger input space? Or is it something else entirely? What I'm looking for is to take the very abstract language of the documentation to something real: Let's say I have some input x. This input might be a vectorized image or maybe some sequence of daily temperature data. In any case, this input x has 100 elements (100 days of temperature, or a 10x10 image). How can you explain the use of nn.Embedding() in this case? What does each argument mean in a real-world context?
As you said, the aim when using an embedding is to reduce the dimension of your data. However, it does not learn a lower-dimensional representation of a larger input space on its own. Starting from a random initialization, you can improve this embedding through a learning process. This requires finding a suitable task to train the embedding on, which is, I think, a matter for another question. I believe it's called a "pretext task", where ultimately the objective is to have an accurate embedding matrix. You can check the parameters of any nn.Module with .parameters(). It will return a generator. << [x for x in nn.Embedding(10, 2).parameters()][0].shape >> torch.Size([10, 2]) Here, there are 10*2 parameters (i.e. dimension_input*dimension_output or, by PyTorch's naming, num_embeddings*embedding_dims). However it is, still, a lookup table: given an index it will return an embedding of size embedding_dims. But these embeddings (the values of this matrix) can be changed. Here's a little experiment: import torch import torch.nn as nn E = nn.Embedding(10, 2) optimizer = torch.optim.SGD(E.parameters(), lr=0.01) X = torch.randint(0, 10, size=(100,)) loss_before = E(X).mean() loss_before.backward() optimizer.step() loss_after = E(X).mean() As expected, loss_before and loss_after are different, which shows nn.Embedding's parameters are learnable. Edit: your question comes down to, "how do I encode my data?". For those examples you gave precisely: Let's say I have some input x. This input might be a vectorized image or maybe some sequence of daily temperature data. In any case, this input x has 100 elements (100 days of temperature, or a 10x10 image). You can't use an nn.Embedding to solve these cases. Embedding layers are different from a reduction matrix. The latter can be used to reduce every single vector of dimension d into dimension n where n<<d. The prerequisite to using an embedding layer is having a finite dictionary of possible elements. For example, you might want to represent a word with a vector of size n; then you would use an embedding of nb_possible_words x n. This way, for any given word in the dictionary, the layer will produce the corresponding n-size vector. As I said in the comments below, num_embeddings is the number of unique elements you are working with and embedding_dim is the size of the embedding, i.e. the size of the output vector. nn.Embedding is usually used at the head of a network to cast encoded data into a lower-dimensionality space. It won't solve your problem by magically reducing your dimensions. If you have a sequence of temperatures you want to analyse, you could encode each temperature as a one-hot vector. But this vector representation might be very large (depending on the number of different temperatures). Using an embedding layer would allow you to reduce the size of these vectors. This is important when the aim is to analyse the data with an RNN, or any other MLP for that matter, since the bigger your input size, the more parameters you will have!
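To make the lookup-table behaviour above concrete, a tiny self-contained sketch (the sizes are arbitrary):

import torch
import torch.nn as nn

E = nn.Embedding(num_embeddings=5, embedding_dim=3)
idx = torch.tensor([0, 2, 2])
out = E(idx)            # rows 0, 2, 2 of the 5x3 weight matrix
print(out.shape)        # torch.Size([3, 3])
assert torch.equal(out[1], E.weight[2])  # the lookup just returns a row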
https://stackoverflow.com/questions/65169371/
Visualize Autoencoder Output
I come with a pretty noob question but I'm stuck... I have created an autoencoder with PyTorch and trained it with the typical MNIST dataset and so on: class Autoencoder(nn.Module): def __init__(self, **kwargs): super().__init__() self.encoder_hidden_layer = nn.Linear( in_features=kwargs["input_shape"], out_features=kwargs["embedding_dim"] ) self.encoder_output_layer = nn.Linear( in_features=kwargs["embedding_dim"], out_features=kwargs["embedding_dim"] ) self.decoder_hidden_layer = nn.Linear( in_features=kwargs["embedding_dim"], out_features=kwargs["embedding_dim"] ) self.decoder_output_layer = nn.Linear( in_features=kwargs["embedding_dim"], out_features=kwargs["input_shape"] ) def forward(self, features): activation = self.encoder_hidden_layer(features) activation = torch.relu(activation) code = self.encoder_output_layer(activation) code = torch.relu(code) activation = self.decoder_hidden_layer(code) activation = torch.relu(activation) activation = self.decoder_output_layer(activation) reconstructed = torch.relu(activation) return reconstructed model = Autoencoder(input_shape=784, embedding_dim=128) criterion = nn.MSELoss() optimizer = optim.Adam(model.parameters(), lr=0.0001) What I want now is to visualize the reconstructed images, but I don't know how to do it. I know it's quite simple but I cannot find a way. I know that the shape of the output is [128,784] because the batch_size is 128 and 784 is 28x28 (x1 channel). Could anyone please tell me how I could get an image from my reconstructed tensor? Thank you so much!
First you will have to reshape the tensor into 128x1x28x28: reconstructed = x.reshape(128, 1, 28, 28) (where x is the batch returned by the model; call .detach() on it first if it still requires grad). Then, you can convert one of the batch elements into a PIL image using torchvision's functions. The following will show the first image: import torchvision.transforms as T img = T.ToPILImage()(reconstructed[0]) img.show()
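An alternative sketch using matplotlib, assuming reconstructed is the reshaped (128, 1, 28, 28) tensor from above:

import matplotlib.pyplot as plt

img = reconstructed[0].detach().squeeze().numpy()  # (28, 28) array
plt.imshow(img, cmap='gray')
plt.show()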
https://stackoverflow.com/questions/65171277/
how to load one type of image in cifar10 or stl10 with pytorch
This is a very simple question: I'm just trying to select a specific class of images (e.g. "car") from a standard PyTorch image dataset. At the moment the data loader looks like this: def cycle(iterable): while True: for x in iterable: yield x train_loader = torch.utils.data.DataLoader( torchvision.datasets.STL10('drive/My Drive/training/stl10', split='train+unlabeled', transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), ])), shuffle=True, batch_size=8) train_iterator = iter(cycle(train_loader)) class_names = ['airplane', 'bird', 'car', 'cat', 'deer', 'dog', 'horse', 'monkey', 'ship', 'truck'] train_iterator = iter(cycle(train_loader)) The iterator returns a batch of shuffled images of all types, but I would like to be able to select which types of images are returned, e.g. just images of deer, or ships.
Done it! def cycle(iterable): while True: for x in iterable: yield x # Return only images of a certain class (e.g. aeroplanes = class 0) def get_same_index(target, label): label_indices = [] for i in range(len(target)): if target[i] == label: label_indices.append(i) return label_indices # STL10 dataset train_dataset = torchvision.datasets.STL10('drive/My Drive/training/stl10', split='train+unlabeled', download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor()])) label_class = 1  # birds # Get indices of label_class train_indices = get_same_index(train_dataset.labels, label_class) bird_set = torch.utils.data.Subset(train_dataset, train_indices) train_loader = torch.utils.data.DataLoader(dataset=bird_set, shuffle=True, batch_size=batch_size, drop_last=True) train_iterator = iter(cycle(train_loader))
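For reference, the same index selection can be written more compactly (equivalent to the get_same_index helper above):

import torch

labels = torch.as_tensor(train_dataset.labels)
train_indices = (labels == label_class).nonzero(as_tuple=True)[0].tolist()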
https://stackoverflow.com/questions/65172786/
Building wheel for neural-renderer-pytorch (setup.py) ...installing multiperson and neural mesh renderer doesn't work for pytorch 1.6
I am trying to install a github repo named multiperson for PyTorch 1.6 and I get the following error. How can I make it work for PyTorch 1.6? (base) mona@mona:~/research$ cd phosa/ (base) mona@mona:~/research/phosa$ mkdir -p external (base) mona@mona:~/research/phosa$ git clone https://github.com/JiangWenPL/multiperson.git external/multiperson Cloning into 'external/multiperson'... remote: Enumerating objects: 752, done. remote: Counting objects: 100% (752/752), done. remote: Compressing objects: 100% (566/566), done. remote: Total 752 (delta 189), reused 723 (delta 173), pack-reused 0 Receiving objects: 100% (752/752), 48.29 MiB | 46.26 MiB/s, done. Resolving deltas: 100% (189/189), done. (base) mona@mona:~/research/phosa$ pip install external/multiperson/neural_renderer Processing ./external/multiperson/neural_renderer Building wheels for collected packages: neural-renderer-pytorch Building wheel for neural-renderer-pytorch (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/mona/anaconda3/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-nl9m5bw0 cwd: /tmp/pip-req-build-ma51z6r7/ Complete output (210 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.7 creating build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/load_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/perspective.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/vertices_to_faces.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/visibility.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/get_points_from_angles.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/look.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/projection.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/rasterize.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/save_obj.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.7/neural_renderer creating build/lib.linux-x86_64-3.7/neural_renderer/cuda copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer/cuda running build_ext building 'neural_renderer.cuda.load_textures' extension creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7 creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda Emitting ninja build file /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) [1/2] /usr/bin/nvcc -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda_kernel.cu -o /tmp/pip-req- copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.7/neural_renderer copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.7/neural_renderer creating build/lib.linux-x86_64-3.7/neural_renderer/cuda copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.7/neural_renderer/cuda running build_ext building 'neural_renderer.cuda.load_textures' extension creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7 creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer creating /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda Emitting ninja build file /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/build.ninja... Compiling objects... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) [1/2] c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 FAILED: /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o c++ -MMD -MF /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o.d -pthread -B /home/mona/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ In 
file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1: /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84: warning: ignoring #pragma omp parallel [-Wunknown-pragmas] 84 | #pragma omp parallel for if ((end - begin) >= grain_size) | /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp: In function ‘at::Tensor load_textures(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int)’: /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] 15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’ 17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’ 28 | CHECK_INPUT(image); | ^~~~~~~~~~~ In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1: /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here 268 | DeprecatedTypeProperties & type() const { | ^~~~ 
/tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:23: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’? 15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’ 17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’ 28 | CHECK_INPUT(image); | ^~~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations] 15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’ 17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:29:5: note: in expansion of macro ‘CHECK_INPUT’ 29 | CHECK_INPUT(faces); | ^~~~~~~~~~~ In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1: /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here 268 | DeprecatedTypeProperties & type() const { | ^~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] 15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’ 17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:30:5: note: in expansion of macro ‘CHECK_INPUT’ 30 | CHECK_INPUT(is_update); | ^~~~~~~~~~~ In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1: /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here 268 | DeprecatedTypeProperties & type() const { | ^~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:15:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). 
[-Wdeprecated-declarations] 15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") | ^ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’ 17 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) | ^~~~~~~~~~ /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:31:5: note: in expansion of macro ‘CHECK_INPUT’ 31 | CHECK_INPUT(textures); | ^~~~~~~~~~~ In file included from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/Context.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3, from /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda.cpp:1: /home/mona/anaconda3/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:30: note: declared here 268 | DeprecatedTypeProperties & type() const { | ^~~~ [2/2] /usr/bin/nvcc -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/home/mona/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/home/mona/anaconda3/include/python3.7m -c -c /tmp/pip-req-build-ma51z6r7/neural_renderer/cuda/load_textures_cuda_kernel.cu -o /tmp/pip-req-build-ma51z6r7/build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++14 ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build env=env) File "/home/mona/anaconda3/lib/python3.7/subprocess.py", line 512, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-req-build-ma51z6r7/setup.py", line 40, in <module> cmdclass = {'build_ext': BuildExtension} File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 163, in setup return distutils.core.setup(**attrs) File "/home/mona/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File "/home/mona/anaconda3/lib/python3.7/distutils/command/install.py", line 545, in run self.run_command('build') File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/mona/anaconda3/lib/python3.7/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/home/mona/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/home/mona/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run _build_ext.run(self) File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run _build_ext.build_ext.run(self) File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 340, in run self.build_extensions() File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions build_ext.build_extensions(self) File "/home/mona/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions _build_ext.build_ext.build_extensions(self) File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions self._build_extensions_serial() File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial self.build_extension(ext) File "/home/mona/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension _build_ext.build_extension(self, ext) File "/home/mona/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension depends=ext.depends) File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile with_cuda=with_cuda) File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects error_prefix='Error compiling objects for extension') File "/home/mona/anaconda3/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build raise RuntimeError(message) RuntimeError: Error compiling objects for extension ---------------------------------------- Rolling back uninstall of neural-renderer-pytorch Moving to /home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer/ from /home/mona/anaconda3/lib/python3.7/site-packages/~eural_renderer Moving to 
/home/mona/anaconda3/lib/python3.7/site-packages/neural_renderer_pytorch-1.1.3.dist-info/ from /home/mona/anaconda3/lib/python3.7/site-packages/~eural_renderer_pytorch-1.1.3.dist-info ERROR: Command errored out with exit status 1: /home/mona/anaconda3/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ma51z6r7/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-ul8nk1jn/install-record.txt --single-version-externally-managed --compile --install-headers /home/mona/anaconda3/include/python3.7m/neural-renderer-pytorch Check the logs for full command output. I have: $ python Python 3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.__version__ '1.6.0' $ lsb_release -a LSB Version: core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch Distributor ID: Ubuntu Description: Ubuntu 20.04.1 LTS Release: 20.04 Codename: focal $ nvidia-smi Sun Dec 6 16:36:36 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N/A | | N/A 49C P8 21W / N/A | 1546MiB / 7982MiB | 8% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ $ nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:16_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243
You would need to change all AT_CHECK in neural mesh renderer to TORCH_CHECK (base) mona@mona:~/research/phosa/external/multiperson/neural_renderer$ rg AT_CHECK neural_renderer/cuda/load_textures_cuda.cpp 15:#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") 16:#define CHECK_CONTIGUOUS(x) AT_CHECK(x.is_contiguous(), #x " must be contiguous") neural_renderer/cuda/create_texture_image_cuda.cpp 13:#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") 14:#define CHECK_CONTIGUOUS(x) AT_CHECK(x.is_contiguous(), #x " must be contiguous") neural_renderer/cuda/rasterize_cuda.cpp 69:#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") 70:#define CHECK_CONTIGUOUS(x) AT_CHECK(x.is_contiguous(), #x " must be contiguous") (base) mona@mona:~/research/phosa$ pip install external/multiperson/neural_renderer Processing ./external/multiperson/neural_renderer Building wheels for collected packages: neural-renderer-pytorch Building wheel for neural-renderer-pytorch (setup.py) ... done Created wheel for neural-renderer-pytorch: filename=neural_renderer_pytorch-1.1.3-cp37-cp37m-linux_x86_64.whl size=6321659 sha256=5e2f4afc2346a90c5cd804b226dd7c424ab95f477aad67f2ca3f15530484fbc6 Stored in directory: /tmp/pip-ephem-wheel-cache-_rf6c5ld/wheels/c7/1b/84/10bf7a286a267887d8c7d382677c292cf18e1bba4e2508ed33 Successfully built neural-renderer-pytorch Installing collected packages: neural-renderer-pytorch Attempting uninstall: neural-renderer-pytorch Found existing installation: neural-renderer-pytorch 1.1.3 Uninstalling neural-renderer-pytorch-1.1.3: Successfully uninstalled neural-renderer-pytorch-1.1.3 Successfully installed neural-renderer-pytorch-1.1.3
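A hypothetical one-liner to apply that rename across the package before reinstalling (the affected files are the three .cpp files listed by rg above):

cd external/multiperson/neural_renderer
sed -i 's/AT_CHECK/TORCH_CHECK/g' neural_renderer/cuda/*.cpp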
https://stackoverflow.com/questions/65173409/
truncated bptt pytorch implementation question
I'm trying to implement TBPTT (truncated backpropagation through time) in PyTorch. I've found an implementation below in a forum, and I get the logic behind the code, but I keep getting an "inplace operation" error. class TBPTT(): def __init__(self, one_step_module, loss_module, k1, k2, optimizer): self.one_step_module = one_step_module self.loss_module = loss_module self.k1 = k1 self.k2 = k2 self.retain_graph = k1 < k2 # You can also remove all the optimizer code here, and the # train function will just accumulate all the gradients in # one_step_module parameters self.optimizer = optimizer def train(self, input_sequence, init_state): states = [(None, init_state)] for j, (inp, target) in enumerate(input_sequence): state = states[-1][1].detach() state.requires_grad=True output, new_state = self.one_step_module(inp, state) states.append((state, new_state)) while len(states) > self.k2: # Delete stuff that is too old del states[0] if (j+1)%self.k1 == 0: loss = self.loss_module(output, target) optimizer.zero_grad() # backprop last module (keep graph only if they ever overlap) start = time.time() loss.backward(retain_graph=self.retain_graph) for i in range(self.k2-1): # if we get all the way back to the "init_state", stop if states[-i-2][0] is None: break curr_grad = states[-i-1][0].grad states[-i-2][1].backward(curr_grad, retain_graph=self.retain_graph) print("bw: {}".format(time.time()-start)) optimizer.step() seq_len = 20 layer_size = 50 idx = 0 class MyMod(nn.Module): def __init__(self): super(MyMod, self).__init__() self.lin = nn.Linear(2*layer_size, 2*layer_size) def forward(self, inp, state): global idx full_out = self.lin(torch.cat([inp, state], 1)) # out, new_state = full_out.chunk(2, dim=1) out = full_out.narrow(1, 0, layer_size) new_state = full_out.narrow(1, layer_size, layer_size) def get_pr(idx_val): def pr(*args): print("doing backward {}".format(idx_val)) return pr new_state.register_hook(get_pr(idx)) out.register_hook(get_pr(idx)) print("doing fw {}".format(idx)) idx += 1 return out, new_state one_step_module = MyMod() loss_module = nn.MSELoss() input_sequence = [(torch.rand(200, layer_size), torch.rand(200, layer_size))] * seq_len optimizer = torch.optim.SGD(one_step_module.parameters(), lr=1e-3) runner = TBPTT(one_step_module, loss_module, 5, 7, optimizer) runner.train(input_sequence, torch.zeros(200, layer_size)) print("done") Here is the weird thing. When I tried to run the code the first time, I kept getting another error, and after a thorough inspection I found that some of the variables, such as "one_step_module" and "input_sequence", were shadowing other variables in the outer scope. So after renaming those variables the code ran just fine. Then, when I tried to revise the code a bit further for my own project, I started getting the "inplace operation" error. So, in order to see what went wrong, I reverted the code back to the original code above, but I kept getting the error. I even tried opening a new file and copy-pasting the implementation right from the beginning, and I still can't get the code to run. This is driving me CRAZY. Here's the "inplace operation" error I started getting from the implementation above.
C:\Users\bboyj\anaconda3\envs\jinkyu\python.exe C:/Users/bboyj/PycharmProjects/pythonProject/test1.py doing fw 0 doing fw 1 doing fw 2 doing fw 3 doing fw 4 doing backward 4 doing backward 3 doing backward 2 doing backward 1 doing backward 0 bw: 0.17385029792785645 doing fw 5 doing fw 6 doing fw 7 doing fw 8 doing fw 9 doing backward 9 doing backward 8 doing backward 7 doing backward 6 doing backward 5 doing backward 4 C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\autograd\__init__.py:130: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:100.) Variable._execution_engine.run_backward( Traceback (most recent call last): File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 100, in <module> runner.train(input_sequence, torch.zeros(200, layer_size)) File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 59, in train states[-i-2][1].backward(curr_grad, retain_graph=self.retain_graph) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\autograd\__init__.py", line 130, in backward Variable._execution_engine.run_backward( RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 100]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). Process finished with exit code 1 just in case you want to see the specific code triggering the error. Here's the error log with torch anomaly detection. C:\Users\bboyj\anaconda3\envs\jinkyu\python.exe C:/Users/bboyj/PycharmProjects/pythonProject/test1.py doing fw 0 doing fw 1 doing fw 2 doing fw 3 doing fw 4 doing backward 4 doing backward 3 doing backward 2 doing backward 1 doing backward 0 bw: 0.17083358764648438 doing fw 5 doing fw 6 doing fw 7 doing fw 8 doing fw 9 doing backward 9 doing backward 8 doing backward 7 doing backward 6 doing backward 5 doing backward 4 C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\autograd\__init__.py:130: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:100.) Variable._execution_engine.run_backward( C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\autograd\__init__.py:130: UserWarning: Error detected in AddmmBackward. 
Traceback of forward call that caused the error: File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 101, in <module> runner.train(input_sequence, torch.zeros(200, layer_size)) File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 41, in train output, new_state = self.one_step_module(inp, state) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 78, in forward full_out = self.lin(torch.cat([inp111, state111], 1)) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\nn\modules\linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\nn\functional.py", line 1690, in linear ret = torch.addmm(bias, input, weight.t()) (Triggered internally at ..\torch\csrc\autograd\python_anomaly_mode.cpp:104.) Variable._execution_engine.run_backward( Traceback (most recent call last): File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 101, in <module> runner.train(input_sequence, torch.zeros(200, layer_size)) File "C:/Users/bboyj/PycharmProjects/pythonProject/test1.py", line 60, in train states[-i-2][1].backward(curr_grad, retain_graph=self.retain_graph) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\bboyj\anaconda3\envs\jinkyu\lib\site-packages\torch\autograd\__init__.py", line 130, in backward Variable._execution_engine.run_backward( RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [100, 100]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! The main problem is that the first iteration is fine because the loss is calculated only with the new hidden state and the "detached and required_grad = True" state, but the second iteration when it tries to backward on the previous set of hidden states which already have "backwarded" it raises error. So in this case, after forwarding and backward on t =0,1,2,3,4 and forwarding on t = 5,6,7,8,9, when it tries to backward on t=9,8,7,6,5,4,3 (because k2 is 7), backward works fine for t=9,8,7,6,5 but fails on t = 4. Can anyone please shed some light on this??
After carefully inspecting the code, I've tracked the bug down. The problem was that, after backpropagating through the previous set of hidden states, the optimizer was trying to step on hidden states that had already been used in a backward pass. I moved the optimizer out of the scope of the for loop and everything works fine! I'm leaving this answer for those of you who are trying to implement truncated BPTT.
https://stackoverflow.com/questions/65175252/
Pytorch: Converting a VGG model into a sequential model, but getting different outputs
Background: I'm working on an adversarial detector method which requires access to the outputs of each hidden layer. I loaded a pretrained VGG16 from torchvision.models. To access the output from each hidden layer, I put it into a sequential model: vgg16 = models.vgg16(pretrained=True) vgg16_seq = nn.Sequential(*( list(list(vgg16.children())[0]) + [nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten()] + list(list(vgg16.children())[2]))) Without nn.Flatten(), the forward method will complain that the dimensions don't match between mat1 and mat2. I looked into the torchvision VGG implementation; it uses the [feature..., AvgPool, flatten, classifier...] structure. Since the AdaptiveAvgPool2d layer and the Flatten layer have no parameters, I assumed this should work, but I get different outputs. output1 = vgg16(X_small) print(output1.size()) output2 = vgg16_seq(X_small) print(output2.size()) torch.equal(output1, output2) Problem: they have the same dimensions but different values. torch.Size([32, 1000]) torch.Size([32, 1000]) False I tested the outputs right after the AdaptiveAvgPool2d layer, and they are equal: output1 = nn.Sequential(*list(vgg16.children())[:2])(X_small) print(output1.size()) output2 = nn.Sequential(*list(vgg16_seq)[:32])(X_small) print(output2.size()) torch.equal(output1, output2) torch.Size([32, 512, 7, 7]) torch.Size([32, 512, 7, 7]) True Can someone point out what went wrong? Thank you
You need to put both models in eval mode before doing inference, i.e. vgg16.eval() vgg16_seq.eval() The outputs differ because VGG16's classifier contains dropout layers: in training mode each forward pass draws a fresh random dropout mask, so two calls (even through modules sharing the very same weights) give different results. eval mode disables dropout and makes the forward pass deterministic.
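A minimal check (X_small as in the question; since the two models share the same parameter tensors, after eval they should agree exactly):

import torch

vgg16.eval()
vgg16_seq.eval()
with torch.no_grad():
    out1 = vgg16(X_small)
    out2 = vgg16_seq(X_small)
print(torch.equal(out1, out2))  # expected: True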
https://stackoverflow.com/questions/65175275/
generating segment labels for a Tensor given a value indicating segment boundaries
Does anyone know of a way to generate a 'segment label' for a Tensor, given a unique value that represents segment boundaries within the Tensor? For example, given a 1D input tensor where the value 1 represents a segment boundary, x = torch.Tensor([5, 4, 1, 3, 6, 2]) the resulting segment label Tensor should have the same shape, with values representing the two segments: segment_label = torch.Tensor([1, 1, 1, 2, 2, 2]) Likewise, for a batch of inputs, e.g. batch size = 3, x = torch.Tensor([ [5, 4, 1, 3, 6, 2], [9, 4, 5, 1, 8, 10], [10, 1, 5, 4, 8, 9] ]) the resulting segment label Tensor (using 1 as the segment separator) should look something like this: segment_label = torch.Tensor([ [1, 1, 1, 2, 2, 2], [1, 1, 1, 1, 2, 2], [1, 1, 2, 2, 2, 2] ]) Context: I'm currently working with Fairseq's Transformer implementation in PyTorch for a seq2seq NLP task. I am looking for a way to incorporate BERT-like segment embeddings in the Transformer during the encoder's forward pass, rather than modifying an existing dataset used for translation tasks such as language_pair_dataset. Thanks in advance!
You can use torch.cumsum to pull off the trick: mask = (x == 1).to(x) # mask with only the boundaries segment_label = mask.cumsum(dim=-1) - mask + 1 This yields the desired segment_label (subtracting the mask back out keeps each boundary element in the segment it closes).
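To confirm on the batched example from the question:

import torch

x = torch.Tensor([[5, 4, 1, 3, 6, 2], [9, 4, 5, 1, 8, 10], [10, 1, 5, 4, 8, 9]])
mask = (x == 1).to(x)                       # 1.0 at each boundary, 0.0 elsewhere
print(mask.cumsum(dim=-1) - mask + 1)
# tensor([[1., 1., 1., 2., 2., 2.],
#         [1., 1., 1., 1., 2., 2.],
#         [1., 1., 2., 2., 2., 2.]])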
https://stackoverflow.com/questions/65175941/
How to generate the architectural figure of the network in pytorch when the type of the output of the network is list?
I wonder how to use torchviz to generate the network architecture when the output is a list type. The demo code is as follows: import torch import torch.nn as nn class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d(1, 16, 3, 1, 1), nn.ReLU(), nn.AvgPool2d(2, 2) ) self.conv2 = nn.Sequential( nn.Conv2d(16, 32, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2, 2) ) self.fc = nn.Sequential( nn.Linear(32 * 7 * 7, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU() ) self.out = nn.Linear(64, 10) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.view(x.size(0), -1) x = self.fc(x) output = [] output.append(x) output.append(self.out(x)) return output MyConvNet = ConvNet() and I use torchviz to view this network's architecture like from torchviz import make_dot x = torch.randn(1, 1, 28, 28).requires_grad_(True) y = MyConvNet(x) MyConvNetVis = make_dot(y, params=dict(list(MyConvNet.named_parameters()) + [('x', x)])) MyConvNetVis.format = "png" MyConvNetVis.directory = "data" MyConvNetVis.view() then I was blocked by this problem: AttributeError Traceback (most recent call last) <ipython-input-23-c8e3cd3a8b4e> in <module> 2 x = torch.randn(1, 1, 28, 28).requires_grad_(True) 3 y = MyConvNet(x) ----> 4 MyConvNetVis = make_dot(y, params=dict(list(MyConvNet.named_parameters()) + [('x', x)])) 5 MyConvNetVis.format = "png" 6 MyConvNetVis.directory = "data" ~/anaconda3/envs/torch1.3/lib/python3.6/site-packages/torchviz/dot.py in make_dot(var, params) 35 return '(' + (', ').join(['%d' % v for v in size]) + ')' 36 ---> 37 output_nodes = (var.grad_fn,) if not isinstance(var, tuple) else tuple(v.grad_fn for v in var) 38 39 def add_nodes(var): AttributeError: 'list' object has no attribute 'grad_fn' Any advice will be appreciated.
The error indicates torchviz is trying to navigate through the network using grad_fn in order to compute its own graph. However, a list is not a tensor and doesn't have the grad_fn property. I'm not quite sure you can have multiple outputs (i.e. a list or tuple as output) when working with torchviz. As a workaround, if you just want to visualize the network, you could replace the list with a concatenation of the two tensors using torch.cat: def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.view(x.size(0), -1) x = self.fc(x) out = self.out(x) output = torch.cat([x, out], dim=1) return output In the resulting graph, notice how the last node is a CatBackward with two incoming branches, one from AddmmBackward (out) and the other from ReluBackward0 (x). This last node is fictitious and is not present in your actual model, so you could remove it from the graph by hand.
https://stackoverflow.com/questions/65180618/
pytorch can't shuffle the dataset
I am trying to make an AI with the MNIST dataset from torchvision, building it with PyTorch, but when I type the code that shuffles the data and run it, it says: trainset = torch.utils.data.Dataloader(train, batch_size=10, shuffle=True) AttributeError: module 'torch.utils.data' has no attribute 'Dataloader' I tried a different method, but it still does not work, and it says: trainset = torch.autograd.Variable.DataLoader(train, batch_size=10, shuffle=True) AttributeError: type object 'Variable' has no attribute 'DataLoader' The code I use is: import torch import numpy as np import torchvision from torchvision import transforms, datasets train = datasets.MNIST("", train=True, download=True, transform = transforms.Compose([transforms.ToTensor()])) test = datasets.MNIST("", train=False, download=True, transform = transforms.Compose([transforms.ToTensor()])) trainset = torch.utils.data.Dataloader(train, batch_size=10, shuffle=True) testset = torch.utils.data.Dataloader(test, batch_size=10, shuffle=True) for data in trainset: print(data) break The error from this code: trainset = torch.utils.data.Dataloader(train, batch_size=10, shuffle=True) AttributeError: module 'torch.utils.data' has no attribute 'Dataloader' I tried a new version, but it is still not working: import torch import numpy as np import torchvision from torchvision import transforms, datasets train = datasets.MNIST("", train=True, download=True, transform = transforms.Compose([transforms.ToTensor()])) test = datasets.MNIST("", train=False, download=True, transform = transforms.Compose([transforms.ToTensor()])) trainset = torch.autograd.Variable.DataLoader(train, batch_size=10, shuffle=True) testset = torch.autograd.Variable.DataLoader(test, batch_size=10, shuffle=True) for data in trainset: print(data) break The error from this code: trainset = torch.autograd.Variable.DataLoader(train, batch_size=10, shuffle=True) AttributeError: type object 'Variable' has no attribute 'DataLoader' I am still confused why it is not working; I was following a tutorial but it did not work.
You have a simple typo: Dataloader -> DataLoader (capital L). Try: trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)
https://stackoverflow.com/questions/65181608/
model.to(device) for PyTorch Lightning
I currently train my model on GPUs using PyTorch Lightning: trainer = pl.Trainer( gpus=[0,1], distributed_backend='ddp', resume_from_checkpoint=hparams["resume_from_checkpoint"]) trainer.fit(model, train_dataloader=train_loader, val_dataloaders=val_loader) Instructions are also clear for how to run test samples with a trainer defined to use the GPU: trainer.test(test_dataloader=test_dataloader) and also for how to load a model and use it interactively: model = transformer.Model.load_from_checkpoint('/checkpoints/run_300_epoch_217.ckpt') results = model(in_data, I use the latter to interface with an interactive system via sockets in a Docker container. Is there a proper way to make this PyTorch Lightning model run on the GPU? The Lightning instructions say not to use model.to(device), but it appears to work just like plain PyTorch. Is the reason for that instruction to avoid a side effect? I started reading about ONNX, but would rather just have an easy way to specify the GPU, since the interactive setup works perfectly on the CPU.
My understanding is that "Remove any .cuda() or .to(device) calls" only applies when using the Lightning trainer, because the trainer handles device placement itself. If you don't use the trainer, a LightningModule is basically just a regular PyTorch model with some naming conventions. So using model.to(device) is indeed how to run it on the GPU.
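A minimal sketch of the interactive GPU setup described in the question (assuming in_data is a single tensor; if your model takes several inputs, move each one to the device individually):

import torch

model = transformer.Model.load_from_checkpoint('/checkpoints/run_300_epoch_217.ckpt')
model.eval()        # disable dropout/batchnorm updates for inference
model.to('cuda')    # fine outside the Lightning trainer
with torch.no_grad():
    results = model(in_data.to('cuda'))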
https://stackoverflow.com/questions/65185608/
Difference between these implementations of LSTM Autoencoder?
Specifically what spurred this question is the return_sequence argument of TensorFlow's version of an LSTM layer. The docs say: Boolean. Whether to return the last output. in the output sequence, or the full sequence. Default: False. I've seen some implementations, especially autoencoders that use this argument to strip everything but the last element in the output sequence as the output of the 'encoder' half of the autoencoder. Below are three different implementations. I'd like to understand the reasons behind the differences, as the seem like very large differences but all call themselves the same thing. Example 1 (TensorFlow): This implementation strips away all outputs of the LSTM except the last element of the sequence, and then repeats that element some number of times to reconstruct the sequence: model = Sequential() model.add(LSTM(100, activation='relu', input_shape=(n_in,1))) # Decoder below model.add(RepeatVector(n_out)) model.add(LSTM(100, activation='relu', return_sequences=True)) model.add(TimeDistributed(Dense(1))) When looking at implementations of autoencoders in PyTorch, I don't see authors doing this. Instead they use the entire output of the LSTM for the encoder (sometimes followed by a dense layer and sometimes not). Example 1 (PyTorch): This implementation trains an embedding BEFORE an LSTM layer is applied... It seems to almost defeat the idea of an LSTM based auto-encoder... The sequence is already encoded by the time it hits the LSTM layer. class EncoderLSTM(nn.Module): def __init__(self, input_size, hidden_size, n_layers=1, drop_prob=0): super(EncoderLSTM, self).__init__() self.hidden_size = hidden_size self.n_layers = n_layers self.embedding = nn.Embedding(input_size, hidden_size) self.lstm = nn.LSTM(hidden_size, hidden_size, n_layers, dropout=drop_prob, batch_first=True) def forward(self, inputs, hidden): # Embed input words embedded = self.embedding(inputs) # Pass the embedded word vectors into LSTM and return all outputs output, hidden = self.lstm(embedded, hidden) return output, hidden Example 2 (PyTorch): This example encoder first expands the input with one LSTM layer, then does its compression via a second LSTM layer with a smaller number of hidden nodes. Besides the expansion, this seems in line with this paper I found: https://arxiv.org/pdf/1607.00148.pdf However, in this implementation's decoder, there is no final dense layer. The decoding happens through a second lstm layer that expands the encoding back to the same dimension as the original input. See it here. This is not in line with the paper (although I don't know if the paper is authoritative or not). class Encoder(nn.Module): def __init__(self, seq_len, n_features, embedding_dim=64): super(Encoder, self).__init__() self.seq_len, self.n_features = seq_len, n_features self.embedding_dim, self.hidden_dim = embedding_dim, 2 * embedding_dim self.rnn1 = nn.LSTM( input_size=n_features, hidden_size=self.hidden_dim, num_layers=1, batch_first=True ) self.rnn2 = nn.LSTM( input_size=self.hidden_dim, hidden_size=embedding_dim, num_layers=1, batch_first=True ) def forward(self, x): x = x.reshape((1, self.seq_len, self.n_features)) x, (_, _) = self.rnn1(x) x, (hidden_n, _) = self.rnn2(x) return hidden_n.reshape((self.n_features, self.embedding_dim)) Question: I'm wondering about this discrepancy in implementations. The difference seems quite large. Are all of these valid ways to accomplish the same thing? Or are some of these mis-guided attempts at a "real" LSTM autoencoder?
There is no official or correct way of designing the architecture of an LSTM-based autoencoder... The only specifics the name provides is that the model should be an autoencoder and that it should use an LSTM layer somewhere. The implementations you found are each different and unique on their own, even though they could be used for the same task. Let's describe them: TF implementation: It assumes the input has only one channel, meaning that each element in the sequence is just a number and that this is already preprocessed. The default behaviour of the LSTM layer in Keras/TF is to output only the last output of the LSTM; you can set it to output all the output steps with the return_sequences parameter. In this case the input data has been shrunk to (batch_size, LSTM_units). Consider that the last output of an LSTM is of course a function of the previous outputs (specifically if it is a stateful LSTM). It applies a Dense(1) as the last layer in order to get the same shape as the input. PyTorch 1: They apply an embedding to the input before it is fed to the LSTM. This is standard practice, and it helps, for example, to transform each input element into vector form (see word2vec, where each word in a text sequence, which isn't itself a vector, is mapped into a vector space). It is only a preprocessing step so that the data has a more meaningful form. This does not defeat the idea of the LSTM autoencoder, because the embedding is applied independently to each element of the input sequence, so the sequence is not yet encoded when it enters the LSTM layer. PyTorch 2: In this case the input shape is not (seq_len, 1) as in the first TF example, so the decoder doesn't need a dense layer after. The author used a number of units in the LSTM layer equal to the input shape. In the end, you choose the architecture of your model depending on the data you want to train on, specifically: the nature of the data (text, audio, images), the input shape, the amount of data you have, and so on...
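A quick, standalone way to see what return_sequences changes, shape-wise (my own snippet, not taken from any of the implementations above):

import tensorflow as tf

x = tf.random.normal((2, 5, 3))  # (batch, timesteps, features)
print(tf.keras.layers.LSTM(4)(x).shape)                         # (2, 4): last step only
print(tf.keras.layers.LSTM(4, return_sequences=True)(x).shape)  # (2, 5, 4): every step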
https://stackoverflow.com/questions/65188556/
pytorch tensors cat on dim =0 not worked for me
I have a problem with cat in pytorch. I want to concatenate tensors on dim=0. For example, I want something like this >>> x = torch.randn(2, 3) >>> x tensor([[ 0.6580, -1.0969, -0.4614], [-0.1034, -0.5790, 0.1497]]) >>> torch.cat((x, x, x), 0) tensor([[ 0.6580, -1.0969, -0.4614], [-0.1034, -0.5790, 0.1497], [ 0.6580, -1.0969, -0.4614], [-0.1034, -0.5790, 0.1497], [ 0.6580, -1.0969, -0.4614], [-0.1034, -0.5790, 0.1497]]) but when I try to do it in my program I have def create_batches_to_device(train_df, test_df, device,batch_size=2): train_tensor = torch.tensor([]) for i in range (batch_size): rand_2_strs = train_df.sample(2) tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0],rand_2_strs.get('Sigma').iloc[1], rand_2_strs.get('mu').iloc[0],rand_2_strs.get('mu').iloc[1], rand_2_strs.get('th').iloc[0],rand_2_strs.get('th').iloc[1], np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]),np.array(rand_2_strs.get('Decay').iloc[1]),t)[0])]) print("it is tmp tensor") print(tmp_tensor) train_tensor = torch.cat((train_tensor,tmp_tensor),dim=0) print("this is after cat") print(train_tensor) create_batches_to_device(train_data, test_data, device) which gives the result it is tmp tensor tensor([ 0.3244, -0.6401, -0.7959, 0.9019, 0.1468, -1.7093, -6.4419], dtype=torch.float64) this is after cat tensor([ 0.3244, -0.6401, -0.7959, 0.9019, 0.1468, -1.7093, -6.4419], dtype=torch.float64) it is tmp tensor tensor([ 1.2923, -0.3088, -0.1275, 0.6417, -1.3383, 1.4020, 28.9065], dtype=torch.float64) this is after cat tensor([ 0.3244, -0.6401, -0.7959, 0.9019, 0.1468, -1.7093, -6.4419, 1.2923, -0.3088, -0.1275, 0.6417, -1.3383, 1.4020, 28.9065], dtype=torch.float64) and it does not matter whether dim=0 or dim=-1 is used; the result is the same for both variants. Here is an example (note dim=-1): def create_batches_to_device(train_df, test_df, device,batch_size=2): train_tensor = torch.tensor([]) for i in range (batch_size): rand_2_strs = train_df.sample(2) tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0],rand_2_strs.get('Sigma').iloc[1], rand_2_strs.get('mu').iloc[0],rand_2_strs.get('mu').iloc[1], rand_2_strs.get('th').iloc[0],rand_2_strs.get('th').iloc[1], np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]),np.array(rand_2_strs.get('Decay').iloc[1]),t)[0])]) print("it is tmp tensor") print(tmp_tensor) train_tensor = torch.cat((train_tensor,tmp_tensor),dim=-1) print("this is after cat") print(train_tensor) create_batches_to_device(train_data, test_data, device) and the result is the same: it is tmp tensor tensor([ 1.0183, 0.2162, 0.4987, -0.0165, 0.2094, 0.9425, -14.4564], dtype=torch.float64) this is after cat tensor([ 1.0183, 0.2162, 0.4987, -0.0165, 0.2094, 0.9425, -14.4564], dtype=torch.float64) it is tmp tensor tensor([ 0.2389, -1.0108, -0.2350, 0.7105, -0.9200, 0.3282, 7.5456], dtype=torch.float64) this is after cat tensor([ 1.0183, 0.2162, 0.4987, -0.0165, 0.2094, 0.9425, -14.4564, 0.2389, -1.0108, -0.2350, 0.7105, -0.9200, 0.3282, 7.5456], dtype=torch.float64)
The problem was that tmp_tensor had shape ([7]), so I could only concatenate along one dimension. The solution was to add one new line, tmp_tensor = torch.unsqueeze(tmp_tensor, 0) Now tmp_tensor has shape ([1, 7]) and I can use torch.cat without problems: def create_batches_to_device(train_df, test_df, device,batch_size=3): train_tensor = torch.tensor([]) for i in range (batch_size): rand_2_strs = train_df.sample(2) tmp_tensor = torch.tensor([rand_2_strs.get('Sigma').iloc[0],rand_2_strs.get('Sigma').iloc[1], rand_2_strs.get('mu').iloc[0],rand_2_strs.get('mu').iloc[1], rand_2_strs.get('th').iloc[0],rand_2_strs.get('th').iloc[1], np.log(weighted_mse(np.array(rand_2_strs.get('Decay').iloc[0]),np.array(rand_2_strs.get('Decay').iloc[1]),t)[0])]) print("it is tmp tensor") tmp_tensor = torch.unsqueeze(tmp_tensor, 0) print(tmp_tensor.shape) train_tensor = torch.cat((train_tensor,tmp_tensor),dim=0) print("this is after cat") print(train_tensor) create_batches_to_device(train_data, test_data, device) and the result is it is tmp tensor torch.Size([1, 7]) this is after cat tensor([[ 0.9207, -0.9658, 0.0492, 1.6959, 0.4620, -0.2433, -6.4764]], dtype=torch.float64) it is tmp tensor torch.Size([1, 7]) this is after cat tensor([[ 0.9207, -0.9658, 0.0492, 1.6959, 0.4620, -0.2433, -6.4764], [-0.5921, -0.1198, 0.6192, -0.0977, -0.1704, 1.2384, 9.4497]], dtype=torch.float64) it is tmp tensor torch.Size([1, 7]) this is after cat tensor([[ 0.9207, -0.9658, 0.0492, 1.6959, 0.4620, -0.2433, -6.4764], [-0.5921, -0.1198, 0.6192, -0.0977, -0.1704, 1.2384, 9.4497], [ 0.3839, -0.3153, 0.6467, -0.9995, -0.7415, -0.5487, -6.5500]], dtype=torch.float64)
https://stackoverflow.com/questions/65188835/
Translating Conv1D Layer from pytorch to tensorflow/keras
I want to create an equivalent Keras layer for this source: Layer=torch.nn.Conv1d(in_features, out_features, 1) My input is shaped (Batch_size, Channel, Width). PyTorch compiles this layer to: Conv1d(10, 256, kernel_size=(1,), stride=(1,)) How can I express this layer in TensorFlow? I have this so far: layer1 = tf.keras.layers.Conv1D(in_features-out_features+1, kernel_size=1) But I am not confident that this is the right approach.
In TensorFlow's Keras you would write something like: layer1 = tf.keras.layers.Conv1D(filters=256, kernel_size=1)(layer0) Here filters corresponds to PyTorch's out_features (256); the number of input channels is inferred from the previous layer (layer0), so in_features never appears explicitly. Also note that Keras' Conv1D expects channels-last input, i.e. (batch, width, channels), whereas PyTorch's Conv1d expects (batch, channels, width), so you may need to transpose your input.
https://stackoverflow.com/questions/65191561/
how can I clear graphic card memory after training in pytorch?
I am dealing with PyTorch in Colab. While training, PyTorch consumes enormous memory. After training, I saved the model and loaded it into another notebook (note 2). In note 2, after loading the state_dict and everything, PyTorch consumes way less memory than in the training state. So I wonder whether 'useless' data is stored in graphics card memory while training (in my case, about 13 GB)... If so, how do I delete the useless data after training? P.S. I tried to delete variables used while training, but that wasn't big enough (about 2 GB).
This is to be expected while training. During the training process, the operations themselves will take up memory. For example, consider the following operation - a = np.random.rand(100, 500, 300) b = np.random.rand(200, 500, 300) c = (a[:, None, :, :] * b[None, :, :, :]).sum(-1).sum(-1) Here a and b take about 120 MB and 240 MB respectively, and c is tiny. However, if you check %memit (a[:, None, :, :] * b[None, :, :, :]).sum(-1).sum(-1) That's around 23 GB! The line itself takes up a lot of memory to actually do the operation, because the broadcasting creates a massive intermediate array of shape (100, 200, 500, 300). Such arrays are temporary and are automatically freed after the operation is over. So deleting some variables isn't going to do much for reducing the footprint. The way to get around this is to use memory-optimized operations. For example, doing np.tensordot(a, b, ((1, 2), (1, 2))) instead of multiplying by broadcasting leaves a much smaller memory footprint. So what you need to do is identify which operation in your code requires such a huge amount of memory and see if you can replace it with a more memory-efficient equivalent (which might not even be possible, depending on your specific use case).
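To identify which line spikes GPU memory in PyTorch, you can compare allocated and peak memory around a suspect operation; a minimal sketch mirroring the NumPy example above:

import torch

a = torch.rand(100, 500, 300, device='cuda')
b = torch.rand(200, 500, 300, device='cuda')
torch.cuda.reset_peak_memory_stats()
c = torch.tensordot(a, b, dims=([1, 2], [1, 2]))
print(torch.cuda.memory_allocated() / 1e9)      # GB currently held by tensors
print(torch.cuda.max_memory_allocated() / 1e9)  # peak GB, including temporaries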
https://stackoverflow.com/questions/65192180/
PyTorch LogSoftmax vs Softmax for CrossEntropyLoss
I understand that PyTorch's LogSoftmax function is basically just a more numerically stable way to compute log(softmax(x)). Softmax lets you convert the output from a Linear layer into a categorical probability distribution. The PyTorch documentation says that CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss() in one single class. Looking at NLLLoss, I'm still confused... Are there 2 logs being used? I think of negative log as the information content of an event (as in entropy). After a bit more looking, I think that NLLLoss assumes that you're actually passing in log-probabilities instead of just probabilities. Is this correct? It's kind of weird if so...
Yes, NLLLoss takes log-probabilities (log(softmax(x))) as input. Why? Because if you add an nn.LogSoftmax (or F.log_softmax) as the final layer of your model, you can easily get the probabilities using torch.exp(output), and in order to get the cross-entropy loss you can directly use nn.NLLLoss. Of course, log-softmax is more numerically stable, as you said. And there is only one log: it's in nn.LogSoftmax; there is no log in nn.NLLLoss. nn.CrossEntropyLoss() combines nn.LogSoftmax() (i.e. log(softmax(x))) and nn.NLLLoss() in one single class. Therefore, the output from the network that is passed into nn.CrossEntropyLoss needs to be the raw output of the network (called logits), not the output of the softmax function.
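A quick check of the equivalence:

import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # raw network output for 4 samples, 10 classes
target = torch.tensor([1, 0, 3, 9])

ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
print(torch.allclose(ce, nll))  # True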
https://stackoverflow.com/questions/65192475/
Why are the MNIST images 1x28x28 tensors?
I turned the MNIST images, which are 28x28 pixel images, into tensors with dataset = MNIST(root='data/', train=True, transform=transforms.ToTensor()) and when I run img_tensor, label = dataset[0] print(img_tensor.shape, label) it says the shape is torch.Size([1, 28, 28]). Why is it 1x28x28? What does the first dimension mean? And what is the point of 1x28x28 as opposed to 28x28?
An image seen as a tensor always has 3 dimensions: channels, width and height. 28 and 28 are the width and height, of course. The 1 in this case is the number of channels. So what's a channel? Every pixel's color is represented by three values: red, green and blue. For each color you have one color channel, so normally 3 (RGB). This makes a picture's dimensions (3, W, H). So why do you have a 1 there? Because the MNIST images are black and white, they don't need three different color channels to represent the final color; one channel is enough. Therefore, for black and white images the dimensions are (1, W, H). Here is a picture to visualize the channel separation: source: https://commons.wikimedia.org/wiki/File:RGB_channels_separation.png So you see, for black and white images you only need one channel. Normally you could ignore the 1 dimension, but PyTorch demands the channel dimension.
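If the extra dimension gets in the way, e.g. when plotting, you can drop it with squeeze (img_tensor as in the question):

img_tensor, label = dataset[0]
print(img_tensor.shape)             # torch.Size([1, 28, 28])
print(img_tensor.squeeze(0).shape)  # torch.Size([28, 28]), e.g. for plt.imshow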
https://stackoverflow.com/questions/65202011/
TypeError: backward() got an unexpected keyword argument 'grad_tensors' in pytorch
I have the following: w = torch.tensor([1.], requires_grad=True) x = torch.tensor([2.], requires_grad=True) a = torch.add(w, x) b = torch.add(w, 1) y0 = torch.mul(a, b) # y0 = (x+w) * (w+1) y1 = torch.add(a, b) # y1 = (x+w) + (w+1) loss = torch.cat([y0, y1], dim=0) # [y0, y1] weight = torch.tensor([1., 2.]) loss.backward(grad_tensors=weight) The above gives me TypeError: backward() got an unexpected keyword argument 'grad_tensors' I checked the documentation, and grad_tensors does appear as a parameter of backward. However, when I use loss.backward(gradient=weight) it works, even though gradient is not the parameter name I saw there. Any idea what is going on? My PyTorch version is 1.7.0. Thanks.
You are calling torch.Tensor.backward, not torch.autograd.backward; grad_tensors is a parameter of the latter. As for the difference between the two: torch.Tensor.backward internally calls torch.autograd.backward, which calculates the gradients of the given tensors w.r.t. graph leaves. torch.autograd.backward(self, gradient, retain_graph, create_graph) which corresponds to torch.autograd.backward(tensors: self, grad_tensors: gradient, retain_graph, create_graph) Thus, the two below are equivalent: loss.backward(gradient=weight) torch.autograd.backward(loss, weight)
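To verify with the example from the question (the weighted gradients work out to dw = 1*5 + 2*2 = 9 and dx = 1*2 + 2*1 = 4):

import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
b = torch.add(w, 1)
loss = torch.cat([torch.mul(a, b), torch.add(a, b)], dim=0)
weight = torch.tensor([1., 2.])
torch.autograd.backward(loss, grad_tensors=weight)  # the keyword is accepted here
print(w.grad, x.grad)  # tensor([9.]) tensor([4.])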
https://stackoverflow.com/questions/65204523/
LSTM Autoencoder problems
TLDR: Autoencoder underfits timeseries reconstruction and just predicts average value. Question Set-up: Here is a summary of my attempt at a sequence-to-sequence autoencoder. This image was taken from this paper: https://arxiv.org/pdf/1607.00148.pdf Encoder: Standard LSTM layer. Input sequence is encoded in the final hidden state. Decoder: LSTM Cell (I think!). Reconstruct the sequence one element at a time, starting with the last element x[N]. Decoder algorithm is as follows for a sequence of length N: Get Decoder initial hidden state hs[N]: Just use encoder final hidden state. Reconstruct last element in the sequence: x[N]= w.dot(hs[N]) + b. Same pattern for other elements: x[i]= w.dot(hs[i]) + b use x[i] and hs[i] as inputs to LSTMCell to get x[i-1] and hs[i-1] Minimum Working Example: Here is my implementation, starting with the encoder: class SeqEncoderLSTM(nn.Module): def __init__(self, n_features, latent_size): super(SeqEncoderLSTM, self).__init__() self.lstm = nn.LSTM( n_features, latent_size, batch_first=True) def forward(self, x): _, hs = self.lstm(x) return hs Decoder class: class SeqDecoderLSTM(nn.Module): def __init__(self, emb_size, n_features): super(SeqDecoderLSTM, self).__init__() self.cell = nn.LSTMCell(n_features, emb_size) self.dense = nn.Linear(emb_size, n_features) def forward(self, hs_0, seq_len): x = torch.tensor([]) # Final hidden and cell state from encoder hs_i, cs_i = hs_0 # reconstruct first element with encoder output x_i = self.dense(hs_i) x = torch.cat([x, x_i]) # reconstruct remaining elements for i in range(1, seq_len): hs_i, cs_i = self.cell(x_i, (hs_i, cs_i)) x_i = self.dense(hs_i) x = torch.cat([x, x_i]) return x Bringing the two together: class LSTMEncoderDecoder(nn.Module): def __init__(self, n_features, emb_size): super(LSTMEncoderDecoder, self).__init__() self.n_features = n_features self.hidden_size = emb_size self.encoder = SeqEncoderLSTM(n_features, emb_size) self.decoder = SeqDecoderLSTM(emb_size, n_features) def forward(self, x): seq_len = x.shape[1] hs = self.encoder(x) hs = tuple([h.squeeze(0) for h in hs]) out = self.decoder(hs, seq_len) return out.unsqueeze(0) And here's my training function: def train_encoder(model, epochs, trainload, testload=None, criterion=nn.MSELoss(), optimizer=optim.Adam, lr=1e-6, reverse=False): device = 'cuda' if torch.cuda.is_available() else 'cpu' print(f'Training model on {device}') model = model.to(device) opt = optimizer(model.parameters(), lr) train_loss = [] valid_loss = [] for e in tqdm(range(epochs)): running_tl = 0 running_vl = 0 for x in trainload: x = x.to(device).float() opt.zero_grad() x_hat = model(x) if reverse: x = torch.flip(x, [1]) loss = criterion(x_hat, x) loss.backward() opt.step() running_tl += loss.item() if testload is not None: model.eval() with torch.no_grad(): for x in testload: x = x.to(device).float() loss = criterion(model(x), x) running_vl += loss.item() valid_loss.append(running_vl / len(testload)) model.train() train_loss.append(running_tl / len(trainload)) return train_loss, valid_loss Data: Large dataset of events scraped from the news (ICEWS). Various categories exist that describe each event. I initially one-hot encoded these variables, expanding the data to 274 dimensions. However, in order to debug the model, I've cut it down to a single sequence that is 14 timesteps long and only contains 5 variables. 
Here is the sequence I'm trying to overfit: tensor([[0.5122, 0.0360, 0.7027, 0.0721, 0.1892], [0.5177, 0.0833, 0.6574, 0.1204, 0.1389], [0.4643, 0.0364, 0.6242, 0.1576, 0.1818], [0.4375, 0.0133, 0.5733, 0.1867, 0.2267], [0.4838, 0.0625, 0.6042, 0.1771, 0.1562], [0.4804, 0.0175, 0.6798, 0.1053, 0.1974], [0.5030, 0.0445, 0.6712, 0.1438, 0.1404], [0.4987, 0.0490, 0.6699, 0.1536, 0.1275], [0.4898, 0.0388, 0.6704, 0.1330, 0.1579], [0.4711, 0.0390, 0.5877, 0.1532, 0.2201], [0.4627, 0.0484, 0.5269, 0.1882, 0.2366], [0.5043, 0.0807, 0.6646, 0.1429, 0.1118], [0.4852, 0.0606, 0.6364, 0.1515, 0.1515], [0.5279, 0.0629, 0.6886, 0.1514, 0.0971]], dtype=torch.float64) And here is the custom Dataset class: class TimeseriesDataSet(Dataset): def __init__(self, data, window, n_features, overlap=0): super().__init__() if isinstance(data, (np.ndarray)): data = torch.tensor(data) elif isinstance(data, (pd.Series, pd.DataFrame)): data = torch.tensor(data.copy().to_numpy()) else: raise TypeError(f"Data should be ndarray, series or dataframe. Found {type(data)}.") self.n_features = n_features self.seqs = torch.split(data, window) def __len__(self): return len(self.seqs) def __getitem__(self, idx): try: return self.seqs[idx].view(-1, self.n_features) except TypeError: raise TypeError("Dataset only accepts integer index/slices, not lists/arrays.") Problem: The model only learns the average, no matter how complex I make the model or now long I train it. Predicted/Reconstruction: Actual: My research: This problem is identical to the one discussed in this question: LSTM autoencoder always returns the average of the input sequence The problem in that case ended up being that the objective function was averaging the target timeseries before calculating loss. This was due to some broadcasting errors because the author didn't have the right sized inputs to the objective function. In my case, I do not see this being the issue. I have checked and double checked that all of my dimensions/sizes line up. I am at a loss. Other Things I've Tried I've tried this with varied sequence lengths from 7 timesteps to 100 time steps. I've tried with varied number of variables in the time series. I've tried with univariate all the way to all 274 variables that the data contains. I've tried with various reduction parameters on the nn.MSELoss module. The paper calls for sum, but I've tried both sum and mean. No difference. The paper calls for reconstructing the sequence in reverse order (see graphic above). I have tried this method using the flipud on the original input (after training but before calculating loss). This makes no difference. I tried making the model more complex by adding an extra LSTM layer in the encoder. I've tried playing with the latent space. I've tried from 50% of the input number of features to 150%. I've tried overfitting a single sequence (provided in the Data section above). Question: What is causing my model to predict the average and how do I fix it?
Okay, after some debugging I think I know the reasons. TLDR You try to predict next timestep value instead of difference between current timestep and the previous one Your hidden_features number is too small making the model unable to fit even a single sample Analysis Code used Let's start with the code (model is the same): import seaborn as sns import matplotlib.pyplot as plt def get_data(subtract: bool = False): # (1, 14, 5) input_tensor = torch.tensor( [ [0.5122, 0.0360, 0.7027, 0.0721, 0.1892], [0.5177, 0.0833, 0.6574, 0.1204, 0.1389], [0.4643, 0.0364, 0.6242, 0.1576, 0.1818], [0.4375, 0.0133, 0.5733, 0.1867, 0.2267], [0.4838, 0.0625, 0.6042, 0.1771, 0.1562], [0.4804, 0.0175, 0.6798, 0.1053, 0.1974], [0.5030, 0.0445, 0.6712, 0.1438, 0.1404], [0.4987, 0.0490, 0.6699, 0.1536, 0.1275], [0.4898, 0.0388, 0.6704, 0.1330, 0.1579], [0.4711, 0.0390, 0.5877, 0.1532, 0.2201], [0.4627, 0.0484, 0.5269, 0.1882, 0.2366], [0.5043, 0.0807, 0.6646, 0.1429, 0.1118], [0.4852, 0.0606, 0.6364, 0.1515, 0.1515], [0.5279, 0.0629, 0.6886, 0.1514, 0.0971], ] ).unsqueeze(0) if subtract: initial_values = input_tensor[:, 0, :] input_tensor -= torch.roll(input_tensor, 1, 1) input_tensor[:, 0, :] = initial_values return input_tensor if __name__ == "__main__": torch.manual_seed(0) HIDDEN_SIZE = 10 SUBTRACT = False input_tensor = get_data(SUBTRACT) model = LSTMEncoderDecoder(input_tensor.shape[-1], HIDDEN_SIZE) optimizer = torch.optim.Adam(model.parameters()) criterion = torch.nn.MSELoss() for i in range(1000): outputs = model(input_tensor) loss = criterion(outputs, input_tensor) loss.backward() optimizer.step() optimizer.zero_grad() print(f"{i}: {loss}") if loss < 1e-4: break # Plotting sns.lineplot(data=outputs.detach().numpy().squeeze()) sns.lineplot(data=input_tensor.detach().numpy().squeeze()) plt.show() What it does: get_data either works on the data your provided if subtract=False or (if subtract=True) it subtracts value of the previous timestep from the current timestep Rest of the code optimizes the model until 1e-4 loss reached (so we can compare how model's capacity and it's increase helps and what happens when we use the difference of timesteps instead of timesteps) We will only vary HIDDEN_SIZE and SUBTRACT parameters! NO SUBTRACT, SMALL MODEL HIDDEN_SIZE=5 SUBTRACT=False In this case we get a straight line. Model is unable to fit and grasp the phenomena presented in the data (hence flat lines you mentioned). 1000 iterations limit reached SUBTRACT, SMALL MODEL HIDDEN_SIZE=5 SUBTRACT=True Targets are now far from flat lines, but model is unable to fit due to too small capacity. 1000 iterations limit reached NO SUBTRACT, LARGER MODEL HIDDEN_SIZE=100 SUBTRACT=False It got a lot better and our target was hit after 942 steps. No more flat lines, model capacity seems quite fine (for this single example!) SUBTRACT, LARGER MODEL HIDDEN_SIZE=100 SUBTRACT=True Although the graph does not look that pretty, we got to desired loss after only 215 iterations. Finally Usually use difference of timesteps instead of timesteps (or some other transformation, see here for more info about that). In other cases, neural network will try to simply... copy output from the previous step (as that's the easiest thing to do). Some minima will be found this way and going out of it will require more capacity. 
When you use the difference between timesteps, there is no way to "extrapolate" the trend from the previous timestep; the neural network has to learn how the function actually varies. Use a larger model (for the whole dataset you should try something like 300, I think), but you can simply tune that one. Don't use flipud. Use bidirectional LSTMs (a sketch follows below); this way you can get info from the forward and the backward pass of the LSTM (not to be confused with backprop!). This should also boost your score. Questions Okay, question 1: You are saying that for variable x in the time series, I should train the model to learn x[i] - x[i-1] rather than the value of x[i]? Am I correctly interpreting? Yes, exactly. The difference removes the urge of the neural network to base its predictions too much on the past timestep (by simply taking the last value and maybe changing it a little). Question 2: You said my calculations for zero bottleneck were incorrect. But, for example, let's say I'm using a simple dense network as an autoencoder. Getting the right bottleneck indeed depends on the data. But if you make the bottleneck the same size as the input, you get the identity function. Yes, assuming that there is no non-linearity involved, which makes the thing harder (see here for a similar case). In the case of LSTMs there are non-linearities; that's one point. Another one is that we are accumulating timesteps into a single encoder state. So essentially we would have to accumulate timestep identities into single hidden and cell states, which is highly unlikely. One last point: depending on the length of the sequence, LSTMs are prone to forgetting some of the least relevant information (that's what they were designed to do, not only to remember everything), which makes it even more unlikely. Is num_features * num_timesteps not a bottleneck of the same size as the input, and therefore shouldn't it facilitate the model learning the identity? It is, but it assumes you have num_timesteps for each data point, which is rarely the case; it might be here. About the identity and why it is hard to achieve with non-linearities, it was answered above. One last point about identity functions: if they were actually easy to learn, ResNet architectures would be unlikely to succeed. A network could converge to the identity and make "small fixes" to the output without them, which is not the case. I'm curious about the statement: "always use difference of timesteps instead of timesteps". It seems to have some normalizing effect by bringing all the features closer together, but I don't understand why this is key. Having a larger model seemed to be the solution, and the subtraction is just helping. The key here was, indeed, increasing model capacity. The subtraction trick really depends on the data. Let's imagine an extreme situation: We have 100 timesteps, a single feature The initial timestep value is 10000 Other timestep values vary by 1 at most What would the neural network do (what is easiest here)? It would probably discard this change of 1 or less as noise and just predict 10000 for all of them (especially if some regularization is in place), as being off by 1 in 10000 is not much. What if we subtract? The whole neural network target is then within a [0, 1] margin for each timestep instead of around 10000, hence it is more severe to be wrong. And yes, it is connected to normalization in some sense, come to think of it.
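For the bidirectional suggestion, here is a minimal sketch of how the encoder from the question could change (my own sketch under assumptions, not the answerer's code; note the decoder's emb_size would have to double, since the forward and backward direction states get concatenated):

import torch
from torch import nn

class BiSeqEncoderLSTM(nn.Module):
    def __init__(self, n_features, latent_size):
        super().__init__()
        self.lstm = nn.LSTM(n_features, latent_size,
                            batch_first=True, bidirectional=True)

    def forward(self, x):
        # for a 1-layer bidirectional LSTM, h and c have shape
        # (num_directions, batch, latent_size)
        _, (h, c) = self.lstm(x)
        # concatenate the two directions into single hidden/cell states
        return torch.cat([h[0], h[1]], dim=-1), torch.cat([c[0], c[1]], dim=-1)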
https://stackoverflow.com/questions/65205506/
PyTorch alternative for tf.data.experimental.sample_from_datasets
Suppose I have two datasets, dataset one with 100 items and dataset two with 5000 items. Now I want my model to see as many items from dataset one as from dataset two during training. In TensorFlow I can do: dataset = tf.data.experimental.sample_from_datasets( [dataset_one, dataset_two], weights=[50,1], seed=None ) Is there an alternative in PyTorch that does the same? I think this is not too difficult to implement by creating a custom dataset (not a working example): from torch.utils.data import Dataset class SampleDataset(Dataset): def __init__(self, datasets, weights): self.datasets = datasets self.weights = weights def __len__(self): return sum([len(dataset) for dataset in self.datasets]) def __getitem__(self, idx): # sample a random number and based on that sample an item return self.datasets[dataset_idx][sample_idx] However, this seems like quite a common need. Is there already something like this available?
I don't think there is a direct equivalent in PyTorch. However, there's a class called torch.utils.data.WeightedRandomSampler which samples indices based on a list of probabilities. You can use this in combination with torch.utils.data.ConcatDataset and torch.utils.data.DataLoader's sampler option. I'll give an example with two datasets: SetA has 500 elements and SetB has only 10. First, you can create a concatenation of all your datasets with ConcatDataset: ds = ConcatDataset([SetA(), SetB()]) Then, we need to sample it. The problem is, you can't just give WeightedRandomSampler [50, 1] as you did in TensorFlow. As a workaround, you can create a list of probabilities of the same length as the size of the total dataset. The corresponding probability list for this example would be: dist = np.array([1/51]*500 + [50/51]*10) Essentially, the first 500 indices (i.e. indices 'pointing' to SetA) will each have a probability of 1/51 of being chosen, while the following 10 indices (i.e. indices in SetB) will each have a probability of 50/51 (i.e. much more likely to be sampled since there are fewer elements in SetB; this is the desired result!). We can create a sampler from that distribution: sampler = WeightedRandomSampler(dist, 10) where 10 is the number of sampled elements. I would put the size of the smallest dataset, otherwise you would likely be going over the same datapoints multiple times during the same epoch... Finally, we just have to instantiate the dataloader with our dataset and sampler: dl = DataLoader(ds, sampler=sampler) To summarize: ds = ConcatDataset([SetA(), SetB()]) dist = np.array([1/51]*500 + [50/51]*10) sampler = WeightedRandomSampler(dist, 10) dl = DataLoader(ds, sampler=sampler) Edit, for any number of datasets: sets = [SetA(), SetB(), SetC()] ds = ConcatDataset(sets) dist = np.concatenate([[(len(ds) - len(s))/len(ds)]*len(s) for s in sets]) sampler = WeightedRandomSampler(weights=dist, num_samples=min([len(s) for s in sets])) dl = DataLoader(ds, sampler=sampler)
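A quick sanity check of the balance, using stand-in datasets (dummy TensorDatasets, just for illustration):

import numpy as np
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, WeightedRandomSampler

ds = ConcatDataset([TensorDataset(torch.zeros(500)),   # stand-in for SetA
                    TensorDataset(torch.ones(10))])    # stand-in for SetB
dist = np.array([1/51]*500 + [50/51]*10)
dl = DataLoader(ds, sampler=WeightedRandomSampler(dist, 10))

counts = {0.0: 0, 1.0: 0}
for _ in range(100):        # 100 'epochs' of 10 samples each
    for (x,) in dl:
        counts[x.item()] += 1
print(counts)  # roughly 500/500, despite the 50:1 size imbalance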
https://stackoverflow.com/questions/65205801/
How to use the BitShift operator in Pytorch?
Does anyone have an example of how to use the BitShift operator in PyTorch?
The bitwise shift operators perform element-wise operations. They work the same way as in Python and NumPy, i.e. they shift the bits of an integer to the left or right; << and >> denote the left and right shift respectively. x = torch.tensor([16, 4, 1]) y = torch.tensor([1, 2, 3]) z = x << y print(z) tensor([32, 16, 8]) This is equivalent to 16 << 1 (np.left_shift(16, 1)), 4 << 2, and 1 << 3. In the terms used by the BitShift operator spec: for each input element, if the attribute "direction" is "RIGHT", the operator moves its binary representation toward the right side, so that the input value is effectively decreased. If the attribute "direction" is "LEFT", the bits of the binary representation move toward the left side, which results in an increase of the actual value. The operator supports multidirectional (i.e., NumPy-style) broadcasting.
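And the right-shift direction, which decreases values:

import torch

x = torch.tensor([16, 4, 1])
y = torch.tensor([1, 2, 3])
print(x >> y)  # tensor([8, 1, 0]), i.e. 16 >> 1, 4 >> 2, 1 >> 3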
https://stackoverflow.com/questions/65208217/
Creating new channels from nearby pixels in pytorch
Given a batch image tensor like B x C x W x H (batchSize, channels, width, height), I would like to create a new tensor in which the new channels are the channels from nearby pixels (padded with 0s). For instance, if I choose the nearby pixel size to be 3 x 3 (like a 3 x 3 filter), then there are 9 total nearby pixels and the final tensor size would be B x (9 * C) x W x H. Any recommendations on doing this, or do I just need to go with the brute-force approach through iteration?
If you want to cut the edges short (img being your image as a NumPy array; skimage's view_as_windows operates on arrays, so convert a tensor with img.numpy() first): from skimage.util import view_as_windows B,C,W,H = img.shape img_ = view_as_windows(img,(1,1,3,3)).reshape(B,C,W-2,H-2,-1).transpose(0,1,4,2,3).reshape(B,C*9,W-2,H-2) And if you want to pad with 0 instead: from skimage.util import view_as_windows img = np.pad(img,((0,0),(0,0),(1,1),(1,1))) B,C,W,H = img.shape img_ = view_as_windows(img,(1,1,3,3)).reshape(B,C,W-2,H-2,-1).transpose(0,1,4,2,3).reshape(B,C*9,W-2,H-2)
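A pure-PyTorch alternative is a sketch worth considering (assuming img is a float tensor of shape (B, C, W, H); the ordering of the 9 neighbour channels differs from the snippet above, but each output position still stacks its 3x3 neighbourhood, zero-padded at the borders):

import torch.nn.functional as F

B, C, W, H = img.shape
patches = F.unfold(img, kernel_size=3, padding=1)  # (B, C*9, W*H)
patches = patches.view(B, C * 9, W, H)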
https://stackoverflow.com/questions/65208628/
Using an ellipsis for the middle dimensions of a PyTorch tensor
Suppose I have a torch.Tensor t of shape (8, 3, 32, 32). I want to index along the first and last 2 dimensions only. In my use case, t is a batch of 8 images, of which I want to modify a patch. Suppose the patch is given by indices idx_last = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]]). I also have idx1 = torch.arange(4): I want the patch for the first 4 images. The following does not work: t[idx1, ..., idx_last] Is there any way to do this?
I found one workaround, although it may not be the most efficient. In the case where idx1 is 1-dimensional (datapoint selection), and idx_last is multidimensional, the following gets the wanted result: t[(idx1, ...) + tuple(idx_last.T)] Better solutions are definitely welcome.
https://stackoverflow.com/questions/65210040/
Skorch RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same
I'm trying to develop an image segmentation model. In the code below I keep hitting a RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same. I'm not sure why, as I've tried to load both my data and my UNet model to the GPU using .cuda() (although not the skorch model -- not sure how to do that). I'm using a library for active learning, modAL, which wraps skorch.

from modAL.models import ActiveLearner
import numpy as np
import torch
from torch import nn
from torch import Tensor
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from skorch.net import NeuralNet
from modAL.models import ActiveLearner
from modAL.uncertainty import classifier_uncertainty, classifier_margin
from modAL.utils.combination import make_linear_combination, make_product
from modAL.utils.selection import multi_argmax
from modAL.uncertainty import uncertainty_sampling
from model import UNet
from skorch.net import NeuralNet
from skorch.helper import predefined_split
from torch.optim import SGD
import cv2

# Map style dataset
class ImagesDataset(Dataset):
    """Constructs dataset of satellite images + masks"""
    def __init__(self, image_paths):
        super().__init__()
        self.image_paths = image_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        print("idx:", idx)
        sample_dir = self.image_paths[idx]
        img_path = sample_dir + "/images/" + Path(sample_dir).name + '.png'
        mask_path = sample_dir + '/mask.png'
        img, mask = cv2.imread(img_path), cv2.imread(mask_path)
        print("shape of img", img.shape)
        return img, mask

# turn data into dataset
train_ds = ImagesDataset(train_dirs)
val_ds = ImagesDataset(valid_dirs)

train_loader = torch.utils.data.DataLoader(train_ds, batch_size=3, shuffle=True, pin_memory=True)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=1, shuffle=True, pin_memory=True)

# make sure data loaded in cuda for train, validation
for i, (tr, val) in enumerate(train_loader):
    tr, val = tr.cuda(), val.cuda()

for i, (tr2, val2) in enumerate(val_loader):
    tr2, val2 = tr2.cuda(), val2.cuda()

X, y = next(iter(train_loader))
X_train = np.array(X.reshape(3,3,1024,1024))
y_train = np.array(y.reshape(3,3,1024,1024))

X2, y2 = next(iter(val_loader))
X_test = np.array(X2.reshape(1,3,1024,1024))
y_test = np.array(y2.reshape(1,3,1024,1024))

module = UNet(pretrained=True)

if torch.cuda.is_available():
    module = module.cuda()

# create the classifier
net = NeuralNet(
    module,
    criterion=torch.nn.NLLLoss,
    batch_size=32,
    max_epochs=20,
    optimizer=SGD,
    optimizer__momentum=0.9,
    iterator_train__shuffle=True,
    iterator_train__num_workers=4,
    iterator_valid__shuffle=False,
    iterator_valid__num_workers=4,
    train_split=predefined_split(val_ds),
    device='cuda',
)

# assemble initial data
n_initial = 1
initial_idx = np.random.choice(range(len(X_train)), size=n_initial, replace=False)
X_initial = X_train[initial_idx]
y_initial = y_train[initial_idx]

# generate the pool, remove the initial data from the training dataset
X_pool = np.delete(X_train, initial_idx, axis=0)
y_pool = np.delete(y_train, initial_idx, axis=0)

# train the activelearner
# shape of 4D matrix is ('batch', 'channel', 'width', 'height')
learner = ActiveLearner(
    estimator= net,
    X_training=X_initial, y_training=y_initial,
)

The full error trace is:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-83-0af6007b6b72> in <module>
      8 learner = ActiveLearner(
      9     estimator= net,
---> 10     X_training=X_initial, y_training=y_initial,
     11     # X_training=X_initial, y_training=y_initial,
     12 )

~/.local/lib/python3.7/site-packages/modAL/models/learners.py in __init__(self, estimator, query_strategy, X_training, y_training, bootstrap_init, on_transformed, **fit_kwargs)
     80                  ) -> None:
     81         super().__init__(estimator, query_strategy,
---> 82                          X_training, y_training, bootstrap_init, on_transformed, **fit_kwargs)
     83 
     84     def teach(self, X: modALinput, y: modALinput, bootstrap: bool = False, only_new: bool = False, **fit_kwargs) -> None:

~/.local/lib/python3.7/site-packages/modAL/models/base.py in __init__(self, estimator, query_strategy, X_training, y_training, bootstrap_init, on_transformed, force_all_finite, **fit_kwargs)
     70         self.y_training = y_training
     71         if X_training is not None:
---> 72             self._fit_to_known(bootstrap=bootstrap_init, **fit_kwargs)
     73             self.Xt_training = self.transform_without_estimating(self.X_training) if self.on_transformed else None
     74 

~/.local/lib/python3.7/site-packages/modAL/models/base.py in _fit_to_known(self, bootstrap, **fit_kwargs)
    160         """
    161         if not bootstrap:
--> 162             self.estimator.fit(self.X_training, self.y_training, **fit_kwargs)
    163         else:
    164             n_instances = self.X_training.shape[0]

~/.local/lib/python3.7/site-packages/skorch/net.py in fit(self, X, y, **fit_params)
    901             self.initialize()
    902 
--> 903         self.partial_fit(X, y, **fit_params)
    904         return self
    905 

~/.local/lib/python3.7/site-packages/skorch/net.py in partial_fit(self, X, y, classes, **fit_params)
    860         self.notify('on_train_begin', X=X, y=y)
    861         try:
--> 862             self.fit_loop(X, y, **fit_params)
    863         except KeyboardInterrupt:
    864             pass

~/.local/lib/python3.7/site-packages/skorch/net.py in fit_loop(self, X, y, epochs, **fit_params)
    774 
    775             self.run_single_epoch(dataset_train, training=True, prefix="train",
--> 776                                   step_fn=self.train_step, **fit_params)
    777 
    778             if dataset_valid is not None:

~/.local/lib/python3.7/site-packages/skorch/net.py in run_single_epoch(self, dataset, training, prefix, step_fn, **fit_params)
    810             yi_res = yi if not is_placeholder_y else None
    811             self.notify("on_batch_begin", X=Xi, y=yi_res, training=training)
--> 812             step = step_fn(Xi, yi, **fit_params)
    813             self.history.record_batch(prefix + "_loss", step["loss"].item())
    814             self.history.record_batch(prefix + "_batch_size", get_len(Xi))

~/.local/lib/python3.7/site-packages/skorch/net.py in train_step(self, Xi, yi, **fit_params)
    707             return step['loss']
    708 
--> 709         self.optimizer_.step(step_fn)
    710         return step_accumulator.get_step()
    711 

~/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     24         def decorate_context(*args, **kwargs):
     25             with self.__class__():
---> 26                 return func(*args, **kwargs)
     27         return cast(F, decorate_context)
     28 

~/.local/lib/python3.7/site-packages/torch/optim/sgd.py in step(self, closure)
     84         if closure is not None:
     85             with torch.enable_grad():
---> 86                 loss = closure()
     87 
     88         for group in self.param_groups:

~/.local/lib/python3.7/site-packages/skorch/net.py in step_fn()
    703         def step_fn():
    704             self.optimizer_.zero_grad()
--> 705             step = self.train_step_single(Xi, yi, **fit_params)
    706             step_accumulator.store_step(step)
    707             return step['loss']

~/.local/lib/python3.7/site-packages/skorch/net.py in train_step_single(self, Xi, yi, **fit_params)
    643         """
    644         self.module_.train()
--> 645         y_pred = self.infer(Xi, **fit_params)
    646         loss = self.get_loss(y_pred, yi, X=Xi, training=True)
    647         loss.backward()

~/.local/lib/python3.7/site-packages/skorch/net.py in infer(self, x, **fit_params)
   1046             x_dict = self._merge_x_and_fit_params(x, fit_params)
   1047             return self.module_(**x_dict)
-> 1048         return self.module_(x, **fit_params)
   1049 
   1050     def _get_predict_nonlinearity(self):

~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/al/model.py in forward(self, x)
     51 
     52     def forward(self, x):
---> 53         conv1 = self.conv1(x)
     54         conv2 = self.conv2(conv1)
     55         conv3 = self.conv3(conv2)

~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/.local/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
    115     def forward(self, input):
    116         for module in self:
--> 117             input = module(input)
    118         return input
    119 

~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/.local/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    421 
    422     def forward(self, input: Tensor) -> Tensor:
--> 423         return self._conv_forward(input, self.weight)
    424 
    425 class Conv3d(_ConvNd):

~/.local/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
    418                             _pair(0), self.dilation, self.groups)
    419         return F.conv2d(input, weight, self.bias, self.stride,
--> 420                         self.padding, self.dilation, self.groups)
    421 
    422     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.cuda.FloatTensor) should be the same

If anyone could help that would be so appreciated! I've been really stuck despite searching all over -- casting my UNet model to floats has not helped, and I think I've called .cuda() where I'm supposed to. Specific things I've tried:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
loading entries of my DataLoader to CUDA
adding pin_memory to my DataLoader
loading my skorch NeuralNetwork to CUDA as stated here: Pytorch, INPUT (normal tensor) and WEIGHT (cuda tensor) mismatch (which didn't work because it's not a function in skorch)
Casting my data to float (https://discuss.pytorch.org/t/input-type-torch-cuda-doubletensor-and-weight-type-torch-cuda-floattensor-should-be-the-same/22704)
cv2.imread returns arrays with the np.uint8 dtype, which becomes PyTorch's byte type. The byte type cannot be used together with the float type (which is almost certainly what your model's weights use). You need to convert the byte data to float tensors, which you can do by modifying the dataset:

import torchvision.transforms as transforms

class ImagesDataset(Dataset):
    """Constructs dataset of satellite images + masks"""
    def __init__(self, image_paths):
        super().__init__()
        self.image_paths = image_paths
        self.transform = transforms.Compose([transforms.ToTensor()])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        print("idx:", idx)
        sample_dir = self.image_paths[idx]
        img_path = sample_dir + "/images/" + Path(sample_dir).name + '.png'
        mask_path = sample_dir + '/mask.png'
        img, mask = cv2.imread(img_path), cv2.imread(mask_path)
        # ToTensor converts the HWC uint8 arrays to CHW float32 tensors scaled to [0, 1]
        img = self.transform(img)
        mask = self.transform(mask)
        print("shape of img", img.shape)
        return img, mask
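If you'd rather not add the torchvision dependency, the same conversion can be done by hand. Below is a minimal sketch (the to_float_tensor helper is hypothetical, not part of the original code), assuming 3-channel HWC uint8 images as returned by cv2.imread:

import cv2
import torch

def to_float_tensor(img):
    """Mirror transforms.ToTensor(): HWC uint8 array -> CHW float32 tensor in [0, 1]."""
    t = torch.from_numpy(img)            # dtype torch.uint8, shape (H, W, C)
    t = t.permute(2, 0, 1).contiguous()  # reorder to (C, H, W)
    return t.float().div(255.0)          # cast byte -> float32 and rescale

# hypothetical usage inside __getitem__:
# img = to_float_tensor(cv2.imread(img_path))
# mask = to_float_tensor(cv2.imread(mask_path))

Either way, the essential point is that the tensors reaching the conv layers must have a float dtype matching the model's weights.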
https://stackoverflow.com/questions/65210442/
How to concatenate 2 pytorch models and make the first one non-trainable in PyTorch
I have two networks, which I need to concatenate for my full model. However, my first model is pre-trained and I need to make it non-trainable when training the full model. How can I achieve this in PyTorch? I am able to concatenate two models using this answer:

class MyModelA(nn.Module):
    def __init__(self):
        super(MyModelA, self).__init__()
        self.fc1 = nn.Linear(10, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x

class MyModelB(nn.Module):
    def __init__(self):
        super(MyModelB, self).__init__()
        self.fc1 = nn.Linear(20, 2)

    def forward(self, x):
        x = self.fc1(x)
        return x

class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB

    def forward(self, x):
        x1 = self.modelA(x)
        x2 = self.modelB(x1)
        return x2

# Create models and load state_dicts
modelA = MyModelA()
modelB = MyModelB()

# Load state dicts
modelA.load_state_dict(torch.load(PATH))

model = MyEnsemble(modelA, modelB)
x = torch.randn(1, 10)
output = model(x)

Basically, I want to load the pre-trained modelA and make it non-trainable when training the ensemble model.
You can freeze all parameters of the model you don't want to train by setting requires_grad to False, like this:

for param in modelA.parameters():
    param.requires_grad = False

This should work for you. Another way is to handle this in your train loop:

modelA = MyModelA()
modelB = MyModelB()

criterionB = nn.MSELoss()
optimizerB = torch.optim.Adam(modelB.parameters(), lr=0.001)

modelA.eval()   # keep the frozen model in inference mode
modelB.train()

for epoch in range(epochs):
    for samples, targets in dataloader:
        optimizerB.zero_grad()
        with torch.no_grad():      # no gradients are computed through modelA
            x = modelA(samples)
        predictions = modelB(x)    # feed modelA's output into modelB
        loss = criterionB(predictions, targets)
        loss.backward()
        optimizerB.step()

So you pass the output of modelA to modelB, but you only optimize modelB.
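To combine this with the MyEnsemble class from the question, one reasonable pattern (a sketch, not the only way to do it) is to freeze modelA up front and hand the optimizer only the parameters that still require gradients:

modelA = MyModelA()
modelB = MyModelB()
modelA.load_state_dict(torch.load(PATH))

# freeze the pre-trained part
for param in modelA.parameters():
    param.requires_grad = False
modelA.eval()  # also fixes dropout / batch-norm behavior, if the model has any

model = MyEnsemble(modelA, modelB)

# the optimizer never sees the frozen weights
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.001)

Filtering on requires_grad is functionally equivalent to passing modelB.parameters() directly here, but it keeps working unchanged if you later unfreeze parts of modelA.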
https://stackoverflow.com/questions/65216411/