Does PyTorch loss() and backpropagation understand lambda layers?
I've been working with a resnet56 model from the code provided here: https://github.com/akamaster/pytorch_resnet_cifar10/blob/master/resnet.py. I noticed that the implementation is different from many of the other available ResNet examples online, and I was wondering if PyTorch's backpropagation algorithm using loss() can account for the lambda layer and shortcut in the code provided. If that is the case, can anyone provide insight into how PyTorch is able to interpret the lambda layer for backpropagation (i.e. how does PyTorch know how to differentiate with respect to the layer's operations)?

P.S. I also had to modify the code to fit my own use case, and it seems like my own implementation with option == 'A' does not produce great results. This may simply be because option == 'B', which uses convolutional layers instead of padding, is better for my data.

```python
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
    if option == 'A':
        top = int((self.expansion * planes - in_planes) / 2)
        bot = (self.expansion * planes - in_planes) - top
        self.shortcut = LambdaLayer(
            lambda x: F.pad(x[:, :, ::stride, ::stride],
                            (0, 0, 0, 0, top, bot), "constant", 0))
```
"I was wondering if PyTorch's backpropagation algorithm using loss() can account for the lambda layer and shortcut in the code provided." PyTorch has no problem with backpropagating through lambda functions. Your LambdaLayer is just defining the forward pass of the Module as the evaluation of the lambda function, so your question boils down to whether PyTorch can backpropagate through lambda functions. "If that is the case, can anyone provide insight into how PyTorch is able to interpret the lambda layer for backpropagation (i.e. how does PyTorch know how to differentiate with respect to the layer's operations)?" The lambda function performs the torch.nn.functional.Pad function on x, which we can packpropagate through because it is has a defined backwards() function. PyTorch handles lambda functions the same way an autodiff tool like PyTorch handles any function: it breaks it up into primitive operations, and uses the differentiation rules for each primitive operation to build up the derivative of the entire computation.
https://stackoverflow.com/questions/59815491/
What does Pytorch's default padding in Conv2D signify?
The PyTorch documentation for MaxPool2d states:

padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0

Does it mean that the default value for padding is 0, or that there will be no padding?

In my code:

```python
nn.Conv2d(input_channels, output_channels, kernel_size, stride)
```

I didn't pass any padding parameter, and when I printed the layer it shows the following:

```
(0): Conv2d(4, 8, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2))
```

So I'm assuming it by default adds padding (with value 0) to match the input size. Now, if that's the case, how can I turn it off and/or set a different padding value?
Does it mean that the default value for padding is 0, or that there will be no padding?

It means that there will be no padding at all (because the parameter padding specifies the size of the padding for each dimension and by default it is padding=0, i.e. (0, 0)).

```python
>>> conv = torch.nn.Conv2d(4, 8, kernel_size=(3, 3), stride=(1, 1))
>>> conv.padding
(0, 0)
```

The convolution layer is agnostic of the input height and width; it only expects the input to have 4 channels (in this particular example). The output height and width are computed after you pass the input to it, and if the image size is smaller than the kernel size, then it will throw an error -- it will not pad the input automatically to match the size. It would help if you posted the full code to see where the padding comes from.
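A quick sketch (not from the original answer) confirming the default behavior; the shapes follow the usual conv output formula:

```python
import torch

conv = torch.nn.Conv2d(4, 8, kernel_size=3, stride=1)  # padding defaults to 0
x = torch.randn(1, 4, 10, 10)
print(conv(x).shape)    # torch.Size([1, 8, 8, 8]): 10 - 3 + 1 = 8, so no padding was added

padded = torch.nn.Conv2d(4, 8, kernel_size=3, stride=1, padding=1)
print(padded(x).shape)  # torch.Size([1, 8, 10, 10]): padding=1 preserves the spatial size
```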
https://stackoverflow.com/questions/59818595/
Constant loss during LSTM training - PyTorch
I'm trying to implement an LSTM network for predicting the next word in a sentence. This is my first time building a neural network and I'm confused by all the information I found on the Internet. I'm trying to use the following architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class WordLSTM(nn.Module):
    def __init__(self, vocabulary_size, embedding_dim, hidden_dim):
        super().__init__()
        # Word embeddings
        self.encoder = nn.Embedding(vocabulary_size, embedding_dim)
        # LSTM input dim is embedding_dim, output dim is hidden_dim
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        # Linear layer to map hidden states to vocabulary space
        self.decoder = nn.Linear(hidden_dim, vocabulary_size)

    def forward(self, sentence):
        encoded = self.encoder(sentence)
        output, _ = self.lstm(encoded.view(len(sentence), 1, -1))
        decoded = self.decoder(output)
        word_scores = F.softmax(decoded, dim=1)
        return word_scores[-1].view(1, -1)
```

I've created a dictionary with all the sentences from my dataset, and each word is encoded with its respective index from the dictionary. Each sentence is followed by an encoded next word (target vector). Here's a bunch of training examples that I'm trying to use:

```
[tensor([39]), tensor([13698])],
[tensor([39, 13698]), tensor([11907])],
[tensor([39, 13698, 11907]), tensor([70])]
```

I'm passing one sentence at a time during training, so my batch size is always 1.

```python
NUM_EPOCHS = 100
LEARNING_RATE = 0.0005

rnn = WordLSTM(vocab_size, 64, 32)
optimizer = optim.SGD(rnn.parameters(), lr=LEARNING_RATE)

for epoch in range(NUM_EPOCHS):
    training_example = generate_random_training_example(training_ds)
    optimizer.zero_grad()

    for sentence, next_word in training_example:
        output = rnn(sentence)
        loss = F.cross_entropy(output, next_word)
        loss.backward()
        optimizer.step()

    print(f"Epoch: {epoch}/{NUM_EPOCHS} Loss: {loss:.4f}")
```

However, when I start the training, the loss does not change with time:

```
Epoch: 0/100 Loss: 10.3929
Epoch: 1/100 Loss: 10.3929
Epoch: 2/100 Loss: 10.3929
Epoch: 3/100 Loss: 10.3929
Epoch: 4/100 Loss: 10.3929
Epoch: 5/100 Loss: 10.3929
Epoch: 6/100 Loss: 10.3929
```

I've already tried placing optimizer.zero_grad() and optimizer.step() in different places, but that didn't help either. What could be the problem in this case? Am I calculating the loss in the wrong way, or do I pass the tensors in the wrong format?
Delete F.softmax — you are doing log_softmax(softmax(x)). From the CrossEntropyLoss docs:

This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.

```python
import torch as t

class Net(t.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.emb = t.nn.Embedding(100, 8)
        self.lstm = t.nn.LSTM(8, 16, batch_first=True)
        self.linear = t.nn.Linear(16, 100)

    def forward(self, x):
        x = self.emb(x)
        x, _ = self.lstm(x)
        x = self.linear(x[:, -1])
        # x = t.nn.Softmax(dim=1)(x)
        return x

t.manual_seed(0)
net = Net()
batch_size = 1
X = t.LongTensor(batch_size, 5).random_(0, 100)
Y = t.LongTensor(batch_size).random_(0, 100)

optimizer = t.optim.Adam(net.parameters())
criterion = t.nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    output = net(X)
    loss = criterion(output, Y)
    loss.backward()
    optimizer.step()
    print(loss.item())
```

Output:

```
4.401515960693359
4.389760494232178
4.377873420715332
4.365848541259766
4.353675365447998
4.341339588165283
4.328824520111084
4.316114902496338
4.303196430206299
4.2900567054748535
```

With t.nn.Softmax uncommented:

```
4.602912902832031
4.6027679443359375
4.602619171142578
4.6024675369262695
4.602311611175537
4.602152347564697
4.601987361907959
4.601818084716797
4.6016435623168945
4.601463794708252
```

Use softmax during evaluation:

```python
net.eval()
t.nn.Softmax(dim=1)(net(X[0].view(1, -1)))
```

```
tensor([[0.0088, 0.0121, 0.0098, 0.0072, 0.0085, 0.0083, 0.0083, 0.0108,
         0.0127, 0.0090, 0.0094, 0.0082, 0.0099, 0.0115, 0.0094, 0.0107,
         0.0081, 0.0096, 0.0087, 0.0131, 0.0129, 0.0127, 0.0118, 0.0107,
         0.0087, 0.0073, 0.0114, 0.0076, 0.0103, 0.0112, 0.0104, 0.0077,
         0.0116, 0.0091, 0.0091, 0.0104, 0.0106, 0.0094, 0.0116, 0.0091,
         0.0117, 0.0118, 0.0106, 0.0113, 0.0083, 0.0091, 0.0076, 0.0089,
         0.0076, 0.0120, 0.0107, 0.0139, 0.0097, 0.0124, 0.0096, 0.0097,
         0.0104, 0.0128, 0.0084, 0.0119, 0.0096, 0.0100, 0.0073, 0.0099,
         0.0086, 0.0090, 0.0089, 0.0098, 0.0102, 0.0086, 0.0115, 0.0110,
         0.0078, 0.0097, 0.0115, 0.0102, 0.0103, 0.0107, 0.0095, 0.0083,
         0.0090, 0.0120, 0.0085, 0.0113, 0.0128, 0.0074, 0.0096, 0.0123,
         0.0106, 0.0105, 0.0101, 0.0112, 0.0086, 0.0105, 0.0121, 0.0103,
         0.0075, 0.0098, 0.0082, 0.0093]], grad_fn=<SoftmaxBackward>)
```
https://stackoverflow.com/questions/59818727/
Visualizing models, data, and training with tensorboard in pytorch
Following this tutorial to visualize images in TensorBoard using torch.utils.tensorboard, I got an error:

```python
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
writer.add_embedding(features, metadata=class_labels, label_img=images.unsqueeze(1))
writer.close()
```

Error:

```
AttributeError: module 'tensorflow_core._api.v2.io.gfile' has no attribute 'get_filesystem'
```
Try adding:

```python
writer = SummaryWriter(path)
img_grid = torchvision.utils.make_grid(torch.FloatTensor(imgs))  # imgs is a batch of images with shape BxCxHxW
writer.add_image('images', img_grid)
```

Source: https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html#writing-to-tensorboard
https://stackoverflow.com/questions/59824811/
Confused about calculation of convolutional layer shapes
I am new to this forum and I have started studying the theory of CNNs. It is probably a stupid question, but I am confused about the calculation of CNN output shapes. I am following a course on Udacity, and in one of the tutorials they provide this CNN architecture:

```python
import torch.nn as nn
import torch.nn.functional as F

# define the CNN architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # convolutional layer (sees 32x32x3 image tensor)
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        # convolutional layer (sees 16x16x16 tensor)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        # convolutional layer (sees 8x8x32 tensor)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        # max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        # linear layer (64 * 4 * 4 -> 500)
        self.fc1 = nn.Linear(64 * 4 * 4, 500)
        # linear layer (500 -> 10)
        self.fc2 = nn.Linear(500, 10)
        # dropout layer (p=0.25)
        self.dropout = nn.Dropout(0.25)
```

Could you please help me understand how they calculate the outputs of the CNN layers? (The starting shape of the images is 32x32x3.) More specifically, how did they end up with this:

```python
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
```

Thanks a lot
The code omits the definition of the forward pass, but one can guess from the comments that there is a 2x2 pooling after each conv layer. Each pooling step halves the spatial size, so the 32x32 images become 16x16 after conv1 (+ 2x2 pooling), 8x8 after conv2 (+ 2x2 pooling), and 4x4 after conv3 (+ 2x2 pooling). Since conv3 has 64 filters, it outputs 64 feature maps of size 4x4. Then fc1 maps this tensor to a fully connected layer of size 500, which is exactly what is defined by the line self.fc1 = nn.Linear(64 * 4 * 4, 500).
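A plausible forward pass consistent with those comments might look like this (a sketch; the actual tutorial code may differ):

```python
def forward(self, x):
    # each conv keeps the spatial size (padding=1); each pool halves it
    x = self.pool(F.relu(self.conv1(x)))  # 32x32x3  -> 16x16x16
    x = self.pool(F.relu(self.conv2(x)))  # 16x16x16 -> 8x8x32
    x = self.pool(F.relu(self.conv3(x)))  # 8x8x32   -> 4x4x64
    x = x.view(-1, 64 * 4 * 4)            # flatten for the linear layers
    x = self.dropout(F.relu(self.fc1(x)))
    x = self.fc2(x)
    return x
```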
https://stackoverflow.com/questions/59826152/
neighbours of a cell in matrix pytorch
I am trying to get the neighbours of each cell of a matrix in PyTorch using the code below. It works correctly but it is very time-consuming. Do you have any suggestions to make it faster?

```python
def neighbour(x):
    result = F.pad(input=x, pad=(1, 1, 1, 1), mode='constant', value=0)
    for m in range(1, x.size(0) + 1):
        for n in range(1, x.size(1) + 1):
            y = torch.Tensor([result[m][n],
                              result[m-1][n-1], result[m-1][n], result[m-1][n+1],
                              result[m][n-1], result[m][n+1],
                              result[m+1][n-1], result[m+1][n], result[m+1][n+1]])
            x[m-1][n-1] = y.mean()
    return x
```
If you are only after the mean of the 9 elements centered at each pixel, then your best option would be to use a 2D convolution with a constant 3x3 filter:

```python
import torch
import torch.nn.functional as nnf

def mean_filter(x_bchw):
    """
    Calculate the mean of each 3x3 neighborhood.
    input:
      - x_bchw: input tensor of dimensions batch-channel-height-width
    output:
      - y_bchw: each element in y is the average of the 9 corresponding elements in x_bchw
    """
    # define the filter
    box = torch.ones((3, 3), dtype=x_bchw.dtype, device=x_bchw.device, requires_grad=False)
    box = box / box.sum()
    box = box[None, None, ...].repeat(x_bchw.size(1), 1, 1, 1)
    # use grouped convolution - so each channel is averaged separately.
    y_bchw = nnf.conv2d(x_bchw, box, padding=1, groups=x_bchw.size(1))
    return y_bchw
```

However, if you want to apply a more elaborate function over each neighborhood, you may want to use nn.Unfold. This operation converts each 3x3 (or whatever rectangular neighborhood you define) window to a vector. Once you have all the vectors you may apply your function to them. See this answer for more details on unfold and fold.
https://stackoverflow.com/questions/59831211/
Taking the maximum values of each row in a tensor [PyTorch]
Suppose I have a tensor of the form

```
[[-5,   0,  -1],
 [ 3, 100,  87],
 [17, -34,   2],
 [45,   1,  25]]
```

I want to find the maximum value in each row and return a rank 1 tensor as follows: [0, 100, 17, 45]. How would I do this in PyTorch?
You can use the torch.max() function:

```python
x = torch.Tensor([[-5, 0, -1], [3, 100, 87], [17, -34, 2], [45, 1, 25]])
out, inds = torch.max(x, dim=1)
```

This will return the maximum values across each row (dimension 1), along with their indices.
https://stackoverflow.com/questions/59832252/
How to store and load training data comprising 50 million 25x25 numpy arrays while training a multi-class CNN model?
I have an image processing problem with five classes, where each class has approximately 10 million examples as training data and an example is a z-scored 25x25 numpy array. Obviously, I can't load all the training data into memory, so I have to use fit_generator. I am also the one who generates and augments these training data matrices, but I can't do it in real time within fit_generator because it would be too slow to train the model. First, how do I store 50 million 25x25 .npy arrays on disk? What would be the best practice? Second, should I use a database to store these matrices and query from it during training? I don't think SQLite supports multiple threads, and SQL dataset support is still experimental in TensorFlow. I would love to know if there is a neat way to store these 50 million matrices so that retrieval during training is optimal. Third, what about using the HDF5 format? Should I switch to PyTorch instead?
Here is some code I found on Medium (I can't find the original post). It helps generate training data on the fly in a producer-consumer fashion:

```python
import tensorflow as tf
import numpy as np
from time import sleep

class DataGen():
    counter = 0

    def __init__(self):
        self.gen_num = DataGen.counter
        DataGen.counter += 1

    def py_gen(self, gen_name):
        gen_name = gen_name.decode('utf8') + '_' + str(self.gen_num)
        for num in range(10):
            sleep(0.3)
            yield '{} yields {}'.format(gen_name, num)

Dataset = tf.data.Dataset
dummy_ds = Dataset.from_tensor_slices(['Gen1', 'Gen2', 'Gen3'])
dummy_ds = dummy_ds.interleave(
    lambda x: Dataset.from_generator(DataGen().py_gen, output_types=(tf.string), args=(x,)),
    cycle_length=5,
    block_length=2,
    num_parallel_calls=5)

data_tf = dummy_ds.as_numpy_iterator()
for d in data_tf:
    print(d)
```

Output:

```
b'Gen1_0 yields 0'
b'Gen1_0 yields 1'
b'Gen2_0 yields 0'
b'Gen2_0 yields 1'
b'Gen3_0 yields 0'
b'Gen3_0 yields 1'
b'Gen1_0 yields 2'
b'Gen1_0 yields 3'
b'Gen2_0 yields 2'
b'Gen2_0 yields 3'
b'Gen3_0 yields 2'
b'Gen3_0 yields 3'
b'Gen1_0 yields 4'
b'Gen1_0 yields 5'
b'Gen2_0 yields 4'
b'Gen2_0 yields 5'
b'Gen3_0 yields 4'
b'Gen3_0 yields 5'
b'Gen1_0 yields 6'
b'Gen1_0 yields 7'
b'Gen2_0 yields 6'
b'Gen2_0 yields 7'
b'Gen3_0 yields 6'
b'Gen3_0 yields 7'
b'Gen1_0 yields 8'
b'Gen1_0 yields 9'
b'Gen2_0 yields 8'
b'Gen2_0 yields 9'
b'Gen3_0 yields 8'
b'Gen3_0 yields 9'
```
https://stackoverflow.com/questions/59836100/
How can I load CSV data for a PyTorch neural network?
This may be a simple question, and I apologize if it's too simple. But I have some data in a CSV:

```
Date,Open,High,Low,Close,Adj Close,Volume
1993-01-29,43.968750,43.968750,43.750000,43.937500,26.453930,1003200
1993-02-01,43.968750,44.250000,43.968750,44.250000,26.642057,480500
1993-02-02,44.218750,44.375000,44.125000,44.343750,26.698507,201300
1993-02-03,44.406250,44.843750,44.375000,44.812500,26.980742,529400
1993-02-04,44.968750,45.093750,44.468750,45.000000,27.093624,531500
1993-02-05,44.968750,45.062500,44.718750,44.968750,27.074818,492100
1993-02-08,44.968750,45.125000,44.906250,44.968750,27.074818,596100
1993-02-09,44.812500,44.812500,44.562500,44.656250,26.886669,122100
....
```

I want to create a "training set", which is basically a random vector of 10 rows of data (I can figure out the normalizing, etc.) randomly sampled from anywhere in the file. I think I'll have to use pandas to do the loading, maybe? If what I'm trying to ask is unclear, please add comments and I will adjust the question accordingly. Thank you.
```python
import pandas as pd
sample = pd.read_csv('myfile.csv').sample(n=10)
```

You should load the file only once and then sample as you go:

```python
df = pd.read_csv('myfile.csv')
sample1 = df.sample(n=10)
sample2 = df.sample(n=10)
```
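To feed such a sample into a PyTorch model, one possible follow-up (an illustrative sketch; which columns to keep is an assumption) is converting the sampled rows to a float tensor:

```python
import torch

df = pd.read_csv('myfile.csv')
sample = df.sample(n=10)
# drop the non-numeric Date column before converting (assumed preprocessing)
features = torch.tensor(sample.drop(columns=['Date']).values, dtype=torch.float32)
print(features.shape)  # torch.Size([10, 6])
```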
https://stackoverflow.com/questions/59849877/
PyTorch sigmoid function: what is e?
I have a question on setting up the sigmoid function in PyTorch. I define it like this:

```python
# Sigmoid function
def sigmoid(x):
    return 1 / (1 + torch.exp(-x))
```

But looking at the sigmoid function from http://mathworld.wolfram.com/SigmoidFunction.html, it should be defined as y = 1/(1 + e^-x). I see the 1/(1 + ...) part, but I don't get the e^-x part. Can someone explain why torch.exp(-x) == e^-x? What is e here? Is that the tensor? But I thought that x was the tensor.
Here e is Euler's number, the base of the natural logarithm (approximately 2.71828), so torch.exp(-x) == e^-x. In general, torch.exp(x) returns e^x, computed element-wise over the tensor x.
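A small illustrative check (not from the original answer) that the manual definition matches PyTorch's built-in sigmoid:

```python
import torch

x = torch.tensor([-2.0, 0.0, 3.0])
manual = 1 / (1 + torch.exp(-x))        # e^-x computed element-wise
builtin = torch.sigmoid(x)
print(torch.allclose(manual, builtin))  # True
```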
https://stackoverflow.com/questions/59852884/
How to process input data for audio classification using CNN with PyTorch?
As an engineering student working towards the DSP and ML fields, I am working on an audio classification project whose inputs are short clips (4 sec.) of instruments like bass, keyboard, guitar, etc. (the NSynth Dataset by the Magenta team at Google). The idea is to convert all the short clips (.wav files) to spectrograms or melspectrograms, then apply a CNN to train the model. However, my question is: since the entire dataset is large (approximately 23 GB), I wonder if I should first convert all the audio files to images like PNG and then apply the CNN. I feel like this could take a lot of time, and it will double the storage space for my input data, as it would then be audio + images (maybe up to 70 GB). Thus, I wonder if there is any workaround here that can speed up the process. Thanks in advance.
Preprocessing is totally worth it. You will very likely end up running multiple experiments before your network works as you want it to, and you don't want to waste time preprocessing the features every time you change a few hyperparameters. Rather than using PNG, I would save PyTorch tensors directly (torch.save, which uses Python's standard pickling protocols) or NumPy arrays (numpy.savez saves serialized arrays into a zip file). If you are concerned about disk space, you can consider numpy.savez_compressed.
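As an illustration of that workflow (the file names and the stand-in spectrogram are assumptions, not from the original answer):

```python
import numpy as np
import torch

spec = np.random.rand(128, 173).astype(np.float32)  # stand-in for a computed melspectrogram

# Option 1: save as a PyTorch tensor
torch.save(torch.from_numpy(spec), 'clip_0001.pt')
spec_back = torch.load('clip_0001.pt')

# Option 2: save as a compressed NumPy archive
np.savez_compressed('clip_0001.npz', spec=spec)
spec_back2 = np.load('clip_0001.npz')['spec']
```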
https://stackoverflow.com/questions/59854794/
Why randn doesn't always have a mean of 0 and variance of 1?
For the torch.randn() method the documentation says:

Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution).

So here is an example tensor:

```python
x = torch.randn(4, 3)
tensor([[-0.6569, -0.7337, -0.0028],
        [-0.3938,  0.3223,  0.0497],
        [ 0.0129, -2.7546, -2.2488],
        [ 1.6754, -0.1497,  1.8202]])
```

When I print the mean:

```python
x.mean()
tensor(-0.2550)
```

When I print the standard deviation:

```python
x.std()
tensor(1.3225)
```

So why isn't the mean 0 and the standard deviation 1?

Bonus question: How do I generate a random tensor that always has a mean of 0?
It would be a big coincidence if a finite sample from the distribution had exactly the distribution's mean and exactly its standard deviation: the parameters describe the distribution, not any particular sample drawn from it. It is to be expected that the more numbers you generate, the closer the sample mean and deviation approach the "true" mean and deviation of the distribution.
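For the bonus question, one common trick (a sketch, not from the original answer) is to subtract the sample mean, which forces the empirical mean of that particular tensor to zero:

```python
import torch

x = torch.randn(4, 3)
x_centered = x - x.mean()
print(x_centered.mean())  # ~0, up to floating-point error
```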
https://stackoverflow.com/questions/59857571/
How can I build an LSTM AutoEncoder with PyTorch?
I have my data as a DataFrame:

```
     dOpen     dHigh     dLow      dClose    dVolume   day_of_week_0 day_of_week_1 ... month_6 month_7 month_8 month_9 month_10 month_11 month_12
639 -0.002498 -0.000278 -0.005576 -0.002228 -0.002229  0             0             ... 0       0       1       0       0        0        0
640 -0.004174 -0.005275 -0.005607 -0.005583 -0.005584  0             0             ... 0       0       1       0       0        0        0
641 -0.002235  0.003070  0.004511  0.008984  0.008984  1             0             ... 0       0       1       0       0        0        0
642  0.006161 -0.000278 -0.000281 -0.001948 -0.001948  0             1             ... 0       0       1       0       0        0        0
643 -0.002505  0.001113  0.005053  0.002788  0.002788  0             0             ... 0       0       1       0       0        0        0
644  0.004185  0.000556 -0.000559 -0.001668 -0.001668  0             0             ... 0       0       1       0       0        0        0
645  0.002779  0.003056  0.003913  0.001114  0.001114  0             0             ... 0       0       1       0       0        0        0
646  0.000277  0.004155 -0.002227 -0.002782 -0.002782  1             0             ... 0       0       1       0       0        0        0
647 -0.005540 -0.007448 -0.003348  0.001953  0.001953  0             1             ... 0       0       1       0       0        0        0
648  0.001393 -0.000278  0.001960 -0.003619 -0.003619  0             0             ... 0       0       1       0       0        0        0
```

My input will be 10 rows (already one-hot encoded). I want to create an n-dimensional autoencoded representation. So as I understand it, my input and output should be the same. I've seen some examples of how to construct this, but am still stuck on the first step. Is my training data just a lot of those samples arranged as a matrix? What then? I apologize for the general nature of the question. Any questions, just ask and I will clarify in the comments. Thank you.
It isn't quite clear from the question what you are trying to achieve. Based on what you wrote, you want to create an autoencoder with the same input and output, and that doesn't quite make sense to me when I see your data set. In the common case, the encoder part of the autoencoder creates a model which, based on a large set of input features, produces a small output vector, and the decoder performs the inverse operation: reconstructing plausible input features from that small encoded vector. A result of using an autoencoder is an enhanced (in some sense, e.g. with noise removed) input. You can find a few examples here, with the 3rd use case providing code for sequence data, learning a random-number-generation model. Here is another example, which looks closer to your application: a sequential model is constructed to encode a large data set with information loss. If that is what you are trying to achieve, you'll find the code there.

If the goal is sequence prediction (like future stock prices), this and that example seem more appropriate, as you likely only want to predict a handful of values in your data sequence (say dHigh and dLow), and you don't need to predict day_of_week_n or month_n (even though that part of the autoencoder model will probably train much more reliably, as the pattern is pretty clear). This approach will allow you to predict a single consequent output feature value (tomorrow's dHigh and dLow). If you want to predict a sequence of future outputs, you can use a sequence of outputs, rather than a single one, in your model. In general, the structure of inputs and outputs is totally up to you.
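For orientation, a minimal LSTM autoencoder sketch (the dimensions and layer sizes are illustrative assumptions, not taken from the linked examples):

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, latent_dim):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.output_layer = nn.Linear(latent_dim, n_features)

    def forward(self, x):                 # x: (batch, seq_len=10, n_features)
        _, (h, _) = self.encoder(x)       # h: (1, batch, latent_dim)
        latent = h[-1]                    # the n-dimensional representation
        # repeat the latent vector for each time step and decode
        repeated = latent.unsqueeze(1).repeat(1, x.size(1), 1)
        decoded, _ = self.decoder(repeated)
        return self.output_layer(decoded), latent

model = LSTMAutoencoder(n_features=16, latent_dim=4)
x = torch.randn(8, 10, 16)                # batch of 8 windows of 10 rows
recon, latent = model(x)
loss = nn.MSELoss()(recon, x)             # train the output to match the input
```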
https://stackoverflow.com/questions/59863263/
Can you reverse a PyTorch neural network and activate the inputs from the outputs?
Can we activate the outputs of a NN to gain insight into how the neurons are connected to input features? If I take a basic NN example from the PyTorch tutorials, here is an example of training f(x, y):

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
```

After I've finished training the network to predict y from x inputs, is it possible to reverse the trained NN so that it can now predict x from y inputs? I don't expect y to match the original inputs that trained the y outputs. So I expect to see what features the model activates on to match x and y. If it is possible, then how do I rearrange the Sequential model without breaking all the weights and connections?
It is possible, but only for very special cases. For a feed-forward network (Sequential) each of the layers needs to be reversible; that means the following arguments apply to each layer separately. The transformation associated with one layer is y = activation(W*x + b), where W is the weight matrix and b the bias vector. In order to solve for x we need to perform the following steps:

1. Reverse the activation; not all activation functions have an inverse though. For example the ReLU function does not have an inverse on (-inf, 0). If we used tanh on the other hand we can use its inverse, which is 0.5 * log((1 + x) / (1 - x)).

2. Solve W*x = inverse_activation(y) - b for x; for a unique solution to exist, W must be square and det(W) must be non-zero. We can control the former by choosing a specific network architecture, while the latter depends on the training process.

So for a neural network to be reversible it must have a very specific architecture: all layers must have the same number of input and output neurons (i.e. square weight matrices) and the activation functions all need to be invertible.

Code: Using PyTorch we will have to do the inversion of the network manually, both in terms of solving the system of linear equations and finding the inverse activation function. Consider the following example of a 1-layer neural network (since the steps apply to each layer separately, extending this to more than 1 layer is trivial):

```python
import torch

N = 10  # number of samples
n = 3   # number of neurons per layer

x = torch.randn(N, n)
model = torch.nn.Sequential(
    torch.nn.Linear(n, n), torch.nn.Tanh()
)
y = model(x)

z = y  # use 'z' for the reverse result, start with the model's output 'y'.
for step in list(model.children())[::-1]:
    if isinstance(step, torch.nn.Linear):
        z = z - step.bias[None, ...]
        z = z[..., None]  # 'torch.solve' requires N column vectors (i.e. shape (N, n, 1)).
        z = torch.solve(z, step.weight)[0]
        z = torch.squeeze(z)  # remove the extra dimension that we've added for 'torch.solve'.
    elif isinstance(step, torch.nn.Tanh):
        z = 0.5 * torch.log((1 + z) / (1 - z))

print('Agreement between x and z: ', torch.dist(x, z))
```
https://stackoverflow.com/questions/59878319/
Pytorch: [TypeError: __init__() takes 1 positional argument but 2 were given]
I searched StackOverflow and visited other websites for help, but I can't find a solution to my problem. I will leave the whole code to make it understandable for you. It's about 110 lines, written with PyTorch. Each time I compile and calculate a prediction, this error shows up:

```
Traceback (most recent call last):
  File "/Users/MacBookPro/Dropbox/01 GST h_da Privat/BA/06_KNN/PyTorchV1/BesucherV5.py", line 108, in <module>
    result = Network(test_exp).data[0][0].item()
TypeError: __init__() takes 1 positional argument but 2 were given
```

I know other users had this too, but none of their solutions helped me out. I guess the mistake is either in my class Network or in the variable result. I hope that someone of you has had this problem and knows how to fix it, or can help me in a different way.

Short information about the dataset: my dataset has 10 columns and gets split into two sets, X and Y. X has 9 columns, Y just one. These are then used to train the network.

Thank you in advance!

Kind regards, Christian Richter

My code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import pandas as pd


### Dataset ###

dataset = pd.read_csv('./data/train_data_csv.csv')

x_temp = dataset.iloc[:, :-1].values
print(x_temp)
print()
print(x_temp.size)
print()

y_temp = dataset.iloc[:, 9:].values
print(y_temp)
print()
print(y_temp.size)
print()

x_train_tensor = torch.FloatTensor(x_temp)
y_train_tensor = torch.FloatTensor(y_temp)


### Network Architecture ###

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.linear1 = nn.Linear(9, 9)  # 10 input neurons, 10 output neurons, linear layer
        self.linear2 = nn.Linear(9, 1)

    def forward(self, x):
        pax_predict = F.relu(self.linear1(x))
        pax_predict = self.linear2(x)
        return pax_predict

    def num_flat_features(self, pax_predict):
        size = pax_predict.size()[1:]
        num = 1
        for i in size:
            num *= i
        return num

network = Network()
print(network)

criterion = nn.MSELoss()
target = Variable(y_train_tensor)

optimizer = torch.optim.SGD(network.parameters(), lr=0.0001)

### Training

for epoch in range(200):
    input = Variable(x_train_tensor)
    y_pred = network(input)

    loss = criterion(y_pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

test_exp = torch.Tensor([[40116]])

result = Network(test_exp).data[0][0].item()
print('Result is: ', result)
```
The problem is quite simple and is in this line, I suppose:

```python
result = Network(test_exp).data[0][0].item()
```

Here you should use network (the object) instead of Network (the class). As you defined it, Network's __init__ takes only 1 argument (self), but you are passing 2: self and test_exp. Perhaps if you had chosen another name for your object (e.g. net), you'd have spotted this error more easily. Take that into consideration :)

And please, always post the full traceback.
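Concretely, the fixed call would look something like the following; note that the network also expects 9 input features, so test_exp presumably needs 9 values (the values below are hypothetical placeholders):

```python
test_exp = torch.Tensor([[40116, 0, 0, 0, 0, 0, 0, 0, 0]])  # hypothetical 9-feature input
result = network(test_exp).data[0][0].item()                 # call the instance, not the class
print('Result is: ', result)
```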
https://stackoverflow.com/questions/59884949/
Algorithm of how Conv2d is implemented in PyTorch
I am working on an inference implementation for a PyTorch ONNX model, which is why this question is being asked.

Assume I have an image with dimensions 32 x 32 x 3 (CIFAR-10 dataset). I pass it through a Conv2d with weight dimensions 3 x 192 x 5 x 5. The command I used is:

```python
Conv2d(3, 192, kernel_size=5, stride=1, padding=2)
```

Using the formula (stated here for reference, pg. 12 of https://arxiv.org/pdf/1603.07285.pdf) I should be getting an output image with dimensions 28 x 28 x 192 (input - kernel + 1 = 32 - 5 + 1). The question is: how has PyTorch applied this 4D tensor 3 x 192 x 5 x 5 to get me an output of 28 x 28 x 192? The layer is a 4D tensor and the input image is a 2D one. How is the kernel (5x5) spread over the image matrix 32 x 32 x 3? What does the kernel convolve with first -> 3 x 192 or 32 x 32?

Note: I have understood the 2D aspects of things. I am asking the above questions in 3 or more dimensions.
The input to Conv2d is a tensor of shape (N, C_in, H_in, W_in) and the output is of shape (N, C_out, H_out, W_out), where N is the batch size (number of images), C is the number of channels, H is the height and W is the width. The output height and width H_out, W_out are computed as follows (ignoring dilation):

```
H_out = (H_in + 2*padding[0] - kernel_size[0]) / stride[0] + 1
W_out = (W_in + 2*padding[1] - kernel_size[1]) / stride[1] + 1
```

See cs231n for an explanation of how these formulas were obtained.

In your example N=1, H_in = 32, W_in = 32, C_in = 3, kernel_size = (5, 5), strides = (1, 1), padding = (0, 0), giving H_out = 28, W_out = 28.

C_out=192 means that there are 192 different filters, each of shape (C_in, kernel_size[0], kernel_size[1]) = (3, 5, 5). Each filter independently performs a convolution with the input image, resulting in a 2D tensor of shape (H_out, W_out) = (28, 28), and since there are C_out = 192 filters and N = 1 images, the final output is of shape (N, C_out, H_out, W_out) = (1, 192, 28, 28). To understand how exactly the convolution is performed, see the convolution demo.
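A quick, illustrative shape check of those numbers (padding is set to 0 here to match the calculation above, even though the question's command used padding=2):

```python
import torch

conv = torch.nn.Conv2d(3, 192, kernel_size=5, stride=1, padding=0)
x = torch.randn(1, 3, 32, 32)  # one CIFAR-10-sized image, channels first
print(conv.weight.shape)       # torch.Size([192, 3, 5, 5]): 192 filters of shape 3x5x5
print(conv(x).shape)           # torch.Size([1, 192, 28, 28])
```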
https://stackoverflow.com/questions/59887786/
element wise multiplication of 2D tensors as layer of neural network in pytorch
I have a 3D torch tensor with dimensions [Batch_size, n, n], which is the output of a layer of my network, and a constant 2D torch tensor of size [n, n]. How can I perform element-wise multiplication over the batch dimension, so that the result is a torch tensor of size [Batch_size, n, n]? I know it is possible to implement this operation using an explicit loop, but I am interested in the most efficient way.
One option is to expand your weight matrix to have a matching batch dimension (without using any additional memory). E.g. twoDTensor.expand((batch_size, n, n)) returns the same underlying data, but represented as a 3D tensor. You can see that the stride for the batch dim is zero.
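As an illustrative sketch (variable names are assumptions): expand — or, equivalently, plain broadcasting — lets the element-wise product run without a loop:

```python
import torch

batch_size, n = 4, 3
out3d = torch.randn(batch_size, n, n)   # layer output
const2d = torch.randn(n, n)             # constant matrix

expanded = const2d.expand(batch_size, n, n)
print(expanded.stride())                # (0, 3, 1): batch stride is 0, no extra memory

result = out3d * expanded               # shape (batch_size, n, n)
# broadcasting gives the same result without the explicit expand:
assert torch.equal(result, out3d * const2d)
```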
https://stackoverflow.com/questions/59888482/
Torch installation results in not supported wheel on this platform
First I tried running

```
pip3 install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
```

taken from the PyTorch website, which resulted in:

```
No matching distribution found for torch===1.4.0
Could not find a version that satisfies the requirement torch===1.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
```

Finally I downloaded the .whl file from the downloads page and tried installing it locally like so:

```
'C:\Users\Raf\AppData\Local\Programs\Python\Python38\Scripts\pip.exe' install torch-1.4.0+cpu-cp37-cp37m-win_amd64.whl
```

after which I get:

```
torch-1.4.0+cpu-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform.
```

I am using 64-bit Python 3.8 on 64-bit Windows.
You are using 64-bit Python 3.8, but you downloaded the cp37 wheel, which is for Python 3.7. At the time of writing there was no wheel available for Python 3.8, so you could either install from source (probably not recommended), install a different Python version, or create a virtual environment with Python 3.7.

Update: there is now a Python 3.8 wheel: https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp38-cp38-win_amd64.whl
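For example, installing that cp38 wheel directly (a sketch; the URL is the one from the update above):

```
pip install https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp38-cp38-win_amd64.whl
```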
https://stackoverflow.com/questions/59894984/
RuntimeError with copy_() and expand() - Unsupported operation: more than one element of the written-to tensor refers to a single memory location
I'm migrating a repository from PyTorch Nightly 1.0.0 to 1.3.1. Stripping the unnecessary details, it is basically performing the following sequence of operations:

```python
mu = torch.tensor(0.005)
bar = torch.eye(5, 5)
foo = torch.eye(5).expand(5, 5, 5)

# update
bar.copy_(mu * bar)  # ok!
foo.copy_(mu * foo)  # error
```

bar.copy_(mu * bar) works, while foo.copy_(mu * foo) gives the following error:

```
RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.
```
This is because expand() only creates a new view on the existing tensor, so it doesn't allocate the full memory necessary to receive all the elements from the operation mu * foo, which has more elements than the original tensor foo. You can fix it by using either expand().clone() or repeat(), which will give you the full tensor:

```python
foo = torch.eye(5).expand(5, 5, 5).clone()  # clone gives the full tensor
foo.copy_(mu * foo)  # ok!
```

albanD suggests that doing expand().clone() might still be faster than repeat(). See here and here for more details about expand() and repeat().
https://stackoverflow.com/questions/59905234/
PyTorch get indices of value in two-dimensional tensor
Given the following tensor (or any random tensor with two dimensions), I want to get the index of '101' in each row:

```python
tens = tensor([[  101,   146,  1176, 21806,  1116,  1105, 18621,   119,   102,     0,
                    0,     0,     0],
               [  101,  1192,  1132,  1136,  1184,   146,  1354,  1128,  1127,   117,
                 1463,   119,   102],
               [  101,  6816,  1905,  1132, 14918,   119,   102,     0,     0,     0,
                    0,     0,     0]])
```

From the related answers I know that I can do something like this:

```python
idxs = torch.tensor([(i == 101).nonzero() for i in tens])
```

But this seems messy and potentially quite slow. Is there a better way to do this that is fast and more torch-y?

Related questions discussing only one-dimensional tensors:

- How Pytorch Tensor get the index of specific value
- How Pytorch Tensor get the index of elements?
How about:

```python
(tens == 101).nonzero()[:, 1]
```

```python
In [20]: from torch import tensor

In [21]: tens = torch.tensor([[  101,   146,  1176, 21806,  1116,  1105, 18621,   119,   102,     0,     0,     0,     0],
    ...:                      [  101,  1192,  1132,  1136,  1184,   146,  1354,  1128,  1127,   117,  1463,   119,   102],
    ...:                      [  101,  6816,  1905,  1132, 14918,   119,   102,     0,     0,     0,     0,     0,     0]])

In [22]: (tens == 101).nonzero()[:, 1]
Out[22]: tensor([0, 0, 0])
```
https://stackoverflow.com/questions/59908433/
modification from classification loss to regression loss
General

I am following this repo for object detection: https://github.com/yhenon/pytorch-retinanet

Motivation

Object detection networks usually perform 2 tasks: for every object in the image, they output a class confidence score and a bounding box regression score. For my task, along with these 2 outputs, I want to output one more regression score for every object, which will be between 0-5.

Problem statement

The approach I have taken is that, since the network already does classification, I thought I would modify some of the parts and turn it into a regression loss. The loss used in the repo mentioned above is focal loss. Part of what the classification loss looks like is described below; I would like to modify this to be a regression loss.

```python
targets = torch.ones(classification.shape) * -1  # classification shape is torch.Size([114048, 1])
targets = targets.cuda()
targets[torch.lt(IoU_max, 0.4), :] = 0  # IoU_max is a tensor of shape torch.Size([114048]), e.g. tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0')

positive_indices = torch.ge(IoU_max, 0.5)
num_positive_anchors = positive_indices.sum()

assigned_annotations = bbox_annotation[IoU_argmax, :]  # bbox_annotation has shape torch.Size([6, 6]); IoU_argmax has shape torch.Size([114048])

targets[positive_indices, :] = 0  # positive_indices has shape torch.Size([114048])
targets[positive_indices, assigned_annotations[positive_indices, 4].long()] = 1

alpha_factor = torch.ones(targets.shape).cuda() * alpha  # alpha is 0.25
alpha_factor = torch.where(torch.eq(targets, 1.), alpha_factor, 1. - alpha_factor)
focal_weight = torch.where(torch.eq(targets, 1.), 1. - classification, classification)
focal_weight = alpha_factor * torch.pow(focal_weight, gamma)  # gamma is 2.0

bce = -(targets * torch.log(classification) + (1.0 - targets) * torch.log(1.0 - classification))
cls_loss = focal_weight * bce
cls_loss = torch.where(torch.ne(targets, -1.0), cls_loss, torch.zeros(cls_loss.shape).cuda())
classification_losses.append(cls_loss.sum() / torch.clamp(num_positive_anchors.float(), min=1.0))
```

So I tried modifying some of the bits described above. The code does not break, but it does not learn anything. Here erosion is just a tensor, and the erosion score is what we want to regress.

```python
targets = torch.ones(erosion.shape) * -1  # erosion shape is torch.Size([114048, 1])
targets = targets.cuda()
targets[torch.lt(IoU_max, 0.4), :] = 0  # this part remains the same as above

positive_indices = torch.ge(IoU_max, 0.5)
num_positive_anchors = positive_indices.sum()

assigned_annotations = bbox_annotation[IoU_argmax, :]

targets[positive_indices, :] = 0
targets[assigned_annotations[positive_indices, 5].long()] = 1  # indexed along the 5th column because it contains the erosion annotations

criterion = nn.MSELoss()  # mean squared loss for regression
loss = torch.sqrt(criterion(targets, erosion))
erosion_losses.append(loss)
```

This will not throw an error, but it does not learn anything either. Could someone please help me with the correct formulation of this loss to perform regression?

More info

```python
assigned_annotations_erosion = assigned_annotations[positive_indices, 5]
erosion_pred = erosion[positive_indices].view(-1)
criterion = nn.MSELoss()
loss = torch.sqrt(criterion(assigned_annotations_erosion, erosion_pred))
loss *= torch.tensor(5.0).cuda()
# import pdb; pdb.set_trace()
erosion_losses.append(loss)
```
If I understand it correctly, you want to add another regression parameter which should have values in [0, 5]. Instead of trying to change the classification part, you can just add it to the box regression part, i.e. predict one more parameter for every anchor.

Here (in the repo) is the loss calculation for the regression parameters. If you added one parameter to every anchor (probably by increasing the filter count in the last layer), you have to "extract" it there like the other parameters. You also probably want to think about the activation for the value, e.g. a sigmoid, so you get something between 0 and 1; you can rescale it to 0-5 if you want.

After line 107 you want to add something like this:

```python
anchor_erosion_pi = anchor_erosion[positive_indices]
```

Then you have to get the ground truth somehow, maybe rescaled to 0-1, so you can just use the sigmoid-activated output for this value. Then you have to set the target. Since you don't seem to depend on the anchor, this is just:

```python
targets_erosion = gt_erosion
```

After that, the targets are stacked, where you have to add your new target value, and a weighting is applied. You have to decide and try out whether you need a weighting here; I would probably start by adding 0.2 or something to the tensor. Maybe have a look in the paper at how the other weightings were chosen.

```python
targets = targets / torch.Tensor([[0.1, 0.1, 0.2, 0.2, 0.2]]).cuda()
```

After that the smooth L1 loss is calculated, where you probably don't need to change anything.

There is probably more you have to do. Just look at the code, see where the other parameters are used, and try to add your parameter to it.
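Putting those pieces together, a rough sketch of what the added erosion branch of the loss might look like — all names here (erosion_logits, the masks, the 0-5 rescaling) are assumptions layered on the answer's suggestions, not code from the repo:

```python
import torch
import torch.nn as nn

# assumed inputs: raw per-anchor erosion outputs and ground-truth scores in [0, 5]
erosion_logits = torch.randn(114048)            # one extra value predicted per anchor
positive_indices = torch.rand(114048) > 0.99    # stand-in for the IoU >= 0.5 mask
gt_erosion = torch.rand(int(positive_indices.sum())) * 5.0

erosion_pred = torch.sigmoid(erosion_logits[positive_indices])  # squash to (0, 1)
targets_erosion = gt_erosion / 5.0                              # rescale gt to the same range

loss = nn.SmoothL1Loss()(erosion_pred, targets_erosion)
```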
https://stackoverflow.com/questions/59909066/
Autoencoder MaxUnpool2d missing 'Indices' argument
The following model returns the error:

```
TypeError: forward() missing 1 required positional argument: 'indices'
```

I've exhausted many online examples and they all look similar to my code. My max pooling layer returns both the input and the indices for the unpooling layer. Any idea what's wrong?

```python
class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            ...
            nn.MaxPool2d(2, stride=1, return_indices=True)
        )
        self.decoder = nn.Sequential(
            nn.MaxUnpool2d(2, stride=1),
            ...
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
```
Similar to the question here, the solution seems to be to separate the max-unpooling layer from the decoder and explicitly pass it its required parameters. nn.Sequential only passes a single value between its modules, so the (output, indices) pair from MaxPool2d never reaches MaxUnpool2d inside a Sequential.

```python
class SimpleConvAE(nn.Module):
    def __init__(self):
        super().__init__()

        # input: batch x 3 x 32 x 32 -> output: batch x 16 x 16 x 16
        self.encoder = nn.Sequential(
            ...
            nn.MaxPool2d(2, stride=2, return_indices=True),
        )
        self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0)
        self.decoder = nn.Sequential(
            ...
        )

    def forward(self, x):
        encoded, indices = self.encoder(x)
        out = self.unpool(encoded, indices)
        out = self.decoder(out)
        return (out, encoded)
```
https://stackoverflow.com/questions/59912850/
Install specific version of PyTorch to conda environment
Using Anaconda Navigator I created a new environment for running someone's VAE code off GitHub that uses Python 3.6 and PyTorch 0.4.0. Unfortunately, Anaconda Navigator doesn't give me the option to install an older version of PyTorch on this environment, just the PyTorch version I have currently installed. How do I install PyTorch 0.4.0 only to this new Conda environment I created? If it's possible via Anaconda Navigator, great! But I assume it's going to be done via a Conda command. I definitely don't want to mess up my other environments. Thanks!
Just activate the conda environment you want to install it into, then use:

```
conda install pytorch=0.4.1 -c pytorch
```

More details on how you can install previous PyTorch versions here: https://pytorch.org/get-started/previous-versions/

According to this blog, Anaconda Navigator does not work for this, but you can follow the blog to install PyTorch in a conda environment: https://medium.com/@bryant.kou/how-to-install-pytorch-on-windows-step-by-step-cc4d004adb2a
https://stackoverflow.com/questions/59913348/
"ImportError: No module named torch" in ROS package
I've created a ROS package in which there are some Python scripts. The Python scripts are based on the torch module (it is inference code for PyTorch models). When I try to run my scripts it gives me an error:

```
ImportError: No module named torch
```

To install ROS, I used the instructions on the ROS wiki. To validate my installation, I followed the ROS sample code (a simple publisher and subscriber) and it works well. My system info is:

- python: 3.6.9
- torch: 1.1.0
- torchvision: 0.3.0
- OS kernel: Linux 4.15.0-74-generic
- OS distribution: Ubuntu 18.04.3

I want to import the below libs:

```python
import torch
from cv_bridge import CvBridge
import cv2
import os
import numpy as np
from torch.autograd import Variable
from torchvision import transforms
import torch.nn.functional as F
import torch._utils
import time
from PIL import Image
```

My CMake file is as below:

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(inference_pytorch)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

set(Torch_DIR ".local/lib/python3.6/site-packages/torch/share/cmake/Torch")

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
  cv_bridge
)

find_package(Torch REQUIRED)
find_package(OpenCV 3 REQUIRED)

## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)

###################################
## catkin specific configuration ##
###################################
catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES inference_pytorch
#  CATKIN_DEPENDS roscpp rospy std_msgs
#  DEPENDS system_lib
)

###########
## Build ##
###########

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
  ${catkin_INCLUDE_DIRS}
  ${OPENCV_INCLUDE_DIRS}
  ${Torch_INSTALL_INCLUDE}
  ${Torch_DIR}
)

#############
## Install ##
#############

# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html

install(PROGRAMS
  scripts/inference_test.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)

catkin_install_python(PROGRAMS scripts/inference_test.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})

link_directories(
  ${Torch_INSTALL_LIB}
)
```

Now, how should I edit the CMake file to add the mentioned libs to my ROS package?
Finally, I found out that my ROS installation works with Python 2, so I installed torch with pip2 and set the torch path in the CMake file. Now it works!
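For reference, the Python 2 install would have looked something like this (a sketch; the exact versions available for pip2 at the time may differ):

```
pip2 install torch torchvision
```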
https://stackoverflow.com/questions/59916182/
How to create a PyTorch NN with 2 hidden layers using nn.Sequential?
How do I add another hidden layer to this 1-hidden-layer model?

```python
model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(D_in, H)),
    ('Tanh', nn.Tanh()),
    ('fc2', nn.Linear(H, D_out))]))
```
You can do this by splitting the original hidden size into two, or by adding another hidden layer:

```python
model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(D_in, H1)),
    ('act1', nn.Tanh()),
    ('fc2', nn.Linear(H1, H2)),
    ('act2', nn.Tanh()),
    ('fc3', nn.Linear(H2, D_out))
]))
```

The only thing you have to do is take the first hidden layer (H1) as input to the next Linear layer, which outputs to another hidden layer (H2); then we add another Tanh activation layer, and finally a Linear layer which takes the H2 layer as input and outputs to the number of output nodes.
https://stackoverflow.com/questions/59916814/
Understanding pytorch autograd
I am trying to understand how PyTorch autograd works. If I have functions y = 2x and z = y**2, then by ordinary differentiation I get dz/dx at x = 1 as 8 (dz/dx = dz/dy * dy/dx = 2y*2 = 2(2x)*2 = 8x). Equivalently, z = (2x)**2 = 4x^2 and dz/dx = 8x, so at x = 1 it is 8.

If I do the same with PyTorch autograd, I get 4:

```python
x = torch.ones(1, requires_grad=True)
y = 2*x
z = y**2
x.backward(z)
print(x.grad)  # prints tensor([4.])
```

Where am I going wrong?
You're using Tensor.backward wrong. To get the result you asked for, you should use:

```python
x = torch.ones(1, requires_grad=True)
y = 2*x
z = y**2
z.backward()  # <-- fixed
print(x.grad)
```

The call to z.backward() invokes the back-propagation algorithm, starting at z and working back to each leaf node in the computation graph. In this case x is the only leaf node. After calling z.backward(), the computation graph is reset and the .grad member of each leaf node is updated with the gradient of z with respect to that leaf node (in this case dz/dx).

What's actually happening in your original code? Well, what you've done is apply back-propagation starting at x. With no arguments, x.backward() would simply result in x.grad being set to 1, since dx/dx = 1. The additional argument (gradient) is effectively a scale applied to the resulting gradient. In this case z=4, so you get x.grad = z * dx/dx = 4 * 1 = 4. If interested, you can check out this for more information on what the gradient argument does.
https://stackoverflow.com/questions/59924167/
Load custom data from folder in dir Pytorch
I am getting my hands dirty with PyTorch and I am trying to do what is apparently the hardest part in deep learning -> LOADING MY CUSTOM DATASET AND RUNNING THE PROGRAM <-- The problem is this: "too many values to unpack (expected 2)". Also, I think I am loading the data wrong. Can someone please show me how to do this, i.e. how to use the DataLoader with one's own data?

```python
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.utils.data as data
import torchvision
from torchvision import transforms

# Hyper parameters
num_epochs = 20
batchsize = 100
lr = 0.001

EPOCHS = 2
BATCH_SIZE = 10
LEARNING_RATE = 0.003
TRAIN_DATA_PATH = "ImageFolder/images/train/"
TEST_DATA_PATH = "ImageFolder/images/test/"
TRANSFORM_IMG = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

train_data = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG)
train_data_loader = data.DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
test_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG)
test_data_loader = data.DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)

# Data Loader (Input Pipeline)
train_loader = torch.utils.data.DataLoader(dataset=TRAIN_DATA_PATH, batch_size=batchsize, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=TEST_DATA_PATH, batch_size=batchsize, shuffle=False)

# CNN model
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conlayer1 = nn.Sequential(
            nn.Conv2d(1, 6, 3),
            nn.Sigmoid(),
            nn.MaxPool2d(2))
        self.conlayer2 = nn.Sequential(
            nn.Conv2d(6, 16, 3),
            nn.Sigmoid(),
            nn.MaxPool2d(2))
        self.fc = nn.Sequential(
            nn.Linear(400, 120),
            nn.Relu(),
            nn.Linear(120, 84),
            nn.Relu(),
            nn.Linear(84, 10))

    def forward(self, x):
        out = self.conlayer1(x)
        out = self.conlayer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

cnn = CNN()

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=lr)

# Train the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = cnn(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        if (i+1) % 100 == 0:
            print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
                  % (epoch+1, num_epochs, i+1, len(train_dataset)//batchsize, loss.data[0]))

# Test the Model
cnn.eval()  # Change model to 'eval' mode (BN uses moving mean/var)
correct = 0
total = 0
for images, labels in test_loader:
    outputs = cnn(images)
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Test Accuracy of the model on test images: %.6f%%' % (100.0*correct/total))

# Save the Trained Model
torch.save(cnn.state_dict(), 'cnn.pkl')
```
You provided a string to the DataLoader: dataset=TRAIN_DATA_PATH. Use train_data instead — the ImageFolder dataset you already built from that file path — as shown in the sketch below.
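Concretely, the training loop should iterate over loaders built from the Dataset objects, not the path strings (a sketch based on the variables already defined in the question):

```python
train_loader = torch.utils.data.DataLoader(dataset=train_data, batch_size=batchsize, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_data, batch_size=batchsize, shuffle=False)

for i, (images, labels) in enumerate(train_loader):  # now unpacks (image batch, label batch)
    ...
```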
https://stackoverflow.com/questions/59924310/
What value should I set for `nn.Linear(1024, 256)` when the dimensions of the dataset are [64 x 25088]?
I keep getting an error in PyTorch that the size I have entered is wrong:

```
RuntimeError: size mismatch, m1: [64 x 25088], m2: [1024 x 256] at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generic/THCTensorMathBlas.cu:249
```

If I set nn.Linear to 64 x 25088, I get a similar error:

```
RuntimeError: size mismatch, m1: [64 x 25088], m2: [64 x 25088] at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generic/THCTensorMathBlas.cu:249
```

What is going on? This is the code that is generating the error:

```python
model = models.vgg16(pretrained=True)

# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False

model.classifier = nn.Sequential(nn.Linear(64, 25088),
                                 nn.ReLU(),
                                 nn.Dropout(0.2),
                                 nn.Linear(256, 2),
                                 nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()

# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)

# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

epochs = 1
steps = 0
running_loss = 0
print_every = 5
with active_session():
    for epoch in range(epochs):
        for inputs, labels in trainloader:
            steps += 1
            # Move input and label tensors to the default device
            inputs, labels = inputs.to(device), labels.to(device)

            optimizer.zero_grad()

            logps = model.forward(inputs)
            loss = criterion(logps, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()

            if steps % print_every == 0:
                test_loss = 0
                accuracy = 0
                model.eval()
                with torch.no_grad():
                    for inputs, labels in testloader:
                        inputs, labels = inputs.to(device), labels.to(device)
                        logps = model.forward(inputs)
                        batch_loss = criterion(logps, labels)

                        test_loss += batch_loss.item()

                        # Calculate accuracy
                        ps = torch.exp(logps)
                        top_p, top_class = ps.topk(1, dim=1)
                        equals = top_class == labels.view(*top_class.shape)
                        accuracy += torch.mean(equals.type(torch.FloatTensor)).item()

                print(f"Epoch {epoch+1}/{epochs}.. "
                      f"Train loss: {running_loss/print_every:.3f}.. "
                      f"Test loss: {test_loss/len(testloader):.3f}.. "
                      f"Test accuracy: {accuracy/len(testloader):.3f}")
                running_loss = 0
                model.train()
```
In nn.Linear(), the first parameter is the size of the input features per sample and the second is the size of the output. In the error shape [64 x 25088], 64 is the batch size and 25088 is the number of flattened features per sample, so the first Linear layer needs in_features=25088; the batch dimension is never part of nn.Linear's arguments.
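A minimal illustrative sketch of a classifier whose dimensions line up with the error message (the layer sizes besides 25088 are arbitrary choices, not something dictated by VGG16):

```python
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(25088, 256),  # in_features matches the flattened size
                           nn.ReLU(),
                           nn.Dropout(0.2),
                           nn.Linear(256, 2),
                           nn.LogSoftmax(dim=1))

x = torch.randn(64, 25088)  # batch of 64 samples, 25088 features each
print(classifier(x).shape)  # torch.Size([64, 2])
```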
https://stackoverflow.com/questions/59924335/
Pytorch gradient computation
I am trying to figure out how exactly the function torch.autograd.grad works. This is my code:

```python
A = torch.Tensor(2, 3).uniform_(-1, 1).requires_grad_()
B = torch.Tensor(3, 1).uniform_(-1, 1).requires_grad_()

o = torch.matmul(A, B)

print("A : ", A)
print("B : ", B)

do_dinput = torch.autograd.grad(o, A, grad_outputs=torch.ones(2, 1))
print('Size do/dA :', do_dinput[0].size())
```

I was expecting torch.Size([1, 3]) to be printed, because the derivative of AB w.r.t. A is B^T. However, I got torch.Size([2, 3]). Is there something wrong with my code, or am I missing something?
What you get is the gradient of o back-propagated through the computational graph to A. In the end you have the gradient for every value in A. It is the same as doing the following:

```python
A = torch.Tensor(2, 3).uniform_(-1, 1).requires_grad_()
B = torch.Tensor(3, 1).uniform_(-1, 1).requires_grad_()

o = torch.matmul(A, B).sum()
o.backward()

print("A : ", A)
print("B : ", B)
print(A.grad)
```

A.grad in this example and do_dinput are the same. If you look at the grad tensor, it is just B^T in both rows.

To make what happens a bit more visual: we have A and B as input, and some function f(...) which takes all values from A and B as input and calculates some value. In this case the function is sum(AB). Note: the summation doesn't change the gradients in any way.

```
A = [[x_1, x_2, x_3],
     [x_4, x_5, x_6]]

B = [[y_1],
     [y_2],
     [y_3]]

o = [[x_1*y_1 + x_2*y_2 + x_3*y_3],
     [x_4*y_1 + x_5*y_2 + x_6*y_3]]

f(x_1,...,x_6, y_1, y_2, y_3) = x_1*y_1 + x_2*y_2 + x_3*y_3 + x_4*y_1 + x_5*y_2 + x_6*y_3
```

If you now calculate the gradient, you differentiate f(...) with respect to all variables. So for x_1 it would be df/dx_1 = y_1, which means the grad value in A for x_1 is equal to y_1. This is done for all other values, so in the end you get a grad value for all entries in A and B. It works the same in your example; you just skip the summing of the tensor.
https://stackoverflow.com/questions/59924410/
Python modify instance attribute with instance method
I am trying to use an instance method to modify one of the instance's attributes like this: from torch.optim import SGD from typing import Dict class ChangeRateSgd(SGD): def __init__(self, params, lr: float, lr_change_instructions: Dict): super().__init__(params, lr) self.lr_change_instructions = lr_change_instructions def change_update_rate(self, input_epoch): update_mapping = self.lr_change_instructions if input_epoch in update_mapping.keys(): new_lr = self.lr_change_instructions[input_epoch] self.lr = new_lr However, my IDE flags the line self.lr = new_lr as not being ideal coding practice, with the warning Instance attribute lr defined outside __init__. What is the best way to do what I am trying to do with this instance method?
Try the code below: you need to define lr in the __init__ method, and then you can access it from any other method. from torch.optim import SGD from typing import Dict class ChangeRateSgd(SGD): def __init__(self, params, lr: float, lr_change_instructions: Dict): super().__init__(params, lr) self.lr = None self.lr_change_instructions = lr_change_instructions def change_update_rate(self, input_epoch): update_mapping = self.lr_change_instructions if input_epoch in update_mapping.keys(): new_lr = self.lr_change_instructions[input_epoch] self.lr = new_lr
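Note that assigning self.lr only silences the IDE warning; PyTorch optimizers actually read the learning rate from their param_groups, so if the new rate is meant to take effect, the method should update those as well. A minimal sketch of that variant:

def change_update_rate(self, input_epoch):
    if input_epoch in self.lr_change_instructions:
        new_lr = self.lr_change_instructions[input_epoch]
        for group in self.param_groups:  # this is what SGD consults on each step
            group['lr'] = new_lr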
https://stackoverflow.com/questions/59924904/
Why do we need pack_padded_sequence() when we have pack_sequence()?
After reading the answers to this question I'm still a bit confused about the whole PackedSequence object thing. As I understand it, this is an object optimized for parallel processing of variable-sized sequences in recurrent models, a problem to which zero padding is one [imperfect] solution. It seems that given a PackedSequence object, a Pytorch RNN will process each sequence in the batch to its end, and not continue to process the padding. So why is padding needed here? Why are there both pack_padded_sequence() and pack_sequence() methods?
Mostly for historical reasons; torch.nn.utils.rnn.pack_padded_sequence() was created before torch.nn.utils.rnn.pack_sequence() (the latter first appeared in 0.4.0, if I see correctly) and I suppose there was no reason to remove this functionality and break backward compatibility. Furthermore, it's not always clear what the best/fastest way to pad your input is, and it varies highly with the data you are using. When data was somehow padded beforehand (e.g. your data was pre-padded and provided to you like that) it is faster to use pack_padded_sequence() (see the source code of pack_sequence; it calculates the length of each data point for you and calls pad_sequence followed by pack_padded_sequence internally). Arguably pad_packed_sequence is rarely of use right now, though. Lastly, note the enforce_sorted argument, provided since version 1.2.0 for both of those functions. Not so long ago users had to sort their data (or batch) with the longest sequence first and the shortest last; now it can be done internally when this parameter is set to False.
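A small sketch of the two routes side by side (the toy sequences are just illustrative); both should yield the same PackedSequence:

import torch
from torch.nn.utils.rnn import pack_sequence, pad_sequence, pack_padded_sequence

seqs = [torch.tensor([1., 2., 3.]), torch.tensor([4., 5.])]  # sorted longest-first

packed_a = pack_sequence(seqs)  # computes lengths, pads and packs in one call

padded = pad_sequence(seqs)     # (max_len, batch) zero-padded tensor
packed_b = pack_padded_sequence(padded, lengths=torch.tensor([3, 2]))

assert torch.equal(packed_a.data, packed_b.data)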
https://stackoverflow.com/questions/59938530/
Gradient calculation in A2C
In A2C, the actor-critic algorithm, the weights are updated via equations: delta = TD Error and theta = theta + alpha*delta*[Grad(log(PI(a|s,theta)))] and w = w + beta*delta*[Grad(V(s,w))] So my question is, when using neural networks to implement this, how are the gradients calculated, and am I correct that the weights are updated via the optimization methods in TensorFlow or PyTorch? Thanks, Jon
I'm not quite clear what you mean to update with w, but I'll answer the question for theta, assuming it denotes the parameters for the actor model. 1) Gradients can be calculated in a variety of ways, but if focusing on PyTorch, you can call .backward() on f(x) = alpha * delta * log(PI(a|s,theta)), which will compute df/dx for every parameter x that is chained to f(x) via autograd. 2) You are indeed correct that the weights are updated via the optimization methods in PyTorch (the gradients themselves come from autograd). However, in order to complete the optimization step, you must call step() on whatever torch.optim optimizer you have constructed over the network's parameters (e.g. weights and biases).
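A minimal sketch of what that looks like in PyTorch (log_prob, delta, alpha and actor_optimizer are placeholder names, not part of the question):

# delta is the TD error; .detach() treats it as a constant weight on the gradient
actor_loss = -(alpha * delta.detach() * log_prob)
actor_optimizer.zero_grad()
actor_loss.backward()   # autograd fills .grad for every actor parameter
actor_optimizer.step()  # gradient step on theta

(the minus sign turns the gradient-ascent update from the question into the gradient-descent convention that optimizers use)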
https://stackoverflow.com/questions/59940538/
L1 regularization neural network in Pytorch does not yield sparse solution
I'm implementing a neural network with l1 regularization in pytorch. I directly add the l1 norm penalty to the loss function. The framework is basically the same as Lack of Sparse Solution with L1 Regularization in Pytorch, however, the solution is not sparse no matter how I adjust the tuning parameter. How do I make the solution sparse? My code is pasted below. class NeuralNet(nn.Module): """ neural network class, with nn api """ def __init__(self, input_size: int, hidden_size: List[int], output_size: int): """ initialization function @param input_size: input data dimension @param hidden_size: list of hidden layer sizes, arbitrary length @param output_size: output data dimension """ super().__init__() self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.relu = nn.ReLU() self.softmax = nn.Softmax(dim=1) """layers""" self.input = nn.Linear(self.input_size, self.hidden_size[0], bias=False) self.hiddens = nn.ModuleList([ nn.Linear(self.hidden_size[h], self.hidden_size[h + 1]) for h in range(len(self.hidden_size) - 1)]) self.output = nn.Linear(hidden_size[-1], output_size) def forward(self, x: torch.Tensor) -> torch.Tensor: """ forward propagation process, required by the nn.Module class @param x: the input data @return: the output from neural network """ x = self.input(x) x = self.relu(x) for hidden in self.hiddens: x = hidden(x) x = self.relu(x) x = self.output(x) x = self.softmax(x) return x def estimate(self, x, y, l1: bool = False, lam: float = None, learning_rate: float = 0.1, batch_size: int = 32, epochs: int = 50): """ estimates the neural network model @param x: training data @param y: training label @param l1: whether to use l1 norm regularization @param lam: tuning parameter @param learning_rate: learning rate @param batch_size: batch size @param epochs: number of epochs @return: null """ input_size = x.shape[1] hidden_size = [50, 30, 10] output_size = 2 model = NeuralNet(input_size, hidden_size, output_size) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) trainset = [] for i in range(x.shape[0]): trainset.append([x[i, :], y[i]]) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True) for e in range(epochs): running_loss = 0 for data, label in trainloader: input_0 = data.view(data.shape[0], -1) optimizer.zero_grad() output = model(input_0.float()) loss = torch.nn.CrossEntropyLoss()(output, label.squeeze(1)) if l1: if lam is None: raise ValueError("lam needs to be specified when l1 is True.") else: for w in model.parameters(): if w.dim() > 1: loss = loss + lam * w.norm(1) loss.backward() optimizer.step() running_loss += loss.item() Doesn't pytorch support sparse solution?
Adding the l1 norm to the loss in PyTorch does not, by itself, drive parameters exactly to zero: the optimizer follows a (sub)gradient of |w|, so weights oscillate around zero rather than landing on it. To get a truly sparse solution you have to manually apply a soft-thresholding (proximal) step, $s_{\lambda}(z)=\operatorname{sign}(z)\,(|z|-\lambda)_{+}$, after each iteration, which sets the parameters whose magnitude has shrunk below $\lambda$ to exactly zero, and repeat until convergence.
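A sketch of that proximal step for the model in the question (lam plays the role of $\lambda$; in a textbook proximal-gradient setup it would be scaled by the learning rate):

@torch.no_grad()
def soft_threshold(model, lam):
    for w in model.parameters():
        if w.dim() > 1:  # same weight-only filter as the question's penalty
            w.copy_(torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0))

This would be called once per iteration, right after optimizer.step(); the l1 term can then be dropped from the loss itself.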
https://stackoverflow.com/questions/59941063/
Why are the parameters saved in the checkpoint different from the ones in the fused model?
I trained a QAT (Quantization Aware Training) based model in Pytorch, the training went on smoothly. However when I tried to load the weights into the fused model and run a test on widerface dataset I faced lots of errors: (base) marian@u04-2:/mnt/s3user/Pytorch_Retinaface_quantized# python test_widerface.py --trained_model ./weights/mobilenet0.25_Final_quantized.pth --network mobile0.25layers: Loading pretrained model from ./weights/mobilenet0.25_Final_quantized.pth remove prefix 'module.' Missing keys:235 Unused checkpoint keys:171 Used keys:65 Traceback (most recent call last): File "/root/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py", line 43, in <module> main(ptvsdArgs) File "/root/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main run() File "/root/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file runpy.run_path(target, run_name='__main__') File "/root/anaconda3/lib/python3.7/runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "/root/anaconda3/lib/python3.7/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/root/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/mnt/f3user/Pytorch_Retinaface_quantized/test_widerface.py", line 114, in <module> net = load_model(net, args.trained_model, args.cpu) File "/mnt/f3user/Pytorch_Retinaface_quantized/test_widerface.py", line 95, in load_model model.load_state_dict(pretrained_dict, strict=False) File "/root/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 830, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for RetinaFace: While copying the parameter named "ssh1.conv3X3.0.weight", whose dimensions in the model are torch.Size([32, 64, 3, 3]) and whose dimensions in the checkpoint are torch.Size([32, 64, 3, 3]). While copying the parameter named "ssh1.conv5X5_2.0.weight", whose dimensions in the model are torch.Size([16, 16, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 16, 3, 3]). While copying the parameter named "ssh1.conv7x7_3.0.weight", whose dimensions in the model are torch.Size([16, 16, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 16, 3, 3]). While copying the parameter named "ssh2.conv3X3.0.weight", whose dimensions in the model are torch.Size([32, 64, 3, 3]) and whose dimensions in the checkpoint are torch.Size([32, 64, 3, 3]). While copying the parameter named "ssh2.conv5X5_2.0.weight", whose dimensions in the model are torch.Size([16, 16, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 16, 3, 3]). ..... The full list can be found here. basically the weights cant be found. plus the scale and zero_point which are missing from the fused model. in case it matters, the following snippet is the actual training loop which was used to train and save the model : if __name__ == '__main__': # train() ... net = RetinaFace(cfg=cfg) print("Printing net...") print(net) net.fuse_model() ... net.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm') torch.quantization.prepare_qat(net, inplace=True) print(f'quantization preparation done.') ... 
quantized_model = net for i in range(max_epoch): net = net.to(device) train_one_epoch(net, data_loader, optimizer, criterion, cfg, gamma, i, step_index, device) if i in stepvalues: step_index += 1 if i > 3 : net.apply(torch.quantization.disable_observer) if i > 2 : net.apply(torch.nn.intrinsic.qat.freeze_bn_stats) net=net.cpu() quantized_model = torch.quantization.convert(net.eval(), inplace=False) quantized_model.eval() # evaluate on test set ?! torch.save(net.state_dict(), save_folder + cfg['name'] + '_Final.pth') torch.save(quantized_model.state_dict(), save_folder + cfg['name'] + '_Final_quantized.pth') #torch.jit.save(torch.jit.script(quantized_model), save_folder + cfg['name'] + '_Final_quantized_jit.pth') For testing, test_widerface.py is used, which can be accessed here. You can view the keys here. Why has this happened? How should this be taken care of? Update I checked the names, and created a new state_dict dictionary and inserted the 112 keys that were in both the checkpoint and the model using the snippet below: new_state_dict = {} checkpoint_state_dict = torch.load(checkpoint_path, map_location=lambda storage, loc: storage) for (ck, cp) in checkpoint_state_dict.items(): for (mk, mp) in model.state_dict().items(): kname,kext = os.path.splitext(ck) mname,mext = os.path.splitext(mk) # check the two parameters and see if they are the same # then use the model's key naming scheme and the checkpoint's weights if kname+kext == mname+mext or kname+'.0'+kext == mname+mext: new_state_dict[mname+mext] = cp else: if kext in ('.scale','.zero_point'): new_state_dict[ck] = cp and then used this new state_dict! Yet I'm getting the very same errors! Meaning errors like this: RuntimeError: Error(s) in loading state_dict for RetinaFace: While copying the parameter named "ssh1.conv3X3.0.weight", whose dimensions in the model are torch.Size([32, 64, 3, 3]) and whose dimensions in the checkpoint are torch.Size([32, 64, 3, 3]). This is really frustrating and there is no documentation concerning this! I'm completely clueless here.
I finally found out the cause. Error messages of the form: While copying the parameter named "xxx.weight", whose dimensions in the model are torch.Size([yyy]) and whose dimensions in the checkpoint are torch.Size([yyy]). are actually generic messages, only returned when an exception has occurred while copying the parameters in question. PyTorch developers could easily add the actual exception args into this vague and unhelpful message, so it could actually help better debug the issue at hand. Anyway, looking at the underlying exception, which was: "copy_" not implemented for 'QInt8' you'll now know what the actual issue is/was!
https://stackoverflow.com/questions/59941776/
'NoneType' object has no attribute 'size' - how to do face detection using MTCNN?
I'm trying to build real-time face recognition with a neural network (FaceNet) using PyTorch, and face detection using MTCNN. I've tried this for detecting faces in real time (from a webcam) but it doesn't work. Read frames, then run them through the MTCNN detector: import cv2 capture = cv2.VideoCapture(0) while(True): ret, frame = capture.read() frames_tracked = [] print('\rTracking frame: {}'.format(i + 1), end='') boxes,_ = mtcnn.detect(frame) frame_draw = frame.copy() draw = ImageDraw.Draw(frame_draw) for box in boxes: draw.rectangle(box.tolist(), outline=(255, 0, 0), width=6) frames_tracked.append(frame_draw.resize((640, 360), Image.BILINEAR)) d = display.display(frames_tracked[0], display_id=True) i = 1 try: while True: d.update(frames_tracked[i % len(frames_tracked)]) i += 1 except KeyboardInterrupt: pass if cv2.waitKey('q') == 27: break capture.release() cv2.destroyAllWindows() But it raises this error (this is the entire traceback: http://dpaste.com/0HR58RQ): AttributeError: 'NoneType' object has no attribute 'size' Is there a solution for this problem? What caused this error? Thanks for your advice.
Let's take a look at that error again. AttributeError: 'NoneType' object has no attribute 'size' So, somewhere in your code you (or mtcnn) are trying to access the size attribute of a None variable. You are passing frame to mtcnn using the following command: boxes,_ = mtcnn.detect(frame) This is exactly where you see that error, because you are passing a None variable to mtcnn (capture.read() returns None for the frame whenever no image could be grabbed). To prevent it, check the frame before calling this method. In other words: ret, frame = capture.read() if frame is None: continue (Use is None rather than == None here: once frame is a NumPy array, == would attempt an element-wise comparison.)
https://stackoverflow.com/questions/59950464/
Define a custom variable pytorch tensor
I have a problem similar to this one: There is an input vector of the form (n1, n2), and I want a model where f(n1, n2) = f(n2, n1). I want this model to be linear, for instance a 2x2 matrix that I want to learn. The 2x2 matrix has 4 weights which the training will try to learn. However, the constraint allows a clever reduction of the problem. f(nvec) = W*nvec, where nvec = (n1, n2). So, let X = [[0, 1],[1, 0]] be a 2x2 matrix that flips vectors, then f(n2,n1) = W*X*(n1, n2) = f(n1, n2) = W (n1, n2). Basically, the equality implies that W*X = W, or if W = [[w1, w2], [w3, w4]], then the equality implies W = [[w1, w1], [w3, w3]], i.e. the matrix has only 2 free parameters. How can I define a model like this one in pytorch? Thanks.
If you have a special structure to your parameters, and you do not want to use the default parameters stored by the default layers, you can define a layer on your own and use nn.Parameter to store your parameters. For example, class CustomLayer(nn.Module): # layer must be derived from nn.Module class def __init__(self): # you can have arguments here... super(CustomLayer, self).__init__() self.params = nn.Parameter(data=torch.rand((2, 1), dtype=torch.float), requires_grad=True) def forward(self, x): w = self.params.repeat(1, 2) # build the 2x2 matrix [[w1, w1], [w3, w3]] from the two parameters y = torch.matmul(x, w.t()) # x holds row vectors, so x @ W^T computes W*nvec per sample (torch.bmm would require 3D inputs) return y
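A quick usage sketch to verify the symmetry property (the input values are arbitrary):

layer = CustomLayer()
a = torch.tensor([[1.0, 2.0]])
b = torch.tensor([[2.0, 1.0]])
print(layer(a))
print(layer(b))  # same output as layer(a), since f(n1, n2) = f(n2, n1)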
https://stackoverflow.com/questions/59960545/
Keeping gradients while rearranging data in a tensor, with pytorch
I have a scheme where I store a matrix with zeros on the diagonal as a vector. I want to later on optimize over that vector, so I require gradient tracking. My challenge is to reshape between the two. I want - for domain-specific reasons - to keep the order of data in the matrix so that transposed elements of the W matrix are next to each other in the vector form. The size of the W matrix is subject to change, so I start by enumerating items in the top-left part of the matrix, and continue outwards. I have come up with two ways to do this. See code snippet. import torch import torch.sparse w = torch.tensor([10,11,12,13,14,15],requires_grad=True,dtype=torch.float) i = torch.LongTensor([ [0, 1,0], [1, 0,1], [0, 2,2], [2, 0,3], [1, 2,4], [2, 1,5], ]) v = torch.FloatTensor([1, 1, 1 ,1,1,1 ]) reshaper = torch.sparse.FloatTensor(i.t(), v, torch.Size([3,3,6])).to_dense() W_mat_with_reshaper = reshaper @ w W_mat_directly = torch.tensor([ [0, w[0], w[2],], [w[1], 0, w[4],], [w[3], w[5], 0,], ]) print(W_mat_with_reshaper) print(W_mat_directly) and this gives output tensor([[ 0., 10., 12.], [11., 0., 14.], [13., 15., 0.]], grad_fn=<UnsafeViewBackward>) tensor([[ 0., 10., 12.], [11., 0., 14.], [13., 15., 0.]]) As you can see, the direct way to reshape the vector into a matrix does not have a grad function, but the multiply-with-a-reshaper-tensor does. Creating the reshaper-tensor seems like it will be a hassle, but on the other hand, manually writing out the matrix form is also infeasible. Is there a way to do arbitrary reshapes in pytorch that keep track of gradients?
Instead of constructing W_mat_directly from the elements of w, try assigning w into W: W_mat_directly = torch.zeros((3, 3), dtype=w.dtype) W_mat_directly[(0, 0, 1, 1, 2, 2), (1, 2, 0, 2, 0, 1)] = w You'll get tensor([[ 0., 10., 11.], [12., 0., 13.], [14., 15., 0.]], grad_fn=<IndexPutBackward>) Indexed assignment is itself a differentiable operation (note the grad_fn=<IndexPutBackward>), so the gradient flows back into w; adjust the two index tuples to get the exact element ordering you want.
https://stackoverflow.com/questions/59962904/
Pytorch dataloader for sentences
I have collected a small dataset for binary text classification and my goal is to train a model with the method proposed by Convolutional Neural Networks for Sentence Classification I started my implementation by using the torch.util.data.Dataset. Essentially every sample in my dataset my_data looks like this (as example): {"words":[0,1,2,3,4],"label":1}, {"words":[4,9,20,30,4,2,3,4,1],"label":0} Next I took a look at Writing custom dataloaders with pytorch: using: dataloader = DataLoader(my_data, batch_size=2, shuffle=False, num_workers=4) I would suspect that enumerating over a batch would yield something like the following: {"words":[[0,1,2,3,4],[4,9,20,30,4,2,3,4,1]],"labels":[1,0]} However it is more like this: {"words":[[0,4],[1,9],[2,20],[3,30],[4,4]],"label":[1,0]} I guess it has something to do with the fact that they are not of equal size. Do they need to be the same size, and if so how can I achieve it? For people knowing about this paper, what does your training data look like? edit: class CustomDataset(Dataset): def __init__(self, path_to_file, max_size=10, transform=None): with open(path_to_file) as f: self.data = json.load(f) self.transform = transform self.vocab = self.build_vocab(self.data) self.word2idx, self.idx2word = self.word2index(self.vocab) def get_vocab(self): return self.vocab def get_word2idx(self): return self.word2idx, self.idx2word def __len__(self): return len(self.data) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() inputs_ = word_tokenize(self.data[idx][0]) inputs_ = [w for w in inputs_ if w not in stopwords] inputs_ = [w for w in inputs_ if w not in punctuation] inputs_ = [self.word2idx[w] for w in inputs_] # convert words to index label = {"positive": 1,"negative": 0} label_ = label[self.data[idx][1]] #convert label to 0|1 sample = {"words": inputs_, "label": label_} if self.transform: sample = self.transform(sample) return sample def build_vocab(self, corpus): word_count = {} for sentence in corpus: tokens = word_tokenize(sentence[0]) for token in tokens: if token not in word_count: word_count[token] = 1 else: word_count[token] += 1 return word_count def word2index(self, word_count): word_index = {w: i for i, w in enumerate(word_count)} idx_word = {i: w for i, w in enumerate(word_count)} return word_index, idx_word
As you correctly suspected, this is mostly a problem of different tensor shapes. Luckily, PyTorch offers you several solutions of varying simplicity to achieve what you desire (batch sizes >= 1 for text samples): The highest-level solution is probably torchtext, which provides several solutions out of the box to load (custom) datasets for NLP tasks. If you can make your training data fit in any one of the described loaders, this is probably the recommended option, as there is a decent documentation and several examples. If you prefer to build a solution, there are padding solutions like torch.nn.utils.rnn.pad_sequence, in combination with torch.nn.utils.rnn.pack_padded_sequence, or the combination of both (torch.nn.utils.rnn.pack_sequence). This generally allows you a lot more flexibility, which may or may not be something that you require. Personally, I have had good experiences using just pad_sequence, and sacrifice a bit of speed for a much clearer debugging state, and seemingly others have similar recommendations.
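As for the odd-looking batches: DataLoader's default collate function zips the per-sample lists element-wise, truncating to the shortest, which is exactly why a 5-token and a 9-token sample came out as five [x, y] pairs. A sketch of a custom collate_fn built on pad_sequence (padding_value=0 is an assumption; reserve an index for padding in your vocabulary):

from torch.nn.utils.rnn import pad_sequence

def collate(batch):
    words = [torch.tensor(sample["words"]) for sample in batch]
    labels = torch.tensor([sample["label"] for sample in batch])
    padded = pad_sequence(words, batch_first=True, padding_value=0)
    return {"words": padded, "label": labels}

dataloader = DataLoader(my_data, batch_size=2, shuffle=False, num_workers=4, collate_fn=collate)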
https://stackoverflow.com/questions/59971324/
Where is torch.cholesky and how does torch locate its methods?
I'm doing some research into Cholesky decomposition, which requires some insights into how torch.cholesky works. After a while of grep-ing and searching through ATen, I got stuck at TensorMethods.h, which interestingly has the following code: inline Tensor Tensor::cholesky(bool upper) const { #ifdef USE_STATIC_DISPATCH return TypeDefault::cholesky(const_cast<Tensor&>(*this), upper); #else static c10::OperatorHandle op = c10::Dispatcher::singleton().findSchema({"aten::cholesky", ""}).value(); return c10::Dispatcher::singleton().callUnboxed<Tensor, const Tensor &, bool>( op, impl::dispatchTypeId(at::detail::multi_dispatch_tensor_type_set(*this)), const_cast<Tensor&>(*this), upper); #endif } This raised the question of how torch locates its methods. Thank you!
Take a look at aten/src/ATen/native/README.md which describes how functions are registered to the API. ATen "native" functions are the modern mechanism for adding operators and functions to ATen (they are "native" in contrast to legacy functions, which are bound via TH/THC cwrap metadata). Native functions are declared in native_functions.yaml and have implementations defined in one of the cpp files in this directory. If we take a look at aten/src/ATen/native/native_functions.yaml and search for cholesky we find - func: cholesky(Tensor self, bool upper=False) -> Tensor use_c10_dispatcher: full variants: method, function To find the entry-point you basically just have to search the .cpp files in the aten/src/ATen/native directory and locate the function named cholesky. Currently it can be found at BatchLinearAlgebra.cpp:550 Tensor cholesky(const Tensor &self, bool upper) { if (self.size(-1) == 0) { return at::empty_like(self, LEGACY_CONTIGUOUS_MEMORY_FORMAT); } squareCheckInputs(self); auto raw_cholesky_output = at::_cholesky_helper(self, upper); if (upper) { return raw_cholesky_output.triu_(); } else { return raw_cholesky_output.tril_(); } } From this point it's just a matter of following the C++ code to understand what's going on.
https://stackoverflow.com/questions/59978260/
Error training RNN with pytorch: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Hello everyone, I am trying to create a model using the PyTorch RNN class and to train this model using minibatches. My dataset is a simple time series (one input, one output). Here is what my model looks like: class RNN_pytorch(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN_pytorch, self).__init__() self.hidden_size = hidden_size self.input_size = input_size self.rnn = nn.RNN(input_size, hidden_size, num_layers=1) self.linear = nn.Linear(hidden_size, output_size) def forward(self, x, hidden): batch_size = x.size(1) # print(batch_size) hidden = self.init_hidden(batch_size) out, hidden = self.rnn(x, hidden) # out = out.view(out.size(1), out.size(2)) print("Input linear : ", out.size()) out = self.linear(out) return out, hidden def init_hidden(self, batch_size): hidden = torch.zeros(1, batch_size, self.hidden_size) # print(hidden.size()) return hidden Then I process my dataset and split it like so: batch_numbers = 13 batch_size = int(len(train_signal[:-1])/batch_numbers) print("Train sample total size =", len(train_signal[:-1])) print("Number of batches = ", batch_numbers) print("Size of batches = {} (train_size / batch_numbers)".format(batch_size)) train_signal_batched = train_signal[:-1].reshape(batch_numbers, batch_size, 1) train_label_batched = train_signal[1:].reshape(batch_numbers, batch_size, 1) print("X_train shape =", train_signal_batched.shape) print("Y_train shape =", train_label_batched.shape) Returning: Train sample total size = 829439 Number of batches = 13 Size of batches = 63803 (train_size / batch_numbers) X_train shape = (13, 63803, 1) Y_train shape = (13, 63803, 1) So far so good, but then I try to train my model: rnn_mod = RNN_pytorch(1, 16, 1) criterion = nn.MSELoss() optimizer = torch.optim.RMSprop(rnn_mod.parameters(), lr=0.01) n_epochs = 3 hidden = rnn_mod.init_hidden(batch_size) for epoch in range(1, n_epochs): for i, batch in enumerate(train_signal_batched): optimizer.zero_grad() x = torch.Tensor([batch]).float() print("Input : ",x.size()) out, hidden = rnn_mod.forward(x, hidden) print("Output : ",out.size()) label = torch.Tensor([train_label_batched[i]]).float() print("Label : ", label.size()) loss = criterion(output, label) print("Loss : ", loss) loss.backward(retain_graph=True) optimizer.step() print("*", end="") # if epoch % 100 == 0: print("Step {} --- Loss {}".format(epoch, loss)) Which leads to the error: Input : torch.Size([1, 63803, 1]) Input linear : torch.Size([1, 63803, 16]) Output : torch.Size([1, 63803, 1]) Label : torch.Size([1, 63803, 1]) Loss : tensor(0.0051) /home/kostia/.virtualenvs/machine-learning/lib/python3.6/site-packages/torch/nn/modules/loss.py:431: UserWarning: Using a target size (torch.Size([1, 63803, 1])) that is different to the input size (torch.Size([1, 1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-217-d019358438ff> in <module> 17 loss = criterion(output, label) 18 print("Loss : ", loss) ---> 19 loss.backward(retain_graph=True) 20 optimizer.step() 21 print("*", end="") ~/.virtualenvs/machine-learning/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 116 products. Defaults to ``False``.
117 """ --> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph) 119 120 def register_hook(self, hook): ~/.virtualenvs/machine-learning/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 91 Variable._execution_engine.run_backward( 92 tensors, grad_tensors, retain_graph, create_graph, ---> 93 allow_unreachable=True) # allow_unreachable flag 94 95 RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn Can anyone tell me what is the problem here because I honestly doesn't have a clue ? Thanks in advance
It is as the error says: the loss tensor doesn't require grad and doesn't have a grad_fn. Your out in forward in RNN_pytorch should already have a grad_fn, so start checking there. See if you somehow disabled gradient tracking, for example by running under torch.no_grad() somewhere (note that .eval() by itself does not turn gradients off). Note also that your training loop computes out but passes output to the criterion - presumably a stale variable from an earlier notebook cell, which the broadcast warning about torch.Size([1, 1, 1]) also hints at - and that alone would explain a loss with no grad_fn.
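A quick debugging sketch along those lines, placed inside the training loop from the question:

out, hidden = rnn_mod.forward(x, hidden)
print(out.requires_grad, out.grad_fn)    # expect True and a non-None grad_fn
loss = criterion(out, label)             # note: out, not the stale `output`
print(loss.requires_grad, loss.grad_fn)  # must also be True / non-None before .backward()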
https://stackoverflow.com/questions/59994462/
Pytorch CNN loss is not changing
I am making a CNN for a binary classification problem between images of bees and ants. Images are of 500x500 dimension with 3 channels. Here is my code. Dataloader: def load_data(path): data = [] ant = 0 bee = 0 for folder in os.listdir(path): print(folder) curfolder = os.path.join(path, folder) for file in os.listdir(curfolder): image = plt.imread(curfolder+'/'+file) image = cv2.resize(image, (500,500)) if folder == 'ants': ant += 1 data.append([np.array(image) , np.eye(2)[0]]) elif folder == 'bees': bee += 1 data.append([np.array(image) , np.eye(2)[1]]) np.random.shuffle(data) np.save('train.npy',data) print('ants : ',ant) print('bees : ',bee) training_data = np.load("train.npy",allow_pickle=True) print(len(training_data)) CNN class class Net(nn.Module): def __init__(self): super().__init__() # just run the init of parent class (nn.Module) self.conv1 = nn.Conv2d(3, 32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window self.conv2 = nn.Conv2d(32, 64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 kernel / window self.conv3 = nn.Conv2d(64, 128, 5) x = torch.randn(3,500,500).view(-1,3,500,500) self._to_linear = None self.convs(x) print(self._to_linear) self.fc1 = nn.Linear(self._to_linear, 512) #flattening. self.fc2 = nn.Linear(512, 2) # 512 in, 2 out bc we're doing 2 classes (dog vs cat). def convs(self, x): # max pooling over 2x2 x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) if self._to_linear is None: self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2] return x def forward(self, x): x = self.convs(x) x = x.view(-1, self._to_linear) # .view is reshape ... this flattens X before x = F.relu(self.fc1(x)) x = self.fc2(x) # bc this is our output layer. No activation here. return F.softmax(x, dim=1) net = Net() print(net) loss and optimizer import torch.optim as optim optimizer = optim.Adam(net.parameters(), lr=0.001) loss_function = nn.MSELoss() Data Flat train_X = torch.Tensor([i[0] for i in training_data]).view(-1,3,500,500) train_X = train_X/255.0 train_y = torch.Tensor([i[1] for i in training_data]) training the model device = torch.device("cuda:0") net = Net().to(device) print(len(train_X)) epochs = 10 BATCH_SIZE = 1 for epoch in range(epochs): for i in range(0, len(train_X), BATCH_SIZE): # from 0, to the len of x, stepping BATCH_SIZE at a time. [:50] ..for now just to dev #print(f"{i}:{i+BATCH_SIZE}") batch_X = train_X[i:i+BATCH_SIZE] batch_y = train_y[i:i+BATCH_SIZE] batch_X, batch_y = batch_X.to(device), batch_y.to(device) net.zero_grad() outputs = net(batch_X) loss = loss_function(outputs, batch_y) loss.backward() optimizer.step() # Does the update print(f"Epoch : {epoch}. Loss: {loss}") The loss does not change from one epoch to the next. I have tried changing the learning rate but the problem still remains. Epoch : 0. Loss: 0.23345321416854858 Epoch : 1. Loss: 0.23345321416854858 Epoch : 2. Loss: 0.23345321416854858 Epoch : 3. Loss: 0.23345321416854858 Epoch : 4. Loss: 0.23345321416854858 Epoch : 5. Loss: 0.23345321416854858 Epoch : 6. Loss: 0.23345321416854858 Epoch : 7. Loss: 0.23345321416854858 Epoch : 8. Loss: 0.23345321416854858 Epoch : 9. Loss: 0.23345321416854858 Thank you in advance.
In your training loop, you should do optimizer.zero_grad() instead of net.zero_grad() (here they zero the same gradients, since the optimizer was built from net.parameters(), but going through the optimizer is the conventional form). Also, you are using MSELoss() for a classification problem; you need something like nn.BCELoss(), nn.CrossEntropyLoss() or nn.NLLLoss(). Keep in mind that nn.CrossEntropyLoss() expects raw scores and integer class targets - it applies log-softmax internally - so the final F.softmax in forward() should be removed, as in the sketch below.
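A sketch of those changes against the question's code (the one-hot np.eye labels are converted back to class indices, since nn.CrossEntropyLoss wants a LongTensor of indices):

loss_function = nn.CrossEntropyLoss()       # replaces MSELoss
train_y_idx = train_y.argmax(dim=1).long()  # one-hot targets -> class indices

for epoch in range(epochs):
    for i in range(0, len(train_X), BATCH_SIZE):
        batch_X = train_X[i:i+BATCH_SIZE].to(device)
        batch_y = train_y_idx[i:i+BATCH_SIZE].to(device)
        optimizer.zero_grad()
        outputs = net(batch_X)              # forward() should now return raw logits (no softmax)
        loss = loss_function(outputs, batch_y)
        loss.backward()
        optimizer.step()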
https://stackoverflow.com/questions/60003876/
How to pass input dim from fit method to skorch wrapper?
I am trying to incorporate PyTorch functionalities into a scikit-learn environment (in particular Pipelines and GridSearchCV) and therefore have been looking into skorch. The standard documentation example for neural networks looks like import torch.nn.functional as F from torch import nn from skorch import NeuralNetClassifier class MyModule(nn.Module): def __init__(self, num_units=10, nonlin=F.relu): super(MyModule, self).__init__() self.dense0 = nn.Linear(20, num_units) self.nonlin = nonlin self.dropout = nn.Dropout(0.5) ... ... self.output = nn.Linear(10, 2) ... ... where you explicitly pass the input and output dimensions by hardcoding them into the constructor. However, this is not really how scikit-learn interfaces work, where the input and output dimensions are derived by the fit method rather than being explicitly passed to the constructors. As a practical example consider # copied from the documentation net = NeuralNetClassifier( MyModule, max_epochs=10, lr=0.1, # Shuffle training data on each epoch iterator_train__shuffle=True, ) # any general Pipeline interface pipeline = Pipeline([ ('transformation', AnyTransformer()), ('net', net) ]) gs = GridSearchCV(net, params, refit=False, cv=3, scoring='accuracy') gs.fit(X, y) Besides the fact that none of the transformers requires the input and output dimensions to be specified, the transformers that are applied before the model may change the dimensionality of the training set (think of dimensionality reductions and similar), so hardcoding input and output in the neural network constructor just will not do. Did I misunderstand how this is supposed to work, or otherwise what would be a suggested solution (I was thinking of moving the layer construction into the forward method, where you do have X available for fit already, but I am not sure this is good practice)?
This is a very good question and I'm afraid there is no best-practice answer to this, as PyTorch is normally written in a way where initialization and execution are separate steps, which is exactly what you don't want in this case. There are several ways forward which are all going in the same direction, namely introspecting the input data and re-initializing the network before fitting. The simplest way I can think of is writing a callback that sets the corresponding parameters at the start of training: class InputShapeSetter(skorch.callbacks.Callback): def on_train_begin(self, net, X, y): net.set_params(module__input_dim=X.shape[-1]) This sets a module parameter when training begins, which will re-initialize the PyTorch module with said parameter. This specific callback expects that the parameter for the first layer is called input_dim, but you can change this if you want. A full example: import torch import skorch from sklearn.datasets import make_classification from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA X, y = make_classification() X = X.astype('float32') class ClassifierModule(torch.nn.Module): def __init__(self, input_dim=80): super().__init__() self.l0 = torch.nn.Linear(input_dim, 10) self.l1 = torch.nn.Linear(10, 2) def forward(self, X): y = self.l0(X) y = self.l1(y) return torch.softmax(y, dim=-1) class InputShapeSetter(skorch.callbacks.Callback): def on_train_begin(self, net, X, y): net.set_params(module__input_dim=X.shape[-1]) net = skorch.NeuralNetClassifier( ClassifierModule, callbacks=[InputShapeSetter()], ) pipe = Pipeline([ ('pca', PCA(n_components=10)), ('net', net), ]) pipe.fit(X, y) print(pipe.predict(X))
https://stackoverflow.com/questions/60005715/
How to debug invalid gradient at index 0? - PyTorch
I am trying to train an actor-critic model, but when I reach the backprop for the critic I get this error: RuntimeError: invalid gradient at index 0 - expected type torch.cuda.FloatTensor but got torch.FloatTensor I am failing to identify which gradient the error refers to. Can anyone help? Here is the Stack trace: Traceback (most recent call last): File "train.py", line 338, in <module> main() File "train.py", line 327, in main reinforce_trainer.train(opt.start_reinforce, opt.start_reinforce + opt.critic_pretrain_epochs - 1, True, start_time) File "/home/fbommfim/init-tests/treeLSTM/lib/train/reinforce_trainer.py", line 56, in train train_reward, critic_loss = self.train_epoch(epoch, pretrain_critic, no_update) File "/home/fbommfim/init-tests/treeLSTM/lib/train/reinforce_trainer.py", line 153, in train_epoch critic_loss = self.critic.backward(baselines.cuda(), rewards, critic_weights.cuda(), num_words, self.critic_loss_func, regression=True) File "/home/fbommfim/init-tests/treeLSTM/lib/model/encoder_decoder/hybrid2seq_model.py", line 67, in backward outputs.backward(grad_output) File "/home/linuxbrew/.linuxbrew/Cellar/python/3.7.6_1/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/linuxbrew/.linuxbrew/Cellar/python/3.7.6_1/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: invalid gradient at index 0 - expected type torch.cuda.FloatTensor but got torch.FloatTensor and relevant code: train_epoch from reinforce_trainer def train_epoch(self, epoch, pretrain_critic, no_update): self.actor.train() # may also have self.critic.train() ? total_reward, report_reward = 0, 0 total_critic_loss, report_critic_loss = 0, 0 total_sents, report_sents = 0, 0 total_words, report_words = 0, 0 last_time = time.time() batch_count = len(self.train_data) batch_order = torch.randperm(batch_count) with tqdm(total = (batch_count)) as prog: for i in range(batch_count): batch = self.train_data[i] # batch_order[i] if self.opt.data_type == 'code': targets = batch[2] attention_mask = batch[1][2][0].data.eq(lib.Constants.PAD).t() elif self.opt.data_type == 'text': targets = batch[2] attention_mask = batch[0][0].data.eq(lib.Constants.PAD).t() elif self.opt.data_type == 'hybrid': targets = batch[2] attention_mask_code = batch[1][2][0].data.eq(lib.Constants.PAD).t() attention_mask_txt = batch[0][0].data.eq(lib.Constants.PAD).t() batch_size = targets.size(1) self.actor.zero_grad() self.critic.zero_grad() # Sample translations if self.opt.has_attn: if self.opt.data_type == 'code' or self.opt.data_type == 'text': self.actor.decoder.attn.applyMask(attention_mask) elif self.opt.data_type == 'hybrid': self.actor.decoder.attn.applyMask(attention_mask_code, attention_mask_txt) samples, outputs = self.actor.sample(batch, self.max_length) # Calculate rewards rewards, samples = self.sent_reward_func(samples.t().tolist(), targets.data.t().tolist()) reward = sum(rewards) # Perturb rewards (if specified). if self.pert_func is not None: rewards = self.pert_func(rewards) samples = torch.LongTensor(samples).t().contiguous() rewards = torch.FloatTensor([rewards] * samples.size(0)).contiguous() if self.opt.cuda: samples = samples.cuda() rewards = rewards.cuda() # Update critic. 
critic_weights = samples.ne(lib.Constants.PAD).float() num_words = critic_weights.data.sum() if not no_update: if self.opt.data_type == 'code': baselines = self.critic((batch[0], batch[1], samples, batch[3]), eval=False, regression=True) elif self.opt.data_type == 'text': baselines = self.critic((batch[0], batch[1], samples, batch[3]), eval=False, regression=True) elif self.opt.data_type == 'hybrid': baselines = self.critic((batch[0], batch[1], samples, batch[3]), eval=False, regression=True) critic_loss = self.critic.backward(baselines, rewards, critic_weights, num_words, self.critic_loss_func, regression=True) self.critic_optim.step() else: critic_loss = 0 # Update actor if not pretrain_critic and not no_update: # Subtract baseline from reward norm_rewards = (rewards - baselines).data actor_weights = norm_rewards * critic_weights # TODO: can use PyTorch reinforce() here but that function is a black box. # This is an alternative way where you specify an objective that gives the same gradient # as the policy gradient's objective, which looks much like weighted log-likelihood. actor_loss = self.actor.backward(outputs, samples, actor_weights, 1, self.actor_loss_func) self.optim.step() else: actor_loss = 0 # Gather stats total_reward += reward report_reward += reward total_sents += batch_size report_sents += batch_size total_critic_loss += critic_loss report_critic_loss += critic_loss total_words += num_words report_words += num_words self.opt.iteration += 1 print ("iteration: %s, loss: %s " % (self.opt.iteration, actor_loss)) print ("iteration: %s, reward: %s " % (self.opt.iteration, (report_reward / report_sents) * 100)) if i % self.opt.log_interval == 0 and i > 0: print("""Epoch %3d, %6d/%d batches; actor reward: %.4f; critic loss: %f; %5.0f tokens/s; %s elapsed""" % (epoch, i, batch_count, (report_reward / report_sents) * 100, report_critic_loss / report_words, report_words / (time.time() - last_time), str(datetime.timedelta(seconds=int(time.time() - self.start_time))))) report_reward = report_sents = report_critic_loss = report_words = 0 last_time = time.time() prog.update(1) return total_reward / total_sents, total_critic_loss / total_words and backward for hybrid2seq_model.py: def backward(self, outputs, targets, weights, normalizer, criterion, regression=False): grad_output, loss = self.generator.backward(outputs, targets, weights, normalizer, criterion, regression) outputs.cuda() grad_output.cuda() outputs.backward(grad_output) return loss
The error says it's expecting a cuda tensor and got a non-cuda tensor, so that's what I'd look for. Calls like grad_output.cuda() return a cuda tensor. It's not an in-place operation; the original tensor is left untouched. You probably wanted grad_output = grad_output.cuda(), so I'd start by fixing calls like that (the outputs.cuda() line in your backward method has the same problem).
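A sketch of the corrected tail of backward in hybrid2seq_model.py, keeping everything else as in the question:

def backward(self, outputs, targets, weights, normalizer, criterion, regression=False):
    grad_output, loss = self.generator.backward(outputs, targets, weights, normalizer, criterion, regression)
    grad_output = grad_output.cuda()  # reassign: .cuda() returns a new tensor
    outputs.backward(grad_output)
    return loss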
https://stackoverflow.com/questions/60009868/
Having trouble writing training loop in pytorch
I load data from a CSV file of 20+6 columns (features and labels). I'm trying to run my data through a convolutional neural network in PyTorch. I get an error saying it expects a 3D input and I'm giving it a 1D input. I am using Conv1d. import torch import torch.nn as nn import torch.nn.functional as F import numpy as np import pandas as pd from torch.utils.data import Dataset,DataLoader from sklearn.model_selection import train_test_split #Read Data data=pd.read_csv('Data.csv') Features=data[data.columns[0:20]] Labels=data[data.columns[20:]] #Split Data X_train, X_test, y_train, y_test = train_test_split( Features, Labels, test_size=0.33, shuffle=True) #Create Tensors train_in=torch.tensor(X_train.values) train_out=torch.tensor(y_train.values) test_in=torch.tensor(X_test.values) test_out=torch.tensor(y_test.values) #Model CNN class CNN(nn.Module): def __init__(self): super(CNN,self).__init__() self.layer1 = nn.Sequential( nn.Conv1d(20,40,kernel_size=5,stride=1,padding=2), nn.ReLU(), nn.MaxPool1d(kernel_size=2,stride=2) ) self.layer2 = nn.Sequential( nn.Conv1d(40,60,kernel_size=5,stride=1,padding=2), nn.ReLU(), nn.MaxPool1d(kernel_size=2,stride=2) ) self.drop_out = nn.Dropout() self.fc1 = nn.Linear(60,30) self.fc2 = nn.Linear(30,15) self.fc3 = nn.Linear(15,6) def forward(self,x): out=self.layer1(x) out=self.layer2(out) out=self.drop_out(out) out=self.fc1(out) out=self.fc2(out) out=self.fc3(out) return out Epochs=10 N_labels=len(Labels.columns) N_features=len(Features.columns) batch_size=100 learning_rate=0.001 #TRAIN MODEL model = CNN() #LOSS AND OPTIMIZER criterion = torch.nn.SmoothL1Loss() optimizer = torch.optim.Adam(model.parameters(),lr=learning_rate) #TRAIN MODEL model.train() idx=0 for i in train_in: y=model(i) loss=criterion(y,train_out[idx]) idx+=1 loss.backward() optimizer.step() How do I write the training and eval loop? All the examples I see on the internet use images and they also use DataLoader.
Conv1d takes as input a tensor with 3 dimensions (N, C, L), where N is the batch size, C is the number of channels and L the length of the 1D data. In your case it seems like one sample has 20 entries and you have one channel. You have a batch_size variable but it is not used in the code posted. nn.Conv1d(20,40,kernel_size=5,stride=1,padding=2) This line creates a convolution which takes an input with 20 channels (you have 1) and outputs 40 channels. So you have to change the 20 to a 1, and you might want to change the 40 to something smaller. Since convolutions are applied across the whole input (controlled by stride, padding and kernel size), there is no need to specify the size of a sample. Also, you might want to add some logic to build minibatches; right now it seems like you just input every sample by itself. Maybe read a bit about dataset classes and data loaders in PyTorch - a sketch of both changes follows below.
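A rough sketch reusing the variable names from the question (note that the fully-connected sizes inside CNN would also have to match the flattened convolution output, which this sketch does not recompute):

# each (20,)-feature row becomes a (channels=1, length=20) sequence
dataset = torch.utils.data.TensorDataset(train_in.float().unsqueeze(1),  # (num_samples, 1, 20)
                                         train_out.float())
loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)

model = CNN()  # with nn.Conv1d(1, 40, ...) as the first convolution
for epoch in range(Epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()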
https://stackoverflow.com/questions/60016597/
What does model.eval() do in pytorch?
When should I use .eval()? I understand it is supposed to allow me to "evaluate my model". How do I turn it back off for training? Example training code using .eval().
model.eval() is a kind of switch for some specific layers/parts of the model that behave differently during training and inference (evaluation) time, for example Dropout layers, BatchNorm layers etc. You need to turn them off during model evaluation, and .eval() will do it for you. In addition, the common practice for evaluation/validation is to use torch.no_grad() together with model.eval() to turn off gradient computation: # evaluate model: model.eval() with torch.no_grad(): ... out_data = model(data) ... BUT, don't forget to switch back to training mode after the eval step: # training step ... model.train() ...
https://stackoverflow.com/questions/60018578/
PyTorch C++ - how to know the recommended version of cuDNN?
I've previously inferenced TensorFlow graphs from C++. Now I'm embarking on working out how to inference PyTorch graphs via C++. My first question is, how can I know the recommended version of cuDNN to use with LibTorch, or if I'm doing my own PyTorch compile? Determining the recommended CUDA version is easy. Upon going to https://pytorch.org/ and choosing the options under Quick Start Locally (PyTorch Build, Your OS, etc.) the site makes it pretty clear that CUDA 10.1 is recommended, but there is no mention of the cuDNN version, and upon Googling I'm unable to find a definitive answer for this. From what I understand about PyTorch on Ubuntu, if you use the Python version you have to install the CUDA driver (ex. so nvidia-smi works, version 440 currently), but the CUDA and cuDNN install are not actually required beyond the driver because they are included in the pip3 package, is this correct? If so, then is there a command I can run in a Python script that shows the version of CUDA (expected to be 10.1) and cuDNN that the pip pre-compiled .whl uses? I suspect there is such a command but I'm not familiar enough with PyTorch yet to know what that may be or how to look it up. I've run into compile and inferencing errors using C++ with TensorFlow when I was not using the specific recommended version of cuDNN for a certain version of TensorFlow and CUDA, so I'm aware these versions can be sensitive and I have to make the right choices from the get-go. If anybody can assist in determining the recommended version of cuDNN for a certain version of PyTorch, that would be great.
CUDA is supported via the graphics card driver; AFAIK there's no separate "CUDA driver". The system graphics card driver pretty much just needs to be new enough to support the CUDA/cuDNN versions for the selected PyTorch version. To the best of my knowledge, backwards compatibility is included in most drivers. For example, a driver that supports CUDA 10.1 (reported via nvidia-smi) will also likely support CUDA 8, 9, 10.0. If you installed with pip or conda then a version of CUDA and cuDNN are included with the install. You can query the actual versions being used in Python with torch.version.cuda and torch.backends.cudnn.version().
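For completeness, a quick check of what the bundled binaries were built against (the printed values are illustrative, not guaranteed):

import torch
print(torch.version.cuda)               # e.g. '10.1'
print(torch.backends.cudnn.version())   # e.g. 7603
print(torch.cuda.is_available())        # True if the driver is new enough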
https://stackoverflow.com/questions/60019630/
Error Showing: 'ResNet' object has no attribute 'classifier'
I download Resnet18 model to train a model. When I type model it shows ResNet( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) (fc): Linear(in_features=512, out_features=1000, bias=True) (classifer): Sequential( (fc1): Linear(in_features=512, out_features=256, bias=True) (relu): ReLU() (fc5): Linear(in_features=128, out_features=2, bias=True) (output): LogSoftmax() ) ) As you can see it clearly shows the classifier but when I do optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) it shows an error AttributeError: 'ResNet' object has no attribute 'classifier' I don't know what mistake I am doing, if you can help that would be great. I can provide some extra details if you want.
Remove classifier and keep it as model.parameters() only: optimizer = optim.Adam(model.parameters(), lr=0.001) To construct an Optimizer you have to give it an iterable containing the parameters to optimize. Note also why the attribute lookup failed in the first place: in your printed model the head is spelled (classifer), so there is indeed no classifier attribute on this object; if you only want to optimize the new head, fix that attribute name where you assign it, and then model.classifier.parameters() will work.
https://stackoverflow.com/questions/60021722/
Pytorch Module: Why do we pass the class and object to the parent class initializer, in the __init__ method?
Why do we pass the class and the object (self) to the parent's init method for a pytorch Module? For example: class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() Why is the class RNN, as well as the object (self), passed to the parent's init?
Every method receives the instance of the class invoking the method as its first argument; __init__ is no exception. foo = RNN(...) causes foo.__init__(...) to be called, which is equivalent to RNN.__init__(foo, ...). super returns a "proxy" for the instance indicated by its arguments. The class indicates the starting point in the MRO to decide which class the proxy represents, and the second argument indicates which instance. You virtually never pass anything other than the class and self, and in Python 3 this is the default: super().__init__().
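For reference, a minimal sketch of the Python 3 shorthand mentioned above, which fills in both arguments automatically:

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()  # equivalent to super(RNN, self).__init__() in Python 3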
https://stackoverflow.com/questions/60021777/
Pytorch: RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered
I am running into the following error when trying to train this on this dataset. Since this is the configuration published in the paper, I am assuming I am doing something incredibly wrong. This error arrives on a different image every time I try to run training. C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1741, in <module> main() File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1735, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1135, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Noam/Code/vision_course/hopenet/deep-head-pose/code/original_code_augmented/train_hopenet_with_validation_holdout.py", line 187, in <module> loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\loss.py", line 431, in forward return F.mse_loss(input, target, reduction=self.reduction) File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\functional.py", line 2204, in mse_loss ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction)) RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered Any ideas?
This kind of error generally occurs when using NLLLoss or CrossEntropyLoss, and when your dataset has negative labels (or labels greater than or equal to the number of classes). That is also the exact error you are getting: Assertion t >= 0 && t < n_classes failed. This won't occur for MSELoss, but OP mentions that there is a CrossEntropyLoss somewhere, and thus the error occurs (the program crashes asynchronously on some other line, because CUDA kernels run asynchronously). The solution is to clean the dataset and ensure that t >= 0 && t < n_classes is satisfied (where t represents the label). Also, make sure your network output matches what the loss expects: NLLLoss wants log-probabilities (apply log_softmax) and BCELoss wants probabilities in the range 0 to 1 (apply sigmoid). Note that this is not required for CrossEntropyLoss or BCEWithLogitsLoss, because they implement the activation function inside the loss function. (Thanks to @PouyaB for pointing out).
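A quick sanity check for the labels (n_classes stands for the size of the model's output layer; running on CPU also makes the assert fire synchronously at the exact offending line):

for _, labels in train_loader:  # train_loader: your training DataLoader
    assert labels.min().item() >= 0
    assert labels.max().item() < n_classes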
https://stackoverflow.com/questions/60022388/
How do I use tensorboard with pytorch?
I was following the pytorch tensorboard tutorial: https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html. But I can't even start because of the following error: from torch.utils.tensorboard import SummaryWriter --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) ~/apps/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py in <module> 1 try: ----> 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ModuleNotFoundError: No module named 'tensorboard' During handling of the above exception, another exception occurred: ImportError Traceback (most recent call last) <ipython-input-4-c8ffdef1cfab> in <module> ----> 1 from torch.utils.tensorboard import SummaryWriter 2 3 # default `log_dir` is "runs" - we'll be more specific here 4 writer = SummaryWriter('runs/fashion_mnist_experiment_1') ~/apps/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/tensorboard/__init__.py in <module> 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ----> 4 raise ImportError('TensorBoard logging requires TensorBoard with Python summary writer installed. ' 5 'This should be available in 1.14 or above.') 6 from .writer import FileWriter, SummaryWriter # noqa F401 ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above. I installed pytorch 1.14 via conda. Am I supposed to install something else?
That tutorial should probably let you know that you need to install tensorboard. Take a look at the pytorch tensorboard docs which explains that you need to install tensorboard first. Basically you can install tensorboard using pip install tensorboard and then start the tensorboard server by running tensorboard --logdir=runs The runs directory is where your summary writer will write to and it's where the tensorboard server reads from to know what to visualize.
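Once tensorboard is installed, a minimal smoke test could look like this (the log directory and tag names are arbitrary):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/smoke_test')
for step in range(100):
    writer.add_scalar('dummy/value', step * 0.1, step)  # tag, scalar value, global step
writer.close()

Then run tensorboard --logdir=runs and open the printed localhost URL in a browser.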
https://stackoverflow.com/questions/60025291/
Best way to vectorize generating a batch of randomly rotated matrices in Numpy/PyTorch?
I’d like to generate batches of randomly rotated matrices based on an initial starting matrix (which has a shape of, for example, (4096, 3)), where the rotation applied to each matrix in the batch is randomly chosen from a group of rotation matrices (in my code in the original post, I only want to randomly select from 8 possible rotation angles). Therefore, what I end up with is a tensor of shape (batch_size, 4096, 3). My current approach is that I pre-make the possible rotated matrices (since I’m only dealing with 8 possible random rotations), and then use a for loop to generate the batch by randomly picking one of the eight pre-made rotated matrices for each item in the batch. This isn’t super efficient, so I was hoping to vectorize the whole process somehow. Right now, this is how I loop over a batch to generate a batch of rotated matrices one by one: for view_i in range(batch_size): # Get rotated view grid points randomly idx = torch.randint(0, 8, (1,)) pointsf = rotated_points[idx] In the code below, I generate a pre-made set of random rotation matrices that get randomly selected from in a for-loop over the batch. The make_3d_grid function generates a (grid_dim * grid_dim * grid_dim, 3) shaped matrix (basically a 2D array of x, y, z coordinate points). The get_rotation_matrix function returns a (3, 3) rotation matrix, where theta is used for rotation around the x-axis. rotated_points = [] grid_dim = 16 pointsf = make_3d_grid((-1,)*3, (1,)*3, (grid_dim,)*3) view_angles = torch.tensor([0, np.pi / 4.0, np.pi / 2.0, 3 * np.pi / 4.0, np.pi, 5 * np.pi / 4.0, 3 * np.pi / 2.0, 7 * np.pi / 4.0]) for i in range(len(view_angles)): theta = view_angles[i] rot = get_rotation_matrix(theta, torch.tensor(0.0), torch.tensor(0.0)) pointsf_rot = torch.mm(pointsf, rot) rotated_points.append(pointsf_rot) Any help in vectorizing this would be greatly appreciated! If code for this can be done in Numpy that works fine too, since I can convert it to PyTorch myself.
You can pre-generate your rotation matrices as a (batch_size, 3, 3) array, and then multiply by your (N, 3) points array broadcasted to (batch_size, N, 3). rotated_points = np.dot(pointsf, rots) np.dot will sum-product over the last axis of pointsf and the second-to-last axis of rots, putting the dimensions of pointsf first. This means that your result will be of shape (N, batch_size, 3) rather than (batch_size, N, 3). You can of course fix this with a simple axis swap: rotated_points = np.dot(pointsf, rots).transpose(1, 0, 2) OR rotated_points = np.swapaxes(np.dot(pointsf, rots), 0, 1) I would suggest, however, that you make rots be the inverse (transposed) rotation matrices from what you had before. In that case, you can just compute: rotated_points = np.dot(transposed_rots, pointsf.T) which gives shape (batch_size, 3, N), so swap the last two axes if you need (batch_size, N, 3). You should be able to convert np.dot to torch.matmul fairly trivially (torch.mm only handles 2D matrices).
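For the random-selection half of the problem, a small PyTorch sketch (assuming the precomputed rotated_points list and batch_size from the question) replaces the Python loop with one fancy-indexing op:

import torch

rotated = torch.stack(rotated_points)     # (8, N, 3), precomputed as in the question
idx = torch.randint(0, 8, (batch_size,))  # one random rotation id per batch item
batch = rotated[idx]                      # (batch_size, N, 3), no Python loop

Equivalently, you can batch the products themselves: torch.einsum('nk,bkj->bnj', pointsf, rots) multiplies the (N, 3) points by every (3, 3) matrix in a (batch_size, 3, 3) stack in one call.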
https://stackoverflow.com/questions/60028614/
Select specific rows of 2D PyTorch tensor
Suppose I have a 2D tensor looking something like this: [[44, 50, 1, 32], . . . [7, 13, 90, 83]] and a list of row indices that I want to select that looks something like this [0, 34, 100, ..., 745]. How can I go through and create a new tensor that contains only the rows whose indices are contained in the array?
You can select rows with a list of indices, just like in numpy: import torch x = torch.Tensor([[1, 2, 3, 4], [5, 6, 7, 8], [9, 8, 7, 6], [5, 4, 2, 1]]) indices = [0, 3] print(x[indices]) # tensor([[1., 2., 3., 4.], # [5., 4., 2., 1.]])
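If the indices already live in a tensor, index_select is an equivalent, more explicit alternative (a short sketch on the same x):

indices = torch.tensor([0, 3])      # must be an integer (long) tensor
rows = x.index_select(0, indices)   # dim 0 = rows; same result as x[indices]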
https://stackoverflow.com/questions/60032073/
How can I efficiently modify/make pairwise distance matrix?
x_norm = (x**2).sum(1).view(-1, 1) if y is not None: y_norm = (y**2).sum(1).view(1, -1) else: y = x y_norm = x_norm.view(1, -1) dist = (x_norm + y_norm - 2.0 * torch.mm(x, torch.transpose(y, 0, 1))) return dist Above is a code used to calculate pairwise distance matrix(M*N) between x (M points) and y (N points). I hope to make pairwise distance matrix that has 0 element when distance between two points is larger than specific value 'T'. In this case, what should I do? Thanks
I think you are looking for torch.where; note the direction of the condition, since you want zeros where the distance exceeds T: new_dist = torch.where(dist > T, torch.zeros_like(dist), dist) (older PyTorch versions require both value arguments to be tensors, hence torch.zeros_like(dist) rather than a plain 0.)
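A runnable sketch of the thresholding on random data (T is whatever cutoff you need):

import torch

dist = torch.rand(4, 4) * 2.0
T = 1.0
new_dist = torch.where(dist > T, torch.zeros_like(dist), dist)
print(new_dist)  # entries larger than T are zeroed, the rest keep their distance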
https://stackoverflow.com/questions/60032668/
Operator translate error occurs when I try to convert onnx file to caffe2
I train an object detection model on pytorch, and I have exported it to an onnx file. And I want to convert it to a caffe2 model : import onnx import caffe2.python.onnx.backend as onnx_caffe2_backend # Load the ONNX ModelProto object. model is a standard Python protobuf object model = onnx.load("CPU4export.onnx") # prepare the caffe2 backend for executing the model this converts the ONNX model into a # Caffe2 NetDef that can execute it. Other ONNX backends, like one for CNTK will be # available soon. prepared_backend = onnx_caffe2_backend.prepare(model) # run the model in Caffe2 # Construct a map from input names to Tensor data. # The graph of the model itself contains inputs for all weight parameters, after the input image. # Since the weights are already embedded, we just need to pass the input image. # Set the first input. W = {model.graph.input[0].name: x.data.numpy()} # Run the Caffe2 net: c2_out = prepared_backend.run(W)[0] # Verify the numerical correctness up to 3 decimal places np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3) print("Exported model has been executed on Caffe2 backend, and the result looks good!") I always get this error: RuntimeError: ONNX conversion failed, encountered 1 errors: Error while processing node: input: "90" input: "91" output: "92" op_type: "Resize" attribute { name: "mode" s: "nearest" type: STRING } . Exception: Don't know how to translate op Resize How can I solve it?
The problem is that the Caffe2 ONNX backend does not yet support the export of the Resize operator. Please raise an issue on the Caffe2 / PyTorch github -- there's an active community of developers who should be able to address this use case.
https://stackoverflow.com/questions/60034575/
Pytorch Change the learning rate based on number of epochs
I set the learning rate like this, but I find the accuracy cannot increase after training a few epochs: optimizer = optim.Adam(model.parameters(), lr = 1e-4) n_epochs = 10 for i in range(n_epochs): # some training here If I want to use a step decay: reduce the learning rate by a factor of 10 every 5 epochs, how can I do so?
You can use the learning rate scheduler torch.optim.lr_scheduler.StepLR from torch.optim.lr_scheduler import StepLR scheduler = StepLR(optimizer, step_size=5, gamma=0.1) It decays the learning rate of each parameter group by gamma every step_size epochs; see the docs here. Example from docs # Assuming optimizer uses lr = 0.05 for all groups # lr = 0.05 if epoch < 30 # lr = 0.005 if 30 <= epoch < 60 # lr = 0.0005 if 60 <= epoch < 90 # ... scheduler = StepLR(optimizer, step_size=30, gamma=0.1) for epoch in range(100): train(...) validate(...) scheduler.step() Example: import torch import torch.optim as optim optimizer = optim.SGD([torch.rand((2,2), requires_grad=True)], lr=0.1) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1) for epoch in range(1, 21): print('Epoch-{0} lr: {1}'.format(epoch, optimizer.param_groups[0]['lr'])) scheduler.step() if epoch % 5 == 0: print() Epoch-1 lr: 0.1 Epoch-2 lr: 0.1 Epoch-3 lr: 0.1 Epoch-4 lr: 0.1 Epoch-5 lr: 0.1 Epoch-6 lr: 0.010000000000000002 Epoch-7 lr: 0.010000000000000002 Epoch-8 lr: 0.010000000000000002 Epoch-9 lr: 0.010000000000000002 Epoch-10 lr: 0.010000000000000002 Epoch-11 lr: 0.0010000000000000002 Epoch-12 lr: 0.0010000000000000002 Epoch-13 lr: 0.0010000000000000002 Epoch-14 lr: 0.0010000000000000002 Epoch-15 lr: 0.0010000000000000002 Epoch-16 lr: 0.00010000000000000003 Epoch-17 lr: 0.00010000000000000003 Epoch-18 lr: 0.00010000000000000003 Epoch-19 lr: 0.00010000000000000003 Epoch-20 lr: 0.00010000000000000003 (Since PyTorch 1.1, scheduler.step() should be called after the optimizer update, which is why the learning rate is printed before stepping here.) More on How to adjust Learning Rate - torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs.
https://stackoverflow.com/questions/60050586/
How to translate or convert code from Python Pytorch to C++ Libtorch
I couldn't find the equivalent C++ calls in Libtorch (Pytorch C++ Frontend) for my Python Pytorch code. The documentation, according to my searches (Pytorch Discuss), does not exist yet for my code. I wonder if someone can guide me with the following pieces (below). I cut out the pieces where I've been having the most crashes (wrong usage) of Libtorch C++. import torch as th th.set_grad_enabled(False) ... X = th.zeros((nobs, 3+p), device=dev, dtype=th.float32) y = th.tensor(indata, device=dev, dtype=th.float32) diffilter = th.tensor([-1., 1.], device=dev, dtype=th.float32).view(1, 1, 2) dy = th.conv1d(y.view(1, 1, -1), diffilter).view(-1) z = dy[p:].clone() ... # X matrix X[:, 0] = 1 X[:, 1] = th.arange(p+1, n) X[:, 2] = y[p:-1] ... # master X Xm = th.zeros((nobsadf, 3+p), device=th.device('cpu'), dtype=th.float32) ... # batch matrix, vector and observations Xbt = th.zeros(batch_size, adfs_count, nobsadf, (3+p), device=th.device('cpu'), dtype=th.float32) ... t = 0 # start line for master main X OLS matrix/ z vector for i in range(nbatchs): for j in range(batch_size): # assemble batch_size matrices Xm[:] = X[t:t+nobsadf] ... Xbt[j, :, :, :] = Xm.repeat(adfs_count, 1).view(adfs_count, nobsadf, (3+p)) for k in range(adfs_count): Xbt[j, k, :k, :] = 0 nobt[j, k] = float(nobsadf-k-(p+3))
After suffering a lot!... I learned to better use the Pytorch Discuss forum for Pytorch and Libtorch information, using the tag C++ for example. Unfortunately, that is the official source of information (although quite messy). This is the reason why I am sharing my answer here on SO. namespace th = torch; ... // th.set_grad_enabled(False) th::NoGradGuard guard; // or same as with torch.no_grad(): block ... auto dtype_option = th::TensorOptions().dtype(th::kFloat32); //X = th.zeros((nobs, 3+p), device=dev, dtype=th.float32) //y = th.tensor(indata, device=dev, dtype=th.float32) //diffilter = th.tensor([-1., 1.], device=dev, dtype=th.float32).view(1, 1, 2) //dy = th.conv1d(y.view(1, 1, -1), diffilter).view(-1) //z = dy[p:].clone() auto X = th::zeros({nobs, 3+p}, dtype_option); auto y = th::from_blob(signal, {n}, dtype_option); auto diffilter = th::tensor({-1, 1}, dtype_option).view({ 1, 1, 2 }); // first difference filter auto dy = th::conv1d(y.view({ 1, 1, -1 }), diffilter).view({ -1 }); auto z = dy.slice(0, p).clone(); ... // X[:, 0] = 1 # drift // X[:, 1] = th.arange(p+1, n) // X[:, 2] = y[p:-1] // create accessors to fill in the matrix auto ay = y.accessor<float, 1>(); // <1> dimension auto aX = X.accessor<float, 2>(); // <2> dimension for (auto i = 0; i < nobs; i++) { aX[i][0] = 1; aX[i][1] = p + 1 + i; aX[i][2] = ay[p+i]; } ... // Xm = th.zeros((nobsadf, 3+p), device=th.device('cpu'), dtype=th.float32) auto Xm = th::zeros({ nobsadf, 3 + p }, dtype_option.device(th::Device(th::kCPU))); // Xbt = th.zeros(batch_size, adfs_count, nobsadf, (3+p), device=th.device('cpu'), dtype=th.float32) auto Xbt = th::zeros({ batch_size, adfs_count, nobsadf, (3 + p) }, dtype_option.device(th::Device(th::kCPU))); ... // this accessor will be used in the inner for loop k auto anobt = nobt.accessor<float, 2>(); auto tline = 0; // start line for master main X OLS matrix/ z vector for (int i = 0; i < nbatchs; i++){ for (int j = 0; j < batch_size; j++){ // assemble batch_size matrices // Xm[:] = X[t:t+nobsadf] Xm.copy_(X.narrow(0, tline, nobsadf)); ... // Xbt[j, :, :, :] = Xm.repeat(adfs_count, 1).view(adfs_count, nobsadf, (3+p)) auto Xbts = Xbt.select(0, j); Xbts.copy_(Xm.repeat({ adfs_count, 1 }).view({ adfs_count, nobsadf, (3 + p) })); for (int k = 0; k < adfs_count; k++) { // Xbt[j, k, :k, :] = 0 // nobt[j][k] = float(nobsadf - k - (p + 3)); Xbts.select(0, k).narrow(0, 0, k).fill_(0); anobt[j][k] = float(nobsadf - k - (p + 3)); } tline++; } } Probably there is a better or faster way of coding it, but the code above fully works. Feel free to make suggestions to improve my code. C++ signatures of common functions above Tensor Tensor::slice(int64_t dim, int64_t start, int64_t end, int64_t step) Tensor Tensor::narrow(int64_t dim, int64_t start, int64_t length) Tensor Tensor::select(int64_t dim, int64_t index) Tensor & Tensor::copy_(const Tensor & src, bool non_blocking=false) Further notes: Almost all C++ functions have a Pytorch Python equivalent. So here is my golden tip: translate your Python script using the C++ equivalent functions like copy_, narrow, slice, testing it (to make sure it works), then just go to C++ replicating everything.
https://stackoverflow.com/questions/60059438/
Why does roi_align not seem to work in pytorch?
I am a pytorch beginner. It seems that there is a bug in the RoIAlign module in pytorch. The code is simple but the result is out of my expectation. code: import torch from torchvision.ops import RoIAlign if __name__ == '__main__': output_size = (3,3) spatial_scale = 1/4 sampling_ratio = 2 #x.shape:(1,1,6,6) x = torch.FloatTensor([[ [[1,2,3,4,5,6], [7,8,9,10,11,12], [13,14,15,16,17,18], [19,20,21,22,23,24], [25,26,27,28,29,30], [31,32,33,34,35,36],], ]]) rois = torch.tensor([ [0,0.0,0.0,20.0,20.0], ]) channel_num = x.shape[1] roi_num = rois.shape[0] a = RoIAlign(output_size, spatial_scale=spatial_scale, sampling_ratio=sampling_ratio) ya = a(x, rois) print(ya) output: tensor([[[[ 6.8333, 8.5000, 10.1667], [16.8333, 18.5000, 20.1667], [26.8333, 28.5000, 30.1667]]]]) But in this case shouldn't it be an average pooling operation on every 2x2 cell, like: tensor([[[[ 4.5000, 6.5000, 8.5000], [16.5000, 18.5000, 20.5000], [28.5000, 30.5000, 32.5000]]]]) My torch version is 1.3.0 with python3.6 and cuda 10.1, on Ubuntu16. I have been troubled for two days and I couldn't appreciate it more if anyone could help me.
Intuitive Interpretation There are some complications with image coordinates. We need to take into account the fact that pixels are actually squares and not points in space. We interpret the center of the pixel to be the integer coordinates, so for example (0,0) refers to the center of the first pixel while (-0.5, -0.5) refers to the upper left corner of the first pixel. Basically this is why you aren't getting the results you expect. An roi that goes from (0,0) to (5,5) actually cuts through the border pixels and leads to sampling between pixels when performing roi align. If instead we define our roi from (-0.5, -0.5) to (5.5, 5.5) then we get the expected result. Accounting for the scale factor this translates to an roi from (-2, -2) to (22, 22). import torch from torchvision.ops import RoIAlign output_size = (3, 3) spatial_scale = 1 / 4 sampling_ratio = 2 x = torch.FloatTensor([[ [[1, 2, 3, 4, 5, 6 ], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24], [25, 26, 27, 28, 29, 30], [31, 32, 33, 34, 35, 36]] ]]) rois = torch.tensor([ [0, -2.0, -2.0, 22.0, 22.0], ]) a = RoIAlign(output_size, spatial_scale=spatial_scale, sampling_ratio=sampling_ratio) ya = a(x, rois) print(ya) which results in tensor([[[[ 4.5000, 6.5000, 8.5000], [16.5000, 18.5000, 20.5000], [28.5000, 30.5000, 32.5000]]]]) Alternative interpretation Partitioning the interval [0, 5] into 3 intervals of equal length gives [0, 1.67], [1.67, 3.33], [3.33, 5]. So the boundaries of the output window will fall into these coordinates. Clearly this won't lead to nice sampling results.
https://stackoverflow.com/questions/60060016/
forcing pytorch to use gpu
i've recently followed a tutorial here https://www.provideocoalition.com/automatic-rotoscopingfor-free/ And ended with a functional bit of code that generate masks outlining interestings objects. But now, i want ot run it on my gpu, since cpu is way too slow. I have CUDA installed and all, but pytorch refuses to use it. I've used most tricks like setting torch.device and all, but to no avail; pytorch keep using 0 gpu. here's the code : from PIL import Image import torch import torchvision.transforms as T from torchvision import models import numpy as np fcn = None device = torch.device('cuda') torch.cuda.set_device(0) print('Using device:', device) print() if device.type == 'cuda': print(torch.cuda.get_device_name(0)) print('Memory Usage:') print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB') print('Cached:', round(torch.cuda.memory_cached(0)/1024**3,1), 'GB') def getRotoModel(): global fcn #fcn = models.segmentation.fcn_resnet101(pretrained=True).eval() fcn = models.segmentation.deeplabv3_resnet101(pretrained=True).eval() # Define the helper function def decode_segmap(image, nc=21): label_colors = np.array([(0, 0, 0), # 0=background # 1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle (128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128), # 6=bus, 7=car, 8=cat, 9=chair, 10=cow (0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0), # 11=dining table, 12=dog, 13=horse, 14=motorbike, 15=person (192, 128, 0), (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128), # 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor (0, 64, 0), (128, 64, 0), (0, 192, 0), (128, 192, 0), (0, 64, 128)]) r = np.zeros_like(image).astype(np.uint8) g = np.zeros_like(image).astype(np.uint8) b = np.zeros_like(image).astype(np.uint8) for l in range(0, nc): idx = image == l r[idx] = label_colors[l, 0] g[idx] = label_colors[l, 1] b[idx] = label_colors[l, 2] rgb = np.stack([r, g, b], axis=2) return rgb def createMatte(filename, matteName, size): img = Image.open(filename) trf = T.Compose([T.Resize(size), T.ToTensor(), T.Normalize(mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225])]) inp = trf(img).unsqueeze(0) if (fcn == None): getRotoModel() out = fcn(inp)['out'] om = torch.argmax(out.squeeze(), dim=0).detach().cpu().numpy() rgb = decode_segmap(om) im = Image.fromarray(rgb) im.save(matteName) What could i do ? thanks.
If everything is set up correctly you just have to move the tensors you want to process on the gpu to the gpu. You can try this to make sure it works in general import torch t = torch.tensor([1.0]) # create tensor with just a 1 in it t = t.cuda() # Move t to the gpu print(t) # Should print something like tensor([1], device='cuda:0') print(t.mean()) # Test an operation just to be sure You already have a device variable so instead of .cuda() you can just use .to(device). Which is also the preferable way to do it so you can just switch between cpu and gpu by setting one variable.
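In the question's code specifically, neither the model nor the input tensor is ever moved to the GPU; a minimal sketch of the two changes (device is the variable already defined there):

import torch
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
fcn = models.segmentation.deeplabv3_resnet101(pretrained=True).eval().to(device)  # model weights onto the GPU
# ... and inside createMatte, before the forward pass:
# inp = trf(img).unsqueeze(0).to(device)
# out = fcn(inp)['out']

The existing .detach().cpu().numpy() call already brings the result back to the CPU for PIL.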
https://stackoverflow.com/questions/60063910/
Tracing back deprecated warning in pytorch
I am training yolov3 on my data using this code here : https://github.com/cfotache/pytorch_custom_yolo_training/ But I am getting this annoying deprecation warnings Warning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (expandTensors at /pytorch/aten/src/ATen/native/IndexingUtils.h:20) I tried using python3 -W ignore train.py I tried adding : import warnings warnings.filterwarnings('ignore') but the warning is still persistent. I found this piece of code on here on stackoverflow that prints that stack on warnings , import traceback import warnings import sys def warn_with_traceback(message, category, filename, lineno, file=None, line=None): log = file if hasattr(file,'write') else sys.stderr traceback.print_stack(file=log) log.write(warnings.formatwarning(message, category, filename, lineno, line)) warnings.showwarning = warn_with_traceback and here's what I get : File "/content/pytorch_custom_yolo_training/train.py", line 102, in <module> loss = model(imgs, targets) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/content/pytorch_custom_yolo_training/models.py", line 267, in forward x, *losses = module[0](x, targets) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/content/pytorch_custom_yolo_training/models.py", line 203, in forward loss_x = self.mse_loss(x[mask], tx[mask]) File "/usr/lib/python3.6/warnings.py", line 99, in _showwarnmsg msg.file, msg.line) File "/content/pytorch_custom_yolo_training/train.py", line 29, in warn_with_traceback traceback.print_stack(file=log) /pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. Going to the files and functions mentioned in the stack , I don't find any uint8 . What can I to solve the problem or even stop getting these warnings ?
Found the problem. Line: loss_x = self.mse_loss(x[mask], tx[mask]) The mask variable was a ByteTensor, and indexing with ByteTensor masks is deprecated. Just replaced it with a BoolTensor.
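In case the mask comes from code you can't edit, converting it at the use site also works (a one-line sketch dropped into the training code from the question):

mask = mask.bool()  # ByteTensor -> BoolTensor, silences the deprecation warning
loss_x = self.mse_loss(x[mask], tx[mask])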
https://stackoverflow.com/questions/60067164/
Number of Conv2d Layers and Filters for Small Image Classification Task
If I'm working with a dataset where I have ~100,000 training images and ~20,000 validation images, each of size 32 x 32 x 3, how does the size and the dimensions of my dataset affect the number of Conv2d layers I have in my CNN? My intuition is to use fewer Conv2d layers, 2-3, because any more than 3 layers will be working with parts of the image that are too small to gain relevant data from. In addition, does it make sense to have layers with a large number of filters, >128? My thought is that when dealing with small images, it doesn't make sense to have a large number of parameters.
Since you have the exact input size of the images in Cifar10 and Cifar100, just have a look at what people have tried out. In general you can start with something like a ResNet18 (a small torchvision sketch follows below). Also I don't quite understand why you say because any more than 3 layers will be working with parts of the image that are too small to gain relevant data from. As long as you don't downsample using something like max pooling or a conv with padding 1 and stride 2, the size of 32x32 will stay the same and only the number of channels will change depending on the network. Designing networks almost always means looking at what other people did and what worked for them, and starting from there. You almost never want to do it from scratch on your own, since the iteration cycles are just too long, and models released by researchers from Google, Facebook ... had way more resources than you will ever have to find something good.
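A possible starting point via torchvision; the stem tweak shown (3x3 first conv, no initial max-pool) is a common CIFAR-style adaptation, not part of the stock model, since the default ResNet18 downsamples aggressively for 224x224 inputs:

import torch.nn as nn
import torchvision.models as models

net = models.resnet18(num_classes=10)
net.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)  # keep 32x32 through the stem
net.maxpool = nn.Identity()  # drop the stride-2 max-pool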
https://stackoverflow.com/questions/60069035/
Sharing GPU memory between process on a same GPU with Pytorch
I'm trying to implement an efficient way of doing concurrent inference in Pytorch. Right now, I start 2 processes on my GPU (I have only 1 GPU, both processes are on the same device). Each process loads my Pytorch model and does the inference step. My problem is that my model takes quite some space on the memory. I have 12Gb of memory on the GPU, and the model takes ~3Gb of memory alone (without the data). Which means together, my 2 processes take 6Gb of memory just for the model. Now I was wondering if it's possible to load the model only once, and use this model for inference in 2 different processes. What I want is for only 3Gb of memory to be consumed by the model, but still have 2 processes. I came across this answer mentioning IPC, but as far as I understood it means that process #2 will copy the model from process #1, so I will still end up with 6Gb allocated for the model. I also checked the Pytorch documentation, about DataParallel and DistributedDataParallel, but it seems not possible. This seems to be what I want, but I couldn't find any code example on how to use it with Pytorch in inference mode. I understand it might be difficult to do such a thing for training, but please note I'm only talking about the inference step (the model is in read-only mode, no need to update gradients). With this assumption, I'm not sure if it's possible or not.
The GPU itself has many threads. When performing an array/tensor operation, it uses each thread on one or more cells of the array. This is why it seems that an op that can fully utilize the GPU should scale efficiently without multiple processes -- a single GPU kernel is already massively parallelized. In a comment you mentioned seeing better results with multiple processes in a small benchmark. I'd suggest running the benchmark with more jobs to ensure warmup; ten kernels seems like too small a test. If you find that a thorough, representative benchmark consistently runs faster with multiple processes, though, I'll trust good benchmarks over my intuition. My understanding is that kernels launched on the default CUDA stream get executed sequentially. If you want them to run in parallel, I think you'd need multiple streams. Looking in the PyTorch code, I see code like getCurrentCUDAStream() in the kernels, which makes me think the GPU will still run any PyTorch code from all processes sequentially. This NVIDIA discussion suggests this is correct: https://devtalk.nvidia.com/default/topic/1028054/how-to-launch-cuda-kernel-in-different-processes/ Newer GPUs may be able to run multiple kernels in parallel (using MPS?) but it seems like this is just implemented with time slicing under the hood anyway, so I'm not sure we should expect higher total throughput: How do I use Nvidia Multi-process Service (MPS) to run multiple non-MPI CUDA applications? If you do need to share memory from one model across two parallel inference calls, can you just use multiple threads instead of processes, and refer to the same model from both threads? (A minimal sketch of this follows below.) To actually get the GPU to run multiple kernels in parallel, you may be able to use nn.Parallel in PyTorch. See the discussion here: https://discuss.pytorch.org/t/how-can-l-run-two-blocks-in-parallel/61618/3
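A minimal sketch of the shared-model, two-thread idea (MyModel, batch0 and batch1 are placeholders; whether it beats separate processes depends on your workload):

import threading
import torch

model = MyModel().cuda().eval()  # the ~3 GB of weights resident exactly once

def infer(batch, results, i):
    with torch.no_grad():
        results[i] = model(batch.cuda()).cpu()

results = [None, None]
threads = [threading.Thread(target=infer, args=(b, results, i))
           for i, b in enumerate([batch0, batch1])]
for t in threads:
    t.start()
for t in threads:
    t.join()

PyTorch releases the GIL inside its heavy C++ ops, so for inference-only workloads threads avoid duplicating the model without any IPC.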
https://stackoverflow.com/questions/60069977/
How to get indices of multiple elements in a 2D tensor, in a GPU friendly way?
This question is similar to that already answered here, but that question does not address how to retrieve the indices of multiple elements. I have a 2D tensor points with many rows and a small number of columns, and would like to get a tensor containing the row indices of all the elements in that tensor. I know what elements are present in points beforehand; It contains integer elements ranging from 0 to 999, and I can make a tensor using the range function to reflect the set of possible elements. The elements may be in any of the columns. How can I retrieve the row indices where each element appears in my tensor in a way that avoids looping or using numpy, so I can do this quickly on a GPU? I am looking for something like (points == elements).nonzero()[:,1] Thanks!
try torch.cat([(t == i).nonzero() for i in elements_to_compare]) >>> import torch >>> t = torch.empty((15,4)).random_(0, 999) >>> t tensor([[429., 833., 393., 828.], [555., 893., 846., 909.], [ 11., 861., 586., 222.], [232., 92., 576., 452.], [171., 341., 851., 953.], [ 94., 46., 130., 413.], [243., 251., 545., 331.], [620., 29., 194., 176.], [303., 905., 771., 149.], [482., 225., 7., 315.], [ 44., 547., 206., 299.], [695., 7., 645., 385.], [225., 898., 677., 693.], [746., 21., 505., 875.], [591., 254., 84., 888.]]) >>> torch.cat([(t == i).nonzero() for i in [7,385]]) tensor([[ 9, 2], [11, 1], [11, 3]]) >>> torch.cat([(t == i).nonzero()[:,1] for i in [7,385]]) tensor([2, 1, 3]) Numpy: >>> np.nonzero(np.isin(t, [7,385])) (array([ 9, 11, 11], dtype=int64), array([2, 1, 3], dtype=int64)) >>> np.nonzero(np.isin(t, [7,385]))[1] array([2, 1, 3], dtype=int64)
https://stackoverflow.com/questions/60071517/
Difference between Keras' BatchNormalization and PyTorch's BatchNorm2d?
I've a sample tiny CNN implemented in both Keras and PyTorch. When I print summary of both the networks, the total number of trainable parameters are same but total number of parameters and number of parameters for Batch Normalization don't match. Here is the CNN implementation in Keras: inputs = Input(shape = (64, 64, 1)). # Channel Last: (NHWC) model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1))(inputs) model = BatchNormalization(momentum=0.15, axis=-1)(model) model = Flatten()(model) dense = Dense(100, activation = "relu")(model) head_root = Dense(10, activation = 'softmax')(dense) And the summary printed for above model is: Model: "model_8" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_9 (InputLayer) (None, 64, 64, 1) 0 _________________________________________________________________ conv2d_10 (Conv2D) (None, 64, 64, 32) 320 _________________________________________________________________ batch_normalization_2 (Batch (None, 64, 64, 32) 128 _________________________________________________________________ flatten_3 (Flatten) (None, 131072) 0 _________________________________________________________________ dense_11 (Dense) (None, 100) 13107300 _________________________________________________________________ dense_12 (Dense) (None, 10) 1010 ================================================================= Total params: 13,108,758 Trainable params: 13,108,694 Non-trainable params: 64 _________________________________________________________________ Here's the implementation of the same model architecture in PyTorch: # Image format: Channel first (NCHW) in PyTorch class CustomModel(nn.Module): def __init__(self): super(CustomModel, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), padding=1), nn.ReLU(True), nn.BatchNorm2d(num_features=32), ) self.flatten = nn.Flatten() self.fc1 = nn.Linear(in_features=131072, out_features=100) self.fc2 = nn.Linear(in_features=100, out_features=10) def forward(self, x): output = self.layer1(x) output = self.flatten(output) output = self.fc1(output) output = self.fc2(output) return output And following is the output of summary of the above model: ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 32, 64, 64] 320 ReLU-2 [-1, 32, 64, 64] 0 BatchNorm2d-3 [-1, 32, 64, 64] 64 Flatten-4 [-1, 131072] 0 Linear-5 [-1, 100] 13,107,300 Linear-6 [-1, 10] 1,010 ================================================================ Total params: 13,108,694 Trainable params: 13,108,694 Non-trainable params: 0 ---------------------------------------------------------------- Input size (MB): 0.02 Forward/backward pass size (MB): 4.00 Params size (MB): 50.01 Estimated Total Size (MB): 54.02 ---------------------------------------------------------------- As you can see in above results, Batch Normalization in Keras has more number of parameters than PyTorch (2x to be exact). So what's the difference in above CNN architectures? If they are equivalent, then what am I missing here?
Keras treats as parameters (weights) many things that will be "saved/loaded" in the layer. While both implementations naturally have the accumulated "mean" and "variance" of the batches, these values are not trainable with backpropagation. Nevertheless, these values are updated every batch, and Keras treats them as non-trainable weights, while PyTorch simply hides them. The term "non-trainable" here means "not trainable by backpropagation", but doesn't mean the values are frozen. In total there are 4 groups of "weights" for a BatchNormalization layer. Considering the selected axis (default = -1, size=32 for your layer) scale (32) - trainable offset (32) - trainable accumulated means (32) - non-trainable, but updated every batch accumulated variance (32) - non-trainable, but updated every batch The advantage of having it like this in Keras is that when you save the layer, you also save the mean and variance values the same way you save all other weights in the layer automatically. And when you load the layer, these weights are loaded together.
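You can see the split directly on a fresh layer (a small sketch):

import torch.nn as nn

bn = nn.BatchNorm2d(32)
print([name for name, _ in bn.named_parameters()])  # ['weight', 'bias']: scale and offset, trainable
print([name for name, _ in bn.named_buffers()])     # ['running_mean', 'running_var', 'num_batches_tracked']

PyTorch keeps the running statistics as buffers rather than parameters, so parameter-counting tools only see the 2 x 32 = 64 trainable values, while Keras counts all 128.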
https://stackoverflow.com/questions/60079783/
Pytorch - Porting @ Operator
I have the following line of code I want to port to Torch Matmul rotMat = xmat @ ymat @ zmat Can I know if this is the correct ordering: rotMat = torch.matmul(xmat, torch.matmul(ymat, zmat))
According to the python docs on operator precedence the @ operator has left-to-right associativity https://docs.python.org/3/reference/expressions.html#operator-precedence Operators in the same box group left to right (except for exponentiation, which groups from right to left). Therefore the equivalent operation is rotMat = torch.matmul(torch.matmul(xmat, ymat), zmat) Though keep in mind that matrix multiplication is associative (mathematically) so you shouldn't see much of a difference in the result if you do it the other way. Generally you want to associate in the way that results in the fewest computational steps. For example using the naive matrix multiplication algorithm, if X is 1x10, Y is 10x100 and Z is 100x1000 then the difference between (X @ Y) @ Z and X @ (Y @ Z) is about 1*10*100 + 1*100*1000 = 101,000 multiplication/addition operations for the first versus 10*100*1000 + 1*10*1000 = 1,001,000 operations for the second. Though these have the same result (ignoring rounding errors) the second version will be about 10 x slower! As pointed out by @Szymon Maszke pytorch tensors also support the @ operator so you can still use xmat @ ymat @ zmat in pytorch.
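A quick sketch confirming both the shapes and the associativity (up to floating-point rounding):

import torch

x, y, z = torch.rand(1, 10), torch.rand(10, 100), torch.rand(100, 1000)
left = (x @ y) @ z   # what x @ y @ z actually computes
right = x @ (y @ z)  # same result, very different cost
print(torch.allclose(left, right, atol=1e-5), left.shape)  # True torch.Size([1, 1000])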
https://stackoverflow.com/questions/60080997/
How to solve the RuntimeError when using torch.utils.tensorboard to add a graph
I am trying to use tensorboard to visualize my pytorch model and encounter a problem. The input tensor's shape is (-1, 1, 20, 15) and the output tensor's shape is (-1, 6). My model combines a list of 5 convolutional networks. packages: python: 3.7.6 pytorch: 1.4.0 tensorboard: 2.1.0 The pytorch model is as below: import torch from torch import nn from torch.nn import functional as F class MyModel(nn.Module): """example""" def __init__(self, nchunks=[2, 5, 3, 2, 3], resp_size=6): super().__init__() self.nchunks = nchunks self.conv = [nn.Conv2d(1, 2, (2, x)) for x in nchunks] self.pool = nn.Sequential( nn.AdaptiveMaxPool1d(output_size=10), nn.Flatten(start_dim=1) ) self.bn = nn.BatchNorm1d(100) self.fc1 = nn.Linear(100, 100) self.fc2 = nn.Linear(100, 100) self.fc3 = nn.Linear(100, resp_size) def forward(self, x): xi = torch.split(x, self.nchunks, dim=3) xi = [f(subx.float()).view(-1, 2, 19) for f, subx in zip(self.conv, xi)] xi = [self.pool(subx) for subx in xi] xi = torch.cat(xi, dim=1) xi = self.bn(xi) xi = F.relu(self.fc1(xi)) xi = F.relu(self.fc2(xi)) xi = self.fc3(xi) return xi Here is the code for the tensorboard summary writer: from torch.utils.tensorboard import SummaryWriter x = torch.rand((5,1,20,15)) model = MyModel() writer = SummaryWriter('logs') writer.add_graph(model, x) Such an error is returned: RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient Tensor: (1,1,.,.) = -0.2108 -0.4986 -0.4009 -0.1910 (2,1,.,.) = 0.2383 -0.4147 0.2642 0.0456 [ torch.FloatTensor{2,1,2,2} ] I guess the model has some issues, but I am not sure what happens. This similar github issue does not relate to my problem because I am not using multi GPUs.
I solved the problem by replacing [nn.Conv2d(1, 2, (2, x)) for x in nchunks] with nn.ModuleList([nn.Conv2d(1, 2, (2, x)) for x in nchunks])
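The difference is visible in what the module registers, which is also why tracing failed: with a plain Python list the conv weights are not registered as parameters, so the tracer treats them as constants that require grad. A short sketch:

import torch.nn as nn

class M(nn.Module):
    def __init__(self, convs):
        super().__init__()
        self.convs = convs

plain = M([nn.Conv2d(1, 2, (2, k)) for k in [2, 5, 3]])
proper = M(nn.ModuleList(nn.Conv2d(1, 2, (2, k)) for k in [2, 5, 3]))
print(len(list(plain.parameters())))   # 0: a plain list hides the parameters
print(len(list(proper.parameters())))  # 6: weight and bias for each conv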
https://stackoverflow.com/questions/60082604/
Error in loading Glove vectors using TorchText on Kaggle Kernel
I am trying to load the Glove embedding vectors in kaggle kernel using TorchText lib. from torchtext import vocab vec = vocab.Vectors('glove.6B.100d.txt', '../input/glove6b100dtxt/') I am getting the following error: OSError: [Errno 30] Read-only file system: '../input/glove6b100dtxt/glove.6B.100d.txt.pt' Adding screenshot for more clarification:
This happens because the Kaggle input directory is read-only: torchtext tries to write a cached glove.6B.100d.txt.pt file next to the vectors, which fails. Since you don't have write access to the input folder, the workaround is as follows. Now we will load the Glove Embedding and move it out to the working directory !cp -r ../input/glove-embeddings/ ../kaggle/working/glove-embeddings Once moved, change the location of access vec = vocab.Vectors('glove.6B.100d.txt', '../kaggle/working/glove-embeddings')
https://stackoverflow.com/questions/60082711/
How to implement a gaussian renderer with mean and variance values as input in any deep modeling framework (needs to be back-propagable)
Imagine a typical auto-encoder-decoder model. However, instead of a general decoder where deconvolutions together with upscaling are used to create/synthesize a tensor similar to the model's input, I need to implement a structured/custom decoder. Here, I need the decoder to take its input, e.g. a 10x2 tensor where each row represents x,y positions or coordinates, and render a fixed predefined size image where there are 10 gaussian distributions generated at the locations specified by the input. In another way, I need to create an empty fixed-size tensor, fill the locations specified by the 10 coordinates with a value of 1, and then sweep a gaussian kernel over the whole tensor. For example, imagine the following 1-d scenario. Let the input to the whole model be a vector of size 10. If the input to the decoder is [3, 7], which are two x-coordinates (0-indexing), and the gaussian kernel of size 3 that we want to use is [0.28, 0.44, 0.28], then the output of the decoder should look like the following (should be the same size as the original input of the model which is 10): [0, 0, 0.28, 0.44, 0.28, 0, 0.28, 0.44, 0.28, 0] which is the same as [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]*[0.28, 0.44, 0.28] where * represents the convolution operator. Please note that in the first vector, the 1s are located at positions 3 and 7, considering a 0-indexing format. Finally a typical pixel loss such as MSE will be calculated. The important part is that this rendering module needs to be able to backpropagate the errors from the loss to its inputs which are the coordinates. This module itself does not have any trainable parameters. Also, I do not want to change the layers coming before this rendering module and they need to stay as they are. In a more advanced setting, I would also like to provide the 4 covariance values as input too, i.e. the input to the renderer would be in the form of [num_points, 5] where each row is [x_coord, y_coord, cov(x,x), cov(x,y), cov(y,y)]. How can I implement such a module in any of the available deep learning frameworks? A hint towards something similar would also be very useful.
In my experience, punctual things in neural networks will have a bad performance because it cuts the influence of distant pixels. Thus, instead of using a gaussian kernel, it would be better to have an actual gaussian function applied to all pixels. So, taking a 2D gaussian distribution function: We can use it like this: This means some steps in a custom function: import keras.backend as K def coords_to_gaussian(x): #where x is shape (batch, 10, 2), and 2 = x, y #pixel coordinates - must match the values of x and y #here I suppose from 0 to image size, but you may want it normalized, maybe x_pixels = K.reshape(K.arange(image_size), (1,1,image_size,1)) x_pixels = K.concatenate([x_pixels]*image_size, axis=-1) #shape(1,1,size,size) y_pixels = K.permute_dimensions(x_pixels, (0,1,3,2)) pixels = K.stack([x_pixels, y_pixels], axis=-1) #shape(1,1,size,size,2) #adjusting the AE locations to a compatible shape: locations = K.reshape(x, (-1, 10, 1, 1, 2)) #calculating the upper part of the equation result = K.square(pixels - locations) #shape (batch, 10, size, size, 2) result = - K.sum(result, axis=-1) / (2*square_sigma) #shape (batch, 10, size, size) #calculating the E: result = K.exp(result) / (2 * pi * square_sigma) #sum the 10 channels (principle of superposition) result = K.sum(result, axis=1) #shape (batch, size, size) #add a channel for future convolutions result = K.expand_dims(result, axis=-1) #shape (batch, size, size, 1) return result Use this in a Lambda layer: from keras.layers import Lambda Lambda(coords_to_gaussian)(coordinates_tensor_from_encoder) I'm not considering the covariances here, but you might find a way to put them in the formulas and adjust the code.
https://stackoverflow.com/questions/60082775/
Display PyTorch model with multiple outputs using torchviz make_dots
I have a model with multiple outputs, 4 to be exact: def forward(self, x): outputs = [] for conv, act in zip(self.Convolutions, self.Activations): y = conv(x) outputs.append(act(y)) return outputs I wanted to display it using make_dot from torchviz: from torchviz import make_dot generator = ... batch = next(iter(generator)) input, output = batch["input"].to(device, dtype=torch.float), batch["output"].to(device, dtype=torch.float) dot = make_dot(model(input), params=dict(model.named_parameters())) But I get the following error: File "/opt/local/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torchviz/dot.py", line 37, in make_dot output_nodes = (var.grad_fn,) if not isinstance(var, tuple) else tuple(v.grad_fn for v in var) AttributeError: 'list' object has no attribute 'grad_fn' Obviously a list does not have a grad_fn function, but according to this discussion, I can return a list of outputs. What am I doing wrong?
Model can return a list, but make_dot wants a Tensor. If the output components have similar shapes, I suggest using torch.cat on them.
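Concretely, a sketch for the four-output model from the question (this assumes the outputs agree on all dimensions except the one you concatenate over):

from torchviz import make_dot
import torch

outputs = model(input)  # list of 4 tensors
dot = make_dot(torch.cat(outputs, dim=1),
               params=dict(model.named_parameters()))
dot.render('model_graph')  # writes model_graph.pdf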
https://stackoverflow.com/questions/60090411/
segfault when using xarray, only when 'import torch'
I get a segfault when I open a file using xarray, but only if I import torch (imagine how long it took me to figure this out): import xarray as xr import torch xr.open_dataset('my/file.nc') ---- [1651963ee602:10863] *** Process received signal *** [1651963ee602:10863] Signal: Segmentation fault (11) [1651963ee602:10863] Signal code: Address not mapped (1) [1651963ee602:10863] Failing at address: 0x440000e9 [1651963ee602:10863] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f26f1e11890] [1651963ee602:10863] [ 1] /usr/local/mpi/lib/libmpi.so.40(PMPI_Comm_set_errhandler+0x41)[0x7f26dc349691] [1651963ee602:10863] [ 2] /opt/conda/lib/python3.6/site-packages/mpi4py/MPI.cpython-36m-x86_64-linux-gnu.so(+0x82370)[0x7f26cadc2370] [1651963ee602:10863] [ 3] /opt/conda/lib/python3.6/site-packages/mpi4py/MPI.cpython-36m-x86_64-linux-gnu.so(+0x2dd79)[0x7f26cad6dd79] [1651963ee602:10863] [ 4] /opt/conda/bin/python /opt/conda/bin/ipython(PyModule_ExecDef+0x7a)[0x55cad2262cca] [1651963ee602:10863] [ 5] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1ecd38)[0x55cad2262d38] [1651963ee602:10863] [ 6] /opt/conda/bin/python /opt/conda/bin/ipython(PyCFunction_Call+0xf4)[0x55cad218cb54] [1651963ee602:10863] [ 7] /opt/conda/bin/python /opt/conda/bin/ipython(_PyEval_EvalFrameDefault+0x539a)[0x55cad224183a] [1651963ee602:10863] [ 8] /opt/conda/bin/python /opt/conda/bin/ipython(+0x170cf6)[0x55cad21e6cf6] [1651963ee602:10863] [ 9] /opt/conda/bin/python /opt/conda/bin/ipython(+0x171c91)[0x55cad21e7c91] [1651963ee602:10863] [10] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1a1635)[0x55cad2217635] [1651963ee602:10863] [11] /opt/conda/bin/python /opt/conda/bin/ipython(_PyEval_EvalFrameDefault+0x30a)[0x55cad223c7aa] [1651963ee602:10863] [12] /opt/conda/bin/python /opt/conda/bin/ipython(+0x171a5b)[0x55cad21e7a5b] [1651963ee602:10863] [13] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1a1635)[0x55cad2217635] [1651963ee602:10863] [14] /opt/conda/bin/python /opt/conda/bin/ipython(_PyEval_EvalFrameDefault+0x30a)[0x55cad223c7aa] [1651963ee602:10863] [15] /opt/conda/bin/python /opt/conda/bin/ipython(+0x171a5b)[0x55cad21e7a5b] [1651963ee602:10863] [16] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1a1635)[0x55cad2217635] [1651963ee602:10863] [17] /opt/conda/bin/python /opt/conda/bin/ipython(_PyEval_EvalFrameDefault+0x30a)[0x55cad223c7aa] [1651963ee602:10863] [18] /opt/conda/bin/python /opt/conda/bin/ipython(+0x171a5b)[0x55cad21e7a5b] [1651963ee602:10863] [19] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1a1635)[0x55cad2217635] [1651963ee602:10863] [20] /opt/conda/bin/python /opt/conda/bin/ipython(_PyEval_EvalFrameDefault+0x30a)[0x55cad223c7aa] [1651963ee602:10863] [21] /opt/conda/bin/python /opt/conda/bin/ipython(_PyFunction_FastCallDict+0x11b)[0x55cad21e80cb] [1651963ee602:10863] [22] /opt/conda/bin/python /opt/conda/bin/ipython(_PyObject_FastCallDict+0x26f)[0x55cad2189f0f] [1651963ee602:10863] [23] /opt/conda/bin/python /opt/conda/bin/ipython(_PyObject_CallMethodIdObjArgs+0x100)[0x55cad21b3be0] [1651963ee602:10863] [24] /opt/conda/bin/python /opt/conda/bin/ipython(PyImport_ImportModuleLevelObject+0x280)[0x55cad21804b0] [1651963ee602:10863] [25] /opt/conda/bin/python /opt/conda/bin/ipython(+0x1abbda)[0x55cad2221bda] [1651963ee602:10863] [26] /opt/conda/bin/python /opt/conda/bin/ipython(PyCFunction_Call+0xc6)[0x55cad218cb26] [1651963ee602:10863] [27] /opt/conda/bin/python /opt/conda/bin/ipython(PyObject_Call+0x3e)[0x55cad218994e] [1651963ee602:10863] [28] /opt/conda/bin/python 
/opt/conda/bin/ipython(PyObject_CallFunction+0xf4)[0x55cad21e6854] [1651963ee602:10863] [29] /opt/conda/bin/python /opt/conda/bin/ipython(PyImport_Import+0x9e)[0x55cad2180a9e] [1651963ee602:10863] *** End of error message *** [1] 10863 segmentation fault (core dumped) ipython This works: import xarray as xr xr.open_dataset('my/file.nc') This works as well: import xarray as xr import torch print(torch.ones(1)) OS ubuntu 18.04.3 python 3.6.7 xarray 0.15.0 torch 1.3.0a0+24ae9b5 I appreciate any help.
Installing netcdf4 via pip solved the problem.
https://stackoverflow.com/questions/60096490/
PyTorch RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly
I am a beginner at PyTorch and I am just trying out some examples on this webpage. But I can't seem to get the 'super_resolution' program running due to this error: RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I searched the Internet and found that some people suggest setting num_workers to 0. But if I do that, the program tells me that I am running out of memory (either with CPU or GPU): RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM! or RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch) How do I fix this? I am using python 3.8 on Win10(64bit) and pytorch 1.4.0. More complete error messages (--cuda means using GPU, --threads x means passing x to the num_worker parameter): with command line arguments --upscale_factor 1 --cuda File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File "E:\Python38\lib\multiprocessing\queues.py", line 108, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 48, in train for iteration, batch in enumerate(training_data_loader, 1): File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data idx, data = self._get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data success, data = self._try_get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly with command line arguments --upscale_factor 1 --cuda --threads 0 File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 52, in train loss = criterion(model(input), target) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\super_resolution\model.py", line 21, in forward x = self.relu(self.conv2(x)) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 341, in conv2d_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch)
There is no "complete" solution for GPU out of memory errors, but there are quite a few things you can do to relieve the memory demand. Also, make sure that you are not passing the trainset and testset to the GPU at the same time! Decrease batch size to 1 Decrease the dimensionality of the fully-connected layers (they are the most memory-intensive) (Image data) Apply centre cropping (Image data) Transform RGB data to greyscale (Text data) Truncate input at n chars (which probably won't help that much) Alternatively, you can try running on Google Colaboratory (12 hour usage limit on K80 GPU) and Next Journal, both of which provide up to 12GB for use, free of charge. Worst case scenario, you might have to conduct training on your CPU. Hope this helps!
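For the first point, a minimal sketch (train_set is a placeholder for your dataset):

from torch.utils.data import DataLoader

training_data_loader = DataLoader(train_set, batch_size=1, shuffle=True, num_workers=0)

num_workers=0 also sidesteps the original worker-exit error (at the cost of slower loading), and torch.cuda.empty_cache() can release cached blocks between experiments.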
https://stackoverflow.com/questions/60101168/
Finding mean and standard deviation across image channels PyTorch
Say I have a batch of images in the form of tensors with dimensions (B x C x W x H) where B is the batch size, C is the number of channels in the image, and W and H are the width and height of the image respectively. I'm looking to use the transforms.Normalize() function to normalize my images with respect to the mean and standard deviation of the dataset across the C image channels, meaning that I want a resulting tensor in the form 1 x C. Is there a straightforward way to do this? I tried torch.view(C, -1).mean(1) and torch.view(C, -1).std(1) but I get the error: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. Edit After looking into how view() works in PyTorch, I know realize why my approach doesn't work; however, I still can't figure out how to get the per-channel mean and standard deviation.
You just need to rearrange batch tensor in a right way: from [B, C, W, H] to [B, C, W * H] by: batch = batch.view(batch.size(0), batch.size(1), -1) Here is complete usage example on random data: Code: import torch from torch.utils.data import TensorDataset, DataLoader data = torch.randn(64, 3, 28, 28) labels = torch.zeros(64, 1) dataset = TensorDataset(data, labels) loader = DataLoader(dataset, batch_size=8) nimages = 0 mean = 0. std = 0. for batch, _ in loader: # Rearrange batch to be the shape of [B, C, W * H] batch = batch.view(batch.size(0), batch.size(1), -1) # Update total number of images nimages += batch.size(0) # Compute mean and std here mean += batch.mean(2).sum(0) std += batch.std(2).sum(0) # Final step mean /= nimages std /= nimages print(mean) print(std) Output: tensor([-0.0029, -0.0022, -0.0036]) tensor([0.9942, 0.9939, 0.9923])
https://stackoverflow.com/questions/60101240/
How to use GPU in pytorch?
I tried following steps at: https://pytorch.org/get-started/locally/ First I created a conda environment as: conda create -n facenet37_2 python=3.7 Then on above site I selected: PyTorch Build: Stable (1.4) OS: Linux (I am using Ubuntu 18.04) Package: conda Language: python CUDA: 10.1 and it asked me to run following command: conda install pytorch torchvision cudatoolkit=10.1 -c pytorch But after that when I opened python and typed: import torch torch.cuda.is_available() I get False I have GeForce GT 630M (computeCapability: 2.1). But it is not getting detected. Why? Is it too old and no longer supported? How can I fix the issue? Edit: Why did I get a negative vote?
The GeForce GT 630M has compute capability 2.1 and therefore only supports up to CUDA 8. PyTorch binaries dropped support for compute capability <= 5.0 in PyTorch 0.3.1. It's not clear to me if compute capability 2.1 was ever included in the binaries. The PyTorch codebase dropped CUDA 8 support in PyTorch 1.1.0. Due to the second point there's no way short of changing the PyTorch codebase to make your GPU work with the latest version. Your options are: Install PyTorch without GPU support. Try compiling PyTorch < 1.1.0 from source (instructions). Make sure to checkout the v1.0.1 tag. This will produce a binary with support for your compute capability. If acceptable you could try installing a really old version: PyTorch < 0.3.1 using conda or a wheel and see if that works. It may have compute capability 2.1 support though I can't verify this. See pytorch.org for information. Though it looks like the link to https://download.pytorch.org/whl/cu80/torch_stable.html is broken.
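You can check what the installed binary was built against and what the card reports with a small diagnostic sketch:

import torch

print(torch.version.cuda)         # CUDA toolkit version the binary was built with
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))  # e.g. (2, 1) for the GT 630M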
https://stackoverflow.com/questions/60101973/
Pytorch runtimeError "matrices expected, got 1D, 2D tensors at"
I am beginning to practice Pytorch and trying to use the torch.mm() method. Below is my code import torch import numpy as np from torch.autograd import Variable num_x = np.array([[1.0, 2.0] ,[3.0,4.0]]) tensor_x = torch.from_numpy(num_x) x = Variable(tensor_x,requires_grad = True) s = Variable(torch.DoubleTensor([0.01,0.02]),requires_grad = True) print(s) s = s.mm(x) print(s) Unfortunately, there is a runtime error *RuntimeError Traceback (most recent call last) <ipython-input-58-e8a58ffb2545> in <module>() 9 s = Variable(torch.DoubleTensor([0.01,0.02]),requires_grad = True) 10 print(s) ---> 11 s = s.mm(x) 12 print(s) RuntimeError: matrices expected, got 1D, 2D tensors at /pytorch/aten/src/TH/generic/THTensorMath.cpp:131* How can I fix this problem? Your reply is appreciated.
Try reshape: you need to change the shape of s to (1, 2) to make the matrix multiplication with the (2, 2) tensor possible >>> s.reshape(1,2).mm(x) tensor([[0.0700, 0.1000]], dtype=torch.float64, grad_fn=<MmBackward>) Or give it the right shape when initializing s >>> s = Variable(torch.DoubleTensor([[0.01,0.02]]),requires_grad = True) >>> s.mm(x) tensor([[0.0700, 0.1000]], dtype=torch.float64, grad_fn=<MmBackward>)
https://stackoverflow.com/questions/60107114/
Find specific element index from Tensor(Matrix)
I am a student just beginning to study deep learning. x_norm = (x**2).sum(1).view(-1, 1) if y is not None: y_norm = (y**2).sum(1).view(1, -1) else: y = x y_norm = x_norm.view(1, -1) ## NOTICE ## dist = torch.exp(-1*(x_norm + y_norm - 2.0 * torch.mm(x, torch.transpose(y, 0, 1)))) return dist dist = pairwise_distances(atom_s[:3,-3:]) zero_mat=torch.zeros_like(dist,dtype=torch.float) dist= torch.where(dist>exp(-8),dist,zero_mat) Above is my code to make a pairwise distance map and change some elements that satisfy a condition to 0. The question is: how can I get the indexes of elements that satisfy a specific condition (e.g. larger than 0.5) without using a slow 'for' loop?
If it is a normal tensor you can use torch.nonzero >>> (dist > 0.5).nonzero() which will return the indices of all elements that are greater than 0.5 Example: >>> dist = torch.rand((6,5)) >>> dist tensor([[0.7549, 0.0962, 0.3198, 0.6868, 0.8117], [0.0785, 0.7666, 0.2623, 0.5140, 0.2713], [0.5768, 0.8160, 0.8654, 0.6978, 0.0138], [0.8147, 0.1394, 0.3204, 0.0104, 0.2872], [0.1396, 0.5639, 0.7085, 0.7151, 0.8253], [0.6115, 0.0214, 0.6033, 0.1403, 0.1977]]) >>> (dist > 0.5).nonzero() tensor([[0, 0], [0, 3], [0, 4], [1, 1], [1, 3], [2, 0], [2, 1], [2, 2], [2, 3], [3, 0], [4, 1], [4, 2], [4, 3], [4, 4], [5, 0], [5, 2]])
https://stackoverflow.com/questions/60107267/
PyTorch flatten doesn't maintain batch size
In Keras, using the Flatten() layer retains the batch size. For eg, if the input shape to Flatten is (32, 100, 100), in Keras output of Flatten is (32, 10000), but in PyTorch it is 320000. Why is it so?
As OP already pointed out in their answer, the tensor operations do not default to considering a batch dimension. You can use torch.flatten() or Tensor.flatten() with start_dim=1 to start the flattening operation after the batch dimension. Alternatively since PyTorch 1.2.0 you can define an nn.Flatten() layer in your model which defaults to start_dim=1.
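A short sketch of both options on the shapes from the question:

import torch
import torch.nn as nn

x = torch.rand(32, 100, 100)
print(torch.flatten(x, start_dim=1).shape)  # torch.Size([32, 10000])
print(nn.Flatten()(x).shape)                # same: nn.Flatten defaults to start_dim=1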
https://stackoverflow.com/questions/60115633/
How to read from a high IO dataset in pytorch which grows from epoch to epoch
I use TensorFlow, but I'm writing documentation for users that will typically vary across deep learning frameworks. When working with datasets that don't fit on the local filesystem (TB+) I sample data from a remote data store and write samples locally to TensorFlow's standard tfrecords format. During the first epoch of training I will have only sampled a few values, so an epoch of local data is very small; I train on it. On epoch 2 I re-examine what data files have been produced by my sampling subprocesses (now more) and train on the expanded set of local data files for the next epoch. I repeat the process each epoch. In this way I build up a local cache of samples and can evict older samples as I fill up the local storage. The local sample cache grows at about the time the model needs the variance the most (towards the latter part of training). In Python/TensorFlow it's crucial that I not deserialize the data in the Python training loop process, because the Python GIL can't support the data transfer rates (300-600 MB/sec, the data is raw scientific and incompressible), and thus GPU performance suffers when the Python GIL can't service the training loop fast enough. Writing the samples to a tfrecords file from subprocesses (Python multiprocessing) allows TensorFlow's native TFRecordDataset to do deserialization outside of Python, so we sidestep the Python GIL issues, and I can saturate a GPU with high-IO data rates. I would like to know how I would address this issue in PyTorch. I'm writing about the sampling strategy that's being used, and want to provide specific recommendations to users of both TensorFlow and PyTorch, but I don't know the PyTorch preprocessing ecosystem well enough to write with sufficient detail. Side note: the only purely Python-based solution to support these data transfer rates may come in Python 3.8 with System V shared memory and multiprocessing, but I haven't tried that yet as support for it isn't quite sufficient (soon it will be). Existing multiprocessing solutions aren't sufficient because they require deserialization in the training loop process and thus lock the GIL during deserialization at high IO rates.
Actually, you can easily deserialize data in a subprocess by using torch.utils.data.DataLoader. By setting the num_workers argument to 1 or a bigger value, you can spawn subprocesses with their own Python interpreters and GILs.
loader = torch.utils.data.DataLoader(your_dataset, num_workers=n, **kwargs)
for epoch in range(epochs):
    for batch_idx, data in enumerate(loader):
        # loader in the main process does not claim GIL at this point
A DataLoader requires a torch.utils.data.Dataset to get data from. It may not be a trivial job to implement a proper subclass in your case. In case you need to recreate a Dataset instance for every epoch, you can do something like this.
for epoch in range(epochs):
    dset = get_new_dataset()
    loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs)
    for batch_idx, data in enumerate(loader):
        # Do training
Or even better:
dset = get_new_dataset()
loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs)
for epoch in range(epochs):
    last_batch_idx = (len(dset)-1) // loader.batch_size
    for batch_idx, data in enumerate(loader):
        # Prepare the next loader in advance to avoid blocking
        if batch_idx == last_batch_idx:
            dset = get_new_dataset()
            loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs)
        # Do training
As a side note, please note that it's CPU-bound operations that are affected by the GIL in most cases, not I/O-bound operations; i.e., threading will do for any purely I/O-heavy operation and you don't even need subprocesses. For more information please refer to this question and this wikipedia article.
https://stackoverflow.com/questions/60119934/
Optimizer and scheduler for BERT fine-tuning
I'm trying to fine-tune a model with BERT (using the transformers library), and I'm a bit unsure about the optimizer and scheduler. First, I understand that I should use transformers.AdamW instead of PyTorch's version of it. Also, we should use a warmup scheduler as suggested in the paper, so the scheduler is created using the get_linear_schedule_with_warmup function from the transformers package. The main questions I have are: get_linear_schedule_with_warmup should be called with the warm-up. Is it ok to use 2 for warmup out of 10 epochs? When should I call scheduler.step()? If I do it after train, the learning rate is zero for the first epoch. Should I call it for each batch? Am I doing something wrong with this?
from transformers import AdamW
from transformers.optimization import get_linear_schedule_with_warmup

N_EPOCHS = 10

model = BertGRUModel(finetune_bert=True,...)
num_training_steps = N_EPOCHS+1
num_warmup_steps = 2
warmup_proportion = float(num_warmup_steps) / float(num_training_steps)  # 0.1

optimizer = AdamW(model.parameters())
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.Tensor([class_weights[1]]))

scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps
)

for epoch in range(N_EPOCHS):
    scheduler.step() #If I do after train, LR = 0 for the first epoch
    print(optimizer.param_groups[0]["lr"])

    train(...) # here we call optimizer.step()
    evaluate(...)
My model and train routine (quite similar to this notebook)
class BERTGRUSentiment(nn.Module):
    def __init__(self,
                 bert,
                 hidden_dim,
                 output_dim,
                 n_layers=1,
                 bidirectional=False,
                 finetune_bert=False,
                 dropout=0.2):
        super().__init__()
        self.bert = bert
        embedding_dim = bert.config.to_dict()['hidden_size']
        self.finetune_bert = finetune_bert
        self.rnn = nn.GRU(embedding_dim,
                          hidden_dim,
                          num_layers = n_layers,
                          bidirectional = bidirectional,
                          batch_first = True,
                          dropout = 0 if n_layers < 2 else dropout)
        self.out = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text):
        #text = [batch size, sent len]
        if not self.finetune_bert:
            with torch.no_grad():
                embedded = self.bert(text)[0]
        else:
            embedded = self.bert(text)[0]
        #embedded = [batch size, sent len, emb dim]
        _, hidden = self.rnn(embedded)
        #hidden = [n layers * n directions, batch size, emb dim]
        if self.rnn.bidirectional:
            hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
        else:
            hidden = self.dropout(hidden[-1,:,:])
        #hidden = [batch size, hid dim]
        output = self.out(hidden)
        #output = [batch size, out dim]
        return output

import torch
from sklearn.metrics import accuracy_score, f1_score

def train(model, iterator, optimizer, criterion, max_grad_norm=None):
    """
    Trains the model for one full epoch
    """
    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for i, batch in enumerate(iterator):
        optimizer.zero_grad()
        text, lens = batch.text
        predictions = model(text)
        target = batch.target

        loss = criterion(predictions.squeeze(1), target)

        prob_predictions = torch.sigmoid(predictions)
        preds = torch.round(prob_predictions).detach().cpu()
        acc = accuracy_score(preds, target.cpu())

        loss.backward()
        # Gradient clipping
        if max_grad_norm:
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)

        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)
Here you can see a visualization of learning rate changes using get_linear_schedule_with_warmup. Referring to this comment: warm-up steps are a parameter used to lower the learning rate in order to reduce the impact of deviating the model from learning on sudden new data set exposure. By default, the number of warm-up steps is 0. Then you make bigger steps, because you are probably not near the minima. But as you are approaching the minima, you make smaller steps to converge to it. Also, note that the number of training steps is the number of batches * the number of epochs, not just the number of epochs. So, basically num_training_steps = N_EPOCHS+1 is not correct, unless your batch_size is equal to the training set size. You call scheduler.step() every batch, right after optimizer.step(), to update the learning rate.
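A minimal sketch of how the pieces fit together; model, train_loader, and compute_loss are placeholders for your own objects, not names from the question:
from transformers import AdamW, get_linear_schedule_with_warmup

N_EPOCHS = 10
num_training_steps = len(train_loader) * N_EPOCHS  # batches per epoch * epochs
num_warmup_steps = int(0.1 * num_training_steps)   # e.g. warm up over 10% of all steps

optimizer = AdamW(model.parameters())
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps
)

for epoch in range(N_EPOCHS):
    for batch in train_loader:
        loss = compute_loss(model, batch)  # forward pass + criterion
        loss.backward()
        optimizer.step()
        scheduler.step()   # once per batch, right after optimizer.step()
        optimizer.zero_grad()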
https://stackoverflow.com/questions/60120043/
pytorch nllloss function target shape mismatch
I'm training an LSTM model using PyTorch with a batch size of 256 and NLLLoss() as the loss function. The loss function is having problems with the data shape. The softmax output from the forward pass has a shape of torch.Size([256, 4, 1181]), where 256 is the batch size, 4 is the sequence length, and 1181 is the vocab size. The target has the shape torch.Size([256, 4]), where 256 is the batch size and 4 is the output sequence length. When I was testing earlier with a batch size of 1, the model worked fine, but when I added the batch size, it broke. I read that NLLLoss() can take a class target as input instead of a one-hot encoded target. Am I misunderstanding it? Or did I not format the shape of the target correctly?
class LSTM(nn.Module):
    def __init__(self, embed_size=100, hidden_size=100, vocab_size=1181, embedding_matrix=...):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.word_embeddings = nn.Embedding(vocab_size, embed_size)
        self.word_embeddings.load_state_dict({'weight': torch.Tensor(embedding_matrix)})
        self.word_embeddings.weight.requires_grad = False
        self.lstm = nn.LSTM(embed_size, hidden_size)
        self.hidden2out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):
        batch_size, num_steps = tokens.shape
        embeds = self.word_embeddings(tokens)
        lstm_out, _ = self.lstm(embeds.view(batch_size, num_steps, -1))
        out_space = self.hidden2out(lstm_out.view(batch_size, num_steps, -1))
        out_scores = F.log_softmax(out_space, dim=1)
        return out_scores

model = LSTM(self.config.embed_size, self.config.hidden_size, self.config.vocab_size, self.embedding_matrix)
loss_function = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=self.config.lr)
Error:
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1846         if target.size()[1:] != input.size()[2:]:
   1847             raise ValueError('Expected target size {}, got {}'.format(
-> 1848                 out_size, target.size()))
   1849         input = input.contiguous().view(n, c, 1, -1)
   1850         target = target.contiguous().view(n, 1, -1)

ValueError: Expected target size (256, 554), got torch.Size([256, 4])
Your input shape to the loss function is (N, d, C) = (256, 4, 1181) and your target shape is (N, d) = (256, 4), however, according to the docs on NLLLoss the input should be (N, C, d) for a target of (N, d). Supposing x is your network output and y is the target then you can compute loss by transposing the incorrect dimensions of x as follows: loss = loss_function(x.transpose(1, 2), y) Alternatively, since NLLLoss is just averaging all the responses anyway, you can reshape x and y to be (N*d, C) and (N*d). This gives the same result without creating temporary copies of your tensors. loss = loss_function(x.reshape(N*d, C), y.reshape(N*d))
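A quick check that the two fixes agree (shapes taken from the question; random data for illustration):
import torch
import torch.nn.functional as F

N, d, C = 256, 4, 1181
x = torch.randn(N, d, C).log_softmax(dim=2)  # stand-in for the network's log-probabilities
y = torch.randint(0, C, (N, d))

loss1 = F.nll_loss(x.transpose(1, 2), y)
loss2 = F.nll_loss(x.reshape(N * d, C), y.reshape(N * d))
print(torch.allclose(loss1, loss2))  # True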
https://stackoverflow.com/questions/60121107/
How to match items in two pytorch tensors
I have two PyTorch tensors with the same number of rows (tensor1's size is (6, 4) and tensor2's size is (6)). The first tensor holds bounding-box coordinate points, whereas the second tensor holds the prediction scores.
tensor1 = ([[780.8306,  98.1060, 813.8367, 149.8171],
        [585.6562, 117.6804, 621.6012, 166.3151],
        [ 88.4085, 117.1313, 129.3327, 173.1145],
        [223.1263, 239.2682, 255.1892, 270.7897],
        [194.4088, 117.9768, 237.3028, 166.1765],
        [408.9165, 109.0131, 441.0802, 141.2362]])

tensor2 = ([0.9842, 0.9751, 0.9689, 0.8333, 0.7021, 0.6191])
How could I concatenate every item in tensor2 with its corresponding row in tensor1? I want to get the prediction score together with the bounding box. The output should be something like:
prediction   bounding box
0.9842       780.8306, 98.1060, 813.8367, 149.8171
0.9751       585.6562, 117.6804, 621.6012, 166.3151
0.9689       88.4085, 117.1313, 129.3327, 173.1145
0.8333       223.1263, 239.2682, 255.1892, 270.7897
0.7021       194.4088, 117.9768, 237.3028, 166.1765
0.6191       408.9165, 109.0131, 441.0802, 141.2362
I would appreciate it if anyone could help. Thank you.
You can use torch.cat to concatenate the columns. Since cat requires that all tensors have the same number of dimensions you will first need to insert a unitary dimension to tensor2 to make it a (6, 1). This can be done a few ways, but the most clear is probably Tensor.unsqueeze(1) which reshapes the tensor to have a unitary dimension at dimension 1. tensor21 = torch.cat((tensor2.unsqueeze(1), tensor1), dim=1)
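With the tensors from the question, the shapes work out as follows:
>>> tensor2.unsqueeze(1).shape
torch.Size([6, 1])
>>> tensor21 = torch.cat((tensor2.unsqueeze(1), tensor1), dim=1)
>>> tensor21.shape
torch.Size([6, 5])
>>> tensor21[0]
tensor([  0.9842, 780.8306,  98.1060, 813.8367, 149.8171])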
https://stackoverflow.com/questions/60125072/
What gets printed when you print an object of some class in Python?
I want to ask about this specific example, taken from the official PyTorch tutorial.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

net = Net()
print(net)
And the output is
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
As I understand it, this code defines a child class Net of nn.Module and defines in its initializer the members conv1, conv2, etc. These members are printed when print(net) is called. Based on this observation, I thought that if I added the line self.x = 0 to the initializer of Net, there would be an extra line of output, something like (x): 0. But that didn't happen. So who decides which part of Net gets printed?
From the Python 3 documentation:
repr(object)
Return a string containing a printable representation of an object. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method.
Since your class inherits from nn.Module, it uses nn.Module's __repr__ method, which only prints the registered submodules; that is why a plain attribute like self.x = 0 never shows up.
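If you want extra attributes to appear in the printout, nn.Module also provides the extra_repr hook. A small sketch:
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.x = 0  # plain attribute: ignored by nn.Module's __repr__

    def extra_repr(self):
        # the string returned here is added to the printed representation
        return 'x={}'.format(self.x)

print(Net())
# Net(
#   x=0
#   (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
# )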
https://stackoverflow.com/questions/60126416/
Different results after converting pytorch to torchscript? Does converting NSNumber to Float cause any loss?
I converted a PyTorch pretrained model (.pt) to a TorchScript model (.pt) in order to use it in Swift 5 (iOS - iPhone 6s, Xcode 11). In Swift, the "predict" function of the model gave me its embedding values (Tensor). Since it returned an NSNumber array as the result of prediction, I used type casting from [NSNumber] to both [Double] and [Float] to calculate the distance between two embedding values: L2 normalization, dot product, etc.
However, while the PyTorch version got the correct answers, the TorchScript model got many wrong answers. Not only are the answers different, the distance calculations of the two embedding pairs in TorchScript are also different from the results of the PyTorch model on the PC (CPU, PyCharm). In fact, before using type casting for the distance calculations, the embedding values in NSNumber (Swift) were already very different from the values in float32 (PyTorch). I used the same input images.
I tried to find the reason. Once, I copied the embedding values ([NSNumber]) from swift-torchscript and calculated the distance between two embeddings in PyTorch, to check if there was a problem with my distance calculation implementation in Swift. I used torch.FloatTensor for the type casting [NSNumber] -> [Float]. I also tried [Double]. As a result of this, I found many infinite numbers. Are these infinite numbers related to the wrong answers?
What does this "inf" mean? Is it a calculation or type casting error? Did I lose information while casting from NSNumber to Float or Double? How can I get the correct values from the TorchScript model in Swift? What should I check?
I used the following code to convert. pytorch -> torchscript.
import torch

from models.inception_resnet_v1 import InceptionResnetV1

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
resnet = InceptionResnetV1(pretrained='vggface2').eval().to(device)

example = torch.rand(1, 3, 160, 160)
traced_script_module = torch.jit.trace(resnet, example)
traced_script_module.save("mobile_model.pt")
Are you using InceptionResnetV1 from https://github.com/timesler/facenet-pytorch? When you refer to the pytorch model in your comparison of the outputs, are you referring to the torchscript model when run in pytorch, or the resnet as is? If it is the latter, did you already check something similar to the below?
What do you get when running the following:
print('Original:')
orig_res = resnet(example)
print(orig_res.shape)
print(orig_res[0, 0:10])
print('min abs value:{}'.format(torch.min(torch.abs(orig_res))))
print('Torchscript:')
ts_res = traced_script_module(example)
print(ts_res.shape)
print(ts_res[0, 0:10])
print('min abs value:{}'.format(torch.min(torch.abs(ts_res))))
print('Dif sum:')
abs_diff = torch.abs(orig_res-ts_res)
print(torch.sum(abs_diff))
print('max dif:{}'.format(torch.max(abs_diff)))
after defining 'traced_script_module'. I get the following:
Original:
torch.Size([1, 512])
tensor([ 0.0347,  0.0145, -0.0124,  0.0723, -0.0102,  0.0653, -0.0574,  0.0004,
        -0.0686,  0.0695], device='cuda:0', grad_fn=<SliceBackward>)
min abs value:0.00034740756382234395
Torchscript:
torch.Size([1, 512])
tensor([ 0.0347,  0.0145, -0.0124,  0.0723, -0.0102,  0.0653, -0.0574,  0.0004,
        -0.0686,  0.0695], device='cuda:0', grad_fn=<SliceBackward>)
min abs value:0.0003474018594715744
Dif sum:
tensor(8.1539e-06, device='cuda:0', grad_fn=<SumBackward0>)
max dif:5.960464477539063e-08
which is not perfect, but considering the outputs are on the order of 10^-4 at minimum, and that the second-to-last number is the sum of the absolute differences of 512 elements (not the mean), it seems not too far off to me. The maximum difference is at around 10^-8.
By the way, you might want to change to:
example = torch.rand(1, 3, 160, 160).to(device)
If you get something similar for the tests above, what are the kind of values you get for the first 10 output values from swift-torchscript as NSNumber, and then, once cast to float, when compared against the same slices in the pytorch and torchscript-pytorch model outputs?
https://stackoverflow.com/questions/60127078/
How do I separate the input and targets from Pytorch Fashion MNIST?
The Fashion MNIST dataset is implemented pretty weirdly in Pytorch. I want to do something like: X, y = FashionMNIST But in reality, it's a little more complicated. This is what I have: from torchvision.datasets import FashionMNIST train = FashionMNIST(root='.', download=True, train=True) print(train) The output: Dataset FashionMNIST Number of datapoints: 60000 Root location: c:/users/nicolas/documents/data/fashionmnist Split: Train What one observation looks like: print(train[0]) (<PIL.Image.Image image mode=L size=28x28 at 0x20868074780>, 9) I could only do it for one observation. X, y = train[0] So how do I separate the input and targets?
The FashionMNIST object has data and targets attributes. You can simply write
X, y = train.data, train.targets
and then you can see the shapes
X.shape, y.shape
(torch.Size([60000, 28, 28]), torch.Size([60000]))
https://stackoverflow.com/questions/60129816/
Loading a model with pytorch
I'm having a problem loading my model in the Image Classifier project. First, I saved it:
model.class_to_idx = train_data.class_to_idx
checkpoint = {'arch': 'vgg19',
              'learn_rate': learn_rate,
              'epochs': epochs,
              'state_dict': model.state_dict(),
              'class_to_idx': model.class_to_idx,
              'optimizer': optimizer.state_dict(),
              'input_size': 25088,
              'output_size': 102,
              'momentum': momentum,
              'batch_size': 64,
              'classifier': classifier}
torch.save(checkpoint, 'checkpoint.pth')
Then I tried to load the model I had saved:
def load_checkpoint(filepath):
    checkpoint = torch.load(filepath)
    learn_rate = checkpoint['learn_rate']
    optimizer.load_state_dict(checkpoint['optimizer'])
    model = models.vgg16(pretrained=True)
    model.epochs = checkpoint['epochs']
    model.load_state_dict(checkpoint['state_dict'])
    model.class_to_idx = checkpoint['class_to_idx']
    model.classifier = checkpoint['classifier']
    return learn_rate, optimizer, model

learn_rate, optimizer, model = load_checkpoint('checkpoint.pth')
And I get an error when I try to load:
<ipython-input-75-5bd1aa042c7f> in load_checkpoint(filepath)
      9     model = models.vgg16(pretrained=True)
     10     model.epochs = checkpoint['epochs']
---> 11     model.load_state_dict(checkpoint['state_dict'])
     12     model.class_to_idx = checkpoint['class_to_idx']
     13     model.classifier = checkpoint['classifier']

RuntimeError: Error(s) in loading state_dict for VGG:
Missing key(s) in state_dict: "classifier.0.weight", "classifier.0.bias", "classifier.3.weight", "classifier.3.bias", "classifier.6.weight", "classifier.6.bias".
Unexpected key(s) in state_dict: "classifier.fc1.weight", "classifier.fc1.bias", "classifier.fc2.weight", "classifier.fc2.bias".
This seems to be a classifier issue. Does anyone know what's going on?
jodag's comment points at the heart of the issue. If fc1 and fc2 correspond to classifier.0, classifier.3, and classifier.6, you can adjust the state dictionary's keys to link them. When loading the weights into the model, make sure to add the option strict=False. You will need to retrain your model's classifier - because your state dict is missing weights for 3 layers but has 2 unused layers' weights - but it should converge really quickly (from personal experience).
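A sketch of what that key remapping could look like; the mapping below is an assumption, so adjust it to whatever your saved classifier actually contains:
checkpoint = torch.load('checkpoint.pth')
state_dict = checkpoint['state_dict']

key_map = {
    'classifier.fc1.weight': 'classifier.0.weight',
    'classifier.fc1.bias':   'classifier.0.bias',
    'classifier.fc2.weight': 'classifier.3.weight',
    'classifier.fc2.bias':   'classifier.3.bias',
}
state_dict = {key_map.get(k, k): v for k, v in state_dict.items()}

model = models.vgg16(pretrained=True)
# strict=False skips missing/unexpected keys; remapped weights must still
# match the target layers' shapes, otherwise loading raises an error
model.load_state_dict(state_dict, strict=False)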
https://stackoverflow.com/questions/60131761/
Pytorch or Numpy Batch Matrix Operation
I am trying to use torch.bmm to do the following matrix operation: if matrix is an M * N tensor and batch is an N * B tensor, how can I achieve, for each batch i, matrix @ batch_i (which gives M), and put the batch results together so the output tensor looks like M * B?
There are two questions here:
1. To use torch.bmm, it seems both inputs need to be batched, but my first input is not.
2. The batch size needs to be the first dimension, while my batch size is at the end.
I guess it is the same question for NumPy users.
It seems that torch.einsum('ij,jbc->ibc', A, B) will solve the problem.
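A quick sanity check of that einsum against an explicit loop (shapes are illustrative):
import torch

M, N, b, c = 3, 4, 5, 6
A = torch.rand(M, N)
B = torch.rand(N, b, c)

out = torch.einsum('ij,jbc->ibc', A, B)
print(out.shape)  # torch.Size([3, 5, 6])

# Reference: multiply each batch slice separately, then stack
ref = torch.stack([A @ B[:, i, :] for i in range(b)], dim=1)
print(torch.allclose(out, ref))  # True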
https://stackoverflow.com/questions/60133121/
Tensorboard is showing a blank page (Refused to execute script from 'http://localhost:6006/index.js' because its MIME type)
When trying to open TensorBoard I just get a blank page. This is how it looks in Firefox:
I get this error message in the Chrome console:
Refused to execute script from 'http://localhost:6006/index.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
In the Firefox console I get these error messages:
The resource from “http://localhost:6006/index.js” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff)
and
Loading failed for the <script> with source “http://localhost:6006/index.js”.
I tried:
Unable to open Tensorboard in browser
Tensorboard get blank page
I typed in the console:
tensorboard --logdir=runs --bind_all
tensorboard --logdir=./runs --bind_all
tensorboard --logdir=./runs/ --bind_all
tensorboard --logdir=./runs --host localhost --port 6006
tensorboard --logdir=./runs --host localhost
tensorboard --logdir=./runs --port 6006 --bind_all
I have tensorboard version: 2.1.0
I generated my data like this:
train_set = torchvision.datasets.FashionMNIST(
    root="./data/FashionMNIST",
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=1000)
tb = SummaryWriter()
network = Network()
images, labels = next(iter(train_loader))
grid = torchvision.utils.make_grid(images)
tb.add_image("image", grid)
tb.add_graph(network, images)
tb.close()
I followed this tutorial: TensorBoard with PyTorch - Visualize Deep Learning Metrics
There's a similar error and resolution reported here. Apparently this has to do with some issue in the windows registry. Based on the comments this seems to be the solution In my case following procedure solved the problem: windows + r and regedit [your computer]\HKEY_LOCAL_MACHINE\SOFTWARE\Classes\.js Change content type from 'text/plain' to 'application/javascript'
https://stackoverflow.com/questions/60136106/
Is there any build in function for making 2D tensor from 1D tensor using specific calculation?
Hi, I'm a student who just started studying deep learning. For example, I have the 1-D tensor x = [1, 2]. From this one, I hope to make a 2-D tensor y whose (i, j)-th element has the value (x[j] - x[i]), i.e. y[0,:] = [0, 1], y[1,:] = [-1, 0]. Is there a built-in function like this in the PyTorch library? Thanks.
Here you need the right tensor dimensions to get the expected result, which you can get using torch.unsqueeze:
x = torch.tensor([1, 2])
y = x - x.unsqueeze(1)
y
tensor([[ 0,  1],
        [-1,  0]])
https://stackoverflow.com/questions/60136744/
Issues installing PyTorch 1.4 - "No matching distribution found for torch===1.4.0"
Used the install guide on pytorch.org on how to install it and the command I'm using is pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html But it's coming up with this error; ERROR: Could not find a version that satisfies the requirement torch===1.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch===1.4.0 Is this even a me-related issue? Can other people use this command? Pip is installed and works for other modules, Python 3.8, CUDA version 10.1, Windows 10 Home 2004
Looks like this issue is related to the virtual environment. Did you try the recommended installation line in another (new) virtual environment? If that doesn't help, a possible solution might be installing the packages using direct links to the PyTorch and TorchVision builds for your system:
pip install https://download.pytorch.org/whl/cu101/torch-1.4.0-cp38-cp38-win_amd64.whl
pip install https://download.pytorch.org/whl/cu101/torchvision-0.5.0-cp38-cp38-win_amd64.whl
https://stackoverflow.com/questions/60137572/
HuggingFace Transformers For Text Generation with CTRL with Google Colab's free GPU
I wanted to test TextGeneration with CTRL using PyTorch-Transformers, before using it for fine-tuning. But it doesn't prompt anything like it does with GPT-2 and other similar language generation models. I'm very new for this and am stuck and can't figure out what's going on. This is the procedure I followed in my Colab notebook, !pip install transformers !git clone https://github.com/huggingface/pytorch-transformers.git !python pytorch-transformers/examples/run_generation.py \ --model_type=ctrl \ --length=100 \ --model_name_or_path=ctrl \ --temperature=0.2 \ --repetition_penalty=1.2 \ And this is what I get after running the script 02/10/2020 01:02:31 - INFO - transformers.tokenization_utils - loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-vocab.json from cache at /root/.cache/torch/transformers/a858ad854d3847b02da3aac63555142de6a05f2a26d928bb49e881970514e186.285c96a541cf6719677cfb634929022b56b76a0c9a540186ba3d8bbdf02bca42 02/10/2020 01:02:31 - INFO - transformers.tokenization_utils - loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-merges.txt from cache at /root/.cache/torch/transformers/aa2c569e6648690484ade28535a8157aa415f15202e84a62e82cc36ea0c20fa9.26153bf569b71aaf15ae54be4c1b9254dbeff58ca6fc3e29468c4eed078ac142 02/10/2020 01:02:31 - INFO - transformers.configuration_utils - loading configuration file https://storage.googleapis.com/sf-ctrl/pytorch/ctrl-config.json from cache at /root/.cache/torch/transformers/d6492ca334c2a4e079f43df30956acf935134081b2b3844dc97457be69b623d0.1ebc47eb44e70492e0c20494a084f108332d20fea7fe5ad408ef5e7a8f2baef4 02/10/2020 01:02:31 - INFO - transformers.configuration_utils - Model config CTRLConfig { "architectures": null, "attn_pdrop": 0.1, "bos_token_id": 0, "dff": 8192, "do_sample": false, "embd_pdrop": 0.1, "eos_token_ids": 0, "finetuning_task": null, "from_tf": false, "id2label": { "0": "LABEL_0" }, "initializer_range": 0.02, "is_decoder": false, "label2id": { "LABEL_0": 0 }, "layer_norm_epsilon": 1e-06, "length_penalty": 1.0, "max_length": 20, "model_type": "ctrl", "n_ctx": 512, "n_embd": 1280, "n_head": 16, "n_layer": 48, "n_positions": 50000, "num_beams": 1, "num_labels": 1, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 0, "pruned_heads": {}, "repetition_penalty": 1.0, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "use_bfloat16": false, "vocab_size": 246534 } 02/10/2020 01:02:31 - INFO - transformers.modeling_utils - loading weights file https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin from cache at /root/.cache/torch/transformers/c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0 tcmalloc: large alloc 1262256128 bytes == 0x38b92000 @ 0x7fe1900bdb6b 0x7fe1900dd379 0x7fe139843b4a 0x7fe1398455fa 0x7fe13bb7578a 0x7fe13bdbe30b 0x7fe13be05b37 0x7fe184c8cad5 0x7fe184c8d17b 0x7fe184c91160 0x7fe184ade496 0x551b15 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x5096b7 0x595311 0x54a6ff 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x509642 0x595311 0x54a6ff 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 tcmalloc: large alloc 1262256128 bytes == 0x19fdda000 @ 0x7fe1900bdb6b 0x7fe1900dd379 0x7fe139843b4a 0x7fe1398455fa 0x7fe13bb7578a 0x7fe13bdbe30b 0x7fe13be05b37 0x7fe184c8cad5 
0x7fe184c8d17b 0x7fe184c91160 0x7fe184ade496 0x551b15 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x509642 0x595311 0x54a6ff 0x551b81 0x5aa6ec 0x50abb3 0x50d390 0x508245 0x509642 0x595311 0x54a6ff 0x551b81 0x5a067e 0x50d966 0x508245 ^C and then terminates. Could this be because of a GPU problem? Any sort of help is appreciated.
The solution was to increase the RAM. Since I was using Google Colab's free GPU, I went through this GitHub issue and found this useful:
Solution:
The following piece of code will crash the session in Colab; then select 'Get more RAM', which will increase the RAM up to 25.51 GB.
d=[]
while(1):
    d.append('1')
https://stackoverflow.com/questions/60142937/
RuntimeError: cuDNN version mismatch: PyTorch was compiled against 7102 but linked against 7604
I got this error when running training a deep learning model and although looking at many solutions over the Internet, they did not help me. The log is as follows: Traceback (most recent call last): File "main.py", line 208, in <module> main() File "main.py", line 100, in main model = nn.DataParallel(model).cuda() File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 105, in __init__ self.module.cuda(device_ids[0]) File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/nn/modules/module.py", line 249, in cuda return self._apply(lambda t: t.cuda(device)) File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/nn/modules/module.py", line 176, in _apply module._apply(fn) File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 112, in _apply self.flatten_parameters() File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/nn/modules/rnn.py", line 78, in flatten_parameters if not any_param.is_cuda or not torch.backends.cudnn.is_acceptable(any_param): File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/backends/cudnn/__init__.py", line 87, in is_acceptable if _libcudnn() is None: File "/home/dexter/miniconda3/envs/VideoSum/lib/python3.5/site-packages/torch/backends/cudnn/__init__.py", line 58, in _libcudnn 'but linked against {}'.format(compile_version, __cudnn_version)) RuntimeError: cuDNN version mismatch: PyTorch was compiled against 7102 but linked against 7604
The point is, no matter what versions are shown in the log, 7.6.4 is my cuDNN version and 7.1.2 is the cuDNN version the code was originally compiled against. What I needed was simply to downgrade my current cuDNN version with:
conda install cudnn=7.1.2
It works; if anything is wrong, please correct me.
https://stackoverflow.com/questions/60144922/
Iterating over subsets from torch.utils.data.random_split
I am currently loading a folder with AI training data in it. The subfolders represent the label names with the corresponding images inside. This works well by using pyTorch's ImageFolder loader. def load_dataset(): data_path = 'C:/example_folder/' train_dataset_manual = torchvision.datasets.ImageFolder( root=data_path, transform=torchvision.transforms.ToTensor() ) train_loader_manual = torch.utils.data.DataLoader( train_dataset_manual, batch_size=1, num_workers=0, shuffle=True ) return train_loader_manual full_dataset = load_dataset() Now I want to have this dataset split into a training and a test data set. I am using the random_split function for this: training_data_size = 0.8 train_size = int(training_data_size * len(full_dataset)) test_size = len(full_dataset) - train_size train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size]) The full_dataset is an object of type torch.utils.data.dataloader.DataLoader. I can iterate through it with a loop like this: for batch_idx, (data, target) in enumerate(full_dataset): print(batch_idx) The train_dataset is an object of type torch.utils.data.dataset.Subset. If I try to loop through it, I get: TypeError 'DataLoader' object is not subscriptable: for batch_idx, (data, target) in enumerate(train_dataset): print(batch_idx) How can I loop through it? I am relatively new to Python. Thanks!
You need to apply random_split to a Dataset not a DataLoader. The dataset used to define the DataLoader is available in the DataLoader.dataset member. For example you could do train_dataset, test_dataset = torch.utils.data.random_split(full_dataset.dataset, [train_size, test_size]) train_loader = DataLoader(train_dataset, batch_size=1, num_workers=0, shuffle=True) test_loader = DataLoader(test_dataset, batch_size=1, num_workers=0, shuffle=False) Then you can iterate over train_loader and test_loader as expected.
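After that, iteration works as usual, e.g.:
for batch_idx, (data, target) in enumerate(train_loader):
    print(batch_idx, data.shape, target)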
https://stackoverflow.com/questions/60150426/
PyTorch LSTM: RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 1219 and 440 in dimension 1
I have a basic PyTorch LSTM: import torch.nn as nn import torch.nn.functional as F class BaselineLSTM(nn.Module): def __init__(self): super(BaselineLSTM, self).__init__() self.lstm = nn.LSTM(input_size=13, hidden_size=13) def forward(self, x): x = self.lstm(x) return x For my data, I have: train_set = CorruptedAudioDataset(corrupted_path, train_set=True) train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, **kwargs) My CorruptedAudioDataset has: def __getitem__(self, index): corrupted_sound_file = SoundFile(self.file_paths[index]) corrupted_samplerate = corrupted_sound_file.samplerate corrupted_signal_audio_array = corrupted_sound_file.read() clean_path = self.file_paths[index].split('/') # print(self.file_paths[index], clean_path) clean_sound_file = SoundFile(self.file_paths[index]) clean_samplerate = clean_sound_file.samplerate clean_signal_audio_array = clean_sound_file.read() corrupted_mfcc = mfcc(corrupted_signal_audio_array, samplerate=corrupted_samplerate) clean_mfcc = mfcc(clean_signal_audio_array, samplerate=clean_samplerate) print('return', corrupted_mfcc.shape, clean_mfcc.shape) return corrupted_mfcc, clean_mfcc My training loop looks like: model = BaselineLSTM() for epoch in range(300): for inputs, outputs in train_loader: print('inputs', inputs) And that's the line that I get the error on: File "train_lstm_baseline.py", line 47, in train for inputs, outputs in train_loader: ... RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 1219 and 440 in dimension 1 at ../aten/src/TH/generic/THTensor.cpp:612
This exception is thrown basically because you're loading batches with different shapes. As they're stored in the same tensor, all samples must have the same shape. In this case, you have inputs of size 1219 and 440 in dimension 1, which is not possible. For example, you have something like:
torch.Size([1, 1219])
torch.Size([1, 440])
torch.Size([1, 550])
...
You must have:
torch.Size([1, n])
torch.Size([1, n])
torch.Size([1, n])
...
The easiest way to solve this problem is setting batch_size=1. However, it may slow down your training. The best way is setting the data to the same shape, for example by padding each batch in a custom collate_fn, as sketched below. In this case, you need to assess your problem to check if it's possible.
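A sketch of such a collate_fn, assuming each dataset item is a (corrupted_mfcc, clean_mfcc) pair of (time, features) arrays as in the question's __getitem__:
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    corrupted, clean = zip(*batch)
    # pad_sequence pads every sample up to the longest one in the batch
    corrupted = pad_sequence([torch.as_tensor(c) for c in corrupted], batch_first=True)
    clean = pad_sequence([torch.as_tensor(c) for c in clean], batch_first=True)
    return corrupted, clean

train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, shuffle=True, collate_fn=pad_collate)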
https://stackoverflow.com/questions/60154328/
How can I resize a PyTorch tensor with a sliding window?
I have a tensor with size: torch.Size([118160, 1]). What I want to do is split it up into n tensors with 100 elements each, sliding by 50 elements at a time. What's the best way to achieve this with PyTorch?
A possible solution is:
window_size = 100
stride = 50
splits = [x[i:min(x.size(0),i+window_size)] for i in range(0,x.size(0),stride)]
However, the last few slices will be shorter than window_size. If this is undesired, you can do:
splits = [x[i:i+window_size] for i in range(0,x.size(0)-window_size+1,stride)]
EDIT:
A more readable solution:
# if keep_short_tails is set to True, the slices shorter than window_size at the end of the result will be kept
def window_split(x, window_size=100, stride=50, keep_short_tails=True):
    length = x.size(0)
    splits = []
    if keep_short_tails:
        for slice_start in range(0, length, stride):
            slice_end = min(length, slice_start + window_size)
            splits.append(x[slice_start:slice_end])
    else:
        for slice_start in range(0, length - window_size + 1, stride):
            slice_end = slice_start + window_size
            splits.append(x[slice_start:slice_end])
    return splits
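For the sizes in the question, the helper behaves like this:
>>> x = torch.rand(118160, 1)
>>> splits = window_split(x)
>>> len(splits), splits[0].shape, splits[-1].shape
(2364, torch.Size([100, 1]), torch.Size([10, 1]))
>>> len(window_split(x, keep_short_tails=False))
2362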
https://stackoverflow.com/questions/60157188/
Transformers summarization with Python Pytorch - how to get longer output?
I use the AI-powered summarization from https://github.com/huggingface/transformers/tree/master/examples/summarization - state of the art results.
Should I train it myself to get summary output longer than what is used in the original huggingface GitHub training script?:
python run_summarization.py \
    --documents_dir $DATA_PATH \
    --summaries_output_dir $SUMMARIES_PATH \ # optional
    --no_cuda false \
    --batch_size 4 \
    --min_length 50 \
    --max_length 200 \
    --beam_size 5 \
    --alpha 0.95 \
    --block_trigram true \
    --compute_rouge true
When I do inference with
--min_length 500 \
--max_length 600 \
I get a good output for 200 tokens, but the rest of the text is
. . . [unused7] [unused7] [unused7] [unused8] [unused4] [unused7] [unused7] [unused4] [unused7] [unused8]. [unused4] [unused7] . [unused4] [unused8] [unused4] [unused8]. [unused4] [unused4] [unused8] [unused4] . . [unused4] [unused6] [unused4] [unused7] [unused6] [unused4] [unused8] [unused5] [unused4] [unused7] [unused4] [unused4] [unused7]. [unused4] [unused6]. [unused4] [unused4] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused6] [unused4] [unused4] [unused4]. [unused4]. [unused5] [unused4] [unused8] [unused7] [unused4] [unused7] [unused9] [unused4] [unused7] [unused4] [unused7] [unused5] [unused4] [unused5] [unused4] [unused6] [unused4]. . . [unused5]. [unused4] [unused4] [unused4] [unused6] [unused5] [unused4] [unused4] [unused6] [unused4] [unused6] [unused4] [unused4] [unused5] [unused4]. [unused5] [unused4] . [unused4] [unused4] [unused8] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused7] [unused4] [unused8] [unused4] [unused8] [unused4] [unused6]
The short answer is: Yes, probably.
To explain this in a bit more detail, we have to look at the paper behind the implementation: In Table 1, you can clearly see that most of their generated headlines are much shorter than what you are trying to initialize. While that alone might not be an indicator that you couldn't generate anything longer, we can go even deeper and look at the meaning of the [unusedX] tokens, as described by BERT dev Jacob Devlin:
Since [the [unusedX] tokens] were not used they are effectively randomly initialized.
Further, the summarization paper describes
Position embeddings in the original BERT model have a maximum length of 512; we overcome this limitation by adding more position embeddings that are initialized randomly and fine-tuned with other parameters in the encoder.
This is a strong indicator that past a certain length, they are likely falling back to the default initialization, which is unfortunately random. The question is whether you can still salvage the previous pre-training and simply fine-tune to your objective, or whether it is better to just start from scratch.
https://stackoverflow.com/questions/60157959/
View on portion of tensor
I have a multi-dimensional tensor; let's take this simple one as an example:
out = torch.Tensor(3, 4, 5)
I have to get a portion/subpart of this tensor, out[:,0,:], and then apply the method view(-1), but it's not possible:
out[:,0,:].view(-1)
RuntimeError: invalid argument 2: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Call .contiguous() before .view(). at ../aten/src/TH/generic/THTensor.cpp:203
A solution is to clone the subpart:
out[:,0,:].clone().view(-1)
Is there a better/faster solution than cloning?
What you did will work fine. That said, a more portable approach would be to use reshape, which will return a view when possible, but will create a contiguous copy if necessary. That way it will do the fastest thing possible. In your case the data must be copied, but by always using reshape there are cases where a copy won't be produced. So you could use
out[:,0,:].reshape(-1)
Gotcha
There's one important gotcha here. If you perform in-place operations on the output of reshape then that may or may not affect the original tensor, depending on whether or not a view or copy was returned.
For example, assuming out is already contiguous then in this case
>>> x = out[:,0,:].reshape(-1) # returns a copy
>>> x[0] = 10
>>> print(out[0,0,0].item() == 10)
False
x is a copy so changes to it don't affect out. But in this case
>>> x = out[:,:,0].reshape(-1) # returns a view
>>> x[0] = 10
>>> print(out[0,0,0].item() == 10)
True
x is a view, so in-place changes to x will change out as well.
Alternatives
A couple of alternatives are
out[:,0,:].flatten() # .flatten is just a special case of .reshape
and
out[:,0,:].contiguous().view(-1)
Though if you want the fastest approach I recommend against the latter method using contiguous().view since, in general, it is more likely than reshape or flatten to return a copy. This is because contiguous will create a copy even if the underlying data has the same number of bytes between subsequent entries. Therefore, there's a difference between
out[:,:,0].contiguous().view(-1) # creates a copy
and
out[:,:,0].flatten() # creates a non-contiguous view (b/c underlying data has uniform spacing of out.shape[2] values between entries)
where the contiguous().view approach forces a copy since out[:,:,0] is not contiguous, but flatten/reshape would create a view since the underlying data is uniformly spaced.
Sometimes contiguous() won't create a copy, for example compare
out[0,:,:].contiguous().view(-1) # creates a view b/c out[0,:,:] already is contiguous
and
out[0,:,:].flatten() # creates a view
which both produce a view of the original data without copying since out[0,:,:] is already contiguous.
If you want to ensure that out is decoupled completely from its flattened counterpart then the original approach using .clone() is the way to go.
https://stackoverflow.com/questions/60160307/
How do I represent a PyTorch LSTM 3D Tensor?
As per the docs, I see that Pytorch’s LSTM expects all of its inputs to be 3D tensors. I am trying to do a simple sequence-to-sequence LSTM and I have: class BaselineLSTM(nn.Module): def __init__(self): super(BaselineLSTM, self).__init__() self.lstm = nn.LSTM(input_size=100, hidden_size=100) def forward(self, x): print('x', x) x = self.lstm(x) return x My x.size() is torch.Size([100, 1]). I expect I need a third dimension somehow, but I'm unsure as to what it actually means. Any help would be greatly appreciated.
The input shape is further elaborated on in the PyTorch docs, in the Inputs: input, (h_0, c_0) section. The first dimension of the input tensor is expected to correspond to the sequence length, the second dimension the batch size, and the third the input size. So for your example, the input tensor x should actually be of size (seq_length, batch_size, 100). Here is a thread on the PyTorch forum with more details: https://discuss.pytorch.org/t/why-3d-input-tensors-in-lstm/4455/9
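One possible way to get the (100, 1) tensor into that shape, assuming the 100 values form a single input vector (a sketch; the right reshape depends on what your data actually represents):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=100)

x = torch.rand(100, 1)   # the shape from the question
x = x.view(1, 1, 100)    # (seq_length=1, batch_size=1, input_size=100)
out, (h, c) = lstm(x)
print(out.shape)         # torch.Size([1, 1, 100])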
https://stackoverflow.com/questions/60160826/
How do I calculate cross-entropy from probabilities in PyTorch?
By default, PyTorch's cross_entropy takes logits (the raw outputs from the model) as the input. I know that CrossEntropyLoss combines LogSoftmax (log(softmax(x))) and NLLLoss (negative log likelihood loss) in one single class. So, I think I can use NLLLoss to get cross-entropy loss from probabilities as follows: true labels: [1, 0, 1] probabilites: [0.1, 0.9], [0.9, 0.1], [0.2, 0.8] where, y_i,j denotes the true value i.e. 1 if sample i belongs to class j and 0 otherwise. and p_i,j denotes the probability predicted by your model of sample i belonging to class j. If I calculate by hand, it turns out to be: >>> -(math.log(0.9) + math.log(0.9) + math.log(0.8)) 0.4338 Using PyTorch: >>> labels = torch.tensor([1, 0, 1], dtype=torch.long) >>> probs = torch.tensor([[0.1, 0.9], [0.9, 0.1], [0.2, 0.8]], dtype=torch.float) >>> F.nll_loss(torch.log(probs), labels) tensor(0.1446) What am I doing wrong? Why is the answer different?
There is a reduction parameter for all loss functions in PyTorch. As you can see from the documentation, the default reduction parameter is 'mean', which divides the sum by the number of elements in the batch. To get the summation behavior (0.4338) that you want, you should pass the reduction parameter as follows:
F.nll_loss(torch.log(probs), labels, reduction='sum')
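With the tensors from the question, the two reductions relate as expected (the hand-computed 0.43386... rounds to 0.4339 in the tensor printout):
>>> F.nll_loss(torch.log(probs), labels, reduction='sum')
tensor(0.4339)
>>> F.nll_loss(torch.log(probs), labels)  # default reduction='mean': sum / 3
tensor(0.1446)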
https://stackoverflow.com/questions/60166427/
Does changing image channel break the image?
So I have images with the format (width, height, channel). My original channels are in RGB, so I load the images in grayscale:
for r, d, file in tqdm(os.walk(path)):
    for i in tqdm(file):
        if i[0:2]=="01":
            dist_one.append(cv2.imread(os.path.join(path,i),cv2.IMREAD_GRAYSCALE))
        else:
            dist_two.append(cv2.imread(os.path.join(path,i),cv2.IMREAD_GRAYSCALE))
Suppose the images have the shape (187, 187). So I add a channel using the code
g = np.expand_dims(dist_one[0], axis=0)
But this breaks the image when I try to plot it.
TypeError Traceback (most recent call last) in () 1 import matplotlib.pyplot as plt 2 ----> 3 plt.imshow(dist_one[0],cmap='gray') 4 plt.show() 5 frames/usr/local/lib/python3.6/dist-packages/matplotlib/image.py in set_data(self, A) 688 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]): 689 raise TypeError("Invalid shape {} for image data" --> 690 .format(self._A.shape)) 691 692 if self._A.ndim == 3: TypeError: Invalid shape (1, 187, 187) for image data
But it works when the channel is put last: g = np.expand_dims(dist_one[0], axis=-1). What's the reason for this? I need the channel first for PyTorch. Or am I supposed to train the model with broken images?
PyTorch requires [C, H, W], whereas numpy and matplotlib (which is where the error is being thrown) require the image to be [H, W, C]. To fix this issue, I'd suggest plotting the g obtained by doing g = np.expand_dims(dist_one[0], axis=-1). Before sending it to PyTorch, you can do one of two things. You can use g = torch.tensor(g).permute(2, 0, 1), which will result in a [C, H, W] tensor, or you can use the PyTorch Dataset and DataLoader setup, which handles other things like batching for you.
https://stackoverflow.com/questions/60170712/
Add Training and Testing Accuracy to a Simple Neural Network in PyTorch
I know this is a primitive question but what should I add in my code for it to output the training accuracy of the Neural Network in addition to the loss, I checked PyTorch tutorials and they show how to add training/testing accuracy in image classification but I do not know how to do that in my simple XOR solving NN, below is the code: # Step 1: importing our dependencies import torch from torch.autograd import Variable import numpy as np # Our data x = Variable(torch.Tensor([[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])) y = Variable(torch.Tensor([[0], [1], [1], [1], [1], [0], [0]])) # Step 2: building our class model class NeuralNetwork(torch.nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.linear_ij = torch.nn.Linear(3, 4) self.linear_jk = torch.nn.Linear(4, 1) def forward(self, x): matmul = self.linear_ij(x) activation = torch.sigmoid(matmul) matmul = self.linear_jk(activation) prediction = torch.sigmoid(matmul) return prediction # Our model model = NeuralNetwork() # Constructing the loss function and the optimization algorithm criterion = torch.nn.BCELoss(reduction='mean') optimizer = torch.optim.SGD(model.parameters(), lr=1) # Step 3: the training process for epoch in range(10000): prediction = model(x) loss = criterion(prediction, y) if epoch % 1000 == 0 or epoch == 10000 - 1: print("epoch ", epoch, ",", "loss: ", loss.item()) # Backpropagation process optimizer.zero_grad() loss.backward() optimizer.step() and this is it what it gives as an output: epoch 0 , loss: 0.6983293294906616 epoch 1000 , loss: 0.015215665102005005 epoch 2000 , loss: 0.0048239342868328094 epoch 3000 , loss: 0.00280318153090775 epoch 4000 , loss: 0.001963752554729581 epoch 5000 , loss: 0.0015071843517944217 epoch 6000 , loss: 0.0012211233843117952 epoch 7000 , loss: 0.0010254186345264316 epoch 8000 , loss: 0.000883264874573797 epoch 9000 , loss: 0.0007753585232421756 epoch 9999 , loss: 0.0006908221403136849 As for testing: # Testing our model model.eval() x_test = Variable(torch.Tensor([[1, 1, 0], [0, 0, 1], [0, 1, 1]])) y_test = Variable(torch.Tensor([[0], [0], [1]])) y_pred = model(x_test) print(model(x_test)) with the output: tensor([[0.0026], [0.0011], [0.9991]], grad_fn=<SigmoidBackward>)
To add accuracy you only need one line, namely:
print("Accuracy: ", ((prediction > 0.5) == y).float().mean().item())
When you use sigmoid, anything greater than 0.5 is considered positive and anything below negative. (prediction > 0.5) creates a tensor of bool type and you check which of those are equal to y. float() is needed as you cannot calculate the mean of bool tensors. item() isn't needed but returns the Python value from a single-valued tensor and IMO looks cleaner this way.
You can do the same for test, hence it would be:
model.eval()

x_test = Variable(torch.Tensor([[1, 1, 0], [0, 0, 1], [0, 1, 1]]))
y_test = Variable(torch.Tensor([[0], [0], [1]]))

with torch.no_grad():
    y_pred = model(x_test)
    print("Accuracy: ", ((y_pred > 0.5) == y_test).float().mean().item())
Please notice torch.no_grad(). This context manager disables autograd when you are within its scope. As you are just passing inputs through your neural network and not training it using gradients, there is no need for autograd to be a part of the equation.
Working with logits
It's usually a good habit not to use a final activation in your neural networks (unless you really need it). Hence your forward would look like this:
def forward(self, x):
    matmul = self.linear_ij(x)
    activation = torch.sigmoid(matmul)
    # Notice no sigmoid
    return self.linear_jk(activation)
This outputs logits (let's say unnormalized probabilities ranging over [-inf, inf]) indicating how confident your neural network is that the sample is positive (+inf) or negative. You have to change your loss function accordingly, e.g. to torch.nn.BCEWithLogitsLoss ('mean' is the default reduction, no need to make it explicit here):
criterion = torch.nn.BCEWithLogitsLoss()
Finally, accuracy changes slightly as well. Now anything greater than 0 is considered positive, hence you would do this:
print("Accuracy: ", ((prediction > 0) == y).float().mean().item())
If you need the probability you can still use torch.sigmoid on the output, but you might not even need it (as it seems in this case).
EDIT
You should also specify your data as torch.Tensor; Variable is deprecated (tensors support requires_grad directly), e.g.:
x = torch.Tensor(
    [[0, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]
)
y = torch.Tensor([[0], [1], [1], [1], [1], [0], [0]])
EDIT2:
It should be placed below (or above) your loss printing, e.g.:
for epoch in range(10000):
    prediction = model(x)
    loss = criterion(prediction, y)
    if epoch % 1000 == 0 or epoch == 10000 - 1:
        # Here is fine
        print("Accuracy: ", ((prediction > 0.5) == y).float().mean().item())
        print("epoch ", epoch, ",", "loss: ", loss.item())
    # Backpropagation process
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
https://stackoverflow.com/questions/60171198/
How do I know whether an instance is stored on GPU with PyTorch?
I've been learning PyTorch recently, and this question came up. For example, if I have a net inheriting from torch.nn.Module:
class Net(torch.nn.Module):
    def __init__(self, something):
        super(Net, self).__init__()
        self.p1 = something

    def forward(self):
        pass

net1 = Net(123)
net1.cuda() ##Here I can't see what is changed.
Then how can I know whether net1 (and that something) is stored on the GPU? I've read how *.cuda() works; it seems to let all the "children" run *.cuda(). I tried to see what the "children" are, but it seems net1 above has no children.
To check a simple tensor, you can check the is_cuda attribute. For example:
x = torch.zeros(100).cuda()
y = torch.zeros(100)
print(x.is_cuda)  # True
print(y.is_cuda)  # False
To check a model, I think the easiest way is using the parameters() method, which returns all trainable parameters of your model:
next(model.parameters()).is_cuda
https://stackoverflow.com/questions/60171654/
How can I convert this keras cnn model to pytorch version
This is the example Keras code that I want to convert to PyTorch. My input dataset is 10000*1*102 (two dimensions for the labels). The dataset includes 10000 samples. Each sample contains one row with 102 features. I am thinking of using a 1D CNN for regression.
PS: hyper-parameters (e.g. filters, kernel_size, stride, padding) could be adjusted based on my 10000*1*102 dataset.
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
Welcome to PyTorch. :) I am really glad you decided to switch from Keras to PyTorch. It was an important step for me to understand how NNs work in more detail. If you have any specific questions about the code or if it isn't working, please let me know.
import torch.nn as nn

# Conv1d in PyTorch expects input of shape (batch, channels, length), so the
# Keras input_shape=(n_timesteps, n_features) maps to in_channels=n_features.
a0 = nn.Conv1d(n_features, 64, 3)
a1 = nn.ReLU()
b0 = nn.Conv1d(64, 64, 3)
b1 = nn.ReLU()
c0 = nn.Dropout(p=0.5)
d0 = nn.MaxPool1d(2)
e0 = nn.Flatten()
# two kernel-3 convolutions shrink the length by 4, then MaxPool1d(2) halves it
e1 = nn.Linear(64 * ((n_timesteps - 4) // 2), 100)
e2 = nn.ReLU()
e3 = nn.Linear(100, n_outputs)
f0 = nn.Softmax(dim=1)

model = nn.Sequential(a0, a1, b0, b1, c0, d0, e0, e1, e2, e3, f0)
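A quick shape check of the sketch above (the sizes are illustrative, not taken from the question):
import torch

# define these before building the model above
n_timesteps, n_features, n_outputs = 102, 1, 2

x = torch.rand(8, n_features, n_timesteps)  # (batch, channels, length)
print(model(x).shape)                       # torch.Size([8, 2])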
https://stackoverflow.com/questions/60172607/