After PyTorch upgrade, my model is giving almost random output
I trained, tested, and am still using a model in PyTorch 0.4.1. It was, and still is, working fine (the output is what it should be) if I use PyTorch 0.4.1. But after I upgrade to version 1.0.1, every time I evaluate the same input image I get a different output (it's a regression model). I tried to see what changed between those versions, but since I am not getting any errors or warnings, I am not sure what to look for specifically. PS: I checked the weights, and they are also the same when I load the model.
It was because of the dropout layer. model.eval() disables dropout. Pretty simple. But after the PyTorch upgrade, if dropout is not defined explicitly in the model's __init__ function (as an nn.Dropout module), it will not get disabled during eval. At least this was the reason in my case.
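A minimal sketch of the difference (the module names are made up for illustration, since the asker's actual model is not shown): dropout applied through the functional API with its default training flag keeps dropping activations even in eval mode, whereas an nn.Dropout module registered in __init__ is switched off by model.eval().

import torch
import torch.nn as nn
import torch.nn.functional as F

class BadNet(nn.Module):
    def __init__(self):
        super(BadNet, self).__init__()
        self.fc = nn.Linear(10, 10)
    def forward(self, x):
        # F.dropout defaults to training=True unless self.training is passed,
        # so model.eval() alone does not turn it off here
        return F.dropout(self.fc(x), p=0.5)

class GoodNet(nn.Module):
    def __init__(self):
        super(GoodNet, self).__init__()
        self.fc = nn.Linear(10, 10)
        self.drop = nn.Dropout(p=0.5)  # registered submodule, toggled by eval()
    def forward(self, x):
        return self.drop(self.fc(x))

x = torch.randn(1, 10)
bad, good = BadNet().eval(), GoodNet().eval()
print(torch.equal(bad(x), bad(x)))    # likely False: dropout still active
print(torch.equal(good(x), good(x)))  # True: dropout disabled in eval mode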
https://stackoverflow.com/questions/54984345/
How does pytorch's nn.Module register submodule?
When I read the source code(python) of torch.nn.Module , I found the attribute self._modules has been used in many functions like self.modules(), self.children(), etc. However, I didn't find any functions updating it. So, where will the self._modules be updated? Furthermore, how does pytorch's nn.Module register submodule? class Module(object): def __init__(self): self._backend = thnn_backend self._parameters = OrderedDict() self._buffers = OrderedDict() self._backward_hooks = OrderedDict() self._forward_hooks = OrderedDict() self._forward_pre_hooks = OrderedDict() self._modules = OrderedDict() self.training = True def named_modules(self, memo=None, prefix=''): if memo is None: memo = set() if self not in memo: memo.add(self) yield prefix, self for name, module in self._modules.items(): if module is None: continue submodule_prefix = prefix + ('.' if prefix else '') + name for m in module.named_modules(memo, submodule_prefix): yield m
The modules and parameters are usually registered by setting an attribute for an instance of nn.module. Particularly, this kind of behavior is implemented by customizing the __setattr__ method: def __setattr__(self, name, value): def remove_from(*dicts): for d in dicts: if name in d: del d[name] params = self.__dict__.get('_parameters') if isinstance(value, Parameter): if params is None: raise AttributeError( "cannot assign parameters before Module.__init__() call") remove_from(self.__dict__, self._buffers, self._modules) self.register_parameter(name, value) elif params is not None and name in params: if value is not None: raise TypeError("cannot assign '{}' as parameter '{}' " "(torch.nn.Parameter or None expected)" .format(torch.typename(value), name)) self.register_parameter(name, value) else: modules = self.__dict__.get('_modules') if isinstance(value, Module): if modules is None: raise AttributeError( "cannot assign module before Module.__init__() call") remove_from(self.__dict__, self._parameters, self._buffers) modules[name] = value elif modules is not None and name in modules: if value is not None: raise TypeError("cannot assign '{}' as child module '{}' " "(torch.nn.Module or None expected)" .format(torch.typename(value), name)) modules[name] = value else: buffers = self.__dict__.get('_buffers') if buffers is not None and name in buffers: if value is not None and not isinstance(value, torch.Tensor): raise TypeError("cannot assign '{}' as buffer '{}' " "(torch.Tensor or None expected)" .format(torch.typename(value), name)) buffers[name] = value else: object.__setattr__(self, name, value) See https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py to find this method.
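A small sketch (class and attribute names are made up for illustration) showing that plain attribute assignment is what populates _modules:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(3, 16, 3)   # __setattr__ intercepts this and stores it in self._modules
        self.plain = [nn.Linear(4, 4)]    # a plain list is not a Module, so it is not registered

net = Net()
print(list(net._modules.keys()))                    # ['conv'] - 'plain' is missing
print([name for name, _ in net.named_modules()])    # ['', 'conv']

This is also why submodules hidden inside plain Python containers are invisible to parameters() and modules() unless you wrap them in nn.ModuleList or nn.ModuleDict.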
https://stackoverflow.com/questions/54994658/
pdb cannot debug into backward hooks
Here is my code. import torch v = torch.tensor([0., 0., 0.], requires_grad=True) x = 1 def f(grad): global x x = 2 return grad * 2 h = v.register_hook(f) # double the gradient v.backward(torch.tensor([1., 2., 3.])) h.remove() print(v.grad) When I debug with pdb, I find that I cannot break in function f (I set a breakpoint inside f at the statement x = 2). Does anyone know how to solve this? Note: if I use pycharm, I can break into the function. But on the remote server, I would like to use pdb.
You can try ipdb https://pypi.org/project/ipdb/ instead of pdb.
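For completeness, a minimal sketch of what that can look like inside the hook (assuming ipdb is installed on the remote server); the explicit set_trace() call drops into the debugger when the hook fires during backward():

import torch
import ipdb

v = torch.tensor([0., 0., 0.], requires_grad=True)

def f(grad):
    ipdb.set_trace()   # interactive prompt opens here during the backward pass
    return grad * 2

h = v.register_hook(f)
v.backward(torch.tensor([1., 2., 3.]))
h.remove()
print(v.grad)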
https://stackoverflow.com/questions/54998448/
Pytorch TypeError: eq() received an invalid combination of arguments
num_samples = 10 def predict(x): sampled_models = [guide(None, None) for _ in range(num_samples)] yhats = [model(x).data for model in sampled_models] mean = torch.mean(torch.stack(yhats), 0) return np.argmax(mean.numpy(), axis=1) print('Prediction when network is forced to predict') correct = 0 total = 0 for j, data in enumerate(test_loader): images, labels = data predicted = predict(images.view(-1,28*28)) total += labels.size(0) correct += (predicted == labels).sum().item() print("accuracy: %d %%" % (100 * correct / total)) Error: correct += (predicted == labels).sum().item() TypeError: eq() received an invalid combination of arguments - got (numpy.ndarray), but expected one of: * (Tensor other) didn't match because some of the arguments have invalid types: (!numpy.ndarray!) * (Number other) didn't match because some of the arguments have invalid types: (!numpy.ndarray!) *
You are trying to compare predicted and labels. However, your predicted is an np.array while labels is a torch.tensor therefore eq() (the == operator) cannot compare between them. Replace the np.argmax with torch.argmax: return torch.argmax(mean, dim=1) And you should be okay.
https://stackoverflow.com/questions/54999926/
Why does creating a single tensor on the GPU take 2.5 seconds in PyTorch?
I'm just going through the beginner tutorial on PyTorch and noticed that one of the many different ways to put a tensor (basically the same as a numpy array) on the GPU takes a suspiciously long amount compared to the other methods: import time import torch if torch.cuda.is_available(): print('time =', time.time()) x = torch.randn(4, 4) device = torch.device("cuda") print('time =', time.time()) y = torch.ones_like(x, device=device) # directly create a tensor on GPU => 2.5 secs?? print('time =', time.time()) x = x.to(device) # or just use strings ``.to("cuda")`` z = x + y print(z) print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together! a = torch.ones(5) print(a.cuda()) print('time =', time.time()) else: print('I recommend you get CUDA to work, my good friend!') Output (just times): time = 1551809363.28284 time = 1551809363.282943 time = 1551809365.7204516 # (!) time = 1551809365.7236063 Version details: 1 CUDA device: GeForce GTX 1050, driver version 415.27 CUDA = 9.0.176 PyTorch = 1.0.0 cuDNN = 7401 Python = 3.5.2 GCC = 5.4.0 OS = Linux Mint 18.3 Linux kernel = 4.15.0-45-generic As you can see this one operation ("y = ...") takes much longer (2.5 seconds) than the rest combined (.003 seconds). I'm confused about this as I expect all these methods to basically do the same. I've tried making sure the types in this line are 32 bit or have different shapes but that didn't change anything.
When I re-order the commands, whichever CUDA command runs first takes the ~2.5 seconds. This leads me to believe there is a delayed, one-time setup of the device happening on the first GPU operation (the CUDA context is initialized lazily), and future on-GPU allocations will be faster.
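A hedged way to verify this: pay the one-time CUDA initialization cost up front with a small throwaway allocation, and call torch.cuda.synchronize() before reading timers, since CUDA calls are asynchronous.

import time
import torch

device = torch.device("cuda")

# Warm-up: the first CUDA operation triggers context creation and is slow exactly once
t0 = time.time()
_ = torch.zeros(1, device=device)
torch.cuda.synchronize()
print("first CUDA op:", time.time() - t0, "s")   # typically on the order of seconds

t0 = time.time()
y = torch.ones(4, 4, device=device)
torch.cuda.synchronize()
print("subsequent op:", time.time() - t0, "s")   # typically milliseconds or less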
https://stackoverflow.com/questions/55009297/
Trying to print class names for dog breed but it keeps saying list index out of range
I am using a resnet model to classify dog breeds but when I try to print out an image with the label of dog breed it says list index out of range. Here is my code: import torchvision.models as models import torch.nn as nn model_transfer = models.resnet18(pretrained=True) if use_cuda: model_transfer = model_transfer.cuda() model_transfer.fc.out_features = 133 Then I train the model and get over 70% accuracy on the dog breeds. Then here is my code to classify dog and print the dog breed: data_transfer = {'train': datasets.ImageFolder('/data/dog_images/train',transform=transforms.Compose([transforms.RandomResizedCrop(224),transforms.ToTensor()]))} class_names[0] class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes] def predict_breed_transfer(img_path): image = Image.open(img_path) # large images will slow down processing in_transform = transforms.Compose([ transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) # discard the transparent, alpha channel (that's the :3) and add the batch dimension image = in_transform(image)[:3,:,:].unsqueeze(0) image = image output = model_transfer(image) pred = torch.argmax(output) return class_names[pred] predict_breed_transfer('images/Labrador_retriever_06455.jpg') The code always predicts the dog wrong for some reason Then when I try to print out the image and the label: import matplotlib.pyplot as plt def run_app(img_path): img = Image.open(img_path) dog = dog_detector(img_path) if not dog: print('hello, human!') plt.imshow(img) print('You look like a ... ') print(predict_breed_transfer(img_path)) if dog: print('hello, dog!') print('Your predicted breed is ....') print(predict_breed_transfer(img_path)) plt.imshow(img) else: print('Niether human nor dog') And run a for loop that calls it on some dog images it will print some of the breeds out then it will say list index out of range and doesn't show any of the images. The length of class_names is 133 And when I print out the resnet model the output is only 133 nodes does anyone know why it is saying list index out of range or why it is so inaccurate. `IndexError Traceback (most recent call last) <ipython-input-26-473a9ba884b5> in <module>() 5 ## suggested code, below 6 for file in np.hstack((human_files[:3], dog_files[:3])): ----> 7 run_app(file) 8 <ipython-input-25-1d44200e44cc> in run_app(img_path) 10 plt.show(img) 11 print('You look like a ... ') ---> 12 print(predict_breed_transfer(img_path)) 13 if dog: 14 print('hello, dog!') <ipython-input-20-a51fb205659e> in predict_breed_transfer(img_path) 26 pred = torch.argmax(output) 27 ---> 28 return class_names[pred] 29 predict_breed_transfer('images/Labrador_retriever_06455.jpg') 30 IndexError: list index out of range` Here is the full error
I suppose you have several issues that can be fixed using 13 chars. First, I suggest what @Alekhya Vemavarapu suggested - run your code with a debugger to isolate each line and inspect the output. This is one of the greatest benefits of dynamic graphs in PyTorch. Secondly, the most probable cause of your issue is the argmax statement, which you use incorrectly. You do not specify the dimension to perform the argmax on, so PyTorch automatically flattens the tensor and performs the operation on the full-length vector. Thus, you get a number between 0 and MB_Size x num_classes - 1. See the official doc on this method. So, due to your fully connected layer I assume your output is of shape (MB_Size, num_classes). If so, you need to change your code to the following line: pred = torch.argmax(output, dim=1) and that's it. Otherwise, just choose the dimension of the logits. The third thing you want to consider is dropout and other differences a training configuration may cause at inference time. For instance, dropout in some frameworks may require multiplying the output by 1/(1-p) at inference (or not, since it can be done while training), batch normalization may behave differently since the batch size is different, and so on. Additionally, to reduce memory consumption, no gradients should be computed. Luckily, PyTorch developers are very thoughtful and provided us with torch.no_grad() and model.eval() for that. I strongly suggest making a habit of that, possibly changing your code with a few letters: output = model_transfer.eval()(image) and you're done! Edit: This is a simple case of wrong usage of the PyTorch framework, not reading the docs and not debugging your code. The following code is simply incorrect: model_transfer.fc.out_features = 133 This line does not actually create a new fully connected layer. It just changes a property of the existing one. Try it in your console: import torch a = torch.nn.Linear(1,2) a.out_features = 3 print(a.bias.data.shape, a.weight.data.shape) Output: torch.Size([2]) torch.Size([2, 1]) which indicates that the actual weight matrix and bias vector remain in their original dimensions. A correct way to perform transfer learning is to keep the backbone (usually the convolutional layers up to the fully connected ones in these types of models) and overwrite the head (the FC layer in this case) with your own. If only one fully connected layer exists in the original model, you do not have to change the forward pass of your model and you're good to go. Since this answer is already long enough, just visit the transfer learning tutorial in the PyTorch docs to see how it can be done. Good luck.
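To make the head-replacement fix concrete, here is a minimal sketch using the variable names from the question (the 133 classes and the resnet18 backbone come from the question; the input image tensor is assumed to be the (1, 3, 224, 224) tensor produced by the question's transform):

import torch
import torch.nn as nn
import torchvision.models as models

model_transfer = models.resnet18(pretrained=True)
# Overwrite the fully connected head with a new layer sized for 133 breeds;
# this actually re-creates the weight matrix, unlike assigning fc.out_features.
model_transfer.fc = nn.Linear(model_transfer.fc.in_features, 133)

model_transfer.eval()                    # disable dropout / use running batch-norm stats
with torch.no_grad():                    # no gradients needed for inference
    output = model_transfer(image)       # image: (1, 3, 224, 224) tensor from the transform above
    pred = torch.argmax(output, dim=1)   # argmax over the class dimension, not the flattened tensor
# class_names[pred.item()] then maps the predicted index back to a breed name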
https://stackoverflow.com/questions/55012880/
Why doesn't the learning rate (LR) go below 1e-08 in pytorch?
I am training a model. To overcome overfitting I have done optimization, data augmentation, etc. I have a learning-rate schedule (I tried both SGD and Adam), and when there is a plateau (I also tried step decay), the learning rate is decreased by a factor until it reaches 1e-08, but it won't go below that and my model's validation gets stuck after this point. I tried passing the epsilon parameter to Adam to suggest a smaller value, but it still got stuck at LR 1e-08. I also pass a weight decay, but it doesn't change the situation. Neither did setting amsgrad to true. I did some research and people suggest that the Adam optimizer has inherent problems, but nothing is mentioned about the learning rate - and every discussion added that with SGD there is no problem. Why is this? Is it a bug, or is it designed this way because the authors think it is a meaninglessly small value after that? It seems like it would really help to have a smaller learning rate for my dataset, because all seems well up until the learning rate is down to 1e-08.
Personally I'm not aware of a lower limit on the learning rate (other than 0.0). But you can achieve the effect of a lower learning rate by reducing the loss before computing the backwards pass:

outputs = model(batch)
loss = criterion(outputs, targets)
# Equivalent to lowering the learning rate by a factor of 100
loss = loss / 100
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
https://stackoverflow.com/questions/55026544/
Pre-Trained Models in Keras,TorchVision
I have the following code, which uses the pre-trained ResNet50 model in Keras with ImageNet weights: from keras.applications.resnet50 import ResNet50 from keras.preprocessing import image from keras.applications.resnet50 import preprocess_input, decode_predictions import numpy as np model = ResNet50(weights='imagenet') print(model) and it works fine. My question is: can I find a pre-trained model inside Keras, Torchvision or TensorFlow for one of the following: 1) LeNet5 for the MNIST dataset 2) a 32-layer ResNet for the CIFAR-10 dataset I know the alternative is to train LeNet5 on my own, for example, but a pre-trained model would be preferred, and as far as I searched I didn't find them. Thank you.
I've also been exploring TensorFlow's pretrained model landscape and (as of 1/14/2020), solutions don't exist for 1) an MNIST-pretrained LeNet or 2) a CIFAR-10-pretrained 32-layer ResNet. Honestly, I strongly doubt that most frameworks release a pretrained model for LeNet-5. It's extremely small and usually takes on the order of minutes to train. Aside from the tf.keras.applications module you mentioned, some other potential options are: the official tensorflow/models repository, which contains model examples for MNIST and a 32-layer ResNet; switching to PyTorch, which has various models pretrained on ImageNet. I realize neither of these is ideal.
https://stackoverflow.com/questions/55030766/
How to do 2-layer nested FOR loop in PYTORCH?
I am learning to implement the Factorization Machine in Pytorch. And there should be some feature crossing operations. For example, I've got three features [A,B,C], after embedding, they are [vA,vB,vC], so the feature crossing is "[vA·vB], [vA·vC], [vB·vc]". I know this operation can be simplified by the following: It can be implemented by MATRIX OPERATIONS. But this only gives a final result, say, a single value. The question is, how to get all cross_vec in the following without doing FOR loop: note: size of "feature_emb" is [batch_size x feature_len x embedding_size] g_feature = 0 for i in range(self.featurn_len): for j in range(self.featurn_len): if j <= i: continue cross_vec = feature_emb[:,i,:] * feature_emb[:,j,:] g_feature += torch.sum(cross_vec, dim=1)
You can compute cross_vec = (feature_emb[:, None, ...] * feature_emb[..., None, :]).sum(dim=-1) This should give you cross_vec of shape (batch_size, feature_len, feature_len). Alternatively, you can use torch.bmm: cross_vec = torch.bmm(feature_emb, feature_emb.transpose(1, 2))
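If the goal is to reproduce g_feature from the question (the sum of dot products over unordered pairs i < j) without Python loops, one hedged way is to mask the strict upper triangle of that pairwise matrix (the sizes below are illustrative):

import torch

batch_size, feature_len, embedding_size = 4, 3, 8
feature_emb = torch.randn(batch_size, feature_len, embedding_size)

# (batch, feature_len, feature_len) matrix of pairwise dot products
pairwise = torch.bmm(feature_emb, feature_emb.transpose(1, 2))

# keep only j > i, matching the loop's "if j <= i: continue"
mask = torch.triu(torch.ones(feature_len, feature_len), diagonal=1)
g_feature = (pairwise * mask).sum(dim=(1, 2))   # one value per batch element, same as the double loop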
https://stackoverflow.com/questions/55038022/
PyTorch conversion between tensor and numpy array: the addition operation
I am following the 60-minute blitz on PyTorch but have a question about conversion of a numpy array to a tensor. Tutorial example here. This piece of code: import numpy as np a = np.ones(5) b = torch.from_numpy(a) np.add(a, 1, out=a) print(a) print(b) yields [2. 2. 2. 2. 2.] tensor([2., 2., 2., 2., 2.], dtype=torch.float64) However import numpy as np a = np.ones(5) b = torch.from_numpy(a) a = a + 1 #the diff is here print(a) print(b) yields [2. 2. 2. 2. 2.] tensor([1., 1., 1., 1., 1.], dtype=torch.float64) Why are the outputs different?
This actually has little to do with PyTorch. Compare import numpy as np a = np.ones(5) b = a followed by either np.add(a, 1, out=a) print(b) or a = a + 1 print(b) There is a difference between np.add(a, 1, out=a) and a = a + 1. In the former you retain the same object (array) a with different values (2 instead of 1); in the latter you get a new array, which is bound to the same variable name a and has values of 2. However, the "original" a is discarded and unless something else (b) points to it, would be deallocated. In other words, the first operation is in-place and the latter out-of-place. Since b holds on to the array originally found at a, reassigning a + 1 to a does not affect the value of b. An alternative in-place mutation syntax would be a[:] = a + 1 print(b) Regarding PyTorch, it's very simple. from_numpy creates a tensor which aliases an actual object (array), so it is equivalent to the b = a line in my first snippet. The tensor will track the changes in the array named a at the point of calling, rather than the changes of what the name a points to.
https://stackoverflow.com/questions/55040217/
Problems passing tensor to linear layer - Pytorch
I'm trying to build a neural net however I can't figure out where I'm going wrong with the max pooling layer. self.embed1 = nn.Embedding(256, 8) self.conv_1 = nn.Conv2d(1, 64, (7,8), padding = (0,0)) self.fc1 = nn.Linear(64, 2) def forward(self,x): import pdb; pdb.set_trace() x = self.embed1(x) #input a tensor of ([1,217]) output size: ([1, 217, 8]) x = x.unsqueeze(0) #conv lay needs a tensor of size (B x C x W x H) so unsqueeze here to make ([1, 1, 217, 8]) x = self.conv_1(x) #creates 64 filter of size (7, 8).Outputs ([1, 64, 211, 1]) as 6 values lost due to not padding. x = torch.max(x,0) #returning max over the 64 columns. This returns a tuple of length 2 with 64 values in each att, the max val and indices. x = x[0] #I only need the max values. This returns a tensor of size ([64, 211, 1]) x = x.squeeze(2) #linear layer only wants the number of inputs and number of outputs so I squeeze the tensor to ([64, 211]) x = self.fc1(x) #Error Size mismatch (M1: [64 x 211] M2: [64 x 2]) I understand why the linear layer isn't accepting 211 however I don't understand why my tensor after maxing over the columns isn't 64 x 2.
Your use of torch.max returns two outputs: the max value along dim=0 and the argmax along that dimension. Thus, you need to pick only the first output. (You might want to consider using adaptive max pooling for this task.) Your linear layer expects its input to have dim 64 (that is, a batch_size-by-64 shaped tensor). However, it seems like your x[0] is of shape 13504x1 - definitely not 64. See this thread for example.
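As a hedged illustration of the adaptive-pooling suggestion (shapes follow the comments in the question): pooling the conv output down to one value per channel yields the (batch, 64) tensor the linear layer expects.

import torch
import torch.nn as nn

x = torch.randn(1, 64, 211, 1)           # conv output shape from the question: (B, C, H, W)
pool = nn.AdaptiveMaxPool2d((1, 1))      # max over the spatial dimensions, per channel
x = pool(x).view(x.size(0), -1)          # -> (1, 64)
fc1 = nn.Linear(64, 2)
out = fc1(x)                             # -> (1, 2), matching the layer definition
print(out.shape)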
https://stackoverflow.com/questions/55040412/
How does Pytorch Dataloader handle variable size data?
I have a dataset that looks like below. That is the first item is the user id followed by the set of items which is clicked by the user. 0 24104 27359 6684 0 24104 27359 1 16742 31529 31485 1 16742 31529 2 6579 19316 13091 7181 6579 19316 13091 2 6579 19316 13091 7181 6579 19316 2 6579 19316 13091 7181 6579 19316 13091 6579 2 6579 19316 13091 7181 6579 4 19577 21608 4 19577 21608 4 19577 21608 18373 5 3541 9529 5 3541 9529 6 6832 19218 14144 6 6832 19218 7 9751 23424 25067 12606 26245 23083 12606 I define a custom dataset to handle my click log data. import torch.utils.data as data class ClickLogDataset(data.Dataset): def __init__(self, data_path): self.data_path = data_path self.uids = [] self.streams = [] with open(self.data_path, 'r') as fdata: for row in fdata: row = row.strip('\n').split('\t') self.uids.append(int(row[0])) self.streams.append(list(map(int, row[1:]))) def __len__(self): return len(self.uids) def __getitem__(self, idx): uid, stream = self.uids[idx], self.streams[idx] return uid, stream Then I use a DataLoader to retrieve mini batches from the data for training. from torch.utils.data.dataloader import DataLoader clicklog_dataset = ClickLogDataset(data_path) clicklog_data_loader = DataLoader(dataset=clicklog_dataset, batch_size=16) for uid_batch, stream_batch in stream_data_loader: print(uid_batch) print(stream_batch) The code above returns differently from what I expected, I want stream_batch to be a 2D tensor of type integer of length 16. However, what I get is a list of 1D tensor of length 16, and the list has only one element, like below. Why is that ? #stream_batch [tensor([24104, 24104, 16742, 16742, 6579, 6579, 6579, 6579, 19577, 19577, 19577, 3541, 3541, 6832, 6832, 9751])]
So how do you handle the fact that your samples are of different length? torch.utils.data.DataLoader has a collate_fn parameter which is used to transform a list of samples into a batch. By default it does this to lists. You can write your own collate_fn, which for instance 0-pads the input, truncates it to some predefined length or applies any other operation of your choice.
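A minimal sketch of such a collate_fn for the click-log dataset above (padding with 0 is an assumption; pick an id that does not collide with real items):

import torch
from torch.utils.data.dataloader import DataLoader

def pad_collate(batch):
    # batch is a list of (uid, stream) pairs as returned by __getitem__
    uids = torch.tensor([uid for uid, _ in batch])
    max_len = max(len(stream) for _, stream in batch)
    padded = torch.zeros(len(batch), max_len, dtype=torch.long)   # 0 used as the padding id
    for i, (_, stream) in enumerate(batch):
        padded[i, :len(stream)] = torch.tensor(stream)
    return uids, padded

# clicklog_data_loader = DataLoader(clicklog_dataset, batch_size=16, collate_fn=pad_collate)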
https://stackoverflow.com/questions/55041080/
PyTorch model validation: The size of tensor a (32) must match the size of tensor b (13)
I am a very beginner in case of machine learning. So for learning purpose I am trying to develop a simple CNN to classify chess pieces. The net already works and I can train it but I have a problem with my validation function. I can't compare my prediction with my target_data because my prediction is only a tensor of size 13 while target.data is [batch_size]x13. I can't figure out where my mistake is. The PyTorch examples are almost all using this function to compare the prediction with the target data. It would be really great if anybody could help me out here. You can lookup the rest of the code here: https://github.com/michaelwolz/ChessML/blob/master/train.ipynb def validate(model, validation_data, criterion): model.eval() loss = 0 correct = 0 for i in range(len(validation_data)): data, target = validation_data[i][0], validation_data[i][1] target = torch.Tensor(target) if torch.cuda.is_available(): data = data.cuda() target = target.cuda() out = model(data) loss += criterion(out, target).item() _, prediction = torch.max(out.data, 1) correct += (prediction == target.data).sum().item() loss = loss / len(validation_data) print("###################################") print("Average loss:", loss) print("Accuracy:", 100. * correct / len(validation_data)) print("###################################") Error: <ipython-input-6-6b21e2bfb8a6> in validate(model, validation_data, criterion) 17 18 _, prediction = torch.max(out.data, 1) ---> 19 correct += (prediction == target.data).sum().item() 20 21 loss = loss / len(validation_data) RuntimeError: The size of tensor a (32) must match the size of tensor b (13) at non-singleton dimension 1 Edit: My labels look like this: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] Each index represents one class. The output of the torch.max() function seems to be the index of the class. I don't understand how I could compare the index to the target_label. I mean I could just write a function which checks if there is a 1 at the predicted index but I think that my mistake is somewhere else.
Simply run "argmax" on the target as well: _, target = torch.max(target.data, 1) Or better yet, just keep the target around as [example_1_class, example_2_class, ...], instead of 1-hot encoding.
https://stackoverflow.com/questions/55046831/
Unexpected key(s) in state_dict: "model", "opt"
I'm currently using fast.ai to train an image classifier model. data = ImageDataBunch.single_from_classes(path, classes, ds_tfms=get_transforms(), size=224).normalize(imagenet_stats) learner = cnn_learner(data, models.resnet34) learner.model.load_state_dict( torch.load('stage-2.pth', map_location="cpu") ) which results in : torch.load('stage-2.pth', map_location="cpu") File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict self.class.name, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for Sequential: ... Unexpected key(s) in state_dict: "model", "opt". I have looked around in SO and tried to use the following solution: # original saved file with DataParallel state_dict = torch.load('stage-2.pth', map_location="cpu") # create new OrderedDict that does not contain `module.` from collections import OrderedDict new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[7:] # remove `module.` new_state_dict[name] = v # load params learner.model.load_state_dict(new_state_dict) which results in : RuntimeError: Error(s) in loading state_dict for Sequential: Unexpected key(s) in state_dict: "". I'm using Google Colab to train my model and then port the trained model into docker and try to host in in a local server. What could be the issue? Could it be the different version of pytorch which results in model mismatch? In my docker config: # Install pytorch and fastai RUN pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html RUN pip install fastai While my Colab is using the following: !curl -s https://course.fast.ai/setup/colab | bash
My strong guess is that stage-2.pth contains two top-level items: the model itself (its weights) and the final state of the optimizer which was used to train it. To load just the model, you need only the former. Assuming things were done in the idiomatic PyTorch way, I would try learner.model.load_state_dict( torch.load('stage-2.pth', map_location="cpu")['model'] ) Update: after applying my first round of advice it becomes clear that you're loading a savepoint create with a different (perhaps differently configured?) model than the one you're loading it into. As you can see in the pastebin, the savepoint contains weights for some extra layers, not present in your model, such as bn3, downsample, etc. "0.4.0.bn3.running_var", "0.4.0.bn3.num_batches_tracked", "0.4.0.downsample.0.weight" at the same time some other key names match, but the tensors are of different shapes. size mismatch for 0.5.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]). I see a pattern that you consistently try to load a parameter of shape [2^(x+1), 2^x, 1, 1] in place of [2^(x), 2^(x-1), 1, 1]. Perhaps you're trying to load a model of different depth (ex. loading vgg-16 weights for vgg-11?). Either way, you need to figure out the exact architecture used to create your savepoint and then recreate it before loading the savepoint. PS. In case you weren't sure - savepoints contain model weights, along with their shapes and (autogenerated) names. They do not contain the full specification of the architecture itself - you need to assure yourself, that you're calling model.load_state_dict with model being of exactly the same architecture as was used to create the savepoint. Otherwise you will likely have weight names mismatching.
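A quick, hedged way to confirm what is actually inside the file before loading it (the 'model' and 'opt' keys are expected based on the error message in the question):

import torch

checkpoint = torch.load('stage-2.pth', map_location="cpu")
print(type(checkpoint), list(checkpoint.keys()))     # expect something like ['model', 'opt']
# inspect the layer names and shapes the savepoint expects
print(list(checkpoint['model'].keys())[:10])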
https://stackoverflow.com/questions/55047065/
converting list of tensors to tensors pytorch
I have list of tensor where each tensor has a different size. How can I convert this list of tensors into a tensor using PyTorch? For instance, x[0].size() == torch.Size([4, 8]) x[1].size() == torch.Size([4, 7]) # different shapes! This: torch.tensor(x) Gives the error: ValueError: only one element tensors can be converted to Python scalars
You might be looking for cat. However, tensors cannot hold variable-length data. For example, here we have a list with two tensors that have different sizes (in their last dim, dim=2) and we want to create a larger tensor consisting of both of them, so we can use cat to create a larger tensor containing both of their data. Also note that you can't use cat with half tensors on CPU as of right now, so you should convert them to float, do the concatenation and then convert back to half. import torch a = torch.arange(8).reshape(2, 2, 2) b = torch.arange(12).reshape(2, 2, 3) my_list = [a, b] my_tensor = torch.cat([a, b], dim=2) print(my_tensor.shape) #torch.Size([2, 2, 5]) You haven't explained your goal, so another option is to use pad_sequence like this: from torch.nn.utils.rnn import pad_sequence a = torch.ones(25, 300) b = torch.ones(22, 300) c = torch.ones(15, 300) pad_sequence([a, b, c]).size() #torch.Size([25, 3, 300]) Edit: in this particular case, you can use torch.cat([x.float() for x in sequence], dim=1).half()
https://stackoverflow.com/questions/55050717/
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 1 in dimension 1
While training the resnet50 model through pytorch I got this error: RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 1 in dimension 1 at /pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1333 I'm using this: http://github.com/Helias/Car-Model-Recognition/ with this dataset http://vmmrdb.cecsresearch.org/
I solved this, the problem was the different images color channels, not all the images were RGB, so I made a conversion in dataset.py, I changed this: im = Image.open(image_path) into this: im = Image.open(image_path).convert('RGB')
https://stackoverflow.com/questions/55054009/
Pytorch batch matrix vector outer product
I am trying to generate a vector-matrix outer product (tensor) using PyTorch. Assuming the vector v has size p and the matrix M has size qXr, the result of the product should be pXqXr. Example: #size: 2 v = [0, 1] #size: 2X3 M = [[0, 1, 2], [3, 4, 5]] #size: 2X2X3 v*M = [[[0, 0, 0], [0, 0, 0]], [[0, 1, 2], [3, 4, 5]]] For two vectors v1 and v2, I can use torch.bmm(v1.view(1, -1, 1), v2.view(1, 1, -1)). This can be easily extended for a batch of vectors. However, I am not able to find a solution for vector-matrix case. Also, I need to do this operation for batches of vectors and matrices.
You can use torch.einsum operator:

torch.einsum('bp,bqr->bpqr', v, M)  # batch-wise operation: v.shape=(b,p), M.shape=(b,q,r)
torch.einsum('p,qr->pqr', v, M)     # cross-batch operation
https://stackoverflow.com/questions/55054127/
PyTorch PermissionError: [Errno 13] Permission denied: '/.torch'
I'm running a PyTorch based ML program for image classification using Resnet50 model for transfer learning. I am getting below error regarding permission. Traceback (most recent call last): File "imgc_pytorch.py", line 67, in   model = models.resnet50(pretrained=True) File "/opt/conda/lib/python3.6/site-packages/torchvision/models/resnet.py", line 187, in resnet50   model.load_state_dict(model_zoo.load_url(model_urls['resnet50'])) File "/opt/conda/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 59, in load_url   os.makedirs(model_dir) File "/opt/conda/lib/python3.6/os.py", line 210, in makedirs   makedirs(head, mode, exist_ok) File "/opt/conda/lib/python3.6/os.py", line 220, in makedirs   mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/.torch' Looked up on this forum and it was suggested to add 'sudo' before the file name, but getting error "/bin/sh: 1: sudo: not found"
You can change model_zoo.load_url(model_urls['resnet50']) to model_zoo.load_url(model_urls['resnet50'], model_dir='~/.torch/') so the weights are downloaded to a directory your user can write to, instead of the default '/.torch'.
https://stackoverflow.com/questions/55073757/
Extract features from last hidden layer Pytorch Resnet18
I am implementing an image classifier using the Oxford Pet dataset with the pre-trained Resnet18 CNN. The dataset consists of 37 categories with ~200 images in each of them. Rather than using the final fc layer of the CNN as output to make predictions I want to use the CNN as a feature extractor to classify the pets. For each image i'd like to grab features from the last hidden layer (which should be before the 1000-dimensional output layer). My model is using Relu activation so I should grab the output just after the ReLU (so all values will be non-negative) Here is code (following the transfer learning tutorial on Pytorch): loading data normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) image_datasets = {"train": datasets.ImageFolder('images_new/train', transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize ])), "test": datasets.ImageFolder('images_new/test', transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize ])) } dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4, pin_memory=True) for x in ['train', 'test']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'test']} train_class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") train function def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'test']: if phase == 'train': scheduler.step() model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'test' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model Compute SGD cross-entropy loss model_ft = models.resnet18(pretrained=True) num_ftrs = model_ft.fc.in_features print("number of features: ", num_ftrs) model_ft.fc = nn.Linear(num_ftrs, len(train_class_names)) model_ft = model_ft.to(device) criterion = nn.CrossEntropyLoss() # Observe that all parameters are being optimized optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=24) Now how do I get a feature vector from the last hidden layer for each of my images? I know I have to freeze the previous layer so that gradient isn't computed on them but I'm having trouble extracting the feature vectors. My ultimate goal is to use those feature vectors to train a linear classifier such as Ridge or something like that. Thanks!
This is probably not the best idea, but you can do something like this:

# assuming model_ft is trained now
model_ft.fc_backup = model_ft.fc
model_ft.fc = nn.Sequential()  # empty sequential layer does nothing (pass-through)
# or model_ft.fc = nn.Identity()
# now you use your network as a feature extractor

I also checked fc is the right attribute to change, look at forward
https://stackoverflow.com/questions/55083642/
How to save and load random number generator state in Pytorch?
I am training a DL model in PyTorch and want to train my model in a deterministic way. As written in this official guide, I set the random seeds like this: np.random.seed(0) torch.manual_seed(0) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False Now, my training is long and I want to save, then later load, everything, including the RNGs. I use torch.save and load_state_dict for the model and the optimizer. How can the random number generator states be saved and loaded?
You can use torch.get_rng_state and torch.set_rng_state When calling torch.get_rng_state you will get your random number generator state as a torch.ByteTensor. You can then save this tensor somewhere in a file and later you can load and use torch.set_rng_state to set the random number generator state. When using numpy you can of course do the same there using: numpy.random.get_state and numpy.random.set_state
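A hedged sketch of folding the generator states into the same checkpoint dict used for the model and optimizer (model and optimizer are assumed to be the objects you are already saving):

import numpy as np
import torch

# saving
checkpoint = {
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'torch_rng_state': torch.get_rng_state(),
    'numpy_rng_state': np.random.get_state(),
}
torch.save(checkpoint, 'checkpoint.pth')

# loading
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
torch.set_rng_state(checkpoint['torch_rng_state'])
np.random.set_state(checkpoint['numpy_rng_state'])
# if CUDA is used, torch.cuda.get_rng_state() / torch.cuda.set_rng_state() can be handled the same way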
https://stackoverflow.com/questions/55097671/
Build Pytorch from source
I'm trying to install Pytorch from source on my MacOS (version 10.14.3) to use GPU. I have follow the documentation from this link. When I launch in my terminal the MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install I'm getting the following error in my terminal: [ 69%] Built target caffe2_observers make: *** [all] Error 2 Traceback (most recent call last): File "setup.py", line 710, in <module> build_deps() File "setup.py", line 282, in build_deps build_dir='build') File "/Users/Desktop/pytorch/tools/build_pytorch_libs.py", line 259, in build_caffe2 check_call(['make', '-j', str(max_jobs), 'install'], cwd=build_dir, env=my_env) File "/Users/anaconda3/lib/python3.6/subprocess.py", line 291, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['make', '-j', '4', 'install']' returned non-zero exit status 2. I tried to upgrade pip and reinstall anaconda and cuda without success. Here you can find the error just belong : [ 69%] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/net_observer_reporter_print.cc.o In file included from <built-in>:1: In file included from /usr/local/cuda/include/cuda_runtime.h:115: In file included from /usr/local/cuda/include/crt/common_functions.h:77: /Library/Developer/CommandLineTools/usr/include/c++/v1/string.h:61:15: fatal error: 'string.h' file not found #include_next <string.h> ^~~~~~~~~~ 1 error generated. CMake Error at caffe2_gpu_generated_THCReduceApplyUtils.cu.o.Release.cmake:219 (message): Error generating /Users/Desktop/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/./caffe2_gpu_generated_THCReduceApplyUtils.cu.o make[2]: *** [caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/THC/caffe2_gpu_generated_THCReduceApplyUtils.cu.o] Error 1 make[2]: *** Waiting for unfinished jobs.... Scanning dependencies of target torch_shm_manager 1 error generated.
I was encountering this problem because NVIDIA's CUDA toolkit is not compatible with macOS Mojave 10.14+, so a GPU build of PyTorch cannot be compiled there.
https://stackoverflow.com/questions/55107466/
Finding non-intersection of two pytorch tensors
Thanks everyone in advance for your help! What I'm trying to do in PyTorch is something like numpy's setdiff1d. For example given the below two tensors: t1 = torch.tensor([1, 9, 12, 5, 24]).to('cuda:0') t2 = torch.tensor([1, 24]).to('cuda:0') The expected output should be (sorted or unsorted): torch.tensor([9, 12, 5]) Ideally the operations are done on GPU and no back and forth between GPU and CPU. Much appreciated!
If you don't want to leave CUDA, a workaround could be: t1 = torch.tensor([1, 9, 12, 5, 24], device = 'cuda') t2 = torch.tensor([1, 24], device = 'cuda') indices = torch.ones_like(t1, dtype = torch.uint8, device = 'cuda') for elem in t2: indices = indices & (t1 != elem) difference = t1[indices]
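A loop-free alternative that also stays on the GPU is to broadcast the comparison (memory grows with len(t1) x len(t2), so it suits a small t2):

import torch

t1 = torch.tensor([1, 9, 12, 5, 24], device='cuda')
t2 = torch.tensor([1, 24], device='cuda')

# (len(t1), len(t2)) comparison matrix; keep elements of t1 that match nothing in t2
mask = (t1.unsqueeze(1) != t2.unsqueeze(0)).all(dim=1)
print(t1[mask])   # tensor([ 9, 12,  5], device='cuda:0')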
https://stackoverflow.com/questions/55110047/
Fastest way to read an image from huge uncompressed tar file in __getitem__ of PyTorch custom dataset
I have a huge dataset (2 million) of jpg images in one uncompressed TAR file. I also have a txt file where each line is the name of an image in the TAR file, in order. img_0000001.jpg img_0000002.jpg img_0000003.jpg ... and the names in the tar file are exactly the same. I searched a lot and found that the tarfile module is the best option, but when I tried to read images from the tar file by name, it took too long. The reason is that every time I call the getmember(name) method, it calls the getmembers() method, which scans the whole tar file, returns a namespace of all names, and then searches that namespace. If it helps, my dataset is a single 20GB tar file. I don't know whether it is a better idea to first extract everything and use the extracted folders in my CustomDataset, or to read directly from the archive. Here is the code I am using to read a single file from the tar file: with tarfile.open('data.tar') as tf: tarinfo = tf.getmember('img_000001.jpg') image = tf.extractfile(tarinfo) image = image.read() image = Image.open(io.BytesIO(image)) I used this code in the __getitem__ method of my CustomDataset class, which loops over all names in filelist.txt. Thanks for any advice.
tarfile seems to have caching for getmember; it reuses getmembers() results. But if you use the provided snippet in __getitem__, then for each item from the dataset the tar file is opened and read fully, one image file is extracted, then the tar file is closed and the associated info is lost. The simplest way to resolve this is probably to open the tar file in your dataset's __init__, like self.tf = tarfile.open('data.tar'), but then you need to remember to close it in the end.
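A rough sketch of that idea, adapted to the dataset in the question (member names and the text file layout are taken from the question; transforms and error handling are omitted):

import io
import tarfile
from PIL import Image
import torch.utils.data as data

class TarImageDataset(data.Dataset):
    def __init__(self, tar_path, names_file):
        self.tf = tarfile.open(tar_path)              # opened once; member index built once
        with open(names_file) as f:
            self.names = [line.strip() for line in f]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        member = self.tf.getmember(self.names[idx])   # fast after the first full scan
        data_bytes = self.tf.extractfile(member).read()
        return Image.open(io.BytesIO(data_bytes))

    def close(self):
        self.tf.close()

One caveat: a shared open file handle may not play well with DataLoader workers (num_workers > 0); a common workaround is to open the tar lazily inside each worker process.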
https://stackoverflow.com/questions/55116639/
Convolutional auto-encoder error - 'RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.FloatTensor) should be the same'
For below model I received error 'Expected stride to be a single value integer or list'. I used suggested answer from https://discuss.pytorch.org/t/expected-stride-to-be-a-single-integer-value-or-a-list/17612/2 and added img.unsqueeze_(0) I now receive error : RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.FloatTensor) should be the same For below code I three sample images and attempt to learn a representation of them using an auto-encoder : %reset -f import torch.utils.data as data_utils import warnings warnings.filterwarnings('ignore') import numpy as np import matplotlib.pyplot as plt import pandas as pd from matplotlib import pyplot as plt from sklearn import metrics import datetime from sklearn.preprocessing import MultiLabelBinarizer import seaborn as sns sns.set_style("darkgrid") from ast import literal_eval import numpy as np from sklearn.preprocessing import scale import seaborn as sns sns.set_style("darkgrid") import torch import torch import torchvision import torch.nn as nn from torch.autograd import Variable from os import listdir import cv2 import torch.nn.functional as F import numpy as np from numpy.polynomial.polynomial import polyfit import matplotlib.pyplot as plt number_channels = 3 %matplotlib inline x = np.arange(10) m = 1 b = 2 y = x * x plt.plot(x, y) plt.axis('off') plt.savefig('1-increasing.jpg') x = np.arange(10) m = 0.01 b = 2 y = x * x * x plt.plot(x, y) plt.axis('off') plt.savefig('2-increasing.jpg') x = np.arange(10) m = 0 b = 2 y = (m*x)+b plt.plot(x, y) plt.axis('off') plt.savefig('constant.jpg') batch_size_value = 2 train_image = [] train_image.append(cv2.imread('1-increasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('2-increasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('decreasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('constant.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) data_loader = data_utils.DataLoader(train_image, batch_size=batch_size_value, shuffle=False,drop_last=True) import torch import torchvision from torch import nn from torch.autograd import Variable from torch.utils.data import DataLoader from torchvision import transforms from torchvision.utils import save_image from torchvision.datasets import MNIST import os if not os.path.exists('./dc_img'): os.mkdir('./dc_img') def to_img(x): x = 0.5 * (x + 1) x = x.clamp(0, 1) x = x.view(x.size(0), 1, 28, 28) return x num_epochs = 100 # batch_size = 128 batch_size = 2 learning_rate = 1e-3 dataloader = data_loader class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() self.encoder = nn.Sequential( nn.Conv2d(3, 16, 3, stride=3, padding=1), # b, 16, 10, 10 nn.ReLU(True), nn.MaxPool2d(2, stride=2), # b, 16, 5, 5 nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3 nn.ReLU(True), nn.MaxPool3d(3, stride=1) # b, 8, 2, 2 ) self.decoder = nn.Sequential( nn.ConvTranspose3d(8, 16, 3, stride=2), # b, 16, 5, 5 nn.ReLU(True), nn.ConvTranspose3d(16, 8, 5, stride=3, padding=1), # b, 8, 15, 15 nn.ReLU(True), nn.ConvTranspose3d(8, 1, 2, stride=2, padding=1), # b, 1, 28, 28 nn.Tanh() ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x model = autoencoder() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-5) for epoch in range(num_epochs): for data in dataloader: img, _ = data img.unsqueeze_(0) # img.unsqueeze_(0) # print(img) # img.unsqueeze_(0) img = 
Variable(img).cuda() # ===================forward===================== output = model(img) loss = criterion(output, img) # ===================backward==================== optimizer.zero_grad() loss.backward() optimizer.step() # ===================log=================to_img======= print('epoch [{}/{}], loss:{:.4f}' .format(epoch+1, num_epochs, loss.data[0])) if epoch % 10 == 0: pic = to_img(output.cpu().data) save_image(pic, './dc_img/image_{}.png'.format(epoch)) torch.save(model.state_dict(), './conv_autoencoder.pth') But as stated earlier this results in error : 299 def forward(self, input): 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 303 RuntimeError: Input type (torch.cuda.ByteTensor) and weight type (torch.FloatTensor) should be the same The issue appears to be related to img.unsqueeze_(0) ? How to train the auto-encoder on these images ?
This is because your image tensor resides in GPU (that happens here img = Variable(img).cuda()), while your model is still in RAM. Please remember that you need to explicitly call cuda() to send a tensor (or an instance of nn.Module) to GPU. Just change this line: model = autoencoder() To this: model = autoencoder().cuda()
https://stackoverflow.com/questions/55120789/
Groups in Convolutional Neural Network / CNN
I came across this PyTorch example for depthwise separable convolutions using the groups parameter: class depthwise_separable_conv(nn.Module): def __init__(self, nin, nout): super(depthwise_separable_conv, self).__init__() self.depthwise = nn.Conv2d(nin, nin, kernel_size=3, padding=1, groups=nin) self.pointwise = nn.Conv2d(nin, nout, kernel_size=1) def forward(self, x): out = self.depthwise(x) out = self.pointwise(out) return out I haven't seen any usage of groups in CNNs before. The documentation is also a bit sparse as far as that is concerned: groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. So my questions are: What are groups in CNNs? In which cases do I need to use groups? (I guess this is more a general, not PyTorch specific.)
Perhaps you're looking at an older version of the docs. The 1.0.1 documentation for nn.Conv2d expands on this. Groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, at groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated. At groups=in_channels, each input channel is convolved with its own set of filters, of size floor(c_out / c_in). If you prefer a more mathematical description, start by thinking of a 1x1 convolution with groups=1 (the default). It is essentially a full matrix applied across all channels f at each (h, w) location. Setting groups to higher values turns this matrix into a diagonal block-sparse matrix with the number of blocks equal to groups. With groups=in_channels you get a diagonal matrix. Now, if the kernel is larger than 1x1, you retain the channel-wise block-sparsity as above, but allow for larger spatial kernels. I suggest rereading the groups=2 excerpt from the docs I quoted above; it describes exactly that scenario in yet another way, perhaps helpful for understanding. Hope this helps. Edit: Why would anybody want to use it? Either as a constraint (prior) for the model or as a performance-improvement technique; sometimes both. In the linked thread the idea is to replace an NxN, groups=1 2d conv with a sequence of NxN, groups=n_features -> 1x1, groups=1 convolutions. This mathematically results in a single convolution (since a convolution of a convolution is still a convolution), but makes the "product" convolution matrix more sparse, and thus reduces the number of parameters and the computational complexity. This seems to be a reasonable resource explaining this more in depth.
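A concrete, hedged illustration of the block-sparsity (channel counts chosen arbitrarily): the weight tensor shrinks by the group factor, which is where the parameter savings of depthwise convolutions come from.

import torch.nn as nn

full = nn.Conv2d(16, 32, kernel_size=3, groups=1)
grouped = nn.Conv2d(16, 32, kernel_size=3, groups=4)
depthwise = nn.Conv2d(16, 16, kernel_size=3, groups=16)

print(full.weight.shape)       # torch.Size([32, 16, 3, 3])
print(grouped.weight.shape)    # torch.Size([32, 4, 3, 3])  - each output sees 16/4 input channels
print(depthwise.weight.shape)  # torch.Size([16, 1, 3, 3])  - one filter set per input channel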
https://stackoverflow.com/questions/55123161/
Output and Broadcast shape mismatch in MNIST, torchvision
I am getting following error when using MNIST dataset in Torchvision RuntimeError: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28] Here is my code: import torch from torchvision import datasets, transforms transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) images, labels = next(iter(trainloader))
The error is due to color vs grayscale on the dataset, the dataset is grayscale. I fixed it by changing transform to transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)) ])
https://stackoverflow.com/questions/55124407/
Implementing fast dense feature extraction in PyTorch
I am trying to implement this paper in PyTorch Fast Dense Feature Extractor but I am having trouble converting the Torch implementation example they provide into PyTorch. My attempt thus far has the issue that when adding an additional dimension to the feature map then the convolutional weights don't match the feature shape. How is this managed in Torch (from their implementation it seem that Torch doesn't care about this, but PyTorch does). My code: https://gist.github.com/system123/c4b8ef3824f2230f181f8cfba84f0cfd Any other solutions to this problem would be great too. Basically, I have a feature extractor that converts a 128x128 patch into an embedding and I'd like to apply this in a dense manner across a larger image without using a for loop to evaluate the CNN on each location as that has a lot of duplicate computation.
It is your lucky day as I have recently uploaded a PyTorch and TF implementation of the paper Fast Dense Feature Extraction with CNNs with Pooling Layers. An approach to compute patch-based local feature descriptors efficiently in presence of pooling and striding layers for whole images at once. See https://github.com/erezposner/Fast_Dense_Feature_Extraction for details. It contains simple instructions that will explain how to use the Fast Dense Feature Extraction (FDFE) project. Good luck
https://stackoverflow.com/questions/55126493/
How can I make a neural network that has multiple outputs using pytorch?
Is my question even right? I looked everywhere but couldn't find a single thing. I'm pretty sure this was addressed when I learned keras, but how do I implement it in pytorch?
Multiple outputs can be trivially achieved with pytorch. Here is one such network.

import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.linear1 = nn.Linear(in_features=3, out_features=1)
        self.linear2 = nn.Linear(in_features=3, out_features=2)

    def forward(self, x):
        output1 = self.linear1(x)
        output2 = self.linear2(x)
        return output1, output2
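A hedged usage sketch: each head can get its own loss, and the losses are simply summed before the backward pass (the loss choices and targets here are illustrative, not part of the original question).

import torch
import torch.nn as nn

model = NeuralNetwork()
x = torch.randn(8, 3)                   # batch of 8 inputs with 3 features
target1 = torch.randn(8, 1)             # regression target for head 1
target2 = torch.randint(0, 2, (8,))     # class labels for head 2

out1, out2 = model(x)
loss = nn.MSELoss()(out1, target1) + nn.CrossEntropyLoss()(out2, target2)
loss.backward()                         # gradients flow into both heads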
https://stackoverflow.com/questions/55128814/
PyTorch grid_sample returns zero array
I want to sample rectangular patch from my image by affine_grid/grid_sample I created array which contains only 255 values canvas1 = np.zeros((128, 128), dtype=np.uint8) canvas1[:] = 255 Also i created grid theta = torch.FloatTensor([[ [11/2, 0, 63], [0, 11/2, 63], ]]) grid = F.affine_grid(theta, (1, 1, 11, 11)) Grid contains values like [[57.5000, 57.5000], [58.6000, 57.5000], [59.7000, 57.5000], [60.8000, 57.5000], [61.9000, 57.5000], [63.0000, 57.5000], [64.1000, 57.5000], [65.2000, 57.5000], [66.3000, 57.5000], [67.4000, 57.5000], [68.5000, 57.5000]], ............... After that i called grid_sample canvas1_torch = torch.FloatTensor(canvas1.astype(np.float32)) canvas1_torch = canvas1_torch.unsqueeze(0).unsqueeze(0) sampled = F.grid_sample(canvas1_torch, grid, mode="bilinear") Unfortunately sampled contains zero values (but canvas1_torch[0, 0, 63, 65]) is 255 What i am doing wrong?
Your grid values are outside [-1, 1]. According to https://pytorch.org/docs/stable/nn.html#torch.nn.functional.grid_sample, such values are handled as defined by padding_mode. Default padding_mode is 'zeros', what you probably want is "border": F.grid_sample(canvas1_torch, grid, mode="bilinear", padding_mode="border") returns all values 255.
https://stackoverflow.com/questions/55129589/
How to transfer the following tensorflow code into pytorch
I want to re-implement the word embedding here here is the original tensorflow code (version: 0.12.1) import tensorflow as tf class Network(object): def __init__( self, user_length,item_length, num_classes, user_vocab_size,item_vocab_size,fm_k,n_latent,user_num,item_num, embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0,l2_reg_V=0.0): # Skip the embedding pooled_outputs_u = [] for i, filter_size in enumerate(filter_sizes): with tf.name_scope("user_conv-maxpool-%s" % filter_size): # Convolution Layer filter_shape = [filter_size, embedding_size, 1, num_filters] W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W") b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b") conv = tf.nn.conv2d( self.embedded_users, W, strides=[1, 1, 1, 1], padding="VALID", name="conv") # Apply nonlinearity h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu") # Maxpooling over the outputs pooled = tf.nn.max_pool( h, ksize=[1, user_length - filter_size + 1, 1, 1], strides=[1, 1, 1, 1], padding='VALID', name="pool") pooled_outputs_u.append(pooled) num_filters_total = num_filters * len(filter_sizes) self.h_pool_u = tf.concat(3,pooled_outputs_u) self.h_pool_flat_u = tf.reshape(self.h_pool_u, [-1, num_filters_total]) with tf.name_scope("dropout"): self.h_drop_u = tf.nn.dropout(self.h_pool_flat_u, 1.0) self.h_drop_i= tf.nn.dropout(self.h_pool_flat_i, 1.0) with tf.name_scope("get_fea"): Wu = tf.get_variable( "Wu", shape=[num_filters_total, n_latent], initializer=tf.contrib.layers.xavier_initializer()) bu = tf.Variable(tf.constant(0.1, shape=[n_latent]), name="bu") self.u_fea=tf.matmul(self.h_drop_u, Wu) + bu #self.u_fea = tf.nn.dropout(self.u_fea,self.dropout_keep_prob) Wi = tf.get_variable( "Wi", shape=[num_filters_total, n_latent], initializer=tf.contrib.layers.xavier_initializer()) bi = tf.Variable(tf.constant(0.1, shape=[n_latent]), name="bi") self.i_fea = tf.matmul(self.h_drop_i, Wi) + bi #self.i_fea=tf.nn.dropout(self.i_fea,self.dropout_keep_prob) with tf.name_scope('fm'): self.z=tf.nn.relu(tf.concat(1,[self.u_fea,self.i_fea])) #self.z=tf.nn.dropout(self.z,self.dropout_keep_prob) WF1=tf.Variable( tf.random_uniform([n_latent*2, 1], -0.1, 0.1), name='fm1') Wf2=tf.Variable( tf.random_uniform([n_latent*2, fm_k], -0.1, 0.1), name='fm2') one=tf.matmul(self.z,WF1) inte1=tf.matmul(self.z,Wf2) inte2=tf.matmul(tf.square(self.z),tf.square(Wf2)) inter=(tf.square(inte1)-inte2)*0.5 inter=tf.nn.dropout(inter,self.dropout_keep_prob) inter=tf.reduce_sum(inter,1,keep_dims=True) print inter b=tf.Variable(tf.constant(0.1), name='bias') And here is the pytorch version 1.0 that I try class Network(torch.nn.Module): def __init__( self, user_length,item_length, num_classes, user_vocab_size,item_vocab_size,fm_k,n_latent,user_num,item_num, embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0,l2_reg_V=0.0): pooled_outputs_u = [] def forward(): I mainly have the trouble with the convolutional layer tranforming. Pytorch is much easy to use since we can skip lots of W and b definition. Is there some one can help me with the rest? Thanks
The PyTorch equivalent of the TensorFlow part of the code is below, explained with comments in the code itself; you have to import truncnorm from scipy. from scipy.stats import truncnorm # extra import, rough equivalent of tf.truncated_normal initialisation pooled_outputs_u = [] for i, filter_size in enumerate(filter_sizes): # W is just a tensor now that will act as the weight of the conv layer; PyTorch conv weights are (out_channels, in_channels, kH, kW) W = torch.tensor(truncnorm.rvs(-1, 1, size=[num_filters, 1, filter_size, embedding_size])).float() # bias initialized with 0.1 initial values b = torch.zeros([num_filters]) + 0.1 # conv layer with the same parameters as the tensorflow layer; note the argument is stride, not strides, and self.embedded_users must be (batch, 1, user_length, embedding_size) conv = torch.nn.functional.conv2d(self.embedded_users, W, bias=b, stride=1, padding=0) # can use torch.nn.Sequential to include it all in a single line but did it like this for better understanding h = torch.relu(conv) # max pooling over the time dimension, a kind of feature extraction pooled = torch.nn.functional.max_pool2d(h, kernel_size=(user_length - filter_size + 1, 1), stride=1, padding=0) pooled_outputs_u.append(pooled) num_filters_total = num_filters * len(filter_sizes) self.h_pool_u = torch.cat(pooled_outputs_u, dim=1) # channels are dim 1 in PyTorch's NCHW layout self.h_pool_flat_u = torch.reshape(self.h_pool_u, [-1, num_filters_total])
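If you prefer the more idiomatic PyTorch style where the framework manages W and b for you, the same block could be sketched as an nn.Module. This is only an illustration: it assumes the input embedded_users is shaped (batch, 1, user_length, embedding_size) and uses PyTorch's default weight initialisation rather than the truncated normal from the TF code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserConvBlock(nn.Module):
    def __init__(self, user_length, embedding_size, filter_sizes, num_filters):
        super(UserConvBlock, self).__init__()
        # one conv per filter size; each filter spans the full embedding width
        self.convs = nn.ModuleList([
            nn.Conv2d(1, num_filters, kernel_size=(fs, embedding_size))
            for fs in filter_sizes
        ])

    def forward(self, embedded_users):                        # (batch, 1, user_length, embedding_size)
        pooled = []
        for conv in self.convs:
            h = F.relu(conv(embedded_users))                  # (batch, num_filters, user_length - fs + 1, 1)
            p = F.max_pool2d(h, kernel_size=(h.size(2), 1))   # max over time -> (batch, num_filters, 1, 1)
            pooled.append(p)
        out = torch.cat(pooled, dim=1)                        # (batch, num_filters * len(filter_sizes), 1, 1)
        return out.view(out.size(0), -1)                      # equivalent of h_pool_flat_u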
https://stackoverflow.com/questions/55133700/
How to transfer the following Embedding code in tensorflow to pytorch?
I have an Embedding code in Tensorflow as follow self.input_u = tf.placeholder(tf.int32, [None, user_length], name="input_u") with tf.name_scope("user_embedding"): self.W1 = tf.Variable( tf.random_uniform([user_vocab_size, embedding_size], -1.0, 1.0), name="W") self.embedded_user = tf.nn.embedding_lookup(self.W1, self.input_u) self.embedded_users = tf.expand_dims(self.embedded_user, -1) And I want to re-write in pytorch, How to do that?
Method 1: Use an Embedding layer and freeze the weight to act as a lookup table import numpy as np import torch # user_vocab_size = 10 # embedding_size = 5 W1 = torch.FloatTensor(np.random.uniform(-1,1,size=(user_vocab_size,embedding_size))) embedded_user = torch.nn.Embedding(user_vocab_size,embedding_size, _weight=W1) embedded_user.weight.requires_grad = False # user_length = 5 # batch_size = 4 # input = torch.LongTensor(np.random.randint(0,user_vocab_size,(batch_size,user_length))) # embb = embedded_user(input) # embedded_users = torch.unsqueeze(embb, -1) Note that torch.unsqueeze is applied to the looked-up tensor embb (the output of the layer), not to the Embedding module itself. You can change the dimensions of the embb tensor to your needs using torch.unsqueeze. W1 : A tensor of uniform distribution between (-1,1) of size (user_vocab_size, embedding_size) embedded_user : Is an embedding layer which uses W1 as embedding vectors Method 2: Use the Embedding functional api input_u = torch.LongTensor(np.random.randint(0,user_vocab_size,(batch_size,user_length))) embedded_user = torch.nn.functional.embedding(input_u,W1) embedded_users = torch.unsqueeze(embedded_user, -1)
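As a side note, PyTorch 0.4+ also offers a one-liner for a frozen lookup table; a sketch of the same idea using the W1 and input_u defined above:
embedded_user = torch.nn.Embedding.from_pretrained(W1, freeze=True)
embb = embedded_user(input_u)                  # (batch_size, user_length, embedding_size)
embedded_users = torch.unsqueeze(embb, -1)     # (batch_size, user_length, embedding_size, 1)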
https://stackoverflow.com/questions/55133931/
Output from LSTM not changing for different inputs
I have the an LSTM implemented in PyTorch as below. import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable class LSTM(nn.Module): """ Defines an LSTM. """ def __init__(self, input_dim, hidden_dim, output_dim, num_layers): super(LSTM, self).__init__() self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True) def forward(self, input_data): lstm_out_pre, _ = self.lstm(input_data) return lstm_out_pre model = LSTM(input_dim=2, hidden_dim=2, output_dim=1, num_layers=8) random_data1 = torch.Tensor(np.random.standard_normal(size=(1, 5, 2))) random_data2 = torch.Tensor(np.random.standard_normal(size=(1, 5, 2))) out1 = model(random_data1).detach().numpy() out2 = model(random_data2).detach().numpy() print(out1) print(out2) I am simply creating an LSTM network and passing two random inputs into it. The outputs does not make sense because no matter what random_data1 and random_data2 is, out1 and out2 are always the same. This does not make any sense to me as random inputs multiplied with random weights should give different outputs. This does not seem to be the case if I use less number of hidden layers. With num_layers=2, this effect seems to be nil. And as you increase it, out1 and out2 keeps on getting closer. This does not make sense to me because with more layers of the LSTM stacked on top of each other, we are multiplying the input with more number of random weights which should magnify the differences in the input and give a very different output. Can someone please explain this behavior? Is there something wrong with my implementation? In one particular run, random_data1 is tensor([[[-2.1247, -0.1857], [ 0.0633, -0.1089], [-0.6460, -0.1079], [-0.2451, 0.9908], [ 0.4027, 0.3619]]]) random_data2 is tensor([[[-0.9725, 1.2400], [-0.4309, -0.7264], [ 0.5053, -0.9404], [-0.6050, 0.9021], [ 1.4355, 0.5596]]]) out1 is [[[0.12221643 0.11449362] [0.18342148 0.1620608 ] [0.2154751 0.18075559] [0.23373817 0.18768947] [0.24482158 0.18987371]]] out2 is [[[0.12221643 0.11449362] [0.18342148 0.1620608 ] [0.2154751 0.18075559] [0.23373817 0.18768945] [0.24482158 0.18987371]]] EDIT: I am running on the following configurations - PyTorch - 1.0.1.post2 Python - 3.6.8 with GCC 7.3.0 OS - Pop!_OS 18.04 (Ubuntu 18.04, more-or-less) CUDA - 9.1.85 Nvidia driver - 410.78
Initial weights for LSTM are small numbers close to 0, and by adding more layers the initial weights and biases get smaller: all the weights and biases are initialized from -sqrt(k) to sqrt(k), where k = 1/hidden_size (https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM). By adding more layers you effectively multiply the input by many small numbers, so the effect of the input is basically 0 and only the biases in the later layers matter. If you try the LSTM with bias=False, you will see that the output gets closer and closer to 0 as you add more layers.
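A quick sketch to observe this effect with the dimensions from the question — the maximum difference between the outputs for two random inputs shrinks as the number of layers grows:
import torch
import torch.nn as nn

for num_layers in (1, 2, 4, 8):
    lstm = nn.LSTM(2, 2, num_layers, batch_first=True)
    out1, _ = lstm(torch.randn(1, 5, 2))
    out2, _ = lstm(torch.randn(1, 5, 2))
    print(num_layers, (out1 - out2).abs().max().item())   # the difference shrinks as num_layers grows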
https://stackoverflow.com/questions/55134920/
index selection in case of conflict in pytorch Argmax
I have been trying to learn tensor operations and this one has thrown me for a loop. Let us say I have one tensor t: t = torch.tensor([ [1,0,0,2], [0,3,3,0], [4,0,0,5] ], dtype = torch.float32) Now this is a rank 2 tensor and we can apply argmax for each rank/dimension. let us say we apply it for dim = 1 t.max(dim = 1) (tensor([2., 3., 5.]), tensor([3, 2, 3])) Now we can see that the result is as expected the tensor along dim =1 has 2,3,and 5 as the max elements. But there is a conflict on 3. There are two values that are exactly similar. How is it resolved? is it arbitrarily chosen? Is there an order for selecting like L-R, higher index value? I'd appreciate any insights into how this is resolved!
That is a good question I stumbled over a couple of times myself. The simplest answer is that there are no guarantees whatsoever that torch.argmax (or torch.max(x, dim=k), which also returns indices when dim is specified) will return the same index consistently. Instead, it will return any valid index to the argmax value, possibly randomly. As this thread in the official forum discusses, this is considered to be desired behavior. (I know that there is another thread I read a while ago that makes this more explicit, but I cannot find it again). Having said that, as this behavior was unacceptable to my usecase, I wrote the following functions that will find the left and rightmost indices (be aware that condition is a function-object you pass in): def __consistent_args(input, condition, indices): assert len(input.shape) == 2, 'only works for batch x dim tensors along the dim axis' mask = condition(input).float() * indices.unsqueeze(0).expand_as(input) return torch.argmax(mask, dim=1) def consistent_find_leftmost(input, condition): indices = torch.arange(input.size(1), 0, -1, dtype=torch.float, device=input.device) return __consistent_args(input, condition, indices) def consistent_find_rightmost(input, condition): indices = torch.arange(0, input.size(1), 1, dtype=torch.float, device=input.device) return __consistent_args(input, condition, indices) # one example: consistent_find_leftmost(torch.arange(10).unsqueeze(0), lambda x: x>5) # will return: # tensor([6]) Hope they will help! (Oh, and please let me know if you have a better implementation that does the same)
https://stackoverflow.com/questions/55139801/
Convolutional encoder error - 'RuntimeError: input and target shapes do not match'
In below code three images are created, saved and a convolutional auto-encoder attempts to encode them to a lower dimensional representation. %reset -f import torch.utils.data as data_utils import warnings warnings.filterwarnings('ignore') import numpy as np import matplotlib.pyplot as plt import pandas as pd from matplotlib import pyplot as plt from sklearn import metrics import datetime from sklearn.preprocessing import MultiLabelBinarizer import seaborn as sns sns.set_style("darkgrid") from ast import literal_eval import numpy as np from sklearn.preprocessing import scale import seaborn as sns sns.set_style("darkgrid") import torch import torch import torchvision import torch.nn as nn from torch.autograd import Variable from os import listdir import cv2 import torch.nn.functional as F import numpy as np from numpy.polynomial.polynomial import polyfit import matplotlib.pyplot as plt number_channels = 3 %matplotlib inline x = np.arange(10) m = 1 b = 2 y = x * x plt.plot(x, y) plt.axis('off') plt.savefig('1-increasing.jpg') x = np.arange(10) m = 0.01 b = 2 y = x * x * x plt.plot(x, y) plt.axis('off') plt.savefig('2-increasing.jpg') x = np.arange(10) m = 0 b = 2 y = (m*x)+b plt.plot(x, y) plt.axis('off') plt.savefig('constant.jpg') batch_size_value = 2 train_image = [] train_image.append(cv2.imread('1-increasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('2-increasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('decreasing.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) train_image.append(cv2.imread('constant.jpg', cv2.IMREAD_UNCHANGED).reshape(3, 288, 432)) data_loader = data_utils.DataLoader(train_image, batch_size=batch_size_value, shuffle=False,drop_last=True) import torch import torchvision from torch import nn from torch.autograd import Variable from torch.utils.data import DataLoader from torchvision import transforms from torchvision.utils import save_image from torchvision.datasets import MNIST import os def to_img(x): x = 0.5 * (x + 1) x = x.clamp(0, 1) x = x.view(x.size(0), 1, 28, 28) return x num_epochs = 100 # batch_size = 128 batch_size = 2 learning_rate = 1e-3 dataloader = data_loader class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) self.encoder = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=3, padding=1), # b, 16, 10, 10 nn.ReLU(True), nn.MaxPool2d(2, stride=2), # b, 16, 5, 5 nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3 nn.ReLU(True), nn.MaxPool2d(3, stride=1) # b, 8, 2, 2 ) self.decoder = nn.Sequential( nn.ConvTranspose2d(8, 16, 2, stride=1), # b, 16, 5, 5 nn.ReLU(True), nn.ConvTranspose2d(16, 8, 3, stride=3, padding=1), # b, 8, 15, 15 nn.ReLU(True), nn.ConvTranspose2d(8, 3, 2, stride=2, padding=1), # b, 1, 28, 28 nn.Tanh() ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x model = autoencoder().cuda().double() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-5) for epoch in range(num_epochs): for data in dataloader: img, _ = data img = img.double() img = Variable(img).cuda() img = img.unsqueeze_(0) # ===================forward===================== output = model(img) loss = criterion(output, img) # ===================backward==================== optimizer.zero_grad() loss.backward() optimizer.step() # 
===================log======================== print('epoch [{}/{}], loss:{:.4f}' .format(epoch+1, num_epochs, loss.data[0])) torch.save(model.state_dict(), './conv_autoencoder.pth') But error is returned : RuntimeError: input and target shapes do not match: input [1 x 3 x 132 x 204], target [1 x 3 x 288 x 432] at /pytorch/aten/src/THCUNN/generic/MSECriterion.cu:15 The shape of the images are (3, 288, 432) . How to change the configuration of the model to allow [1 x 3 x 288 x 432] instead of [1 x 3 x 132 x 204] ? Update: I changed nn.ConvTranspose2d(8, 3, 2, stride=2, padding=1) to : nn.ConvTranspose2d(8, 3, 3, stride=4, padding=2) Which results in closer dimensional output but not exact so error is now : RuntimeError: input and target shapes do not match: input [1 x 3 x 263 x 407], target [1 x 3 x 288 x 432] at /pytorch/aten/src/THCUNN/generic/MSECriterion.cu:12 How should the output decoder dimensions be calculated in order to produce the correct dimension ?
There are a couple of ways, Here is the one solution: class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) self.encoder = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=3, padding=1), # b, 16, 10, 10 nn.ReLU(True), nn.MaxPool2d(2, stride=2), # b, 16, 5, 5 nn.Conv2d(16, 8, 3, stride=2, padding=1), # b, 8, 3, 3 nn.ReLU(True), nn.MaxPool2d(3, stride=1) # b, 8, 2, 2 ) self.decoder = nn.Sequential( nn.ConvTranspose2d(8, 16, 2, stride=1), # b, 16, 5, 5 nn.ReLU(True), nn.ConvTranspose2d(16, 8, 3, stride=3, padding=1), # b, 8, 15, 15 nn.ReLU(True), nn.ConvTranspose2d(8, 3, 2, stride=2, padding=1), # b, 1, 28, 28 nn.ReLU(True), nn.ConvTranspose2d(3, 3, 2, stride=2, padding=1), # b, 1, 28, 28 nn.ReLU(True), nn.ConvTranspose2d(3, 3, 25, stride=1), nn.ReLU(True), nn.ConvTranspose2d(3, 3, 3, stride=1), nn.Tanh() ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x Here is the formula; N --> Input Size, F --> Filter Size, stride-> Stride Size, pdg-> Padding size ConvTranspose2d; OutputSize = N*stride + F - stride - pdg*2 Conv2d; OutputSize = (N - F)/stride + 1 + pdg*2/stride [e.g. 32/3=10 it ignores after the comma]
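The two formulas can be wrapped in small helpers to trace the shapes layer by layer; tracing the original encoder/decoder with them reproduces the 132 x 204 output from the error message, which shows exactly where more upsampling is needed:
def conv_out(n, f, stride=1, pad=0):      # Conv2d / MaxPool2d
    return (n - f + 2 * pad) // stride + 1

def convT_out(n, f, stride=1, pad=0):     # ConvTranspose2d
    return (n - 1) * stride - 2 * pad + f

h = 288
h = conv_out(h, 3, 3, 1)    # 96
h = conv_out(h, 2, 2)       # 48
h = conv_out(h, 3, 2, 1)    # 24
h = conv_out(h, 3, 1)       # 22  (encoder output height)
h = convT_out(h, 2, 1)      # 23
h = convT_out(h, 3, 3, 1)   # 67
h = convT_out(h, 2, 2, 1)   # 132 -> still short of 288, hence the extra decoder layers above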
https://stackoverflow.com/questions/55140554/
How to do backprop in Pytorch (autograd.backward(loss) vs loss.backward()) and where to set requires_grad=True?
I have been using Pytorch for a while now. One question I had regarding backprop is as follows: let's say we have a loss function for a neural network. For doing backprop, I have seen two different versions. One like: optimizer.zero_grad() autograd.backward(loss) optimizer.step() and the other one like: optimizer.zero_grad() loss.backward() optimizer.step() Which one should I use? Is there any difference between these two versions? As a last question, do we need to specify the requires_grad=True for the parameters of every layer of our network to make sure their gradients is being computed in the backprop? For example do I need to specify it for the layer nn.Linear(hidden_size, output_size) inside my network or it is automatically being set to True by default?
So just a quick answer: both autograd.backward(loss) and loss.backward() are actually the same. Just look at the implementation of tensor.backward() (as your loss is just a tensor): it simply calls torch.autograd.backward(loss) under the hood. As to your second question: whenever you use a prefabricated layer such as nn.Linear, or convolutions, or RNNs, etc., all of them rely on nn.Parameter attributes to store the parameter values. And, as the docs say, these default to requires_grad=True. Update to a follow-up in the comments: What happens to tensors in a backward pass depends on whether the variable is on the computation path between the "output" and a leaf variable, or not. If not, it is not entirely clear what backprop should compute - after all, the entire purpose is to compute gradients for parameters, i.e., leaf variables. If the tensor is on that path, all gradients will generally be automatically computed. For a more thorough discussion, see this question and this tutorial from the docs.
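A tiny sketch of the first point — both calls populate the same gradients:
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()
torch.autograd.backward(loss)   # exactly the same as loss.backward()
print(x.grad)                   # equals 2 * x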
https://stackoverflow.com/questions/55144904/
Implementation of VGG16 on Pytorch giving size mismatch error
Snippet of my code implementation on PyTorch is: model = models.vgg16(pretrained = False) classifier = nn.Sequential( nn.Linear(25088, 128), nn.ReLU(True), nn.Dropout(), nn.Linear(128, 128), nn.ReLU(True), nn.Dropout(), nn.Linear(128, 20) ) model.classifier = classifier I'm feeding images of input size (60x60x3) and batch_size = 30. When I run the code from Linux (Ubuntu) Terminal (with PyTorch Version: 1.0.0, Torchvision Version: 0.2.1) it gives me, the following error message: RuntimeError: size mismatch, m1: [30 x 512], m2: [25088 x 128] While, when I run it from Spyder (Anaconda) on Windows (with PyTorch Version: 1.0.1, Torchvision Version: 0.2.2), it runs perfectly. Am I missing something or is this because of some version mismatch in Pytorch and Torchvision? Both, I'm running on Python 3.6. Please suggest. [UPDATE: Mistakenly interchanged the version numbers for the error-case and error-free case. Thanks @Manoj Mohan for pointing it out]
It's probably the other way around. Things run perfectly on torchvision 0.2.2 and fails on torch vision 0.2.1. This change of using AdaptiveAvgPool2d that went into 0.2.2 is why you don't see the error. https://github.com/pytorch/vision/commit/83b2dfb2ebcd1b0694d46e3006ca96183c303706 >>> import torch >>> model = models.vgg16(pretrained = False) >>> x = torch.randn(1,3,60,60) # random image >>> feat = model.features(x) >>> flat_feat = feat.view(feat.size(0), -1) # flatten >>> flat_feat.shape torch.Size([1, 512]) >>> model.classifier(flat_feat) RuntimeError: size mismatch, m1: [1 x 512], m2: [25088 x 4096] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:940 You see the error of size mismatch. After, adaptive average pooling, things work fine. >>> import torch.nn.functional as F >>> avg = F.adaptive_avg_pool2d(feat, (7,7)) >>> avg = avg.view(avg.size(0), -1) >>> output = model.classifier(avg) >>> output.shape torch.Size([1, 1000])
https://stackoverflow.com/questions/55145561/
Caffe2: Load ONNX model, and inference single threaded on multi-core host / docker
I'm having trouble running inference on a model in docker when the host has several cores. The model is exported via PyTorch 1.0 ONNX exporter: torch.onnx.export(pytorch_net, dummyseq, ONNX_MODEL_PATH) Starting the model server (wrapped in Flask) with a single core yields acceptable performance (cpuset pins the process to specific cpus) docker run --rm -p 8081:8080 --cpus 0.5 --cpuset-cpus 0 my_container response from ab -c 1 -n 1000 http://0.0.0.0:8081/predict\?itemids\=5,100 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 5 90% 7 95% 46 98% 48 99% 49 But pinning it to four cores gives completely different stats for the same ab-call docker run --rm -p 8081:8080 --cpus 0.5 --cpuset-cpus 0,1,2,3 my_container Percentage of the requests served within a certain time (ms) 50% 9 66% 12 75% 14 80% 18 90% 62 95% 66 98% 69 99% 69 100% 77 (longest request) Model inference is done like this, and except this issue it seems to work as expected. (This runs in a completely separate environment from the model export of course) from caffe2.python import workspace from caffe2.python.onnx.backend import Caffe2Backend as c2 from onnx import ModelProto class Model: def __init__(self): self.predictor = create_caffe2_predictor(path) @staticmethod def create_caffe2_predictor(onnx_file_path): with open(onnx_file_path, 'rb') as onnx_model: onnx_model_proto = ModelProto() onnx_model_proto.ParseFromString(onnx_model.read()) init_net, predict_net = c2.onnx_graph_to_caffe2_net(onnx_model_proto) predictor = workspace.Predictor(init_net, predict_net) return predictor def predict(self, numpy_array): return self.predictor.run({'0': numpy_array}) ** wrapper flask app which calls Model.predict() on calls to /predict ** OMP_NUM_THREADS=1 is also present in the container environment, which had some effect, but it is not the end issue. The benchmark stats you're seeing here are run on a local machine with 8 hyperthreads, so I shouldn't be saturating my machine and affect the test. These results also show up in my kubernetes environment, and I'm getting a large amount of CFS (Completely Fair Scheduler) throttling there. I'm running in a kubernetes environment, so there's no way for me to control how many CPUs the host exposes, and doing some sort of pinning there seems a bit hacky as well. Is there any way to pin caffe2 model inference to a single processor? Am I doing something obviously wrong here? Is the caffe2.Predictor object not suited to this task? Any help appreciated. EDIT: I've added the simplest possible reproducable example I can think of here, with a docker-container and run-script included: https://github.com/NegatioN/Caffe2Struggles
This is not a direct answer to the question, but if your goal is to serve PyTorch models (and only PyTorch models, as mine is now) in production, simply using PyTorch Tracing seems to be the better choice. You can then load it directly into a C++ frontend similarly to what you would do through Caffe2, but PyTorch tracing seems more well maintained. From what I can see there are no speed slowdowns, but it is a whole lot easier to configure. An example of this to get good performance on a single-core container is to run with OMP_NUM_THREADS=1 as before, and export the model as follows: from torch import jit ### Create a model model.eval() traced = jit.trace(model, torch.from_numpy(an_array_with_input_size)) traced.save("traced.pt") And then simply run the model in production in pure C++ following the above guide, or through the Python interface as such: from torch import jit model = jit.load("traced.pt") output = model(some_input)
https://stackoverflow.com/questions/55147193/
Convert a torch t7 model to keras h5
How can we convert a t7 model to keras' h5 ? I am trying to do so for c3d-sports1m-kinetics.t7 that you can find here https://github.com/kenshohara/3D-ResNets/releases The least I can ask for is a way to load the t7 model to python (pytorch) and then extract its weights, but I couldn't do it with the load_lua() function... I get an error while trying to do it using this function https://github.com/pytorch/pytorch/blob/c6529f4851bb8ac95f05d3f17dea178a0367aaee/torch/utils/serialization/read_lua_file.py The error that I get is the following : Traceback (most recent call last): File "convert_t7_to_hdf5.py", line 574, in <module> a = load_lua("model.t7") File "convert_t7_to_hdf5.py", line 571, in load_lua return reader.read() File "convert_t7_to_hdf5.py", line 542, in read typeidx = self.read_int() File "convert_t7_to_hdf5.py", line 440, in read_int return self._read('i') File "convert_t7_to_hdf5.py", line 431, in _read result = struct.unpack(fmt, self.f.read(sz)) ValueError: read of closed file
As mentioned in this link, https://github.com/pytorch/pytorch/issues/15307#issuecomment-448086741 with torchfile package, the load was successful. You can dump the contents of model to a file and then understand the contents. Each layer information is stored as a dictionary. Knowing the model architecture would make it easier to parse the contents. >>> import torchfile >>> model = torchfile.load('c3d-sports1m-kinetics.t7') >>> module = model.modules[0].modules[0] >>> module.name b'conv1a' >>> module['weight'].shape (64, 3, 3, 3, 3) >>> module['bias'].shape (64,)
https://stackoverflow.com/questions/55147282/
Invalid combination of arguments - eq()
I'm using a code shared here to test a CNN image classifier. When I call the test function, I got this error on line 155: test_acc += torch.sum(prediction == labels.data) TypeError: eq() received an invalid combination of arguments - got (numpy.ndarray), but expected one of: * (Tensor other) didn't match because some of the arguments have invalid types: ([31;1mnumpy.ndarray[0m) * (Number other) didn't match because some of the arguments have invalid types: ([31;1mnumpy.ndarray[0m) Fragment of the test function: def test(): model.eval() test_acc = 0.0 for i, (images, labels) in enumerate(test_loader): if cuda_avail: images = Variable(images.cuda()) labels = Variable(labels.cuda()) #Predict classes using images from the test set outputs = model(images) _,prediction = torch.max(outputs.data, 1) prediction = prediction.cpu().numpy() test_acc += torch.sum(prediction == labels.data) #line 155 #Compute the average acc and loss over all 10000 test images test_acc = test_acc / 10000 return test_acc After a quick search I see that the error is probably related to the comparison between the prediction and labels, as seem in this SO question. Any idea on how to fix this?
Why do you have .numpy() here: prediction = prediction.cpu().numpy()? That way you convert the PyTorch tensor to a NumPy array, making it an incompatible type to compare with labels.data. Removing the .numpy() part should fix the issue.
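For illustration, either of these variants keeps the types compatible (assuming the rest of the loop from the question, and that NumPy is imported as np):
# keep everything as tensors (preferred)
_, prediction = torch.max(outputs.data, 1)
test_acc += torch.sum(prediction == labels.data).item()

# or, if you really want NumPy arrays, convert both sides
test_acc += np.sum(prediction.cpu().numpy() == labels.cpu().numpy())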
https://stackoverflow.com/questions/55147511/
How to pass parameters to forward function of my torch nn.module from skorch.NeuralNetClassifier.fit()
I have extended nn.Module to implement my network whose forward function is like this ... def forward(self, X, **kwargs): batch_size, seq_len = X.size() length = kwargs['length'] embedded = self.embedding(X) # [batch_size, seq_len, embedding_dim] if self.use_padding: if length is None: raise AttributeError("Length must be a tensor when using padding") embedded = nn.utils.rnn.pack_padded_sequence(embedded, length, batch_first=True) #print("Size of Embedded packed", embedded[0].size()) hidden, cell = self.init_hidden(batch_size) if self.rnn_unit == 'rnn': out, _ = self.rnn(embedded, hidden) elif self.rnn_unit == 'lstm': out, (hidden, cell) = self.rnn(embedded, (hidden, cell)) # unpack if padding was used if self.use_padding: out, _ = nn.utils.rnn.pad_packed_sequence(out, batch_first = True) I initialized a skorch NeuralNetClassifier like this, net = NeuralNetClassifier( model, criterion=nn.CrossEntropyLoss, optimizer=Adam, max_epochs=8, lr=0.01, batch_size=32 ) Now if I call net.fit(X, y, length=X_len) it throws an error TypeError: __call__() got an unexpected keyword argument 'length' According to the documentation fit function expects a fit_params dictionary, **fit_params : dict Additional parameters passed to the ``forward`` method of the module and to the ``self.train_split`` call. and the source code always send my parameters to train_split where obviously my keyword argument would not be recognized. Is there any way around to pass the arguments to my forward function?
The fit_params parameter is intended for passing information that is relevant to data splits and the model alike, like split groups. In your case, you are passing additional data to the module via fit_params which is not what it is intended for. In fact, you could easily run into trouble doing this if you, for example, enable batch shuffling on the train data loader since then your lengths and your data are misaligned. The best way to do this is already described in the answer to your question on the issue tracker: X_dict = {'X': X, 'length': X_len} net.fit(X_dict, y) Since skorch supports dicts you can simply add the length's to your input dict and have it both passed to the module, nicely batched and passed through the same data loader. In your module you can then access it via the parameters in forward: def forward(self, X, length): return ... Further documentation of this behaviour can be found in the docs.
https://stackoverflow.com/questions/55156877/
U-net with pre-trained backbones: where to make skip connections?
I'm trying to implement U-Net with PyTorch, using pre-trained networks in the encoder path. The original U-Net paper trained a network from scratch. Are there any resources or principles on where skip connections should be placed, when a pre-trained backbone is used instead? I already found some examples (e.g. this repo), but without any justification for the feature selection.
In the original U-Net paper, the features right before the Max-Pool layers are used for the skip connections. The logic is exactly the same with pre-trained backbones: at each spatial resolution, the deepest feature layer is selected. Thanks to qubvel on GitHub for pointing this out in an issue.
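To make that concrete, here is a small sketch (assuming a torchvision ResNet-34 as the backbone) that grabs the deepest feature map at each spatial resolution with forward hooks — these are the tensors you would feed into the decoder's skip connections:
import torch
import torchvision

backbone = torchvision.models.resnet34(pretrained=False)      # use pretrained=True for transfer learning
skip_names = ['relu', 'layer1', 'layer2', 'layer3', 'layer4']  # deepest features per resolution
features = {}
for name in skip_names:
    getattr(backbone, name).register_forward_hook(
        lambda module, inp, out, name=name: features.__setitem__(name, out))

backbone(torch.randn(1, 3, 224, 224))
for name in skip_names:
    print(name, tuple(features[name].shape))
# relu   (1, 64, 112, 112)
# layer1 (1, 64, 56, 56)
# layer2 (1, 128, 28, 28)
# layer3 (1, 256, 14, 14)
# layer4 (1, 512, 7, 7)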
https://stackoverflow.com/questions/55165091/
Display examples of augmented images in PyTorch
I want to display some samples of augmented training images. My transform includes the standard ImageNet transforms.Normalize like this: train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) However, because of the Normalise, the images display in weird colours. This answer says I'd need access to the original image, which is difficult when the transforms are applied at load time: image_datasets['train'] = datasets.ImageFolder(train_dir, transform=train_transforms) How would I go about displaying a few sample augmented images in their usual colours while using the normalised ones for calculation?
To answer my own question, I came up with the following: # Undo transforms.Normalize def denormalise(image): image = image.numpy().transpose(1, 2, 0) # PIL images have channel last mean = [0.485, 0.456, 0.406] stdd = [0.229, 0.224, 0.225] image = (image * stdd + mean).clip(0, 1) return image example_rows = 2 example_cols = 5 sampler = torch.utils.data.RandomSampler(image_datasets['train'], replacement=True, num_samples=example_rows * example_cols) example_loader = torch.utils.data.DataLoader(image_datasets['train'], batch_size=example_rows * example_cols, sampler=sampler) # Get a batch of images and labels images, labels = next(iter(example_loader)) plt.rcParams['figure.dpi'] = 120 # Increase size of pyplot plots # Show a grid of example images fig, axes = plt.subplots(example_rows, example_cols, figsize=(9, 5)) # sharex=True, sharey=True) axes = axes.flatten() for ax, image, label in zip(axes, images, labels): ax.imshow(denormalise(image)) ax.set_axis_off() ax.set_title(class_names[label], fontsize=7) fig.subplots_adjust(wspace=0.02, hspace=0) fig.suptitle('Augmented training set images', fontsize=20) plt.show() (The sampler is wrapped in a DataLoader so that next(iter(...)) yields a batch of images and labels rather than raw indices; RandomSampler needs replacement=True when num_samples is given.) This is based on PyTorch's Transfer Learning Tutorial's code but displays the title above each image and generally looks much nicer.
https://stackoverflow.com/questions/55179282/
LSTMCell parameters is not shown Pytorch
I have the following code: class myLSTM(nn.Module): def __init__(self, input_size, output_size, hidden_size, num_layers): super(myLSTM, self).__init__() self.input_size = input_size + 1 self.output_size = output_size self.hidden_size = hidden_size self.num_layers = num_layers self.layers = [] new_input_size = self.input_size for i in xrange(num_layers): self.layers.append(LSTMCell(new_input_size, hidden_size)) new_input_size = hidden_size self.linear = nn.Linear(hidden_size, output_size) self.softmax = nn.Softmax() def forwardLayers(self, input, hns, cns, layers): new_hns = [] new_cns = [] (hn, cn) = layers[0](input, (hns[0], cns[0])) new_hns.append(hn) new_cns.append(cn) for i in xrange(1, len(layers)): (hn, cn) = layers[i](hn, (hns[i], cns[i])) new_hns.append(hn) new_cns.append(cn) return hn, (new_hns, new_cns) def forward(self, input, hx): actions = [] hns, cns = hx action = torch.Tensor([[0.0]]) for i in range(len(input)): new_input = input[i] new_input = new_input.view(1, -1) output, (hns, cns) = self.forwardLayers(new_input, hns, cns, self.layers) output = self.softmax(self.linear(output)) return output Now when I call the following code to see the parameters of my network: for name, param in myLSTM_object.named_parameters(): if param.requires_grad: print name, param.data What I get is: linear.weight tensor([[ 0.5042, -0.6984], [ 0.0721, -0.4060]]) linear.bias tensor([ 0.6968, -0.4649]) So, it completely misses the parameters of LSTMCell. Does this mean the parameters of LSTMCell is not trained. What should I do to see LSTMCell parameters?
This is to be expected - storing modules in list, dict, set or other python containers does not register them with the module owning said list, etc. To make your code work, use nn.ModuleList instead. It's as simple as modifying your __init__ code to use layers = [] new_input_size = self.input_size for i in xrange(num_layers): layers.append(LSTMCell(new_input_size, hidden_size)) new_input_size = hidden_size self.layers = nn.ModuleList(layers)
https://stackoverflow.com/questions/55186290/
Understanding PyTorch training batches
Reading https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel & https://discuss.pytorch.org/t/how-does-enumerate-trainloader-0-work/14410 I'm trying to understand how training epochs behave in PyTorch. Take this outer and inner loop : for epoch in range(num_epochs): for i1,i2 in enumerate(training_loader): Is this a correct interpretation : ? For each invocation of the outer loop/epoch the entire training set, in above example training_loader is iterated per batch. This means the model does not process one instance per training cycle. Per training cycle ( for epoch in range(num_epochs): ) the entire training set is processed in chunks/batches where the batch size is determined when creating training_loader
Yes, that interpretation is correct: torch.utils.data.DataLoader returns an iterable that iterates over the dataset. Therefore, the following - training_loader = torch.utils.data.DataLoader(*args) for i1,i2 in enumerate(training_loader): #process runs once over the entire dataset, in batches.
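A tiny runnable illustration of that:
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10.).unsqueeze(1), torch.arange(10))
loader = DataLoader(dataset, batch_size=4)

for epoch in range(2):                    # each epoch passes over the whole dataset once
    for i, (x, y) in enumerate(loader):   # ... in chunks of batch_size
        print(epoch, i, x.shape[0])       # batch sizes 4, 4, 2 in every epoch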
https://stackoverflow.com/questions/55188624/
Understanding when to use python list in Pytorch
Basically as this thread discusses here, you cannot use python list to wrap your sub-modules (for example your layers); otherwise, Pytorch is not going to update the parameters of the sub-modules inside the list. Instead you should use nn.ModuleList to wrap your sub-modules to make sure their parameters are going to be updated. Now I have also seen codes like following where the author uses python list to calculate the loss and then do loss.backward() to do the update (in reinforce algorithm of RL). Here is the code: policy_loss = [] for log_prob in self.controller.log_probability_slected_action_list: policy_loss.append(- log_prob * (average_reward - b)) self.optimizer.zero_grad() final_policy_loss = (torch.cat(policy_loss).sum()) * gamma final_policy_loss.backward() self.optimizer.step() Why using the list in this format works for updating the parameters of modules but the first case does not work? I am very confused now. If I change in the previous code policy_loss = nn.ModuleList([]), it throws an exception saying that tensor float is not sub-module.
You are misunderstanding what Modules are. A Module stores parameters and defines an implementation of the forward pass. You're allowed to perform arbitrary computation with tensors and parameters resulting in other new tensors. Modules need not be aware of those tensors. You're also allowed to store lists of tensors in Python lists. When calling backward it needs to be on a scalar tensor thus the sum of the concatenation. These tensors are losses and not parameters so they should not be attributes of a Module nor wrapped in a ModuleList.
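For illustration, a minimal REINFORCE-style sketch (the names and numbers are made up) showing that a plain Python list of loss tensors is fine, as long as you reduce it to a scalar before calling backward():
import torch

log_probs = [torch.tensor(0.3, requires_grad=True),   # stand-ins for log-probabilities of chosen actions
             torch.tensor(0.7, requires_grad=True)]
advantages = [1.0, -0.5]

policy_loss = [-lp * adv for lp, adv in zip(log_probs, advantages)]
loss = torch.stack(policy_loss).sum()    # scalar tensor
loss.backward()                          # gradients flow back into log_probs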
https://stackoverflow.com/questions/55188698/
Pytorch select values from the last tensor dimension with indices from another tenor with a smaller dimension
I have a tensor a with three dimensions. The first dimension corresponds to minibatch size, the second to the sequence length, and the third to the feature dimension. E.g., >>> a = torch.arange(1, 13, dtype=torch.float).view(2,2,3) # Consider the values of a to be random >>> a tensor([[[ 1., 2., 3.], [ 4., 5., 6.]], [[ 7., 8., 9.], [10., 11., 12.]]]) I have a second, two-dimensional tensor. Its first dimension corresponds to the minibatch size and its second dimension to the sequence length. It contains values in the range of the indices of the third dimension of a. as third dimension has size 3, so b can contain values 0, 1 or 2. E.g., >>> b = torch.LongTensor([[0, 2],[1,0]]) >>> b tensor([[0, 2], [1, 0]]) I want to obtain a tensor c that has the shape of b and contains all the values of a that are referenced by b. In the upper scenario I would like to have: c = torch.empty(2,2) c[0,0] = a[0, 0, b[0,0]] c[1,0] = a[1, 0, b[1,0]] c[0,1] = a[0, 1, b[0,1]] c[1,1] = a[1, 1, b[1,1]] >>> c tensor([[ 1., 5.], [ 8., 10.]]) How can I create the tensor c fast? Further, I also want c to be differentiable (be able to use .backprob()). I am not too familiar with pytorch, so I am not sure, if a differentiable version of this exists. As an alternative, instead of c having the same shape as b I could also use a c with the same shape of a, having only zeros, but at the places referenced by b ones. Then I could multiply a and c to obtain a differentiable tensor. Like follows: c = torch.zeros(2,2,3, dtype=torch.float) c[0,0,b[0,0]] = 1 c[1,0,b[1,0]] = 1 c[0,1,b[0,1]] = 1 c[1,1,b[1,1]] = 1 >>> a*c tensor([[[ 1., 0., 0.], [ 0., 5., 0.]], [[ 0., 8., 0.], [10., 0., 0.]]])
Let's declare the necessary variables first (notice requires_grad in a's initialization; we will use it to ensure differentiability): a = torch.arange(1,13,dtype=torch.float32,requires_grad=True).reshape(2,2,3) b = torch.LongTensor([[0, 2],[1,0]]) Let's reshape a and squash the minibatch and sequence dimensions: temp = a.reshape(-1,3) so temp now looks like: tensor([[ 1., 2., 3.], [ 4., 5., 6.], [ 7., 8., 9.], [10., 11., 12.]], grad_fn=<AsStridedBackward>) Notice how each value of b can be used in each row of temp to get the desired output. Now we do: c = temp[range(len(temp)),b.view(-1)].view(b.size()) Notice how we index temp: range(len(temp)) selects each row and the 1D b, i.e. b.view(-1), gets the corresponding columns. Lastly .view(b.size()) brings this array to the same size as b. If we print c now: tensor([[ 1., 6.], [ 8., 10.]], grad_fn=<ViewBackward>) The presence of grad_fn=... shows that c requires gradient, i.e. it's differentiable.
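As a side note, the same selection can be written in one line with torch.gather, which is also differentiable:
c = a.gather(2, b.unsqueeze(-1)).squeeze(-1)
print(c)   # tensor([[ 1.,  6.], [ 8., 10.]]) with a grad_fn, i.e. still differentiable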
https://stackoverflow.com/questions/55196295/
Inconsistency when comparing scipy, torch and fourier periodic convolution
I'm implementing a 2d periodic convolution on a synthetic image in three different ways: using scipy, using torch and using the Fourier transform (also under torch framework). However, I've got different results. Performing the operation by hand I can see that scipy's convolution yields the correct results. torch's spatial version, on the other hand, yields the expected result inverted. Finally, the Fourier version returns something unexpected. The code is the following: import torch import numpy as np import scipy.signal as sig import torch.nn.functional as F import matplotlib.pyplot as plt def numpy_periodic_conv(f, k): H, W = f.shape periodic_f = np.hstack([f, f]) periodic_f = np.vstack([periodic_f, periodic_f]) conv = sig.convolve2d(periodic_f, k, mode='same') conv = conv[H // 2:-H // 2, W // 2:-W // 2] return periodic_f, conv def torch_periodic_conv(f, k): H, W = f.shape[-2:] periodic_f = f.repeat(1, 1, 2, 2) conv = F.conv2d(periodic_f, k, padding=1) conv = conv[:, :, H // 2:-H // 2, W // 2:-W // 2] return periodic_f.squeeze().numpy(), conv.squeeze().numpy() def torch_fourier_conv(f, k): pad_x = f.shape[-2] - k.shape[-2] pad_y = f.shape[-1] - k.shape[-1] expanded_kernel = F.pad(k, [0, pad_x, 0, pad_y]) fft_x = torch.rfft(f, 2, onesided=False) fft_kernel = torch.rfft(expanded_kernel, 2, onesided=False) real = fft_x[:, :, :, :, 0] * fft_kernel[:, :, :, :, 0] - \ fft_x[:, :, :, :, 1] * fft_kernel[:, :, :, :, 1] im = fft_x[:, :, :, :, 0] * fft_kernel[:, :, :, :, 1] + \ fft_x[:, :, :, :, 1] * fft_kernel[:, :, :, :, 0] fft_conv = torch.stack([real, im], -1) # (a+bj)*(c+dj) = (ac-bd)+(ad+bc)j ifft_conv = torch.irfft(fft_conv, 2, onesided=False) return expanded_kernel.squeeze().numpy(), ifft_conv.squeeze().numpy() if __name__ == '__main__': f = np.concatenate([np.ones((10, 5)), np.zeros((10, 5))], 1) k = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) f_tensor = torch.from_numpy(f).unsqueeze(0).unsqueeze(0).float() k_tensor = torch.from_numpy(k).unsqueeze(0).unsqueeze(0).float() np_periodic_f, np_periodic_conv = numpy_periodic_conv(f, k) tc_periodic_f, tc_periodic_conv = torch_periodic_conv(f_tensor, k_tensor) tc_fourier_k, tc_fourier_conv = torch_fourier_conv(f_tensor, k_tensor) print('Spatial numpy conv shape= ', np_periodic_conv.shape) print('Spatial torch conv shape= ', tc_periodic_conv.shape) print('Fourier torch conv shape= ', tc_fourier_conv.shape) r_np = dict(name='numpy', im=np_periodic_f, k=k, conv=np_periodic_conv) r_torch = dict(name='torch', im=tc_periodic_f, k=k, conv=tc_periodic_conv) r_fourier = dict(name='fourier', im=f, k=tc_fourier_k, conv=tc_fourier_conv) titles = ['{} im', '{} kernel', '{} conv'] results = [r_np, r_torch, r_fourier] fig, axs = plt.subplots(3, 3) for i, r_dict in enumerate(results): axs[i, 0].imshow(r_dict['im'], cmap='gray') axs[i, 0].set_title(titles[0].format(r_dict['name'])) axs[i, 1].imshow(r_dict['k'], cmap='gray') axs[i, 1].set_title(titles[1].format(r_dict['name'])) axs[i, 2].imshow(r_dict['conv'], cmap='gray') axs[i, 2].set_title(titles[2].format(r_dict['name'])) plt.show() The results I'm obtaining: Note: The image for both numpyand torch versions show the periodic image, which is required to perform the periodic convolution. The kernel for the Fourier version shows the original kernel zero-padded to the image size, which is required to compute the element-wise multiplication in the frequency domain. -Edit1: There was an error when in the multiplication in the Fourier version, I was doing (ac-bd)+(ad-bc)j instead of (ac-bd)+(ad+bc)j. 
But now, I get the convolution shifted by one column. -Edit2: torch's spatial convolution results is inverted because the operation is actually a cross-correlation. This was confirmed in the pytorch's official forum here. Furthermore, after fixing the kernel padding as Cris Luengo's answer, the frequency method yielded the same results as the correlations. This is rather strange for me because, as far as I know, the frequency property hold for convolution, not correlation. New-results after fixing the kernel:
The FFT result is wrong because the padding is wrong. When padding, you need to put the origin (center of the kernel) at the top-left corner of the image. See this other answer for details. The difference between the other two is the difference between a convolution and a correlation. It looks like the “numpy” result is a convolution, the “torch” one a correlation.
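As a rough sketch of the padding fix for the code in the question (assuming torch.roll is available in your PyTorch version): pad the kernel to the image size, then circularly shift it so that its centre lands on the top-left pixel before taking the FFT:
pad_x = f.shape[-2] - k.shape[-2]
pad_y = f.shape[-1] - k.shape[-1]
expanded_kernel = F.pad(k_tensor, [0, pad_y, 0, pad_x])
# move the kernel origin (its centre) to position (0, 0)
expanded_kernel = torch.roll(expanded_kernel,
                             shifts=(-(k.shape[-2] // 2), -(k.shape[-1] // 2)),
                             dims=(-2, -1))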
https://stackoverflow.com/questions/55199256/
Unable to Normalize Tensor in PyTorch
I am trying to normalize the tensor outputted by my network but am getting an error in doing so. The code is as follows: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model_load_path = r'path\to\saved\model\file' model.load_state_dict(torch.load(model_load_path)) model.eval() output = model(input).to(device).view(-1, 1, 150, 150) inv_normalize = transforms.Compose( [ transforms.Normalize(mean=[-0.5/0.5], std=[1/0.5]) ] ) print(output.size()) # The size printed is torch.Size([1, 1, 150, 150]) output = inv_normalize(output) I am getting an error in the following line: output = inv_normalize(output) The error reads: TypeError: tensor is not a torch image. My output is a single image, with a single channel, and height and width = 150 Any help will be appreciated! Thanks!
In order to apply transforms.Normalize you have to convert the input to a tensor. For this you can use transforms.ToTensor. inv_normalize = transforms.Compose( [ transforms.ToTensor(), transforms.Normalize(mean=[-0.5/0.5], std=[1/0.5]) ] ) This tensor must consist of three dimensions (channels, height, width). Currently you have one dimension too many. Just remove the extra dimension in your view call: output = model(input).to(device).view(1, 150, 150)
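Alternatively, since the network output is already a tensor, you can undo Normalize(mean=0.5, std=0.5) purely arithmetically, without building a transform at all:
output = model(input).to(device).view(1, 150, 150)
unnormalized = output * 0.5 + 0.5    # inverse of (x - 0.5) / 0.5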
https://stackoverflow.com/questions/55205750/
Load a plain text file into PyTorch
I have two separate files, one is a text file, with each line being a single text. The other file contains the class label of that corresponding line. How do I load this into PyTorch and carry out further tokenization, embedding, etc?
What have you tried already? What you described is still not very PyTorch related: you can make a pre-processing script that loads all the sentences into a single data structure, e.g. a list of (text, label) tuples. You can also already split your data into a training and a hold-out set in this step. You can then dump all this into .csv files. Then, one way to do it is in 3 steps: Implement a Dataset class - to load your data efficiently, reading the produced .csv files; Have another class like Vocabulary that keeps a mapping from tokens to ids and vice versa; Something like a Vectorizer, that converts your sentences into vectors, either one-hot encodings or embeddings; Then you can use this to produce a vector representation of your sentences and pass it to a neural network. Look into this notebook to understand all this in more detail: Sentiment Classification
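A minimal sketch of the Dataset step, assuming two hypothetical files texts.txt and labels.txt with one example per line:
import torch
from torch.utils.data import Dataset, DataLoader

class TextLabelDataset(Dataset):
    def __init__(self, text_path, label_path):
        with open(text_path, encoding='utf-8') as f:
            self.texts = [line.rstrip('\n') for line in f]
        with open(label_path, encoding='utf-8') as f:
            self.labels = [int(line.strip()) for line in f]
        assert len(self.texts) == len(self.labels)

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        return self.texts[idx], self.labels[idx]

# dataset = TextLabelDataset('texts.txt', 'labels.txt')
# loader = DataLoader(dataset, batch_size=32, shuffle=True)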
https://stackoverflow.com/questions/55216339/
Common class for Linear, Conv1d, Conv2d,..., LSTM,
Is there any class that torch::nn::Linear, torch::nn::Conv1d, torch::nn::Conv2d, ..., torch::nn::GRU, ... all inherit from? torch::nn::Module seems to be a good option, though there is a middle class, torch::nn::Cloneable, so that torch::nn::Module alone does not work. Also, torch::nn::Cloneable itself is a template, so it needs a type in the declaration. I want to create a general class model which has std::vector<the common class> layers, so that later I can fill layers with any type of layer that I want, e.g., Linear, LSTM, etc. Is there such a capability in the current API? This can be done easily in Python, but here we need a declaration and this hinders Python's easiness. Thanks, Afshin
I found that nn::sequential can be used for this purpose, and it does not need a forward implementation, which can be a positive point and at the same time a negative point. nn::sequential already requires each module to have a forward implementation, and calls the forward functions in the sequence in which they were added. So, one cannot create an ad-hoc, non-standard forward pass like DenseNet with that, though it is good enough for general usage. In addition, it seems that nn::sequential just uses a std::vector<nn::AnyModule> as its underlying module list. So, std::vector<nn::AnyModule> might also be used.
https://stackoverflow.com/questions/55223728/
Encountering Import Error DLL load failed constantly
I have been trying to intall scikit-learn and pytorch using their respective commands given in the docs: The commands for installing PyTorch are: 1) pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.1-cp37-cp37m-win_amd64.whl 2) pip3 install torchvision The command for installing scikit-learn is: pip install -U scikit-learn Some background: I am using Windows 8.1, Python 3.7.2. My pip is updated. I have also installed Anaconda for solving this using conda, but had zero luck!(Also, here I am running into 'conda' unrecognized error which is another story). Here are the paths my PATH variable holds. PATH C:\Users\satya\Anaconda3; C:\Users\satya\Anaconda3\Library\mingw-w64\bin; C:\Users\satya\Anaconda3\Library\usr\bin; C:\Users\satya\Anaconda3\Library\bin; C:\Users\satya\Anaconda3\Scripts; C:\Users\satya\AppData\Local\Programs\Python\Python37\Scripts\; C:\Users\satya\AppData\Local\Programs\Python\Python37\; C:\Users\satya\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Python 3.7 The Actual Problem: The same commands for installation given above work perfectly fine on my other Windows 10, but, for my Windows 8.1 it gives this error which has become a real PITA Import Error: DLL load failed The specified module could not be found When I import sklearn or import torch I get the exact same error. All the time. Back Story: I have searched almost all the related questions I could find on Stackoverflow and Github for 6+ hours to help me solve this problem. But, none of the answers have helped till now and some haven't had an "understandable" answer. Maybe, its just a small fix, but now, I am choosing to post a question on SO. My Question Again: Can someone please help out and try to explain what I am missing out here? I really want to fix this error for good(and want to be in a position to fix it if I encounter it again). An elaborate answer would really help understand easily. Thank You!
Please check your Python build number with the following command: conda list python Python 3.7.2 with build number h8c8aaf0_2 has a known issue that has since been solved. If this is the build you have, an update will do: conda update python
https://stackoverflow.com/questions/55225211/
How to change the axis on which 1 dimensional convolution is performed on embedding layer in PyTorch?
I've been playing around with text classification in PyTorch and I've encountered a problem with 1 dimensional convolutions. I've set an embedding layer of dimensions (x, y, z) where: x - denotes the batch size y - denotes the length of a sentence (fixed with padding, so 40 words) z - the dimensionality of pre-trained word embedding (for now 100) For simplicity's sake, let's assume I put in a matrix of (1, 40, 100) However, to my knowledge, once I perform torch.nn.conv1d(*args), the resulting matrix becomes (batch size = 1, word size = 40, feature map size = 98) with kernel size of 3. Basically, as I understand it, it convolves around the y axis instead of the x axis and in turn does not capture the spatial relationship between word embeddings. Is there any way to change the convolutional layer so it calculates feature maps around a different axis? TL, DR: Torch conv1d layer behaves this way on the embedding layer: (figure omitted) But I want it to behave like this: (figure omitted) Any help would be much appreciated.
conv1d expects the input's size to be (batch_size, num_channels, length) and there is no way to change that, so you have two possible ways ahead of you: you can either permute the output of the embedding, or you can use a conv1d instead of your embedding layer (in_channels = num_words, out_channels = word_embedding_size, and kernel_size = 1), which is slower than embedding and not a good idea! input = torch.randint(0, 10, (batch_size, sentence_length)) embeddings = word_embedding(input) #(batch_size, sentence_length, embedding_size) embeddings_permuted = embeddings.permute(0, 2, 1) #(batch_size, embedding_size, sentence_length) conv_out = convolution(embeddings_permuted) #(batch_size, conv_out_channels, changed_sentence_length) #now you can either use the output as it is or permute it back (based on your upper layers) #also note that I wrote changed_sentence_length because it is a function of your padding and stride
https://stackoverflow.com/questions/55227010/
What's the difference between sum and torch.sum for a torch Tensor?
I get the same results when using either the python sum or torch.sum so why did torch implement a sum function? Is there a difference between them?
Essentially nothing: torch.sum calls tensor.sum, while Python's built-in sum iterates over the tensor and adds the elements with __add__ (or __radd__ when needed), so the only real difference is in the number of function calls, and tensor.sum() should be the fastest (when you have small tensors the function-call overhead is considerable).
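A quick check (note that for a tensor with more than one dimension, Python's sum only reduces along the first dimension, while torch.sum reduces everything by default):
import torch

x = torch.arange(6.)
print(torch.sum(x), x.sum(), sum(x))   # tensor(15.) tensor(15.) tensor(15.)

y = torch.arange(6.).reshape(2, 3)
print(torch.sum(y))                    # tensor(15.)
print(sum(y))                          # tensor([3., 5., 7.]) - first dimension only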
https://stackoverflow.com/questions/55228863/
Import LSTM from Tensorflow to PyTorch by hand
I am trying to import a pretrained Model from tensorflow to PyTorch. It takes a single input and maps it onto a single output. Confusion comes up, when I try to import the LSTM weights I read the weights and their variables from the file with the following function: def load_tf_model_weights(): modelpath = 'models/model1.ckpt.meta' with tf.Session() as sess: tf.train.import_meta_graph(modelpath) init = tf.global_variables_initializer() sess.run(init) vars = tf.trainable_variables() W = sess.run(vars) return W,vars W,V = load_tf_model_weights() Then I am inspecting the shapes of the weights In [33]: [w.shape for w in W] Out[33]: [(51, 200), (200,), (100, 200), (200,), (50, 1), (1,)] furthermore the variables are defined as In [34]: V Out[34]: [<tf.Variable 'rnn/multi_rnn_cell/cell_0/lstm_cell/kernel:0' shape=(51, 200) dtype=float32_ref>, <tf.Variable 'rnn/multi_rnn_cell/cell_0/lstm_cell/bias:0' shape=(200,) dtype=float32_ref>, <tf.Variable 'rnn/multi_rnn_cell/cell_1/lstm_cell/kernel:0' shape=(100, 200) dtype=float32_ref>, <tf.Variable 'rnn/multi_rnn_cell/cell_1/lstm_cell/bias:0' shape=(200,) dtype=float32_ref>, <tf.Variable 'weight:0' shape=(50, 1) dtype=float32_ref>, <tf.Variable 'FCLayer/Variable:0' shape=(1,) dtype=float32_ref>] So I can say that the first element of W defines the Kernel of an LSTM and the second element define its bias. According to this post, the shape for the Kernel is defined as [input_depth + h_depth, 4 * self._num_units] and the bias as [4 * self._num_units]. We already know that input_depth is 1. So we get, that h_depth and _num_units both have the value 50. In pytorch my LSTMCell, to which I want to assign the weights, looks like this: In [38]: cell = nn.LSTMCell(1,50) In [39]: [p.shape for p in cell.parameters()] Out[39]: [torch.Size([200, 1]), torch.Size([200, 50]), torch.Size([200]), torch.Size([200])] The first two entries can be covered by the first value of W which has the shape (51,200). But the LSTMCell from Tensorflow yields only one bias of shape (200) while pytorch wants two of them And by leaving the bias out I have weights left over: cell2 = nn.LSTMCell(1,50,bias=False) [p.shape for p in cell2.parameters()] Out[43]: [torch.Size([200, 1]), torch.Size([200, 50])] Thanks!
PyTorch uses CuDNN's LSTM layout under the hood (even when you don't have CUDA, it still uses something compatible), thus it has one extra bias term (a bias_ih and a bias_hh instead of a single bias). So you can pick two numbers with their sum equal to 1 (0 and 1, 1/2 and 1/2, or anything else) and set your PyTorch biases to those numbers times TF's bias. pytorch_bias_1 = torch.from_numpy(alpha * tf_bias_data) pytorch_bias_2 = torch.from_numpy((1.0-alpha) * tf_bias_data)
https://stackoverflow.com/questions/55229636/
pytorch: get number of classes given an ImageFolder dataset
If I have a dataset like: image_datasets['train'] = datasets.ImageFolder(train_dir, transform=train_transforms) How do I determine programatically the number of classes or unique labels in the dataset?
Use: len(image_datasets['train'].classes) .classes returns a list.
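For example, ImageFolder also exposes the name-to-index mapping next to the class list:
num_classes = len(image_datasets['train'].classes)
print(num_classes)
print(image_datasets['train'].class_to_idx)   # e.g. {'class_a': 0, 'class_b': 1, ...}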
https://stackoverflow.com/questions/55235594/
How to open pretrained models in python
Hi I'm trying to load some pretrained models from .sav files and so far nothing is working. The models were originally made in pytorch and when I open the raw file in vs-code I can see that all the appropiate information was stored correctly. I've tried the following libraries: sklearn.externals.joblib pickle scipy.io pyreadstat Each library either gave me an error (such as wrong timestamp or signature mismatch) or just return an int instead of a python object. The models can be downloaded from this link.
You need to use PyTorch to load the models. On top of this, you also need the original model definition, so you need to clone the author's repository. In your example this repo: git clone https://github.com/tbepler/protein-sequence-embedding-iclr2019.git Then you can open the model with torch.load(). Note that you need the model definition on your path (you can simply launch python from the repo directory). Then it's straightforward to open a file: import torch model = torch.load('<downloaded models>/<model name>.sav') print(model) The last line prints the model definition. For example, me_L1_100d_lstm3x512_lm_i512_mb64_tau0.5_p0.05_epoch100.sav produced the following output: OrdinalRegression( (embedding): StackedRNN( (embed): LMEmbed( (lm): BiLM( (embed): Embedding(22, 21, padding_idx=21) (dropout): Dropout(p=0) (rnn): ModuleList( (0): LSTM(21, 1024, batch_first=True) (1): LSTM(1024, 1024, batch_first=True) ) (linear): Linear(in_features=1024, out_features=21, bias=True) ) (embed): Embedding(21, 512, padding_idx=20) (proj): Linear(in_features=4096, out_features=512, bias=True) (transform): ReLU() ) (dropout): Dropout(p=0) (rnn): LSTM(512, 512, num_layers=3, batch_first=True, bidirectional=True) (proj): Linear(in_features=1024, out_features=100, bias=True) ) (compare): L1() )
https://stackoverflow.com/questions/55237899/
How to add parameters from self.some_dictionary_of_modules to self.parameters?
Consider this simple example: import torch class MyModule(torch.nn.Module): def __init__(self): super(MyModule, self).__init__() self.conv_0=torch.nn.Conv2d(3,32,3,stride=1,padding=0) self.blocks=torch.nn.ModuleList([ torch.nn.Conv2d(3,32,3,stride=1,padding=0), torch.nn.Conv2d(32,64,3,stride=1,padding=0)]) #the problematic part self.dict_block={"key_1": torch.nn.Conv2d(64,128,3,1,0), "key_2": torch.nn.Conv2d(56,1024,3,1,0)} if __name__=="__main__": my_module=MyModule() print(my_module.parameters) The output I get here is (notice that the parmeters from self.dict_block are missing) <bound method Module.parameters of MyModule( (conv_0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1)) (blocks): ModuleList( (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1)) (1): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)) ) )> This means that if I want the parameters of self.dict_block to be optimised, I will need to use something like my_optimiser.add_param_group({"params": params_from_self_dict}) before using my optimiser. However, I think there might be a more straight-forward alternative that will add the parameters of self.dict_block to the parameters of the my_module_object. Something that comes close is nn.Parameter(...) as explained here, but this requires the input to be a tensor.
Found the answer. Posting it in case someone runs into the same problem: Looking into <torch_install>/torch/nn/modules/container.py, it looks like there is a class torch.nn.ModuleDict that does exactly that. So in the example I gave in the question, the solution would be: self.dict_block=torch.nn.ModuleDict({"key_1": torch.nn.Conv2d(64,128,3,1,0), "key_2": torch.nn.Conv2d(56,1024,3,1,0)})
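As a quick sanity check, assuming MyModule is rebuilt with the ModuleDict change above, the parameters of the dict block should now show up in named_parameters():
my_module = MyModule()
for name, _ in my_module.named_parameters():
    print(name)   # now also lists dict_block.key_1.weight, dict_block.key_2.weight, etc. alongside conv_0 and blocks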
https://stackoverflow.com/questions/55237944/
how to get torch.Size([1, 3, 16, 112, 112]) instead of torch.Size([1, 3, 16, 64, 64]) after ConvTranspose3d
I have a torch.Size([1, 64, 8, 32, 32]) which I want after my transpose 3d convolution to become torch.Size([1, 3, 16, 112, 112]). Using this: nn.ConvTranspose3d(64, 3, kernel_size=4, stride=2, bias=False, padding=(1, 1, 1)) I get the correct output channels and the number of frames, but not the frame sizes: torch.Size([1, 3, 16, 64, 64]). What should I change in order to get the right torch.Size?
You should use different strides and paddings for the different dimensions: ConvTranspose3d(64, 3, kernel_size=4, stride=(2, 4, 4), bias=False, padding=(1, 8, 8))
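A quick shape check of that layer, following the transposed-convolution formula (in - 1) * stride - 2 * padding + kernel_size:
import torch
import torch.nn as nn

x = torch.randn(1, 64, 8, 32, 32)
deconv = nn.ConvTranspose3d(64, 3, kernel_size=4, stride=(2, 4, 4), bias=False, padding=(1, 8, 8))
print(deconv(x).shape)  # torch.Size([1, 3, 16, 112, 112])
# depth:  (8 - 1) * 2 - 2 * 1 + 4 = 16
# height: (32 - 1) * 4 - 2 * 8 + 4 = 112 (same for width)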
https://stackoverflow.com/questions/55245081/
Is Feature Scaling recommended for AutoEncoder?
Problem: The Stacked Auto Encoder is being applied to a dataset with 25K rows and 18 columns, all float values. SAE is used for feature extraction with encoding & decoding. When I train the model without feature scaling, the loss is around 50K, even after 200 epochs. But, when scaling is applied the loss is around 3 from the first epoch. My questions: Is it recommended to apply feature scaling when SAE is used for feature extraction? Does it impact accuracy during decoding?
With a few exceptions, you should always apply feature scaling in machine learning, especially when working with gradient descent as in your SAE. Scaling your features will ensure a much smoother cost function and thus faster convergence to global (hopefully) minima. It is also worth noting that your much smaller loss after 1 epoch with scaling is a result of the much smaller values used to compute the loss. As for your second question: no, it does not impact accuracy during decoding.
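For example, a minimal sketch of scaling the features before training, assuming the raw data is a NumPy array X (scikit-learn's StandardScaler is one common choice):
from sklearn.preprocessing import StandardScaler
import torch

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)               # zero mean, unit variance per column
inputs = torch.from_numpy(X_scaled).float()      # train the SAE on the scaled inputs
# after decoding, scaler.inverse_transform(...) maps reconstructions back to the original units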
https://stackoverflow.com/questions/55253587/
PyTorch did not compute gradient and update parameters for 'masking' tensors?
I am coding a LeNet in PyTorch for MNIST dataset; I add a tensor self.mask_fc1\2\3 to mask some certain connections for the full connection layers. The code is like this: import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np import torch.nn as nn import torch.nn.functional as F import torch.optim as optim def loadMNIST(): transform = transforms.Compose([transforms.ToTensor()]) trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) return trainloader, testloader class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 6, 5, 1, 2) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.mask_fc1 = torch.ones(16 * 5 * 5, 120, requires_grad=True) self.mask_fc2 = torch.ones(120, 84, requires_grad=True) self.mask_fc3 = torch.ones(84, 10, requires_grad=True) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, self.num_flat_features(x)) # first layer x = x.matmul(self.fc1.weight.t() * self.mask_fc1) if self.fc1.bias is not None: x += torch.jit._unwrap_optional(self.fc1.bias) x = F.relu(x) # second layer x = x.matmul(self.fc2.weight.t() * self.mask_fc2) if self.fc2.bias is not None: x += torch.jit._unwrap_optional(self.fc2.bias) x = F.relu(x) # third layer x = x.matmul(self.fc3.weight.t() * self.mask_fc3) if self.fc3.bias is not None: x += torch.jit._unwrap_optional(self.fc3.bias) return x def num_flat_features(self, x): size = x.size()[1:] num_features = 1 for s in size: num_features *= s return num_features if __name__ == '__main__': trainloader, testloader = loadMNIST() net = Net() # train criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(2): running_loss = 0.0 for i, data in enumerate(trainloader, 0): inputs, labels = data optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print running_loss += loss.item() if i % 2000 == 1999: # mean loss print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') # print the mask print(net.mask_fc1) I achieve the masking in the forward function and implement linear layer by myself rather than call x = F.relu(self.fc1(x)), and the model performs normally (at last in terms of the loss and accuracy). However, when I print self.mask_fc1/2/3, the tensor does not change during the training. Since the tensor was set requires_grad=True in function __init__, I cannot understand why it does not change. Maybe it is because the tensor multiplication?
For training, you need to register mask_fc1/2/3 as module parameters: self.mask_fc1 = nn.Parameter(torch.ones(16 * 5 * 5, 120)) You can print net.parameters() after that to confirm.
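A sketch of the change inside Net.__init__ from the question, plus a quick check that the masks are now registered (shapes taken from the question):
# inside Net.__init__
self.mask_fc1 = nn.Parameter(torch.ones(16 * 5 * 5, 120))
self.mask_fc2 = nn.Parameter(torch.ones(120, 84))
self.mask_fc3 = nn.Parameter(torch.ones(84, 10))

# after building the model
net = Net()
for name, p in net.named_parameters():
    if 'mask' in name:
        print(name, p.shape, p.requires_grad)   # the three masks now appear and will be updated by the optimizer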
https://stackoverflow.com/questions/55255402/
Need help understanding the label input in a CGAN
I am trying to implement a CGAN. I understand that in convolutional generator and discriminator models, you add volume to the inputs by adding depth that represents the label. So if you have 10 classes in your data, your generator and discriminator would both have the base depth + 10 as its input volume. However, I am reading various implementations online and I can't seem to find where they are actually acquiring this label. Surely CGANs can't be unsupervised because you need to obtain the label to input. e.g. in cifar10 if you are training the discriminator on a real image of a frog, you would need the 'frog' annotation. Here is one of the pieces of code I am studying: class CGAN(object): def __init__(self, args): # parameters self.epoch = args.epoch self.batch_size = args.batch_size self.save_dir = args.save_dir self.result_dir = args.result_dir self.dataset = args.dataset self.log_dir = args.log_dir self.gpu_mode = args.gpu_mode self.model_name = args.gan_type self.input_size = args.input_size self.z_dim = 62 self.class_num = 10 self.sample_num = self.class_num ** 2 # load dataset self.data_loader = dataloader(self.dataset, self.input_size, self.batch_size) data = self.data_loader.__iter__().__next__()[0] # networks init self.G = generator(input_dim=self.z_dim, output_dim=data.shape[1], input_size=self.input_size, class_num=self.class_num) self.D = discriminator(input_dim=data.shape[1], output_dim=1, input_size=self.input_size, class_num=self.class_num) self.G_optimizer = optim.Adam(self.G.parameters(), lr=args.lrG, betas=(args.beta1, args.beta2)) self.D_optimizer = optim.Adam(self.D.parameters(), lr=args.lrD, betas=(args.beta1, args.beta2)) if self.gpu_mode: self.G.cuda() self.D.cuda() self.BCE_loss = nn.BCELoss().cuda() else: self.BCE_loss = nn.BCELoss() print('---------- Networks architecture -------------') utils.print_network(self.G) utils.print_network(self.D) print('-----------------------------------------------') # fixed noise & condition self.sample_z_ = torch.zeros((self.sample_num, self.z_dim)) for i in range(self.class_num): self.sample_z_[i*self.class_num] = torch.rand(1, self.z_dim) for j in range(1, self.class_num): self.sample_z_[i*self.class_num + j] = self.sample_z_[i*self.class_num] temp = torch.zeros((self.class_num, 1)) for i in range(self.class_num): temp[i, 0] = i temp_y = torch.zeros((self.sample_num, 1)) for i in range(self.class_num): temp_y[i*self.class_num: (i+1)*self.class_num] = temp self.sample_y_ = torch.zeros((self.sample_num, self.class_num)).scatter_(1, temp_y.type(torch.LongTensor), 1) if self.gpu_mode: self.sample_z_, self.sample_y_ = self.sample_z_.cuda(), self.sample_y_.cuda() def train(self): self.train_hist = {} self.train_hist['D_loss'] = [] self.train_hist['G_loss'] = [] self.train_hist['per_epoch_time'] = [] self.train_hist['total_time'] = [] self.y_real_, self.y_fake_ = torch.ones(self.batch_size, 1), torch.zeros(self.batch_size, 1) if self.gpu_mode: self.y_real_, self.y_fake_ = self.y_real_.cuda(), self.y_fake_.cuda() self.D.train() print('training start!!') start_time = time.time() for epoch in range(self.epoch): self.G.train() epoch_start_time = time.time() for iter, (x_, y_) in enumerate(self.data_loader): if iter == self.data_loader.dataset.__len__() // self.batch_size: break z_ = torch.rand((self.batch_size, self.z_dim)) y_vec_ = torch.zeros((self.batch_size, self.class_num)).scatter_(1, y_.type(torch.LongTensor).unsqueeze(1), 1) y_fill_ = y_vec_.unsqueeze(2).unsqueeze(3).expand(self.batch_size, self.class_num, self.input_size, 
self.input_size) if self.gpu_mode: x_, z_, y_vec_, y_fill_ = x_.cuda(), z_.cuda(), y_vec_.cuda(), y_fill_.cuda() # update D network self.D_optimizer.zero_grad() D_real = self.D(x_, y_fill_) D_real_loss = self.BCE_loss(D_real, self.y_real_) G_ = self.G(z_, y_vec_) D_fake = self.D(G_, y_fill_) D_fake_loss = self.BCE_loss(D_fake, self.y_fake_) D_loss = D_real_loss + D_fake_loss self.train_hist['D_loss'].append(D_loss.item()) D_loss.backward() self.D_optimizer.step() # update G network self.G_optimizer.zero_grad() G_ = self.G(z_, y_vec_) D_fake = self.D(G_, y_fill_) G_loss = self.BCE_loss(D_fake, self.y_real_) self.train_hist['G_loss'].append(G_loss.item()) G_loss.backward() self.G_optimizer.step() It seems as if y_vec_ and y_fill_ are the labels for the images, but in the instance of y_fill_ which is used to label real images for the discriminator, it equals y_fill_ = y_vec_.unsqueeze(2).unsqueeze(3).expand(self.batch_size, self.class_num, self.input_size, self.input_size) It doesn't seem like its getting any information on the label from the dataset? How is it giving the discriminator the correct label? Thanks!
y_fill_ is based on y_vec_, which is based on y_, so they are reading label info from the mini-batches, which is correct. You might be confused by the scatter operation; basically what the code is doing is converting the label into a one-hot encoding.
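A small sketch of what those two tensors look like, with a hypothetical batch of 4 labels, 10 classes and input_size 32:
import torch

y_ = torch.tensor([3, 0, 7, 1])                              # integer labels coming from the data loader
y_vec_ = torch.zeros(4, 10).scatter_(1, y_.unsqueeze(1), 1)  # one-hot: a 1 in each sample's class column
y_fill_ = y_vec_.unsqueeze(2).unsqueeze(3).expand(4, 10, 32, 32)
print(y_vec_.shape, y_fill_.shape)   # torch.Size([4, 10]) torch.Size([4, 10, 32, 32])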
https://stackoverflow.com/questions/55257936/
PyTorch preferred way to copy a tensor
There seems to be several ways to create a copy of a tensor in PyTorch, including y = tensor.new_tensor(x) #a y = x.clone().detach() #b y = torch.empty_like(x).copy_(x) #c y = torch.tensor(x) #d b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable. Any reasons for/against using c?
TL;DR Use .clone().detach() (or preferably .detach().clone()) If you first detach the tensor and then clone it, the computation path is not copied, the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient. -- pytorch forums Use it as it's slightly faster and explicit in what it does. Using perfplot, I plotted the timing of various methods to copy a pytorch tensor. y = tensor.new_tensor(x) # method a y = x.clone().detach() # method b y = torch.empty_like(x).copy_(x) # method c y = torch.tensor(x) # method d y = x.detach().clone() # method e The x-axis is the dimension of the tensor created, the y-axis shows the time. The graph is in linear scale. As you can clearly see, tensor() or new_tensor() takes more time compared to the other three methods. Note: In multiple runs, I noticed that out of b, c, e, any method can have the lowest time. The same is true for a and d. But, the methods b, c, e consistently have lower timing than a and d. import torch import perfplot perfplot.show( setup=lambda n: torch.randn(n), kernels=[ lambda a: a.new_tensor(a), lambda a: a.clone().detach(), lambda a: torch.empty_like(a).copy_(a), lambda a: torch.tensor(a), lambda a: a.detach().clone(), ], labels=["new_tensor()", "clone().detach()", "empty_like().copy()", "tensor()", "detach().clone()"], n_range=[2 ** k for k in range(15)], xlabel="len(a)", logx=False, logy=False, title='Timing comparison for copying a pytorch tensor', )
https://stackoverflow.com/questions/55266154/
Pytorch - why does preallocating memory cause "trying to backward through the graph a second time"
Suppose I have a simple one-hidden-layer network that I'm training in the typical way: for x,y in trainData: optimizer.zero_grad() out = self(x) loss = self.lossfn(out, y) loss.backward() optimizer.step() This works as expected, but if I instead pre-allocate and update the output array, I get an error: out = torch.empty_like(trainData.tensors[1]) for i,(x,y) in enumerate(trainData): optimizer.zero_grad() out[i] = self(x) loss = self.lossfn(out[i], y) loss.backward() optimizer.step() RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time. What's happening here that in the second version Pytorch attempts to backward through the graph again? Why is this not an issue in the first version? (Note that this error occurs even if I don't zero_grad())
The error implies that the program is trying to backpropagate through a set of operations a second time. The first time you backpropagate through a set of operations, pytorch deletes the computational graph to free memory. Therefore, the second time you try to backpropagate it fails as the graph has already been deleted. Here's a detailed explanation of the same. Short answer Use loss.backward(retain_graph=True). This will not delete the computational graph. Detailed answer In the first version, in each loop iteration, a new computational graph is generated every time out = self(x) is run. Every loop's graph is simply: out = self(x) -> loss = self.lossfn(out, y) In the second version, since out is declared outside the loop, the computational graphs of all iterations share a parent node outside the loop: the preallocated tensor out acts as the parent, with one branch out[i] = self(x) -> loss = self.lossfn(out[i], y) per iteration. Therefore, here's a timeline of what happens: the first iteration runs; the computation graph is deleted, including the parent node; the second iteration attempts to backpropagate but fails since it didn't find the parent node.
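For completeness, a sketch of the second version from the question with that one-line fix applied (everything else unchanged):
out = torch.empty_like(trainData.tensors[1])
for i, (x, y) in enumerate(trainData):
    optimizer.zero_grad()
    out[i] = self(x)
    loss = self.lossfn(out[i], y)
    loss.backward(retain_graph=True)   # keep the graph alive so the shared parent node is not freed
    optimizer.step()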
https://stackoverflow.com/questions/55268726/
pytorch linear regression given wrong results
I implemented a simple linear regression and I’m getting some poor results. Just wondering if these results are normal or I’m making some mistake. I tried different optimizers and learning rates, I always get bad/poor results Here is my code: import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt from torch.autograd import Variable class LinearRegressionPytorch(nn.Module): def __init__(self, input_dim=1, output_dim=1): super(LinearRegressionPytorch, self).__init__() self.linear = nn.Linear(input_dim, output_dim) def forward(self,x): x = x.view(x.size(0),-1) y = self.linear(x) return y input_dim=1 output_dim = 1 if torch.cuda.is_available(): model = LinearRegressionPytorch(input_dim, output_dim).cuda() else: model = LinearRegressionPytorch(input_dim, output_dim) criterium = nn.MSELoss() l_rate =0.00001 optimizer = torch.optim.SGD(model.parameters(), lr=l_rate) #optimizer = torch.optim.Adam(model.parameters(),lr=l_rate) epochs = 100 #create data x = np.random.uniform(0,10,size = 100) #np.linspace(0,10,100); y = 6*x+5 mu = 0 sigma = 5 noise = np.random.normal(mu, sigma, len(y)) y_noise = y+noise #pass it to pytorch x_data = torch.from_numpy(x).float() y_data = torch.from_numpy(y_noise).float() if torch.cuda.is_available(): inputs = Variable(x_data).cuda() target = Variable(y_data).cuda() else: inputs = Variable(x_data) target = Variable(y_data) for epoch in range(epochs): #predict data pred_y= model(inputs) #compute loss loss = criterium(pred_y, target) #zero grad and optimization optimizer.zero_grad() loss.backward() optimizer.step() #if epoch % 50 == 0: # print(f'epoch = {epoch}, loss = {loss.item()}') #print params for name, param in model.named_parameters(): if param.requires_grad: print(name, param.data) There are the poor results : linear.weight tensor([[1.7374]], device='cuda:0') linear.bias tensor([0.1815], device='cuda:0') The results should be weight = 6 , bias = 5
Problem Solution Actually your batch_size is problematic. If you have it set as one, your target needs the same shape as outputs (which you are, correctly, reshaping with view(-1, 1)). Your loss should be defined like this: loss = criterium(pred_y, target.view(-1, 1)) This network is correct Results Your results will not be bias=5 (yes, weight will go towards 6 indeed) as you are adding random noise to target (and as it's a single value for all your data points, only bias will be affected). If you want bias equal to 5, remove the addition of noise. You should increase the number of epochs as well, as your data is quite small and the network (linear regression in fact) is not really powerful. 10000, say, should be fine and your loss should oscillate around 0 (if you change your noise to something sensible). Noise You are creating multiple Gaussian distributions with different variations, hence your loss would be higher. Linear regression is unable to fit your data and find a sensible bias (as the optimal slope is still approximately 6 for your noise, you may try to increase the multiplication of 5 to 1000 and see what weight and bias will be learned). Style (a little off-topic) Please read the documentation about PyTorch and keep your code up to date (e.g. Variable is deprecated in favor of Tensor and rightfully so). This part of code: x_data = torch.from_numpy(x).float() y_data = torch.from_numpy(y_noise).float() if torch.cuda.is_available(): inputs = Tensor(x_data).cuda() target = Tensor(y_data).cuda() else: inputs = Tensor(x_data) target = Tensor(y_data) Could be written succinctly like this (without much thought): inputs = torch.from_numpy(x).float() target = torch.from_numpy(y_noise).float() if torch.cuda.is_available(): inputs = inputs.cuda() target = target.cuda() I know deep learning has its reputation for bad code and fatal practice, but please do not help spread this approach.
https://stackoverflow.com/questions/55270266/
PyTorch - GPU is not used by tensors despite CUDA support is detected
As the title of the question clearly describes, even though torch.cuda.is_available() returns True, CPU is used instead of GPU by tensors. I have set the device of the tensor to GPU through the images.to(device) function call after defining the device. When I debug my code, I am able to see that the device is set to cuda:0; but the tensor's device is still set to cpu. Defining the device: use_cuda = torch.cuda.is_available() # returns True device = torch.device('cuda:0' if use_cuda else 'cpu') Determining the device of the tensors: for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): images.to(device) labels.to(device) # both of images and labels' devices are set to cpu The software stack: Python 3.7.1 torch 1.0.1 Windows 10 64-bit p.s. PyTorch is installed with the option of Cuda 9.0 support.
tensor.to() does not modify the tensor in-place. It returns a new tensor that's stored on the specified device. Use the following instead: images = images.to(device) labels = labels.to(device)
https://stackoverflow.com/questions/55274076/
Determining the result of a convolution operation
Following guide from https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807 I'm attempting to calculate the number of output features using code below : The output of : %reset -f import torch import torch.nn as nn my_tensor = torch.randn((1, 16, 12, 12), requires_grad=False) print(my_tensor.shape) update_1 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1) print(update_1(my_tensor).shape) is : torch.Size([1, 16, 12, 12]) torch.Size([1, 16, 6, 6]) How is torch.Size([1, 16, 6, 6]) calculated in the context of applying formula : (taken from https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807) Attempting to calculate the number of output features manually by applying the formula : stride = 2 padding = 1 kernel_size = 3 # 2304 as n_in = 1 * 16 * 16 * 12 n_out = ((2304 + (2 * padding) - kernel_size) / stride) + 1 print(n_out) prints : 1152.5 But the produced number of output features is print(1 * 16 * 6 *6) = 576. I've taken the product of 1,16,6,6 as this is the shape of the result of print(update_1(my_tensor).shape) Update : Based on questions below I've updated code to : %reset -f import torch import torch.nn as nn from math import floor stride_value = 2 padding_value = 1 kernel_size_value = 3 number_channels = 3 width = 10 height = 12 my_tensor = torch.randn((1, number_channels, width, height), requires_grad=False) print(my_tensor.shape) update_1 = nn.Conv2d(in_channels=number_channels, out_channels=16, kernel_size=kernel_size_value, stride=stride_value, padding=padding_value) print(update_1(my_tensor).shape) n_out = floor((number_channels + (2 * padding_value) - kernel_size_value) / stride_value) + 1 print(n_out) print(my_tensor.shape) produces : torch.Size([1, 3, 10, 12]) print(update_1(my_tensor).shape) produces : torch.Size([1, 16, 5, 6]) print(update_1(n_out).shape) produces : 2 2 does not match the number of output features in each dimension. Have I implemented the calculation correctly ? As the number of horizontal features produces is 5 and number of vertical features produces is 6 is this formula not applicable where the number of features differ as for an image it does not make sense to have differing x and y axis values length ?
I see where your confusion is coming from. The formula computes the linear number of outputs, whereas you assume that it operates on the whole tensor. So the correct code is: from math import floor stride = 2 padding = 1 kernel_size = 3 n_out = floor((12 + (2 * padding) - kernel_size) / stride) + 1 print(n_out) Therefore, it outputs 6 "horizontal" features. Since the input tensor has the same "vertical" dimension (12), the formula will also produce 6 "vertical" features. Finally, 16 is the number of output channels you have specified in Conv2d. Putting it all together, the output is 1 image in a batch, 16 channels, 6 horizontal features, and 6 vertical features, which totals 576 features. UPDATE By convention, the number of output channels is not calculated by the formula, but provided manually as a second parameter to nn.Conv2d. Therefore, to correct the second code above: import torch import torch.nn as nn from math import floor stride_value = 2 padding_value = 1 kernel_size_value = 3 number_channels = 3 width = 10 height = 12 my_tensor = torch.randn((1, number_channels, width, height), requires_grad=False) print(my_tensor.shape) update_1 = nn.Conv2d(in_channels=number_channels, out_channels=16, kernel_size=kernel_size_value, stride=stride_value, padding=padding_value) print(update_1(my_tensor).shape) n_out1 = floor((width + (2 * padding_value) - kernel_size_value) / stride_value) + 1 n_out2 = floor((height + (2 * padding_value) - kernel_size_value) / stride_value) + 1 print("(Expected: 5, 6)", n_out1, n_out2)
https://stackoverflow.com/questions/55275826/
Different methods for initializing embedding layer weights in Pytorch
There seem to be two ways of initializing embedding layers in Pytorch 1.0 using a uniform distribution. For example you have an embedding layer: self.in_embed = nn.Embedding(n_vocab, n_embed) And you want to initialize its weights with a uniform distribution. The first way you can get this done is: self.in_embed.weight.data.uniform_(-1, 1) And another one would be: nn.init.uniform_(self.in_embed.weight, -1.0, 1.0) My question is: what is the difference between the first and second initialization forms? Do both methods do the same thing?
Both are the same: torch.manual_seed(3) emb1 = nn.Embedding(5,5) emb1.weight.data.uniform_(-1, 1) torch.manual_seed(3) emb2 = nn.Embedding(5,5) nn.init.uniform_(emb2.weight, -1.0, 1.0) assert torch.sum(torch.abs(emb1.weight.data - emb2.weight.data)).numpy() == 0 Every tensor has a uniform_ method which initializes it with values from the uniform distribution. Also, the nn.init module has a method uniform_ which takes in a tensor and inits it with values from the uniform distribution. Both are the same, except the first one uses the member function and the second uses a general utility function.
https://stackoverflow.com/questions/55276504/
ValueError: operands could not be broadcast together with shapes (50,50,512) (3,) (50,50,512) while converting tensor to image in pytorch
I'm doing a neural style transfer. I'm trying to reconstruct the output of the convolutional layer conv4_2 of the VGG19 network. def get_features(image, model): layers = {'0': 'conv1_1', '5': 'conv2_1', '10': 'conv3_1', '19': 'conv4_1', '21': 'conv4_2', '28': 'conv5_1'} x = image features = {} for name, layer in model._modules.items(): x = layer(x) if name in layers: features[layers[name]] = x return features content_img_features = get_features(content_img, vgg) style_img_features = get_features(style_img, vgg) target_content = content_img_features['conv4_2'] content_img_features is a dict that contains the output of every layer. target_content is a tensor of shape torch.Size([1, 512, 50, 50]) This is the method I use to plot the image using the tensor. It works fine for the input image as well as the final output. def tensor_to_image(tensor): image = tensor.clone().detach() image = image.numpy().squeeze() image = image.transpose(1, 2, 0) image *= np.array((0.22, 0.22, 0.22))+ np.array((0.44, 0.44, 0.44)) image = image.clip(0, 1) return image image = tensor_to_image(target_content) fig = plt.figure() plt.imshow(image) But this throws the error, --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-188-a75a5f0743bb> in <module>() 1 ----> 2 image = tensor_to_image(target_content) 3 fig = plt.figure() 4 plt.imshow(image) <ipython-input-186-e9385dbc4a85> in tensor_to_image(tensor) 3 image = image.numpy().squeeze() 4 image = image.transpose(1, 2, 0) ----> 5 image *= np.array((0.22, 0.22, 0.22))+ np.array((0.44, 0.44, 0.44)) 6 image = image.clip(0, 1) 7 return image ValueError: operands could not be broadcast together with shapes (50,50,512) (3,) (50,50,512) This is the initial transformation I apply to the image before passing to the cnn layers, def transformation(img): tasks = tf.Compose([tf.Resize(400), tf.ToTensor(), tf.Normalize((0.44,0.44,0.44),(0.22,0.22,0.22))]) img = tasks(img)[:3,:,:].unsqueeze(0) return img How do I fix this? Is there another way to reconstruct the image from the convolution layer?
Your tensor_to_image method only works for 3-channel images. Your input to the network is 3 channels, and so is the final output, therefore it works fine there. But you cannot do the same at an internal high dimensional activation. Essentially the problem is that you try to apply a channel-wise normalization, but you have parameters for only three channels; that's why that particular line fails. You would need a 512-element vector of means and standard deviations. So for example this would work: image *= np.random.random([512]) + np.random.random([512]) However the fundamental problem is still that you try to visualize a high dimensional, 512-channel image, instead of a traditional 3-channel (RGB) image. You may try to visualize channels separately, or in groups of 3, but still it might not be really useful.
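If you do want to look at individual channels, here is a rough sketch using matplotlib; the channel indices are arbitrary, and target_content is the [1, 512, 50, 50] tensor from the question:
import matplotlib.pyplot as plt

act = target_content.squeeze(0).detach().cpu()   # [512, 50, 50]
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, ch in zip(axes, [0, 100, 200, 300]):
    ax.imshow(act[ch], cmap='viridis')           # one 50x50 response map per channel
    ax.set_title('channel %d' % ch)
    ax.axis('off')
plt.show()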
https://stackoverflow.com/questions/55277192/
RuntimeError: Expected object of backend CUDA but got backend CPU for argument: ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
When the forward function of my neural network (after the training phase is completed) is being executed, I'm experiencing RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1'. The error trace indicates the error happens due to the call of output = self.layer1(x) command. I have tried to move all the data of the tensors to my GPU. It seems I miss something to be moved as well. Here is the code I have tried: use_cuda = torch.cuda.is_available() device = torch.device('cuda:0' if use_cuda else 'cpu') class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(NeuralNet, self).__init__() self.layer1 = nn.Linear(input_size, hidden_size).cuda(device) self.layer2 = nn.Linear(hidden_size, output_size).cuda(device) self.relu = nn.ReLU().cuda(device) def forward(self, x): x.cuda(device) output = self.layer1(x) # throws the error output = self.relu(output) output = self.layer2(output) return output def main(): transform = transforms.Compose([ transforms.ToTensor() ]) mnist_trainset = datasets.MNIST(root='D:\\MNIST', train=True, download=False, transform=transform) mnist_testset = datasets.MNIST(root='D:\\MNIST', train=False, download=False, transform=transform) train_loader = DataLoader(dataset=mnist_trainset, batch_size=100, shuffle=True) test_loader = DataLoader(dataset=mnist_testset, batch_size=100, shuffle=False) input_size = 784 hidden_size = 500 output_size = 10 num_epochs = 5 learning_rate = 0.001 model = NeuralNet(input_size, hidden_size, output_size) model.cuda(device) lossFunction = nn.CrossEntropyLoss() lossFunction.cuda(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) losses_in_epochs = [] total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): images = images.to(device) labels = labels.to(device) images = images.reshape(-1, 28 * 28) out = model(images) loss = lossFunction(out, labels) optimizer.zero_grad() loss.backward() optimizer.step() if (i + 1) % 100 == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, i + 1, total_step, loss.item())) if (i % 600) == 0: losses_in_epochs.append(loss.item()) with torch.no_grad(): correct = 0 total = 0 for images, labels in test_loader: images = images.reshape(-1, 28 * 28) out = model(images) _, predicted = torch.max(out.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total)) if __name__ == '__main__': main() The software stack: Python 3.7.1 torch 1.0.1 (with Cuda 9.0) Windows 10 64-bit
The error only happens only at the testing step, when you try calculating the accuracy, this might already give you a hint. The training loop runs without a problem. The error is simply that you don't send the images and labels to the GPU at this step. This is your corrected evaluation loop: with torch.no_grad(): correct = 0 total = 0 for images, labels in test_loader: images = images.to(device) # missing line from original code labels = labels.to(device) # missing line from original code images = images.reshape(-1, 28 * 28) out = model(images) _, predicted = torch.max(out.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() BTW you don't need to send all your layers to the GPU separately (at your class __init__()). It's better to just send the whole instantiated model to the gpu at once.
https://stackoverflow.com/questions/55278566/
Pytorch RNN HTML Generation
I’ m stuck for a couple of days trying to make and RNN network to learn a basic HTML template. I tried different approaches and I even overfit on the following data: <!DOCTYPE html> <html> <head> <title>Page Title</title> </head> <body> <h1>This is a Heading</h1> <p>This is a paragraph.</p> </body> </html> Obtaining 100% accuracy on training and validation using Adam Optimizer and CrossEntropyLoss. The problem is that when I try to sample from the network, the results are completely random and I don’t know whats the problem: ..<a<a<a<a<aa<ttp11111b11111b11111111b11b1bbbb<btttn111 My sampling function is the following: def sample_sentence(): words = list() count = 0 modelOne.eval() with torch.no_grad(): # Setup initial input state, and input word (we use "the"). previousWord = torch.LongTensor(1, 1).fill_(trainData.vocabulary['letter2id']['[START]']) hidden = Variable(torch.zeros(6, 1, 100).to(device)) while True: # Predict the next word based on the previous hidden state and previous word. inputWord = torch.autograd.Variable(previousWord.to(device)) predictions, newHidden = modelOne(inputWord, hidden) hidden = newHidden pred = torch.nn.functional.softmax(predictions.squeeze()).data.cpu().numpy().astype('float64') pred = pred/np.sum(pred) nextWordId = np.random.multinomial(1, pred, 1).argmax() if nextWordId == 0: continue words.append(trainData.vocabulary['id2letter'][nextWordId]) # Setup the inputs for the next round. previousWord.fill_(nextWordId) # Keep adding words until the [END] token is generated. if nextWordId == trainData.vocabulary['letter2id']['[END]']: break if count>20000: break count += 1 words.insert(0, '[START]') return words And my network architecture is here: class ModelOne(Model) : def __init__(self, vocabulary_size, hidden_size, num_layers, rnn_dropout, embedding_size, dropout, num_directions): super(Model, self).__init__() self.vocabulary_size = vocabulary_size self.hidden_size = hidden_size self.num_layers = num_layers self.rnn_dropout = rnn_dropout self.dropout = dropout self.num_directions = num_directions self.embedding_size = embedding_size self.embeddings = nn.Embedding(self.vocabulary_size, self.embedding_size) self.rnn = nn.GRU(self.embedding_size, self.hidden_size, num_layers=self.num_layers, bidirectional=True if self.num_directions==2 else False, dropout=self.rnn_dropout, batch_first=True) self.linear = nn.Linear(self.hidden_size*self.num_directions, self.vocabulary_size) def forward(self, paddedSeqs, hidden): batchSequenceLength = paddedSeqs.size(1) batchSize = paddedSeqs.size(0) lengths = paddedSeqs.ne(0).sum(dim=1) embeddingVectors = self.embeddings(paddedSeqs) x = torch.nn.utils.rnn.pack_padded_sequence(embeddingVectors, lengths, batch_first=True) self.rnn.flatten_parameters() x,hid = self.rnn(x, hidden) output, _ = torch.nn.utils.rnn.pad_packed_sequence(x, batch_first=True, padding_value=0, total_length=batchSequenceLength) predictions = self.linear(output) return predictions.view(batchSize, self.vocabulary_size, batchSequenceLength), hid def init_hidden(self, paddedSeqs): hidden = Variable(torch.zeros(self.num_layers*self.num_directions, 1, self.hidden_size).to(device)) return hidden modelOne =ModelOne(vocabulary_size=vocabularySize, hidden_size=100, embedding_size=50, num_layers=3, rnn_dropout=0.0, dropout=0, num_directions=2).to(device) If you have any idea of what needs to be changed, please let me know. I added all the code to github repository here: https://github.com/OverclockRo/HTMLGeneration/blob/SamplingTestTemplate/Untitled.ipynb
First of all for a GRU (RNN) to be efficient, you may need more data to train. Second, it seems that you have a problem with the embedding. It looks like, the mapping vocabulary['id2letter'] does not work, otherwise you would obtain sequences of tags like <head><title><title><title>, instead of p111. EDIT I have trained this character-level GRU network on the html source code of this page for 1700 epochs. And here an example 2000-character excerpt of what it generates: A+Implexementation--nope bande that shoos</td></tr><tr><td class="line-number" value="296"></td><td class="line-content"> </td></tr><tr><td class="line-number" value="1437"></td><td class="line-content"> <span class="html-tag"></a></span></td></tr><tr><td class="line-number" value="755"></td><td class="line-content"> </td></tr><tr><td class="line-number" value="584"></td><td class="line-content"> <span class="html-tag"><a <span class="html-attribute-name">data-controller</span>="<span class="html-attribute-value">footer__menu__link</span>"></span><span class="html-tag"><div <span class="html-attribute-name">data-target</span>="<span class="html-attribute-value">release__line</span>"></span><span class="html-tag"><a <span class="html-attribute-name">class</span>="<span class="html-attribute-value">/hase__version-date</span>"></span></td></tr><tr><td class="line-number" value="174"></td><td class="line-content"><br></td></tr><tr><td class="line-number" value="1315"></td><td class="line-content">Bule and the use the twith a hoas suiecode excess ardates</td></tr><tr><td class="line-number" value="1003"></td><td class="line-content"><span class="html-tag"></a></span></td></tr><tr><td class="line-number" value="129"></td><td class="line-content"> </td></tr><tr><td class="line-number" value="269"></td><td class="line-content"> <span class="html-tag"></ul></span></td></tr><tr><td class="line-number" value="591"></td><td class="line-content"> <span class="html-tag"></div></span></td></tr><tr><td class="line-number" value="553"></td><td class="line-content"> <span class="html-tag"><div <span class="html-attribute-name">href</span>="<a class="html-attribute-value html-external-link" target__link</td><td class="line-content"> </td></tr><tr><td class="line-number" value="103"></td><td class="line-content"> </td></tr><tr><td cla I hope, this helps.
https://stackoverflow.com/questions/55281499/
not able to predict using pytorch [MNIST]
pytorch noob here,trying to learn. link to my notebook: https://gist.github.com/jagadeesh-kotra/412f371632278a4d9f6cb31a33dfcfeb I get validation accuracy of 95%. i use the following to predict: m.eval() testset_predictions = [] for batch_id,image in enumerate(test_dataloader): image = torch.autograd.Variable(image[0]) output = m(image) _, predictated = torch.max(output.data,1) for prediction in predicted: testset_predictions.append(prediction.item()) len(testset_predictions) The problem is i get only 10% accuracy when i submit the result to kaggle competition,which is as good as random prediction.I cant figure out what i'm doing wrong. please help :)
Most probably it is due to a typo; while you want to use the newly created predictated outcomes, you actually use predicted: _, predictated = torch.max(output.data,1) for prediction in predicted: which predicted comes from earlier in your linked code, and it contains predictions from the validation set instead of your test set: #validation # ... for batch_idx, (data, target) in enumerate(val_dataloader): data, target = Variable(data), Variable(target) output = m.forward(data) _, predicted = torch.max(output.data,1) So, you don't even get an error message, because predicted indeed exists - it's just not what you actually want to use... You end up submitting the results for the validation set instead of the test one (it certainly doesn't help that both consist of 10,000 samples), hence you expectedly get a random guessing accuracy of ~ 10%...
https://stackoverflow.com/questions/55282224/
int8 data type in Pytorch
What is the best way to run a quantized model using int8 data types in Pytorch? I know in pytorch I can define tensors as int8; however, when I actually want to use int8, I get: RuntimeError: _thnn_conv2d_forward is not implemented for type torch.CharTensor So I am confused: how can I run a quantized model in pytorch that uses, for instance, int8 when the datatype is not supported for computational blocks such as convolutions? I am using pytorch version 1.0.1.post2.
It depends on what your goals are. If you want to simulate your quantized model: you may stick to the existing float data type and only introduce truncation as needed, i.e.: x = torch.floor(x * 2**8) / 2**8 assuming x is a float tensor. If you want to simulate your quantized model efficiently: then, I am afraid, PyTorch will not be very useful, since the low-level convolutional operator is implemented only for the float type.
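A small helper that wraps that truncation, if you want to simulate a given bit width throughout the model (the bit width and rounding scheme here are only illustrative choices):
import torch

def fake_quantize(x, bits=8):
    scale = 2 ** bits
    return torch.floor(x * scale) / scale   # stay in float, but keep only 2**-bits precision

x = torch.randn(1, 3, 32, 32)
xq = fake_quantize(x)   # feed xq through the usual float convolutions to emulate the quantized model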
https://stackoverflow.com/questions/55289703/
Loss over pixels
During backpropagation, will these cases have a different effect: 1. sum up the loss over all pixels then backpropagate, 2. average the loss over all pixels then backpropagate, 3. backpropagate individually over all pixels. My main doubt is not about the numerical value but about the effect all of these would have.
The difference between no 1 and 2 is basically this: since sum results in a bigger value than mean, the magnitude of the gradients from the sum operation will be bigger, but the direction will be the same. Here's a little demonstration; let's first declare the necessary variables: x = torch.tensor([4,1,3,7],dtype=torch.float32,requires_grad=True) target = torch.tensor([4,2,5,4],dtype=torch.float32) Now let's compute the gradient for x using L2 loss with sum: loss = ((x-target)**2).sum() loss.backward() print(x.grad) This outputs: tensor([ 0., -2., -4., 6.]) Now using mean: (after resetting x grad) loss = ((x-target)**2).mean() loss.backward() print(x.grad) And this outputs: tensor([ 0.0000, -0.5000, -1.0000, 1.5000]) Notice how the latter gradients are exactly 1/4th of those of sum; that's because the tensors here contain 4 elements. About the third option, if I understand you correctly, that's not possible. You cannot backpropagate before aggregating individual pixel errors to a scalar, using sum, mean or anything else.
https://stackoverflow.com/questions/55299261/
Pytorch nn embeddings dimension size?
What is the correct dimension size for nn embeddings in Pytorch? I'm doing batch training. I'm just a little confused with what the dimensions of "self.embeddings" in the code below are supposed to be when I get "shape"? self.embeddings = nn.Embedding(vocab_size, embedding_dim)
The shape of the output of self.embeddings will be [sentence_length, batch_size, embedding_dim], where sentence_length is the length of the inputs in each batch.
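A quick shape check with made-up numbers (vocab of 100, embedding dimension 16, sentence length 12, batch size 4):
import torch
import torch.nn as nn

embeddings = nn.Embedding(100, 16)
batch = torch.randint(0, 100, (12, 4))   # [sentence_length, batch_size] of word indices
print(embeddings(batch).shape)           # torch.Size([12, 4, 16])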
https://stackoverflow.com/questions/55299435/
pip install horovod fails on conda + OSX 10.14
Running pip install horovod in a conda environment with pytorch installed resulted in error: None of TensorFlow, PyTorch, or MXNet plugins were built. See errors above. where the root problem near the top of stdout is ld: library not found for -lstdc++ clang: error: linker command failed with exit code 1 (use -v to see invocation) INFO: Unable to build PyTorch plugin, will skip it.
CFLAGS=-mmacosx-version-min=10.9 pip install horovod, inspired by this seemingly unrelated Horovod issue. This issue thread from pandas has a nice explanation: The compiler standard library defaults to either libstdc++ or libc++, depending on the targeted macOS version - libstdc++ for 10.8 and below, and libc++ for 10.9 and above. This is determined by the environment variable MACOSX_DEPLOYMENT_TARGET or the compiler option -mmacosx-version-min, defaulting to the system version otherwise. When distutils builds extensions on macOS, it sets MACOSX_DEPLOYMENT_TARGET to the version that python was compiled with, even if the host system / Xcode is newer. Recent macOS versions of python have a 64-bit only variant built for 10.9 (python.org), and a universal 64/32-bit variant built for 10.6 (python.org) or 10.7 (conda). I am running the conda universal variant, so distutils targets macOS 10.7, despite my system being 10.14, with Xcode 10 which doesn't install libstdc++.
https://stackoverflow.com/questions/55305018/
Creating a torch tensor from a generator
I attempt to construct a tensor from a generator as follows: >>> torch.tensor(i**2 for i in range(10)) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Could not infer dtype of generator Currently I just do: >>> torch.tensor([i**2 for i in range(10)]) tensor([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81]) Is there a way to avoid needing this intermediate list?
As @blue-phoenox already points out, it is preferred to use the built-in PyTorch functions to create the tensor directly. But if you have to deal with a generator, it can be advisable to use numpy as an intermediate stage. Since PyTorch avoids copying the numpy array, it should be quite performant (compared to the simple list comprehension): >>> import torch >>> import numpy as np >>> torch.from_numpy(np.fromiter((i**2 for i in range(10)), int)) tensor([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
https://stackoverflow.com/questions/55307368/
Loss not decreasing - Pytorch
I am using dice loss for my implementation of a Fully Convolutional Network (FCN) which involves hypernetworks. The model has two inputs and one output, which is a binary segmentation map. The model is updating weights but the loss is constant. It is not even overfitting on only three training examples. I have used other loss functions as well, like dice+binarycrossentropy loss, jaccard loss and MSE loss, but the loss is almost constant. I have also tried almost every activation function, like ReLU, LeakyReLU, Tanh. Moreover, I have to use sigmoid at the output because I need my outputs to be in the range [0,1]. The learning rate is 0.01. Moreover, I have tried different learning rates as well, like 0.0001, 0.001, 0.1. And no matter what loss the training starts at, it always ends up at this value. The following shows the gradients for three training examples, and the overall loss: tensor(0.0010, device='cuda:0') tensor(0.1377, device='cuda:0') tensor(0.1582, device='cuda:0') Epoch 9, Overall loss = 0.9604763123724196, mIOU=0.019766070265581623 tensor(0.0014, device='cuda:0') tensor(0.0898, device='cuda:0') tensor(0.0455, device='cuda:0') Epoch 10, Overall loss = 0.9616242945194244, mIOU=0.01919178702228237 tensor(0.0886, device='cuda:0') tensor(0.2561, device='cuda:0') tensor(0.0108, device='cuda:0') Epoch 11, Overall loss = 0.960331304506822, mIOU=0.01983801422510155 I expect the loss to converge in a few epochs. What should I do?
@Muhammad Hamza Mughal You've got to add the code of at least your forward and train functions for us to pinpoint the issue; @Jatentaki is right, there could be so many things that could mess up an ML/DL code. I moved to pytorch from Keras recently too, and it took some time to get used to it. But here are the things I'd do: 1) As you're dealing with images, try to pre-process them a bit (rotation, normalization, Gaussian noise, etc.). 2) Zero the gradients of your optimizer at the beginning of each batch you fetch, and step the optimizer after you have calculated the loss and called loss.backward(). 3) Add a weight decay term to your optimizer call, typically L2; as you're dealing with convolutional networks, use a decay term of 5e-4 or 5e-5. 4) Add a learning rate scheduler to your optimizer, to change learning rates if there's no improvement over time. We really can't include code in our answers. It's up to the practitioner to scout for how to implement all this stuff. Hope this helps.
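As a rough sketch of points 2) to 4) together, assuming your existing model, criterion and train_loader (the hyperparameter values are only illustrative, not tuned for your problem):
import torch.optim as optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)  # 3) L2 weight decay
scheduler = ReduceLROnPlateau(optimizer, mode='min', patience=5)                     # 4) LR scheduler

for epoch in range(num_epochs):
    epoch_loss = 0.0
    for images, targets in train_loader:
        optimizer.zero_grad()                     # 2) zero gradients at the start of every batch
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()                          # 2) step after backward()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss)                    # reduce LR when the monitored loss stops improving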
https://stackoverflow.com/questions/55311932/
PyTorch 1.0 loading VGGFace2 weights in Python3.7
I am using Python 3.7 and PyTorch 1.0 to develop a face recognition system. I want to use the VGGFace2 Resnet50 pretrained model as described here as a feature extractor. I have downloaded the model and weights. I run the following code as the project readme says: MainModel = imp.load_source('MainModel', 'resnet50_128_pytorch.py') model = torch.load('resnet50_128_pytorch.pth') The first line executed as expected, but on the second line I got 'ascii' codec can't decode byte 0xc3 in position 1124: ordinal not in range(128) I searched Stack Overflow and Google and saw that it may be because the model was saved with Python 2 and loading it from Python 3 creates a problem. Is there any way I can solve this? Thank you.
I found a solution which currently looks like it's working. It basically changes the pickle load with latin1 encoding. from functools import partial import pickle pickle.load = partial(pickle.load, encoding="latin1") pickle.Unpickler = partial(pickle.Unpickler, encoding="latin1") MainModel = imp.load_source('MainModel', 'resnet50_ft_pytorch.py') model = torch.load('resnet50_ft_pytorch.pth', map_location=lambda storage, loc: storage, pickle_module=pickle)
https://stackoverflow.com/questions/55312396/
BatchNorm1d needs 2d input?
I want to fix problem in PyTorch. I wrote the following code that is learning sine functions as tutorial. import torch from torch import nn from torch import optim from torch.autograd import Variable as V from torch.utils.data import TensorDataset, DataLoader import numpy as np # y=sin(x1) numTrain = 512 numTest = 128 noiseScale = 0.01 PI2 = 3.1415 * 2 X_train = np.random.rand(numTrain,1) * PI2 y_train = np.sin(X_train) + np.random.randn(numTrain,1) * noiseScale + 1.5 X_test = np.random.rand(numTest,1) * PI2 y_test = np.sin(X_test) + np.random.randn(numTest,1) * noiseScale # Construct DataSet X_trainT = torch.Tensor(X_train) y_trainT = torch.Tensor(y_train) X_testT = torch.Tensor(X_test) y_testT = torch.Tensor(y_test) ds_train = TensorDataset(X_trainT, y_trainT) ds_test = TensorDataset(X_testT, y_testT) # Construct DataLoader loader_train = DataLoader(ds_train, batch_size=64, shuffle=True) loader_test = DataLoader(ds_test, batch_size=64, shuffle=False) # Construct network net = nn.Sequential( nn.Linear(1,10), nn.ReLU(), nn.BatchNorm1d(10), nn.Linear(10,5), nn.ReLU(), nn.BatchNorm1d(5), nn.Linear(5,1), ) optimizer = optim.Adam(net.parameters()) loss_fn = nn.SmoothL1Loss() # Training losses = [] net.train() for epoc in range(100): for data, target in loader_train: y_pred = net(data) loss = loss_fn(target,y_pred) optimizer.zero_grad() loss.backward() optimizer.step() losses.append(loss.data) # evaluation %matplotlib inline from matplotlib import pyplot as plt #plt.plot(losses) plt.scatter(X_train, y_train) net.eval() sinsX = [] sinsY = [] for t in range(128): x = t/128 * PI2 output = net(V(torch.Tensor([x]))) sinsX.append(x) sinsY.append(output.detach().numpy()) plt.scatter(sinsX,sinsY) Training is done without error, But the next line caused an error, "expected 2D or 3D input (got 1D input)" output = net(V(torch.Tensor([x]))) This error doesn't occur if it is without BatchNorm1d(). I feel strange because the input is 1D. How to fix it? Thanks. Update: How did I fix arr = np.array([x]) output = net(V(torch.Tensor(arr[None,...])))
When working with 1D signals, PyTorch actually expects a 2D tensor: the first dimension is the "mini-batch" dimension. Therefore, you should evaluate your net on a batch containing one 1D signal: output = net(V(torch.Tensor([x])[None, ...])) Make sure you set your net to "eval" mode before evaluating it: net.eval()
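A tiny illustration of the shape requirement (10 features, arbitrary numbers):
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(10).eval()
# bn(torch.randn(10))                  # fails: expected 2D or 3D input (got 1D input)
print(bn(torch.randn(1, 10)).shape)    # torch.Size([1, 10]) -- adding the mini-batch dimension fixes it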
https://stackoverflow.com/questions/55320883/
How to clear CUDA memory in PyTorch
I am trying to get the output of a neural network which I have already trained. The input is an image of the size 300x300. I am using a batch size of 1, but I still get a CUDA error: out of memory error after I have successfully got the output for 25 images. I tried torch.cuda.empty_cache(), but this still doesn't seem to solve the problem. Code: device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") train_x = torch.tensor(train_x, dtype=torch.float32).view(-1, 1, 300, 300) train_x = train_x.to(device) dataloader = torch.utils.data.DataLoader(train_x, batch_size=1, shuffle=False) right = [] for i, left in enumerate(dataloader): print(i) temp = model(left).view(-1, 1, 300, 300) right.append(temp.to('cpu')) del temp torch.cuda.empty_cache() This for loop runs for 25 times every time before giving the memory error. Every time, I am sending a new image in the network for computation. So, I don't really need to store the previous computation results in the GPU after every iteration in the loop. Is there any way to achieve this?
I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem. Basically, what PyTorch does is that it creates a computational graph whenever I pass the data through my network and stores the computations on the GPU memory, in case I want to calculate the gradient during backpropagation. But since I only wanted to perform a forward propagation, I simply needed to specify torch.no_grad() for my model. Thus, the for loop in my code could be rewritten as: for i, left in enumerate(dataloader): print(i) with torch.no_grad(): temp = model(left).view(-1, 1, 300, 300) right.append(temp.to('cpu')) del temp torch.cuda.empty_cache() Specifying no_grad() to my model tells PyTorch that I don't want to store any previous computations, thus freeing my GPU space.
https://stackoverflow.com/questions/55322434/
Unity ML-Agents Running Very Slowly
Using the Python API, whether I run using a build or in the Editor the simulation is much slower than when using the provided mlagents-learn methods. I'm running something similar to this, using a PyTorch implementation of DDPG and CUDA 9.0. Is this expected behaviour?
Go to the Academy's properties and make sure that the value of Time Scale in the Inference Configuration equals the one in the Training Configuration. For other info check the official documentation of ML-Agents at Learning-Environment-Design-Academy
https://stackoverflow.com/questions/55324790/
Visualizing the output of intermediate layers of cnn in pytorch
I'm trying to visualize the output of the intermediate layers of the VGG19 network, from the torchvision module, specifically the layer, conv4_2. I've extracted the output in a tensor of shape [1, 512, 50, 50]. But how do I visualize an image with 512 channels?
Feature visualization is a very complex subject. If you want to have a visual idea of what each filter (of the 512) of the trained net is responding to, you can use methods like these: propagating gradients from conv4_2's output to the input image, and changing the image to maximize the feature response. You will have to work your way through regularization etc. to get smooth, visually pleasing results. Alternatively, you can see the specific responses of each filter (out of the 512) to each location (overlapping receptive fields). In that case you have 512 different 50-by-50 intensity images, each showing the response map of each neuron to the input image.
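A very rough sketch of the first idea (gradient ascent on the input), with no regularization and an arbitrary filter index; layer index 21 corresponds to conv4_2 in torchvision's VGG19 feature stack:
import torch
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()
img = torch.rand(1, 3, 224, 224, requires_grad=True)      # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    act = img
    for i, layer in enumerate(vgg):
        act = layer(act)
        if i == 21:                                        # stop at conv4_2
            break
    loss = -act[0, 42].mean()                              # maximize the mean response of filter 42
    loss.backward()
    optimizer.step()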
https://stackoverflow.com/questions/55335583/
Why there are different output between model.forward(input) and model(input)
I'm using pytorch to build a simple model like VGG16, and I have overloaded the function forward in my model. I found everyone tends to use model(input) to get the output rather than model.forward(input), and I am interested in the difference between them. I tried to input the same data, but the results are different. I'm confused. I printed the layer weights before I input the data, and the weights are not changed; I know that when we use model(input) it uses the __call__ function, and this function will call model.forward. vgg = VGG() vgg.double() for layer in vgg.modules(): if isinstance(layer,torch.nn.Linear): print(layer.weight) print(" use model.forward(input) ") result = vgg.forward(array) for layer in vgg.modules(): if isinstance(layer,torch.nn.Linear): print(layer.weight) print(" use model(input) ") result_2 = vgg(array) print(result) print(result_2) output: Variable containing:1.00000e-02 * -0.2931 0.6716 -0.3497 -2.0217 -0.0764 1.2162 1.4983 -1.2881 [torch.DoubleTensor of size 1x8] Variable containing: 1.00000e-02 * 0.5302 0.4494 -0.6866 -2.1657 -0.9504 1.0211 0.8308 -1.1665 [torch.DoubleTensor of size 1x8]
model.forward just calls the forward operations as you mention, but __call__ does a little extra. If you dig into the code of the nn.Module class you will see that __call__ ultimately calls forward but internally handles the forward or backward hooks and manages some states that pytorch allows. When calling a simple model, like just an MLP, it may not be really needed, but more complex models, such as ones with spectral normalization layers, have hooks, and therefore you should use the model(.) signature as much as possible unless you explicitly just want to call model.forward. Also see Calling forward function without .forward() In this case, however, the difference may be due to some dropout layer; you should call vgg.eval() to make sure all the stochasticity in the network is turned off before comparing the outputs.
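A quick way to test that explanation with the model from the question: once the stochastic layers are disabled, the two call styles should agree.
vgg.eval()                        # puts dropout / batch-norm layers into deterministic mode
with torch.no_grad():
    r1 = vgg(array)
    r2 = vgg.forward(array)
print(torch.allclose(r1, r2))     # expected to print True after eval()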
https://stackoverflow.com/questions/55338756/
How to do parallel processing in pytorch
I am working on a deep learning problem. I am solving it using pytorch. I have two GPU's which are on the same machine (16273MiB,12193MiB). I want to use both the GPU's for my training (video dataset). I get a warning: There is an imbalance between your GPUs. You may want to exclude GPU 1 which has less than 75% of the memory or cores of GPU 0. You can do so by setting the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES environment variable. warnings.warn(imbalance_warn.format(device_ids[min_pos], device_ids[max_pos])) I also get an error: raise TypeError('Broadcast function not implemented for CPU tensors') TypeError: Broadcast function not implemented for CPU tensors if __name__ == '__main__': opt.scales = [opt.initial_scale] for i in range(1, opt.n_scales): opt.scales.append(opt.scales[-1] * opt.scale_step) opt.arch = '{}-{}'.format(opt.model, opt.model_depth) opt.mean = get_mean(opt.norm_value) opt.std = get_std(opt.norm_value) print("opt",opt) with open(os.path.join(opt.result_path, 'opts.json'), 'w') as opt_file: json.dump(vars(opt), opt_file) torch.manual_seed(opt.manual_seed) model, parameters = generate_model(opt) #print(model) pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad) print("Total number of trainable parameters: ", pytorch_total_params) # Define Class weights if opt.weighted: print("Weighted Loss is created") if opt.n_finetune_classes == 2: weight = torch.tensor([1.0, 3.0]) else: weight = torch.ones(opt.n_finetune_classes) else: weight = None criterion = nn.CrossEntropyLoss() if not opt.no_cuda: criterion = nn.DataParallel(criterion.cuda()) if opt.no_mean_norm and not opt.std_norm: norm_method = Normalize([0, 0, 0], [1, 1, 1]) elif not opt.std_norm: norm_method = Normalize(opt.mean, [1, 1, 1]) else: norm_method = Normalize(opt.mean, opt.std) train_loader = torch.utils.data.DataLoader( training_data, batch_size=opt.batch_size, shuffle=True, num_workers=opt.n_threads, pin_memory=True) train_logger = Logger( os.path.join(opt.result_path, 'train.log'), ['epoch', 'loss', 'acc', 'precision','recall','lr']) train_batch_logger = Logger( os.path.join(opt.result_path, 'train_batch.log'), ['epoch', 'batch', 'iter', 'loss', 'acc', 'precision', 'recall', 'lr']) if opt.nesterov: dampening = 0 else: dampening = opt.dampening optimizer = optim.SGD( parameters, lr=opt.learning_rate, momentum=opt.momentum, dampening=dampening, weight_decay=opt.weight_decay, nesterov=opt.nesterov) # scheduler = lr_scheduler.ReduceLROnPlateau( # optimizer, 'min', patience=opt.lr_patience) if not opt.no_val: spatial_transform = Compose([ Scale(opt.sample_size), CenterCrop(opt.sample_size), ToTensor(opt.norm_value), norm_method ]) print('run') for i in range(opt.begin_epoch, opt.n_epochs + 1): if not opt.no_train: adjust_learning_rate(optimizer, i, opt.lr_steps) train_epoch(i, train_loader, model, criterion, optimizer, opt, train_logger, train_batch_logger) I have also made changes in my train file: model = nn.DataParallel(model(),device_ids=[0,1]).cuda() outputs = model(inputs) It does not seem to work properly and is giving error. Please advice, I am new to pytorch. Thanks
As mentioned in this link, you have to do model.cuda() before passing it to nn.DataParallel.

net = nn.DataParallel(model.cuda(), device_ids=[0,1])

https://github.com/pytorch/pytorch/issues/17065
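For context, a minimal sketch of one common arrangement (assuming model, inputs and labels are defined as in the question; this is not the exact fix from the linked issue):

model = model.cuda()                                   # move weights to GPU first
model = torch.nn.DataParallel(model, device_ids=[0, 1])
criterion = torch.nn.CrossEntropyLoss().cuda()         # the loss itself does not need DataParallel
outputs = model(inputs.cuda())
loss = criterion(outputs, labels.cuda())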
https://stackoverflow.com/questions/55343893/
How to make a dataset from video datasets(tensorflow first)
everyone. Now I have an object classification task, and I have a dataset containing a large number of videos. In every video, some frames(not every frame, about 160 thousand frames) have its labels, since a frame may have multiple objects. I have some confusion about creating the dataset. My idea is to convert videos to frames firstly, then every frame only with labels will be made as tfrecord or hdf5 format. Finally, I would write every frame's path into csv files (training and validation) using for my task. My question is : 1. Is there efficient enough(tfrecord or hdf5)? Should I preprocess every frame such as compression for save the storage space before creating tfrecord or hdf5 files? 2. Is there a way to handle the video dataset directly in tensorflow or pytorch? I want to find an efficient and conventional way to handle video datasets. Really looking forward to every answer.
I am no TensorFlow guy, so my answer won't cover that, sorry. Video formats generally gain compression at the cost of longer random-access times thanks to exploiting temporal correlations in the data. It makes sense because one usually accesses video frames sequentially, but if your access is entirely random I suggest you convert to hdf5. Otherwise, if you access sub-sequences of video, it may make sense to stay with video formats. PyTorch does not have any "blessed" approaches to video AFAIK, but I use imageio to read videos and seek particular frames. A short wrapper makes it follow the PyTorch Dataset API. The code is rather simple but has a caveat, which is necessary to allow using it with a multiprocessing DataLoader.

import imageio, torch

class VideoDataset:
    def __init__(self, path):
        self.path = path
        # explained in __getitem__
        self._reader = None
        reader = imageio.get_reader(self.path, 'ffmpeg')
        self._length = reader.get_length()

    def __getitem__(self, ix):
        # Below is a workaround to allow using `VideoDataset` with
        # `torch.utils.data.DataLoader` in multiprocessing mode.
        # `DataLoader` sends copies of the `VideoDataset` object across
        # processes, which sometimes leads to bugs, as `imageio.Reader`
        # does not support being serialized. Since our `__init__` set
        # `self._reader` to None, it is safe to serialize a
        # freshly-initialized `VideoDataset` and then, thanks to the if
        # below, `self._reader` gets initialized independently in each
        # worker process.
        if self._reader is None:
            self._reader = imageio.get_reader(self.path, 'ffmpeg')

        # this is a numpy ndarray in [h, w, channel] format
        frame = self._reader.get_data(ix)

        # PyTorch standard layout [channel, h, w]
        return torch.from_numpy(frame.transpose(2, 0, 1))

    def __len__(self):
        return self._length

This code can be adapted to support multiple video files as well as to output the labels as you would like to have them.
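For completeness, a small usage sketch (not from the original answer; the file path is hypothetical) showing the wrapper plugged into a multiprocessing DataLoader:

from torch.utils.data import DataLoader

dataset = VideoDataset('path/to/video.mp4')   # hypothetical path
loader = DataLoader(dataset, batch_size=8, num_workers=4)
for batch in loader:
    print(batch.shape)   # [8, channels, h, w], uint8 frames from imageio
    break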
https://stackoverflow.com/questions/55353187/
Pooling over channels in pytorch
In tensorflow, I can pool over the depth dimension which would reduce the channels and leave the spatial dimensions unchanged. I'm trying to do the same in pytorch but the documentation seems to say pooling can only be done over the height and width dimensions. Is there a way I can pool over channels in pytorch? I've a tensor of shape [1,512,50,50] I'm trying to use pooling to bring the number of channels down to 3. I saw this question but did not find the answer helpful.
The easiest way to reduce the number of channels is using a 1x1 kernel:

import torch

x = torch.rand(1, 512, 50, 50)
conv = torch.nn.Conv2d(512, 3, 1)
y = conv(x)
print(y.size())  # torch.Size([1, 3, 50, 50])

If you really need to perform pooling along the channels dimension due to some reason, you may want to permute the dimensions so that the channels dimension is swapped with some other dimension (e.g. width). This idea was referenced here.
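If you do want an actual pooling operation over channels, a minimal sketch of the permute idea follows (the kernel size of 170 is just one choice that maps 512 channels down to 3):

import torch
import torch.nn.functional as F

x = torch.rand(1, 512, 50, 50)

# move channels last, flatten the spatial positions, max-pool along channels
x_perm = x.permute(0, 2, 3, 1).reshape(-1, 1, 512)          # [2500, 1, 512]
pooled = F.max_pool1d(x_perm, kernel_size=170, stride=170)  # [2500, 1, 3]
pooled = pooled.reshape(1, 50, 50, 3).permute(0, 3, 1, 2)   # [1, 3, 50, 50]
print(pooled.shape)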
https://stackoverflow.com/questions/55355669/
Character level CNN - 1D or 2D
I want to implement a character-level CNN in Pytorch. My input has 4 dimensions: (batch_size, seq_length, padded_character_length, embedding_dim) I'm wondering whether I should merge two dimensions and use a Conv1D-layer or instead use a Conv2D-layer on the existing dimensions. Given the dimensions of the input both would technically work fine, I also have seen implementations for both versions. So I'm wondering which method to prefer. Does one of the two methods have particular advantages over the other?
I agree with Venkatesh that 1D might make more sense for your implementation. Instead of merging, I typically use the TimeDistributed layers found in Keras. This takes one layer and applies it across the time dimension. The advantage is that you keep the features from each dimension separate until you want to merge them. If you are using padding (as you mentioned), then it makes more sense to apply the same layer across the time dimension instead of merging the dimensions and creating awkward padding space in between actual characters. TimeDistributed layers solve this. I googled for just a moment and found someone who has tried implementing this feature in PyTorch; that might at least get you started: PyTorch TimeDistributed. To expand on my usage a bit: my latest model has 5 twelve-hour periods; each period has sporadic activity, so it is padded to reach a standard length of 30, and the final input shape is (?, 5, 30, embedding_size). I generate features within a single period using a TimeDistributed 1D CNN, then I max-pool those features and concatenate to create a new shape of (?, 5, n_feats), where there are now 5 sets of feature maps. I then run that data through a different 1D CNN layer which looks across the 5 twelve-hour periods. The padding in each period is independent of the others, so I can't simply use a 2D CNN, as elements at the same index wouldn't represent the same time across periods. Edit: I think the Keras implementation is a little more sophisticated, but the wrapper idea should be close. Their documentation says "This wrapper applies a layer to every temporal slice of an input." If getting there requires merging and then restoring the shape afterwards, there may be some considerations around the feature map. For example, if the filter size is 2, the last item in the first feature map (after reshaping) will mix the last feature of one time slice with the first feature of the next. Here is one more link to a discussion around this capability in PyTorch that might be helpful.
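For reference, a minimal TimeDistributed-style wrapper in PyTorch is essentially a reshape around the wrapped layer; this is a sketch of the idea, not the Keras implementation:

import torch
import torch.nn as nn

class TimeDistributed(nn.Module):
    """Apply `module` independently to every step along dim 1 (time)."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, x):
        # x: (batch, time, *features) -> merge batch and time, apply, restore
        b, t = x.shape[0], x.shape[1]
        y = self.module(x.reshape(b * t, *x.shape[2:]))
        return y.reshape(b, t, *y.shape[1:])

# e.g. a 1D conv over characters applied separately to each time slice
td = TimeDistributed(nn.Conv1d(8, 16, kernel_size=3, padding=1))
out = td(torch.randn(4, 5, 8, 30))   # (batch=4, time=5, channels=8, len=30)
print(out.shape)                     # torch.Size([4, 5, 16, 30])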
https://stackoverflow.com/questions/55357600/
installed pytorch1.0.1 for OS X with pip3 but cannot import, what can I do?
I have already installed pytorch for MacOS 10.14 with pip3, but I can not import it in the python script. What should I do? System: MacOS 10.14 Python3: v3.7 ➜ ~ pip3 list Package Version ----------- ----------- numpy 1.16.2 Pillow 5.4.1 pip 18.1 pycairo 1.17.1 pygobject 3.28.3 setuptools 40.5.0 six 1.12.0 torch 1.0.1.post2 torchvision 0.2.2.post3 virtualenv 16.1.0 wheel 0.32.2 ➜ ~ python3 Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 26 2018, 23:26:24) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import torch Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torch' >>>
To expand upon my comment: There's no strict guarantee that a pip3 wrapper script somewhere on your system is related to the pip package manager/module for your python3 binary. That wrapper may be created by a different installation of Python – maybe your system's own, maybe something else. (You can see where the script is located with which pip3 and see which interpreter it uses with less $(which pip3) and looking at the shebang line at the top.) Each version of Python you have installed has its own site-packages directory, which contains the globally (as far as that version is concerned) installed packages. Fortunately, pip can be run exactly equally as the wrapper script would with the -m switch, so to be sure Torch and Torchvision get installed into your python3 (which appears to be Python 3.7.0 at this time), python3 -m pip install torch torchvision should do the trick. However, globally (well, interpreter-globally, as discussed above) installed packages should be avoided, since you can easily get into hairy conflicts when you're working on multiple projects. You should instead use virtualenvs to separate your library installations from each other – the venv module is included in Python these days, and the official documentation has a guide on it. (Other options are pipenv and poetry, but it's worth knowing the lower-level tooling.)
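One quick way to check which interpreter and which torch installation you are actually using is a small diagnostic sketch like this (it will fail on the import line if torch is not installed for that interpreter):

import sys
print(sys.executable)        # the interpreter running this script
import torch                 # raises ModuleNotFoundError if torch is missing here
print(torch.__version__, torch.__file__)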
https://stackoverflow.com/questions/55359707/
Why the pytorch implementation is so inefficient?
I have implemented a paper about a CNN architecture in both Keras and Pytorch but keras implementation is much more efficient it takes 4 gb of gpu for training with 50000 samples and 10000 validation samples but pytorch one takes all the 12 gb of gpu and i cant even use a validation set ! Optimizer for both of them is sgd with momentum and same settings for both. more info about the paper:[architecture]:https://github.com/Moeinh77/Lightweight-Deep-Convolutional-Network-for-Tiny-Object-Recognition/edit/master/train.py pytorch code : class SimpleCNN(torch.nn.Module): def __init__(self): super(SimpleCNN, self).__init__() self.conv2d_11 = torch.nn.Conv2d(3, 64, kernel_size = 3, stride = 1, padding = 1) self.conv2d_12 = torch.nn.Conv2d(64, 64, kernel_size = 3, stride = 1, padding = 1) self.conv2d_21 = torch.nn.Conv2d(64, 128, kernel_size = 3, stride = 1, padding = 1) self.conv2d_22 = torch.nn.Conv2d(128, 128, kernel_size = 3, stride = 1, padding = 1) self.conv2d_31 = torch.nn.Conv2d(128, 256, kernel_size = 3, stride = 1, padding = 1) self.conv2d_32 = torch.nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1) self.conv2d_33 = torch.nn.Conv2d(256, 256, kernel_size = 3, stride = 1, padding = 1) self.conv2d_41 = torch.nn.Conv2d(256, 512, kernel_size = 3, stride = 1, padding = 1) self.conv2d_42 = torch.nn.Conv2d(512, 512, kernel_size = 3, stride = 1, padding = 1) self.conv2d_51 = torch.nn.Conv2d(512, 512, kernel_size = 3, stride = 1, padding = 1) self.Batchnorm_1=torch.nn.BatchNorm2d(64) self.Batchnorm_2=torch.nn.BatchNorm2d(128) self.Batchnorm_3=torch.nn.BatchNorm2d(256) self.Batchnorm_4=torch.nn.BatchNorm2d(512) self.dropout2d_1=torch.nn.Dropout2d(p=0.3) self.dropout2d_2=torch.nn.Dropout2d(p=0.4) self.dropout2d_3=torch.nn.Dropout2d(p=0.5) self.dropout1d=torch.nn.Dropout(p=0.5) self.maxpool2d = torch.nn.MaxPool2d(kernel_size = 2, stride = 2, padding = 0) self.avgpool2d = torch.nn.AvgPool2d(kernel_size = 2, stride = 2, padding = 0) self.fc = torch.nn.Linear(512, 10) def forward(self, x): ############################# Phase 1 #print(x.size()) x = F.relu(self.conv2d_11(x)) x = self.dropout2d_1(x) #rate =0.3 x = self.Batchnorm_1(x) #input 64 #print(x.size()) x = F.relu(self.conv2d_12(x)) x = self.dropout2d_1(x) #rate=0.3 x = self.Batchnorm_1(x) #input 64 #print(x.size()) x = self.maxpool2d(x) #print(x.size()) ############################# Phase 2 x = F.relu(self.conv2d_21(x)) x = self.dropout2d_1(x) #rate=0.3 x = self.Batchnorm_2(x) #input 128 #print(x.size()) x = F.relu(self.conv2d_22(x)) x = self.dropout2d_1(x) #rate=0.3 x = self.Batchnorm_2(x) #input 128 #print(x.size()) x = self.maxpool2d(x) #print(x.size()) ############################# Phase 3 x = F.relu(self.conv2d_31(x)) x = self.dropout2d_2(x) #rate=0.4 x = self.Batchnorm_3(x) #input 256 #print(x.size()) x = F.relu(self.conv2d_32(x)) x = self.dropout2d_2(x) #rate=0.4 x = self.Batchnorm_3(x) #input 256 #print(x.size()) x = F.relu(self.conv2d_33(x)) x = self.dropout2d_2(x) #rate=0.4 x = self.Batchnorm_3(x) #input 256 #print(x.size()) x = self.maxpool2d(x) #print(x.size()) ############################# Phase 4 x = F.relu(self.conv2d_41(x)) x = self.dropout2d_2(x) x = self.Batchnorm_4(x) #print(x.size()) x = F.relu(self.conv2d_42(x)) x = self.dropout2d_2(x) x = self.Batchnorm_4(x) #print(x.size()) x = self.maxpool2d(x) #print(x.size()) ############################# Phase 5 x = F.relu(self.conv2d_51(x)) x = self.dropout2d_3(x) x = self.Batchnorm_4(x) #print(x.size()) x = self.avgpool2d(x) #print(x.size()) x = x.view(x.size(0), -1) 
#print(x.size()) x = self.dropout1d(x) x = F.relu(self.fc(x)) x = self.dropout1d(x) #print(x.size()) x = F.softmax(x) ############################### return(x) import time from torch.optim.lr_scheduler import ReduceLROnPlateau def trainNet(model, batch_size, n_epochs, learning_rate): lr=learning_rate #Print all of the hyperparameters of the training iteration: print("======= HYPERPARAMETERS =======") print("Batch size=", batch_size) print("Epochs=", n_epochs) print("Base learning_rate=", learning_rate) print("=" * 30) #Get training data n_batches = len(train_loader) #Time for printing training_start_time = time.time() #Loss function" loss = torch.nn.CrossEntropyLoss() optimizer = createOptimizer(model, lr) scheduler = ReduceLROnPlateau(optimizer, 'min' ,patience=3,factor=0.9817 ,verbose=True,) #Loop for n_epochs for epoch in range(n_epochs): #save the weightsevery 10 epochs if epoch % 10 == 0 : torch.save(model.state_dict(), 'model.ckpt') #print('learning rate : {:.3f} '.format(lr)) #Create our loss and optimizer functions running_loss = 0.0 print_every = n_batches // 10 start_time = time.time() total_train_loss = 0 total_train_acc = 0 epoch_time = 0 for i, data in enumerate(train_loader, 0): #free up the cuda memory inputs=None labels=None inputs, labels = data inputs, labels = Variable(inputs.to(device)), Variable(labels.to(device)) optimizer.zero_grad() outputs = model(inputs) score, predictions = torch.max(outputs.data, 1) acc = (labels==predictions).sum() total_train_acc += acc loss_size = loss(outputs, labels) loss_size.backward() optimizer.step() running_loss += loss_size.item() total_train_loss += loss_size.item() #Print every 10th batch of an epoch if (i + 1) % (print_every + 1) == 0: print("Epoch {}, {:d} % \t | train_loss: {:.3f} | train_acc:{}% | took: {:.2f}s".format( epoch+1, int(100 * (i+1) / n_batches), running_loss / print_every ,int(acc), time.time() - start_time)) epoch_time += (time.time() - start_time) #Reset running loss and time running_loss = 0.0 start_time = time.time() scheduler.step(total_train_loss) torch.cuda.empty_cache() #At the end of the epoch, do a pass on the validation set total_val_loss = 0 for inputs, labels in val_loader: #Wrap tensors in Variables inputs, labels = Variable(inputs.to(device)), Variable(labels.to(device)) #Forward pass val_outputs = model(inputs) val_loss_size = loss(val_outputs, labels) total_val_loss += val_loss_size.item() print("-"*30) print("Train loss = {:.2f} | Train acc = {:.1f}% | Val loss={:.2f} | took: {:.2f}s".format( total_train_loss / len(train_loader),total_train_acc/ len(train_loader) ,total_val_loss/len(val_loader),epoch_time)) print("="*60) print("Training finished, took {:.2f}s".format(time.time() - training_start_time)) CNN = SimpleCNN().to(device) CNN.eval() trainNet(CNN, batch_size=64, n_epochs=250, learning_rate=0.1) Keras: from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Flatten,Activation from tensorflow.keras.layers import Conv2D, MaxPool2D,BatchNormalization,GlobalAveragePooling2D model = Sequential() ##################################################### # Phase 1 model.add(Conv2D(64,(3,3),input_shape=(32,32,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.3)) model.add(BatchNormalization()) #(32,32,3) model.add(Conv2D(64,(3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.3)) model.add(BatchNormalization()) #(32,32,3) model.add(MaxPool2D((2,2))) #(16,16,3) 
##################################################### #Phase 2 model.add(Conv2D(128, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.3)) model.add(BatchNormalization()) #(16,16,3) model.add(Conv2D(128, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.3)) model.add(BatchNormalization()) #(16,16,3) model.add(MaxPool2D((2,2),padding='same')) #(8,8,3) ##################################################### #Phase 3 model.add(Conv2D(256, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.4)) model.add(BatchNormalization()) #(8,8,3) model.add(Conv2D(256, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.4)) model.add(BatchNormalization()) #(8,8,3) model.add(Conv2D(256, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.4)) model.add(BatchNormalization()) #(8,8,3) model.add(MaxPool2D((2,2))) #(4,4,3) ##################################################### #Phase 4 model.add(Conv2D(512, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.4)) model.add(BatchNormalization()) #(4,4,3) model.add(Conv2D(512, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.4)) model.add(BatchNormalization()) #(4,4,3) model.add(MaxPool2D((2,2))) #(2,2,3) ##################################################### #Phase 5 model.add(Conv2D(512, (3,3),padding='same')) model.add(Activation('relu')) model.add(Dropout(rate=0.5)) model.add(BatchNormalization()) #(2,2,3) model.add(GlobalAveragePooling2D(data_format='channels_last')) model.add(Flatten()) model.add(Dropout(rate=0.5)) model.add(Dense(10,activation='relu')) model.add(Dropout(rate=0.5)) model.add(Dense(10, activation='softmax')) model.compile(optimizer=sgd_optimizer,loss='categorical_crossentropy',metrics=['accuracy']) history=model.fit(x=x_train,y=y_train,batch_size=64, epochs=250,verbose=1,callbacks=[checkpoint],validation_data=(x_test,y_test))
Edit: on a closer look, acc doesn't seem to require gradient, so this paragraph probably doesn't apply. It looks like the most significant issue is that total_train_acc accumulates history across the training loop (see https://pytorch.org/docs/stable/notes/faq.html for details). Changing total_train_acc += acc to total_train_acc += acc.item() should fix this. Another thing: you should use torch.no_grad() for the validation loop, so that no computation graph (and the memory it holds) is kept around for the validation forward passes. Not really about speed, but model.train() and model.eval() should be used for training/evaluation to make the batchnorm and dropout layers work in the correct mode.
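A minimal sketch of the suggested changes, assuming loss, model, device and val_loader are defined as in the question:

model.train()
# ... training loop as before, but accumulate plain Python numbers:
# total_train_acc += acc.item()

model.eval()                          # dropout / batchnorm switch to eval behaviour
total_val_loss = 0.0
with torch.no_grad():                 # no graph is built for validation passes
    for inputs, labels in val_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        val_outputs = model(inputs)
        total_val_loss += loss(val_outputs, labels).item()
model.train()                         # back to training mode for the next epoch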
https://stackoverflow.com/questions/55362722/
How to save grayscale image in Pytorch?
I want to save grayscale image in Pytorch, each image has four gray values, 0, 60, 120 and 180. I try the following way to save images, but the saved image is not I expected. for i, (inputs) in enumerate(test_generator): pred = modelPl(inputs.float()).detach() fig,ax = plt.subplots(1,1,figsize = (5,5)) ax.imshow(pred[0,:,:], cmap = "gray") print(pred.shape) torchvision.utils.save_image(pred, saveTestPath + 'img_{0}.png'.format(i)) Output: torch.Size([400, 400]) Expected image: However, the saved picture is not correct as follows:
It might be that torchvision.utils.save_image requires values to be in range 0 to 1. Your images have values which are greater than 1 and hence the problem. You can check this by dividing the tensor by 255 (or some appropriate number). You can also try to set normalize=True and see if it can automatically normalize the data for you.
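A small sketch of both options, assuming pred is the [400, 400] tensor from the question:

import torchvision

img = pred.float().unsqueeze(0)                                     # [1, 400, 400], one channel
torchvision.utils.save_image(img / 255.0, 'img.png')                # manual scaling into [0, 1]
torchvision.utils.save_image(img, 'img_norm.png', normalize=True)   # let save_image rescale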
https://stackoverflow.com/questions/55368465/
PyTorch doesn't seem to be optimizing correctly
I have posted this question on Data Science StackExchange site since StackOverflow does not support LaTeX. Linking it here because this site is probably more appropriate. The question with correctly rendered LaTeX is here: https://datascience.stackexchange.com/questions/48062/pytorch-does-not-seem-to-be-optimizing-correctly The idea is that I am considering sums of sine waves with different phases. The waves are sampled with some sample rate s in the interval [0, 2pi]. I need to select phases in such a way, that the sum of the waves at any sample point is minimized. Below is the Python code. Optimization does not seem to be computed correctly. import numpy as np import torch def phaseOptimize(n, s = 48000, nsteps = 1000): learning_rate = 1e-3 theta = torch.zeros([n, 1], requires_grad=True) l = torch.linspace(0, 2 * np.pi, s) t = torch.stack([l] * n) T = t + theta for jj in range(nsteps): loss = T.sin().sum(0).pow(2).sum() / s loss.backward() theta.data -= learning_rate * theta.grad.data print('Optimal theta: \n\n', theta.data) print('\n\nMaximum value:', T.sin().sum(0).abs().max().item()) Below is a sample output. phaseOptimize(5, nsteps=100) Optimal theta: tensor([[1.2812e-07], [1.2812e-07], [1.2812e-07], [1.2812e-07], [1.2812e-07]], requires_grad=True) Maximum value: 5.0 I am assuming this has something to do with broadcasting in T = t + theta and/or the way I am computing the loss function. One way to verify that optimization is incorrect, is to simply evaluate the loss function at random values for the array $\theta_1, \dots, \theta_n$, say uniformly distributed in $[0, 2\pi]$. The maximum value in this case is almost always much lower than the maximum value reported by phaseOptimize(). Much easier in fact is to consider the case with $n = 2$, and simply evaluate at $\theta_1 = 0$ and $\theta_2 = \pi$. In that case we get: phaseOptimize(2, nsteps=100) Optimal theta: tensor([[2.8599e-08], [2.8599e-08]]) Maximum value: 2.0 On the other hand, theta = torch.FloatTensor([[0], [np.pi]]) l = torch.linspace(0, 2 * np.pi, 48000) t = torch.stack([l] * 2) T = t + theta T.sin().sum(0).abs().max().item() produces 3.2782554626464844e-07
You have to move computing T inside the loop, or it will always have the same constant value, thus constant loss. Another thing is to initialize theta to different values at indices, otherwise because of the symmetric nature of the problem the gradient is the same for every index. Another thing is that you need to zero the gradient, because backward just accumulates them. This seems to work:

def phaseOptimize(n, s = 48000, nsteps = 1000):
    learning_rate = 1e-1
    theta = torch.zeros([n, 1], requires_grad=True)
    theta.data[0][0] = 1
    l = torch.linspace(0, 2 * np.pi, s)
    t = torch.stack([l] * n)

    for jj in range(nsteps):
        T = t + theta
        loss = T.sin().sum(0).pow(2).sum() / s
        loss.backward()
        theta.data -= learning_rate * theta.grad.data
        theta.grad.zero_()
https://stackoverflow.com/questions/55369652/
Converting Keras (Tensorflow) convolutional neural networks to PyTorch convolutional networks?
Keras and PyTorch use different arguments for padding: Keras requires a string to be input, while PyTorch works with numbers. What is the difference, and how can one be translated into another (what code gets the equivalent result in either framework)? PyTorch also takes the args in_channels, out_chanels while keras only takes an argument called filters. What does 'filters' mean?
Regarding padding:

Keras => 'valid' - no padding; 'same' - the input is padded so that the output shape is the same as the input shape
Pytorch => you explicitly specify the padding

Valid padding

>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='valid', input_shape=(28,28,3)))
>>> model.layers[0].output_shape
(None, 26, 26, 10)

>>> x = torch.randn((1,3,28,28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
>>> conv(x).shape
torch.Size([1, 10, 26, 26])

Same padding

>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='same', input_shape=(28,28,3)))
>>> model.layers[0].output_shape
(None, 28, 28, 10)

>>> x = torch.randn((1,3,28,28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1)
>>> conv(x).shape
torch.Size([1, 10, 28, 28])

W - input width, F - filter (or kernel) size, P - padding, S - stride, Wout - output width

Wout = ((W - F + 2P) / S) + 1

Similarly for height. With this formula, you can calculate the amount of padding required to retain the input width or height in the output. http://cs231n.github.io/convolutional-networks/

Regarding in_channels, out_channels and filters: filters is the same as out_channels. In Keras, in_channels is automatically inferred from the previous layer's shape, or from input_shape in the case of the first layer.
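As a small illustration of the formula (assuming stride 1 and an odd kernel size), the padding that keeps the spatial size is (F - 1) / 2:

import torch

def same_padding(kernel_size):
    # from Wout = (W - F + 2P)/S + 1 with S = 1 and Wout = W  =>  P = (F - 1)/2
    return (kernel_size - 1) // 2

conv = torch.nn.Conv2d(3, 10, kernel_size=5, padding=same_padding(5))
print(conv(torch.randn(1, 3, 28, 28)).shape)  # torch.Size([1, 10, 28, 28])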
https://stackoverflow.com/questions/55381052/
Can anyone tell me how to checking if Pytorch model exists, and if it does, delete it and replace it with a new one?
So I save a lot of torch models for training and with different batchsize and epochs, and the models are saves with strings of epoch and batchsize. Basically I sometimes change some layers hyperparamters and some augmentation to check the prediction results, but if the torch model is there, I want to delete it and replace it with the new one.
The simplest solution is simply saving a model with the same name, essentially overwriting the existing one. This is equivalent to checking if it exists, deleting and then saving. If you want to explicitly check if it exists, you can do that easily with os.

import os

if os.path.exists('path/to/model.pth'):  # checking if there is a file with this name
    os.remove('path/to/model.pth')       # deleting the file
torch.save(model, 'path/to/model.pth')   # saving a new model with the same name
https://stackoverflow.com/questions/55388781/
How to convert ndarray to autograd variable in GPU format?
I am trying to do something like this, data = torch.autograd.Variable(torch.from_numpy(nd_array)) It comes under the type as Variable[torch.FloatTensor], But I need Variable[torch.cuda.FloatTensor] also I want to do this in pytorch version 0.3.0 which lacks few methods like to(device) or set_default_device
You can use the cuda() method of your tensor. If you'd like to use a specific device you can go with the context manager, e.g.

with torch.cuda.device(device_index):
    t = torch.FloatTensor([1.]).cuda()

For more specific information check the documentation for version 0.3.0.
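For the common case without the context manager, a small sketch that should work on 0.3.0 (assuming nd_array is a float32 numpy array as in the question):

import numpy as np
import torch
from torch.autograd import Variable

nd_array = np.random.randn(3, 4).astype(np.float32)
data = Variable(torch.from_numpy(nd_array).cuda())  # Variable[torch.cuda.FloatTensor]
print(type(data.data))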
https://stackoverflow.com/questions/55397711/
Get each sequence's last item from packed sequence
I am trying to put a packed and padded sequence through a GRU, and retrieve the output of the last item of each sequence. Of course I don't mean the -1 item, but the actual last, not-padded item. We know the lengths of the sequences in advance, so it should be as easy as to extract for each sequence the length-1 item. I tried the following import torch from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence # Data input = torch.Tensor([[[0., 0., 0.], [1., 0., 1.], [1., 1., 0.], [1., 0., 1.], [1., 0., 1.], [1., 1., 0.]], [[1., 1., 0.], [0., 1., 0.], [0., 0., 0.], [0., 1., 0.], [0., 0., 0.], [0., 0., 0.]], [[0., 0., 0.], [1., 0., 0.], [1., 1., 1.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], [[1., 1., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]]) lengths = [6, 4, 3, 1] p = pack_padded_sequence(input, lengths, batch_first=True) # Forward gru = torch.nn.GRU(3, 12, batch_first=True) packed_output, gru_h = gru(p) # Unpack output, input_sizes = pad_packed_sequence(packed_output, batch_first=True) last_seq_idxs = torch.LongTensor([x-1 for x in input_sizes]) last_seq_items = torch.index_select(output, 1, last_seq_idxs) print(last_seq_items.size()) # torch.Size([4, 4, 12]) But the shape is not what I expect. I had expected to get 4x12, i.e. last item of each individual sequence x hidden.` I could loop through the whole thing, and build a new tensor containing the items I need, but I was hoping for a built-in approach that took advantage of some smart math. I fear that manually looping and building, will result in very poor performance.
Instead of the last two operations, last_seq_idxs and last_seq_items, you could just do last_seq_items = output[torch.arange(4), input_sizes - 1]. I don't think index_select is doing the right thing: it selects the whole batch at each index you passed, and therefore your output size is [4, 4, 12].
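Applied to the example in the question (where output is [4, 6, 12] with batch_first=True and input_sizes holds the true lengths), the indexing looks like this:

# assuming `output` and `input_sizes` come from pad_packed_sequence as above
last_seq_items = output[torch.arange(output.size(0)), input_sizes - 1]
print(last_seq_items.size())  # torch.Size([4, 12])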
https://stackoverflow.com/questions/55399115/
What is the difference between view and view_as in PyTorch?
I am building neural networks in Pytorch, I see view and view_as used interchangeably in various implementation what is the difference between them?
view and view_as are very similar with a slight difference. In view() the shape of the desired output tensor is to be passed in as the parameter, whereas in view_as() a tensor whose shape is to be mimicked is passed. tensor.view_as(other) is equivalent to tensor.view(other.size())
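A quick illustration of the equivalence:

import torch

a = torch.arange(12)
b = torch.randn(3, 4)
print(a.view(3, 4).shape)      # explicit target shape
print(a.view_as(b).shape)      # shape taken from another tensor
print(a.view(b.size()).shape)  # same thing, spelled out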
https://stackoverflow.com/questions/55403843/
Is there a pytorch method to check the number of cpus?
I can use this torch.cuda.device_count() to check the number of GPUs. I was wondering if there was something equivalent to check the number of CPUs.
Just use os.cpu_count() from the Python standard library (it needs import os, not torch).
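For example (torch.get_num_threads() additionally reports how many intra-op threads PyTorch itself will use):

import os
import torch

print(os.cpu_count())           # logical CPUs visible to the process
print(torch.get_num_threads())  # threads PyTorch will use for CPU ops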
https://stackoverflow.com/questions/55411921/
In a Kaggle kernel while having selected the GPU option when checking torch.cuda.is_available(), it says is not available
I created a kernel for a finished Kaggle competition in which i used pytorch. When checking if cuda is available, it returns False. I checked the GPU option from settings and it says it is on in the bottom bar with resources info. I tried to restart the session without any changes. What could be the problem? (cpu only pytorch installed maybe?)
The problem was with the selected docker configuration from settings. Selecting the "Latest Available" fixed the problem.
https://stackoverflow.com/questions/55426042/
How to implement low-dimensional embedding layer in pytorch
I recently read a paper about embedding. In Eq. (3), the f is a 4096X1 vector. the author try to compress the vector in to theta (a 20X1 vector) by using an embedding matrix E. The equation is simple theta = E*f I was wondering if it can using pytorch to achieve this goal, then in the training, the E can be learned automatically. How to finish the rest? thanks so much. The demo code is follow: import torch from torch import nn f = torch.randn(4096,1)
Assuming your input vectors are one-hot (which is where "embedding layers" are used), you can directly use the embedding layer from torch, which does the above as well as some more things. nn.Embedding takes the non-zero index of the one-hot vector as input, as a long tensor. For example, if the feature vector is

f = [[0,0,1], [1,0,0]]

then the input to nn.Embedding will be

input = [2, 0]

However, what the OP asked for in the question is getting embeddings by matrix multiplication, and below I will address that. You can define a module to do that as below. Since param is an instance of nn.Parameter, it will be registered as a parameter and will be optimized when you call Adam or any other optimizer.

class Embedding(nn.Module):
    def __init__(self, input_dim, embedding_dim):
        super().__init__()
        self.param = torch.nn.Parameter(torch.randn(input_dim, embedding_dim))

    def forward(self, x):
        return torch.mm(x, self.param)

If you look carefully, this is the same as a linear layer with no bias and slightly different initialization. Therefore, you can achieve the same thing by using a linear layer as below.

self.embedding = nn.Linear(4096, 20, bias=False)
# change the initial weights to normal[0, 1] or whatever is required
self.embedding.weight.data = torch.randn_like(self.embedding.weight)
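A small usage sketch for the Embedding module above (note the feature vectors are laid out as rows here, i.e. a batch of shape [batch, 4096], rather than the single column vector in the question):

f = torch.randn(1, 4096)        # one 4096-dim feature vector as a row
embedding = Embedding(4096, 20)
theta = embedding(f)
print(theta.shape)              # torch.Size([1, 20])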
https://stackoverflow.com/questions/55427386/
Training model in eval() mode gives better result in PyTorch?
I have a model with Dropout layers (with p=0.6). I ended up training the model in .eval() mode and again trained the model in .train() mode, I find that the training .eval() mode gave me better accuracy and quicker loss reduction on training data, train(): Train loss : 0.832, Validation Loss : 0.821 eval(): Train loss : 0.323, Validation Loss : 0.251 Why is this so?
It seems the model architecture is simple and, when in train mode, is not able to capture the features in the data, hence it underfits. eval() disables dropout and switches batch normalization to use its running statistics, among other things. This means the model trains better without dropout, since all neurons stay active and the model can learn more. Increasing the layer sizes, increasing the number of layers, or decreasing the dropout probability should also help.
https://stackoverflow.com/questions/55448941/