Pytorch default dataloader gets stuck for large image classification training set
I am training image classification models in PyTorch and using their default data loader to load my training data. I have a very large training dataset, usually a couple thousand sample images per class. I've trained models with about 200k images total without issues in the past. However, I've found that when I have over a million images in total, the PyTorch data loader gets stuck. I believe the code is hanging when I call datasets.ImageFolder(...). When I Ctrl-C, this is consistently the output:

```
Traceback (most recent call last):
  File "main.py", line 412, in <module>
    main()
  File "main.py", line 122, in main
    run_training(args.group, args.num_classes)
  File "main.py", line 203, in run_training
    train_loader = create_dataloader(traindir, tfm.train_trans, shuffle=True)
  File "main.py", line 236, in create_dataloader
    dataset = datasets.ImageFolder(directory, trans)
  File "/home/username/.local/lib/python3.5/site-packages/torchvision/datasets/folder.py", line 209, in __init__
    is_valid_file=is_valid_file)
  File "/home/username/.local/lib/python3.5/site-packages/torchvision/datasets/folder.py", line 94, in __init__
    samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
  File "/home/username/.local/lib/python3.5/site-packages/torchvision/datasets/folder.py", line 47, in make_dataset
    for root, _, fnames in sorted(os.walk(d)):
  File "/usr/lib/python3.5/os.py", line 380, in walk
    is_dir = entry.is_dir()
KeyboardInterrupt
```

I thought there might be a deadlock somewhere, but based on the stack output from Ctrl-C it doesn't look like it's waiting on a lock. Then I thought the data loader was just slow because I was trying to load a lot more data. I let it run for about 2 days and it didn't make any progress, and in the last 2 hours of loading I checked that the amount of RAM usage stayed the same. In the past I have also been able to load training datasets with over 200k images in less than a couple of hours. I also tried upgrading my GCP machine to have 32 cores, 4 GPUs, and over 100GB of RAM; however, it seems that after a certain amount of memory is loaded, the data loader just gets stuck. I'm confused how the data loader could be getting stuck while looping through the directory, and I'm still unsure if it's stuck or just extremely slow. Is there some way I can change the PyTorch data loader to be able to handle 1 million+ images for training? Any debugging suggestions are also appreciated! Thank you!
It's not a problem with DataLoader, it's a problem with torchvision.datasets.ImageFolder and how it works (and why it works much, much worse the more data you have). It hangs on this line, as indicated by your error:

```python
for root, _, fnames in sorted(os.walk(d)):
```

Source can be found here. The underlying problem is that it keeps each path and corresponding label in a giant list, see the code below (a few things removed for brevity):

```python
def make_dataset(dir, class_to_idx, extensions=None, is_valid_file=None):
    images = []
    dir = os.path.expanduser(dir)
    # Iterate over all subfolders which were found previously
    for target in sorted(class_to_idx.keys()):
        d = os.path.join(dir, target)  # Create path to this subfolder
        # Assuming it is a directory (which usually is the case)
        for root, _, fnames in sorted(os.walk(d, followlinks=True)):
            # Iterate over ALL files in this subdirectory
            for fname in sorted(fnames):
                path = os.path.join(root, fname)
                # Assuming it is correctly recognized as an image file
                item = (path, class_to_idx[target])
                # Add to the list with all images
                images.append(item)

    return images
```

Obviously images will contain 1 million strings (quite lengthy as well) and a corresponding int for each class, which definitely is a lot and depends on RAM and CPU. You can create your own datasets though (provided you change the names of your images beforehand) so no memory will be occupied by the dataset.

Setup data structure

Your folder structure should look like this:

```
root
    class1
    class2
    class3
    ...
```

Use however many classes you have/need. Now each class should have the following data:

```
class1
    0.png
    1.png
    2.png
    ...
```

Given that, you can move on to creating datasets.

Create Datasets

The torch.utils.data.Dataset below uses PIL to open images; you could do it another way though:

```python
import os
import pathlib

import torch
from PIL import Image


class ImageDataset(torch.utils.data.Dataset):
    def __init__(self, root: str, folder: str, klass: int, extension: str = "png"):
        self._data = pathlib.Path(root) / folder
        self.klass = klass
        self.extension = extension
        # Only calculate once how many files are in this folder
        # Could be passed as argument if you precalculate it somehow
        # e.g. ls | wc -l on Linux
        self._length = sum(1 for entry in os.listdir(self._data))

    def __len__(self):
        # No need to recalculate this value every time
        return self._length

    def __getitem__(self, index):
        # images always follow [0, n-1], so you access them directly
        return Image.open(self._data / "{}.{}".format(str(index), self.extension))
```

Now you can create your datasets easily (folder structure assumed to be like the one above):

```python
root = "/path/to/root/with/images"
dataset = (
    ImageDataset(root, "class0", 0)
    + ImageDataset(root, "class1", 1)
    + ImageDataset(root, "class2", 2)
)
```

You could add as many datasets with specified classes as you wish; do it in a loop or whatever. Finally, use torch.utils.data.DataLoader as per usual, e.g.:

```python
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
```
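One practical note on the sketch above: __getitem__ returns a raw PIL image and no label, so the default DataLoader collate function cannot batch it as-is. A minimal, hedged extension (the transform choice and the (tensor, label) return are my assumptions, not part of the original answer) could look like this:

```python
import torchvision.transforms as T


class TensorImageDataset(ImageDataset):
    """Variant of the ImageDataset above returning (tensor, label) pairs
    so torch.utils.data.DataLoader can batch them with the default collate."""

    def __init__(self, root, folder, klass, extension="png", transform=None):
        super().__init__(root, folder, klass, extension)
        # ToTensor() is the minimal assumption; swap in your own pipeline
        self.transform = transform or T.ToTensor()

    def __getitem__(self, index):
        image = super().__getitem__(index)  # the PIL image from the base class
        return self.transform(image), self.klass
```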
https://stackoverflow.com/questions/60173417/
Size of the training data of GPT2-XL pre-trained model
In the huggingface transformers library, it is possible to use the pre-trained GPT2-XL language model. But I can't find on which dataset it was trained. Is it the same trained model which OpenAI used for their paper (trained on the 40GB dataset called WebText)?
The GPT2-XL model is the biggest of the four architectures detailed in the paper you linked (1542M parameters). It is trained on the same data as the other three, which is the WebText you're mentioning.
https://stackoverflow.com/questions/60173639/
How do I use Pytorch's View function correctly?
I have a tensor of size x = (4, 2, C, H, W). I need to reshape it to y = (8, C, H, W), but I want to make sure the images are stored in the right order, so that, say, the image at x[1, 0, :, :, :] is equal to y[2, :, :, :]. I know I can use the view function for this but I am not sure how to use it correctly. Currently I am doing it as such:

```python
feat_imgs_all = feat_imgs_all.view(
    rgb.shape[0], rgb.shape[1],
    feat_imgs_all.shape[1],
    feat_imgs_all.shape[2],
    feat_imgs_all.shape[3])
```

This seems really hacky. Is there a way I can just feed the first two shapes, and have PyTorch figure out the rest?
You can do it easily using flatten and the end_dim argument, see the documentation:

```python
import torch

a = torch.randn(4, 2, 32, 64, 64)
flattened = a.flatten(end_dim=1)

torch.all(flattened[2, ...] == a[1, 0, ...])  # True
```

view could be used as well, like below, though it's not as readable or as pleasant:

```python
import torch

a = torch.randn(4, 2, 32, 64, 64)
flattened = a.view(-1, *a.shape[2:])

torch.all(flattened[2, ...] == a[1, 0, ...])  # True as well
```
https://stackoverflow.com/questions/60173818/
Pytorch Tensorboard SummaryWriter.add_video() Produces Bad Videos
I'm trying to create a video by generating a sequence of 500 matplotlib plots, converting each to a numpy array, stacking them, and then passing them to a SummaryWriter()'s add_video(). When I do this, the colorbar is converted from colored to black & white, and only a small number (~3-4) of the matplotlib plots are repeated. I confirmed that my numpy arrays are correct by using them to recreate a matplotlib figure. My input tensor has shape (B,C,T,H,W), dtype np.uint8, and values between [0, 255]. Minimal working example below. To be clear, the code runs without any errors. My problem is that the resulting video is wrong.

```python
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter

tensorboard_writer = SummaryWriter()
print(tensorboard_writer.get_logdir())


def fig2data(fig):
    # draw the renderer
    fig.canvas.draw()

    # Get the RGB buffer from the figure
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    return data


size = 500
x = np.random.uniform(0, 2., size=500)
y = np.random.uniform(0, 2., size=500)
trajectory_len = len(x)
trajectory_indices = np.arange(trajectory_len)

width, height = 3, 2

# tensorboard takes video of shape (B,C,T,H,W)
video_array = np.zeros(
    shape=(1, 3, trajectory_len, height * 100, width * 100),
    dtype=np.uint8)

for trajectory_idx in trajectory_indices:
    fig, axes = plt.subplots(
        1, 2, figsize=(width, height),
        gridspec_kw={'width_ratios': [1, 0.05]})
    fig.suptitle('Example Trajectory')

    # plot the first trajectory
    sc = axes[0].scatter(
        x=[x[trajectory_idx]],
        y=[y[trajectory_idx]],
        c=[trajectory_indices[trajectory_idx]],
        s=4,
        vmin=0,
        vmax=trajectory_len,
        cmap=plt.cm.jet)

    axes[0].set_xlim(-0.25, 2.25)
    axes[0].set_ylim(-0.25, 2.25)

    colorbar = fig.colorbar(sc, cax=axes[1])
    colorbar.set_label('Trajectory Index Number')

    # extract numpy array of figure
    data = fig2data(fig)

    # UNCOMMENT IF YOU WANT TO VERIFY THAT THE NUMPY ARRAY WAS CORRECTLY EXTRACTED
    # plt.show()
    # fig2 = plt.figure()
    # ax2 = fig2.add_subplot(111, frameon=False)
    # ax2.imshow(data)
    # plt.show()

    # close figure to save memory
    plt.close(fig=fig)

    video_array[0, :, trajectory_idx, :, :] = np.transpose(data, (2, 0, 1))

# tensorboard takes video_array of shape (B,C,T,H,W)
tensorboard_writer.add_video(
    tag='sampled_trajectory',
    vid_tensor=torch.from_numpy(video_array),
    global_step=0,
    fps=4)
print('Added video')

tensorboard_writer.close()
```
According to the PyTorch docs, the video tensor should have shape (N,T,C,H,W), which I believe means: batch, time, channels, height, and width. You said your tensor has shape (B,C,T,H,W), so it seems your channels and time axes are swapped.
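A minimal fix along the lines of this answer (a hedged sketch; it assumes the rest of the question's code stays unchanged) is to swap the two axes with Tensor.permute before calling add_video:

```python
# video_array has shape (B, C, T, H, W); add_video expects (N, T, C, H, W)
vid = torch.from_numpy(video_array).permute(0, 2, 1, 3, 4)

tensorboard_writer.add_video(
    tag='sampled_trajectory',
    vid_tensor=vid,  # now (B, T, C, H, W)
    global_step=0,
    fps=4)
```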
https://stackoverflow.com/questions/60180386/
How do I serialize an NLP classification PyTorch model
I am attempting to use a new NLP model within the PyTorch Android demo app (Demo App Git), however I am struggling to serialize the model so that it works with Android. The demonstration given by PyTorch is as follows for a ResNet model:

```python
model = torchvision.models.resnet18(pretrained=True)
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("app/src/main/assets/model.pt")
```

However, I am not sure what to use for the 'example' input with my NLP model. The model that I am using is from a fastai tutorial and the Python is linked here: model. Here is the Python used to create my model (using the fastai library). It is the same as in the model link above, but in a simplified form.

```python
from fastai.text import *

path = untar_data('http://files.fast.ai/data/examples/imdb_sample')
path.ls()
#: [PosixPath('/storage/imdb_sample/texts.csv')]

data_lm = TextDataBunch.from_csv(path, 'texts.csv')

data = (TextList.from_csv(path, 'texts.csv', cols='text')
        .split_from_df(col=2)
        .label_from_df(cols=0)
        .databunch())

bs = 48

path = untar_data('https://s3.amazonaws.com/fast-ai-nlp/imdb')

data_lm = (TextList.from_folder(path)
           .filter_by_folder(include=['train', 'test', 'unsup'])
           .split_by_rand_pct(0.1)
           .label_for_lm()
           .databunch(bs=bs))

learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))

learn.unfreeze()
learn.fit_one_cycle(10, 1e-3, moms=(0.8, 0.7))
learn.save_encoder('fine_tuned_enc')

path = untar_data('https://s3.amazonaws.com/fast-ai-nlp/imdb')

data_clas = (TextList.from_folder(path, vocab=data_lm.vocab)
             .split_by_folder(valid='test')
             .label_from_folder(classes=['neg', 'pos'])
             .databunch(bs=bs))

learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('fine_tuned_enc')

learn.fit_one_cycle(1, 2e-2, moms=(0.8, 0.7))

learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2), moms=(0.8, 0.7))

learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3), moms=(0.8, 0.7))

learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3), moms=(0.8, 0.7))
```
I worked out how to do this after a while. The issue was that the fastai model wasn't tracing correctly no matter what shape of input I was using. In the end, I used another text classification model and got it to work. I wrote a tutorial about how I did it, in case it can help anyone else.

NLP PyTorch Tracing Tutorial

Begin by opening a new Jupyter Python notebook using your preferred cloud machine provider (I use Paperspace). Next, copy and run the code in the PyTorch Text Classification tutorial, but replace the line

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

with

```python
device = torch.device("cpu")
```

NOTE: It caused issues tracing when the device was set to CUDA, so I forced it onto the CPU. (This will slow training, but inference on the mobile will run at the same speed, as it is CPU there anyway.)

Lastly, run the code below to correctly trace the model to allow it to be run on Android:

```python
data = DataLoader(test_dataset, batch_size=1, collate_fn=generate_batch)

for text, offsets, cls in data:
    text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)

example = text, offsets

traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
```

In addition, if you would like a CSV copy of the vocab list for use on Android when you are making predictions, run the following code afterwards:

```python
import pandas as pd

vocab = train_dataset.get_vocab()
df = pd.DataFrame.from_dict(vocab.stoi, orient='index', columns=['token'])
df[:30]

df.to_csv('out.csv')
```

This model should work fine on Android using the PyTorch API.
https://stackoverflow.com/questions/60181107/
How to get the prediction probability?
This code gets the 1 or 0 value from the model. If I want to get the probability of the prediction, which line should I change?

```python
from torch.autograd import Variable

results = []
#names = []
with torch.no_grad():
    model.eval()
    print('===============================================start')
    for num, data in enumerate(test_loader):
        #print(num)
        print("=====================================================")
        imgs, label = data
        imgs, labels = imgs.to(device), label.to(device)
        test = Variable(imgs)
        output = model(test)
        #print(output)
        ps = torch.exp(output)
        print(ps)
        top_p, top_class = ps.topk(1, dim=1)
        results += top_class.cpu().numpy().tolist()

model = models.resnet50(pretrained=True)
model.fc = nn.Linear(2048, num_classes)
model.cuda()
```
Models usually output raw prediction logits. To convert them to probabilities you should use the softmax function:

```python
import torch.nn.functional as nnf

# ...

prob = nnf.softmax(output, dim=1)
top_p, top_class = prob.topk(1, dim=1)
```

The new variable top_p should give you the probability of the top k classes.
https://stackoverflow.com/questions/60182984/
How to check the root cause of CUDA out of memory issue in the middle of training?
I'm running RoBERTa on huggingface's language_modeling.py. After doing 400 steps I suddenly get a CUDA out of memory issue. I don't know how to deal with it. Can you please help? Thanks
My problem was that I didn't check the size of my GPU memory in comparison with the sizes of the samples. I had a lot of pretty small samples and, after many iterations, a large one. My bad. Thank you, and remember to check these things if it happens to you too.
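To catch this kind of failure, one hedged debugging sketch (the helper name and the logging choices are mine, not from the original post) is to log the batch shape and the allocated GPU memory each step, so the step that blows up points straight at the offending sample:

```python
import torch


def log_cuda_step(step, batch):
    # the batch shape tells you how long this step's sequences are
    allocated = torch.cuda.memory_allocated() / 1024 ** 2  # MiB in use now
    peak = torch.cuda.max_memory_allocated() / 1024 ** 2   # MiB high-water mark
    print(f"step {step}: batch shape {tuple(batch.shape)}, "
          f"allocated {allocated:.0f} MiB, peak {peak:.0f} MiB")
```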
https://stackoverflow.com/questions/60184117/
How to embed Sequence of Sentences in RNN?
I am trying to make an RNN model (in PyTorch) that takes a couple of sentences and then classifies them as either Class 0 or Class 1. For the sake of this question, let's assume that the max_len of a sentence is 4 and the max amount of time steps is 5. Thus, each datapoint is of the form (0 is the value used for padding):

```python
x[1] = [
    # Input features at timestep 1
    [1, 48, 91, 0],
    # Input features at timestep 2
    [20, 5, 17, 32],
    # Input features at timestep 3
    [12, 18, 0, 0],
    # Input features at timestep 4
    [0, 0, 0, 0],
    # Input features at timestep 5
    [0, 0, 0, 0]
]

y[1] = [1]
```

When I have just one sentence per target, I simply pass each word to the embedding layer and then to the LSTM or GRU, but I am a bit stuck on what to do when I have a sequence of sentences per target. How do I build an embedding that can handle sentences?
The simplest way is to use 2 kinds of LSTM.

Prepare the toy dataset:

```python
xi = [
    # Input features at timestep 1
    [1, 48, 91, 0],
    # Input features at timestep 2
    [20, 5, 17, 32],
    # Input features at timestep 3
    [12, 18, 0, 0],
    # Input features at timestep 4
    [0, 0, 0, 0],
    # Input features at timestep 5
    [0, 0, 0, 0]
]
yi = 1

x = torch.tensor([xi, xi])
y = torch.tensor([yi, yi])

print(x.shape)  # torch.Size([2, 5, 4])
print(y.shape)  # torch.Size([2])
```

Then, x is the batch of inputs. Here batch_size = 2.

Embed the input:

```python
vocab_size = 1000
embed_size = 100
hidden_size = 200
embed = nn.Embedding(vocab_size, embed_size)

# shape [2, 5, 4, 100]
x = embed(x)
```

The first, word-level LSTM encodes each sentence into a vector:

```python
# convert x into a batch of sequences
# reshape into [10, 4, 100], i.e. (batch_size * num_seq, seq_len, embed_size)
bs = 2
x = x.view(bs * 5, 4, 100)

wlstm = nn.LSTM(embed_size, hidden_size, batch_first=True)

# get only the final hidden state of each sequence
_, (hn, _) = wlstm(x)  # hn shape [1, 10, 200]

# get the output of the final layer
hn = hn[0]  # [10, 200]
```

The second, sequence-level LSTM encodes the sequence of sentence vectors into a single vector:

```python
# reshape hn into [bs, num_seq, hidden_size]
hn = hn.view(2, 5, 200)

# pass to another LSTM and get the final state hn
slstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
_, (hn, _) = slstm(hn)  # [1, 2, 200]

# similarly, get the hidden state of the last layer
hn = hn[0]  # [2, 200]
```

Add some classification layers:

```python
pred_linear = nn.Linear(hidden_size, 1)

# [2, 1]
output = torch.sigmoid(pred_linear(hn))
```
https://stackoverflow.com/questions/60186944/
TypeError: 'tuple' object cannot be interpreted as an integer while creating data generators PyTorch
I am new to PyTorch and I am learning to create batches of data for segmentation. The code is shown below:

```python
class NumbersDataset(Dataset):
    def __init__(self):
        self.X = list(df['input_img'])
        self.y = list(df['mask_img'])

    def __len__(self):
        return len(self.X), len(self.y)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]


if __name__ == '__main__':
    dataset = NumbersDataset()
    dataloader = DataLoader(dataset, batch_size=50, shuffle=True, num_workers=2)
    # print(len(dataset))
    # plt.imshow(dataset[100])
    # plt.show()
    print(next(iter(dataloader)))
```

where the df['input_img'] column contains the location of each image ('/path/to/pic/480p/boxing-fisheye/00010.jpg') and df['mask_img'] contains the locations of all the mask images. I am trying to load the images but I get the error:

    TypeError: 'tuple' object cannot be interpreted as an integer

However, if I don't use DataLoader and just do the following:

```python
dataset = NumbersDataset()
print(len(dataset))
print(dataset[10:20])
```

then I get what I expect. Can someone tell me what I am doing wrong?
You cannot return a tuple from the __len__ method. The expected type is int:

```python
# perhaps you can add the lists' lengths for the total length,
# but no matter how you choose to implement the method you can
# only return one value, of type integer `int`
def __len__(self):
    return len(self.X) + len(self.y)
```
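Since the dataset stores file paths, __getitem__ as written also returns path strings rather than images, which the default collate cannot stack into batches. A hedged sketch of a version that actually loads the image/mask pairs (the PIL and to_tensor choices are my assumptions, not from the original answer):

```python
from PIL import Image
import torchvision.transforms.functional as TF


class SegmentationDataset(Dataset):
    """Returns (image_tensor, mask_tensor) pairs instead of path strings."""

    def __init__(self, input_paths, mask_paths):
        assert len(input_paths) == len(mask_paths)
        self.input_paths = input_paths
        self.mask_paths = mask_paths

    def __len__(self):
        # one sample = one (image, mask) pair
        return len(self.input_paths)

    def __getitem__(self, idx):
        image = Image.open(self.input_paths[idx]).convert('RGB')
        mask = Image.open(self.mask_paths[idx])
        return TF.to_tensor(image), TF.to_tensor(mask)
```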
https://stackoverflow.com/questions/60191935/
Why is very simple PyTorch LSTM model not learning?
I am trying to do some very simple learning so that I can better understand how PyTorch and LSTMs work. To that end, I am trying to learn a mapping from an input tensor to an output tensor (of the same shape) that is twice the value. So [1 2 3] as input should learn [2 4 6] as an output. To that end, I have a dataloader:

```python
class AudioDataset(Dataset):
    def __init__(self, corrupted_path, train_set=False, test_set=False):
        torch.manual_seed(0)
        numpy.random.seed(0)

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, index):
        random_tensor = torch.rand(1, 5) * 2
        random_tensor = random_tensor - 1

        return random_tensor, random_tensor * 2
```

My LSTM itself is pretty simple:

```python
class MyLSTM(nn.Module):
    def __init__(self, input_size=4000):
        super(MyLSTM, self).__init__()

        self.lstm = nn.LSTM(input_size=input_size, hidden_size=input_size,
                            num_layers=2)

    def forward(self, x):
        y = self.lstm(x)
        return y
```

My training looks like:

```python
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=1, shuffle=True, **kwargs)

model = MyLSTM(input_size=5)
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001)
loss_fn = torch.nn.MSELoss(reduction='sum')

for epoch in range(300):
    for i, data in enumerate(train_loader):
        inputs = data[0]
        outputs = data[1]

        print('inputs', inputs, inputs.size())
        print('outputs', outputs, outputs.size())

        optimizer.zero_grad()

        pred = model(inputs)
        print('pred', pred[0], pred[0].size())

        loss = loss_fn(pred[0], outputs)
        model.zero_grad()

        loss.backward()
        optimizer.step()
```

After 300 epochs, my loss looks like tensor(1.4892, grad_fn=<MseLossBackward>), which doesn't appear to be very good. Randomly looking at some of the inputs / outputs and predictions:

```
inputs tensor([[[0.5050, 0.4669, 0.8310,  ..., 0.0659, 0.5043, 0.8885]]]) torch.Size([1, 1, 4000])
outputs tensor([[[1.0100, 0.9338, 1.6620,  ..., 0.1319, 1.0085, 1.7770]]]) torch.Size([1, 1, 4000])
pred tensor([[[ 0.6930,  0.0231, -0.6874,  ..., -0.5225,  0.1096,  0.5796]]], grad_fn=<StackBackward>) torch.Size([1, 1, 4000])
```

We see that it hasn't learned very much at all. I can't understand what I'm doing wrong; if someone can guide me, that would be greatly appreciated.
LSTMs are made of neurons that generate an internal state based upon a feedback loop from previous training data. Each neuron has four internal gates that take multiple inputs and generate multiple outputs. It's one of the more complex neurons to work with and understand, and I'm not really skilled enough to give an in-depth answer.

What I see in your example code is a lack of understanding of how they work, and it seems like you're assuming they work like a linear layer. I say that because your forward method doesn't handle the internal state and you're not reshaping the outputs.

You define the LSTM like this:

```python
self.lstm = nn.LSTM(input_size=input_size, hidden_size=input_size, num_layers=2)
```

The hidden_size relates to how memory and features work with the gates. The PyTorch documentation says the following:

    hidden_size – The number of features in the hidden state h

It is referring to the size of the hidden state used to train the internal gates for long and short term memory. The gates are a function across the hidden features that store previous gate outputs. Each time a neuron is trained, the hidden state is updated and is used again for the next training data.

So why is this so important?

You are throwing away the hidden state data during training, and I don't know what happens if you don't define the hidden state. I assume the LSTM works as if there is never any history.

The forward function should look something like this:

```python
def forward(self, x, hidden):
    lstm_output, hidden = self.lstm(x, hidden)
    return lstm_output, hidden
```

During training you have to keep track of the hidden state yourself.

```python
for i in range(epochs):
    hidden = (torch.zeros(num_layers, batch_size, num_hidden),
              torch.zeros(num_layers, batch_size, num_hidden))

    for x, y in generate_batches(...):
        # missing code....
        lstm_output, hidden = model.forward(x, hidden)
```

Take note of the shape of the hidden state. It's different from what you usually do with linear layers. There are some steps missing above that relate to resetting the hidden state, but I can't remember how that part works.

LSTMs on their own only describe features, much like convolution layers. It's unlikely that the outputs from an LSTM are what you're interested in using.

Most models that use LSTMs or convolutions will have a bottom section of fully connected layers (for example: nn.Linear()). These layers will train on the features to predict the outputs you're interested in.

The problem here is that the outputs from LSTMs are in the wrong shape, and you have to reshape the tensors so that a linear layer can use them. Here is an example LSTM forward function that I have used:

```python
def forward(self, x, hidden):
    lstm_output, hidden = self.lstm(x, hidden)

    drop_output = self.dropout(lstm_output)
    drop_output = drop_output.contiguous().view(-1, self.num_hidden)

    final_out = self.fc_linear(drop_output)

    return final_out, hidden
```

LSTMs are definitely an advanced topic in machine learning, and PyTorch isn't an easy library to learn to begin with. I would recommend reading up on LSTMs using the TensorFlow documentation and online blogs to get a better grasp of how they work.
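Pulling those pieces together, a minimal end-to-end sketch of the doubling task (the hidden size, learning rate, and the fresh zero hidden state per sequence are my assumptions, not the answerer's) might look like:

```python
import torch
import torch.nn as nn

seq_len, batch_size, features = 5, 1, 1

lstm = nn.LSTM(input_size=features, hidden_size=32, num_layers=1)
head = nn.Linear(32, features)  # maps LSTM features back to the target shape
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(2000):
    x = torch.rand(seq_len, batch_size, features) * 2 - 1  # inputs in [-1, 1]
    y = x * 2                                              # target: doubled input
    lstm_out, _ = lstm(x)   # omitting hidden -> fresh zero state per sequence
    pred = head(lstm_out)   # project LSTM features to one output per timestep
    loss = loss_fn(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())  # should be close to zero by the end
```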
https://stackoverflow.com/questions/60196755/
Advance indexing in Pytorch to get rid of nested for-loops
I have a situation for which I am using nested for-loops, but I want to know if there's a faster way of doing this using some advanced indexing in PyTorch.

I have a tensor named t:

```python
t = torch.randn(3, 8)
print(t)

tensor([[-1.1258, -1.1524, -0.2506, -0.4339,  0.8487,  0.6920, -0.3160, -2.1152],
        [ 0.4681, -0.1577,  1.4437,  0.2660,  0.1665,  0.8744, -0.1435, -0.1116],
        [ 0.9318,  1.2590,  2.0050,  0.0537,  0.6181, -0.4128, -0.8411, -2.3160]])
```

I want to create a new tensor which indexes values from t. Let's say these indexes are stored in the variable indexes:

```python
indexes = [[(0, 1, 4, 5), (0, 1, 6, 7), (4, 5, 6, 7)],
           [(2, 3, 4, 5)],
           [(4, 5, 6, 7), (2, 3, 6, 7)]]
```

Each inner tuple in indexes represents four indexes that are to be taken from a row. As an example, based on these indexes my output would be a 6x4 tensor (6 is the total number of tuples in indexes, and 4 corresponds to the number of values in a tuple).

For instance, this is what I want to do:

```python
# counting the number of tuples in indexes
count_instances = sum([1 for lst in indexes for tupl in lst])

# creating a zero output matrix
final_tensor = torch.zeros(count_instances, 4)

final_tensor[0] = t[0, indexes[0][0]]
final_tensor[1] = t[0, indexes[0][1]]
final_tensor[2] = t[0, indexes[0][2]]
final_tensor[3] = t[1, indexes[1][0]]
final_tensor[4] = t[2, indexes[2][0]]
final_tensor[5] = t[2, indexes[2][1]]
```

The final output looks like this:

```python
print(final_tensor)

tensor([[-1.1258, -1.1524,  0.8487,  0.6920],
        [-1.1258, -1.1524, -0.3160, -2.1152],
        [ 0.8487,  0.6920, -0.3160, -2.1152],
        [ 1.4437,  0.2660,  0.1665,  0.8744],
        [ 0.6181, -0.4128, -0.8411, -2.3160],
        [ 2.0050,  0.0537, -0.8411, -2.3160]])
```

I created a function build_tensor (shown below) to achieve this with nested for-loops, but I want to know if there's a faster way of doing it with simple indexing in PyTorch. I want a faster way of doing it because I'm doing this operation hundreds of times with bigger index and t sizes. Any help?

```python
def build_tensor(indexes, t):
    # count tuples
    count_instances = sum([1 for lst in indexes for tupl in lst])

    # create a zero tensor
    final_tensor = torch.zeros(count_instances, 4)

    final_tensor_idx = 0
    for curr_idx, lst in enumerate(indexes):
        for tupl in lst:
            final_tensor[final_tensor_idx] = t[curr_idx, tupl]
            final_tensor_idx += 1

    return final_tensor
```
You can arrange the indices into 2D arrays, then do the indexing in one shot like this:

```python
rows = [(row,) * len(index_tuple)
        for row, row_indices in enumerate(indexes)
        for index_tuple in row_indices]

columns = [index_tuple
           for row_indices in indexes
           for index_tuple in row_indices]

final_tensor = t[rows, columns]
```
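To make the one-shot indexing concrete, here is what those two lists contain for the question's indexes (a worked example, not part of the original answer):

```python
indexes = [[(0, 1, 4, 5), (0, 1, 6, 7), (4, 5, 6, 7)],
           [(2, 3, 4, 5)],
           [(4, 5, 6, 7), (2, 3, 6, 7)]]

# rows    -> [(0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0),
#             (1, 1, 1, 1),
#             (2, 2, 2, 2), (2, 2, 2, 2)]
# columns -> [(0, 1, 4, 5), (0, 1, 6, 7), (4, 5, 6, 7),
#             (2, 3, 4, 5),
#             (4, 5, 6, 7), (2, 3, 6, 7)]
# t[rows, columns] picks t[r][c] element-wise, yielding the same 6x4 tensor
# as the question's nested-loop build_tensor(indexes, t).
```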
https://stackoverflow.com/questions/60197465/
Pytorch summary only works for one specific input size for U-Net
I am trying to implement the U-Net architecture in PyTorch. When I print the model using print(model), I get the correct architecture. But when I try to print the summary using (or any other input size, for that matter):

```python
from torchsummary import summary
summary(model, input_size=(13, 572, 572))
```

I get an error:

    RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 70 and 71 in dimension 2 at /Users/distiller/project/conda/conda-bld/pytorch_1579022061893/work/aten/src/TH/generic/THTensor.cpp:612

However, it works perfectly if I give the input_size as input_size=(3, 224, 224) (like it worked for this person here). I am so baffled. Can someone help me figure out what's wrong?

Edit: I have used the model architecture from here.
This U-Net architecture you provided doesn't support that shape (unless the depth parameter is <= 3). Ultimately the reason for this is that a downsampling operation isn't invertible, since multiple input shapes map to the same output shape. For example, consider:

```python
>>> torch.nn.functional.max_pool2d(torch.zeros(1, 1, 10, 10), 2).shape
torch.Size([1, 1, 5, 5])

>>> torch.nn.functional.max_pool2d(torch.zeros(1, 1, 11, 11), 2).shape
torch.Size([1, 1, 5, 5])
```

So the question is: given only that the output shape is 5x5, what was the shape of the input? Was it 10x10 or 11x11? The same phenomenon applies to downsampling via strided convolutions.

The problem is that the UNet class tries to combine features from the downsampling half of the network with the features in the upsampling half. If it "guesses wrong" about the original shape during upsampling, then you will receive a dimension mismatch error.

To avoid this issue you'll need to ensure that the height and width of your input data are multiples of 2**(depth-1). So, for the default depth=5, you need the input image height and width to be multiples of 16 (e.g. 560 or 576). Alternatively, since 572 is divisible by 4, you could also set depth=3 to make it work.
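If you cannot change depth or resize your data, another workaround (my sketch, not part of the original answer) is to zero-pad the input up to the next multiple of 2**(depth-1) and crop the output back afterwards:

```python
import torch
import torch.nn.functional as F


def pad_to_multiple(x, multiple=16):
    """Pad an NCHW tensor on the right/bottom so H and W divide `multiple`."""
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad pads the last two dims as (left, right, top, bottom)
    return F.pad(x, (0, pad_w, 0, pad_h)), (h, w)


x = torch.randn(1, 13, 572, 572)
x_padded, (h, w) = pad_to_multiple(x, multiple=16)  # -> (1, 13, 576, 576)
# out = model(x_padded)[..., :h, :w]  # crop back to the original size
```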
https://stackoverflow.com/questions/60199088/
Pysyft Federated learning, Error with Websockets
I am trying to run federated learning from PySyft (https://github.com/OpenMined/PySyft/blob/dev/examples/tutorials/advanced/websockets-example-MNIST-parallel/Asynchronous-federated-learning-on-MNIST.ipynb) that creates remote workers and connects to them via websockets. However, I am getting an error in the following evaluation step.

```
future: <Task finished coro=<WebsocketServerWorker._producer_handler() done, defined at C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py:95> exception=AttributeError("'dict' object has no attribute 'owner'")>
Traceback (most recent call last):
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 663, in register_response
    register_response_function = register_response_functions[attr_id]
KeyError: 'evaluate'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py", line 113, in _producer_handler
    response = self._recv_msg(message)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\websocket_server.py", line 124, in _recv_msg
    return self.recv_msg(message)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\base.py", line 310, in recv_msg
    response = self._message_router[type(msg)](msg.contents)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\workers\base.py", line 457, in execute_command
    command_name, response, list(return_ids), self
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 672, in register_response
    new_response = register_response_function(response, response_ids=response_ids, owner=owner)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 766, in <lambda>
    return lambda x, **kwargs: f(lambdas, x, **kwargs)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 522, in two_fold
    return lambdas[0](args[0], **kwargs), lambdas[1](args[1], **kwargs)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 744, in <lambda>
    else lambda i, **kwargs: register_tensor(i, **kwargs)
  File "C:\Users\Public\Anaconda\lib\site-packages\syft\generic\frameworks\hook\hook_args.py", line 712, in register_tensor
    tensor.owner = owner
AttributeError: 'dict' object has no attribute 'owner'
```

There are no clear answers on their forum. Does anyone have any clue as to what the issue is in this script?

My syft version:

    syft : 0.2.3a1
    syft-proto : 0.1.1a1.post12
    torch : 1.4.0
I came across this problem as well and pushed a fix in https://github.com/OpenMined/PySyft/pull/2948
https://stackoverflow.com/questions/60202610/
Why can't my PyTorch model recognize the tensors I defined?
I just started learning PyTorch recently, and I am trying to write the same model as in a paper I read, for practice. This is the PDF of the paper I refer to: https://dl.acm.org/doi/pdf/10.1145/3178876.3186066?download=true

Here is the code that I wrote:

```python
class Tem(torch.nn.Module):
    def __init__(self, embedding_size, hidden_size):
        super(Tem, self).__init__()

        self.embedding_size = embedding_size
        self.hidden_size = hidden_size
        self.leaf_size = 0
        self.xgb_model = None
        self.vec_embedding = None
        self.multi_hot_Q = None
        self.user_embedding = torch.nn.Linear(1, embedding_size)
        self.item_embedding = torch.nn.Linear(1, embedding_size)

    def pretrain(self, ui_attributes, labels):
        print("Start XGBoost Training...")
        self.xgb_model = XGBoost(ui_attributes, labels)
        self.leaf_size = self.xgb_model.leaf_size
        self.vec_embedding = Variable(
            torch.rand(self.embedding_size, self.leaf_size, requires_grad=True))
        self.h = Variable(torch.rand(self.hidden_size, 1, requires_grad=True))
        self.att_w = Variable(
            torch.rand(2 * self.embedding_size, self.hidden_size, requires_grad=True))
        self.att_b = Variable(
            torch.rand(self.leaf_size, self.hidden_size, requires_grad=True))
        self.r_1 = Variable(torch.rand(self.embedding_size, 1, requires_grad=True))
        self.r_2 = Variable(torch.rand(self.embedding_size, 1, requires_grad=True))
        self.bias = Variable(torch.rand(1, 1, requires_grad=True))

    def forward(self, ui_ids, ui_attributes):
        if self.xgb_model == None:
            raise Exception("Please run Tem.pretrain() to pre-train XGBoost model first.")
        n_data = len(ui_ids)
        att_input = torch.FloatTensor(ui_attributes)
        self.multi_hot_Q = torch.FloatTensor(
            self.xgb_model.multi_hot(att_input)).permute(0, 2, 1)
        vq = self.vec_embedding * self.multi_hot_Q
        id_input = torch.FloatTensor(ui_ids)
        user_embedded = self.user_embedding(id_input[:, 0].reshape(n_data, 1))
        item_embedded = self.item_embedding(id_input[:, 1].reshape(n_data, 1))
        ui = (user_embedded * item_embedded).reshape(n_data, self.embedding_size, 1)
        ui_repeat = ui.repeat(1, 1, self.leaf_size)
        cross = torch.cat([ui_repeat, vq], dim=1).permute(0, 2, 1)
        re_cross = cross.reshape(cross.shape[0] * cross.shape[1], cross.shape[2])
        attention = torch.mm(re_cross, self.att_w)
        attention = F.leaky_relu(attention + self.att_b.repeat(n_data, 1))
        attention = torch.mm(attention, self.h).reshape(n_data, self.leaf_size)
        attention = F.softmax(attention).reshape(n_data, self.leaf_size, 1)
        attention = self.vec_embedding.permute(1, 0) * attention.repeat(1, 1, 20)
        pool = torch.max(attention, 1).values
        y_hat = self.bias.repeat(n_data, 1) \
            + torch.mm(ui.reshape(n_data, self.embedding_size), self.r_1) \
            + torch.mm(pool, self.r_2)
        y_hat = F.softmax(torch.nn.Linear(1, 2)(y_hat))
        return y_hat
```

My question is: it seems torch doesn't know which tensors should have their gradients computed in backward propagation.

```python
print(tem)

Tem(
  (user_embedding): Linear(in_features=1, out_features=20, bias=True)
  (item_embedding): Linear(in_features=1, out_features=20, bias=True)
)
```

I googled this problem. Someone says those tensors should use torch.autograd.Variable(), but it didn't solve my problem. And someone says autograd directly supports tensors now, so torch.autograd.Variable() is not necessary.
```python
loss_func = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adagrad(tem.parameters(), lr=0.02)

for t in range(20):
    prediction = tem(ids_train, att_train)
    loss = loss_func(prediction, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if t % 5 == 0:
        print("loss: ", loss)
```

    loss:  tensor(0.8133, grad_fn=<NllLossBackward>)
    loss:  tensor(0.8133, grad_fn=<NllLossBackward>)
    loss:  tensor(0.8133, grad_fn=<NllLossBackward>)
    loss:  tensor(0.8133, grad_fn=<NllLossBackward>)
Your problem is not related to Variable. As you said, it's not necessary anymore. To compute the gradients of a tensor declared in a model (one that extends nn.Module), you need to include it in the model's parameters using nn.Parameter(). For example, to include self.h, you can do:

```python
self.h = nn.Parameter(torch.zeros(10, 10))
```

Now, when you call loss.backward(), it will collect the gradient for this variable (of course, loss must depend on self.h).
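Applied to the question's pretrain method, a hedged sketch (keeping the original shapes, which come from the XGBoost model) would register every learnable tensor as a parameter:

```python
import torch
import torch.nn as nn


def pretrain(self, ui_attributes, labels):
    self.xgb_model = XGBoost(ui_attributes, labels)  # as in the question
    self.leaf_size = self.xgb_model.leaf_size
    # Assigning nn.Parameter to a module attribute registers it in
    # self.parameters(), so the optimizer sees it and autograd tracks it.
    self.vec_embedding = nn.Parameter(torch.rand(self.embedding_size, self.leaf_size))
    self.h = nn.Parameter(torch.rand(self.hidden_size, 1))
    self.att_w = nn.Parameter(torch.rand(2 * self.embedding_size, self.hidden_size))
    self.att_b = nn.Parameter(torch.rand(self.leaf_size, self.hidden_size))
    self.r_1 = nn.Parameter(torch.rand(self.embedding_size, 1))
    self.r_2 = nn.Parameter(torch.rand(self.embedding_size, 1))
    self.bias = nn.Parameter(torch.rand(1, 1))
```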
https://stackoverflow.com/questions/60205023/
Pytorch transforms.RandomRotation() does not work on Google Colab
Normally I was working on letter & digit recognition on my computer, and I wanted to move my project to Colab, but unfortunately there was an error (you can see the error below). After some debugging I found which line is giving me the error:

```python
transforms.RandomRotation(degrees=(90, -90))
```

Below I wrote simple abstract code to show this error. This code does not work on Colab but it works fine in my own computer environment. The problem might be about the different versions of the PyTorch library: I have version 1.3.1 on my computer and Colab uses version 1.4.0.

```python
import torch
import torchvision
from torchvision import datasets, transforms
import matplotlib.pyplot as plt

transformOpt = transforms.Compose([
    transforms.RandomRotation(degrees=(90, -90)),
    transforms.ToTensor()
])

train_set = datasets.MNIST(
    root='', train=True, transform=transformOpt, download=True)
test_set = datasets.MNIST(
    root='', train=False, transform=transformOpt, download=True)

train_loader = torch.utils.data.DataLoader(
    dataset=train_set,
    batch_size=100,
    shuffle=True)
test_loader = torch.utils.data.DataLoader(
    dataset=test_set,
    batch_size=100,
    shuffle=False)

images, labels = next(iter(train_loader))
plt.imshow(images[0].view(28, 28), cmap="gray")
plt.show()
```

The full error I got when I executed this sample code above on Google Colab:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-1-8409db422154> in <module>()
     24     shuffle=False)
     25
---> 26 images, labels = next(iter(train_loader))
     27 plt.imshow(images[0].view(28, 28), cmap="gray")
     28 plt.show()

10 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    343
    344     def __next__(self):
--> 345         data = self._next_data()
    346         self._num_yielded += 1
    347         if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
    383     def _next_data(self):
    384         index = self._next_index()  # may raise StopIteration
--> 385         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    386         if self._pin_memory:
    387             data = _utils.pin_memory.pin_memory(data)

/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

/usr/local/lib/python3.6/dist-packages/torchvision/datasets/mnist.py in __getitem__(self, index)
     95
     96         if self.transform is not None:
---> 97             img = self.transform(img)
     98
     99         if self.target_transform is not None:

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in __call__(self, img)
     68     def __call__(self, img):
     69         for t in self.transforms:
---> 70             img = t(img)
     71         return img
     72

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py in __call__(self, img)
   1001         angle = self.get_params(self.degrees)
   1002
-> 1003         return F.rotate(img, angle, self.resample, self.expand, self.center, self.fill)
   1004
   1005     def __repr__(self):

/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py in rotate(img, angle, resample, expand, center, fill)
    727         fill = tuple([fill] * 3)
    728
--> 729     return img.rotate(angle, resample, expand, center, fillcolor=fill)
    730
    731

/usr/local/lib/python3.6/dist-packages/PIL/Image.py in rotate(self, angle, resample, expand, center, translate, fillcolor)
   2003         w, h = nw, nh
   2004
-> 2005         return self.transform((w, h), AFFINE, matrix, resample, fillcolor=fillcolor)
   2006
   2007     def save(self, fp, format=None, **params):

/usr/local/lib/python3.6/dist-packages/PIL/Image.py in transform(self, size, method, data, resample, fill, fillcolor)
   2297             raise ValueError("missing method data")
   2298
-> 2299         im = new(self.mode, size, fillcolor)
   2300         if method == MESH:
   2301             # list of quads

/usr/local/lib/python3.6/dist-packages/PIL/Image.py in new(mode, size, color)
   2503         im.palette = ImagePalette.ImagePalette()
   2504         color = im.palette.getcolor(color)
-> 2505     return im._new(core.fill(mode, size, color))
   2506
   2507

TypeError: function takes exactly 1 argument (3 given)
```
You're absolutely correct. torchvision 0.5 has a bug in RandomRotation() in the fill argument, probably due to an incompatible Pillow version. This issue has now been fixed (PR #1760) and will be resolved in the next release.

Temporarily, add fill=(0,) to the RandomRotation transform to fix it:

```python
transforms.RandomRotation(degrees=(90, -90), fill=(0,))
```
https://stackoverflow.com/questions/60205829/
FileNotFoundError in ...\.cache\ using SentenceTransformer
Does anyone have experience working with SentenceTransformer (BERT)? My code:

```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('roberta-large-nli-stsb-mean-tokens')
```

My error:

    FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\ga2943/.cache\\torch\\sentence_transformers\\public.ukp.informatik.tu-darmstadt.de_reimers_sentence-transformers_v0.2_roberta-large-nli-stsb-mean-tokens.zip\\modules.json'

Once in a while I get a different error running the same code with another BERT model:

    PermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\Users\\ga2943/.cache\\torch\\sentence_transformers\\public.ukp.informatik.tu-darmstadt.de_reimers_sentence-transformers_v0.2_bert-large-nli-cls-token.zip\\model.zip'

(This translates to: the file is being used by another process at the time, which can't be the case. I restarted my machine to ensure all programs were closed in the background, but no difference.)

The first time I ran my code it worked: it downloaded the sentence encoder I wanted (a different BERT model). Now that I want to test a different model, it does not download it but gives me the errors. The funny thing is, on a different computer I was able to load a different model, but it resulted in the same situation: only the first model ever downloaded works on a given machine. Any ideas? Still getting the error; does anyone know how to fix it?
I had to re-install everything all over again to solve the issue. I can't put my finger on what caused the error. As I did not delete or update anything myself, I am assuming anaconda patched something without letting me know. The second error still remains and occurs unpredictably from time to time.
https://stackoverflow.com/questions/60223717/
Use of scheduler with self-ajusting optimizers in PyTorch
In PyTorch, the weight adjustment policy is determined by the optimizer, and the learning rate is adjusted with a scheduler. When the optimizer is SGD, there is only one learning rate and this is straightforward. When using Adagrad, Adam, or any similar optimizer which inherently adjusts the learning rate on a per-parameter basis, is there something in particular to look out for? Can I ignore the scheduler completely since the algorithm adjusts its own learning rates? Should I parameterize it very differently than if I'm using SGD?
The learning rate you define for optimizers like Adam is an upper bound. You can see this in the paper in Section 2.1. The stepsize α in the paper is the learning rate:

    The effective magnitude of the steps taken in parameter space at each timestep are approximately bounded by the stepsize setting α.

Also, this stepsize α is directly used and multiplied with the step size correction, which is learned. So changing the learning rate, e.g. reducing it, will reduce all the individual learning rates and lower the upper bound. This can be helpful during the "end" of training, to reduce the overall step sizes, so only smaller steps occur and the network might more easily find a minimum of the loss function.

I have seen learning rate decay in some papers using Adam and used it myself, and it did help. What I found is that you should do it more slowly than with SGD. With one model I just multiply it by 0.8 every 10 epochs. So it is a gradual decay, which I think works better than more drastic steps, since you don't "invalidate" the estimated momentums too much. But this is just my theory.
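The "multiply by 0.8 every 10 epochs" schedule described above maps directly onto torch.optim.lr_scheduler.StepLR. A short hedged sketch (the model and the training routine are assumed):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# gamma=0.8 every 10 epochs: a gradual decay of Adam's upper-bound stepsize
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)

for epoch in range(100):
    train_one_epoch(model, optimizer)  # assumed training routine
    scheduler.step()                   # decay the (upper-bound) learning rate
```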
https://stackoverflow.com/questions/60229897/
Pytorch torch.cholesky ignoring exception
For some matrices in my batch I'm having an exception due to the matrix being singular.

```python
L = th.cholesky(Xt.bmm(X))
```

    cholesky_cpu: For batch 51100: U(22,22) is zero, singular U

Since they are few for my use case, I would like to ignore the exception and deal with them later. I will set the resulting calculation to NaN; is that possible somehow? Actually, if I catch the exception and use continue, it still doesn't finish the calculation of the rest of the batch. The same happens in C++ with PyTorch's libtorch.
It's not possible to catch the exception, according to the PyTorch Discuss forum. The solution, unfortunately, was to implement my own simple batched Cholesky (the equivalent of th.cholesky(..., upper=False)) and then deal with NaN values using th.isnan.

```python
import torch as th


# nograd cholesky
def cholesky(A):
    L = th.zeros_like(A)

    for i in range(A.shape[-1]):
        for j in range(i + 1):
            s = 0.0
            for k in range(j):
                s = s + L[..., i, k] * L[..., j, k]

            L[..., i, j] = th.sqrt(A[..., i, i] - s) if (i == j) else \
                (1.0 / L[..., j, j] * (A[..., i, j] - s))
    return L
```
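The NaN handling the answer alludes to could then look like this (a hedged sketch; what you do with the failed batch entries is up to your use case):

```python
A = Xt.bmm(X)    # shape (batch, n, n), as in the question
L = cholesky(A)  # singular entries now produce NaN instead of raising

# mark batch elements whose factorization failed
bad = th.isnan(L).flatten(start_dim=1).any(dim=1)  # shape (batch,)
L_ok = L[~bad]   # keep only the well-conditioned factorizations
```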
https://stackoverflow.com/questions/60230464/
PyTorch tensor indexing or conditional selection?
```python
for c in range(self.n_class):
    target[c][label == c] = 1
```

self.n_class is 32, and target is a 32 x 1024 x 2048 tensor. I know that target[c] selects one of the 1 x 1024 x 2048 slices. But I don't understand [label == c], because as a rule of thumb, an integer should go inside the square brackets. Could someone explain what the second pair of brackets does and how it makes sense?
PyTorch supports "advanced indexing": the [] operator can accept a tensor argument. The result of the == operator is a boolean mask, and the [] operator uses that mask to select elements. The example below might help clarify:

```python
>>> x = torch.arange(0, 10)
>>> x
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> x < 5
tensor([ True,  True,  True,  True,  True, False, False, False, False, False])
>>> x[x < 5]
tensor([0, 1, 2, 3, 4])
>>> x[x > 5]
tensor([6, 7, 8, 9])
```

Some general docs: https://www.pythonlikeyoumeanit.com/Module3_IntroducingNumpy/BasicIndexing.html

Advanced indexing in numpy: https://numpy.org/doc/1.18/reference/arrays.indexing.html
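Tying this back to the question's loop: it is building a one-hot encoding of label, writing 1 wherever label equals the class index. A hedged one-call equivalent (assuming label is a LongTensor of class indices in [0, n_class)):

```python
import torch.nn.functional as F

# label: (1024, 2048) tensor of class indices
# one_hot gives (1024, 2048, 32); permute to target's (32, 1024, 2048) layout
target = F.one_hot(label, num_classes=32).permute(2, 0, 1).float()
```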
https://stackoverflow.com/questions/60232334/
How can I implement basic question answering with hugging-face?
I have:

```python
from transformers import XLNetTokenizer, XLNetForQuestionAnswering
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnswering.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode(
    "What is my name?", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions,
                end_positions=end_positions)
loss = outputs[0]

print(outputs)
print(loss)
```

as per the docs. This does something, giving:

    (tensor(2.3008, grad_fn=<DivBackward0>),)
    tensor(2.3008, grad_fn=<DivBackward0>)

However, I want an actual answer, if possible?
Thanks to Joe Davison for providing the answer on Twitter:

```python
from transformers import pipeline

qa = pipeline('question-answering')
response = qa(context='I like to eat apples, but hate bananas.',
              question='What do I like?')
print(response)
```

gives a response of:

    {'score': 0.282511100858045, 'start': 31, 'end': 38, 'answer': 'bananas.'}

Not quite right, but at least the score is low.
https://stackoverflow.com/questions/60232485/
Too slow first run TorchScript model and its implementation in Flask
I'm trying to deploy a TorchScript model in Python and Flask. As I realized (at least as mentioned here), scripted models need to be "warmed up" before use, so the first run of such models takes much longer than subsequent ones. My question is: is there any way to load TorchScript models in a Flask route and predict without the "warm-up" time loss? Can I store a "warmed-up" model somewhere to avoid warming it up in every request?

I wrote simple code that reproduces the "warm-up" pass:

```python
import torchvision, torch, time

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model = torch.jit.script(model)
model.eval()

x = [torch.randn((3, 224, 224))]
for i in range(3):
    start = time.time()
    model(x)
    print('Time elapsed: {}'.format(time.time() - start))
```

Output:

    Time elapsed: 38.29
    Time elapsed: 6.65
    Time elapsed: 6.65

And the Flask code:

```python
import torch, torchvision, os, time
from flask import Flask

app = Flask(__name__)


@app.route('/')
def test_scripted_model(path='/tmp/scripted_model.pth'):
    if os.path.exists(path):
        model = torch.jit.load(path, map_location='cpu')
    else:
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
        model = torch.jit.script(model)
        torch.jit.save(model, path)
    model.eval()

    x = [torch.randn((3, 224, 224))]

    out = ''
    for i in range(3):
        start = time.time()
        model(x)
        out += 'Run {} time: {};\t'.format(i + 1, round((time.time() - start), 2))
    return out


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=False)
```

Output:

    Run 1 time: 46.01; Run 2 time: 8.76; Run 3 time: 8.55;

OS: Ubuntu 18.04 & Windows 10
Python version: 3.6.9
Flask: 1.1.1
Torch: 1.4.0
Torchvision: 0.5.0

Update: Solved the "warm-up" problem with:

```python
with torch.jit.optimized_execution(False):
    model(x)
```

Update 2: Solved the Flask problem (as mentioned below) by creating a global Python model object before the server starts and warming it up there. Then in each request the model is ready to use.

```python
model = torch.jit.load(path, map_location='cpu').eval()
model(x)
app = Flask(__name__)
```

and then in @app.route:

```python
@app.route('/')
def test_scripted_model():
    global model
    ...
```
Can I store somewhere "warm-uped" model to avoid warming-up in every request? Yes, just instantiate your model outside of the test_scripted_model function and refer to it from within the function.
https://stackoverflow.com/questions/60232846/
How to make a Truncated normal distribution in pytorch?
I want to create a truncated normal distribution (that is, a Gaussian distribution with a range) in PyTorch. I want to be able to change the mean, std, and range. Is there a PyTorch method for that?
Use torch.nn.init.trunc_normal_. Description as given here:

Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution $\mathcal{N}(\text{mean}, \text{std}^2)$ with values outside $[a, b]$ redrawn until they are within the bounds. The method used for generating the random values works best when $a \leq \text{mean} \leq b$.
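A short usage sketch (the shape and bounds here are arbitrary, illustrative values):

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
# mean 0, std 1, truncated to the range [-2, 2]
nn.init.trunc_normal_(w, mean=0.0, std=1.0, a=-2.0, b=2.0)
print(((w >= -2) & (w <= 2)).all())  # tensor(True)
```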
https://stackoverflow.com/questions/60233216/
PyTorch Geometric CUDA installation issues on Google Colab
I was working on a PyTorch Geometric project using Google Colab for CUDA support. Since the library isn't present by default, I run:

```
!pip install --upgrade torch-scatter
!pip install --upgrade torch-sparse
!pip install --upgrade torch-cluster
!pip install --upgrade torch-spline-conv
!pip install torch-geometric
```

Recently, while importing torch_geometric, owing to version upgrades, there's a CUDA version mismatch, saying:

    RuntimeError: Detected that PyTorch and torch_sparse were compiled with different CUDA versions. PyTorch has CUDA version 10.1 and torch_sparse has CUDA version 10.0. Please reinstall the torch_sparse that matches your PyTorch install.

To solve this, I tried using conda for a specific CUDA version, as:

```
!conda install pytorch==1.4.0 cudatoolkit=10.0 -c pytorch
```

Yet, on running print(torch.version.cuda), I get 10.1 as the output and not 10.0 as I wanted. This is a recent error, since it wasn't throwing up this issue in the past week. Any best practice to solve this issue?
From their website, try this:

```
!pip install torch-geometric \
  torch-sparse==latest+cu101 \
  torch-scatter==latest+cu101 \
  torch-cluster==latest+cu101 \
  -f https://pytorch-geometric.com/whl/torch-1.4.0.html
```
https://stackoverflow.com/questions/60236134/
conv2d(): argument 'input' (position 1) must be Tensor, not str in loop function
I am trying to write a loop function to extract features from my data and save them in a list. Here is my code:

```python
import pickle

output_features = []

with torch.no_grad():
    for (inputs, labels) in dataloaders.items():
        x = feature_extractor(inputs)
        output_features.append(x)
    output_features = torch.cat(output_features).numpy
    pickle.dump(output_features, open("features.pkl", "w"))
```

But I have an error:

    conv2d(): argument 'input' (position 1) must be Tensor, not str

Then I changed it to:

```python
with torch.no_grad():
    for i, (inputs, labels) in enumerate(dataloaders['val']):
        x = feature_extractor(inputs)
        output_features.append(x)
        output_features = torch.cat(output_features).numpy
    pickle.dump(output_features, open("features.pkl", "wb"))
```

Then I got this error:

    'builtin_function_or_method' object has no attribute 'append'

Please, someone, explain to me why I got these errors and how to get the features from each input with its label and save them in an array. Thank you!
For the first issue, it really depends on how the data was saved into the dataloader, but by convention I'm guessing it holds tuples of (input tensor, label), which can simply be accessed by `for (inputs, labels) in dataloader:`. The enumeration isn't compulsory, unless you would like to keep count of each iteration. (Note that `dataloaders.items()` iterates over a dict's (key, value) pairs, so inputs ends up being a string key, which is why conv2d() complains about receiving a str.)

As for the second error, it is because you accidentally wrote `output_features = torch.cat(output_features).numpy`, which assigns the built-in method itself instead of calling it, and rebinds output_features inside the loop. You can correct this to `output_features = torch.cat(output_features).numpy()` and you should be able to get the output features fine!
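Putting both fixes together, a hedged corrected version of the loop (assuming dataloaders['val'] yields (inputs, labels) batches and feature_extractor is the question's model):

```python
import pickle
import torch

output_features = []

with torch.no_grad():
    for inputs, labels in dataloaders['val']:
        output_features.append(feature_extractor(inputs))

# concatenate once, after the loop, and actually call .numpy()
features = torch.cat(output_features).numpy()

with open("features.pkl", "wb") as f:
    pickle.dump(features, f)
```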
https://stackoverflow.com/questions/60237455/
pytorch RuntimeError: Expected object of scalar type Double but got scalar type Float
I am trying to implement a custom dataset for my neural network, but I got this error when running the forward function. The code is as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import numpy as np


class ParamData(Dataset):
    def __init__(self, file_name):
        self.data = torch.Tensor(np.loadtxt(file_name, delimiter=','))  # first place

    def __len__(self):
        return self.data.size()[0]

    def __getitem__(self, i):
        return self.data[i]


class Net(nn.Module):
    def __init__(self, in_size, out_size, layer_size=200):
        super(Net, self).__init__()
        self.layer = nn.Linear(in_size, layer_size)
        self.out_layer = nn.Linear(layer_size, out_size)

    def forward(self, x):
        x = F.relu(self.layer(x))
        x = self.out_layer(x)
        return x


datafile = 'data1.txt'

net = Net(100, 1)
dataset = ParamData(datafile)
n_samples = len(dataset)

#dataset = torch.Tensor(dataset, dtype=torch.double)  # second place
#net.float()  # third place

net.forward(dataset[0])  # fourth place
```

The file data1.txt is a CSV-formatted text file containing certain numbers, and each dataset[i] is a size 100 by 1 torch.Tensor object of dtype torch.float64. The error message is as follows:

```
Traceback (most recent call last):
  File "Z:\Wrong.py", line 33, in <module>
    net.forward(dataset[0])
  File "Z:\Wrong.py", line 23, in forward
    x = F.relu(self.layer(x))
  File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Python38\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "E:\Python38\lib\site-packages\torch\nn\functional.py", line 1372, in linear
    output = input.matmul(weight.t())
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm
```

It seems that I should change the dtype of the numbers in dataset to torch.double. I tried things like:

- changing the line at the first place to `self.data = torch.tensor(np.loadtxt(file_name, delimiter=','), dtype=torch.double)`
- changing the line at the fourth place to `net.forward(dataset[0].double())`
- uncommenting one of the two lines at the second or the third place

I think these are the solutions I have seen in similar questions, but they either give new errors or don't do anything. What should I do?

Update: I got it working by changing the first place to:

```python
self.data = torch.from_numpy(np.loadtxt(file_name, delimiter=',')).float()
```

which is weird because it is exactly the opposite of the error message. Is this a bug? I'd still like some explanation.
Now that I have more experience with pytorch, I think I can explain the error message. It seems that the line RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'mat2' in call to _th_mm is actually referring to the weights of the linear layer when the matrix multiplication is called. Since the input is double while the weights are float, the line output = input.matmul(weight.t()) expects its second operand — the weights — to also be double.
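For reference, a short sketch of the two ways to make the dtypes agree, assuming the Net class and data file from the question:

import numpy as np
import torch

data = torch.from_numpy(np.loadtxt('data1.txt', delimiter=','))  # float64 by default
net = Net(100, 1)

out = net(data[0].float())  # fix (a): cast the input down to float32, the default dtype of nn.Linear weights

net = net.double()          # fix (b): or cast all of the network's weights up to float64
out = net(data[0])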
https://stackoverflow.com/questions/60239051/
PyTorch MaxPool2D unexpected behavior with padding=1
I was playing around with MaxPool2D in PyTorch and discovered strange behavior when setting padding=1. Here is what I got: Code: import torch from torch.nn.functional import max_pool2d TEST = 1 def test_maxpool(negative=False, tnsr_size=2, kernel_size=2, stride=2, padding=0): """Test MaxPool2D. """ global TEST print(f'=== TEST {TEST} ===') print(*[f'{i[0]}: {i[1]}' for i in locals().items()], sep=' | ') inp = torch.arange(1., tnsr_size ** 2 + 1).reshape(1, tnsr_size, tnsr_size) inp = -inp if negative else inp print('In:') print(inp) out = max_pool2d(inp, kernel_size, stride, padding=padding) print('Out:') print(out) print() TEST += 1 test_maxpool() test_maxpool(True) test_maxpool(padding=1) test_maxpool(True, padding=1) Out: === TEST 1 === negative: False | tnsr_size: 2 | kernel_size: 2 | stride: 2 | padding: 0 In: tensor([[[1., 2.], [3., 4.]]]) Out: tensor([[[4.]]]) === TEST 2 === negative: True | tnsr_size: 2 | kernel_size: 2 | stride: 2 | padding: 0 In: tensor([[[-1., -2.], [-3., -4.]]]) Out: tensor([[[-1.]]]) === TEST 3 === negative: False | tnsr_size: 2 | kernel_size: 2 | stride: 2 | padding: 1 In: tensor([[[1., 2.], [3., 4.]]]) Out: tensor([[[1., 2.], [3., 4.]]]) === TEST 4 === negative: True | tnsr_size: 2 | kernel_size: 2 | stride: 2 | padding: 1 In: tensor([[[-1., -2.], [-3., -4.]]]) Out: tensor([[[-1., -2.], [-3., -4.]]]) Tests 1, 2, 3 are fine but Test 4 is weird, I expected to get [[0 0], [0 0]] tensor: In: [[-1 -2] [-3 -4]] + padding -> [[ 0 0 0 0] [ 0 -1 -2 0] [ 0 -3 -4 0] [ 0 0 0 0]] -> kernel_size=2, stride=2 -> [[0 0] [0 0]] According to Test 3 zero padding was used but Test 4 produced controversial result. What kind of padding (if any) was that? Why does MaxPool2D behave like that? pytorch 1.3.1
This was expected behavior since negative infinity padding is done by default. The documentation for MaxPool is now fixed. See this PR: Fix MaxPool default pad documentation #59404 .
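If zero padding is what you actually want, max_pool2d has no option for choosing the padding value, but you can pad explicitly first and then pool without padding — a sketch using the question's Test 4 input:

import torch
from torch.nn.functional import max_pool2d, pad

inp = torch.tensor([[[-1., -2.], [-3., -4.]]])
padded = pad(inp, (1, 1, 1, 1), mode='constant', value=0)  # explicit zero padding
out = max_pool2d(padded, kernel_size=2, stride=2, padding=0)
print(out)  # tensor([[[0., 0.], [0., 0.]]])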
https://stackoverflow.com/questions/60240434/
What is the meaning of the second output of Huggingface's Bert?
Using the vanilla configuration of base BERT model in the huggingface implementation, I get a tuple of length 2. import torch import transformers from transformers import AutoModel,AutoTokenizer bert_name="bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(bert_name) BERT = AutoModel.from_pretrained(bert_name) e=tokenizer.encode('I am hoping for the best', add_special_tokens=True) q=BERT(torch.tensor([e])) print (len(q)) #Output: 2 The first element is what I expect to receive - the 768 dimension embedding of each input token. print (e) #Output : [101, 1045, 2572, 5327, 2005, 1996, 2190, 102] print (q[0].shape) #Output : torch.Size([1, 8, 768]) But what is the second element in the tuple? print (q[1].shape) # torch.Size([1, 768]) It has the same size as the encoding of each token. But what is it? Maybe a copy of the [CLS] token, a representation for the classification of the entire encoded text? Let's check. a= q[0][:,0,:] b=q[1] print (torch.eq(a,b)) #Output : Tensor([[False, False, False, .... False]]) Nope! What about a copy of the embedding of the last token (for whatever reason)? c= q[0][:,-1,:] b=q[1] print (torch.eq(b,c)) #Output : Tensor([[False, False, False, .... False]]) So, also not that. The documentation talks about how changing the config can result in more tuple elements (like hidden states), but I did not find any description of this "mysterious" tuple element output by the default configuration. Any ideas as to what it is and what its usage is?
The output in this case is a tuple of (last_hidden_state, pooler_output). You can find documentation about what the returns could be here. The pooler_output is not a raw copy of the [CLS] hidden state: it is the [CLS] hidden state passed through an extra Linear layer plus a tanh activation, which is why your comparison with q[0][:, 0, :] came out False.
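You can verify this by reproducing the pooler by hand — a sketch assuming the BERT and q variables from the question (attribute names as in the transformers 2.x BertModel):

cls_vec = q[0][:, 0, :]                          # hidden state of the [CLS] token
pooled = torch.tanh(BERT.pooler.dense(cls_vec))  # the pooler: Linear + tanh
print(torch.allclose(pooled, q[1], atol=1e-6))   # True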
https://stackoverflow.com/questions/60243099/
How necessary are activation functions after dense layer in neural networks?
I'm currently training multiple recurrent convolutional neural networks with deep q-learning for the first time. Input is an 11x11x1 matrix; each network consists of 4 convolutional layers with dimensions 3x3x16, 3x3x32, 3x3x64, 3x3x64. I use stride=1 and padding=1. Each convLayer is followed by ReLU activation. The output is fed into a feedforward fully-connected dense layer with 128 units, and after that into an LSTM layer, also containing 128 units. Two subsequent dense layers produce separate advantage and value streams. Training has been running for a couple of days now, and I've just realized (after reading some related papers) that I didn't add an activation function after the first dense layer (as most of the papers do). I wonder if adding one would significantly improve my network? Since I'm training the networks for university, I don't have unlimited time for training because of a deadline for my work. However, I don't have enough experience in training neural networks to decide what to do... What do you suggest? I'm thankful for every answer!
Speaking generally, an activation function is what gives your network its non-linear properties. The purpose of an activation function is to add some kind of non-linear property to the function the neural network computes. Without activation functions, the neural network could perform only linear mappings from inputs x to outputs y. Why is this so? Without activation functions, the only mathematical operation during the forward propagation would be dot products between an input vector and a weight matrix. Since a single dot product is a linear operation, successive dot products would be nothing more than multiple linear operations repeated one after the other, and successive linear operations collapse into a single linear operation. A neural network without any activation function would therefore not be able to realize complex non-linear mappings mathematically and would not be able to solve the tasks we want the network to solve.
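The collapse of stacked linear layers into a single one is easy to check numerically; a small sketch:

import torch
import torch.nn as nn

x = torch.randn(5, 10)
f1 = nn.Linear(10, 20, bias=False)
f2 = nn.Linear(20, 3, bias=False)

stacked = f2(f1(x))                      # two linear layers, no activation in between
merged = x @ (f2.weight @ f1.weight).T   # the single equivalent linear map
print(torch.allclose(stacked, merged, atol=1e-6))  # True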
https://stackoverflow.com/questions/60245147/
Read multiple .gz files and return them in one tensor
I am trying to read multiple .gz files and return their content in one tensor as follows: with ReadHelper('ark: gunzip -c /home/mnabih/kaldi/egs/timit/s5/exp/mono_ali/*.gz|') as reader: for key, b in reader: #print(type(b)) c = torch.from_numpy(b) labels = torch.cat(c) Unfortunately, it gives me this error: cat(): argument 'tensors' (position 1) must be tuple of Tensors, not Tensor
As the error message explains, c is a tensor. To use torch.cat() you must pass a group of tensors or a list. To solve your problem you may use: temp = list() for key, b in reader: temp.append(torch.from_numpy(b)) labels = torch.cat(temp) For more, you can check the manual here
https://stackoverflow.com/questions/60265025/
How to fix multiprocessing problems in Python on Windows 10
I try to use this tutorial to train my own car model recognition model: https://github.com/Helias/Car-Model-Recognition. And i want to use coda and my gpu perfomance to enhance training speed (preprocesssing step was completed without any errors).But when I try to train my model, I've got the following errors: ######### ERROR ####### An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ######### batch ####### Traceback (most recent call last): File "D:\Car-Model-Recognition\main.py", line 78, in train_model ######### ERROR ####### [Errno 32] Broken pipe for i, batch in enumerate(loaders[mode]): ######### batch ####### File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__ return _MultiProcessingDataLoaderIter(self) Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__ File "main.py", line 78, in train_model w.start() File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start for i, batch in enumerate(loaders[mode]): File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__ self._popen = self._Popen(self) File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen return _MultiProcessingDataLoaderIter(self) File "C:\Program Files\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__ return _default_context.get_context().Process._Popen(process_obj) File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen w.start() return Popen(process_obj) File "C:\Program Files\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__ File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data self._popen = self._Popen(self) File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen _check_not_importing_main() File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main return _default_context.get_context().Process._Popen(process_obj) File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. return Popen(process_obj) I have used the exact code from given link, and if i start my code using wsl, everything is ok, but I can't use my gpu from wsl. Where should I insert this name == 'main' check to prevent such a mistake or how can i disable this multiprocessing
Looking at main.py, you run a lot of code at the module level. On Windows, python's multiprocessing module will start a new python interpreter, import your modules, unpickle a snapshot of your parent context and then call your worker function. The problem is that all of that module level code executes merely by import, and you essentially run a new copy of your program instead of building a context for your worker. The solution is two-fold. First, move all of the module level code into functions. You want to be able to import your module without side effects. Second, call the function(s) that start your program from a conditional: def main(): # the stuff you were doing at module level if __name__ == "__main__": main() The reason this works is in the module name. When you run the top level script of a python program (e.g., python main.py), it's a script called "__main__", not a module. If a different program imports main, it's a module called "main" (or whatever you named your script). That 'if' stops your main code from executing if it's imported by some other python code - such as the multiprocessing module. It's okay to have some executable code at the module level, especially if you are setting up defaults and such. But don't do anything at the module level that you wouldn't want done if some other code imports your script.
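A minimal runnable sketch of that structure with a multi-worker DataLoader (the toy data here is hypothetical, just to show where the guard goes):

import torch
from torch.utils.data import DataLoader, TensorDataset

def build_loader():
    ds = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
    return DataLoader(ds, batch_size=4, num_workers=4)  # workers are spawned on Windows

def main():
    for inputs, targets in build_loader():
        pass  # training step goes here

if __name__ == '__main__':  # spawned workers import this module,
    main()                  # but only the parent process runs main()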
https://stackoverflow.com/questions/60266256/
After finetuning Faster RCNN object detection model, how to visualize bbox prediction?
I finetuned pytorch torchvision model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) on my own custom dataset. I followed this guide https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#torchvision-object-detection-finetuning-tutorial but only trained Faster RCNN, not Mask RCNN. I successfully finished training with no error, and the model returned a dict containing predicted boxes, labels, and scores. In the guide I followed, they show how to visualize masks predicted by the model trained. Is there a similar method to visualize bounding box? I'm having a lot of trouble figuring this out. Thank you
The prediction from FasterRCNN is of the form: >>> predictions = model([input_img_tensor]) [{'boxes': tensor([[419.6865, 170.0683, 536.0842, 493.7452], [159.0727, 180.3606, 298.8194, 434.4604], [439.7836, 222.6208, 452.0138, 271.8359], [444.3562, 224.4628, 456.1511, 265.5336], [437.7808, 226.5965, 446.2904, 271.2691]], grad_fn=<StackBackward>), 'labels': tensor([ 1, 1, 32, 32, 32]), 'scores': tensor([0.9997, 0.9996, 0.5827, 0.2102, 0.0943], grad_fn=<IndexBackward>)}] where the predicted boxes are of [x1, y1, x2, y2] format, with values between 0 and H and 0 and W. You can use OpenCV's rectangle function to overlay bounding boxes on the image. import cv2 img = cv2.imread('input_image.png') img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) for i in range(len(predictions[0]['boxes'])): x1, y1, x2, y2 = map(int, predictions[0]['boxes'][i].tolist()) print(x1, y1, x2, y2) img = cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 1) cv2.imshow('img', img) cv2.waitKey(0)
https://stackoverflow.com/questions/60272086/
How to calculate a Forward Pass with matrix operations in PyTorch?
I have got an input x, layer 1 weight matrix, and layer 2 weight matrix. Now I want to calculate the output of this pre-trained neural network via hand: x * weights1 * weights2 While doing this I receive a RuntimeError: The size of tensor a (6) must match the size of tensor b (4) at non-singleton dimension 1 class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(4,6) self.fc2 = nn.Linear(6,2) self.fc3 = nn.Linear(2,1) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) return x net = Net() X = torch.randn(1000,4) net.fc2.weight*(net.fc1.weight * X[0])
You are confusing element-wise multiplication (* operator) with matrix multiplication (@ operator). Try: net.fc2.weight @ (net.fc1.weight @ X[0])
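Note that x * weights1 * weights2 also leaves out the bias terms and the ReLUs, so even with @ it won't reproduce net(x). A sketch of the full hand computation, assuming the Net and X from the question:

import torch
import torch.nn.functional as F

x = X[0]                                         # shape (4,)
h1 = F.relu(net.fc1.weight @ x + net.fc1.bias)   # (6,)
h2 = F.relu(net.fc2.weight @ h1 + net.fc2.bias)  # (2,)
y = F.relu(net.fc3.weight @ h2 + net.fc3.bias)   # (1,)
print(torch.allclose(y, net(x)))                 # True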
https://stackoverflow.com/questions/60275705/
Cuda and pytorch memory usage
I am using CUDA and Pytorch 1.4.0. When I try to increase batch_size, I get the following error: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.74 GiB already allocated; 7.80 MiB free; 2.96 GiB reserved in total by PyTorch) I haven't found anything about Pytorch memory usage. Also, I don't understand why I have only 7.80 MiB available? Should I just use a video card with better performance, or can I free some memory? FYI, I have a GTX 1050 Ti, Python 3.7 and torch==1.4.0, and my OS is Windows 10.
I had the same problem; the following worked for me: torch.cuda.empty_cache() # start training from here If you still get the error after this, you should decrease the batch_size.
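If you can't free enough memory, a common workaround is to keep the small batch size that fits and accumulate gradients so the effective batch is larger — a sketch, assuming the model, criterion, optimizer and train_loader from your training script:

accum = 4                                  # effective batch = accum * batch_size
optimizer.zero_grad()
for i, (inputs, targets) in enumerate(train_loader):
    loss = criterion(model(inputs.cuda()), targets.cuda()) / accum
    loss.backward()                        # gradients add up across iterations
    if (i + 1) % accum == 0:
        optimizer.step()
        optimizer.zero_grad()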
https://stackoverflow.com/questions/60276672/
Is there any tool to profile each layer of a DNN in Pytorch and Tensorflow?
I want to know the execution time of each layer of a DNN. Is there any tool that can profile each layer? I know I can insert some print statements in a model, but I am not sure that will be accurate.
To check the size and parameters of each layer you can use the torchsummary package. To measure the execution time of each layer, you can use torchprof. I don't know any project that merges both libs. Maybe it's your opportunity (lol)
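PyTorch also ships a built-in profiler that can give you timings without extra packages — a sketch (MyModel and the input shape are placeholders for your own setup):

import torch

model = MyModel()
x = torch.randn(1, 3, 224, 224)

with torch.autograd.profiler.profile() as prof:
    model(x)
print(prof.key_averages().table(sort_by="cpu_time_total"))

It reports times per operator rather than per layer, but that is often enough to spot the bottleneck.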
https://stackoverflow.com/questions/60286286/
Extracting the encoded representations of an image in PyTorch?
I have a template autoencoder neural net model made in PyTorch that I'm using on the Omniglot dataset. I'd like to extract the encoded representations of the image, but I'm unsure how. # Load data mean = 0.5 std = 0.5 batch_size = 128 img_transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((mean,), (std,)) ]) dataset = Omniglot('.', download=True, transform=img_transform) dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True) # Define Autoencoder class Autoencoder(nn.Module): def __init__(self, n=64): super(Autoencoder, self).__init__() self.encoder = nn.Sequential( nn.Linear(105*105, 256, bias=True), nn.ReLU(True), nn.Linear(256, 64, bias=True), nn.ReLU(True), nn.Linear(64, n, bias=True), nn.ReLU(True) ) self.decoder = nn.Sequential( nn.Linear(n, 64, bias=True), nn.ReLU(True), nn.Linear(64, 256, bias=True), nn.ReLU(True), nn.Linear(256, 105*105, bias=True), nn.Tanh() ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x def train(num_epochs, dataloader, model, criterion, optimizer): for epoch in range(num_epochs): for data in dataloader: img, label = data img = img.view(img.size(0), -1) img = Variable(img).cuda() output = model(img) loss = criterion(output, img) optimizer.zero_grad() loss.backward() optimizer.step() return model # Train model num_epochs = 25 learning_rate = 1e-3 model = Autoencoder().cuda() criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) trained_model = train(num_epochs, dataloader, model, criterion, optimizer)
You can simply return the encoded output in the forward function as follows: class Autoencoder(nn.Module): ... def forward(self, x): x = self.encoder(x) encoded_x = x x = self.decoder(x) return x, encoded_x Modify the training function a little bit: output, encoded_output = model(img) OR you can simply call encoder: encoded_output = model.encoder(img)
https://stackoverflow.com/questions/60292403/
Pytorch custom Dataset class giving wrong output
I am trying to use this class I built for a dataset but it saying that it should be a PIL or ndarray. Im not quite sure whats wrong with it. Here is the class that I am using class RotateDataset(Dataset): def __init__(self, image_list, size,transform = None): self.image_list = image_list self.size = size self.transform = transform def __len__(self): return len(self.image_list) def __getitem__(self, idx): img = cv2.imread(self.image_list[idx]) image_height, image_width = img.shape[:2] print("ID: ", idx) if idx % 2 == 0: label = 0 # Set label # chose negative or positive rotation rotation_degree = random.randrange(35, 50, 1) posnegrot = np.random.randint(2) if posnegrot == 0: #positive rotation #rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2), rotation_degree, 1) #img = cv2.warpAffine(img, rotation_matrix, (num_cols, num_rows)) img = rotate_image(img, rotation_degree) img = crop_around_center(img, *largest_rotated_rect(image_width, image_height, math.radians(rotation_degree))) else: # Negative rotation rotation_degree = -rotation_degree img = crop_around_center(img, *largest_rotated_rect(image_width, image_height, math.radians(rotation_degree))) else: label = 1 img = cv2.resize(img, self.size, cv2.INTER_AREA) return self.transform(img), self.transform(label) The error that it is giving me is TypeError: pic should be PIL Image or ndarray. Got class 'int' It should give me a img (tensor) and a label (tensor) but I dont think it is doing it correctly. TypeError Traceback (most recent call last) <ipython-input-34-f47943b2600c> in <module> 2 train_loss = 0.0 3 net.train() ----> 4 for image, label in enumerate(train_loader): 5 if train_on_gpu: 6 image, label = image.cuda(), label.cuda() ~\Anaconda3\envs\TF2\lib\site-packages\torch\utils\data\dataloader.py in __next__(self) 343 344 def __next__(self): --> 345 data = self._next_data() 346 self._num_yielded += 1 347 if self._dataset_kind == _DatasetKind.Iterable and \ ~\Anaconda3\envs\TF2\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self) 383 def _next_data(self): 384 index = self._next_index() # may raise StopIteration --> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 386 if self._pin_memory: 387 data = _utils.pin_memory.pin_memory(data) ~\Anaconda3\envs\TF2\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] ~\Anaconda3\envs\TF2\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] <ipython-input-28-6c77357ff619> in __getitem__(self, idx) 35 label = 1 36 img = cv2.resize(img, self.size, cv2.INTER_AREA) ---> 37 return self.transform(img), self.transform(label) ~\Anaconda3\envs\TF2\lib\site-packages\torchvision\transforms\transforms.py in __call__(self, pic) 99 Tensor: Converted image. 100 """ --> 101 return F.to_tensor(pic) 102 103 def __repr__(self): ~\Anaconda3\envs\TF2\lib\site-packages\torchvision\transforms\functional.py in to_tensor(pic) 53 """ 54 if not(_is_pil_image(pic) or _is_numpy(pic)): ---> 55 raise TypeError('pic should be PIL Image or ndarray. 
Got {}'.format(type(pic))) 56 57 if _is_numpy(pic) and not _is_numpy_image(pic): TypeError: pic should be PIL Image or ndarray. Got <class 'int'>
As discussed in the comments, the problem was applying the transform to the label as well. The label should instead simply be converted to a tensor: return self.transform(img), torch.tensor(label)
https://stackoverflow.com/questions/60293600/
How are neural networks, loss and optimizer connected in PyTorch?
I've seen answers to this question, but I still don't understand it at all. As far as I know, this is the most basic setup: net = CustomClassInheritingFromModuleWithDefinedInitAndForward() criterion = nn.SomeLossClass() optimizer = optim.SomeOptimizer(net.parameters(), ...) for _, data in enumerate(trainloader, 0): inputs, labels = data optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() What I don't understand is: Optimizer is initialized with net.parameters(), which I thought are internal weights of the net. Loss does not access these parameters nor the net itself. It only has access to net's outputs and input labels. Optimizer does not access loss either. So if loss only works on outputs and optimizer only on net.parameters, how can they be connected?
Optimizer is initialized with net.parameters(), which I thought are internal weights of the net. This is because the optimizer will modify the parameters of your net during the training. Loss does not access these parameters nor the net itself. It only has access to net's outputs and input labels. The loss only computes an error between a prediction and the truth. Optimizer does not access loss either. It accesses the gradients that loss.backward() computed and stored in the .grad attribute of each parameter.
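Putting it together, the connection is the autograd graph plus the per-parameter .grad fields — a sketch using the names from your snippet:

outputs = net(inputs)              # forward pass builds the autograd graph
loss = criterion(outputs, labels)  # loss is a node attached to that graph
loss.backward()                    # autograd walks the graph back to the
                                   # parameters and fills each p.grad
for p in net.parameters():
    assert p.grad is not None      # gradients now live on the parameters themselves
optimizer.step()                   # reads each p.grad and updates p in place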
https://stackoverflow.com/questions/60296637/
How to feed the output of a finetuned bert model as input to another finetuned bert model?
I finetuned two separate bert model (bert-base-uncased) on sentiment analysis and pos tagging tasks. Now, I want to feed the output of the pos tagger (batch, seqlength, hiddensize) as input to the sentiment model.The original bert-base-uncased model is in 'bertModel/' folder which contains 'model.bin' and 'config.json'. Here is my code: class DeepSequentialModel(nn.Module): def __init__(self, sentiment_model_file, postag_model_file, device): super(DeepSequentialModel, self).__init__() self.sentiment_model = SentimentModel().to(device) self.sentiment_model.load_state_dict(torch.load(sentiment_model_file, map_location=device)) self.postag_model = PosTagModel().to(device) self.postag_model.load_state_dict(torch.load(postag_model_file, map_location=device)) self.classificationLayer = nn.Linear(768, 1) def forward(self, seq, attn_masks): postag_context = self.postag_model(seq, attn_masks) sent_context = self.sentiment_model(postag_context, attn_masks) logits = self.classificationLayer(sent_context) return logits class PosTagModel(nn.Module): def __init__(self,): super(PosTagModel, self).__init__() self.bert_layer = BertModel.from_pretrained('bertModel/') self.classificationLayer = nn.Linear(768, 43) def forward(self, seq, attn_masks): cont_reps, _ = self.bert_layer(seq, attention_mask=attn_masks) return cont_reps class SentimentModel(nn.Module): def __init__(self,): super(SentimentModel, self).__init__() self.bert_layer = BertModel.from_pretrained('bertModel/') self.cls_layer = nn.Linear(768, 1) def forward(self, input, attn_masks): cont_reps, _ = self.bert_layer(encoder_hidden_states=input, encoder_attention_mask=attn_masks) cls_rep = cont_reps[:, 0] return cls_rep But I get the below error. I appreciate it if someone could help me. Thanks! cont_reps, _ = self.bert_layer(encoder_hidden_states=input, encoder_attention_mask=attn_masks) result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'
To formulate this as an answer, too, and keep it properly visible for future visitors: the forward() call in transformers does not support these arguments in version 2.1.1, or any earlier version, for that matter. Note that the link in my comment in fact points to a different forward function, but otherwise the point still holds. Passing encoder_hidden_states to forward() was first possible in version 2.2.0.
https://stackoverflow.com/questions/60297908/
Use PyTorch to build a multi-linear model, but the result I get is not what I want?
import sklearn from sklearn.datasets import load_boston import torch import torch.nn as nn boston = load_boston() num_epochs = 10 linear_model = nn.Linear(13,1,bias = True) criterion = nn.MSELoss() optimizer = torch.optim.SGD(linear_model.parameters(), lr=0.01) for epoch in range(num_epochs): inputs = torch.from_numpy(boston.data) targets = torch.from_numpy(boston.target) targets = targets.float() inputs = inputs.float() outputs = linear_model(inputs) loss = criterion(outputs,targets) optimizer.zero_grad() loss.backward() optimizer.step() print('epoch:{}/{} ...... loss:{:.4f}'.format(epoch,num_epochs,loss.item()))
The problem with your code seems to be data normalization, or, the lack of it. I edited your code to add data normalization ((x - mean) / std), set epochs to 50 and switched the optimiser to Adam so it converges faster. import sklearn from sklearn.datasets import load_boston import torch import numpy as np import torch.nn as nn def normalize(X): mean = np.mean(X) std = np.std(X) return ((X - mean) / std), mean, std boston = load_boston() m_in = np.zeros(13) s_in = np.zeros(13) b_in = boston.data b_out = boston.target for i in range(13): b_in[:, i], m_in[i], s_in[i] = normalize(b_in[:, i]) b_out, m_out, s_out = normalize(b_out) num_epochs = 50 linear_model = nn.Linear(13, 1, bias=True) criterion = nn.MSELoss() optimizer = torch.optim.Adam(linear_model.parameters(), lr=0.01) linear_model.train() for epoch in range(num_epochs): inputs = torch.from_numpy(b_in) targets = torch.from_numpy(b_out) targets = targets.float().unsqueeze(1) inputs = inputs.float() outputs = linear_model(inputs) loss = criterion(outputs, targets) optimizer.zero_grad() loss.backward() optimizer.step() print('epoch: {}/{} ...... loss: {:.4f}'.format(epoch, num_epochs, loss.item())) The new output looks like this: epoch: 0/50 ...... loss: 1.2610 epoch: 1/50 ...... loss: 1.1395 epoch: 2/50 ...... loss: 1.0332 epoch: 3/50 ...... loss: 0.9421 epoch: 4/50 ...... loss: 0.8660 epoch: 5/50 ...... loss: 0.8040 epoch: 6/50 ...... loss: 0.7548 epoch: 7/50 ...... loss: 0.7164 epoch: 8/50 ...... loss: 0.6863 epoch: 9/50 ...... loss: 0.6623 epoch: 10/50 ...... loss: 0.6421 epoch: 11/50 ...... loss: 0.6240 epoch: 12/50 ...... loss: 0.6068 epoch: 13/50 ...... loss: 0.5895 epoch: 14/50 ...... loss: 0.5717 epoch: 15/50 ...... loss: 0.5532 epoch: 16/50 ...... loss: 0.5339 epoch: 17/50 ...... loss: 0.5143 epoch: 18/50 ...... loss: 0.4946 epoch: 19/50 ...... loss: 0.4752 epoch: 20/50 ...... loss: 0.4565 epoch: 21/50 ...... loss: 0.4389 epoch: 22/50 ...... loss: 0.4226 epoch: 23/50 ...... loss: 0.4079 epoch: 24/50 ...... loss: 0.3949 epoch: 25/50 ...... loss: 0.3835 epoch: 26/50 ...... loss: 0.3736 epoch: 27/50 ...... loss: 0.3652 epoch: 28/50 ...... loss: 0.3580 epoch: 29/50 ...... loss: 0.3517 epoch: 30/50 ...... loss: 0.3461 epoch: 31/50 ...... loss: 0.3410 epoch: 32/50 ...... loss: 0.3361 epoch: 33/50 ...... loss: 0.3315 epoch: 34/50 ...... loss: 0.3270 epoch: 35/50 ...... loss: 0.3226 epoch: 36/50 ...... loss: 0.3183 epoch: 37/50 ...... loss: 0.3142 epoch: 38/50 ...... loss: 0.3103 epoch: 39/50 ...... loss: 0.3068 epoch: 40/50 ...... loss: 0.3036 epoch: 41/50 ...... loss: 0.3008 epoch: 42/50 ...... loss: 0.2984 epoch: 43/50 ...... loss: 0.2963 epoch: 44/50 ...... loss: 0.2945 epoch: 45/50 ...... loss: 0.2930 epoch: 46/50 ...... loss: 0.2917 epoch: 47/50 ...... loss: 0.2904 epoch: 48/50 ...... loss: 0.2892 epoch: 49/50 ...... loss: 0.2881 The features in the dataset seem to have different ranges and this poses a problem for the learning algorithm. By normalizing the data, we make it much easier for the model to learn. [1] This is a great article that explains why normalization is needed: https://towardsdatascience.com/understand-data-normalization-in-machine-learning-8ff3062101f0
https://stackoverflow.com/questions/60298294/
How can I solve "TypeError: max() received an invalid combination of arguments - got (Linear, int), but expected"?
How I solve this error, I am trying to train a ticket classification model I'm trying to make a ticket sorter with the Pytorch library, but I have this error. I can't understand what I've done wrong Can you help me? data_transforms = { 'train' : transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val' : transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) } data_dir = 'dataset_billete_argentino' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_name = image_datasets['train'].classes class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3,6,5) self.pool = nn.MaxPool2d(2,2) self.conv2 = nn.Conv2d(6,16,5) self.fc1 = nn.Linear(16 * 53 * 53, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 2) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(x.size(0), 16* 53 * 53) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3 return x net = Net() def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model from torch.optim import lr_scheduler exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) net = train_model(net, criterion, optimizer, exp_lr_scheduler, num_epochs=25) And give this error TypeError: max() received an invalid combination of arguments - got (Linear, int), but expected one of: * (Tensor input) * (Tensor input, name dim, bool keepdim, tuple of Tensors out) * (Tensor input, Tensor other, Tensor out) * (Tensor input, int dim, bool keepdim, tuple of Tensors out) Epoch 0/24 ---------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-27-29dfe3459d8a> in <module> 4 5 net = train_model(net, criterion, optimizer, exp_lr_scheduler, ----> 6 num_epochs=25) <ipython-input-19-1a5d4f162548> in train_model(model, criterion, optimizer, scheduler, num_epochs) 31 with torch.set_grad_enabled(phase == 'train'): 32 outputs = model(inputs) ---> 33 _, preds = torch.max(outputs, 1) 34 loss = criterion(outputs, labels) 35 TypeError: max() received an invalid combination of arguments - got (Linear, int), but expected one of: * (Tensor input) * (Tensor input, name dim, bool keepdim, tuple of Tensors out) * (Tensor input, Tensor other, Tensor out) * (Tensor input, int dim, bool keepdim, tuple of Tensors out)
As the error message says, the problem is in this line: _, preds = torch.max(outputs, 1) There are two problems here: As @Idodo said, you're giving 2 arguments and neither of them is a tensor. According to the message, they are a Linear and an int, respectively. If you remove the int you still have an error, because you're trying to compute the max value of an nn.Linear, which is not possible. Assessing your code, I found the source of the second error. In your model's forward method you have: x = self.fc3 That's the problem. You must do: x = self.fc3(x)
https://stackoverflow.com/questions/60309505/
pytorch: sum of cross entropy over all classes
I want to compute the sum of cross entropy over all classes for each prediction, where the input is a batch (size n), and the output is a batch (size n). The simplest way is a for loop (for 1000 classes): def sum_of_CE_lost(input): L = 0 for c in range(1000): L = L + torch.nn.CrossEntropyLoss(input, c) return L However, it is very slow. What is a better way? How can we parallelize it for GPU (CUDA)?
I found the answer: -torch.nn.functional.log_softmax(input, dim=1).sum() / input.shape[0] Note the leading minus sign (cross entropy is the negative log-probability) and the explicit dim=1. We divide by input.shape[0] because cross_entropy() takes, by default, the mean across the batch dimension.
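A quick sketch to check that the vectorized expression matches the loop — note the loop must use torch.nn.functional.cross_entropy, the function, rather than the nn.CrossEntropyLoss class as written in the question:

import torch
import torch.nn.functional as F

input = torch.randn(8, 1000)  # a batch of logits
slow = sum(F.cross_entropy(input, torch.full((8,), c, dtype=torch.long))
           for c in range(1000))
fast = -F.log_softmax(input, dim=1).sum() / input.shape[0]
print(torch.allclose(slow, fast, atol=1e-3))  # True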
https://stackoverflow.com/questions/60311785/
How to cache Pytorch models for use when not connected to the internet?
I'm using vgg19 in a classification problem. I have access to the campus research computer to train on, but the nodes where the computation is done don't have access to the internet. So running a line of code like self.net = models.vgg19(pretrained=True) fails with the error urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable> Is there a way I could cache the model on the head node (where I have internet access), and load the model from the cache instead of the internet on the compute node?
If you just save the weights of pretrained networks somewhere, you can load them just like you can load any other network weights. Saving: import torchvision # I am assuming we have internet access here model = torchvision.models.vgg16(pretrained=True) torch.save(model.state_dict(), "Somewhere") Loading: import torchvision def create_vgg16(dict_path=None): model = torchvision.models.vgg16(pretrained=False) if (dict_path != None): model.load_state_dict(torch.load(dict_path)) return model model = create_vgg16("Somewhere")
https://stackoverflow.com/questions/60312752/
Where is '_DataLoaderIter' in pytorch 1.3.1?
When I use pytorch 1.3.1 with python3.7.4, like this import torch from torch.utils.data.dataloader import _DataLoaderIter Here is the error: cannot import name '_DataLoaderIter' from 'torch.utils.data.dataloader' How should I solve that? Should I uninstall 1.3.1? I found _DataLoaderIter is in dataloader.pyi: class _DataLoaderIter: def __init__(self, loader: DataLoader) -> None:... def __len__(self) -> int: ... def __iter__(self) -> _DataLoaderIter: ... def __next__(self) -> Any: ... But I can't find it in dataloader.py.
_DataLoaderIter does not exist any more. This code is the last version that still contains _DataLoaderIter. You can use _SingleProcessDataLoaderIter or _MultiProcessingDataLoaderIter instead. I don't think the .pyi file you mentioned is in version 1.3.1.
https://stackoverflow.com/questions/60320948/
2-step 3D rotation of x,y,z tensors
In Python, using PyTorch, I'm trying to rotate 3 cubes x, y, and z, first about the x-axis and then about the z-axis. However, the rotation about the z-axis seems to be behaving in a way I would not expect. I am creating 3 cubes to rotate below and then rotating them 90 degrees about the x-axis and then 90 degrees about the z-axis. import torch from numpy import pi import matplotlib.pyplot as plt def rotation(x,y,z,theta,phi): ### A function for rotating about the x and then z axes ### xx = (x*torch.cos(theta)) - (((y*torch.cos(phi)) - (z*torch.sin(phi)))*torch.sin(theta)) yy = (x*torch.sin(theta)) + (((y*torch.cos(phi)) - (z*torch.sin(phi)))*torch.cos(theta)) zz = (y*torch.sin(phi)) + (z*torch.cos(phi)) return xx,yy,zz ### Creating the 3 cubes: x, y, z ### l = torch.arange(-2,3,1) x,y,z=torch.meshgrid(l,l,l) ### Scaling the cubes so they can be differentiated from one another ### x = x.clone().T y = y.clone().T*2 z = z.clone().T*3 ### Defining the amount of rotation about the x and z axes phi = torch.tensor([pi/2]).to(torch.float) # about the x axis theta = torch.tensor([pi/2]).to(torch.float) # about the z axis ### Performing the rotation x_r,y_r,z_r = rotation(x, y, z, theta, phi) By visualising the first slice of each cube I can see that the rotation has not been successful as at first glance it looks like the cubes have actually been rotated about the x-axis followed by the y-axis instead. Is there a specific way that Python handles rotations like this that I'm missing, such as the axes changing along with rotations, meaning the initial rotation matrix operation no longer applies? Extra information If one were to replace theta as 0 instead of pi/2, it can be seen that the first rotation behaves as expected by looking at the first slice of each rotated cube: Code for visualising: plt.figure() plt.subplot(231) x_before = plt.imshow(x[0,:,:]) plt.xlabel('x-before'); plt.colorbar(x_before,fraction=0.046, pad=0.04) plt.subplot(232) y_before = plt.imshow(y[0,:,:]) plt.xlabel('y-before'); plt.colorbar(y_before,fraction=0.046, pad=0.04) plt.subplot(233) z_before = plt.imshow(z[0,:,:]) plt.xlabel('z-before'); plt.colorbar(z_before,fraction=0.046, pad=0.04) plt.subplot(234) x_after = plt.imshow(x_r[0,:,:]) plt.xlabel('x-after'); plt.colorbar(x_after,fraction=0.046, pad=0.04) plt.subplot(235) y_after = plt.imshow(y_r[0,:,:]) plt.xlabel('y-after'); plt.colorbar(y_after,fraction=0.046, pad=0.04) plt.subplot(236) z_after = plt.imshow(z_r[0,:,:]) plt.xlabel('z-after'); plt.colorbar(z_after,fraction=0.046, pad=0.04) plt.tight_layout()
This sounds like a global vs local axis problem - you are rotating 90 degrees around the X axis to start with, which moves your cube so that its local Y axis ends up pointing along the global Z axis. Applying the next rotation of 90 degrees around global Z looks like a rotation around the local Y axis. To solve this you either need to apply different rotations to achieve the orientation you want (in this case rotate -90 degrees around global Y, since the local Z axis is now facing down the negative global Y axis), or write a different rotation function that can rotate around any vector and track the local axes of the cube (by passing them through the same rotations as the cube itself). You may also be able to work in local coordinates, by applying the rotations in reverse order, ie a global rotation around Y by 90, followed by a global rotation around X by 90 will be equivalent to local rotations in X then Y.
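The order dependence is easy to see with plain rotation matrices — a small sketch showing that rotating about global x and then global z is not the same matrix as the reverse order:

import torch
from math import pi, cos, sin

def Rx(a):  # rotation about the global x-axis
    return torch.tensor([[1., 0., 0.],
                         [0., cos(a), -sin(a)],
                         [0., sin(a), cos(a)]])

def Rz(a):  # rotation about the global z-axis
    return torch.tensor([[cos(a), -sin(a), 0.],
                         [sin(a), cos(a), 0.],
                         [0., 0., 1.]])

a = pi / 2
print(Rz(a) @ Rx(a))  # x first, then z (what the question's function does)
print(Rx(a) @ Rz(a))  # the reverse order gives a different matrix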
https://stackoverflow.com/questions/60325487/
How can I output some data during a model.fit() run in tensorflow?
I would like to print the value and/or the shape of a tensor during a model.fit() run and not before. In PyTorch I can just put a print(input.shape) statement into the model.forward() function. Is there something similar in TensorFlow?
You can pass a callback object to the model.fit() method and then perform actions at different stages during fitting. https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback import datetime import tensorflow as tf class MyCustomCallback(tf.keras.callbacks.Callback): def on_train_batch_begin(self, batch, logs=None): print('Training: batch {} begins at {}'.format(batch, datetime.datetime.now().time())) def on_train_batch_end(self, batch, logs=None): print('Training: batch {} ends at {}'.format(batch, datetime.datetime.now().time())) def on_test_batch_begin(self, batch, logs=None): print('Evaluating: batch {} begins at {}'.format(batch, datetime.datetime.now().time())) def on_test_batch_end(self, batch, logs=None): print('Evaluating: batch {} ends at {}'.format(batch, datetime.datetime.now().time())) model = get_model() # get_model(), x_train and y_train come from your own setup model.fit(x_train, y_train, callbacks=[MyCustomCallback()]) https://www.tensorflow.org/guide/keras/custom_callback
https://stackoverflow.com/questions/60332169/
PyTorch MNIST example not converge
I'm writing a toy example performing the MNIST classification. Here is the full code of my example: import matplotlib matplotlib.use("Agg") import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader import torchvision.transforms as transforms import torchvision.datasets as datasets import matplotlib.pyplot as plt import os from os import system, listdir from os.path import join, isfile, isdir, dirname def img_transform(image): transform=transforms.Compose([ # transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) return transform(image) def normalize_output(img): img = img - img.min() img = img / img.max() return img def save_checkpoint(state, filename='checkpoint.pth.tar'): torch.save(state, filename) class Net(nn.Module): """docstring for Net""" def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.max_pool2d(x, 2) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output os.environ['CUDA_VISIBLE_DEVICES'] = '0' data_images, data_labels = torch.load("./PATH/MNIST/processed/training.pt") model = Net() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=1e-2) epochs = 5 batch_size = 30 num_batch = int(data_images.shape[0] / batch_size) for epoch in range(epochs): for batch_idx in range(num_batch): data = data_images[ batch_idx*batch_size : (batch_idx+1)*batch_size ].float() label = data_labels[ batch_idx*batch_size : (batch_idx+1)*batch_size ] data = img_transform(data) data = data.unsqueeze_(1) pred_score = model(data) loss = criterion(pred_score, label) loss.backward() optimizer.step() if batch_idx % 200 == 0: print('epoch', epoch, batch_idx, '/', num_batch, 'loss', loss.item()) _, pred = pred_score.topk(1) pred = pred.t().squeeze() correct = pred.eq(label) num_correct = correct.sum(0).item() print('acc=', num_correct/batch_size) dict_to_save = { 'epoch': epochs, 'state_dict': model.state_dict(), 'optimizer' : optimizer.state_dict(), } ckpt_file = 'a.pth.tar' save_checkpoint(dict_to_save, ckpt_file) print('save to ckpt_file', ckpt_file) exit() The code is executable with MNIST dataset saved in the path ./PATH/MNIST/processed/training.pt However, the training process does not converge, with the training accuracy always lower than 0.2. What's wrong with my implementation? I have tried different learning rates and batch size. It doesn't work. Is there any other problem in my code? Here are some of the training logs epoch 0 0 / 2000 loss 27.2023868560791 acc= 0.1 epoch 0 200 / 2000 loss 2.3346288204193115 acc= 0.13333333333333333 epoch 0 400 / 2000 loss 2.691042900085449 acc= 0.13333333333333333 epoch 0 600 / 2000 loss 2.6452369689941406 acc= 0.06666666666666667 epoch 0 800 / 2000 loss 2.7910964488983154 acc= 0.13333333333333333 epoch 0 1000 / 2000 loss 2.966330051422119 acc= 0.1 epoch 0 1200 / 2000 loss 3.111387014389038 acc= 0.06666666666666667 epoch 0 1400 / 2000 loss 3.1988155841827393 acc= 0.03333333333333333
I see at least four issues that impact the results you're getting: You need to zero the gradient, ex: optimizer.zero_grad() loss.backward() optimizer.step() You're feeding nn.CrossEntropyLoss() with F.log_softmax outputs. It expects raw logits. Remove this: output = F.log_softmax(x, dim=1) You're computing the loss and acc only for the current batch when you print it. So, it's not the correct result. To solve it you need to store all losses/accs and compute the average before printing, for ex: # During the loop loss_value += loss.item() # When printing: print(loss_value/number_of_batch_losses_stored) It's not a huge problem, but I'd say this learning rate should be smaller, ex: 1e-3. As a tip to improve your pipeline, it's better to use a DataLoader to load your data. Have a look at torch.utils.data to learn how to do that. It's not efficient to load the batches the way you're doing because you're not using generators. Also, MNIST is already available on torchvision.datasets.MNIST. It'll save you some time if you load the data from there.
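A sketch of the inner loop with those fixes applied (assumes the variables from the question, with the model's forward() edited to return x rather than log_softmax):

for epoch in range(epochs):
    epoch_loss, correct = 0.0, 0
    for batch_idx in range(num_batch):
        data = data_images[batch_idx*batch_size:(batch_idx+1)*batch_size].float()
        label = data_labels[batch_idx*batch_size:(batch_idx+1)*batch_size]
        data = img_transform(data).unsqueeze_(1)

        optimizer.zero_grad()              # fix 1: reset gradients every step
        logits = model(data)               # fix 2: raw logits, no log_softmax
        loss = criterion(logits, label)
        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()
        correct += (logits.argmax(1) == label).sum().item()
    print('epoch', epoch, 'loss', epoch_loss / num_batch,   # fix 3: epoch averages
          'acc', correct / (num_batch * batch_size))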
https://stackoverflow.com/questions/60335387/
pytorch difference in 1.1.0 and 1.3.0 in Parameter
I am trying to learn PyTorch and have tried running some code that I got from the Kaggle website. # Get all hidden layers' weights for i in range(len(hidden_units)): fc_layers.extend([ TrainNet.model.hidden_layers[i].weight.T.tolist(), # weights TrainNet.model.hidden_layers[i].bias.tolist() # bias ]) This gives the following error: AttributeError Traceback (most recent call last) <ipython-input-12-65f871b6f0b7> in <module> 4 for i in range(len(hidden_units)): 5 fc_layers.extend([ ----> 6 TrainNet.model.hidden_layers[i].weight.T.tolist(), # weights 7 TrainNet.model.hidden_layers[i].bias.tolist() # bias 8 ]) AttributeError: 'Parameter' object has no attribute 'T' If I print out the type of 'TrainNet.model.hidden_layers[i].weight' it is indeed of type Parameter. This block of code works without error on the Kaggle website, run in their notebook (Google Colab, I believe), where the version of Torch is 1.3.0. On my home machine, where the error occurs, my Anaconda distribution (which I have just updated) is running Torch 1.1.0. Is this the source of the error, and how do I sort it? Thanks
The Parameter class is a subclass of Tensor and hence has all the attributes of the torch.Tensor class. Tensor.T was introduced in 1.2.0 and hence is not available in your 1.1.0. You can either update the pytorch version or use the permute method in its place, as in the example below: >>> t = torch.Tensor(np.random.randint(0,100,size=(2,3,4))) # -> random Tensor with t.shape = (2,3,4) >>> t.shape torch.Size([2, 3, 4]) >>> list(range(len(t.shape)))[::-1] [2,1,0] # This is the sequence of dimensions t.T will return in 1.2.0 onwards >>> t = t.permute(list(range(len(t.shape)))[::-1]) >>> t.shape torch.Size([4, 3, 2]) It is equivalent to taking the transpose of a matrix, i.e., reversing the sequence of dimensions, but for N-dimensional tensors.
https://stackoverflow.com/questions/60344725/
I want to use the GPU instead of CPU while performing computations using PyTorch
I'm trying to switch the stress from CPU to GPU as my trusty RTX2070 can do it better than the CPU but I keep running into this problem and I'm quite new to AI so if you are kind enough to share some insights with me regarding any potential solution, it would be highly appreciated, thank you. **I'm using PyTorch Here's the code that I'm using : # to measure run-time # for csv dataset import os # to shuffle data import random # to get the alphabet import string # import statements for iterating over csv file import cv2 # for plotting import matplotlib.pyplot as plt import numpy as np # pytorch stuff import torch import torch.nn as nn from PIL import Image # generate the targets # the targets are one hot encoding vectors # print(torch.cuda.is_available()) nvcc_args = [ '-gencode', 'arch=compute_30,code=sm_30', '-gencode', 'arch=compute_35,code=sm_35', '-gencode', 'arch=compute_37,code=sm_37', '-gencode', 'arch=compute_50,code=sm_50', '-gencode', 'arch=compute_52,code=sm_52', '-gencode', 'arch=compute_60,code=sm_60', '-gencode', 'arch=compute_61,code=sm_61', '-gencode', 'arch=compute_70,code=sm_70', '-gencode', 'arch=compute_75,code=sm_75' ] alphabet = list(string.ascii_lowercase) target = {} # Initalize a target dict that has letters as its keys and empty one-hot encoding vectors of size 37 as its values for letter in alphabet: target[letter] = [0] * 37 # Do the one-hot encoding for each letter now curr_pos = 0 for curr_letter in target.keys(): target[curr_letter][curr_pos] = 1 curr_pos += 1 # extra symbols symbols = ["space", "number", "period", "comma", "colon", "apostrophe", "hyphen", "semicolon", "question", "exclamation", "capitalize"] # create vectors for curr_symbol in symbols: target[curr_symbol] = [0] * 37 # create one-hot encoding vectors for curr_symbol in symbols: target[curr_symbol][curr_pos] = 1 curr_pos += 1 # collect all data from the csv file data = [] for tgt in os.listdir("dataset"): if not tgt == ".DS_Store": for folder in os.listdir("dataset/" + tgt + "/Uploaded"): if not folder == ".DS_Store": for filename in os.listdir("dataset/" + tgt + "/Uploaded/" + folder): if not filename == ".DS_Store": # store the image and label picture = [] curr_target = target[tgt] image = Image.open("dataset/" + tgt + "/Uploaded/" + folder + "/" + filename) image = image.convert('RGB') # f.show() image = np.array(image) # resize image to 28x28x3 image = cv2.resize(image, (28, 28)) # normalize to 0-1 image = image.astype(np.float32) / 255.0 image = torch.from_numpy(image) picture.append(image) # convert the target to a long tensor curr_target = torch.Tensor([curr_target]) picture.append(curr_target) # append the current image & target data.append(picture) # create a dictionary of all the characters characters = alphabet + symbols index2char = {} number = 0 for char in characters: index2char[number] = char number += 1 # find the number of each character in a dataset def num_chars(dataset, index2char): chars = {} for _, label in dataset: char = index2char[int(torch.argmax(label))] # update if char in chars: chars[char] += 1 # initialize else: chars[char] = 1 return chars # Create dataloader objects # shuffle all the data random.shuffle(data) # batch sizes for train, test, and validation batch_size_train = 30 batch_size_test = 30 batch_size_validation = 30 # splitting data to get training, test, and validation sets # change once get more data # 1600 for train train_dataset = data[:22000] # test has 212 test_dataset = data[22000:24400] # validation has 212 validation_dataset = data[24400:] # 
create the dataloader objects train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size_test, shuffle=False) validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=batch_size_validation, shuffle=True) # to check if a dataset is missing a char test_chars = num_chars(test_dataset, index2char) num = 0 for char in characters: if char in test_chars: num += 1 else: break print(num) class CNN(nn.Module): def __init__(self): self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") super(CNN, self).__init__() self.block1 = nn.Sequential( # 3x28x28 nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, stride=1, padding=2), # batch normalization # nn.BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True), # 16x28x28 nn.MaxPool2d(kernel_size=2), # 16x14x14 nn.LeakyReLU() ) # 16x14x14 self.block2 = nn.Sequential( nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2), # batch normalization # nn.BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True), # 32x14x14 nn.MaxPool2d(kernel_size=2), # 32x7x7 nn.LeakyReLU() ) # linearly self.block3 = nn.Sequential( nn.Linear(32 * 7 * 7, 100), # batch normalization # nn.BatchNorm1d(100), nn.LeakyReLU(), nn.Linear(100, 37) ) # 1x37 def forward(self, x): out = self.block1(x) out = self.block2(out) # flatten the dataset out = out.view(-1, 32 * 7 * 7) out = self.block3(out) return out # convolutional neural network model model = CNN() model.cuda() # print summary of the neural network model to check if everything is fine. print(model) print("# parameter: ", sum([param.nelement() for param in model.parameters()])) # setting the learning rate learning_rate = 1e-4 # Using a variable to store the cross entropy method criterion = nn.CrossEntropyLoss() # Using a variable to store the optimizer optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # list of all train_losses train_losses = [] # list of all validation losses validation_losses = [] # for loop that iterates over all the epochs num_epochs = 20 for epoch in range(num_epochs): # variables to store/keep track of the loss and number of iterations train_loss = 0 num_iter_train = 0 # train the model model.train() # Iterate over train_loader for i, (images, labels) in enumerate(train_loader): # need to permute so that the images are of size 3x28x28 # essential to be able to feed images into the model images = images.permute(0, 3, 1, 2) # Zero the gradient buffer # resets the gradient after each epoch so that the gradients don't add up optimizer.zero_grad() # Forward, get output outputs = model(images) # convert the labels from one hot encoding vectors into integer values labels = labels.view(-1, 37) y_true = torch.argmax(labels, 1) # calculate training loss loss = criterion(outputs, y_true) # Backward (computes all the gradients) loss.backward() # Optimize # loops through all parameters and updates weights by using the gradients # takes steps backwards to optimize (to reach the minimum weight) optimizer.step() # update the training loss and number of iterations train_loss += loss.data num_iter_train += 1 print('Epoch: {}'.format(epoch + 1)) print('Training Loss: {:.4f}'.format(train_loss / num_iter_train)) # append training loss over all the epochs train_losses.append(train_loss / num_iter_train) # evaluate the model model.eval() # variables to store/keep track of the loss and number of iterations 
validation_loss = 0 num_iter_validation = 0 # Iterate over validation_loader for i, (images, labels) in enumerate(validation_loader): # need to permute so that the images are of size 3x28x28 # essential to be able to feed images into the model images = images.permute(0, 3, 1, 2) # Forward, get output outputs = model(images) # convert the labels from one hot encoding vectors to integer values labels = labels.view(-1, 37) y_true = torch.argmax(labels, 1) # calculate the validation loss loss = criterion(outputs, y_true) # update the training loss and number of iterations validation_loss += loss.data num_iter_validation += 1 print('Validation Loss: {:.4f}'.format(validation_loss / num_iter_validation)) # append all validation_losses over all the epochs validation_losses.append(validation_loss / num_iter_validation) num_iter_test = 0 correct = 0 # Iterate over test_loader for images, labels in test_loader: # need to permute so that the images are of size 3x28x28 # essential to be able to feed images into the model images = images.permute(0, 3, 1, 2) # Forward outputs = model(images) # convert the labels from one hot encoding vectors into integer values labels = labels.view(-1, 37) y_true = torch.argmax(labels, 1) # find the index of the prediction y_pred = torch.argmax(outputs, 1).type('torch.FloatTensor') # convert to FloatTensor y_true = y_true.type('torch.FloatTensor') # find the mean difference of the comparisons correct += torch.sum(torch.eq(y_true, y_pred).type('torch.FloatTensor')) print('Accuracy on the test set: {:.4f}%'.format(correct / len(test_dataset) * 100)) print() # learning curve function def plot_learning_curve(train_losses, validation_losses): # plot the training and validation losses plt.ylabel('Loss') plt.xlabel('Number of Epochs') plt.plot(train_losses, label="training") plt.plot(validation_losses, label="validation") plt.legend(loc=1) # plot the learning curve plt.title("Learning Curve (Loss vs Number of Epochs)") plot_learning_curve(train_losses, validation_losses) torch.save(model.state_dict(), "model1.pth")
I'm also using a trusty RTX 2070 and this is how I do GPU acceleration (for 1 GPU): cuda_ = "cuda:0" device = torch.device(cuda_ if torch.cuda.is_available() else "cpu") model = CNN() model.to(device) This is the most up-to-date and recommended way to do GPU acceleration, as it gives more flexibility (you don't need to amend the code even when a GPU isn't available). You would do the same to pass your images into the GPU VRAM, via images = images.to(device).
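As a minimal sketch (not from the original answer) of how the same device variable would be used inside the training loop — train_loader, model, criterion and optimizer are assumed to exist as in the question's code:

for images, labels in train_loader:
    images = images.to(device)  # move the batch into GPU VRAM (a no-op on CPU)
    labels = labels.to(device)  # targets must live on the same device as the model
    outputs = model(images)
    loss = criterion(outputs, labels)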
https://stackoverflow.com/questions/60352555/
Multiple PyTorch networks running in parallel on different CPUs
I’m trying to have different PyTorch neural networks run in parallel on different CPUs but am finding that it isn’t leading to any sort of speed up compared to running them sequentially. Below is my code that replicates the issue exactly. If you run this code it shows that with 2 processes it takes roughly twice as long as running it with 1 process but really it should take the same amount of time. import time import torch.multiprocessing as mp import gym import numpy as np import copy import torch.nn as nn import torch class NN(nn.Module): def __init__(self, output_dim): nn.Module.__init__(self) self.fc1 = nn.Linear(4, 50) self.fc2 = nn.Linear(50, 500) self.fc3 = nn.Linear(500, 5000) self.fc4 = nn.Linear(5000, output_dim) self.relu = nn.ReLU() def forward(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.relu(self.fc3(x)) x = self.fc4(x) return x def Worker(ix): print("Starting training for worker ", ix) env = gym.make('CartPole-v0') model = NN(2) for _ in range(2000): model(torch.Tensor(env.reset())) print("Finishing training for worker ", ix) def overall_process(num_workers): workers = [] for ix in range(num_workers): worker = mp.Process(target=Worker, args=(ix, )) workers.append(worker) [w.start() for w in workers] for worker in workers: worker.join() print("Finished Training") print(" ") start = time.time() overall_process(1) print("Time taken: ", time.time() - start) print(" ") start = time.time() overall_process(2) print("Time taken: ", time.time() - start) Does anyone know why this might be happening and how to fix it? I thought that it is maybe because PyTorch networks automatically implement CPU parallelism in the background and so I tried adding the below 2 lines but it doesn’t always resolve the issue: torch.set_num_threads(1) torch.set_num_interop_threads(1)
The answer is to set torch.set_num_threads(1) at the beginning of each worker process (rather than in the main process) as explained here: https://discuss.pytorch.org/t/multiple-networks-running-in-parallel-on-different-cpus/70482
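A sketch of the fix, reusing the Worker function and the NN class from the question (the torch.rand(4) input is a stand-in for the env.reset() observation):

import torch

def Worker(ix):
    # the key line: restrict THIS worker process to one intra-op thread,
    # so the worker processes stop competing for the same CPU cores
    torch.set_num_threads(1)
    print("Starting training for worker ", ix)
    model = NN(2)  # NN as defined in the question
    for _ in range(2000):
        model(torch.rand(4))
    print("Finishing training for worker ", ix)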
https://stackoverflow.com/questions/60353129/
Why does loss skyrocket during convolutional neural net training?
I am training a simple CNN in Pytorch for segmentation on a very small dataset (just a few images as this is just for proof of concept purposes). For some reason, loss skyrockets to as high as 6 and IoU drops to 0 (intersection over union accuracy metric) randomly during training before going back up. I was wondering why this could be happening?
Instability. It is actually common; take a look at published papers and you will see the same thing. During gradient descent, there may be "rough patches" in the gradient landscape that give a locally bad solution, hence the high loss. Having said that, some of these spikes can signify that you have made poor hyperparameter or network architecture choices. From my experience, one possible cause of the spikes is the use of weight decay. Weight decay provides regularisation, but in my own work I found it to cause a lot of instability, so nowadays I don't use it anymore. The spikes in your graph don't look too bad; I wouldn't worry about them.
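A minimal sketch of dropping weight decay, per the experience above (model and the learning rate are placeholders for your own setup):

import torch.optim as optim

# construct the optimizer without weight decay (0.0 is also Adam's default)
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.0)

A separate technique not mentioned in the answer, but commonly used against loss spikes, is gradient clipping: calling torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) between loss.backward() and optimizer.step().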
https://stackoverflow.com/questions/60362114/
Re-using a classification CNN model for autoencoding - pytorch
I am very new to pytorch so I need a bit of handholding. I am trying to re-use an old CNN classification model -- reusing the already trained convolutional layers as the encoder in an autoencoder and then training the decoder layers. The below code is what I have. class Autoencoder(nn.Module): def __init__(self, model, specs): super(Autoencoder, self).__init__() self.encoder = nn.Sequential( *list(model.conv_layer.children()) ) self.decoder = nn.Sequential( nn.ConvTranspose2d(in_channels=C7, out_channels=C6, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C6, out_channels=C5, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C5, out_channels=C4, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C4, out_channels=C3, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C3, out_channels=C2, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C2, out_channels=C1, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C1, out_channels=C0, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), nn.ConvTranspose2d(in_channels=C0, out_channels=3, kernel_size=pooling, padding=0), nn.ReLU(inplace=True), ) for param in self.encoder.parameters(): param.requires_grad = False for p in self.decoder.parameters(): if p.dim() > 1: nn.init.kaiming_normal_(p) pass def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x However, I am getting a "NotImplementedError". What am I doing wrong? When I initiate an instance of that class, I would be passing the pretrained CNN classification model and self.encoder should take care of taking the layers I am interested from the model (those in conv_layer). When I: model = pretrainedCNNmodel autoencoder = Autoencoder(model, specs) print(autoencoder) the print looks okay, it has all layers and everything I am hoping for, but when I try to train on it I get the "NotImplementedError:". Edit Here is the entire error: --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-20-9adc467b2472> in <module>() 2 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=L2_lambda) 3 ----> 4 train(x, train_loader, test_loader, optimizer, criterion) 2 frames <ipython-input-5-b25edb14cf5f> in train(model, train_loader, test_loader, optimizer, criterion) 15 data, target = data.cuda(), target.cuda() 16 optimizer.zero_grad() ---> 17 output = model(data) 18 loss = criterion(output, target) 19 loss.backward() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in forward(self, *input) 94 registered hooks while the latter silently ignores them. 95 """ ---> 96 raise NotImplementedError 97 98 def register_buffer(self, name, tensor): NotImplementedError:
Since you have a bounty on this question, it cannot be closed. However, the exact same question was already asked and answered in this thread. Basically, you have an indentation problem in your code: Your forward method is indented such that it is inside your __init__ method, instead of being part of the Autoencoder class. Please see my other answer for more details.
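For reference, a minimal sketch of the corrected structure (the encoder/decoder construction from the question is elided):

class Autoencoder(nn.Module):
    def __init__(self, model, specs):
        super(Autoencoder, self).__init__()
        # ... build self.encoder and self.decoder as in the question ...

    def forward(self, x):  # defined at class level, NOT nested inside __init__
        x = self.encoder(x)
        x = self.decoder(x)
        return x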
https://stackoverflow.com/questions/60365105/
Assign the new value to a tensor at specific indices
This is a simple example. Assume that I have an input tensor M. Now I have a tensor of indices of M with size 2 x 3 such as [[0, 1], [2,2], [0,1]], and a new array of values corresponding to the index tensor, [1, 2, 3]. I want to assign these values to M such that when an index occurs more than once (like [0,1] here), the element receives the minimum of its candidate values (1 in this example). That means M[0,1] = 1 and M[2,2] = 2. Can I do that with available PyTorch functions, without a loop?
It can be done without loops, but I am generally not sure whether it is such a great idea, due to significantly increased runtime. The basic idea is relatively simple: Since tensor assignments always assign the last element, it is sufficient to sort your tuples in M in descending order, according to the respective values stored in the value list (let's call it v). To do this in pytorch, let us consider the following example: import torch as t X = t.randn([3, 3]) # Random matrix of size 3x3 v = t.tensor([1, 2, 3]) M = t.tensor([[0, 2, 0], [1, 2, 1]]) # accessing the elements described above # Showcase pytorch's result with "naive" tensor assignment: X[tuple(M)] = v # This would assign the value 3 to position (0, 1) # To correct behavior, sort v in descending order. v_desc = v.sort(descending=True) # v_desc now contains both the values and the indices of the original positions print(v_desc) # torch.return_types.sort( # values=tensor([3, 2, 1]), # indices=tensor([2, 1, 0])) # Access M in the correct order: M_desc = M[:, v_desc.indices] # Finally assign in the correct order: X[tuple(M_desc)] = v_desc.values Again, this is relatively complicated, because it involves sorting the values, and "re-shuffling" of the tensors. You can surely save at least some memory if you perform operations in-place, which is something I disregarded for the sake of legibility. As to whether this can also be achieved without sorting, I am fairly certain that the answer will be "no"; tensor assignments could only be done on fairly simple conditionals, but not something more complicated like your inter-dependent conditionals would require.
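A quick check of the result, continuing the snippet above:

print(X[0, 1].item(), X[2, 2].item())
# 1.0 2.0 -- position (0, 1) received the minimum of its two candidates (1 and 3)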
https://stackoverflow.com/questions/60372544/
How to use a customized dataset for training with PyTorch/few-shot-vid2vid
I’d like to use my own dataset created from the FaceForensics footage with few-show-vid2vid. So I generated image sequences with ffmpeg and keypoints with dlib. When I try to start the training script, I get the following error. What exactly is the problem? The provided small dataset was working for me. CustomDatasetDataLoader 485 sequences dataset [FaceDataset] was created Resuming from epoch 1 at iteration 0 create web directory ./checkpoints/face/web... ---------- Networks initialized ------------- ---------- Optimizers initialized ------------- ./checkpoints/face/latest_net_G.pth not exists yet! ./checkpoints/face/latest_net_D.pth not exists yet! model [Vid2VidModel] was created Traceback (most recent call last): File "train.py", line 73, in <module> train() File "train.py", line 40, in train for idx, data in enumerate(dataset, start=trainer.epoch_iter): File "/home/keno/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 819, in __next__ return self._process_data(data) File "/home/keno/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data data.reraise() File "/home/keno/.local/lib/python3.7/site-packages/torch/_utils.py", line 369, in reraise raise self.exc_type(msg) IndexError: Caught IndexError in DataLoader worker process 0. Original Traceback (most recent call last): File "/home/keno/.local/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop data = fetcher.fetch(index) File "/home/keno/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/keno/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/keno/repos/few-shot-vid2vid/data/fewshot_face_dataset.py", line 103, in __getitem__ Li = self.get_face_image(keypoints, transform_L, ref_img.size) File "/home/keno/repos/few-shot-vid2vid/data/fewshot_face_dataset.py", line 168, in get_face_image x = keypoints[sub_edge, 0] IndexError: index 82 is out of bounds for axis 0 with size 82 EDIT: How I set up my dataset. I created image sequences from the video footage with ffmpeg -i _video_ -o %05d.jpg, following the directory structure of the provided sample dataset. Then I generated the keypoints by using landmark detection with dlib, based on the code example provided on the dlib website. 
I expanded the sample code to 68 points and saved them to .txt files: import re import sys import os import dlib import glob # if len(sys.argv) != 4: # print( # "Give the path to the trained shape predictor model as the first " # "argument and then the directory containing the facial images.\n" # "For example, if you are in the python_examples folder then " # "execute this program by running:\n" # " ./face_landmark_detection.py shape_predictor_68_face_landmarks.dat ../examples/faces\n" # "You can download a trained facial shape predictor from:\n" # " http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2") # exit() predictor_path = sys.argv[1] faces_folder_path = sys.argv[2] text_file_path = sys.argv[3] detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor(predictor_path) win = dlib.image_window() for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")): file_number = os.path.split(f) print(file_number[1]) file_number = os.path.splitext(file_number[1]) file_number = file_number[0] export_path = os.path.join(text_file_path, '%s.txt' % file_number) text = open(export_path,"w+") print("Processing file: {}".format(f)) img = dlib.load_rgb_image(f) win.clear_overlay() win.set_image(img) # Ask the detector to find the bounding boxes of each face. The 1 in the # second argument indicates that we should upsample the image 1 time. This # will make everything bigger and allow us to detect more faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for k, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( k, d.left(), d.top(), d.right(), d.bottom())) # Get the landmarks/parts for the face in box d. shape = predictor(img, d) for i in range(67): result = str(shape.part(i)) result = result.strip("()") print(result) text.write(result + '\n') # Draw the face landmarks on the screen. win.add_overlay(shape) text.close() win.add_overlay(dets)
for i in range(67): This is incorrect, you should be using range(68) for 68 face landmarks. You can verify this with python -c "for i in range(67): print(i)" which will only count from 0 to 66 (67 total numbers). python -c "for i in range(68): print(i)" will count from 0 to 67 (68 items) and get the whole face landmark set.
https://stackoverflow.com/questions/60373136/
Expected object of device type cuda but got device type cpu
I am trying to switch the training of my network from cpu to gpu but keep getting the following error: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_conv2d_forward Error occurs, No graph saved Traceback (most recent call last): File "<ipython-input-6-2720a5ea768d>", line 12, in <module> tb.add_graph(network, images) File "E:\Anaconda\lib\site-packages\torch\utils\tensorboard\writer.py", line 707, in add_graph self._get_file_writer().add_graph(graph(model, input_to_model, verbose)) File "E:\Anaconda\lib\site-packages\torch\utils\tensorboard\_pytorch_graph.py", line 291, in graph raise e File "E:\Anaconda\lib\site-packages\torch\utils\tensorboard\_pytorch_graph.py", line 285, in graph trace = torch.jit.trace(model, args) File "E:\Anaconda\lib\site-packages\torch\jit\__init__.py", line 882, in trace check_tolerance, _force_outplace, _module_class) File "E:\Anaconda\lib\site-packages\torch\jit\__init__.py", line 1034, in trace_module module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace) File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 530, in __call__ result = self._slow_forward(*input, **kwargs) File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 516, in _slow_forward result = self.forward(*input, **kwargs) File "<ipython-input-5-cd44a4e4fb73>", line 52, in forward t = F.relu(self.conv1(t)) File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 530, in __call__ result = self._slow_forward(*input, **kwargs) File "E:\Anaconda\lib\site-packages\torch\nn\modules\module.py", line 516, in _slow_forward result = self.forward(*input, **kwargs) File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "E:\Anaconda\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward self.padding, self.dilation, self.groups) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_conv2d_forward I think it says that the argument is of cpu type, but I already moved the data to the gpu in the training part.
I have the following code Conv-neural network class Network(nn.Module): def __init__(self): super(Network, self).__init__() self.conv1 = nn.Conv2d( in_channels= 1, out_channels= 6, kernel_size=5 ) self.conv2 = nn.Conv2d( in_channels= 6, out_channels= 12, kernel_size=5 ) self.fc1 = nn.Linear( in_features = 12*4*4, out_features = 120 ) self.fc2 = nn.Linear( in_features = 120, out_features = 60 ) self.out = nn.Linear( in_features = 60, out_features = 10 ) def forward(self, t): t = F.relu(self.conv1(t)) t = F.max_pool2d(t, kernel_size=2, stride=2) t = F.relu(self.conv2(t)) t = F.max_pool2d(t, kernel_size=2, stride=2) t = F.relu(self.fc1(t.reshape(-1, 12*4*4))) t = F.relu(self.fc2(t)) t = self.out(t) return t The training part parameters = dict( lr = [.01, .001] , batch_size = [10, 100, 1000] , shuffle = [True, False] ) param_values = [v for v in parameters.values()] param_values for lr, batch_size, shuffle in product(*param_values): network = Network() network.to(device) train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle = shuffle) optimizer = optim.Adam(network.parameters(), lr=lr) images, labels = next(iter(train_loader)) grid = torchvision.utils.make_grid(images) comment = f' batch_size={batch_size} lr={lr} shuffle={shuffle}' tb = SummaryWriter(comment = comment) tb.add_image('images', grid) tb.add_graph(network, images) for epoch in range(10): total_loss = 0 total_correct = 0 for batch in train_loader: # Get batch images, labels = batch images = images.to(device) # Changing data to gpu preds = network(images) loss = F.cross_entropy(preds, labels) optimizer.zero_grad() loss.backward() optimizer.step() total_loss += loss.item() * batch_size total_correct += get_num_correct(preds, labels) tb.add_scalar('Loss:', total_loss, epoch) tb.add_scalar('Number Correct:', total_correct, epoch) tb.add_scalar('Accuracy:', total_correct/len(train_set), epoch) #tb.add_histogram('conv1.bias', network.conv1.bias, epoch) #tb.add_histogram('conv1.weight', network.conv1.weight, epoch) #tb.add_histogram('conv1.weight.grap', network.conv1.weight.grad, epoch) for name, weight in network.named_parameters(): tb.add_histogram(name, weight, epoch) tb.add_histogram(f'{name}.grad', weight.grad, epoch) print("epoch:", epoch, "total_correct:", total_correct, "loss:",total_loss) tb.close() I am new to deep learning so any help will be highly appreciated. Thanks
You missed moving your labels to gpu i.e. labels = labels.to(device) You also need to move these to gpu: images, labels = next(iter(train_loader)) images = images.to(device) labels = labels.to(device)
https://stackoverflow.com/questions/60376096/
Different results from PyTorch's conv1d and SciPy's convolve
I'm building a PyTorch model to estimate Impulse Responses. Currently I am calculating the loss from the real and estimated impulse response. I would like to convolve both the estimated and real impulse response with a signal and then calculate the loss from those. The pyroomacoustics package uses SciPy's fftconvolve to convolve the impulse response with a given signal. I cannot use this since it would break PyTorch's computation graph. PyTorch's conv1d uses cross-correlation. From this answer it seems that by flipping the filter, conv1d can be used for convolution. I am confused as to why the following code gives a different result for conv1d and convolve, and what must be changed to get the outputs to be equal. import torch from scipy.signal import convolve a = torch.tensor([.1, .2, .3, .4, .5]) b = torch.tensor([.0, .1, .0]) a1 = a.view(1, 1, -1) b1 = torch.flip(b, (0,)).view(1, 1, -1) print(torch.nn.functional.conv1d(a1, b1).view(-1)) # >>> tensor([0.0200, 0.0300, 0.0400]) print(convolve(a, b)) # >>> [0. 0.01 0.02 0.03 0.04 0.05 0. ]
Take a look at the mode parameter of scipy.signal.convolve. Use mode='valid' to match PyTorch's conv1d: In [20]: from scipy.signal import convolve In [21]: a = np.array([.1, .2, .3, .4, .5]) In [22]: b = np.array([.0, .1, .0]) In [23]: convolve(a, b, mode='valid') Out[23]: array([0.02, 0.03, 0.04]) To modify the call of PyTorch's conv1d to give the same output as the default behavior of scipy.signal.convolve (i.e. to match mode='full') for this example, set padding=2 in the call to conv1d. More generally, for a given convolution kernel b, set the padding to len(b) - 1.
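To make the equivalence concrete, a small sketch continuing the question's example — padding = len(b) - 1 = 2 reproduces SciPy's default mode='full':

import torch

a = torch.tensor([.1, .2, .3, .4, .5])
b = torch.tensor([.0, .1, .0])
a1 = a.view(1, 1, -1)
b1 = torch.flip(b, (0,)).view(1, 1, -1)
# padding = kernel length - 1 gives the full convolution
print(torch.nn.functional.conv1d(a1, b1, padding=b.numel() - 1).view(-1))
# tensor([0.0000, 0.0100, 0.0200, 0.0300, 0.0400, 0.0500, 0.0000])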
https://stackoverflow.com/questions/60377320/
Return predictions wav2vec fairseq
I'm trying to use wav2vec to train my own Automatic Speech Recognition System: https://github.com/pytorch/fairseq/tree/master/examples/wav2vec import torch from fairseq.models.wav2vec import Wav2VecModel cp = torch.load('/path/to/wav2vec.pt') model = Wav2VecModel.build_model(cp['args'], task=None) model.load_state_dict(cp['model']) model.eval() First of all how can I use a loaded model to return predictions from a wav file? Second, how can I pre-train using annotated data? I don't see any text mention in the manifest scripts.
After trying various things I was able to figure this out and trained a wav2vec model from scratch. Some background: wav2vec uses semi-supervised learning to learn vector representations for preprocessed sound frames. This is similar to what word2vec does to learn word embeddings from a text corpus. In the case of wav2vec, it samples random parts of the sound file and learns to predict if a given part is in the near future from a current offset position. This is somewhat similar to the masked word task used to train transformers such as BERT. The nice thing about such prediction tasks is that they are self-supervised: the algorithm can be trained on unlabeled data since it uses the temporal structure of the data to produce labels and it uses random sampling to produce contrasting negative examples. It is a binary classification task (is the proposed processed sound frame in the near future of the current offset or not). In training for this binary classification task, it learns vector representations of sound frames (one 512 dim vector for each 10ms of sound). These vector representations are useful features because they concentrate information relevant to predicting speech. These vectors can then be used instead of spectrogram vectors as inputs for speech to text algorithms such as wav2letter or deepSpeech. This is an important point: wav2vec is not a full automatic speech recognition (ASR) system. It is a useful component because by leveraging self-supervised learning on unlabeled data (audio files containing speech but without text transcriptions), it greatly reduces the need for labeled data (speech transcribed to text). Based on their article it appears that by using wav2vec in an ASR pipeline, the amount of labeled data needed can be reduced by a factor of at least 10 (10 to 100 times less transcribed speech is needed apparently). Since un-transcribed speech files are much easier to get than transcribed speech, this is a huge advantage of using wav2vec as an initial module in an ASR system. So wav2vec is trained with data which is not annotated (no text is used to train it). The thing which confused me was the following command for training (here): python train.py /manifest/path --save-dir /model/path ...(etc.)......... It turns out that since wav2vec is part of fairseq, the following fairseq command line tool should be used to train it: fairseq-train As the arguments to this command are pretty long, this can be done using a bash script such as #!/bin/bash fairseq-train /home/user/4fairseq --save-dir /home/user/4fairseq --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \ --arch wav2vec --task audio_pretraining --lr 1e-06 --min-lr 1e-09 --optimizer adam --max-lr 0.005 --lr-scheduler cosine \ --conv-feature-layers "[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)]" \ --conv-aggregator-layers "[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]" \ --skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion binary_cross_entropy --num-negatives 10 \ --max-sample-size 150000 --max-tokens 1500000 Most of the arguments are those suggested here; only the first two (which are filesystem paths) must be modified for your system.
Since I had audio voice files which were in mp3 format, I converted them to wav files with the following bash script: #!/bin/bash for file in /home/user/data/soundFiles/* do echo "$file" echo "${file%.*}.wav" ffmpeg -i "$file" "${file%.*}.wav" done They suggest that the audio files be of short duration; longer files should be split into smaller files. The files which I had were already pretty short so I did not do any splitting. The script wav2vec_manifest.py must be used to create a training data manifest before training. It will create two files (train.tsv and valid.tsv), basically creating lists of which audio files should be used for training and which should be used for validation. The path at which these two files are located is the first argument to the fairseq-train method. The second argument to fairseq-train is the path at which to save the model. After training there will be these two model files: checkpoint_best.pt checkpoint_last.pt These are updated at the end of each epoch, so I was able to terminate the training process early and still have those saved model files.
https://stackoverflow.com/questions/60377747/
Get matrix dimensions from pytorch layers
Here is an autoencoder I created from Pytorch tutorials : epochs = 1000 from pylab import plt plt.style.use('seaborn') import torch.utils.data as data_utils import torch import torchvision import torch.nn as nn from torch.autograd import Variable cuda = torch.cuda.is_available() FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor import numpy as np import pandas as pd import datetime as dt features = torch.tensor(np.array([ [1,2,3],[1,2,3],[100,200,500] ])) print(features) batch = 10 data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False) encoder = nn.Sequential(nn.Linear(3,batch), nn.Sigmoid()) decoder = nn.Sequential(nn.Linear(batch,3), nn.Sigmoid()) autoencoder = nn.Sequential(encoder, decoder) optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001) encoded_images = [] for i in range(epochs): for j, images in enumerate(data_loader): # images = images.view(images.size(0), -1) images = Variable(images).type(FloatTensor) optimizer.zero_grad() reconstructions = autoencoder(images) loss = torch.dist(images, reconstructions) loss.backward() optimizer.step() # encoded_images.append(encoder(images)) # print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor))) encoded_images = [] for j, images in enumerate(data_loader): images = images.view(images.size(0), -1) images = Variable(images).type(FloatTensor) encoded_images.append(encoder(images)) I can see the encoded images do have newly created dimension of 10. In order to understand the matrix operations going on under the hood I'm attempting to print the matrix dimensions of encoder and decoder but shape is not available on nn.Sequential How to print the matrix dimensions of nn.Sequential ?
A nn.Sequential is not a "layer", but rather a "container". It can store several layers and manage their execution (and some other functionalities). In your case, each nn.Sequential holds both the linear layer and the non-linear nn.Sigmoid activation. To get the shape of the weights of the first layer in a nn.Sequential you can simply do: encoder[0].weight.shape
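More generally, a small sketch that prints every weight and bias shape in the container, using the autoencoder built in the question (note Linear stores weights as (out_features, in_features)):

for name, param in autoencoder.named_parameters():
    print(name, tuple(param.shape))
# e.g. 0.0.weight (10, 3)  <- encoder's Linear(3, 10)
#      1.0.weight (3, 10)  <- decoder's Linear(10, 3)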
https://stackoverflow.com/questions/60384697/
Why is CNN convolution output size in PyTorch DQN tutorial computed with `kernel_size -1`?
Based on my understanding, CNN output size for 1D is output_size = (input_size - kernel_size + 2*padding)//stride + 1 Refer to PyTorch DQN Tutorial. In the tutorial, it uses 0 padding, which is fine. However, it computes the output size as follows: def conv2d_size_out(size, kernel_size = 5, stride = 2): return (size - (kernel_size - 1) - 1) // stride + 1 Is the above a mistake or is there something I missed?
No, it's not a mistake, because size - (kernel_size - 1) - 1 = size - kernel_size + 2 * 0 with 0 as padding (it's not code, it's an equation; sorry for the formatting). I think the tutorial is using the formula for the output size from the official documentation, which is output_size = (input_size + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1 official doc for conv1d
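A quick numeric check (84 is just an illustrative input width) showing that the two formulas agree when padding=0 and dilation=1:

size, kernel_size, stride = 84, 5, 2
print((size - (kernel_size - 1) - 1) // stride + 1)  # 40, the tutorial's formula
print((size - kernel_size + 2 * 0) // stride + 1)    # 40, the question's formula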
https://stackoverflow.com/questions/60406081/
How to index/slice the last dimension of a PyTorch tensor/numpy array of unknown dimensions
For example, if I have a 2D tensor X, I can do slicing X[:,1:]; if I have a 3D tensor Y, then I can do similar slicing for the last dimension like Y[:,:,1:]. What is the right way to do the slicing when given a tensor Z of unknown dimension? How about a numpy array? Thanks!
PyTorch supports NumPy-like indexing, so you can use Ellipsis (...) >>> z[..., -1:] Example: >>> x # (2,2) tensor tensor([[0.5385, 0.9280], [0.8937, 0.0423]]) >>> x[..., -1:] tensor([[0.9280], [0.0423]]) >>> y # (2,2,2) tensor tensor([[[0.5610, 0.8542], [0.2902, 0.2388]], [[0.2440, 0.1063], [0.7201, 0.1010]]]) >>> y[..., -1:] tensor([[[0.8542], [0.2388]], [[0.1063], [0.1010]]]) Ellipsis (...) expands to the number of : objects needed for the selection tuple to index all dimensions. In most cases, this means that the length of the expanded selection tuple is x.ndim. There may only be a single ellipsis present.
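The same syntax works on plain NumPy arrays, which is where it comes from:

import numpy as np

z = np.random.rand(2, 3, 4)
print(z[..., 1:].shape)  # (2, 3, 3) -- only the last dimension is sliced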
https://stackoverflow.com/questions/60406366/
pytorch: model.forward() affects the training process even if the result is not part of the loss function
I have the following code to train a simple model: train_loader, another_loader = get_loaders() model = torch_utils.get_model() ce = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9) another_loader_iter = iter(another_loader) for epoch in range(10): for i, (X, y) in enumerate(train_loader): model.train() X, y = X.to(device), y.to(device) pred = model.forward(X) loss1 = ce(pred, y) for j in range(2): X2, y2 = next(another_loader) pred2 = model.forward(X2) # This line affects the training process - why??? new_loss = loss1 optimizer.zero_grad() new_loss.backward() optimizer.step() When I comment out pred2 = model.forward(X2) the model trains as it should, but I want to use pred2 to add a new expression to the loss function.
I'd suggest the following changes: for epoch in range(10): for i, (X, y) in enumerate(train_loader): model.train() X, y = X.to(device), y.to(device) pred = model(X) loss1 = ce(pred, y) for j in range(2): model.eval() X2, y2 = next(another_loader) pred2 = model(X2) new_loss = loss1 optimizer.zero_grad() new_loss.backward() optimizer.step() The key change is calling model.eval() before the extra forward passes: in train mode, even a forward pass whose result never enters the loss updates batch-norm running statistics and applies dropout, which is why pred2 = model.forward(X2) affected training. I'd also say you don't need new_loss = loss1, but I think you're including more stuff here in your original code, right? In any case, check if these changes help you.
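If the extra predictions should have no side effects at all, a hedged sketch is to combine eval mode with no_grad — note that no_grad alone does not stop batch-norm statistics from updating, and you would drop no_grad once pred2 actually enters the loss:

model.eval()               # freeze batch-norm statistics, disable dropout
with torch.no_grad():      # skip building the autograd graph
    pred2 = model(X2)
model.train()              # restore train mode for the main loss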
https://stackoverflow.com/questions/60412527/
pytorch DataLoader: `Tensors must have same number of dimensions`
I am trying to fit an LSTM model in Pytorch. My data is too big to be read into memory and so I want to create mini-batches of data using the DataLoader function from Pytorch. I have two features as input (X1, X2). I have one output feature (y). I am using 365 timesteps of X1 & X2 as features used to predict y. The dimensions of my training array is: (n_observations, n_timesteps, n_features) == (9498, 365, 2) I don't understand why the code below isn't working because I have seen other examples where the X, y pairs have different numbers of dimensions (LSTM for runoff modelling, Pytorch's own docs ) Minimum Reproducible Example import numpy as np import torch from torch.utils.data import DataLoader train_x = torch.Tensor(np.random.random((9498, 365, 2))) train_y = torch.Tensor(np.random.random((9498, 1))) val_x = torch.Tensor(np.random.random((1097, 365, 2))) val_y = torch.Tensor(np.random.random((1097, 1))) test_x = torch.Tensor(np.random.random((639, 365, 2))) test_y = torch.Tensor(np.random.random((639, 1))) train_dataset = (train_x, train_y) test_dataset = (test_x, test_y) val_dataset = (val_x, val_y) train_dataloader = DataLoader(train_dataset, batch_size=256) iterator = train_dataloader.__iter__() iterator.next() Output: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-47-2a0b28b53c8f> in <module> 13 14 iterator = train_dataloader.__iter__() ---> 15 iterator.next() /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) 344 def __next__(self): 345 index = self._next_index() # may raise StopIteration --> 346 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 347 if self._pin_memory: 348 data = _utils.pin_memory.pin_memory(data) /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 45 else: 46 data = self.dataset[possibly_batched_index] ---> 47 return self.collate_fn(data) /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py in default_collate(batch) 53 storage = elem.storage()._new_shared(numel) 54 out = elem.new(storage) ---> 55 return torch.stack(batch, 0, out=out) 56 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ 57 and elem_type.__name__ != 'string_': RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 4 and 3 at /tmp/pip-req-build-4baxydiv/aten/src/TH/generic/THTensor.cpp:680
The torch.utils.data.DataLoader must get a torch.utils.data.Dataset as its dataset argument. You're giving it a tuple of tensors. I suggest you use the torch.utils.data.TensorDataset as follows: from torch.utils.data import DataLoader, TensorDataset train_x = torch.rand(9498, 365, 2) train_y = torch.rand(9498, 1) train_dataset = TensorDataset(train_x, train_y) train_dataloader = DataLoader(train_dataset, batch_size=256) for x, y in train_dataloader: print(x.shape) Check if it solves your problem.
https://stackoverflow.com/questions/60414698/
PyTorch: Convolving a single channel image using torch.nn.Conv2d
I am trying to use a convolution layer to convolve a grayscale (single layer) image (stored as a numpy array). Here is the code: conv1 = torch.nn.Conv2d(in_channels = 1, out_channels = 1, kernel_size = 33) tensor1 = torch.from_numpy(img_gray) out_2d_np = conv1(tensor1) out_2d_np = np.asarray(out_2d_np) I want my kernel to be 33x33 and the number of output layers should be equal to the number of input layers, which is 1 as the image's RGB channels are summed. When out_2d_np = conv1(tensor1) is run it yields the following runtime error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight 1 1 33 33, but got 2-dimensional input of size [246, 248] instead Any idea on how I can solve this? I specifically want to use the torch.nn.Conv2d() class/function. Thanks in advance for any help!
pytorch's Conv2d expects its 2D inputs to actually have 4 dimensions: mini-batch dim, channel dim, and the two spatial dimensions. Your input tensor has only two spatial dimensions and it lacks the mini-batch and channel dimensions. In your case these two dimensions are actually singleton dimensions (dimensions with size=1). Try: conv1(tensor1[None, None, ...])
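Equivalently (a matter of style), the two singleton dimensions can be added with unsqueeze:

# (246, 248) -> (1, 1, 246, 248); depending on img_gray's dtype you may
# also need .float(), since Conv2d expects a floating-point input
out = conv1(tensor1.unsqueeze(0).unsqueeze(0))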
https://stackoverflow.com/questions/60421221/
Which ImageNet classes is PyTorch trained on?
PyTorch lets you download pre-trained models for classification here, but it's not clear how to go from the output tensor of probabilities back to the actual class labels. Any ideas?
See the examples here: https://discuss.pytorch.org/t/imagenet-classes/4923 https://www.learnopencv.com/pytorch-for-beginners-image-classification-using-pre-trained-models/ You need to download imagenet_classes.txt, which maps each of the model's 1000 output indices to a human-readable class label.
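A minimal sketch of going from the output tensor to labels, assuming imagenet_classes.txt (one label per line, in index order) has been downloaded as suggested; the random output tensor is a stand-in for a pretrained model's actual output:

import torch

with open("imagenet_classes.txt") as f:
    labels = [line.strip() for line in f]

output = torch.randn(1, 1000)  # stand-in for model(input_batch)
probs = torch.nn.functional.softmax(output[0], dim=0)
top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(labels[idx], float(p))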
https://stackoverflow.com/questions/60421367/
DCGAN debugging. Getting just garbage
Introduction: I am trying to get a CDCGAN (Conditional Deep Convolutional Generative Adversarial Network) to work on the MNIST dataset which should be fairly easy considering that the library (PyTorch) I am using has a tutorial on its website. But I can't seem to get It working it just produces garbage or the model collapses or both. What I tried: making the model Conditional semi-supervised learning using batch norm using dropout on each layer besides the input/output layer on the generator and discriminator label smoothing to combat overconfidence adding noise to the images (I guess you call this instance noise) to get a better data distribution use leaky relu to avoid vanishing gradients using a replay buffer to combat forgetting of learned stuff and overfitting playing with hyperparameters comparing it to the model from PyTorch tutorial basically what I did besides some things like Embedding layer ect. Images my Model generated: Hyperparameters: batch_size=50, learning_rate_discrimiantor=0.0001, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, droupout=0.5 batch_size=50, learning_rate_discriminator=0.0003, learning_rate_generator=0.0003, shuffle=True, ndf=64, ngf=64, dropout=0 Images Pytorch tutorial Model generated: Code for the pytorch tutorial dcgan model As comparison here are the images from the DCGAN from the pytorch turoial: My Code: import torch import torch.nn as nn import torchvision from torchvision import transforms, datasets import torch.nn.functional as F from torch import optim as optim from torch.utils.tensorboard import SummaryWriter import numpy as np import os import time class Discriminator(torch.nn.Module): def __init__(self, ndf=16, dropout_value=0.5): # ndf feature map discriminator super().__init__() self.ndf = ndf self.droupout_value = dropout_value self.condi = nn.Sequential( nn.Linear(in_features=10, out_features=64 * 64) ) self.hidden0 = nn.Sequential( nn.Conv2d(in_channels=2, out_channels=self.ndf, kernel_size=4, stride=2, padding=1, bias=False), nn.LeakyReLU(0.2), ) self.hidden1 = nn.Sequential( nn.Conv2d(in_channels=self.ndf, out_channels=self.ndf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 2, out_channels=self.ndf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ndf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 4, out_channels=self.ndf * 8, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ndf * 8), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( nn.Conv2d(in_channels=self.ndf * 8, out_channels=1, kernel_size=4, stride=1, padding=0, bias=False), torch.nn.Sigmoid() ) def forward(self, x, y): y = self.condi(y.view(-1, 10)) y = y.view(-1, 1, 64, 64) x = torch.cat((x, y), dim=1) x = self.hidden0(x) x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Generator(torch.nn.Module): def __init__(self, n_features=100, ngf=16, c_channels=1, dropout_value=0.5): # ngf feature map of generator super().__init__() self.ngf = ngf self.n_features = n_features self.c_channels = c_channels self.droupout_value = dropout_value self.hidden0 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.n_features + 10, out_channels=self.ngf * 8, kernel_size=4, stride=1, padding=0, bias=False), nn.BatchNorm2d(self.ngf * 8), 
nn.LeakyReLU(0.2) ) self.hidden1 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 8, out_channels=self.ngf * 4, kernel_size=4, stride=2, padding=1, bias=False), #nn.BatchNorm2d(self.ngf * 4), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden2 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 4, out_channels=self.ngf * 2, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf * 2), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.hidden3 = nn.Sequential( nn.ConvTranspose2d(in_channels=self.ngf * 2, out_channels=self.ngf, kernel_size=4, stride=2, padding=1, bias=False), nn.BatchNorm2d(self.ngf), nn.LeakyReLU(0.2), nn.Dropout(self.droupout_value) ) self.out = nn.Sequential( # "out_channels=1" because gray scale nn.ConvTranspose2d(in_channels=self.ngf, out_channels=1, kernel_size=4, stride=2, padding=1, bias=False), nn.Tanh() ) def forward(self, x, y): x_cond = torch.cat((x, y), dim=1) # Combine flatten image with conditional input (class labels) x = self.hidden0(x_cond) # Image goes into a "ConvTranspose2d" layer x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x class Logger: def __init__(self, model_name, model1, model2, m1_optimizer, m2_optimizer, model_parameter, train_loader): self.out_dir = "data" self.model_name = model_name self.train_loader = train_loader self.model1 = model1 self.model2 = model2 self.model_parameter = model_parameter self.m1_optimizer = m1_optimizer self.m2_optimizer = m2_optimizer # Exclude Epochs of the model name. This make sense e.g. when we stop a training progress and continue later on. self.experiment_name = '_'.join("{!s}={!r}".format(k, v) for (k, v) in model_parameter.items())\ .replace("Epochs" + "=" + str(model_parameter["Epochs"]), "") self.d_error = 0 self.g_error = 0 self.tb = SummaryWriter(log_dir=str(self.out_dir + "/log/" + self.model_name + "/runs/" + self.experiment_name)) self.path_image = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/images/{self.experiment_name}') self.path_model = os.path.join(os.getcwd(), f'{self.out_dir}/log/{self.model_name}/model/{self.experiment_name}') try: os.makedirs(self.path_image) except Exception as e: print("WARNING: ", str(e)) try: os.makedirs(self.path_model) except Exception as e: print("WARNING: ", str(e)) def log_graph(self, model1_input, model2_input, model1_label, model2_label): self.tb.add_graph(self.model1, input_to_model=(model1_input, model1_label)) self.tb.add_graph(self.model2, input_to_model=(model2_input, model2_label)) def log(self, num_epoch, d_error, g_error): self.d_error = d_error self.g_error = g_error self.tb.add_scalar("Discriminator Train Error", self.d_error, num_epoch) self.tb.add_scalar("Generator Train Error", self.g_error, num_epoch) def log_image(self, images, epoch, batch_num): grid = torchvision.utils.make_grid(images) torchvision.utils.save_image(grid, f'{self.path_image}\\Epoch_{epoch}_batch_{batch_num}.png') self.tb.add_image("Generator Image", grid) def log_histogramm(self): for name, param in self.model2.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter["Epochs"]) self.tb.add_histogram(f'gen_{name}.grad', param.grad, self.model_parameter["Epochs"]) for name, param in self.model1.named_parameters(): self.tb.add_histogram(name, param, self.model_parameter["Epochs"]) self.tb.add_histogram(f'dis_{name}.grad', param.grad, self.model_parameter["Epochs"]) def log_model(self, num_epoch): torch.save({ "epoch": num_epoch, 
"model_generator_state_dict": self.model1.state_dict(), "model_discriminator_state_dict": self.model2.state_dict(), "optimizer_generator_state_dict": self.m1_optimizer.state_dict(), "optimizer_discriminator_state_dict": self.m2_optimizer.state_dict(), }, str(self.path_model + f'\\{time.time()}_epoch{num_epoch}.pth')) def close(self, logger, images, num_epoch, d_error, g_error): logger.log_model(num_epoch) logger.log_histogramm() logger.log(num_epoch, d_error, g_error) self.tb.close() def display_stats(self, epoch, batch_num, dis_error, gen_error): print(f'Epoch: [{epoch}/{self.model_parameter["Epochs"]}] ' f'Batch: [{batch_num}/{len(self.train_loader)}] ' f'Loss_D: {dis_error.data.cpu()}, ' f'Loss_G: {gen_error.data.cpu()}') def get_MNIST_dataset(num_workers_loader, model_parameter, out_dir="data"): compose = transforms.Compose([ transforms.Resize((64, 64)), transforms.CenterCrop((64, 64)), transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.5], std=[0.5]) ]) dataset = datasets.MNIST( root=out_dir, train=True, download=True, transform=compose ) train_loader = torch.utils.data.DataLoader(dataset, batch_size=model_parameter["batch_size"], num_workers=num_workers_loader, shuffle=model_parameter["shuffle"]) return dataset, train_loader def train_discriminator(p_optimizer, p_noise, p_images, p_fake_target, p_real_target, p_images_labels, p_fake_labels, device): p_optimizer.zero_grad() # 1.1 Train on real data pred_dis_real = discriminator(p_images, p_images_labels) error_real = loss(pred_dis_real, p_real_target) error_real.backward() # 1.2 Train on fake data fake_data = generator(p_noise, p_fake_labels).detach() fake_data = add_noise_to_image(fake_data, device) pred_dis_fake = discriminator(fake_data, p_fake_labels) error_fake = loss(pred_dis_fake, p_fake_target) error_fake.backward() p_optimizer.step() return error_fake + error_real def train_generator(p_optimizer, p_noise, p_real_target, p_fake_labels, device): p_optimizer.zero_grad() fake_images = generator(p_noise, p_fake_labels) fake_images = add_noise_to_image(fake_images, device) pred_dis_fake = discriminator(fake_images, p_fake_labels) error_fake = loss(pred_dis_fake, p_real_target) # because """ We use "p_real_target" instead of "p_fake_target" because we want to maximize that the discriminator is wrong. """ error_fake.backward() p_optimizer.step() return fake_images, pred_dis_fake, error_fake # TODO change to a Truncated normal distribution def get_noise(batch_size, n_features=100): return torch.FloatTensor(batch_size, n_features, 1, 1).uniform_(-1, 1) # We flip label of real and fate data. 
Better gradient flow I have told def get_real_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.0, 0.2) def get_fake_data_target(batch_size): return torch.FloatTensor(batch_size, 1, 1, 1).uniform_(0.8, 1.1) def image_to_vector(images): return torch.flatten(images, start_dim=1, end_dim=-1) def vector_to_image(images): return images.view(images.size(0), 1, 28, 28) def get_rand_labels(batch_size): return torch.randint(low=0, high=9, size=(batch_size,)) def load_model(model_load_path): if model_load_path: checkpoint = torch.load(model_load_path) discriminator.load_state_dict(checkpoint["model_discriminator_state_dict"]) generator.load_state_dict(checkpoint["model_generator_state_dict"]) dis_opti.load_state_dict(checkpoint["optimizer_discriminator_state_dict"]) gen_opti.load_state_dict(checkpoint["optimizer_generator_state_dict"]) return checkpoint["epoch"] else: return 0 def init_model_optimizer(model_parameter, device): # Initialize the Models discriminator = Discriminator(ndf=model_parameter["ndf"], dropout_value=model_parameter["dropout"]).to(device) generator = Generator(ngf=model_parameter["ngf"], dropout_value=model_parameter["dropout"]).to(device) # train dis_opti = optim.Adam(discriminator.parameters(), lr=model_parameter["learning_rate_dis"], betas=(0.5, 0.999)) gen_opti = optim.Adam(generator.parameters(), lr=model_parameter["learning_rate_gen"], betas=(0.5, 0.999)) return discriminator, generator, dis_opti, gen_opti def get_hot_vector_encode(labels, device): return torch.eye(10)[labels].view(-1, 10, 1, 1).to(device) def add_noise_to_image(images, device, level_of_noise=0.1): return images[0].to(device) + (level_of_noise) * torch.randn(images.shape).to(device) if __name__ == "__main__": # Hyperparameter model_parameter = { "batch_size": 500, "learning_rate_dis": 0.0002, "learning_rate_gen": 0.0002, "shuffle": False, "Epochs": 10, "ndf": 64, "ngf": 64, "dropout": 0.5 } # Parameter r_frequent = 10 # How many samples we save for replay per batch (batch_size / r_frequent). model_name = "CDCGAN" # The name of you model e.g. 
"Gan" num_workers_loader = 1 # How many workers should load the data sample_save_size = 16 # How many numbers your saved imaged should show device = "cuda" # Which device should be used to train the neural network model_load_path = "" # If set load model instead of training from new num_epoch_log = 1 # How frequent you want to log/ torch.manual_seed(43) # Sets a seed for torch for reproducibility dataset_train, train_loader = get_MNIST_dataset(num_workers_loader, model_parameter) # Get dataset # Initialize the Models and optimizer discriminator, generator, dis_opti, gen_opti = init_model_optimizer(model_parameter, device) # Init model/Optimizer start_epoch = load_model(model_load_path) # when we want to load a model # Init Logger logger = Logger(model_name, generator, discriminator, gen_opti, dis_opti, model_parameter, train_loader) loss = nn.BCELoss() images, labels = next(iter(train_loader)) # For logging # For testing # pred = generator(get_noise(model_parameter["batch_size"]).to(device), get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device)) # dis = discriminator(images.to(device), get_hot_vector_encode(labels, device)) logger.log_graph(get_noise(model_parameter["batch_size"]).to(device), images.to(device), get_hot_vector_encode(get_rand_labels(model_parameter["batch_size"]), device), get_hot_vector_encode(labels, device)) # Array to store exp_replay = torch.tensor([]).to(device) for num_epoch in range(start_epoch, model_parameter["Epochs"]): for batch_num, data_loader in enumerate(train_loader): images, labels = data_loader images = add_noise_to_image(images, device) # Add noise to the images # 1. Train Discriminator dis_error = train_discriminator( dis_opti, get_noise(model_parameter["batch_size"]).to(device), images.to(device), get_fake_data_target(model_parameter["batch_size"]).to(device), get_real_data_target(model_parameter["batch_size"]).to(device), get_hot_vector_encode(labels, device), get_hot_vector_encode( get_rand_labels(model_parameter["batch_size"]), device), device ) # 2. 
Train Generator fake_image, pred_dis_fake, gen_error = train_generator( gen_opti, get_noise(model_parameter["batch_size"]).to(device), get_real_data_target(model_parameter["batch_size"]).to(device), get_hot_vector_encode( get_rand_labels(model_parameter["batch_size"]), device), device ) # Store a random point for experience replay perm = torch.randperm(fake_image.size(0)) r_idx = perm[:max(1, int(model_parameter["batch_size"] / r_frequent))] r_samples = add_noise_to_image(fake_image[r_idx], device) exp_replay = torch.cat((exp_replay, r_samples), 0).detach() if exp_replay.size(0) >= model_parameter["batch_size"]: # Train on experienced data dis_opti.zero_grad() r_label = get_hot_vector_encode(torch.zeros(exp_replay.size(0)).numpy(), device) pred_dis_real = discriminator(exp_replay, r_label) error_real = loss(pred_dis_real, get_fake_data_target(exp_replay.size(0)).to(device)) error_real.backward() dis_opti.step() print(f'Epoch: [{num_epoch}/{model_parameter["Epochs"]}] ' f'Batch: Replay/Experience batch ' f'Loss_D: {error_real.data.cpu()}, ' ) exp_replay = torch.tensor([]).to(device) logger.display_stats(epoch=num_epoch, batch_num=batch_num, dis_error=dis_error, gen_error=gen_error) if batch_num % 100 == 0: logger.log_image(fake_image[:sample_save_size], num_epoch, batch_num) logger.log(num_epoch, dis_error, gen_error) if num_epoch % num_epoch_log == 0: logger.log_model(num_epoch) logger.log_histogramm() logger.close(logger, fake_image[:sample_save_size], num_epoch, dis_error, gen_error) First link to my Code (Pastebin) Second link to my Code (0bin) Conclusion: Since I implemented all these things (e.g. label smoothing) which are considered beneficial to a GAN/DCGAN. And my Model still performs worse than the Tutorial DCGAN from PyTorch I think I might have a bug in my code but I can't seem to find it. Reproducibility: You should be able to just copy the code and run it if you have the libraries that I imported installed to look for yourself if you can find anything. I appreciate any feedback.
So I solved this issue a while ago, but forgot to post an answer on Stack Overflow. So I will simply post my code here, which should work pretty well.

Some disclaimers:
I am not quite sure if it still works, since I did this a year ago
it's for 128x128px images (MNIST)
it's not a vanilla GAN, I used various optimization techniques
if you want to use it you need to change various details, such as the training dataset

Resources:
Multi-Scale Gradients
Instance Noise
Various tricks I used
More tricks

import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from pytorch_lightning import loggers
from numpy.random import choice
import os
from pathlib import Path
import shutil
from collections import OrderedDict


# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)


# randomly flip some labels
def noisy_labels(y, p_flip=0.05):  # flip labels with 5% probability
    # determine the number of labels to flip
    n_select = int(p_flip * y.shape[0])
    # choose labels to flip
    flip_ix = choice([i for i in range(y.shape[0])], size=n_select)
    # invert the labels in place
    y[flip_ix] = 1 - y[flip_ix]
    return y


class AddGaussianNoise(object):
    def __init__(self, mean=0.0, std=0.1):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        tensor = tensor.cuda()
        return tensor + (torch.randn(tensor.size()) * self.std + self.mean).cuda()

    def __repr__(self):
        return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std)


def resize2d(img, size):
    return (F.adaptive_avg_pool2d(img, size).data).cuda()


def get_valid_labels(img):
    return ((0.8 - 1.1) * torch.rand(img.shape[0], 1, 1, 1) + 1.1).cuda()  # soft labels


def get_unvalid_labels(img):
    return (noisy_labels((0.0 - 0.3) * torch.rand(img.shape[0], 1, 1, 1) + 0.3)).cuda()  # soft labels


class Generator(pl.LightningModule):
    def __init__(self, ngf, nc, latent_dim):
        super(Generator, self).__init__()
        self.ngf = ngf
        self.latent_dim = latent_dim
        self.nc = nc

        self.fc0 = nn.Sequential(
            # input is Z, going into a convolution
            nn.utils.spectral_norm(nn.ConvTranspose2d(latent_dim, ngf * 16, 4, 1, 0, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ngf * 16)
        )
        self.fc1 = nn.Sequential(
            # state size. (ngf*8) x 4 x 4
            nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 16, ngf * 8, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ngf * 8)
        )
        self.fc2 = nn.Sequential(
            # state size. (ngf*4) x 8 x 8
            nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ngf * 4)
        )
        self.fc3 = nn.Sequential(
            # state size. (ngf*2) x 16 x 16
            nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ngf * 2)
        )
        self.fc4 = nn.Sequential(
            # state size. (ngf) x 32 x 32
            nn.utils.spectral_norm(nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ngf)
        )
        self.fc5 = nn.Sequential(
            # state size. (nc) x 64 x 64
            nn.utils.spectral_norm(nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False)),
            nn.Tanh()
        )
        # state size. (nc) x 128 x 128

        # For Multi-Scale Gradient
        # Converting the intermediate layers into images
        self.fc0_r = nn.Conv2d(ngf * 16, self.nc, 1)
        self.fc1_r = nn.Conv2d(ngf * 8, self.nc, 1)
        self.fc2_r = nn.Conv2d(ngf * 4, self.nc, 1)
        self.fc3_r = nn.Conv2d(ngf * 2, self.nc, 1)
        self.fc4_r = nn.Conv2d(ngf, self.nc, 1)

    def forward(self, input):
        x_0 = self.fc0(input)
        x_1 = self.fc1(x_0)
        x_2 = self.fc2(x_1)
        x_3 = self.fc3(x_2)
        x_4 = self.fc4(x_3)
        x_5 = self.fc5(x_4)

        # For Multi-Scale Gradient
        # Converting the intermediate layers into images
        x_0_r = self.fc0_r(x_0)
        x_1_r = self.fc1_r(x_1)
        x_2_r = self.fc2_r(x_2)
        x_3_r = self.fc3_r(x_3)
        x_4_r = self.fc4_r(x_4)

        return x_5, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r


class Discriminator(pl.LightningModule):
    def __init__(self, ndf, nc):
        super(Discriminator, self).__init__()
        self.nc = nc
        self.ndf = ndf
        self.fc0 = nn.Sequential(
            # input is (nc) x 128 x 128
            nn.utils.spectral_norm(nn.Conv2d(nc, ndf, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True)
        )
        self.fc1 = nn.Sequential(
            # state size. (ndf) x 64 x 64
            nn.utils.spectral_norm(nn.Conv2d(ndf + nc, ndf * 2, 4, 2, 1, bias=False)),  # "+ nc" because of multi-scale gradient
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ndf * 2)
        )
        self.fc2 = nn.Sequential(
            # state size. (ndf*2) x 32 x 32
            nn.utils.spectral_norm(nn.Conv2d(ndf * 2 + nc, ndf * 4, 4, 2, 1, bias=False)),  # "+ nc" because of multi-scale gradient
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ndf * 4)
        )
        self.fc3 = nn.Sequential(
            # state size. (ndf*4) x 16 x 16
            nn.utils.spectral_norm(nn.Conv2d(ndf * 4 + nc, ndf * 8, 4, 2, 1, bias=False)),  # "+ nc" because of multi-scale gradient
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ndf * 8),
        )
        self.fc4 = nn.Sequential(
            # state size. (ndf*8) x 8 x 8
            nn.utils.spectral_norm(nn.Conv2d(ndf * 8 + nc, ndf * 16, 4, 2, 1, bias=False)),
            nn.LeakyReLU(0.2, inplace=True),
            nn.BatchNorm2d(ndf * 16)
        )
        self.fc5 = nn.Sequential(
            # state size. (ndf*8) x 4 x 4
            nn.utils.spectral_norm(nn.Conv2d(ndf * 16 + nc, 1, 4, 1, 0, bias=False)),
            nn.Sigmoid()
        )
        # state size. 1 x 1 x 1

    def forward(self, input, detach_or_not):
        # When we train in combination with the generator we use multi-scale gradient.
        x, x_0_r, x_1_r, x_2_r, x_3_r, x_4_r = input
        if detach_or_not:
            x = x.detach()
        x_0 = self.fc0(x)
        x_0 = torch.cat((x_0, x_4_r), dim=1)  # Concat Multi-Scale Gradient
        x_1 = self.fc1(x_0)
        x_1 = torch.cat((x_1, x_3_r), dim=1)  # Concat Multi-Scale Gradient
        x_2 = self.fc2(x_1)
        x_2 = torch.cat((x_2, x_2_r), dim=1)  # Concat Multi-Scale Gradient
        x_3 = self.fc3(x_2)
        x_3 = torch.cat((x_3, x_1_r), dim=1)  # Concat Multi-Scale Gradient
        x_4 = self.fc4(x_3)
        x_4 = torch.cat((x_4, x_0_r), dim=1)  # Concat Multi-Scale Gradient
        x_5 = self.fc5(x_4)
        return x_5


class DCGAN(pl.LightningModule):
    def __init__(self, hparams, checkpoint_folder, experiment_name):
        super().__init__()
        self.hparams = hparams
        self.checkpoint_folder = checkpoint_folder
        self.experiment_name = experiment_name

        # networks
        self.generator = Generator(ngf=hparams.ngf, nc=hparams.nc, latent_dim=hparams.latent_dim)
        self.discriminator = Discriminator(ndf=hparams.ndf, nc=hparams.nc)
        self.generator.apply(weights_init)
        self.discriminator.apply(weights_init)

        # cache for generated images
        self.generated_imgs = None
        self.last_imgs = None

        # For experience replay
        self.exp_replay_dis = torch.tensor([])

    def forward(self, z):
        return self.generator(z)

    def adversarial_loss(self, y_hat, y):
        return F.binary_cross_entropy(y_hat, y)

    def training_step(self, batch, batch_nb, optimizer_idx):
        # For adding instance noise, for more visit:
        # https://www.inference.vc/instance-noise-a-trick-for-stabilising-gan-training/
        std_gaussian = max(0, self.hparams.level_of_noise - (
            (self.hparams.level_of_noise * 2) * (self.current_epoch / self.hparams.epochs)))
        AddGaussianNoiseInst = AddGaussianNoise(std=std_gaussian)  # the noise decays over time

        imgs, _ = batch
        imgs = AddGaussianNoiseInst(imgs)  # Adding instance noise to real images
        self.last_imgs = imgs

        # train generator
        if optimizer_idx == 0:
            # sample noise
            z = torch.randn(imgs.shape[0], self.hparams.latent_dim, 1, 1).cuda()

            # generate images
            self.generated_imgs = self(z)

            # ground truth result (ie: all fake)
            # adversarial loss is binary cross-entropy; [0] is the image of the last layer
            g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, False),
                                           get_valid_labels(self.generated_imgs[0]))

            tqdm_dict = {'g_loss': g_loss}
            log = {'g_loss': g_loss, "std_gaussian": std_gaussian}
            output = OrderedDict({
                'loss': g_loss,
                'progress_bar': tqdm_dict,
                'log': log
            })
            return output

        # train discriminator
        if optimizer_idx == 1:
            # Measure discriminator's ability to classify real from generated samples

            # how well can it label as real?
            real_loss = self.adversarial_loss(
                self.discriminator([imgs, resize2d(imgs, 4), resize2d(imgs, 8), resize2d(imgs, 16),
                                    resize2d(imgs, 32), resize2d(imgs, 64)], False),
                get_valid_labels(imgs))

            # how well can it label as fake?; [0] is the image of the last layer
            fake_loss = self.adversarial_loss(self.discriminator(self.generated_imgs, True),
                                              get_unvalid_labels(self.generated_imgs[0]))

            # discriminator loss is the average of these
            d_loss = (real_loss + fake_loss) / 2

            tqdm_dict = {'d_loss': d_loss}
            log = {'d_loss': d_loss, "std_gaussian": std_gaussian}
            output = OrderedDict({
                'loss': d_loss,
                'progress_bar': tqdm_dict,
                'log': log
            })
            return output

    def configure_optimizers(self):
        lr_gen = self.hparams.lr_gen
        lr_dis = self.hparams.lr_dis
        b1 = self.hparams.b1
        b2 = self.hparams.b2
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr_gen, betas=(b1, b2))
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr_dis, betas=(b1, b2))
        return [opt_g, opt_d], []

    def backward(self, trainer, loss, optimizer, optimizer_idx: int) -> None:
        loss.backward(retain_graph=True)

    def train_dataloader(self):
        # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
        #                                 transforms.ToTensor(),
        #                                 transforms.Normalize([0.5], [0.5])])
        # dataset = torchvision.datasets.MNIST(os.getcwd(), train=False, download=True, transform=transform)
        # return DataLoader(dataset, batch_size=self.hparams.batch_size)

        # transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
        #                                 transforms.ToTensor(),
        #                                 transforms.Normalize([0.5], [0.5])])
        # train_dataset = torchvision.datasets.ImageFolder(
        #     root="./drive/My Drive/datasets/flower_dataset/",
        #     # root="./drive/My Drive/datasets/ghibli_dataset_small_overfit/",
        #     transform=transform
        # )
        # return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True,
        #                   batch_size=self.hparams.batch_size)

        transform = transforms.Compose([transforms.Resize((self.hparams.image_size, self.hparams.image_size)),
                                        transforms.ToTensor(),
                                        transforms.Normalize([0.5], [0.5])])
        train_dataset = torchvision.datasets.ImageFolder(
            root="ghibli_dataset_small_overfit/",
            transform=transform
        )
        return DataLoader(train_dataset, num_workers=self.hparams.num_workers, shuffle=True,
                          batch_size=self.hparams.batch_size)

    def on_epoch_end(self):
        z = torch.randn(4, self.hparams.latent_dim, 1, 1).cuda()

        # match gpu device (or keep as cpu)
        if self.on_gpu:
            z = z.cuda(self.last_imgs.device.index)

        # log sampled images
        sample_imgs = self.generator(z)[0]
        torchvision.utils.save_image(sample_imgs, f'generated_images_epoch{self.current_epoch}.png')

        # save model
        if self.current_epoch % self.hparams.save_model_every_epoch == 0:
            trainer.save_checkpoint(
                self.checkpoint_folder + "/" + self.experiment_name + "_epoch_" + str(self.current_epoch) + ".ckpt")


from argparse import Namespace

args = {
    'batch_size': 128,  # batch size
    'lr_gen': 0.0003,  # TTUR; learning rate of both networks; tested value: 0.0002
    'lr_dis': 0.0003,  # TTUR; learning rate of both networks; tested value: 0.0002
    'b1': 0.5,  # momentum for Adam; tested value (DCGAN paper): 0.5
    'b2': 0.999,  # momentum for Adam; tested value (DCGAN paper): 0.999
    'latent_dim': 256,  # tested value which worked (in V4_1): 100
    'nc': 3,  # number of color channels
    'ndf': 8,  # number of discriminator features
    'ngf': 8,  # number of generator features
    'epochs': 4,  # the maximal amount of epochs the algorithm should run
    'save_model_every_epoch': 1,  # how often we save our model
    'image_size': 128,  # size of the image
    'num_workers': 3,
    'level_of_noise': 0.1,  # how much instance noise we introduce (std; tested values: 0.15 and 0.1)
    'experience_save_per_batch': 1,  # this value should be very low; tested value which works: 1
    'experience_batch_size': 50  # this value shouldn't be too high; tested value which works: 50
}
hparams = Namespace(**args)

# Parameters
experiment_name = "DCGAN_6_2_MNIST_128px"
dataset_name = "mnist"
checkpoint_folder = "DCGAN/"
tags = ["DCGAN", "128x128"]
dirpath = Path(checkpoint_folder)

# defining net
net = DCGAN(hparams, checkpoint_folder, experiment_name)

torch.autograd.set_detect_anomaly(True)
trainer = pl.Trainer(
    # resume_from_checkpoint="DCGAN_V4_2_GHIBLI_epoch_999.ckpt",
    max_epochs=args["epochs"],
    gpus=1
)
trainer.fit(net)
https://stackoverflow.com/questions/60421475/
Pytorch tensor.save() produces huge files for small tensors from MNIST
I'm working with the MNIST dataset from a Kaggle challenge and am having trouble preprocessing the data. Furthermore, I don't know what the best practices are and was wondering if you could advise me on that. Disclaimer: I can't just use torchvision.datasets.mnist because I need to use Kaggle's data for training and submission.

In this tutorial, it was advised to create a Dataset object loading .pt tensors from files, to fully utilize the GPU. In order to achieve that, I needed to load the csv data provided by Kaggle and save it as .pt files:

import pandas as pd
import torch
import numpy as np

# import data
digits_train = pd.read_csv('data/train.csv')

train_tensor = torch.tensor(digits_train.drop('label', axis=1).to_numpy(), dtype=torch.int)
labels_tensor = torch.tensor(digits_train['label'].to_numpy())

for i in range(train_tensor.shape[0]):
    torch.save(train_tensor[i], "data/train-" + str(i) + ".pt")

Each train_tensor[i].shape is torch.Size([1, 784])

However, each such .pt file has a size of about 130MB. A tensor of the same size, with randomly generated integers, has a size of 6.6kB. Why are these tensors so huge, and how can I reduce their size?

The dataset is 42 000 samples. Should I even bother with batching this data? Should I bother with saving tensors to separate files, rather than loading them all into RAM and then slicing into batches? What is the most optimal approach here?
As explained in this discussion, torch.save() saves the whole tensor, not just the slice. You need to explicitly copy the data using clone(). Don't worry, at runtime the data is only allocated once unless you explicitly create copies. As a general advice: If the data easily fits into your memory, just load it at once. For MNIST with 130 MB that's certainly the case. However, I would still batch the data because it converges faster. Look up the advantages of SGD for more details.
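A minimal sketch of the suggested fix, applied to the saving loop from the question (clone() materializes each row as its own small tensor instead of a view into the full training matrix):

for i in range(train_tensor.shape[0]):
    # clone() copies only this 1 x 784 slice, so the saved file is small
    torch.save(train_tensor[i].clone(), "data/train-" + str(i) + ".pt")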
https://stackoverflow.com/questions/60421630/
How can I do return_sequences for a stacked LSTM model with PyTorch?
I have a Tensorflow / Keras model with:

self.model.add(Bidirectional(LSTM(lstm1_size, input_shape=(seq_length, feature_dim), return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))

self.model.add(Bidirectional(LSTM(lstm2_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))

# BOTTLENECK HERE

self.model.add(Bidirectional(LSTM(lstm3_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))

self.model.add(Bidirectional(LSTM(lstm4_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))

self.model.add(Bidirectional(LSTM(lstm5_size, return_sequences=True)))
self.model.add(BatchNormalization())
self.model.add(Dropout(0.2))

self.model.add(Dense(feature_dim, activation='linear'))

How do I create a stacked PyTorch model with the return_sequences? My understanding of return_sequences is that it returns the "output" of each layer of LSTMs, which is then fed into the next layer. How would I accomplish this with PyTorch?
PyTorch always returns sequences.

https://pytorch.org/docs/stable/nn.html#lstm

Example:

import torch as t

batch_size = 2
time_steps = 10
features = 2
data = t.empty(batch_size, time_steps, features).normal_()

lstm = t.nn.LSTM(input_size=2, hidden_size=3, bidirectional=True, batch_first=True)
output, (h_n, c_n) = lstm(data)

[output.shape, h_n.shape, c_n.shape]
# [torch.Size([2, 10, 6]), torch.Size([2, 2, 3]), torch.Size([2, 2, 3])]

class Net(t.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.lstm_1 = t.nn.LSTM(input_size=2, hidden_size=3, bidirectional=True, batch_first=True)
        self.lstm_2 = t.nn.LSTM(input_size=2 * 3, hidden_size=4, bidirectional=True, batch_first=True)

    def forward(self, input):
        output, (h_n, c_n) = self.lstm_1(input)
        output, (h_n, c_n) = self.lstm_2(output)
        return output

net = Net()

net(data).shape
# torch.Size([2, 10, 8])
https://stackoverflow.com/questions/60423030/
Pytorch Custom Module Using Existing CNN Module
I want to access and edit individual modules within a torchvision module and adjust the input. I know you can edit submodules like this:

import torchvision

resnet18 = torchvision.models.resnet18()
print(resnet18._modules['conv1'])
# Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)

But I want to create a custom Net(nn.Module) kind of class so I can add additional layers later:

class Sonar(resnet18):
    pass

Throws the error:

----> 1 class Sonar(resnet18):
      2     pass

/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py in __init__(self, block, layers, num_classes, zero_init_residual, groups, width_per_group, replace_stride_with_dilation, norm_layer)
    142         self.relu = nn.ReLU(inplace=True)
    143         self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
--> 144         self.layer1 = self._make_layer(block, 64, layers[0])
    145         self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
    146                                        dilate=replace_stride_with_dilation[0])

/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py in _make_layer(self, block, planes, blocks, stride, dilate)
    176             self.dilation *= stride
    177             stride = 1
--> 178         if stride != 1 or self.inplanes != planes * block.expansion:
    179             downsample = nn.Sequential(
    180                 conv1x1(self.inplanes, planes * block.expansion, stride),

AttributeError: 'str' object has no attribute 'expansion'

Trying again with AlexNet:

alexnet = torchvision.models.AlexNet()

class Sonar(alexnet):
    pass

Throws the error:

      1 alexnet = torchvision.models.AlexNet()
----> 2 class Sonar(alexnet):
      3     pass

TypeError: __init__() takes from 1 to 2 positional arguments but 4 were given
The following should work well:

import torch
import torchvision

class Sonar(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.ins = torchvision.models.resnet18(pretrained=True)
        self.fc1 = torch.nn.Linear(1000, 1)  # adding layers

    def forward(self, x):
        out = self.ins(x)
        out = self.fc1(out)
        return out

def run():
    return Sonar()

net = run()
print(net(torch.ones(1, 3, 224, 224)))  # testing
https://stackoverflow.com/questions/60438644/
Torchscript incompatible with torch.cat for tensor lists
torch.cat throws an error for tensor lists when used within TorchScript. Here is a minimal reproducible example:

import torch
import torch.nn as nn

"""
Smallest working bug for torch.cat torchscript
"""

class Model(nn.Module):
    """dummy model for showing error"""

    def __init__(self):
        super(Model, self).__init__()
        pass

    def forward(self):
        a = torch.rand([6, 1, 12])
        b = torch.rand([6, 1, 12])
        out = torch.cat([a, b], axis=2)
        return out

if __name__ == '__main__':
    model = Model()
    print(model())  # works
    torch.jit.script(model)  # throws error

The expected result would be a TorchScript output for torch.cat. Here is the error message provided:

File "/home/anil/.conda/envs/rnn/lib/python3.7/site-packages/torch/jit/__init__.py", line 1423, in _create_methods_from_stubs
    self._c._create_methods(self, defs, rcbs, defaults)
RuntimeError:
Arguments for call are not valid.
The following operator variants are available:

  aten::cat(Tensor[] tensors, int dim=0) -> (Tensor):
  Keyword argument axis unknown.

  aten::cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!)):
  Argument out not provided.

The original call is:
at smallest_working_bug_torch_cat_torchscript.py:19:14
    def forward(self):
        a = torch.rand([6, 1, 12])
        b = torch.rand([6, 1, 12])
        out = torch.cat([a, b], axis=2)
              ~~~~~~~~~ <--- HERE
        return out

Kindly let me know of a fix or a workaround for this problem. Thanks!
Changing axis to dim fixes the error. The original solution was posted here.
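Concretely, only the keyword changes in the forward from the question (eager mode tolerates the NumPy-style axis alias, which is why model() runs fine, but the TorchScript operator schema only defines dim):

out = torch.cat([a, b], dim=2)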
https://stackoverflow.com/questions/60438983/
Beginner PyTorch - RuntimeError: shape '[16, 400]' is invalid for input of size 9600
I'm trying to build a CNN but I get this error:

---> 52         x = x.view(x.size(0), 5 * 5 * 16)
RuntimeError: shape '[16, 400]' is invalid for input of size 9600

It's not clear to me what the inputs of the 'x.view' line should be. Also, I don't really understand how many times I should have this 'x.view' function in my code. Is it only once, after the 3 convolutional layers and 2 linear layers? Or is it 5 times, one after every layer?

Here's my code:

CNN

import torch.nn.functional as F

# Convolutional neural network
class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()

        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=24, kernel_size=4)
        self.conv3 = nn.Conv2d(in_channels=24, out_channels=32, kernel_size=4)

        self.dropout = nn.Dropout2d(p=0.3)
        self.pool = nn.MaxPool2d(2)

        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(512, 10)
        self.final = nn.Softmax(dim=1)

    def forward(self, x):
        print('shape 0 ' + str(x.shape))

        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = self.dropout(x)
        print('shape 1 ' + str(x.shape))

        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = self.dropout(x)
        print('shape 2 ' + str(x.shape))

        # x = F.max_pool2d(F.relu(self.conv3(x)), 2)
        # x = self.dropout(x)

        x = F.interpolate(x, size=(5, 5))
        x = x.view(x.size(0), 5 * 5 * 16)
        x = self.fc1(x)

        return x

net = ConvNet()

Can someone help me understand the problem?

The output of 'x.shape' is:

shape 0 torch.Size([16, 3, 256, 256])
shape 1 torch.Size([16, 16, 127, 127])
shape 2 torch.Size([16, 24, 62, 62])

Thanks
This means that the product of the channel and spatial dimensions is not 5 * 5 * 16 = 400: after the second conv block the tensor has 24 channels, so after interpolating to 5 x 5 each sample flattens to 24 * 5 * 5 = 600 elements (and 16 * 600 = 9600 is exactly the "input of size 9600" in the error). To flatten the tensor, replace x = x.view(x.size(0), 5 * 5 * 16) with:

x = x.view(x.size(0), -1)

And replace self.fc1 = nn.Linear(16 * 5 * 5, 120) with:

self.fc1 = nn.Linear(600, 120)
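Putting both fixes into the snippet from the question (everything else unchanged):

self.fc1 = nn.Linear(600, 120)   # 24 channels * 5 * 5 after the interpolate
...
x = F.interpolate(x, size=(5, 5))
x = x.view(x.size(0), -1)        # flattens to (16, 600)
x = self.fc1(x)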
https://stackoverflow.com/questions/60439570/
RuntimeError: expected scalar type Long but found Float
I can't get the dtypes to match: either the loss wants long, or the model wants float if I change my tensors to long. The shapes of the tensors are 42000, 1, 28, 28 and 42000. I'm not sure where I can change what dtypes are required for the model or loss. I'm not sure if a DataLoader is required; using Variable didn't work either.

dataloaders_train = torch.utils.data.DataLoader(Xt_train, batch_size=64)
dataloaders_test = torch.utils.data.DataLoader(Yt_train, batch_size=64)

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(42000, 256)
        self.output = nn.Linear(256, 10)

        self.sigmoid = nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.hidden(x)
        x = self.sigmoid(x)
        x = self.output(x)
        x = self.softmax(x)
        return x

model = Network()

input_size = 784
hidden_sizes = [28, 64]
output_size = 10

model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)

epochs = 5
for e in range(epochs):
    running_loss = 0
    for images, labels in zip(dataloaders_train, dataloaders_test):
        images = images.view(images.shape[0], -1)
        # images, labels = Variable(images), Variable(labels)
        print(images.dtype)
        print(labels.dtype)

        optimizer.zero_grad()

        output = model(images)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        print(f"Training loss: {running_loss}")

Which gives

RuntimeError                              Traceback (most recent call last)
<ipython-input-128-68109c274f8f> in <module>
     11
     12         output = model(images)
---> 13         loss = criterion(output, labels)
     14         loss.backward()
     15         optimizer.step()

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    202
    203     def forward(self, input, target):
--> 204         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
    205
    206

/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: expected scalar type Long but found Float
LongTensor is synonymous with integer. PyTorch won't accept a FloatTensor as categorical target, so it's telling you to cast your tensor to LongTensor. This is how you should change your target dtype:

Yt_train = Yt_train.type(torch.LongTensor)

This is very well documented on the PyTorch website, you definitely won't regret spending a minute or two reading this page. PyTorch essentially defines nine CPU tensor types and nine GPU tensor types:

╔══════════════════════════╦═══════════════════════════════╦════════════════════╦═════════════════════════╗
║ Data type                ║ dtype                         ║ CPU tensor         ║ GPU tensor              ║
╠══════════════════════════╬═══════════════════════════════╬════════════════════╬═════════════════════════╣
║ 32-bit floating point    ║ torch.float32 or torch.float  ║ torch.FloatTensor  ║ torch.cuda.FloatTensor  ║
║ 64-bit floating point    ║ torch.float64 or torch.double ║ torch.DoubleTensor ║ torch.cuda.DoubleTensor ║
║ 16-bit floating point    ║ torch.float16 or torch.half   ║ torch.HalfTensor   ║ torch.cuda.HalfTensor   ║
║ 8-bit integer (unsigned) ║ torch.uint8                   ║ torch.ByteTensor   ║ torch.cuda.ByteTensor   ║
║ 8-bit integer (signed)   ║ torch.int8                    ║ torch.CharTensor   ║ torch.cuda.CharTensor   ║
║ 16-bit integer (signed)  ║ torch.int16 or torch.short    ║ torch.ShortTensor  ║ torch.cuda.ShortTensor  ║
║ 32-bit integer (signed)  ║ torch.int32 or torch.int      ║ torch.IntTensor    ║ torch.cuda.IntTensor    ║
║ 64-bit integer (signed)  ║ torch.int64 or torch.long     ║ torch.LongTensor   ║ torch.cuda.LongTensor   ║
║ Boolean                  ║ torch.bool                    ║ torch.BoolTensor   ║ torch.cuda.BoolTensor   ║
╚══════════════════════════╩═══════════════════════════════╩════════════════════╩═════════════════════════╝
https://stackoverflow.com/questions/60440292/
Beginner PyTorch : RuntimeError: size mismatch, m1: [16 x 2304000], m2: [600 x 120]
I'm a beginner with PyTorch and building NNs in general, and I'm kinda stuck. I have this CNN architecture:

class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()

        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=24, kernel_size=4)
        self.conv3 = nn.Conv2d(in_channels=24, out_channels=32, kernel_size=4)

        self.dropout = nn.Dropout2d(p=0.3)
        self.pool = nn.MaxPool2d(2)

        self.fc1 = nn.Linear(600, 120)
        self.fc2 = nn.Linear(512, 10)
        self.final = nn.Softmax(dim=1)

    def forward(self, x):
        # conv 3 layers
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # output of conv layers
        x = self.dropout(x)

        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # output of conv layers
        x = self.dropout(x)

        x = F.max_pool2d(F.relu(self.conv3(x)), 2)  # output of conv layers
        x = self.dropout(x)

        # linear layer
        x = F.interpolate(x, size=(600, 120))
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        return x

But when I try to train with my images, it doesn't work and I have this error:

RuntimeError: size mismatch, m1: [16 x 2304000], m2: [600 x 120]

I would like to add a second linear layer (self.fc2) as well as a final SoftMax layer (self.final), but since I'm stuck at the first linear layer I cannot make any progress.
The input dimension of self.fc1 needs to match the feature (second) dimension of your flattened tensor. So instead of doing self.fc1 = nn.Linear(600, 120), you can replace this with self.fc1 = nn.Linear(2304000, 120). Keep in mind that because you are using fully-connected layers, the model cannot be input size invariant (unlike Fully-Convolutional Networks). If you change the size of the channel or spatial dimensions before x = x.view(x.size(0), -1) (like you did moving from the last question to this one), the input dimension of self.fc1 will have to change accordingly.
https://stackoverflow.com/questions/60441677/
Why is the true positive - false negative distribution always the same
I have a neural network that I use for binary classification. I change the size of the training data and predict on the test set. By looking at the results, the difference between tp and fn is always the same, and the difference between tn and fp is always the same. For example, in iteration #2, tp#2 - tp#1 = -91 and fn#2 - fn#1 = +91. Also, fp#2 - fp#1 = -46 and tn#2 - tn#1 = +46. As another example, tp#3 - tp#2 = -35 and fn#3 - fn#2 = +35.

Iteration #1: tn=119, fp=173, fn=110, tp=407
Iteration #2: tn=165, fp=127, fn=201, tp=316
Iteration #3: tn=176, fp=116, fn=236, tp=281
Iteration #4: tn=157, fp=135, fn=207, tp=310
Iteration #5: tn=155, fp=137, fn=214, tp=303

I have tried various architectures of neural nets, but I always get the same numbers. Do you have an idea what is wrong? The following is a very simple network that I use:

class AllCnns(nn.Module):
    def __init__(self, vocab_size, embedding_size):
        torch.manual_seed(0)
        super(AllCnns, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embedding_size)
        self.conv1 = nn.Conv1d(embedding_size, 64, 3)
        self.drop1 = nn.Dropout(0.3)
        self.max_pool1 = nn.MaxPool1d(2)
        self.flat1 = nn.Flatten()
        self.fc1 = nn.Linear(64 * 80, 100)
        self.fc2 = nn.Linear(100, 1)

    def forward(self, sentence):
        embedding = self.word_embeddings(sentence).permute(0, 2, 1)
        conv1 = F.relu(self.conv1(embedding))
        drop1 = self.drop1(conv1)
        max_pool1 = self.max_pool1(drop1)
        flat1 = self.flat1(max_pool1)
        fc1 = F.relu(self.fc1(flat1))
        fc2 = torch.sigmoid(self.fc2(fc1))
        return fc2
I think it should be the same. The sum of tn (true negatives) and fp (false positives) adds up to the total number of 'real' negative values, and the same goes for the other pair. So as long as you are using the same data, these equations always hold:

tn + fp = 292 (total negative values)
fn + tp = 517 (total positive values)

So tn#1 + fp#1 = tn#2 + fp#2, hence tn#1 - tn#2 = fp#2 - fp#1.
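You can verify this directly with the numbers above: 119 + 173 = 165 + 127 = 176 + 116 = 157 + 135 = 155 + 137 = 292, and 110 + 407 = 201 + 316 = 236 + 281 = 207 + 310 = 214 + 303 = 517. Any decrease in tp must therefore reappear as an equal increase in fn, and likewise for tn and fp.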
https://stackoverflow.com/questions/60445820/
Pytorch copy a neuron in a layer
I am using PyTorch 0.3.0. I'm trying to selectively copy a neuron and its weights within the same layer, then replace the original neuron with another set of weights. Here's my attempt at that:

reshaped_data2 = data2.unsqueeze(0)
new_layer_data = torch.cat([new_layer.data, reshaped_data2], dim=0)
new_layer_data[i] = data1
new_layer.data.copy_(new_layer_data)

First I unsqueezed data2 to make it a 1*X tensor instead of 0*X. Then I concatenate my layer's tensor with the reshaped data2 along dimension 0. I then replace the original data2 located at index i with data1. Finally, I copy all of that into my layer.

The error I get is:

RuntimeError: inconsistent tensor size, expected tensor [10 x 128] and src [11 x 128] to have the same number of elements, but got 1280 and 1408 elements respectively at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorCopy.c:86

If I do a simple assignment instead of copy I get:

RuntimeError: The expanded size of the tensor (11) must match the existing size (10) at non-singleton dimension 1. at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:309

I understand the error, but what is the right way to go about this?
The solution here is to create a new model with the correct size and pass in weights as default values. No dynamic expansion solution was found.
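A hedged sketch of that approach for a linear layer growing from 10 to 11 neurons (the shapes follow the [10 x 128] / [11 x 128] sizes in the error messages; old_layer, data1, data2 and i stand in for the names in the question, and the bias would be handled the same way):

import torch

old_layer = torch.nn.Linear(128, 10)
new_layer = torch.nn.Linear(128, 11)   # new layer with the correct size

# pass the old weights in as default values, then edit the copies
new_layer.weight.data[:10] = old_layer.weight.data
new_layer.weight.data[10] = data2      # append the duplicated neuron's weights
new_layer.weight.data[i] = data1       # replace the original neuron's weights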
https://stackoverflow.com/questions/60458029/
Using past and attention_mask at the same time for gpt2
I am processing a batch of sentences with different lengths, so I am planning to take advantage of the padding + attention_mask functionality in GPT2 for that.

At the same time, for each sentence I need to add a suffix phrase and run N different inferences. For instance, given the sentence "I like to drink coke", I may need to run two different inferences: "I like to drink coke. Coke is good" and "I like to drink coke. Drink is good". Thus, I am trying to improve the inference time for this by using the "past" functionality: https://huggingface.co/transformers/quickstart.html#using-the-past so I just process the original sentence (e.g. "I like to drink coke") once, and then I somehow expand the result to be able to be used with two other sentences: "Coke is good" and "Drink is good".

Below you will find simple code that is trying to represent how I was trying to do this. For simplicity I'm just adding a single suffix phrase per sentence (...but I still hope my original idea is possible though):

from transformers.tokenization_gpt2 import GPT2Tokenizer
from transformers.modeling_gpt2 import GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Complete phrases are: "I like to drink soda without sugar" and "Go watch TV alone, I am not going"
docs = ["I like to drink soda", "Go watch TV"]
docs_tensors = tokenizer.batch_encode_plus(
    [d for d in docs], pad_to_max_length=True, return_tensors='pt')

docs_next = ["without sugar", "alone, I am not going"]
docs_next_tensors = tokenizer.batch_encode_plus(
    [d for d in docs_next], pad_to_max_length=True, return_tensors='pt')

# predicting the first part of each phrase
_, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask'])

# predicting the rest of the phrase
logits, _ = model(docs_next_tensors['input_ids'],
                  attention_mask=docs_next_tensors['attention_mask'], past=past)
logits = logits[:, -1]
_, top_indices_results = logits.topk(30)

The error I am getting is the following:

Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1434, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/Users/damiox/Workspace/xxLtd/yy/stress-test-withpast2.py", line 26, in <module>
    logits, _ = model(docs_next_tensors['input_ids'], attention_mask=docs_next_tensors['attention_mask'], past=past)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 593, in forward
    inputs_embeds=inputs_embeds,
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 476, in forward
    hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 226, in forward
    self.ln_1(x), layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 189, in forward
    attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
  File "/Users/damiox/.local/share/virtualenvs/yy-uMxmjV2h/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 150, in _attn
    w = w + attention_mask
RuntimeError: The size of tensor a (11) must match the size of tensor b (6) at non-singleton dimension 3

Process finished with exit code 1

Initially I thought this was related to https://github.com/huggingface/transformers/issues/3031 - so I re-built latest master to try the fix, but I still experience the issue.
In order to make your current code snippet work, you will have to combine the previous and new attention mask as follows:

from transformers.tokenization_gpt2 import GPT2Tokenizer
from transformers.modeling_gpt2 import GPT2LMHeadModel
import torch

tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Complete phrases are: "I like to drink soda without sugar" and "Go watch TV alone, I am not going"
docs = ["I like to drink soda", "Go watch TV"]
docs_tensors = tokenizer.batch_encode_plus(
    [d for d in docs], pad_to_max_length=True, return_tensors='pt')

docs_next = ["without sugar", "alone, I am not going"]
docs_next_tensors = tokenizer.batch_encode_plus(
    [d for d in docs_next], pad_to_max_length=True, return_tensors='pt')

# predicting the first part of each phrase
_, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask'])

# predicting the rest of the phrase
attn_mask = torch.cat([docs_tensors['attention_mask'], docs_next_tensors['attention_mask']], dim=-1)
logits, _ = model(docs_next_tensors['input_ids'], attention_mask=attn_mask, past=past)
logits = logits[:, -1]
_, top_indices_results = logits.topk(30)

For the case that you want to test two possible suffixes for a sentence start, you probably will have to clone your past variable as many times as you have suffixes. That means that the batch size of your prefix input_ids has to match the batch size of your suffix input_ids in order to make it work.

Also you have to change the positional encodings input of your suffix input_ids (GPT2 uses absolute positional encodings) if one of your prefix input_ids is padded (this is not shown in the code above - please take a look at https://github.com/huggingface/transformers/issues/3021 to see how it's done).
https://stackoverflow.com/questions/60459292/
How to convert an list of image into Pytorch Tensor
I have a list called wordImages. It contains images in np.array format with different widths and heights. How do I convert this into a tensor and use it instead of my_dataset in the code below? Currently I am using this, but I need to save/read images:

demo_data = RawDataset(root="output_craft/", opt=opt)
demo_loader = torch.utils.data.DataLoader(
    demo_data, batch_size=opt.batch_size,
    shuffle=False,
    num_workers=int(opt.workers),
    collate_fn=AlignCollate_demo, pin_memory=True)
You can use transforms from the torchvision library to do so. You can pass whatever transformation(s) you declare as an argument into whatever class you use to create my_dataset, like so:

from torch.utils import data
from torchvision import transforms

class MyDataset(data.Dataset):
    def __init__(self, transform=transforms.ToTensor()):
        self.transform = transform
        ...

    def __getitem__(self, idx):
        ...
        img_tensor = self.transform(img)
        return (img_tensor, label)
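A hedged sketch of what that could look like for the wordImages list from the question (labels is a hypothetical list of per-image labels; transforms.ToTensor() accepts H x W x C numpy arrays, so each image becomes its own tensor regardless of size):

from torch.utils import data
from torchvision import transforms

class WordImageDataset(data.Dataset):
    def __init__(self, images, labels, transform=transforms.ToTensor()):
        self.images = images      # list of np.array images with varying sizes
        self.labels = labels      # hypothetical per-image labels
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_tensor = self.transform(self.images[idx])  # np.array -> C x H x W float tensor
        return (img_tensor, self.labels[idx])

Note that because the images have different sizes, batching with batch_size > 1 still needs a Resize transform or a custom collate_fn.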
https://stackoverflow.com/questions/60463381/
PyTorch: Trying to backward through the graph a second time, but the buffers have already been freed
My model is:

class BaselineModel(nn.Module):
    def __init__(self, feature_dim=5, hidden_size=5, num_layers=2, batch_size=32):
        super(BaselineModel, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size=feature_dim,
                            hidden_size=hidden_size, num_layers=num_layers)

    def forward(self, x, hidden):
        lstm_out, hidden = self.lstm(x, hidden)
        return lstm_out, hidden

    def init_hidden(self, batch_size):
        hidden = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        cell = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        return (hidden, cell)

My training loop looks like:

for epoch in range(250):
    hidden = model.init_hidden(13)
    # hidden = (torch.zeros(2, 13, 5),
    #           torch.zeros(2, 13, 5))
    # model.hidden = hidden

    for i, data in enumerate(train_loader):
        inputs = data[0]
        outputs = data[1]
        print('inputs', inputs.size())
        # print('outputs', outputs.size())

        optimizer.zero_grad()
        model.zero_grad()

        # print('inputs', inputs)
        pred, hidden = model(inputs, hidden)

        loss = loss_fn(pred[0], outputs)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()

I appear to get through the first epoch, then see this error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

I've seen postings about it, but in my case, I am reinitializing my hidden for each batch.
model.init_hidden(13) must be in the batch loop, rather than the epoch loop. With it in the epoch loop, the hidden state returned by one batch (together with its computation graph) is fed into the next batch, so the second backward() tries to backpropagate through a graph whose buffers were already freed.
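In terms of the training loop from the question, that is (everything else unchanged):

for epoch in range(250):
    for i, data in enumerate(train_loader):
        hidden = model.init_hidden(13)  # fresh hidden state (and graph) per batch
        ...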
https://stackoverflow.com/questions/60467953/
Pytorch is Throwing out of bounds Error? Expects a scalar
This is the code below. I'm not sure what the error being thrown is; please can someone explain what is wrong and the fix? I'm new to PyTorch and decided to try to learn it using the house prices data set, but ran into this error. It is apparently something to do with a scalar value, but I'm not sure whether the problem is that the y value given is a scalar, not a vector.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader
import torch.optim as optim
import pandas as pd
import numpy as np

df = pd.read_csv('housepricedata.csv')
dataset = df.values

X = dataset[:, 0:10]
y = dataset[:, 10]

from sklearn import preprocessing
min_max = preprocessing.MinMaxScaler()
x_scale = min_max.fit_transform(X)
y_scale = min_max.fit_transform(y.reshape(-1, 1))

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_scale, y, test_size=0.3)

X_train = torch.FloatTensor(X_train)
X_test = torch.FloatTensor(X_test)
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)

trainD = TensorDataset(X_train, y_train)
testD = TensorDataset(X_test, y_test)

class Model(nn.Module):
    def __init__(self, inp1=10, out=1):
        super().__init__()
        self.Dense1 = nn.Linear(inp1, 32)
        self.Dense2 = nn.Linear(32, 32)
        self.out = nn.Linear(32, out)

    def forward(self, x):
        x = F.relu(self.Dense1(x))
        x = F.relu(self.Dense2(x))
        x = self.out(x)
        return x

model = Model()

trainloader = DataLoader(trainD, batch_size=64, shuffle=False)
testloader = DataLoader(testD, batch_size=64, shuffle=False)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

epochs1 = 500
losses = []
for i in range(epochs1):
    for data in trainloader:
        X, y = data
        optimizer.zero_grad()
        output = model(X)
        loss = criterion(output, y)
        losses.append(loss)
        loss.backward()
        optimizer.step()

Error thrown:

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>
      5     i =+ 1
      6     y_pred = model.forward(X_train)
----> 7     loss = criterion(y_pred, y_train)
      8     losses.append(loss)
      9

~\Anaconda3\envs\ml1\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\Anaconda3\envs\ml1\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917
    918

~\Anaconda3\envs\ml1\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2022
   2023

~\Anaconda3\envs\ml1\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

IndexError: Target 1 is out of bounds.
You are trying to predict a discrete class with a regression network. When trying to predict discrete classes, one usually outputs a vector of class probabilities - the probability for each class given the input. On the other hand, there are regression tasks in which one wants to compute a continuous function of the given input. In the regression case, the network usually outputs only one scalar value per input.

In your code you are mixing the two: on the one hand your network has a single scalar output (self.out of your model has out_features=1). On the other hand, you are using nn.CrossEntropyLoss(), which is a loss for classification that expects a vector of class probabilities.
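A minimal sketch of the classification route, reusing the Model class from the question (assuming the targets are the two classes 0 and 1, so the head outputs one score per class for nn.CrossEntropyLoss to index into):

model = Model(inp1=10, out=2)  # two output scores, one per class, instead of one scalar

The alternative is to keep out=1 and switch to a binary/regression-style criterion such as nn.BCEWithLogitsLoss with float targets.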
https://stackoverflow.com/questions/60469220/
Confusing results of transfer learning accuracy
I have just started an image classification project following the tutorial from the documentation on the PyTorch website (this). In the tutorial, there is a piece of code like this:

model_ft = models.resnet50(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 20)

I understand why the fc layer is supposed to be changed. Since my project needs to classify 20 classes, I just changed the parameter from 2 to 20. But I only get an accuracy of around 60%. When I don't change the fc layer, like this:

model_ft = se_resnet50(pretrained=True)

it turns out that the accuracy reaches 93.75%, which is much better than the former result. I just couldn't figure out why I get worse classification results when I modify the fc layer. Shouldn't it be modified?
It's probably more difficult for the network to find the matching class among 20 classes than between two. For example, if you give it a dog image and it needs to classify it among cat, dog and horse, it could output 60% cat, 30% dog, 10% horse and be wrong, while if it only needs to classify between dog and horse it might give 75% dog, 25% horse and be right.

The fine-tuning will also take longer, so you could get better results if you train it longer with the 20 classes, in case you stopped after a fixed number of epochs rather than after convergence.
https://stackoverflow.com/questions/60476501/
PyTorch LSTM has nan for MSELoss
My model is:

class BaselineModel(nn.Module):
    def __init__(self, feature_dim=5, hidden_size=5, num_layers=2, batch_size=32):
        super(BaselineModel, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size=feature_dim,
                            hidden_size=hidden_size, num_layers=num_layers)

    def forward(self, x, hidden):
        lstm_out, hidden = self.lstm(x, hidden)
        return lstm_out, hidden

    def init_hidden(self, batch_size):
        hidden = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        cell = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        return (hidden, cell)

Training looks like:

train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=BATCH_SIZE, shuffle=True, **params)

model = BaselineModel(batch_size=BATCH_SIZE)
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001)
loss_fn = torch.nn.MSELoss(reduction='sum')

for epoch in range(250):
    # hidden = (torch.zeros(2, 13, 5),
    #           torch.zeros(2, 13, 5))
    # model.hidden = hidden
    for i, data in enumerate(train_loader):
        hidden = model.init_hidden(13)
        inputs = data[0]
        outputs = data[1]
        print('inputs', inputs.size())
        # print('outputs', outputs.size())
        # optimizer.zero_grad()
        model.zero_grad()
        # print('inputs', inputs)
        pred, hidden = model(inputs, hidden)
        loss = loss_fn(pred, outputs)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        print('Epoch: ', epoch, '\ti: ', i, '\tLoss: ', loss)

I have gradient clipping set already, which seems to be the recommended solution. But after even the first step, I get:

Epoch: 0   i: 0   Loss: tensor(nan, grad_fn=<MseLossBackward>)
I suspect your issue has to do with your outputs / data[1] (it would help if you showed examples of your train_set). Running the following piece of code gives no nan, but I forced the shape of the output by hand before calling loss_fn(pred, outputs):

class BaselineModel(nn.Module):
    def __init__(self, feature_dim=5, hidden_size=5, num_layers=2, batch_size=32):
        super(BaselineModel, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size=feature_dim,
                            hidden_size=hidden_size, num_layers=num_layers)

    def forward(self, x, hidden):
        lstm_out, hidden = self.lstm(x, hidden)
        return lstm_out, hidden

    def init_hidden(self, batch_size):
        hidden = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        cell = Variable(next(self.parameters()).data.new(
            self.num_layers, batch_size, self.hidden_size))
        return (hidden, cell)

model = BaselineModel(batch_size=32)
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.0001)
loss_fn = torch.nn.MSELoss(reduction='sum')

hidden = model.init_hidden(10)
model.zero_grad()
pred, hidden = model(torch.randn(2, 10, 5), hidden)
pred.size()  # torch.Size([2, 10, 5])

outputs = torch.zeros(2, 10, 5)

loss = loss_fn(pred, outputs)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
print(loss)

Please note that a common reason for nan values is numerical stability of your learning phase, but usually you see values for the first steps before the divergence happens, which is apparently not the case here.
https://stackoverflow.com/questions/60476943/
How can I implement these bash commands in Google Colab
I'm a beginner who is working on Neural Machine Translation, the transformer model. I want to implement fairseq Scaling Neural Machine Translation using Google Colab. I guess the commands shown in the README file are written in bash. I know that bash commands can be run in Google Colab by prefixing the command with !. The following commands are from the GitHub repository mentioned above:

TEXT=wmt16_en_de_bpe32k
mkdir -p $TEXT
tar -xzvf wmt16_en_de.tar.gz -C $TEXT

These commands throw errors when I add the ! as follows.
Individual bash commands marked by ! are executed in a sub-shell, so variables aren't preserved between lines. If you want to execute a multi-line bash script, use the %%bash cell magic:

%%bash
TEXT=wmt16_en_de_bpe32k
mkdir -p $TEXT
tar -xzvf wmt16_en_de.tar.gz -C $TEXT
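If you'd rather keep using !, chaining the commands also works, since the whole line then runs in a single sub-shell:

!TEXT=wmt16_en_de_bpe32k && mkdir -p $TEXT && tar -xzvf wmt16_en_de.tar.gz -C $TEXT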
https://stackoverflow.com/questions/60477299/
How to avoid "RuntimeError: error in LoadLibraryA" for torch.cat?
I am running a PyTorch solution for wireframe detection. I am receiving a "RuntimeError: error in LoadLibraryA" when the solution executes "forward return torch.cat(outputs, 1)".

I am not able to provide a minimal reproducible example. Therefore the question: Is it possible to produce just this type of error in a Microsoft library through Python programming errors, or is this most likely a version problem (of Python, PyTorch, CUDA, ...) or a bug in my installation?

I am using Windows 10, Python 3.8.1 and PyTorch 1.4.0.

File "main.py", line 144, in <module>
    main()
File "main.py", line 137, in main
    trainer.train(train_loader, val_loader=None)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\trainer\balance_junction_trainer.py", line 75, in train
    self.step(epoch, train_loader)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\trainer\balance_junction_trainer.py", line 176, in step
    ) = self.model(input_var, junc_conf, junc_res, bin_conf, bin_res)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\inception.py", line 41, in forward
    base_feat = self.base_net(im_data)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\networks\inception_v2.py", line 63, in forward
    x = self.Mixed_3b(x)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\networks\inception_v2.py", line 97, in forward
    return torch.cat(outputs, 1)
RuntimeError: error in LoadLibraryA
Try this workaround: run the following code after import torch (should be fixed in 1.5):

import ctypes
ctypes.cdll.LoadLibrary('caffe2_nvrtc.dll')
https://stackoverflow.com/questions/60478862/
cosine similarity for attention decoder in nmt
I am implementing a neural machine translation model, and for the decoder part (with attention mechanism) I would like to calculate the cosine similarity for finding the scores. Here is the formula:

score(a, b) = <a, b> / (||a|| ||b||)

In my case:

a = htilde_t (N, H)
b = h (S, N, H)

The output should be (S, N). My confusion is about their dimensions and I don't know how to solve that in PyTorch.
See here: https://pytorch.org/docs/master/nn.html?highlight=cosine#torch.nn.CosineSimilarity

cos = nn.CosineSimilarity(dim=2, eps=1e-6)
output = cos(a.unsqueeze(0), b)

You need to unsqueeze to add a ghost dimension so that both inputs have the same number of dims:

Input1: (∗1, D, ∗2) where D is at position dim
Input2: (∗1, D, ∗2), same shape as Input1
Output: (∗1, ∗2)
https://stackoverflow.com/questions/60481045/
Pytorch: why print(model) does not show the activation functions?
I need to extract the weights, biases and at least the type of activation function from a trained NN in PyTorch. I know that to extract the weights and biases the command is:

model.parameters()

but I can't figure out how to also extract the activation functions used on the layers. Here is my network:

class NetWithODE(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output, sampling_interval, scaler_features):
        super(NetWithODE, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)  # hidden layer
        self.predict = torch.nn.Linear(n_hidden, n_output)  # output layer
        self.sampling_interval = sampling_interval
        self.device = torch.device("cpu")
        self.dtype = torch.float
        self.scaler_features = scaler_features

    def forward(self, x):
        x0 = x.clone().requires_grad_(True)
        # activation function for hidden layer
        x = F.relu(self.hidden(x))
        # linear output, here r should be the output
        r = self.predict(x)
        # Now the r enters the integrator
        x = self.integrate(r, x0)
        return x

    def integrate(self, r, x0):
        # RK4 steps per interval
        M = 4
        DT = self.sampling_interval / M
        X = x0
        for j in range(M):
            k1 = self.ode(X, r)
            k2 = self.ode(X + DT / 2 * k1, r)
            k3 = self.ode(X + DT / 2 * k2, r)
            k4 = self.ode(X + DT * k3, r)
            X = X + DT / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        return X

    def ode(self, x0, r):
        qF = r[0, 0]
        qA = r[0, 1]
        qP = r[0, 2]
        mu = r[0, 3]
        FRU = x0[0, 0]
        AMC = x0[0, 1]
        PHB = x0[0, 2]
        TBM = x0[0, 3]
        fFRU = qF * TBM
        fAMC = qA * TBM
        fPHB = qP - mu * PHB
        fTBM = mu * TBM
        return torch.stack((fFRU, fAMC, fPHB, fTBM), 0)

If I run the command print(model) I get:

NetWithODE(
  (hidden): Linear(in_features=4, out_features=10, bias=True)
  (predict): Linear(in_features=10, out_features=4, bias=True)
)

But where can I get the activation function (in this case ReLU)?

I have PyTorch 1.4.
There are two ways of adding operations to the network graph: the low-level functional way and the more advanced object way. You need the latter to make your structure observable. In the first case you are just calling (not exactly, but...) a function without storing info about it. So, instead of

def forward(self, x):
    ...
    x = F.relu(self.hidden(x))

it must be something like

def __init__(...):
    ...
    self.myFirstRelu = torch.nn.ReLU()

def forward(self, x):
    ...
    x1 = self.hidden(x)
    x2 = self.myFirstRelu(x1)

Anyway, a mix of these two ways is generally a bad idea, although even torchvision models have such inconsistencies: models.inception_v3 does not register the poolings, for example >:-( (EDIT: it is fixed in June 2020, thanks, mitmul!).

UPD: - Thanks, that works, now if I print I see ReLU(). But this seems to only print the functions in the same order they are defined in __init__. Is there a way to get the associations between layers and activation functions? For example I want to know which activation was applied to layer 1, which to layer 2, and so on...

There is no uniform way, but here are some tricks:

object way:
- just init them in order
- use torch.nn.Sequential
- hook callbacks on nodes, like this:

def hook(m, i, o):
    print(m._get_name())

for mo in model.modules():
    mo.register_forward_hook(hook)

functional and object way:
- make use of the internal model graph, built on the forward pass, as torchviz does (https://github.com/szagoruyko/pytorchviz/blob/master/torchviz/dot.py), or just use the plot generated by said torchviz.
https://stackoverflow.com/questions/60484859/
With a PyTorch LSTM, can I have a different hidden_size than input_size?
I have:

def __init__(self, feature_dim=15, hidden_size=5, num_layers=2):
    super(BaselineModel, self).__init__()
    self.num_layers = num_layers
    self.hidden_size = hidden_size
    self.lstm = nn.LSTM(input_size=feature_dim,
                        hidden_size=hidden_size, num_layers=num_layers)

and then I get an error:

RuntimeError: The size of tensor a (5) must match the size of tensor b (15) at non-singleton dimension 2

If I set the two sizes to be the same, then the error goes away. But I'm wondering: if my input_size is some large number, say 15, and I want to reduce the number of hidden features to 5, why shouldn't that work?
It should work; the error probably came from elsewhere. This works, for example:

feature_dim = 15
hidden_size = 5
num_layers = 2
seq_len = 5
batch_size = 3

lstm = nn.LSTM(input_size=feature_dim,
               hidden_size=hidden_size, num_layers=num_layers)

t1 = torch.from_numpy(np.random.uniform(0, 1, size=(seq_len, batch_size, feature_dim))).float()
output, states = lstm.forward(t1)
hidden_state, cell_state = states

print("output: ", output.size())
print("hidden_state: ", hidden_state.size())
print("cell_state: ", cell_state.size())

and returns

output:  torch.Size([5, 3, 5])
hidden_state:  torch.Size([2, 3, 5])
cell_state:  torch.Size([2, 3, 5])

Are you using the output somewhere after the LSTM? Did you notice it has a size equal to the hidden dim, i.e. 5 on the last dim? It looks like you're using it afterwards thinking it has a size of 15 instead.
https://stackoverflow.com/questions/60491519/
Pytorch: merging two models (nn.Module)
I have a model that's quite complicated and I therefore can't just call self.fc.weight etc., so I want to iterate over the model in some way. The goal is to merge models this way:

m = alpha * n + (1 - alpha) * o

where m, n and o are instances of the same class but trained differently. So for each parameter in these models, I want to assign initial values to m based on n and o as described in the equation, and then continue the training procedure with m only.

I tried:

for p1, p2, p3 in zip(m.parameters(), n.parameters(), o.parameters()):
    p1 = alpha * p2 + (1 - alpha) * p3

But this does not assign new values within m.

for p1, p2, p3 in zip(m.parameters(), n.parameters(), o.parameters()):
    p1.fill_(alpha * p2 + (1 - alpha) * p3)

But this throws:

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

And so I resorted to a working:

m.load_state_dict({
    k: alpha * v1 + (1 - alpha) * v2
    for (k, v1), (_, v2) in zip(n.state_dict().items(), o.state_dict().items())
})

Is there a better way to do this in PyTorch? Is it possible that I get gradient errors?
If I understand you correctly, then you need to get out from under PyTorch's autograd mechanics, which you can do by simply writing

p1.data = alpha * p2.data + (1 - alpha) * p3.data

The parameter's data is not in the parameter itself, but in its data member.
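Dropped into the loop from the question, that becomes:

for p1, p2, p3 in zip(m.parameters(), n.parameters(), o.parameters()):
    p1.data = alpha * p2.data + (1 - alpha) * p3.data

Since .data bypasses autograd, nothing is recorded in the graph, so you can continue training m normally with the merged values as the starting point.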
https://stackoverflow.com/questions/60491526/
Is it possible to add a trainable filter after an autoencoder?
So I'm building a denoiser with an autoencoder. The idea is that before computing my loss (after the autoencoder), I apply an empirical Wiener filter to a texture map of the image and add it back to my autoencoder output (adding back 'lost detail'). I've coded this filter with PyTorch.

My first attempt worked by adding the filter to the end of my autoencoder's forward function. I can train this network and it backpropagates through my filter in training. However, if I print my network, the filter is not listed, and torchsummary doesn't include it when calculating parameters. This has me thinking that I am only training the autoencoder and my filter is filtering the same way every time and not learning.

Is what I'm trying to do possible? Below is my autoencoder:

class AutoEncoder(nn.Module):
    """Autoencoder simple implementation"""

    def __init__(self):
        super(AutoEncoder, self).__init__()
        # Encoder
        # conv layer
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 48, 3, padding=1),
            nn.Conv2d(48, 48, 3, padding=1),
            nn.MaxPool2d(2),
            nn.BatchNorm2d(48),
            nn.LeakyReLU(0.1)
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(48, 48, 3, padding=1),
            nn.MaxPool2d(2),
            nn.BatchNorm2d(48),
            nn.LeakyReLU(0.1)
        )
        self.block3 = nn.Sequential(
            nn.Conv2d(48, 48, 3, padding=1),
            nn.ConvTranspose2d(48, 48, 2, 2, output_padding=1),
            nn.BatchNorm2d(48),
            nn.LeakyReLU(0.1)
        )
        self.block4 = nn.Sequential(
            nn.Conv2d(96, 96, 3, padding=1),
            nn.Conv2d(96, 96, 3, padding=1),
            nn.ConvTranspose2d(96, 96, 2, 2),
            nn.BatchNorm2d(96),
            nn.LeakyReLU(0.1)
        )
        self.block5 = nn.Sequential(
            nn.Conv2d(144, 96, 3, padding=1),
            nn.Conv2d(96, 96, 3, padding=1),
            nn.ConvTranspose2d(96, 96, 2, 2),
            nn.BatchNorm2d(96),
            nn.LeakyReLU(0.1)
        )
        self.block6 = nn.Sequential(
            nn.Conv2d(97, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.Conv2d(64, 32, 3, padding=1),
            nn.BatchNorm2d(32),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.LeakyReLU(0.1)
        )
        # self.blockNorm = nn.Sequential(
        #     nn.BatchNorm2d(1),
        #     nn.LeakyReLU(0.1)
        # )

    def forward(self, x):
        # torch.autograd.set_detect_anomaly(True)
        # print("input: ", x.shape)
        pool1 = self.block1(x)
        # print("pool1: ", pool1.shape)
        pool2 = self.block2(pool1)
        # print("pool2: ", pool2.shape)
        pool3 = self.block2(pool2)
        # print("pool3: ", pool3.shape)
        pool4 = self.block2(pool3)
        # print("pool4: ", pool4.shape)
        pool5 = self.block2(pool4)
        # print("pool5: ", pool5.shape)

        upsample5 = self.block3(pool5)
        # print("upsample5: ", upsample5.shape)
        concat5 = torch.cat((upsample5, pool4), 1)
        # print("concat5: ", concat5.shape)
        upsample4 = self.block4(concat5)
        # print("upsample4: ", upsample4.shape)
        concat4 = torch.cat((upsample4, pool3), 1)
        # print("concat4: ", concat4.shape)
        upsample3 = self.block5(concat4)
        # print("upsample3: ", upsample3.shape)
        concat3 = torch.cat((upsample3, pool2), 1)
        # print("concat3: ", concat3.shape)
        upsample2 = self.block5(concat3)
        # print("upsample2: ", upsample2.shape)
        concat2 = torch.cat((upsample2, pool1), 1)
        # print("concat2: ", concat2.shape)
        upsample1 = self.block5(concat2)
        # print("upsample1: ", upsample1.shape)
        concat1 = torch.cat((upsample1, x), 1)
        # print("concat1: ", concat1.shape)
        output = self.block6(concat1)

        t_map = x - output
        for i in range(4):
            tensor = t_map[i, :, :, :]  # Take each item in batch separately. Could account for this in Wiener instead
            tensor = torch.squeeze(tensor)  # Squeeze for Wiener input format
            tensor = wiener_3d(tensor, 0.05, 10)  # Apply Wiener with specified std and block size
            tensor = torch.unsqueeze(tensor, 0)  # unsqueeze to put back into block
            t_map[i, :, :, :] = tensor  # put back into block
        filtered_output = output + t_map
        return filtered_output

The for loop at the end is to apply the filter to each image in the batch. I get that this isn't parallelisable, so if anyone has ideas for this, I'd appreciate it. I can post the wiener_3d() filter function if that helps; I just want to keep the post short.

I've tried to define a custom layer class with the filter inside it but I got lost very quickly. Any help would be greatly appreciated!
If all you want is to turn your Wiener filter into a module, the following would do:

class WienerFilter(T.nn.Module):
    def __init__(self, param_a=0.05, param_b=10):
        super(WienerFilter, self).__init__()
        # This can be accessed like any other member via self.param_a
        self.register_parameter("param_a", T.nn.Parameter(T.tensor(param_a)))
        self.param_b = param_b

    def forward(self, input):
        for i in range(4):
            tensor = input[i]
            tensor = torch.squeeze(tensor)
            tensor = wiener_3d(tensor, self.param_a, self.param_b)
            tensor = torch.unsqueeze(tensor, 0)
            input[i] = tensor
        return input

You can apply this by adding the line

self.wiener_filter = WienerFilter()

in the __init__ function of your AutoEncoder. In forward you then call it by replacing the for loop with

filtered_output = output + self.wiener_filter(t_map)

Torch knows that the wiener_filter module is a member module, so it will list the module if you print your AutoEncoder's modules. If you want to parallelize your Wiener filter, you need to do that in PyTorch's terms, meaning using its operations on tensors. Those operations are implemented in a parallel fashion.
https://stackoverflow.com/questions/60493941/
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2
I have a PyTorch LSTM model and my forward function looks like:

def forward(self, x, hidden):
    print('in forward', x.dtype, hidden[0].dtype, hidden[1].dtype)
    lstm_out, hidden = self.lstm(x, hidden)
    return lstm_out, hidden

All of the print statements show torch.float64, which I believe is a double. So then why am I getting this issue? I've cast to double in all of the relevant places already.
Make sure both your data and model are in dtype double.

For the model:

net = net.double()

For the data:

net(x.double())

This has been discussed on the PyTorch forum.
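As a minimal self-contained sketch (the layer sizes here are made up, not from the question), the key is that the weights and the inputs agree on float64:

import torch
import torch.nn as nn

net = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).double()
x = torch.randn(4, 10, 8)       # float32 by default
out, (h, c) = net(x.double())   # cast the data too, so dtypes match
print(out.dtype)                # torch.float64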
https://stackoverflow.com/questions/60495029/
using Fasterrcnn for regression or for only finding the bounding box in an image
Is it possible to use FasterRcnn in PyTorch just for finding the bounding box, without considering the classification part? And is it possible to change the loss of the classification part (categorical cross-entropy) to a regression loss (MSE)?
For the first part of your question, please have a look at the line where the bounding boxes are received in the demo: https://github.com/jwyang/faster-rcnn.pytorch/blob/31ae20687b1b3486155809a57eeb376259a5f5d4/demo.py#L297

rois, cls_prob, bbox_pred, \
rpn_loss_cls, rpn_loss_box, \
RCNN_loss_cls, RCNN_loss_bbox, \
rois_label = fasterRCNN(im_data, im_info, gt_boxes, num_boxes)

scores = cls_prob.data
boxes = rois.data[:, :, 1:5]

For the second part, yes, it is possible. But you have to make sure that a regression loss such as MSE is suitable for the task that you are planning to use the network for.
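A minimal sketch of such a loss swap, with hypothetical stand-in tensors (the variable names are illustrative and not from the linked repository; the targets must then be continuous values rather than class indices):

import torch
import torch.nn as nn

predictions = torch.randn(8, 1)   # hypothetical head outputs
targets = torch.randn(8, 1)       # hypothetical continuous targets

criterion = nn.MSELoss()          # swapped in for categorical cross-entropy
loss = criterion(predictions, targets)
print(loss.item())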
https://stackoverflow.com/questions/60497613/
PyTorch: Simple feedforward neural network not running without retain_graph=True
Below is my code for training a feedforward neural network (FFNN). The labels are numbers between 0 and 50. The FFNN comprises a single hidden layer with 50 neurons and an output layer with 51 neurons. Furthermore, I have used negative log likelihood loss. I am very new to PyTorch, so I used a couple of websites for guidance. The strange thing is that none of them required retain_graph to be set to True (they don't pass any arguments when calling backward()). Furthermore, it runs very slowly and the accuracy seems to be fluctuating around a fixed value instead of reducing. Assuming that the input's format is correct, can someone please explain to me why the network is performing so badly and why it requires retain_graph to be set to True? Thank you very much!

n_epochs = 2
batch_size = 100

for epoch in range(n_epochs):
    permutation = torch.randperm(training_set.size()[0])
    for i in range(0, training_set.size()[0], batch_size):
        opt.zero_grad()
        indices = permutation[i:i + batch_size]
        batch_features = training_set[indices]
        batch_labels = torch.LongTensor([label for label, sent in train[indices]])
        batch_outputs = model(batch_features)
        loss = loss_function(batch_outputs, batch_labels)
        loss.backward(retain_graph=True)
        opt.step()
You are missing the .zero_grad() operation. Add it to the loop and your code will work fine without retain_graph=True.

loss.backward()
opt.step()
opt.zero_grad()
https://stackoverflow.com/questions/60498031/
Facing this error while classifying Images, containing 10 classes in pytorch, in ResNet50. My code is:
This is the code I am implementing: I am using a subset of the CalTech256 dataset to classify images of 10 different kinds of animals. We will go over the dataset preparation, data augmentation and then steps to build the classifier.

def train_and_validate(model, loss_criterion, optimizer, epochs=25):
    '''
    Function to train and validate
    Parameters
        :param model: Model to train and validate
        :param loss_criterion: Loss Criterion to minimize
        :param optimizer: Optimizer for computing gradients
        :param epochs: Number of epochs (default=25)
    Returns
        model: Trained Model with best validation accuracy
        history: (dict object): Having training loss, accuracy and validation loss, accuracy
    '''
    start = time.time()
    history = []
    best_acc = 0.0

    for epoch in range(epochs):
        epoch_start = time.time()
        print("Epoch: {}/{}".format(epoch+1, epochs))

        # Set to training mode
        model.train()

        # Loss and Accuracy within the epoch
        train_loss = 0.0
        train_acc = 0.0
        valid_loss = 0.0
        valid_acc = 0.0

        for i, (inputs, labels) in enumerate(train_data_loader):
            inputs = inputs.to(device)
            labels = labels.to(device)

            # Clean existing gradients
            optimizer.zero_grad()

            # Forward pass - compute outputs on input data using the model
            outputs = model(inputs)

            # Compute loss
            loss = loss_criterion(outputs, labels)

            # Backpropagate the gradients
            loss.backward()

            # Update the parameters
            optimizer.step()

            # Compute the total loss for the batch and add it to train_loss
            train_loss += loss.item() * inputs.size(0)

            # Compute the accuracy
            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))

            # Convert correct_counts to float and then compute the mean
            acc = torch.mean(correct_counts.type(torch.FloatTensor))

            # Compute total accuracy in the whole batch and add to train_acc
            train_acc += acc.item() * inputs.size(0)

            #print("Batch number: {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}".format(i, loss.item(), acc.item()))

        # Validation - No gradient tracking needed
        with torch.no_grad():
            # Set to evaluation mode
            model.eval()

            # Validation loop
            for j, (inputs, labels) in enumerate(valid_data_loader):
                inputs = inputs.to(device)
                labels = labels.to(device)

                # Forward pass - compute outputs on input data using the model
                outputs = model(inputs)

                # Compute loss
                loss = loss_criterion(outputs, labels)

                # Compute the total loss for the batch and add it to valid_loss
                valid_loss += loss.item() * inputs.size(0)

                # Calculate validation accuracy
                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))

                # Convert correct_counts to float and then compute the mean
                acc = torch.mean(correct_counts.type(torch.FloatTensor))

                # Compute total accuracy in the whole batch and add to valid_acc
                valid_acc += acc.item() * inputs.size(0)

                #print("Validation Batch number: {:03d}, Validation: Loss: {:.4f}, Accuracy: {:.4f}".format(j, loss.item(), acc.item()))

        # Find average training loss and training accuracy
        avg_train_loss = train_loss/train_data_size
        avg_train_acc = train_acc/train_data_size

        # Find average validation loss and validation accuracy
        avg_valid_loss = valid_loss/valid_data_size
        avg_valid_acc = valid_acc/valid_data_size

        history.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])

        epoch_end = time.time()

        print("Epoch : {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}%, \n\t\tValidation : Loss : {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(epoch, avg_train_loss, avg_train_acc*100, avg_valid_loss, avg_valid_acc*100, epoch_end-epoch_start))

        # Save if the model has best accuracy till now
        torch.save(model, dataset+'_model_'+str(epoch)+'.pt')

    return model, history

# Load pretrained ResNet50 Model
resnet50 = models.resnet50(pretrained=True)
#resnet50 = resnet50.to('cuda:0')

# Freeze model parameters
for param in resnet50.parameters():
    param.requires_grad = False

# Change the final layer of ResNet50 Model for Transfer Learning
fc_inputs = resnet50.fc.in_features

resnet50.fc = nn.Sequential(
    nn.Linear(fc_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, num_classes),  # Since 10 possible outputs
    nn.LogSoftmax(dim=1)  # For using NLLLoss()
)

# Convert model to be used on GPU
# resnet50 = resnet50.to('cuda:0')

Error is this:

RuntimeError                              Traceback (most recent call last)
in ()
      6 # Train the model for 25 epochs
      7 num_epochs = 30
----> 8 trained_model, history = train_and_validate(resnet50, loss_func, optimizer, num_epochs)
      9
     10 torch.save(history, dataset+'_history.pt')

in train_and_validate(model, loss_criterion, optimizer, epochs)
     43
     44         # Compute loss
---> 45         loss = loss_criterion(outputs, labels)
     46
     47         # Backpropagate the gradients

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    202
    203     def forward(self, input, target):
--> 204         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
    205
    206

~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\Users\builder\AppData\Local\Temp\pip-req-build-0i480kur\aten\src\THNN/generic/ClassNLLCriterion.c:97
This happens when there are either incorrect labels in your dataset, or the labels are 1-indexed (instead of 0-indexed). As the error message says, cur_target must be smaller than the total number of classes (10). To verify the issue, check the maximum and minimum label in your dataset. If the data is indeed 1-indexed, just subtract one from all annotations and you should be fine. Note, another possible reason is that there exist some -1 labels in the data. Some (especially older) datasets use -1 as an indication of a wrong/dubious label. If you find such labels, just discard them.
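A minimal sketch of that sanity check, reusing the train_data_loader from the question (assuming it yields (inputs, labels) batches as shown):

import torch

# Collect every label the loader produces and inspect the range
all_labels = torch.cat([labels for _, labels in train_data_loader])
print(all_labels.min().item(), all_labels.max().item())
# With NLLLoss and 10 classes, both values must lie in [0, 9]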
https://stackoverflow.com/questions/60498676/
slicing each row of a tensor
I have a tensor of size (1124823 x 13). Starting from the center of each row of the tensor, I want to take five elements from the left and five from the right. I tried this simple code, but it gives me TypeError: iteration over a 0-d tensor.

i = 0
j = 6
base = feat.size(0)//2
for i in feat[i, j]:
    x = feat[base - 5: base:+5]
    i += 1
What about this:

x = torch.rand(100, 13)
center = x.size(1) // 2
x1 = x[:, center:center+5]  # torch.Size([100, 5]) (right)
x2 = x[:, center-5:center]  # torch.Size([100, 5]) (left)

Is that what you want?
https://stackoverflow.com/questions/60511346/
RuntimeError: Could not run 'aten::xxxx', Can't load Pytorch model trained with TPU
Training for MLM was added based on the Japanese model of BERT. At that time, we used a TPU on Google Colab. I get the following error when loading the created model. Is there a way to load the model?

Code:

from transformers import BertJapaneseTokenizer, BertForMaskedLM

# Load pre-trained model
model = BertForMaskedLM.from_pretrained('/content/drive/My Drive/Bert/models/sample/')
model.eval()

Output:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    469             try:
--> 470                 state_dict = torch.load(resolved_archive_file, map_location="cpu")
    471             except Exception:

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    528             return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 529         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    530

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    701     unpickler.persistent_load = persistent_load
--> 702     result = unpickler.load()
    703

/usr/local/lib/python3.6/dist-packages/torch/_utils.py in _rebuild_xla_tensor(data, dtype, device, requires_grad)
    151 def _rebuild_xla_tensor(data, dtype, device, requires_grad):
--> 152     tensor = torch.from_numpy(data).to(dtype=dtype, device=device)
    153     tensor.requires_grad = requires_grad

RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'XLATensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId].
I ran into the same error while using transformers; this is how I solved it. After training on Colab, I had to send the model to the CPU. Basically, run:

model.to('cpu')

Then save the model, which allowed me to import the weights in another instance. As implied by the error:

RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'XLATensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId]
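A sketch of that save path (this assumes a transformers model, so save_pretrained is available; the directory is the one from the question):

# Move the weights off the XLA device before serializing
model.to('cpu')
model.save_pretrained('/content/drive/My Drive/Bert/models/sample/')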
https://stackoverflow.com/questions/60519297/
Can my PyTorch forward function do additional operations?
Typically a forward function strings together a bunch of layers and returns the output of the last one. Can I do some additional processing after that last layer before returning? For example, some scalar multiplication and reshaping via .view? I know that the autograd somehow figures out gradients. So I don’t know if my additional processing will somehow screw that up. Thanks.
PyTorch tracks the gradients via the computational graph of the tensors, not through the functions. As long as your tensors have the requires_grad=True property and their grad is not None, you can do (almost) whatever you like and still be able to backprop. As long as you are using PyTorch's operations (e.g., those listed here and here) you should be okay. For more info see this.

For example (taken from torchvision's VGG implementation):

class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=True):
        super(VGG, self).__init__()
        # ...

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)  # <-- what you were asking about
        x = self.classifier(x)
        return x

A more complex example can be seen in torchvision's implementation of ResNet:

class Bottleneck(nn.Module):
    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None):
        super(Bottleneck, self).__init__()
        # ...

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:  # <-- conditional execution!
            identity = self.downsample(x)

        out += identity  # <-- inplace operation
        out = self.relu(out)

        return out
https://stackoverflow.com/questions/60523638/
How to deploy an existing pytorch model previously trained with Amazon Sagemaker and stored in S3 bucket
I have trained a PyTorch model using SageMaker and the model is now stored in an S3 bucket. I am trying to retrieve that model and deploy it. This is the code I am using:

estimator = sagemaker.model.FrameworkModel(
    model_data=  # link to model location in s3
    image=  # image
    role=role,
    entry_point='train.py',
    source_dir='pytorch_source',
    sagemaker_session=sagemaker_session
)

predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.p2.xlarge")

But after the deployment process (which seems to run smoothly), the predictor is just a NoneType. I haven't found any weird message in the logs... I have also made another attempt with the following code:

estimator = PyTorchModel(
    model_data=  # link to model location in s3
    role=role,
    image=  # image
    entry_point='pytorch_source/train.py',
    predictor_cls='pytorch_source/train.py',
    framework_version='1.1.0'
)

predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.p2.xlarge")

But it doesn't even complete the deployment. Can anyone help with this?
I actually solved it using PyTorchModel with the following settings:

estimator = PyTorchModel(
    model_data='#path to model',
    role=role,
    source_dir='pytorch_source',
    entry_point='deploy.py',
    predictor_cls=ImgPredictor,
    framework_version='1.1.0'
)

where ImgPredictor is

from sagemaker.predictor import RealTimePredictor, json_deserializer

class ImgPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(ImgPredictor, self).__init__(endpoint_name, sagemaker_session,
                                           content_type='application/x-image',
                                           deserializer=json_deserializer,
                                           accept='application/json')

and deploy.py contains the required functions input_fn, output_fn, model_fn and predict_fn. Also, a requirements.txt file was missing from the source directory.
https://stackoverflow.com/questions/60529048/
Torchscripting a module with _ConvNd in forward
I am using PyTorch 1.4 and need to export a model with convolutions inside a loop in forward:

class MyCell(torch.nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()

    def forward(self, x):
        for i in range(5):
            conv = torch.nn.Conv1d(1, 1, 2*i+3)
            x = torch.nn.Relu()(conv(x))
        return x

torch.jit.script(MyCell())

This gives the following error:

RuntimeError:
Arguments for call are not valid.
The following variants are available:

  _single(float[1] x) -> (float[]):
  Expected a value of type 'List[float]' for argument 'x' but instead found type 'Tensor'.

  _single(int[1] x) -> (int[]):
  Expected a value of type 'List[int]' for argument 'x' but instead found type 'Tensor'.

The original call is:
File "***/torch/nn/modules/conv.py", line 187
                 padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros'):
        kernel_size = _single(kernel_size)
                      ~~~~~~~ <--- HERE
        stride = _single(stride)
        padding = _single(padding)
'Conv1d.__init__' is being compiled since it was called from 'Conv1d'
File "***", line ***
    def forward(self, x):
        for _ in range(5):
            conv = torch.nn.Conv1d(1, 1, 2*i+3)
                   ~~~~~~~~~~~~~~~ <--- HERE
            x = torch.nn.Relu()(conv(x))
        return x
'Conv1d' is being compiled since it was called from 'MyCell.forward'
File "***", line ***
    def forward(self, x, h):
        for _ in range(5):
            conv = torch.nn.Conv1d(1, 1, 2*i+3)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            x = torch.nn.Relu()(conv(x))
        return x

I have also tried pre-defining the conv's and then putting them in a list inside __init__, but such a type is not allowed by TorchScript:

class MyCell(torch.nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()
        self.conv = [torch.nn.Conv1d(1, 1, 2*i+3) for i in range(5)]

    def forward(self, x):
        for i in range(len(self.conv)):
            x = torch.nn.Relu()(self.conv[i](x))
        return x

torch.jit.script(MyCell())

This instead gives:

RuntimeError:
Module 'MyCell' has no attribute 'conv' (This attribute exists on the Python module, but we failed to convert Python type: 'list' to a TorchScript type.):
File "***", line ***
    def forward(self, x):
        for i in range(len(self.conv)):
                           ~~~~~~~~~ <--- HERE
            x = torch.nn.Relu()(self.conv[i](x))
        return x

So how to export this module?

Background: I am exporting Mixed-scale Dense Networks (source) to TorchScript; while nn.Sequential may work for this simplified case, practically I need to convolve with all the historical convolution outputs in each iteration, which is more than chaining the layers.
You can use nn.ModuleList() in the following way. Also, note that you can't subscript nn.ModuleList currently, probably due to a bug as mentioned in issue#16123, but use the workaround as mentioned below.

class MyCell(nn.Module):
    def __init__(self):
        super(MyCell, self).__init__()
        self.conv = nn.ModuleList([torch.nn.Conv1d(1, 1, 2*i+3) for i in range(5)])
        self.relu = nn.ReLU()

    def forward(self, x):
        for mod in self.conv:
            x = self.relu(mod(x))
        return x

>>> torch.jit.script(MyCell())
RecursiveScriptModule(
  original_name=MyCell
  (conv): RecursiveScriptModule(
    original_name=ModuleList
    (0): RecursiveScriptModule(original_name=Conv1d)
    (1): RecursiveScriptModule(original_name=Conv1d)
    (2): RecursiveScriptModule(original_name=Conv1d)
    (3): RecursiveScriptModule(original_name=Conv1d)
    (4): RecursiveScriptModule(original_name=Conv1d)
  )
  (relu): RecursiveScriptModule(original_name=ReLU)
)
https://stackoverflow.com/questions/60530703/
Adding a Linear layer to my LSTM made the validation loss skyrocket in PyTorch
My model was:

def forward(self, x, hidden=None):
    lstm_out, hidden = self.lstm(x, hidden)
    lstm_out = (lstm_out[:, :, :self.hidden_size] +
                lstm_out[:, :, self.hidden_size:])
    out = torch.nn.SELU()(lstm_out)
    return out, hidden

It now is:

def forward(self, x, hidden=None):
    lstm_out, hidden = self.lstm(x, hidden)
    batch_size = lstm_out.size(0)
    flattened_out = lstm_out.view(-1, self.hidden_size * 2)
    lstm_out = (lstm_out[:, :, :self.hidden_size] +
                lstm_out[:, :, self.hidden_size:])
    out = self.linear(flattened_out)
    out = torch.nn.functional.relu(out)
    view_out = out.view(batch_size, self.seq_length, -1)
    return view_out, hidden

I used to get validation loss (with MSELoss) under 1000 after 2-3 epochs. Now with the Linear layer, it is skyrocketing up to 15000 even after 10 epochs. Why would this be?
You can try lowering the learning rate :)
https://stackoverflow.com/questions/60530966/
installing CuPy with Pytorch
I want to pass tensors from Pytorch to CuPy and back to do some ops not available in Pytorch. What's the recommended way to install them together? I already installed Pytorch using conda, and it has installed cudatoolkit package. I'd prefer to use conda to install cupy as well. However CuPy installation instructions recommend uninstalling cudatoolkit (at the very bottom). So, how do I make sure both Pytorch and CuPy are using the same cuda version, as well as CuDNN, NVCC, etc? Or should I not use conda to install cupy?
Just do

conda install -c conda-forge cupy

and it should work. That part of the documentation is outdated and will be updated.
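Once both are installed, here is a minimal sketch of the round trip the question asks about, via DLPack (API names as of CuPy ~7 / PyTorch ~1.4; newer CuPy versions also offer cupy.from_dlpack):

import cupy
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

t = torch.arange(6, dtype=torch.float32, device='cuda')

# PyTorch -> CuPy: zero-copy handoff, both sides see the same GPU memory
c = cupy.fromDlpack(to_dlpack(t))
c = cupy.sqrt(c)  # some op done on the CuPy side

# CuPy -> PyTorch
t2 = from_dlpack(c.toDlpack())
print(t2)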
https://stackoverflow.com/questions/60534075/
Pytorch cut 2d array by lengths on first dimension
I have a 2d array, let's say of size torch.tensor(batch_size, 1000). The 1000 entries in the second dimension are actually variable length. I have a second array of size [batch_size] containing the length of each row... Here is an example code snippet:

# preds is the 2d array of size [batch_size, 1000]
# lengths is a 1d array containing the lengths of each row of preds
res_pred = []
for i in range(len(preds)):
    length = lengths[i].item()
    res_pred += [preds[i][:length]]
result = torch.cat(res_pred).flatten()

I do the same thing for my targets and then I can apply a loss function to both. I was wondering if there was a single vectorized operation I could do to extract all batch_size vectors of variable lengths and torch.cat them together. Right now I am looping over the first dimension, but this feels slow. Thanks,
You can create a 2D mask tensor with the number of True's in the i-th row given by lengths[i]. Here's one example:

batch_size = 6
n = 5
preds = torch.arange(batch_size * n).reshape(batch_size, n)
# tensor([[ 0,  1,  2,  3,  4],
#         [ 5,  6,  7,  8,  9],
#         [10, 11, 12, 13, 14],
#         [15, 16, 17, 18, 19],
#         [20, 21, 22, 23, 24],
#         [25, 26, 27, 28, 29]])

#lengths = np.random.randint(0, n+1, batch_size)
lengths = torch.randint(0, n+1, (batch_size, ))
# tensor([2, 0, 5, 3, 3, 2])

Let's create the mask and get our result (probably there is a better way to create such a mask, but that's what I came up with):

#mask = np.tile(range(n), (batch_size,1)) < lengths[:,None]
mask = torch.arange(n).repeat((batch_size, 1)) < lengths[:, None]
# tensor([[ True,  True, False, False, False],
#         [False, False, False, False, False],
#         [ True,  True,  True,  True,  True],
#         [ True,  True,  True, False, False],
#         [ True,  True,  True, False, False],
#         [ True,  True, False, False, False]])

#result = preds[mask]
result = torch.masked_select(preds, mask)
# tensor([ 0,  1, 10, 11, 12, 13, 14, 15, 16, 17, 20, 21, 22, 25, 26])

This produces the same result as your code:

res_pred = []
for i in range(len(preds)):
    length = lengths[i].item()
    res_pred += [preds[i][:length]]
result = torch.cat(res_pred).flatten()
https://stackoverflow.com/questions/60534121/
Gaussian filter in PyTorch
I am looking for a way to apply a Gaussian filter to an image (tensor) only using PyTorch functions. Using numpy, the equivalent code is

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Define 2D Gaussian kernel
def gkern(kernlen=256, std=128):
    """Returns a 2D Gaussian kernel array."""
    gkern1d = signal.gaussian(kernlen, std=std).reshape(kernlen, 1)
    gkern2d = np.outer(gkern1d, gkern1d)
    return gkern2d

# Generate random matrix and multiply the kernel by it
A = np.random.rand(256*256).reshape([256, 256])

# Test plot
plt.figure()
plt.imshow(A*gkern(256, std=32))
plt.show()

The closest suggestion I found is based on this post:

import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=264, bias=False)
with torch.no_grad():
    conv.weight = gaussian_weights

But it gives me the error NameError: name 'gaussian_weights' is not defined. How can I make it work?
There is a PyTorch class to apply Gaussian blur to your image:

torchvision.transforms.GaussianBlur(kernel_size, sigma=(0.1, 2.0))

Check the documentation for more info.
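A short usage sketch (this requires a torchvision version that ships GaussianBlur, i.e. 0.8+; the kernel size here is an arbitrary choice):

import torch
from torchvision import transforms

blur = transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 2.0))
img = torch.rand(1, 256, 256)  # a (C, H, W) image tensor
out = blur(img)                # blurred with a sigma sampled from (0.1, 2.0)
print(out.shape)               # torch.Size([1, 256, 256])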
https://stackoverflow.com/questions/60534909/
HTTPError: HTTP Error 403: Forbidden on Google Colab
I am trying to download MNIST data in PyTorch using the following code:

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=128, shuffle=True)

and it gives the following error.

Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data/MNIST/raw/train-images-idx3-ubyte.gz
0it [00:00, ?it/s]
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
<ipython-input-2-2fee284dabb8> in <module>()
      5                    transform=transforms.Compose([
      6                        transforms.ToTensor(),
----> 7                        transforms.Normalize((0.1307,), (0.3081,))
      8                    ])),
      9                    batch_size=128, shuffle=True)

11 frames
/usr/lib/python3.6/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
    648 class HTTPDefaultErrorHandler(BaseHandler):
    649     def http_error_default(self, req, fp, code, msg, hdrs):
--> 650         raise HTTPError(req.full_url, code, msg, hdrs, fp)
    651
    652 class HTTPRedirectHandler(BaseHandler):

HTTPError: HTTP Error 403: Forbidden

How do I solve this? The notebook was working before; I'm trying to rerun it but I got this error.
This is a new bug, reported here: https://github.com/pytorch/vision/issues/1938 See that thread for some potential workarounds until the issue is fixed in pytorch itself.
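One workaround that circulated at the time (under the assumption that the 403 is triggered by Python's default User-Agent, which matched the server behavior then) is to install a urllib opener with a browser-like User-Agent before triggering the download:

import urllib.request

# Make urllib send a browser-like User-Agent so the server stops
# answering 403; run this before datasets.MNIST(..., download=True)
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)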
https://stackoverflow.com/questions/60538059/