How to select specific labels in pytorch MNIST dataset
I'm trying to create dataloaders using only a specific digit from the PyTorch MNIST dataset. I already tried to create my own Sampler, but it doesn't work and I'm not sure I'm using the mask correctly. class YourSampler(torch.utils.data.sampler.Sampler): def __init__(self, mask): self.mask = mask def __iter__(self): return (self.indices[i] for i in torch.nonzero(self.mask)) def __len__(self): return len(self.mask) mnist = datasets.MNIST(root=dataroot, train=True, download=True, transform = transform) mask = [True if mnist[i][1] == 5 else False for i in range(len(mnist))] mask = torch.tensor(mask) sampler = YourSampler(mask) trainloader = torch.utils.data.DataLoader(mnist, batch_size=4, sampler = sampler, shuffle=False, num_workers=2) So far I have had many different types of errors. For this implementation, it's "StopIteration". I feel like this is very easy/stupid, but I can't find a simple way to do it. Thank you for your help!
Thank you for your help. After a while I figured out a solution (but it might not be the best at all): class YourSampler(torch.utils.data.sampler.Sampler): def __init__(self, mask, data_source): self.mask = mask self.data_source = data_source def __iter__(self): return iter([i.item() for i in torch.nonzero(self.mask)]) def __len__(self): return len(self.data_source) mnist = datasets.MNIST(root=dataroot, train=True, download=True, transform = transform) mask = [1 if mnist[i][1] == 5 else 0 for i in range(len(mnist))] mask = torch.tensor(mask) sampler = YourSampler(mask, mnist) trainloader = torch.utils.data.DataLoader(mnist, batch_size=batch_size, sampler = sampler, shuffle=False, num_workers=workers)
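A simpler alternative sketch (my addition, hedged: dataroot and transform are assumed from the question): precompute the matching indices once and wrap the dataset in a Subset, which avoids a custom sampler entirely.

import torch
from torch.utils.data import Subset, DataLoader
from torchvision import datasets

mnist = datasets.MNIST(root=dataroot, train=True, download=True, transform=transform)
indices = [i for i in range(len(mnist)) if mnist[i][1] == 5]  # keep digit 5 only
trainloader = DataLoader(Subset(mnist, indices), batch_size=4,
                         shuffle=True, num_workers=2)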
https://stackoverflow.com/questions/57913825/
Pytorch how to stack tensor like for loop
I want to concatenate the tensors generated in a for loop and get a 2D tensor. In standard Python it would look like below. li = [] for i in range(0, len(items)): # calc something li.append(calc_result) In my case, each loop iteration generates a tensor of torch.Size([768]), and I want to end up with a tensor of torch.Size([len(items), 768]). How do I do this?
You can use torch.stack: torch.stack(li, dim=0) after the for loop will give you a torch.Tensor of that size. Note that if you know in advance the size of the final tensor, you can allocate an empty tensor beforehand and fill it in the for loop: x = torch.empty(size=(len(items), 768)) for i in range(len(items)): x[i] = calc_result This is usually faster than doing the stack.
https://stackoverflow.com/questions/57917981/
Replicating Semantic Analysis Model in Demo
Good day, I am a student who is interested in NLP. I have come across the demo on AllenNLP's homepage, which stated that: The model is a simple LSTM using GloVe embeddings that is trained on the binary classification setting of the Stanford Sentiment Treebank. It achieves about 87% accuracy on the test set. Is there any reference to the sample code or any tutorial that I can follow to replicate this result, so that I can learn more about this subject? I am trying to obtain a regression output (instead of classification). I hope that someone can point me in the right direction. Any help is much appreciated. Thank you!
AllenAI open-sources all the code for its examples and libraries on GitHub, including AllenNLP. I found exactly how the example was run here: https://github.com/allenai/allennlp/blob/master/allennlp/tests/data/dataset_readers/stanford_sentiment_tree_bank_test.py However, to make it a regression task, you'll have to modify the model directly in PyTorch, which is the underlying framework for AllenNLP.
https://stackoverflow.com/questions/57925256/
How to share weights between modules in Pytorch?
What is the correct way of sharing weights between two layers (modules) in PyTorch? Based on my findings in the PyTorch discussion forum, there are several ways of doing this. As an example, based on this discussion, I thought simply assigning the transposed weights would do it, that is: self.decoder[0].weight = self.encoder[0].weight.t() This, however, proved to be wrong and causes an error. I then tried wrapping the above line in nn.Parameter(): self.decoder[0].weight = nn.Parameter(self.encoder[0].weight.t()) This eliminates the error, but then again, there is no sharing happening here; by this I just initialized a new tensor with the same values as encoder[0].weight.t(). I then found this link, which provides different ways of sharing weights. However, I'm skeptical whether all the methods given there are actually correct. For example, one way is demonstrated like this: # tied autoencoder using off the shelf nn modules class TiedAutoEncoderOffTheShelf(nn.Module): def __init__(self, inp, out, weight): super().__init__() self.encoder = nn.Linear(inp, out, bias=False) self.decoder = nn.Linear(out, inp, bias=False) # tie the weights self.encoder.weight.data = weight.clone() self.decoder.weight.data = self.encoder.weight.data.transpose(0,1) def forward(self, input): encoded_feats = self.encoder(input) reconstructed_output = self.decoder(encoded_feats) return encoded_feats, reconstructed_output Basically it creates a new weight tensor using nn.Parameter() and assigns it to each layer/module like this: weights = nn.Parameter(torch.randn_like(self.encoder[0].weight)) self.encoder[0].weight.data = weights.clone() self.decoder[0].weight.data = self.encoder[0].weight.data.transpose(0, 1) This really confuses me: how is this sharing the same variable between these two layers? Is it not just cloning the 'raw' data? When I used this approach and visualized the weights, I noticed the visualizations were different, and that made me even more certain something was not right. I'm not sure if the different visualizations were solely due to one being the transpose of the other, or, as I already suspected, that they are optimized independently (i.e. the weights are not shared between layers). Example weight initialization:
As it turns out, after further investigation (which was simply re-transposing the decoder's weight and visualizing it), the weights were indeed shared. Below is the visualization of the encoder and decoder weights:
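For reference, a hedged sketch of tying that shares a single Parameter outright, applied functionally so encoder and decoder use the very same storage (no cloning involved):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoEncoder(nn.Module):
    def __init__(self, inp, out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out, inp))  # one shared weight

    def forward(self, x):
        encoded = F.linear(x, self.weight)                  # encoder: x @ W.t()
        reconstructed = F.linear(encoded, self.weight.t())  # decoder reuses W
        return encoded, reconstructed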
https://stackoverflow.com/questions/57929299/
Masking: Mask everything after a specified token (eos)
My tgt tensor is in shape of [12, 32, 1] which is sequence_length, batch_size, token_idx. What is the best way to create a mask which has ones for entries with <eos> and before in sequence, and zeros afterwards? Currently I'm calculating my mask like this, which simply puts zeros where <blank> is, ones otherwise. mask = torch.zeros_like(tgt).masked_scatter_((tgt != tgt_padding), torch.ones_like(tgt)) But the problem is, that my tgt can contain <blank> as well (before <eos>), in which cases I don't want to mask it out. My temporary solution: mask = torch.ones_like(tgt) for eos_token in (tgt == tgt_eos).nonzero(): mask[eos_token[0]+1:,eos_token[1]] = 0
I guess you are trying to create a mask for the PAD tokens. There are several ways; one of them is as follows. # tensor is of shape [seq_len, batch_size, 1] tensor = tensor.mul(tensor.ne(PAD).float()) Here, PAD stands for the index of the PAD_TOKEN. tensor.ne(PAD) will create a byte tensor with 0 assigned at PAD_TOKEN positions and 1 elsewhere. If you have examples like "<s> I think <pad> so </s> <pad> <pad>", then I would suggest using different PAD tokens before and after </s>. Or, if you have the length information for each sentence (in the above example, the sentence length is 6), then you can create the mask using the following function. def sequence_mask(lengths, max_len=None): """ Creates a boolean mask from sequence lengths. :param lengths: 1d tensor [batch_size] :param max_len: int """ batch_size = lengths.numel() max_len = max_len or lengths.max() return (torch.arange(0, max_len, device=lengths.device) # (0 for pad positions) .type_as(lengths) .repeat(batch_size, 1) .lt(lengths.unsqueeze(1)))
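A vectorized alternative to the loop in the question (a hedged sketch: assumes at most one <eos> per sequence and tgt of shape [seq_len, batch_size, 1]):

# ones up to and including the first <eos>, zeros strictly after it
is_eos = (tgt == tgt_eos).long()
mask = ((is_eos.cumsum(dim=0) - is_eos) == 0).long()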
https://stackoverflow.com/questions/57931586/
torchtext data build_vocab / data_field
I want to ask you something about torchtext. I have a task on abstractive text summarization, and I built a seq2seq model with PyTorch. I just wonder about the data_field constructed by the build_vocab function in torchtext. In machine translation, I accept that two data_fields (input, output) are needed. But in summarization, the input data and output data are in the same language. Should I make two data_fields (full_sentence, abstract_sentence) here? Or is it okay to use only one data_field? I'm afraid that a wrong choice will hurt the model's performance. Please give me a hint.
You are right: in the case of summarization and other tasks where input and output share a language, it makes sense to build and use the same vocab for both input and output.
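A minimal sketch of that with the torchtext API of the time (hedged: the file name and column names are illustrative): declare one Field, attach it to both columns, and build the vocab once.

from torchtext import data

TEXT = data.Field(init_token='<sos>', eos_token='<eos>')
dataset = data.TabularDataset(path='train.tsv', format='tsv',
                              fields=[('full', TEXT), ('abstract', TEXT)])
TEXT.build_vocab(dataset)   # a single vocab covering both columns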
https://stackoverflow.com/questions/57933011/
How to convert torch tensor to pandas dataframe?
I'd like to convert a torch tensor to pandas dataframe but by using pd.DataFrame I'm getting a dataframe filled with tensors instead of numeric values. import torch import pandas as pd x = torch.rand(4,4) px = pd.DataFrame(x) Here's what I get when clicking on px in the variable explorer: 0 1 2 3 tensor(0.3880) tensor(0.4598) tensor(0.4239) tensor(0.7376) tensor(0.4174) tensor(0.9581) tensor(0.0987) tensor(0.6359) tensor(0.6199) tensor(0.8235) tensor(0.9947) tensor(0.9679) tensor(0.7164) tensor(0.9270) tensor(0.7853) tensor(0.6921)
I found one possible way by converting torch first to numpy: import torch import pandas as pd x = torch.rand(4,4) px = pd.DataFrame(x.numpy())
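A hedged side note: if the tensor lives on the GPU or carries gradients, it must be moved and detached first, since .numpy() only works on CPU tensors that do not require grad:

px = pd.DataFrame(x.detach().cpu().numpy())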
https://stackoverflow.com/questions/57942487/
Multiply rows of matrix by vector elementwise in pytorch?
I would like to do the below but using PyTorch. The below example and description is from this post. I have a numeric matrix with 25 columns and 23 rows, and a vector of length 25. How can I multiply each row of the matrix by the vector without using a for loop? The result should be a 25x23 matrix (the same size as the input), but each row has been multiplied by the vector. Example Code in R (source: reproducible example from @hatmatrix's answer): matrix <- matrix(rep(1:3,each=5),nrow=3,ncol=5,byrow=TRUE) [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 1 1 [2,] 2 2 2 2 2 [3,] 3 3 3 3 3 vector <- 1:5 Desired output: [,1] [,2] [,3] [,4] [,5] [1,] 1 2 3 4 5 [2,] 2 4 6 8 10 [3,] 3 6 9 12 15 What is the best way of doing this using Pytorch?
The answer was so trivial that I overlooked it. For simplicity I used a smaller vector and matrix in this answer. Multiply rows of matrix by vector: X = torch.tensor([[1,2,3],[5,6,7]]) y = torch.tensor([7,4]) X.transpose(0,1)*y # or alternatively y*X.transpose(0,1) output: tensor([[ 7, 20], [14, 24], [21, 28]]) tensor([[ 7, 20], [14, 24], [21, 28]]) Multiply columns of matrix by vector: To multiply the columns of matrix by a vector you can use the same operator '*' but without the need to transpose the matrix (or vector) first X = torch.tensor([[3, 5],[5, 5],[1, 0]]) y = torch.tensor([7,4]) X*y # or alternatively y*X output: tensor([[21, 20], [35, 20], [ 7, 0]]) tensor([[21, 20], [35, 20], [ 7, 0]])
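An equivalent broadcasting sketch (my addition, not from the original answer) that avoids the transposes: give the vector a trailing singleton dimension so each entry scales one row.

import torch

X = torch.tensor([[1, 2, 3], [5, 6, 7]])
y = torch.tensor([7, 4])
X * y.unsqueeze(1)  # tensor([[ 7, 14, 21], [20, 24, 28]])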
https://stackoverflow.com/questions/57947725/
With pytorch DataLoader how to take in two ndarray (data & label)?
I have a training data features in ndarray of shape (100, 400, 3) as it's 100 images of 20x20 with RGB channel and label in shape (100, ). Do I need to combine them into one dataset or how can I pass it to Pytorch dataLoader in order to iterate over image and label later? What I've tried so far #turn ndarray of features and labels into tensors transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
As @Shai mentioned, DataLoader requires the input to be the Dataset class or its subclass. One of the simplest subclasses is TensorDataset and you can convert it from ndarray. import torch import numpy as np import torch.utils as utils train_x = torch.Tensor(np.random.randn(100,400,3)) train_y = torch.Tensor(np.random.randint(0,2,100)) dataset = utils.data.TensorDataset(train_x, train_y) dataloader = utils.data.DataLoader(dataset)
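A hedged follow-up for convolutional models: PyTorch expects (N, C, H, W), so if the (100, 400, 3) array really holds 100 flattened 20x20 RGB images, reshape and permute before building the dataset (continuing the imports above):

train_x = torch.Tensor(np.random.randn(100, 400, 3)).reshape(100, 20, 20, 3).permute(0, 3, 1, 2)
train_y = torch.LongTensor(np.random.randint(0, 2, 100))
dataset = utils.data.TensorDataset(train_x, train_y)
dataloader = utils.data.DataLoader(dataset, batch_size=10, shuffle=True)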
https://stackoverflow.com/questions/57949625/
pytorch cannot install using anaconda prompt
C:\Users\User>pip install pytorch When I run this command anaconda prompt display this error. Collecting pytorch Downloading https://files.pythonhosted.org/packages/ee/67/f403d4ae6e9cd74b546ee88cccdb29b8415a9c1b3d80aebeb20c9ea91d96/pytorch-1.0.2.tar.gz Building wheels for collected packages: pytorch Building wheel for pytorch (setup.py) ... error ERROR: Complete output from command 'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-yto7o19p\\pytorch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\User\AppData\Local\Temp\pip-wheel-lqbohuk2' --python-tag cp37: ERROR: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\User\AppData\Local\Temp\pip-install-yto7o19p\pytorch\setup.py", line 15, in <module> raise Exception(message) Exception: You tried to install "pytorch". The package named for PyTorch is "torch" ---------------------------------------- ERROR: Failed building wheel for pytorch Running setup.py clean for pytorch Failed to build pytorch Installing collected packages: pytorch Running setup.py install for pytorch ... error ERROR: Complete output from command 'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-yto7o19p\\pytorch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-oah_3fo2\install-record.txt' --single-version-externally-managed --compile: ERROR: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\User\AppData\Local\Temp\pip-install-yto7o19p\pytorch\setup.py", line 11, in <module> raise Exception(message) Exception: You tried to install "pytorch". The package named for PyTorch is "torch" ---------------------------------------- ERROR: Command "'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-yto7o19p\\pytorch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-oah_3fo2\install-record.txt' --single-version-externally-managed --compile" failed with error code 1 in C:\Users\User\AppData\Local\Temp\pip-install-yto7o19p\pytorch\ C:\Users\User>pip install torch When I tried to install torch it gives me this error message Collecting torch Downloading https://files.pythonhosted.org/packages/f8/02/880b468bd382dc79896eaecbeb8ce95e9c4b99a24902874a2cef0b562cea/torch-0.1.2.post2.tar.gz (128kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 133kB 544kB/s Requirement already satisfied: pyyaml in c:\users\user\anaconda3\lib\site-packages (from torch) (5.1.1) Building wheels for collected packages: torch Building wheel for torch (setup.py) ... 
error ERROR: Complete output from command 'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-5hkp2cz9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\User\AppData\Local\Temp\pip-wheel-64iazehi' --python-tag cp37: ERROR: running bdist_wheel running build running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\setup.py", line 265, in <module> description="Tensors and Dynamic neural networks in Python with strong GPU acceleration", File "C:\Users\User\Anaconda3\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "C:\Users\User\Anaconda3\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\User\Anaconda3\lib\site-packages\wheel\bdist_wheel.py", line 192, in run self.run_command('build') File "C:\Users\User\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\User\Anaconda3\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\Users\User\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Failed building wheel for torch Running setup.py clean for torch ERROR: Complete output from command 'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-5hkp2cz9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all: ERROR: running clean error: [Errno 2] No such file or directory: '.gitignore' ---------------------------------------- ERROR: Failed cleaning build dir for torch Failed to build torch Installing collected packages: torch Running setup.py install for torch ... 
error ERROR: Complete output from command 'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-5hkp2cz9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-en00_2m3\install-record.txt' --single-version-externally-managed --compile: ERROR: running install running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\setup.py", line 265, in <module> description="Tensors and Dynamic neural networks in Python with strong GPU acceleration", File "C:\Users\User\Anaconda3\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "C:\Users\User\Anaconda3\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\setup.py", line 99, in run self.run_command('build_deps') File "C:\Users\User\Anaconda3\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\User\Anaconda3\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Command "'C:\Users\User\Anaconda3\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'C:\\Users\\User\\AppData\\Local\\Temp\\pip-install-5hkp2cz9\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-en00_2m3\install-record.txt' --single-version-externally-managed --compile" failed with error code 1 in C:\Users\User\AppData\Local\Temp\pip-install-5hkp2cz9\torch\ Other packsges can install by using pip install command but pytorch and torch didn't work like that
conda install pytorch -c pytorch is fine; there is no need to run pip3 install torchvision.
https://stackoverflow.com/questions/57957101/
Do I need to match the representation of my output layer and the target labels in PyTorch Nllloss?
I'm trying to solve a classification problem with 3 possible outputs: 0, 1, or 2. My output layer finally outputs a vector of probabilities for each label, say [0.3, 0.4, 0.3]. My loss function is defined as: loss = criterion(output_batch, label_batch) #criterion = nn.NLLLoss() Now my question has to do with the outputs and labels not matching in the way they store data. The output is in the form of a size-3 probability vector (summing to 1 via softmax), and my target labels are simple scalars. I can convert my labels to vectors when the loss function is calculated, but I'm not sure if this is necessary: 0 ==> [1,0,0] 1 ==> [0,1,0] 2 ==> [0,0,1] Can someone please shed light on this issue? Thanks!
Let's say your classes are: cat, dog and capibara. You have so called softmax predictions. [0.3,0.4,0.3] The softmax function is pumping one result at a top. In this case if dog is under 0.4 our output is predicting the dog. Note how the predictions sum to 1 = 0.3+0.4+0.3. Now you need to calculate the log of that which is log softmax, and then NLL is just negative of that. I can convert my labels to vectors when the loss function is calculated but I'm not sure if this is necessary? 0 ==> [1,0,0] 1 ==> [0,1,0] 2 ==> [0,0,1] This is not necessary in your case. This means we had three different estimations (bs=3) while you showed just one. Here is a little exercise: batch_size, n_classes = 10, 5 x = torch.randn(batch_size, n_classes) print("x:",x) target = torch.randint(n_classes, size=(batch_size,), dtype=torch.long) print("target:",target) def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def nll_loss(p, target): return -p[range(target.shape[0]), target].mean() pred = log_softmax(x) print ("pred:", pred) ohe = torch.zeros(batch_size, n_classes) ohe[range(ohe.shape[0]), target]=1 print("ohe:",ohe) pe = pred[range(target.shape[0]), target] print("pe:",pe) mean = pred[range(target.shape[0]), target].mean() print("mean:",mean) negmean = -mean print("negmean:", negmean) loss = nll_loss(pred, target) print("loss:",loss) Out: x: tensor([[ 1.5837, -1.3132, 1.5513, 1.4422, 0.8072], [ 1.1740, 1.9250, 0.4258, -1.0320, -0.4650], [-1.2447, -0.5360, -1.4950, 1.2020, 1.2724], [ 0.2300, 0.2587, -0.4463, -0.1397, -0.3617], [-0.7983, 0.7742, 0.0035, 0.9963, -0.7926], [ 0.7575, -0.8008, 0.7995, 0.0448, 0.6621], [-1.7153, 0.7672, -0.6841, -0.4826, -0.8614], [ 0.0263, 0.7244, 0.8751, -1.0226, -1.3762], [ 0.0192, -0.4368, -0.4010, -1.0660, 0.0364], [-0.5120, -1.4871, 0.6758, 1.2975, 0.2879]]) target: tensor([0, 4, 3, 0, 0, 4, 1, 2, 4, 2]) pred: tensor([[-1.2094, -4.1063, -1.2418, -1.3509, -1.9859], [-1.3601, -0.6091, -2.1083, -3.5661, -2.9991], [-3.3233, -2.6146, -3.5736, -0.8766, -0.8063], [-1.3302, -1.3015, -2.0065, -1.7000, -1.9220], [-2.7128, -1.1403, -1.9109, -0.9181, -2.7070], [-1.2955, -2.8538, -1.2535, -2.0081, -1.3909], [-3.0705, -0.5881, -2.0394, -1.8379, -2.2167], [-1.7823, -1.0841, -0.9334, -2.8311, -3.1847], [-1.2936, -1.7496, -1.7138, -2.3788, -1.2764], [-2.5641, -3.5393, -1.3764, -0.7546, -1.7643]]) ohe: tensor([[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.], [0., 0., 0., 1., 0.], [1., 0., 0., 0., 0.], [1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.], [0., 0., 0., 0., 1.], [0., 0., 1., 0., 0.]]) pe: tensor([-1.2094, -2.9991, -0.8766, -1.3302, -2.7128, -1.3909, -0.5881, -0.9334, -1.2764, -1.3764]) mean: tensor(-1.4693) negmean: tensor(1.4693) loss: tensor(1.4693)
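As a practical footnote to the exercise above: the one-hot tensor is never needed in day-to-day code, since nll_loss takes integer class indices directly and cross_entropy additionally fuses the log-softmax. A short self-contained check:

import torch
import torch.nn.functional as F

logits = torch.randn(10, 5)             # raw network outputs
target = torch.randint(5, size=(10,))   # scalar class labels, no one-hot
loss_a = F.nll_loss(F.log_softmax(logits, dim=1), target)
loss_b = F.cross_entropy(logits, target)  # fuses log_softmax + NLL
assert torch.allclose(loss_a, loss_b)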
https://stackoverflow.com/questions/57966054/
how to use RGB values in feedforward neural network?
I have a dataset of colored images as an ndarray of shape (100, 20, 20, 3) and 100 corresponding labels. When passing them as input to a fully connected neural network (not a CNN), what should I do with the 3 RGB values? Averaging them would perhaps lose some information, but without manipulating them my main issue is the batch size, as demoed below in PyTorch. for epoch in range(n_epochs): for i, (images, labels) in enumerate(train_loader): # because of rgb values, now images is 3 times the length of labels images = Variable(images.view(-1, 400)) labels = Variable(labels) optimizer.zero_grad() outputs = net(images) loss = criterion(outputs, labels) loss.backward() optimizer.step() This returns 'ValueError: Expected input batch_size (300) to match target batch_size (100).' Should I have reshaped images into (1, 1200)-dimensional tensors? Thanks in advance for answers.
Since size of labels is (100,), so your batch data should be with shape of (100, H, W, C). I'm assuming your data loader is returning a tensor with shape of (100,20,20,3). The error happens because you reshape the tensor to (300,400). Check your network architecture whether the input tensor shape is (20,20,3). If your network can only accept single channel images, you can first convert your RGB to grayscale images. Or, modify your network architecture to make it accept 3 channels images. One convenient way is adding an extra layer reducing 3 channels to 1 channel, and you do not need to change the other parts of the network.
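A hedged sketch of the flattening route the question asks about: keep all three channels and widen the first fully connected layer to 20*20*3 = 1200 inputs (hidden_size is illustrative).

# inside the training loop
images = images.view(-1, 20 * 20 * 3)   # (100, 1200): one row per image
# and in the network definition, e.g.:
# self.fc1 = nn.Linear(1200, hidden_size)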
https://stackoverflow.com/questions/57966951/
Unexpected import behavior: non-existing module imported
I have a torch extension (but I think the error I get is independent of torch), let's call it foo, that is built with setuptools. However, it depends on whether there is a GPU available in the system, in which case some CUDA code is compiled: from torch.utils.cpp_extension import CppExtension, BuildExtension, CUDAExtension if cuda_available: ext = CUDAExtension( 'foo_gpu', prefixed(prefix, ['foo.cpp', 'foo_kernel.cu']), define_macros=[('COMPILE_CUDA', '1')]) else: ext = CppExtension( 'foo_cpu', prefixed(prefix, ['foo.cpp'])) setup(name=ext.name, version='1.0.0', ext_modules=[ext_module], extra_compile_args=['-mmacosx-version-min=10.9'], cmdclass={'build_ext': BuildExtension}) Notice how the module is compiled to be either foo_gpu or foo_cpu. Later, I import it as follows: try: import foo_gpu as foo_backend CUDA_SUPPORTED = True except ImportError: CUDA_SUPPORTED = False # Try importing the cpu version try: import foo_cpu as foo_backend except ImportError: ... Let's say I compile it with CUDA support, so foo_gpu goes into PIP (actually goes into PIP as foo-gpu, not sure why?) Now, I uninstalled it, pip uninstall foo-gpu, and compile it with CPU support only, and pip shows foo-cpu 1.0.0. BUT, now, when I run the import code above, it still finds foo-gpu, i.e., the first import statement succeeds! Even though it does not show up in pip. EDIT. I checked sys.path and found there is one folder that contains something with gpu in my conda env: $ ls ~/miniconda3/envs/cpu_env/lib/python3.7/site-packages/foo_cpu-1.0.0-py3.7-linux-x86_64.egg/ EGG-INFO __pycache__ foo_cpu.cpython-37m-x86_64-linux-gnu.so foo_cpu.py foo_gpu.cpython-37m-x86_64-linux-gnu.so But how should it get there? In this env (cpu_env), I never compiled with GPU support. Are there some caches that get invoked?
Turns out the GPU build was cached by setuptools (in build/, dist/, etc.) and was being installed as well!
https://stackoverflow.com/questions/57968883/
How do I reshape this tensor?
I have a torch.tensor that looks like this: tensor([[[A,B,C], [D,E,F], [G,H,I]], [[J,K,L], [M,N,O], [P,Q,R]]] I want to reshape this tensor so that its dimensions are (18, 1). I want the new tensor to look like this: tensor([A, J, B, K, C, L, D, M, ... I, R]) I've tried tensor.view(-1,1) but this doesn't work..
a = torch.arange(18).view(2,3,3) print(a) #tensor([[[ 0, 1, 2], # [ 3, 4, 5], # [ 6, 7, 8]], # # [[ 9, 10, 11], # [12, 13, 14], # [15, 16, 17]]]) aa = a.permute(1,2,0).flatten() print(aa) #tensor([ 0, 9, 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8, 17])
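If the exact (18, 1) shape from the question is needed, a trailing reshape finishes the job (a small addition to the answer above):

aa = a.permute(1, 2, 0).flatten().view(-1, 1)
print(aa.shape)  # torch.Size([18, 1])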
https://stackoverflow.com/questions/57978510/
OverflowError: Python int too large to convert to C long torchtext.datasets.text_classification.DATASETS['AG_NEWS']()
I have 64 bit windows 10 OS I have installed python 3.6.8 I have installed torch and torchtext using pip. torch version is 1.2.0 I am trying to load AG_NEWS dataset using below code: import torch import torchtext from torchtext.datasets import text_classification NGRAMS = 2 import os if not os.path.isdir('./.data'): os.mkdir('./.data') train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](root='./.data', ngrams=NGRAMS, vocab=None) On the last statement of above code, I am getting below error: --------------------------------------------------------------------------- OverflowError Traceback (most recent call last) <ipython-input-1-7e8544fdaaf6> in <module> 6 if not os.path.isdir('./.data'): 7 os.mkdir('./.data') ----> 8 train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](root='./.data', ngrams=NGRAMS, vocab=None) 9 # BATCH_SIZE = 16 10 # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in AG_NEWS(*args, **kwargs) 168 """ 169 --> 170 return _setup_datasets(*(("AG_NEWS",) + args), **kwargs) 171 172 c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in _setup_datasets(dataset_name, root, ngrams, vocab, include_unk) 126 if vocab is None: 127 logging.info('Building Vocab based on {}'.format(train_csv_path)) --> 128 vocab = build_vocab_from_iterator(_csv_iterator(train_csv_path, ngrams)) 129 else: 130 if not isinstance(vocab, Vocab): c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\vocab.py in build_vocab_from_iterator(iterator) 555 counter = Counter() 556 with tqdm(unit_scale=0, unit='lines') as t: --> 557 for tokens in iterator: 558 counter.update(tokens) 559 t.update(1) c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in _csv_iterator(data_path, ngrams, yield_cls) 33 with io.open(data_path, encoding="utf8") as f: 34 reader = unicode_csv_reader(f) ---> 35 for row in reader: 36 tokens = ' '.join(row[1:]) 37 tokens = tokenizer(tokens) c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\utils.py in unicode_csv_reader(unicode_csv_data, **kwargs) 128 maxInt = int(maxInt / 10) 129 --> 130 csv.field_size_limit(sys.maxsize) 131 132 if six.PY2: OverflowError: Python int too large to convert to C long I think the issue is with either windows os or torchtext because I am getting same error for below code as well. pos = data.TabularDataset( path='data/pos/pos_wsj_train.tsv', format='tsv', fields=[('text', data.Field()), ('labels', data.Field())]) Can somebody please help? and mainly I don't have any large numerical values in the file.
I also encountered a similar problem. I changed a line of code in my torchtext\utils.py file and my error disappeared. Changed this: csv.field_size_limit(sys.maxsize) To this: csv.field_size_limit(maxInt)
https://stackoverflow.com/questions/57988897/
How to use tensor.item() ? IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number
I'm pretty new to Siamese neural networks and recently found this example and Colab notebook. When running the code I get the following error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number on the line: result=torch.max(res,1)[1][0][0][0].data[0].tolist() I found something about tensor.item(), but I really just don't know how to use it here. EDIT: test_dataloader = DataLoader(test_dataset,num_workers=6,batch_size=1,shuffle=True) accuracy=0 counter=0 correct=0 for i, data in enumerate(test_dataloader,0): x0, x1 , label = data # one-shot applies in the output of 128 dense vectors which is then converted to 2 dense vectors output1,output2 = model(x0.to(device),x1.to(device)) res=torch.abs(output1.cuda() - output2.cuda()) label=label[0].tolist() label=int(label[0]) result=torch.max(res,1)[1][0][0][0].data.item().tolist() if label == result: correct=correct+1 counter=counter+1 # if counter ==20: # break accuracy=(correct/len(test_dataloader))*100 print("Accuracy:{}%".format(accuracy)) That's the code I get the error on.
What this error message says is that you're trying to index into an array which has just one item in it. For example, In [10]: aten = torch.tensor(2) In [11]: aten Out[11]: tensor(2) In [12]: aten[0] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-12-5c40f6ab046a> in <module> ----> 1 aten[0] IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number In the above case, aten is a tensor with just a single number in it. So, using an index (or more) to retrieve that number throws the IndexError. The correct way to extract the number(item) out of the tensor is to use tensor.item(), here aten.item() as in: In [14]: aten.item() Out[14]: 2
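Applied to the line from the question (a hedged sketch: since the test DataLoader uses batch_size=1, the argmax along dim 1 holds a single element, so item() can be called on it directly):

result = torch.max(res, 1)[1].item()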
https://stackoverflow.com/questions/57993899/
PyTorch: How to write a neural network that only returns the weights?
I'm training a neural network that learns some weights and based on those weights, I compute transformations that produce the predicted model in combination with the weights. My network doesn't learn properly and therefore I'm writing a different network that does nothing but returning the weights independent from the input x (after normalization with softmax and transpose). This way, I want to find out whether the problem lies in the network or in the transformation estimation outside the network. But this doesn't work. This is what I've got. class DoNothingNet(torch.nn.Module): def __init__(self, n_vertices=6890, n_joints=14): super(DoNothingNet, self).__init__() self.weights = nn.parameter.Parameter(torch.randn(n_vertices, n_joints)) def forward(self, x, indices): self.weights = F.softmax(self.weights, dim=1) return self.weights.transpose(0,1) But the line self.weights = F.softmax(self.weights, dim=1) doesn't work and produces the error TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weights' (torch.nn.Parameter or None expected). How do I fix this? And does the code even make sense?
nn.Module tracks all fields of type nn.Parameter for training. In your code, every forward call tries to replace the weights parameter by assigning a plain tensor to it (F.softmax returns a Tensor, not an nn.Parameter), so the error occurs. The following code outputs normalised weights without changing the stored ones. Hope this helps. import torch from torch import nn from torch.nn import functional as F class DoNothingNet(torch.nn.Module): def __init__(self, n_vertices=6890, n_joints=14): super(DoNothingNet, self).__init__() self.weights = nn.parameter.Parameter(torch.randn(n_vertices, n_joints)) def forward(self, x, indices): output = F.softmax(self.weights, dim=1) return output.transpose(0,1)
https://stackoverflow.com/questions/57995859/
How to exponentially decay a value along an axis?
I have a tensor of shape (sequence_length, batch_size, 1) (which is (5, 5, 1) in the following example). Every sequence has a single entry that I want to exponentially decay along the sequence (for example with a decay factor of 0.99) for n steps (here: n = 2). Example (given as an image in the original post). Is there an efficient way to realize something like that (either in PyTorch or numpy)? Edit (visualizing the additive nature, also given as an image):
This looks like convolution to me: import numpy as np from scipy.ndimage import convolve1d # just for printing: import numpy.lib.recfunctions as nlr def show(x): print(nlr.unstructured_to_structured(x)) a = "00000","00100","10000","03502","00010" A = np.array(a).view('U1').reshape(5,5,1).astype(float) show(A) # [[(0.,) (0.,) (0.,) (0.,) (0.,)] # [(0.,) (0.,) (1.,) (0.,) (0.,)] # [(1.,) (0.,) (0.,) (0.,) (0.,)] # [(0.,) (3.,) (5.,) (0.,) (2.,)] # [(0.,) (0.,) (0.,) (1.,) (0.,)]] show(convolve1d(A,np.r_[np.logspace(2,0,3,base=0.99),0,0],axis=0,mode="constant")) # [[(0.9801,) (0. ,) (0.99 ,) (0. ,) (0. ,)] # [(0.99 ,) (2.9403,) (5.9005,) (0. ,) (1.9602,)] # [(1. ,) (2.97 ,) (4.95 ,) (0.9801,) (1.98 ,)] # [(0. ,) (3. ,) (5. ,) (0.99 ,) (2. ,)] # [(0. ,) (0. ,) (0. ,) (1. ,) (0. ,)]]
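Since the question also asks about PyTorch, here is a hedged analogue of the same convolution using F.conv1d (which computes cross-correlation, hence the kernel layout below), with decay=0.99 and n=2 as in the example:

import torch
import torch.nn.functional as F

decay, n = 0.99, 2
x = torch.tensor(A, dtype=torch.float32)    # (seq, batch, 1) from above
# leading zeros so earlier entries do not contribute; then decay^0..decay^n
# pick up the current entry and the next n entries
kernel = torch.cat([torch.zeros(n), decay ** torch.arange(n + 1.0)]).view(1, 1, -1)
out = F.conv1d(x.permute(1, 2, 0), kernel, padding=n)  # convolve along seq axis
out = out.permute(2, 0, 1)                             # back to (seq, batch, 1)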
https://stackoverflow.com/questions/57998816/
Does PyTorch have a RandomState-like object for random number generation?
In numpy I can do import numpy as np rs = np.random.RandomState(seed=0) and then pass that object around, e.g. for dependency injection. Does PyTorch have a similar interface? I can't find anything in the docs, but maybe I'm missing something.
The closest thing would be torch.manual_seed, which sets the seed for generating random numbers and returns a torch.Generator. This thread here has more information, apparently there may be some inconsistencies depending on whether you are using GPU or a CPU.
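A short hedged illustration of passing a torch.Generator around, which is the closest analogue to injecting a numpy RandomState:

import torch

g = torch.Generator()
g.manual_seed(0)

# factory and sampling functions accept the generator explicitly
x = torch.randn(3, generator=g)
perm = torch.randperm(10, generator=g)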
https://stackoverflow.com/questions/58000160/
Images change color after unfolding using pytorch
I am new to pytorch, for one of my projects I am dividing a large image into smaller tiles/patches. I am using unfold to make this happen. My code is as follows data = training_set[1][0].data.unfold(1, 64, 64).unfold(2, 64, 64).unfold(3, 64, 64) After doing this I transpose the resultant matrix since the images are flipped, like this sample code torch.t(data [0][0][0][0]) but the resultant images lose color, or get discolored for some reason, and I am worried that this might affect any calculations I do based on these patches. The following is a screenshot of the problem The top is the patch and the bottom one is the complete picture Any help is appreciated, thanks
I think your dataset is probably fine. I had a similar issue in the past related to this. In this situation I have the feeling that the culprit is the matplotlib imshow() function. It would be helpful if you could share the complete code you have used to plot the matplotlib figure. You are most likely passing an RGBA image instead of a regular RGB one to the plt.imshow() function; thus the colors are off because you are also displaying an alpha value (A) on top of the regular red, green, blue (RGB) channels. If that is the case, I would suggest that you try to plot this: image = torch.t(data[0][0][0])
https://stackoverflow.com/questions/58001703/
I've installed PyTorch, but CUDA is not recognized
I installed PyTorch using this command: pip3 install torch===1.2.0 torchvision===0.4.0 -f https://download.pytorch.org/whl/torch_stable.html And it seemed to work. When importing torch, there isn't any error. My laptop has a GeForce 1060 Ti, which I assumed would work with CUDA. Here is the error in the IDE (Eclipse), shown as a screenshot in the original post:
You should include the code and error as text, not a screenshot! Remove PyTorch and install it again with these commands: conda install pytorch -c pytorch pip3 install torchvision After this, try checking it with this code: import torch import torchvision train_on_gpu = torch.cuda.is_available() if train_on_gpu: print('CUDA is available, Training on GPU ...') else: print('CUDA is not available! Training on CPU ...')
https://stackoverflow.com/questions/58002754/
PyTorch [1 if x > 0.5 else 0 for x in outputs ] with tensors
I have a list of outputs from a sigmoid function as a tensor in PyTorch, e.g. output (of shape torch.Size([4])): tensor([0.4481, 0.4014, 0.5820, 0.2877], device='cuda:0') As I'm doing binary classification I want to turn all values below 0.5 to 0 and above 0.5 to 1. Traditionally, with a NumPy array you can use a list comprehension: output_prediction = [1 if x > 0.5 else 0 for x in outputs] This would work; however, I have to later convert output_prediction back to a tensor to use torch.sum(output_prediction == labels.data), where labels.data is a binary tensor of labels. Is there a way to use list comprehensions with tensors?
prob = torch.tensor([0.3,0.4,0.6,0.7]) out = (prob>0.5).float() # tensor([0.,0.,1.,1.]) Explanation: In pytorch, you can directly use prob>0.5 to get a torch.bool type tensor. Then you can convert to float type via .float().
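To close the loop with the accuracy computation from the question (hedged: the conversion target depends on the dtype of labels):

preds = (outputs > 0.5).to(labels.dtype)   # match the label dtype
correct = torch.sum(preds == labels.data)  # count matching predictions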
https://stackoverflow.com/questions/58002836/
Pytorch differentiable conditional (index-based) sum
I have an idx array like [0, 1, 0, 2, 3, 1] and another 2d array data like the following: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17]] I want my output to be 4x3, in which 4 is max(idx) + 1 and 3 is the feature size (data.shape[1]), and each output row is the sum of the rows of data whose corresponding entry in idx matches that row index. The output in this example would be: [[6, 8, 10], [18, 20, 22], [9, 10, 11], [12, 13, 14]] I can do it by iterating over range(4), creating a mask on data and summing the masked rows up, but that's not differentiable (I suppose). Is there any function in PyTorch for this purpose, something like scatter()? Update: It seems I am looking for something named scatter sum, which is implemented in this repository.
You are looking for index_add_: import torch x = torch.tensor([[ 0., 1., 2.], [ 3., 4., 5.], [ 6., 7., 8.], [ 9., 10., 11.], [12., 13., 14.], [15., 16., 17.]], dtype=torch.float) idx = torch.tensor([0, 1, 0, 2, 3, 1], dtype=torch.long) # note the dtype here, must be "long" # init the sums to zero y = torch.zeros((idx.max()+1, x.shape[1]), dtype=x.dtype) # do the magic y.index_add_(0, idx, x) Gives the desired output tensor([[ 6., 8., 10.], [18., 20., 22.], [ 9., 10., 11.], [12., 13., 14.]])
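For completeness, a hedged equivalent using scatter_add_ (the "scatter sum" mentioned in the question's update); the index must first be expanded to the shape of the source, and both variants backpropagate gradients to x:

y2 = torch.zeros((idx.max() + 1, x.shape[1]), dtype=x.dtype)
y2.scatter_add_(0, idx.unsqueeze(1).expand(-1, x.shape[1]), x)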
https://stackoverflow.com/questions/58007127/
Attention Text Generation in Character-by-Character fashion
I have been searching the web for a couple of days for any text generation model that uses only attention mechanisms. The Transformer architecture that made waves in the context of seq-to-seq models is actually based solely on attention mechanisms, but it is mainly designed and used for translation or chatbot tasks, so it doesn't fit the purpose exactly, though the principle does. My question is: does anyone know of, or has anyone heard of, a text generation model based solely on attention, without any recurrence? Thanks a lot! P.S. I'm familiar with PyTorch.
Building a character-level self-attentive model is a challenging task. Character-level models are usually based on RNNs. Whereas in a word/subword model it is clear from the beginning what the units carrying meaning are (and therefore what units the attention mechanism can attend to), a character-level model needs to learn word meaning in the following layers. This makes it quite difficult for the model to learn. Text generation models are nothing more than conditional language models. Google AI recently published a paper on a Transformer character language model, but it is the only work I know of. Anyway, you should consider either using subword units (such as BPE or SentencePiece) or, if you really need to go character level, using RNNs instead.
https://stackoverflow.com/questions/58007391/
padding zeroes to a tensor on both dimensions
I have a tensor t 1 2 3 4 5 6 7 8 And I would like to make it 0 0 0 0 0 1 2 0 0 3 4 0 0 5 6 0 0 7 8 0 0 0 0 0 I tried stacking a new=torch.tensor([0., 0., 0., 0.]) tensor with it four times, but that did not work. t = torch.arange(8).reshape(1,4,2).float() print(t) new=torch.tensor([[0., 0., 0.,0.]]) print(new) r = torch.stack([t,new]) # invalid argument 0: Tensors must have same number of dimensions: got 4 and 3 new=torch.tensor([[[0., 0., 0.,0.]]]) print(new) r = torch.stack([t,new]) # invalid argument 0: Sizes of tensors must match except in dimension 0. I also tried cat; that did not work either.
It is probably better to initialize an array of the desired shape first, and then add the data at the appropriate indices. import torch t = torch.arange(8).reshape(1,4,2).float() x = torch.zeros((1, t.shape[1]+2, t.shape[2]+2)) x[:, 1:-1, 1:-1] = t print(x) On the other hand, if you just want to pad your tensor with zeroes (and not just add extra zeroes somewhere), you can use torch.nn.functional.pad: import torch t = torch.arange(8).reshape(1, 4, 2).float() x = torch.nn.functional.pad(t, (1, 1, 1, 1)) print(x)
https://stackoverflow.com/questions/58007525/
How to add to pytorch tensor at indices?
I have to admit, I'm a bit confused by the scatter* and index* operations - I'm not sure any of them do exactly what I'm looking for, which is very simple: Given some 2-D tensor z = tensor([[1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]]) And a list (or tensor?) of 2-d indexes: inds = tensor([[0, 0], [1, 1], [1, 2]]) I want to add a scalar to z at those indexes (and do it efficiently): znew = z.something_add(inds, 3) -> znew = tensor([[4., 1., 1., 1.], [1., 4., 4., 1.], [1., 1., 1., 1.]]) If I have to I can make that scalar a tensor of whatever shape (where all elements = 3), but I'd rather not...
You must provide two lists to your indexing. The first having the row positions and the second the column positions. In your example, it would be: z[[0, 1, 1], [0, 1, 2]] += 3 torch.Tensor indexing follows Numpy. See https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing for more details.
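One hedged caveat worth adding: if inds may contain duplicate index pairs that should each contribute, plain += does not accumulate reliably; index_put_ with accumulate=True does:

z.index_put_((inds[:, 0], inds[:, 1]), torch.tensor(3.), accumulate=True)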
https://stackoverflow.com/questions/58012586/
Truncate SVD decomposition of Pytorch tensor without transfering to cpu
I'm training a model in PyTorch and I want to use a truncated SVD decomposition of the input. To calculate the SVD, I transfer the input, which is a PyTorch CUDA tensor, to the CPU, perform the truncation using TruncatedSVD from scikit-learn, and after that transfer the result back to the GPU. The following is the code for my model: class ImgEmb(nn.Module): def __init__(self, input_size, hidden_size): super(ImgEmb, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.drop = nn.Dropout(0.2) self.mlp = nn.Linear(input_size // 2, hidden_size) self.relu = nn.Tanh() self.svd = TruncatedSVD(n_components=input_size // 2) def forward(self, input): svd=self.svd.fit_transform(input.cpu()) svd_tensor=torch.from_numpy(svd) svd_tensor=svd_tensor.cuda() mlp=self.mlp(svd_tensor) res = self.relu(mlp) return res I wonder, is there a way to implement truncated SVD without transferring back and forth between CPU and GPU? (Because it's very time consuming and not efficient at all.)
You could directly use PyTorch's SVD and truncate it manually, or you can use the truncated SVD from TensorLy, with the PyTorch backend: import tensorly as tl tl.set_backend('pytorch') U, S, V = tl.truncated_svd(matrix, n_eigenvecs=10) However, the GPU SVD does not scale very well on large matrices. You can also use TensorLy's partial svd which will still copy your input to CPU but will be much faster if you keep only a few eigenvalues as it will use a sparse eigendecomposition. In Scikit-learn's truncated SVD, you can also use 'algorithm = arpack' to use Scipy's sparse SVD which again might be faster if you only need a few components.
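A hedged sketch of the manual PyTorch route mentioned first, staying on the GPU throughout (k is assumed to be the number of components to keep):

U, S, V = torch.svd(input)     # full SVD directly on the CUDA tensor
truncated = input @ V[:, :k]   # projection onto the top-k right singular vectors,
                               # matching TruncatedSVD.fit_transform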
https://stackoverflow.com/questions/58026949/
sklearn pipeline with multiple inputs/outputs
How can I build a sklearn pipeline to do the following? What I have: A, B = getAB(X_train) X_train = transform(X_train) model(A, B, X_train) What I want: pipe = Pipeline([ ('ab', getAB), ('tranf', transform), ('net', net) ]) pipe.fit(X_train, y_train) Please help!
Yes, it is doable by writing a custom transformer that has fit/transform methods. This can be your class: from sklearn.base import BaseEstimator, TransformerMixin class getABTransformer(BaseEstimator, TransformerMixin): def __init__(self): # no *args or **kwargs pass def fit(self, X, y=None): return self # nothing else to do def transform(self, X, y=None): return getAB(X) Then you can create your ColumnTransformer as follows: from sklearn.compose import ColumnTransformer clm_pipe = ColumnTransformer([ ('ab', getABTransformer(), np.arange(0, len(X_train))), # list of column indices ('tranf', transform, np.arange(0, len(X_train))), # list of column indices ]) and a final pipeline with the model: pipe = Pipeline([ ('clm_pipe', clm_pipe), ('net', net) ]) You can read more about ColumnTransformer.
https://stackoverflow.com/questions/58028064/
Pytorch's stack() adds dimension?
I am looking to stack one shape=(1,2) tensor array on top of another shape=(1,2) array, across dim=1, using pytorch's stack() method. >>> import numpy as np >>> import torch >>> np_a = np.array([[1,2]]) >>> np_b = np.array([[3,4]]) >>> print(np_a) [[1 2]] >>> print(np_b) [[3 4]] >>> t_a = torch.from_numpy(np_a) >>> t_b = torch.from_numpy(np_b) >>> print(t_a) tensor([[1, 2]]) >>> print(t_b) tensor([[3, 4]]) >>> t_stacked = torch.stack((t_a, t_b), dim=1) >>> print(t_stacked) tensor([[[1, 2], [3, 4]]]) The resulting tensor has an added dimension and now has a shape=(1,2,2). Why doesn't pytorch's stack() behave like numpy's vstack()? See below: >>> import numpy as np >>> np_a = np.array([[1,2]]) >>> np_b = np.array([[3,4]]) >>> stacked = np.vstack((np_a, np_b)) >>> print(stacked) [[1 2] [3 4]] How do I make pytorch not add a dimension?
I can't tell you why it was decided to have pytorch's stack behave differently from numpy's vstack (maybe compatibility with luatorch?); note that numpy's own np.stack adds a new dimension in exactly the same way, so the true counterpart of np.vstack is concatenation. Anyway, to get the desired outcome you can use torch.cat: >>> torch.cat((t_a, t_b), dim=0) tensor([[1, 2], [3, 4]])
https://stackoverflow.com/questions/58041403/
Training a model with multiple learning rate in PyTorch
I am new to PyTorch and getting used to some concepts. I need to train a Neural Network. For optimization, I need to use Adam optimizer with 4 different learning rates = [2e-5, 3e-5, 4e-5, 5e-5] The optimizer function is defined as below def optimizer(no_decay = ['bias', 'gamma', 'beta'], lr=2e-5): param_optimizer = list(model.named_parameters()) optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] # This variable contains all of the hyperparemeter information our training loop needs optimizer = BertAdam(optimizer_grouped_parameters, lr, warmup=.1) return optimizer How do I make sure the optimizer uses my specified set of learning rate and returns the best model? During training, we use the optimizer like below where I don't see a way to tell it to try different learning rate def model_train(): #other code # clear out the gradient optimizer.zero_grad() # Forward pass loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) train_loss_set.append(loss.item()) # Backward pass loss.backward() # Update parameters and take a step using the computed gradient optimizer.step() I know that the optimizer.step() internally steps through to optimize the gradient. But how do I make sure the optimizer tries my specified set of learning rate and returns the best model to me? Please suggest.
If you want to train four times with four different learning rates and then compare, you need not only four optimizers but also four models: using a different learning rate (or any other meta-parameter, for that matter) yields a different trajectory of the weights in the high-dimensional "parameter space". That is, after a few steps it's not only the learning rate that differentiates between the models, but the trained weights themselves; this is what yields the actual difference between the models. Therefore, you need to train 4 times using 4 separate model instances with 4 optimizer instances using the different learning rates.
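A hedged outline of that procedure; build_model and train_and_evaluate are assumed helper functions, not part of the question's code:

best_model, best_val_loss = None, float('inf')
for lr in [2e-5, 3e-5, 4e-5, 5e-5]:
    model = build_model()                      # fresh weights for every run
    opt = optimizer(lr=lr)                     # the question's helper; note it
                                               # reads the global `model`
    val_loss = train_and_evaluate(model, opt)  # assumed to return val loss
    if val_loss < best_val_loss:
        best_val_loss, best_model = val_loss, model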
https://stackoverflow.com/questions/58044133/
Pytorch simple model not improving
I am making a simple PyTorch neural net to approximate the sine function on x = [0, 2pi]. This is a simple architecture I use with different deep learning libraries to test whether I understand how to use it or not. The neural net, when untrained, always produces a straight horizontal line, and when trained, produces a straight line at y = 0. In general, it always produces a straight line at y = (The mean of the function). This leads me to believe something is wrong with the forward prop portion of it, as the boundary should not just be a straight line when untrained. Here is the code for the net: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.model = nn.Sequential( nn.Linear(1, 20), nn.Sigmoid(), nn.Linear(20, 50), nn.Sigmoid(), nn.Linear(50, 50), nn.Sigmoid(), nn.Linear(50, 1) ) def forward(self, x): x = self.model(x) return x Here is the training loop def train(net, trainloader, valloader, learningrate, n_epochs): net = net.train() loss = nn.MSELoss() optimizer = torch.optim.SGD(net.parameters(), lr = learningrate) for epoch in range(n_epochs): for X, y in trainloader: X = X.reshape(-1, 1) y = y.view(-1, 1) optimizer.zero_grad() outputs = net(X) error = loss(outputs, y) error.backward() #net.parameters() net.parameters() * learningrate optimizer.step() total_loss = 0 for X, y in valloader: X = X.reshape(-1, 1).float() y = y.view(-1, 1) outputs = net(X) error = loss(outputs, y) total_loss += error.data print('Val loss for epoch', epoch, 'is', total_loss / len(valloader) ) it is called as: net = Net() losslist = train(net, trainloader, valloader, .0001, n_epochs = 4) Where trainloader and valloader are the training and validation loaders. Can anyone help me see what's wrong with this? I know its not the learning rate since its the one I use in other frameworks, and I know its not the fact im using SGD or sigmoid activation functions, although I have a suspicion the error is in the activation functions somewhere. Does anyone know how to fix this? Thanks.
After a while playing with some hyperparameters, modifying the net and changing the optimizer (following this excellent recipe) I ended up with changing the line optimizer = torch.optim.SGD(net.parameters(), lr = learningrate) to optimizer = torch.optim.Adam(net.parameters()) (the default optimizer parameters was used), running for 100 epochs and batch size equal to 1. The following code was used (tested on CPU only): import torch import torch.nn as nn from torch.utils import data import numpy as np import matplotlib.pyplot as plt # for reproducibility torch.manual_seed(0) np.random.seed(0) class Dataset(data.Dataset): def __init__(self, init, end, n): self.n = n self.x = np.random.rand(self.n, 1) * (end - init) + init self.y = np.sin(self.x) def __len__(self): return self.n def __getitem__(self, idx): x = self.x[idx, np.newaxis] y = self.y[idx, np.newaxis] return torch.Tensor(x), torch.Tensor(y) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.model = nn.Sequential( nn.Linear(1, 20), nn.Sigmoid(), nn.Linear(20, 50), nn.Sigmoid(), nn.Linear(50, 50), nn.Sigmoid(), nn.Linear(50, 1) ) def forward(self, x): x = self.model(x) return x def train(net, trainloader, valloader, n_epochs): loss = nn.MSELoss() # Switch the two following lines and run the code # optimizer = torch.optim.SGD(net.parameters(), lr = 0.0001) optimizer = torch.optim.Adam(net.parameters()) for epoch in range(n_epochs): net.train() for x, y in trainloader: optimizer.zero_grad() outputs = net(x).view(-1) error = loss(outputs, y) error.backward() optimizer.step() net.eval() total_loss = 0 for x, y in valloader: outputs = net(x) error = loss(outputs, y) total_loss += error.data print('Val loss for epoch', epoch, 'is', total_loss / len(valloader) ) net.eval() f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) def plot_result(ax, dataloader): out, xx, yy = [], [], [] for x, y in dataloader: out.append(net(x)) xx.append(x) yy.append(y) out = torch.cat(out, dim=0).detach().numpy().reshape(-1) xx = torch.cat(xx, dim=0).numpy().reshape(-1) yy = torch.cat(yy, dim=0).numpy().reshape(-1) ax.scatter(xx, yy, facecolor='green') ax.scatter(xx, out, facecolor='red') xx = np.linspace(0.0, 3.14159*2, 1000) ax.plot(xx, np.sin(xx), color='green') plot_result(ax1, trainloader) plot_result(ax2, valloader) plt.show() train_dataset = Dataset(0.0, 3.14159*2, 100) val_dataset = Dataset(0.0, 3.14159*2, 30) params = {'batch_size': 1, 'shuffle': True, 'num_workers': 4} trainloader = data.DataLoader(train_dataset, **params) valloader = data.DataLoader(val_dataset, **params) net = Net() losslist = train(net, trainloader, valloader, n_epochs = 100) Result with Adam optimizer: Result with SGD optimizer:
https://stackoverflow.com/questions/58044728/
How to have incrementing batch size in pytorch
In PyTorch, DataLoader will split a dataset into batches of a set size, with additional options such as shuffling, which one can then loop over. But if I need the batch size to increment, such as the first 10 batches of size 50, the next 5 batches of size 100, and so on, what's the best way of doing so? I tried splitting the tensor and then concatenating them: #10x50 + 5*100 originalTensor = torch.randn(1000, 80) split1=torch.split(originalTensor, 500, dim=0) split2=torch.split(list(split1)[0], 100, dim=0) Thereafter, is there a way to pass the concatenated tensor into a DataLoader, or any other way to directly turn the concatenated tensor into a generator (which might lose shuffling and other functionality)?
I think you can do that by simply providing a non-default batch_sampler to your DataLoader. For instance: class VaryingSizeBatchSampler(Sampler): r"""Wraps another sampler to yield a varying-size mini-batch of indices. Args: sampler (Sampler): Base sampler. batch_size_fn (function): Size of current mini-batch. drop_last (bool): If ``True``, the sampler will drop the last batch if its size would be less than ``batch_size`` """ def __init__(self, sampler, batch_size_fn, drop_last): if not isinstance(sampler, Sampler): raise ValueError("sampler should be an instance of " "torch.utils.data.Sampler, but got sampler={}" .format(sampler)) self.sampler = sampler self.batch_size_fn = batch_size_fn self.drop_last = drop_last self.batch_counter = 0 def __iter__(self): batch = [] cur_batch_size = self.batch_size_fn(self.batch_counter) # get current batch size for idx in self.sampler: batch.append(idx) if len(batch) == cur_batch_size: yield batch self.batch_counter += 1 cur_batch_size = self.batch_size_fn(self.batch_counter) # get current batch size batch = [] if len(batch) > 0 and not self.drop_last: yield batch def __len__(self): raise NotImplementedError('You need to implement it yourself!')
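A usage sketch; the dataset and the size schedule here are illustrative assumptions, and Sampler is imported because the class above references it. Also note that batch_counter as written is never reset, so the schedule carries over from one epoch to the next:

from torch.utils.data import DataLoader, Sampler, SequentialSampler

# first 10 batches of size 50, then batches of size 100, as in the question
def batch_size_fn(batch_idx):
    return 50 if batch_idx < 10 else 100

sampler = SequentialSampler(dataset)  # or RandomSampler(dataset) to shuffle
batch_sampler = VaryingSizeBatchSampler(sampler, batch_size_fn, drop_last=False)
loader = DataLoader(dataset, batch_sampler=batch_sampler)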
https://stackoverflow.com/questions/58046178/
why does the Accuracy decrease when using a ReLu activation after Linear layers
So I started using PyTorch and I'm building a very basic CNN on the FashionMNIST dataset. I noticed some weird behaviour while using the NN and I don't know why this is happening: in the forward function, the accuracy of the NN decreases when I use a ReLU function after every linear layer. Here is the code for my custom NN: # custom class neural network class FashionMnistClassifier(nn.Module): def __init__(self, n_inputs, n_out): super().__init__() self.cnn1 = nn.Conv2d(n_inputs, out_channels=32, kernel_size=5).cuda(device) self.cnn2 = nn.Conv2d(32, out_channels=64, kernel_size=5).cuda(device) #self.cnn3 = nn.Conv2d(n_inputs, out_channels=32, kernel_size=5) self.fc1 = nn.Linear(64*4*4, out_features=100).cuda(device) self.fc2 = nn.Linear(100, out_features=n_out).cuda(device) self.relu = nn.ReLU().cuda(device) self.pool = nn.MaxPool2d(kernel_size=2).cuda(device) self.soft_max = nn.Softmax().cuda(device) def forward(self, x): x.cuda(device) out = self.relu(self.cnn1(x)) out = self.pool(out) out = self.relu(self.cnn2(out)) out = self.pool(out) #print("out shape in classifier forward func: ", out.shape) out = self.fc1(out.view(out.size(0), -1)) #out = self.relu(out) # if I uncomment these then the accuracy decreases from 90 to 50!!! out = self.fc2(out) #out = self.relu(out) # this too return out n_batch = 100 n_outputs = 10 LR = 0.001 model = FashionMnistClassifier(1, 10).cuda(device) optimizer = optim.Adam(model.parameters(), lr=LR) criterion = nn.CrossEntropyLoss() So if I use the ReLU only after the CNN layers, I get an accuracy of 90%, but when I uncomment that section and use the ReLU activation after the linear layers, the accuracy decreases to 50%. I have no idea why this is happening, since I thought it is always better to use an activation after every linear layer to get better accuracy for classification. I always thought that we should use the activation function if we have a classification problem, and that for linear regression we don't have to, but here in my case, although it is a classification problem, I get better performance if I don't use the activation function after the linear layers. Can someone maybe clarify this for me?
CrossEntropyLoss requires you to pass in unnormalized logits (the output of the last Linear layer). If you use ReLU as the output of the last layer, you are only outputting values in the range [0, inf), while a neural network tends to go with small values for incorrect labels and high ones for the correct labels (we may say it's overconfident in its predictions). Oh, and the one with the highest logit value is chosen by argmax as the correct label. So it definitely won't work with this line: # out = self.relu(out) # this too Though it should work with the ReLU before it. Just remember, more nonlinearity isn't always good for the network.
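A small numeric sketch of the effect (the logit values are made up):

import torch
import torch.nn.functional as F

logits = torch.tensor([[-3.2, 0.5, 7.1]])  # raw output of the last Linear layer
target = torch.tensor([2])

# CrossEntropyLoss = log_softmax + NLLLoss, applied to the unnormalized logits
print(F.cross_entropy(logits, target))          # small loss
# ReLU clips the negative logit to 0, so wrong classes can no longer be pushed
# below zero and the predicted distribution flattens
print(F.cross_entropy(F.relu(logits), target))  # larger loss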
https://stackoverflow.com/questions/58050105/
How to use tensorboard debugger with pytorch?
I know PyTorch has supported TensorBoard since version 1.1. But I am wondering: is it possible for us to use the debugger plugin for TensorBoard with PyTorch? I didn't find any information about this. If PyTorch could also support the TensorBoard debugger, it would be extremely convenient and could save us a lot of time.
No, it is not, and I doubt it ever will be possible. The plugin heavily relies on how TensorFlow internally represents graph nodes, which is quite different in PyTorch.
https://stackoverflow.com/questions/58054247/
Extracting reduced dimension data from autoencoder in pytorch
I have defined my autoencoder in PyTorch as follows: self.encoder = nn.Sequential( nn.Conv2d(input_shape[0], 32, kernel_size=1, stride=1), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=1, stride=1), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=1, stride=1), nn.ReLU() ) self.decoder = nn.Sequential( nn.Conv2d(64, 64, kernel_size=1, stride=1), nn.ReLU(), nn.Conv2d(64, 32, kernel_size=1, stride=1), nn.ReLU(), nn.Conv2d(32, input_shape[0], kernel_size=1, stride=1), nn.ReLU(), nn.Sigmoid() ) I need to get a reduced-dimension encoding, which requires creating a new linear layer of a dimension N much lower than the image dimension so that I can extract the activations. If anybody can help me with fitting a linear layer in the decoder part I would appreciate it (I know how to Flatten() the data, but I guess I need to "unflatten" it again to interface with the Conv2d layer). Update: I have come up with the following based on the first answer (it gives me an 8-dimensional bottleneck at the output of the encoder which works fine torch.Size([1, 8, 1, 1]) ). self.encoder = nn.Sequential( nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(64, 8, kernel_size=3, stride=1), nn.ReLU(), nn.MaxPool2d(7, stride=1) ) self.decoder = nn.Sequential( nn.ConvTranspose2d(8, 64, kernel_size=3, stride=1), nn.ReLU(), nn.Conv2d(64, 32, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(32, input_shape[0], kernel_size=8, stride=4), nn.ReLU(), nn.Sigmoid() ) What I cannot do is train the autoencoder with def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x The decoder gives me an error: Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size I would like to thank the person who provided the first answer.
In the decoder part, you need to upsample to a larger size, which can be done via nn.ConvTranspose2d. I notice that in your encoder part, it seems you didn't downsample your feature maps, because your stride is always 1. Here is a toy example. self.encoder = nn.Sequential( nn.Conv2d(32, 16, 3, stride=1, padding=1), # b, 16, 32, 32 nn.ReLU(True), nn.MaxPool2d(2, stride=2), # b, 16, 16, 16 nn.Conv2d(16, 32, 3, stride=1, padding=1), # b, 32, 16, 16 nn.ReLU(True), nn.MaxPool2d(2, stride=2) # b, 32, 8, 8 ) self.decoder = nn.Sequential( nn.ConvTranspose2d(32, 16, 3, stride=2,padding=1,output_padding=1), # b, 16, 16, 16 nn.ReLU(True), nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), # b, 1, 32, 32 nn.Sigmoid() )
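A quick shape check of the toy example, assuming the two Sequential blocks above are bound to the names encoder and decoder and that the input has 32 channels to match the first Conv2d:

import torch

x = torch.randn(1, 32, 32, 32)  # (batch, channels, height, width)
code = encoder(x)
print(code.shape)               # torch.Size([1, 32, 8, 8])
recon = decoder(code)
print(recon.shape)              # torch.Size([1, 1, 32, 32])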
https://stackoverflow.com/questions/58054328/
How to generate new tensor by given indexes and tensor in pytorch?
I have a tensor like this: x = torch.tensor([[3, 4, 2], [0, 1, 5]]) and I have indices like this: ind = torch.tensor([[1, 1, 0], [0, 0, 1]]) then I want to generate a new tensor from x and ind: z = torch.tensor([[0, 1, 2], [3, 4, 5]]) I implemented it with plain Python like this: # -*- coding: utf-8 -*- import torch x = torch.tensor([[3, 4, 2], [0, 1, 5]]) ind = torch.tensor([[1, 1, 0], [0, 0, 1]]) z = torch.zeros_like(x) for i in range(x.shape[0]): for j in range(x.shape[1]): z[i, j] = x[ind[i][j]][j] print(z) I want to know how to solve this with PyTorch.
You are looking for torch.gather In [1]: import torch In [2]: x = torch.tensor([[3, 4, 2], [0, 1, 5]]) In [3]: ind = torch.tensor([[1, 1, 0], [0, 0, 1]]) In [4]: torch.gather(x, 0, ind) Out[4]: tensor([[0, 1, 2], [3, 4, 5]])
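For dim=0, gather computes out[i][j] = x[ind[i][j]][j], which is exactly the double loop in the question; a quick check:

z = torch.gather(x, 0, ind)
assert torch.equal(z, torch.tensor([[0, 1, 2], [3, 4, 5]]))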
https://stackoverflow.com/questions/58055912/
Pytorch: ValueError: Expected input batch_size (32) to match target batch_size (64)
Tried to run the CNN examples on MNIST dataset, batch size=64, channel =1, n_h=28, n_w=28, n_iters = 1000. The program runs for first 500 interation and then gives the above mentioned error. There are same topics already being discussed on the forum such as : topic 1 and topic 2, but none of them could help me identify the mistake in the following code: class CNN_MNIST(nn.Module): def __init__(self): super(CNN_MNIST,self).__init__() # convolution layer 1 self.cnn1 = nn.Conv2d(in_channels=1, out_channels= 32, kernel_size=5, stride=1,padding=2) # ReLU activation self.relu1 = nn.ReLU() # maxpool 1 self.maxpool1 = nn.MaxPool2d(kernel_size=2,stride=2) # convolution 2 self.cnn2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, stride=1,padding=2) # ReLU activation self.relu2 = nn.ReLU() # maxpool 2 self.maxpool2 = nn.MaxPool2d(kernel_size=2,stride=2) # fully connected 1 self.fc1 = nn.Linear(7*7*64,1000) # fully connected 2 self.fc2 = nn.Linear(1000,10) def forward(self,x): # convolution 1 out = self.cnn1(x) # activation function out = self.relu1(out) # maxpool 1 out = self.maxpool1(out) # convolution 2 out = self.cnn2(out) # activation function out = self.relu2(out) # maxpool 2 out = self.maxpool2(out) # flatten the output out = out.view(out.size(0),-1) # fully connected layers out = self.fc1(out) out = self.fc2(out) return out # model trainning count = 0 loss_list = [] iteration_list = [] accuracy_list = [] for epoch in range(int(n_epochs)): for i, (image,labels) in enumerate(train_loader): train = Variable(image) labels = Variable(labels) # clear gradient optimizer.zero_grad() # forward propagation output = cnn_model(train) # calculate softmax and cross entropy loss loss = error(output,label) # calculate gradients loss.backward() # update the optimizer optimizer.step() count += 1 if count % 50 ==0: # calculate the accuracy correct = 0 total = 0 # iterate through the test data for image, labels in test_loader: test = Variable(image) # forward propagation output = cnn_model(test) # get prediction predict = torch.max(output.data,1)[1] # total number of labels total += len(labels) # correct prediction correct += (predict==labels).sum() # accuracy accuracy = 100*correct/float(total) # store loss, number of iteration, and accuracy loss_list.append(loss.data) iteration_list.append(count) accuracy_list.append(accuracy) # print loss and accurcay as the algorithm progresses if count % 500 ==0: print('Iteration :{} Loss :{} Accuracy : {}'.format(count,loss.item(),accuracy)) The error is as follows: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-9e93a242961b> in <module> 18 19 # calculate softmax and cross entropy loss ---> 20 loss = error(output,label) 21 22 # calculate gradients ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) ~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target) 914 def forward(self, input, target): 915 return F.cross_entropy(input, target, weight=self.weight, --> 916 ignore_index=self.ignore_index, reduction=self.reduction) 917 918 ~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 1993 if size_average is not None or reduce is 
not None: 1994 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 1995 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 1996 1997 ~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 1820 if input.size(0) != target.size(0): 1821 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' -> 1822 .format(input.size(0), target.size(0))) 1823 if dim == 2: 1824 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) ValueError: Expected input batch_size (32) to match target batch_size (64).
You are providing the wrong target to your loss: loss = error(output, label) While your loader gives you for i, (image,labels) in enumerate(train_loader): train = Variable(image) labels = Variable(labels) So you have a variable name labels (with s) from the loader, yet you feed label (no s) to your loss. Batch size is the least of your worries.
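The one-word fix, for completeness:

# use the loop variable that the loader actually yields
loss = error(output, labels)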
https://stackoverflow.com/questions/58059221/
Pytorch autograd.grad how to write the parameters for multiple outputs?
In the documentation of torch.autograd.grad, it is stated that, for parameters, parameters: outputs (sequence of Tensor) – outputs of the differentiated function. inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be returned (and not accumulated into .grad). I try the following: a = torch.rand(2, requires_grad=True) b = torch.rand(2, requires_grad=True) c = a+b d = a-b torch.autograd.grad([c, d], [a, b]) #ValueError: only one element tensors can be converted to Python scalars torch.autograd.grad(torch.tensor([c, d]), torch.tensor([a, b])) #RuntimeError: grad can be implicitly created only for scalar outputs I would like to get gradients of a list of tensors w.r.t another list of tensors. What is the correct way to feed the parameters?
As the torch.autograd.grad mentioned, torch.autograd.grad computes and returns the sum of gradients of outputs w.r.t. the inputs. Since your c and d are not scalar values, grad_outputs are required. import torch a = torch.rand(2,requires_grad=True) b = torch.rand(2, requires_grad=True) a # tensor([0.2308, 0.2388], requires_grad=True) b # tensor([0.6314, 0.7867], requires_grad=True) c = a*a + b*b d = 2*a+4*b torch.autograd.grad([c,d], inputs=[a,b], grad_outputs=[torch.Tensor([1.,1.]), torch.Tensor([1.,1.])]) # (tensor([2.4616, 2.4776]), tensor([5.2628, 5.5734])) Explanation: dc/da = 2*a = [0.2308*2, 0.2388*2] dd/da = [2.,2.] So the first output is dc/da*grad_outputs[0]+dd/da*grad_outputs[1] = [2.4616, 2.4776]. Same calculation for the second output. If you just want to get the gradient of c and d w.r.t. the inputs, probably you can do this: a = torch.rand(2,requires_grad=True) b = torch.rand(2, requires_grad=True) a # tensor([0.9566, 0.6066], requires_grad=True) b # tensor([0.5248, 0.4833], requires_grad=True) c = a*a + b*b d = 2*a+4*b [torch.autograd.grad(t, inputs=[a,b], grad_outputs=[torch.Tensor([1.,1.])]) for t in [c,d]] # [(tensor([1.9133, 1.2132]), tensor([1.0496, 0.9666])), # (tensor([2., 2.]), tensor([4., 4.]))]
https://stackoverflow.com/questions/58059268/
Pytorch GPU memory increase after load operation
I have a PyTorch model which is of size 386MB, but when I load the model state = torch.load(f, flair.device) my GPU memory usage goes up to 900MB. Why does this happen, and is there a way to resolve this? This is how I save the model: model_state = self._get_state_dict() # additional fields for model checkpointing model_state["optimizer_state_dict"] = optimizer_state model_state["scheduler_state_dict"] = scheduler_state model_state["epoch"] = epoch model_state["loss"] = loss torch.save(model_state, str(model_file), pickle_protocol=4)
It's probably the optimizer_state that takes the extra space. Some optimizers (e.g., Adam) track statistics of each trainable parameter, such as first and second order moments. This information takes up space, as you can tell. You can load to CPU first: state = torch.load(f, map_location=torch.device('cpu'))
https://stackoverflow.com/questions/58060885/
Compute cross entropy loss for classification in pytorch
I am trying to build two neural networks for classification. One for binary and the second for multi-class classification. I am trying to use torch.nn.CrossEntropyLoss() as a loss function, but when I try to train my first neural network I get the following error: multi-target not supported at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THNN/generic/ClassNLLCriterion.c:22 From my analysis, I found that my dataset has two problems that caused the error. My data set is one-hot encoded. I used one-hot encoding to preprocess my dataset. The first target Y_binary variable has the shape torch.Size([125973, 1]), full of 0s and 1s indicating classes 'No' and 'Yes'. Does my data have the wrong dimensions? I found that I can't use a simple vector with the cross entropy loss function. Some people used the following code to reshape their target vector before feeding it to the loss function. out = out.permute(0, 2, 3, 1).contiguous().view(-1, class_number) But I didn't really understand the reasoning behind this code, and it seems to me that I need to keep track of the following variables: Class_Number, Batch_size, Dimension_Output. For my code, here are the dimensions: X_train.shape: (125973, 122) Y_train2.shape: (125973, 1) batch_size = 64 K = len(set(Y_train2)) # Binary classification For multi class classification use K = len(set(Y_train5)) Should the target value be one-hot encoded? If not, how can I feed a nominal feature to the loss function? If I need to reshape the output, can you help me do this for my code? I am trying to use this loss function for both my neural networks. Thank you in advance,
The error is due to the usage of torch.nn.CrossEntropyLoss(), which can be used if you want to predict 1 class out of N classes. For multiclass classification, you should use torch.nn.BCEWithLogitsLoss(), which combines a Sigmoid layer and the BCELoss in one single class. In the case of multi-class, if you use Sigmoid + BCELoss, then you need the target to be one-hot encoded, i.e. something like this per sample: [0 1 0 0 0 1 0 0 1 0], where a 1 will be at the locations of the classes present.
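A minimal sketch of the suggested setup (the shapes and values here are illustrative):

import torch

criterion = torch.nn.BCEWithLogitsLoss()
logits = torch.randn(4, 10)              # raw model outputs, no Sigmoid applied
targets = torch.empty(4, 10).random_(2)  # float one-hot / multi-hot targets
loss = criterion(logits, targets)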
https://stackoverflow.com/questions/58063826/
Upsampling an autoencoder in pytorch
I have defined my autoencoder in pytorch as following (it gives me a 8-dimensional bottleneck at the output of the encoder which works fine torch.Size([1, 8, 1, 1])): self.encoder = nn.Sequential( nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(64, 8, kernel_size=3, stride=1), nn.ReLU(), nn.MaxPool2d(7, stride=1) ) self.decoder = nn.Sequential( nn.ConvTranspose2d(8, 64, kernel_size=3, stride=1), nn.ReLU(), nn.Conv2d(64, 32, kernel_size=4, stride=2), nn.ReLU(), nn.Conv2d(32, input_shape[0], kernel_size=8, stride=4), nn.ReLU(), nn.Sigmoid() ) What I cannot do is train the autoencoder with def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x The decoder gives me an error that the decoder cannot upsample the tensor: Calculated padded input size per channel: (3 x 3). Kernel size: (4 x 4). Kernel size can't be greater than actual input size
You are not upsampling enough via ConvTranspose2d, shape of your encoder is only 1 pixel (width x height), see this example: import torch layer = torch.nn.ConvTranspose2d(8, 64, kernel_size=3, stride=1) print(layer(torch.randn(64, 8, 1, 1)).shape) This prints your exact (3,3) shape after upsampling. You can: Make the kernel smaller - instead of 4 in first Conv2d in decoder use 3 or 2 or even 1 Upsample more, for example: torch.nn.ConvTranspose2d(8, 64, kernel_size=7, stride=2) would give you 7x7 What I would do personally: downsample less in encoder, so output shape after it is at least 4x4 or maybe 5x5. If you squash your image so much there is no way to encode enough information into one pixel, and even if the code passes the network won't learn any useful representation.
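For reference, the spatial output size of ConvTranspose2d (with dilation 1) is (in - 1)*stride - 2*padding + kernel_size + output_padding, which confirms the second suggestion:

import torch

layer = torch.nn.ConvTranspose2d(8, 64, kernel_size=7, stride=2)
print(layer(torch.randn(1, 8, 1, 1)).shape)  # torch.Size([1, 64, 7, 7])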
https://stackoverflow.com/questions/58069452/
Concept of mini batch in deep generative model using pyro
I am new to probabilistic programming and ML. I am following the deep Markov model code given on Pyro's website. The link to the GitHub page for that code is: https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm/dmm.py I understand most of the code. The part I don't understand is the mini-batch idea they are using from line 175. Question 1: Could someone explain what they are doing there when they use mini-batches? In the Pyro documentation they say 'mini_batch is a three dimensional tensor, with the first dimension being the batch dimension, the second dimension being the temporal dimension, and the final dimension being the features (88-dimensional in our case)'. Question 2: What does temporal dimension mean here? I want to use this code on my dataset, which is sequential data. I have done one-hot encoding of my data such that its dimension is (10000,500,20), where 10000 is the number of examples/sequences, 500 is the length of each of these sequences and 20 is the number of features. Question 3: How can I use my one-hot encoded data as mini-batches here? I'm sorry if it is a really basic question, but insights will be appreciated. The link to that documentation is: https://pyro.ai/examples/dmm.html
Question 1: Could someone explain what are they doing there when they are using mini-batch? To optimize most of the deep learning models, we use mini-batch gradient descent. Here, A mini_batch refers to a small number of examples. Let's say, we have 10,000 training examples and we want to create mini-batches of 50 examples. So, in total there will be 200 mini-batches and we will perform 200 parameter updates during one iteration over the entire dataset. Question 2: What does the temporal dimension mean here? In your data: (10000, 500, 20), the second dimension refers to the temporal dimension. You can consider you have examples with 500 timesteps (t1, t2, ..., t500). Question 3: How can I use my one-hot encoded data as mini-batch here? In your scenario, you can split your data (10000, 500, 20) into 200 small batches of size (50, 500, 20) where 50 is the number of examples/Sequences in the mini-batch, 500 is the length of each of these sequences and 20 is the number of features. How do we decide the mini-batch size? Basically, we can tune the batch size just like any other hyperparameters of our model.
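A sketch of question 3 using plain PyTorch utilities (random data stands in for the one-hot encoded dataset):

import torch
from torch.utils.data import TensorDataset, DataLoader

data = torch.randn(10000, 500, 20)  # (examples, timesteps, features)
loader = DataLoader(TensorDataset(data), batch_size=50, shuffle=True)
for (mini_batch,) in loader:
    print(mini_batch.shape)  # torch.Size([50, 500, 20])
    break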
https://stackoverflow.com/questions/58082534/
Not Able to Import PyTorch in Conda Env
Last week I had a working conda env I was using for a project. I have not touched the project in a week. I just went to run a Python file (python file.py) that had been running with no errors. Now I get the following error: Traceback (most recent call last): File "file.py", line 2, in <module> from torch.utils.data import Dataset, DataLoader ModuleNotFoundError: No module named 'torch' In an attempt to troubleshoot, I opened a Python console and ran the following code: >>> import torch The result was the following error message: Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torch' If I check all the installed packages using conda list -n <env_name>, I can see that PyTorch is in fact installed, just as it was last week. ... pytorch 1.2.0 py3.7_cuda9.2.148_cudnn7.6.2_0 pytorch ... torchvision 0.4.0 py37_cu92 pytorch ... Here is what I see when I start a Python console using python: Python 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] :: Anaconda, Inc. on linux The output of python -c 'import sys; print(sys.path)' in the base env is: ['', '/home/<name>/anaconda3/lib/python37.zip', '/home/<name>/anaconda3/lib/python3.7', '/home/<name>/anaconda3/lib/python3.7/lib-dynload', '/home/<name>/anaconda3/lib/python3.7/site-packages'] I have not personally made any changes to PYTHONPATH. If I run python -c 'import sys; print(sys.path)' with my conda env (non-base) active, I get: ['', '/home/<name>/anaconda3/envs/<env_name>/lib/python37.zip', '/home/<name>/anaconda3/envs/<env_name>/lib/python3.7', '/home/<name>/anaconda3/envs/<env_name>/lib/python3.7/lib-dynload', '/home/<name>/anaconda3/envs/<env_name>/lib/python3.7/site-packages'] This is totally bizarre; I can't figure out what is going on, or what could have happened over the course of the last week without me touching the code or making any changes to Anaconda.
Open the Anaconda prompt, then run this: conda install pytorch -c pytorch If you haven't upgraded your pip, use this command to update it: python -m pip install --upgrade pip After the first step, run this: pip3 install torchvision Hope it will work.
https://stackoverflow.com/questions/58084535/
How are token vectors calculated in spacy-pytorch-transformers
I am currently working with the spacy-pytorch-transformer package to experiment with the respective embeddings. When reading the introductory article (essentially the GitHub README), my understanding was that the token-level embeddings are the mean over the embeddings of all corresponding word pieces, i.e. embed(complex) would be the same as 1/2 * (embed(comp#) + embed(#lex)). According to the BERT paper, this should simply utilize the last_hidden_state property of the network, but my MCVE below shows that this is not the same for spaCy 2.1.8 and spacy-pytorch-transformers 0.4.0, for at least BERT and RoBERTa (I have not verified it for more models): import spacy import numpy as np nlp = spacy.load("en_pytt_robertabase_lg") # either this or the BERT model test = "This is a test" # Note that all tokens are directly aligned, so no mean has to be calculated. doc = nlp(test) # doc[0].vector and doc.tensor[0] are equal, so the results are equivalent. print(np.allclose(doc[0].vector, doc._.pytt_last_hidden_state[1, :])) # returns False The offset of 1 for the hidden states is due to the <CLS> token as the first input, which corresponds to the sentence classification task; I even checked with every other available token for my sentence (which has no token alignment problems according to doc._.pytt_alignment), so there is no way I missed something here. According to the source code, the corresponding hook is simply overridden to return the corresponding row in the tensor, so I do not see any transformation here. Is there something obvious that I am missing here, or is this deviating from the expected behavior?
It seems that there is a more elaborate weighting scheme behind this, which also accounts for the [CLS] and [SEP] token outputs in each sequence. This has also been confirmed by an issue post from the spaCy developers. Unfortunately, it seems that this part of the code has since moved with the renaming to spacy-transformers.
https://stackoverflow.com/questions/58084661/
How to fix 'Input and hidden tensors are not at the same device' in pytorch
When I want to put the model on the GPU, I get the following error: "RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu" However, all of the above had been put on the GPU: for m in model.parameters(): print(m.device) #return cuda:0 if torch.cuda.is_available(): model = model.cuda() test = test.cuda() # test is the Input Windows 10 server Pytorch 1.2.0 + cuda 9.2 cuda 9.2 cudnn 7.6.3 for cuda 9.2
You need to move the model, the inputs, and the targets to Cuda: if torch.cuda.is_available(): model.cuda() inputs = inputs.cuda() target = target.cuda()
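Since the error message specifically names a hidden tensor on the CPU, a likely culprit is a manually created initial hidden state; a hedged sketch (all names are illustrative):

# an initial hidden state built by hand must be moved to the GPU as well
h0 = torch.zeros(num_layers, batch_size, hidden_size, device='cuda')
c0 = torch.zeros(num_layers, batch_size, hidden_size, device='cuda')
output, (hn, cn) = lstm(inputs, (h0, c0))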
https://stackoverflow.com/questions/58095627/
How to create variable names in loop for layers in pytorch neural network
I am implementing a straightforward feedforward neural network in PyTorch. However, I am wondering if there's a nicer way to add a flexible number of layers to the network? Maybe by naming them during a loop, but I heard that's impossible? Currently I am doing it like this: import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self, input_dim, output_dim, hidden_dim): super(Net, self).__init__() self.input_dim = input_dim self.output_dim = output_dim self.hidden_dim = hidden_dim self.layer_dim = len(hidden_dim) self.fc1 = nn.Linear(self.input_dim, self.hidden_dim[0]) i = 1 if self.layer_dim > i: self.fc2 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc3 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc4 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc5 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc6 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc7 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 if self.layer_dim > i: self.fc8 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i]) i += 1 self.fcn = nn.Linear(self.hidden_dim[-1], self.output_dim) def forward(self, x): # Max pooling over a (2, 2) window x = F.relu(self.fc1(x)) i = 1 if self.layer_dim > i: x = F.relu(self.fc2(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc3(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc4(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc5(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc6(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc7(x)) i += 1 if self.layer_dim > i: x = F.relu(self.fc8(x)) i += 1 x = F.softmax(self.fcn(x)) return x
You can put your layers in a ModuleList container: import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self, input_dim, output_dim, hidden_dim): super(Net, self).__init__() self.input_dim = input_dim self.output_dim = output_dim self.hidden_dim = hidden_dim current_dim = input_dim self.layers = nn.ModuleList() for hdim in hidden_dim: self.layers.append(nn.Linear(current_dim, hdim)) current_dim = hdim self.layers.append(nn.Linear(current_dim, output_dim)) def forward(self, x): for layer in self.layers[:-1]: x = F.relu(layer(x)) out = F.softmax(self.layers[-1](x)) return out It is very important to use pytorch Containers for the layers, and not just a simple python lists. Please see this answer to know why.
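For example, a network with three hidden layers:

net = Net(input_dim=10, output_dim=3, hidden_dim=[64, 32, 16])
print(net)  # Linear layers 10 -> 64 -> 32 -> 16 -> 3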
https://stackoverflow.com/questions/58097924/
Choosing loss function for lstm trained on word2vec vectors when target is a vector of same dimensions
I have an LSTM I'm using as a sequence generator, trained on word2vec vectors. The previous implementation produced a probability distribution for all the different labels. There was one label for every word in the vocabulary. This implementation used PyTorch's CrossEntropyLoss. I now want to change this so the LSTM outputs a vector that has the same dimensions as the vectors used for training. This way I could use the Euclidean distance measure to match it to nearby vectors in the vocabulary. The problem is that in order to do this, I have to use a different loss function, because CrossEntropyLoss is appropriate for classifiers and not for regression problems. I tried changing the format of the target vector, but torch's CrossEntropyLoss function requires integer input, and I have a word vector. Having looked at a few options, it seems Cosine Embedding Loss might be a good idea, but I don't understand how it works and what kind of input it takes. I have already changed my fully connected layer to output vectors of the same dimensions as the word embeddings used for training: nn.Linear(in_features=self.cfg.lstm.lstm_num_hidden,out_features=self.cfg.lstm.embedding_dim,bias=True) Any advice and examples would be much appreciated.
As the documentation of CosineEmbeddingLoss says: Creates a criterion that measures the loss given two input tensors and a Tensor label with values 1 or -1. In your scenario, you should always provide 1 as the Tensor label. batch_size, seq_len, w2v_dim = 32, 100, 200 x1 = torch.randn(batch_size, seq_len, w2v_dim) x2 = torch.randn(batch_size, seq_len, w2v_dim) y = torch.ones(batch_size, seq_len) loss_fn = torch.nn.CosineEmbeddingLoss(reduction='none') loss = loss_fn(x1.view(-1, w2v_dim), x2.view(-1, w2v_dim), y.view(-1)) loss = loss.view(batch_size, seq_len) Here, I assume x1 is the word embeddings, x2 is the output of the LSTM followed by some transformation. Why should I always provide 1 as the Tensor label? First, you should see the loss function. In your scenario, the higher the cosine similarity is, the lower the loss should be. In other words, you want to maximize the cosine similarity. So, you need to provide 1 as the label. On the other hand, if you want to minimize the cosine similarity, you need to provide -1 as the label.
https://stackoverflow.com/questions/58101091/
How to use DataLoader for PyTorch on iPython Console of Spyder
I checked this tutorial and can't figure out a way to actually use my DataLoader to train an ANN. When iterating over my DataLoader, a cmd prompt pops up and immediately closes itself; afterwards nothing happens. My original data are both np.arrays. import torch from torch.utils import data import numpy as np class Dataset(data.Dataset): 'Characterizes a dataset for PyTorch' def __init__(self, datax, labels): 'Initialization' self.labels = torch.tensor(labels) self.datax = torch.tensor(datax) self.len = len(datax) def __len__(self): 'Denotes the total number of samples' return self.len def __getitem__(self, index): 'Generates one sample of data' # Load data and get label X = self.datax[index] y = self.labels[index] return X, y params = {'batch_size': 64, 'shuffle': True, 'num_workers': 1} training_set = Dataset(datax=X, labels=labels) training_generator = data.DataLoader(training_set, **params) for x in training_generator: print(1) I tried many times and had a glimpse at the command prompt, which says something like OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids. OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0 OMP: Info #156: KMP_AFFINITY: 4 available OS procs OMP: Info #157: KMP_AFFINITY: Uniform topology OMP: Info #179: KMP_AFFINITY: 1 packages x 2 cores/pkg x 2 threads/core (2 total cores) OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map: OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 1 maps to package 0 core 0 thread 1 OMP: Info #171: KMP_AFFINITY: OS proc 2 maps to package 0 core 1 thread 0 OMP: Info #171: KMP_AFFINITY: OS proc 3 maps to package 0 core 1 thread 1 OMP: Info #250: KMP_AFFINITY: pid 10264 tid 2388 thread 0 bound to OS proc set 0 OMP: Info #250: KMP_AFFINITY: pid 10264 tid 3288 thread 1 bound to OS proc set 2
Here is how I do that: class myDataset(Dataset): ''' a dataset for PyTorch ''' def __init__(self, X, y): self.X = X self.y = y def __getitem__(self, index): return self.X[index], self.y[index] def __len__(self): return len(self.X) then you can simply add it to the loader: full_dataset = myDataset(X,y) train_loader = DataLoader(full_dataset, batch_size=batch_size) Also, X, y are just numpy arrays. And for the training you can access your data with a for loop: for data, target in train_loader: if train_on_gpu: data, target = data.double().cuda(), target.double().cuda()
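One hedged note on the original symptom: on Windows, DataLoader workers are spawned as separate processes, so the iteration code usually needs the standard main guard (or num_workers=0):

# a sketch; the console window popping up in the question is consistent
# with worker processes being spawned without this guard on Windows
if __name__ == '__main__':
    for data, target in train_loader:
        ...  # training step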
https://stackoverflow.com/questions/58112658/
How to use the lbfgs optimizer with pytorch-lightning?
I have a problem using the LBFGS optimizer from PyTorch with Lightning. I used the template from here to start a new project, and here is the code that I tried (only the training portion): def training_step(self, batch, batch_nb): x, y = batch x = x.float() y = y.float() y_hat = self.forward(x) return {'loss': F.mse_loss(y_hat, y)} def configure_optimizers(self): optimizer = torch.optim.LBFGS(self.parameters()) return optimizer def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i): def closure(): optimizer.zero_grad() l = self.training_step(batch, batch_nb) loss = l['loss'] loss.backward() return loss optimizer.step(closure) The LBFGS optimizer from PyTorch requires a closure function (see here and here), but I don't know how to define it inside the template; especially, I don't know how the batch data is passed to the optimizer. I tried to define a custom optimizer_step function, but I have some problems passing the batch inside the closure function. I will be very thankful for any advice that helps me solve this problem or points me in the right direction. Environment: PyTorch version: 1.2.0+cpu Lightning version: 0.4.9 Test-tube version: 0.7.1
Support for the LBFGS optimizer was added in #310; it is no longer required to define a closure function.
https://stackoverflow.com/questions/58115801/
Convert image of dimension height,width,number of channels to n_masks, image_height, image_width
I have an RGB image; it contains masks of different colors, and each color represents a particular class. I want to convert it into the format n_masks, image_height, image_width, where n_masks is the number of masks present in the image, and each slice of the matrix along the 0th axis represents one binary mask. So far, I have been able to convert it into the format image_height, image_width, where each array value represents which class it belongs to, but I am kind of stuck after that. Below is my code to convert it into the image_height, image_width format: def mask_to_class(mask): target = torch.from_numpy(mask) h,w = target.shape[0],target.shape[1] masks = torch.empty(h, w, dtype=torch.long) colors = torch.unique(target.view(-1,target.size(2)),dim=0).numpy() target = target.permute(2, 0, 1).contiguous() mapping = {tuple(c): t for c, t in zip(colors.tolist(), range(len(colors)))} for k in mapping: idx = (target==torch.tensor(k, dtype=torch.uint8).unsqueeze(1).unsqueeze(2)) validx = (idx.sum(0) == 3) masks[validx] = torch.tensor(mapping[k], dtype=torch.long) return masks It converts an image of, let's say, format (512,512,3) to (512,512), where each pixel value represents the class it belongs to, but I don't have any idea how to proceed further. P.S. I am coding it in PyTorch, but any approach involving numpy is also welcome.
Suppose you already have (512,512) mask. You can first use mask.unique() to get all classes pixel values. Then for each class value, torch.where(mask==cls_val, torch.tensor(1), torch.tensor(0)) will return the mask of one certain class. In the end, you stack all outputs. mask = torch.tensor([[1,2,3], [2,4,5], [1,2,3]]) cls = mask.unique() res = torch.stack([torch.where(mask==cls_val, torch.tensor(1), torch.tensor(0)) for cls_val in cls]) #tensor([[[1, 0, 0], # [0, 0, 0], # [1, 0, 0]], # # [[0, 1, 0], # [1, 0, 0], # [0, 1, 0]], # # [[0, 0, 1], # [0, 0, 0], # [0, 0, 1]], # # [[0, 0, 0], # [0, 1, 0], # [0, 0, 0]], # # [[0, 0, 0], # [0, 0, 1], # [0, 0, 0]]])
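An equivalent vectorized form via broadcasting (a sketch, not part of the original answer):

# compare each class value, shaped (C,1,1), against the (H,W) mask -> (C,H,W)
res = (mask.unsqueeze(0) == cls.view(-1, 1, 1)).long()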
https://stackoverflow.com/questions/58121654/
Suppress use of Softmax in CrossEntropyLoss for PyTorch Neural Net
I know there's no need to use an nn.Softmax() function in the output layer of a neural net when using nn.CrossEntropyLoss as a loss function. However, I need to do so; is there a way to suppress the implemented use of softmax in nn.CrossEntropyLoss and instead use nn.Softmax() on the output layer of the neural network itself? Motivation: I am using the shap package to analyze the feature influences afterwards, where I can only feed my trained model as an input. The outputs, however, don't make any sense then, because I am looking at unbounded values instead of probabilities. Example: Instead of -69.36 as an output value for one class of my model, I want something between 0 and 1, summing up to 1 for all classes. As I can't alter it afterwards, the outputs need to be like this already during training.
The documentation of nn.CrossEntropyLoss says, This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. I suggest you stick to the use of CrossEntropyLoss as the loss criterion. However, you can convert the output of your model into probability values by using the softmax function. Please note, you can always play with the output values of your model; you do not need to change the loss criterion for that. But if you still want to use Softmax() in your network, then you can use NLLLoss() as the loss criterion; just apply log() before feeding the model's output to the criterion function. Similarly, if you use LogSoftmax instead in your network, you can apply exp() to get the probability values. Update: To use log() on the Softmax output, please do: torch.log(prob_scores + 1e-20) By adding a very small number (1e-20) to the prob_scores, we can avoid the log(0) issue.
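A sketch of that combination (model, x and target are assumed, and the model is assumed to end in nn.Softmax(dim=1)):

import torch
import torch.nn.functional as F

probs = model(x)  # rows are probabilities summing to 1
loss = F.nll_loss(torch.log(probs + 1e-20), target)  # 1e-20 guards against log(0)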
https://stackoverflow.com/questions/58122505/
Remove certain elements of all the tensors in a list of dictionary of tensors in pytorch
I am new to pytorch, and trying to learn by playing with simple codes. What I have is a list of length zero, populated by a dictionary, where values in this dictionary are tensors. This is what this list looks like: A = [{'boxes': tensor([[ 142.1232, 142.9373, 1106.0452, 971.3792], [ 259.1277, 618.4834, 1100.1293, 1028.8989], [ 232.1346, 692.5888, 763.3408, 1028.6766], [ 206.8070, 312.2080, 1137.1434, 1013.4373], [ 495.9471, 675.7287, 978.5932, 1012.7568]], grad_fn=<StackBackward>), 'labels': tensor([16, 1, 1, 1, 1]), 'scores': tensor([0.9988, 0.9489, 0.5228, 0.3500, 0.0639], grad_fn=<IndexBackward>), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]]], grad_fn=<UnsqueezeBackward0>)}] This list is the output of Mask-RCNN and I want to remove certain element of all the tensors in the nested dictionary. In this case, I only want to keep information related with class label "1". Information related to each class ('boxes', 'labels', 'scores', and 'masks') are all in the same location(s) (index) in each of the tensors. So, I find the index of all the "1"s in the tensor with key "labels": idxOfClass = [i for i, x in enumerate(list(pred[0]['labels'])) if x == 1] which gives me: [1, 2, 3, 4]. Then, I want to keep all the values located at indices in idxOfClass in all of the tensors in the nested dictionary. 
If I do something like this: Anew = [{pred[0]['boxes'][idxOfClass],pred[0]['labels'][idxOfClass],pred[0]['masks'][idxOfClass],pred[0]['scores'][idxOfClass]}] I get: [{tensor([[ 259.1277, 618.4834, 1100.1293, 1028.8989], [ 232.1346, 692.5888, 763.3408, 1028.6766], [ 206.8070, 312.2080, 1137.1434, 1013.4373], [ 495.9471, 675.7287, 978.5932, 1012.7568]], grad_fn= <IndexBackward>), tensor([0.9489, 0.5228, 0.3500, 0.0639], grad_fn= <IndexBackward>), tensor([[[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]], [[[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]]], grad_fn=<IndexBackward>), tensor([1, 1, 1, 1])}] But, this is not a list of dictionary with tensors as the values of the dictionary. This is a list of tensors, with no key-value structure of the nested dictionary. My question is, "is there any way to keep the original structure of the list when I remove certain elements of all the tensors using indices of specific elements?".
You can just add the key names when you construct the new prediction result. Anew = [{'boxes': pred[0]['boxes'][idxOfClass],'labels': pred[0]['labels'][idxOfClass],'masks': pred[0]['masks'][idxOfClass],'scores': pred[0]['scores'][idxOfClass]}]
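Equivalently, without spelling out every key (a sketch; it works here because all four values are tensors indexed along dim 0):

Anew = [{k: v[idxOfClass] for k, v in pred[0].items()}]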
https://stackoverflow.com/questions/58125674/
libcudart.so.9.0: cannot open shared object file: No such file or directory
I'm using Pytorch under Ubuntu 18.04 and trying to import torchvision, but I get an error libcudart.so.9.0: cannot open shared object file: No such file or directory. Someone could help to fix it? Thanks. The infos below are detailed error logs: Traceback (most recent call last): File "/home/x/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2882, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-6dd351122000>", line 1, in <module> import torchvision File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/__init__.py", line 1, in <module> from torchvision import models File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/__init__.py", line 11, in <module> from . import detection File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/detection/__init__.py", line 1, in <module> from .faster_rcnn import * File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in <module> from torchvision.ops import misc as misc_nn_ops File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/ops/__init__.py", line 1, in <module> from .boxes import nms, box_iou File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in <module> from torchvision import _C File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory import torch import torch.nn as nn import torchvision.transforms as transforms Traceback (most recent call last): File "/home/x/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2882, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-4-677acbcfae34>", line 1, in <module> import torchvision.transforms as transforms File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/__init__.py", line 1, in <module> from torchvision import models File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/__init__.py", line 11, in <module> from . 
import detection File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/detection/__init__.py", line 1, in <module> from .faster_rcnn import * File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/models/detection/faster_rcnn.py", line 7, in <module> from torchvision.ops import misc as misc_nn_ops File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/ops/__init__.py", line 1, in <module> from .boxes import nms, box_iou File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/home/x/.local/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 2, in <module> from torchvision import _C File "/home/x/pycharm-2019.2/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory
If you are using Anaconda, the following may fix your problem. conda install -c anaconda cudatoolkit==9.0 You can also try the following. Make sure the CUDA version is 9.0, and add these 2 lines to ~/.bashrc: export PATH=/usr/local/cuda/bin:$PATH export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH Then, run: source ~/.bashrc Add the following line to /etc/ld.so.conf.d/cuda.conf: /usr/local/cuda/lib64 And run: sudo ldconfig
https://stackoverflow.com/questions/58127401/
RuntimeError: Given groups=1, weight of size 16 1 5 5, expected input[100, 3, 256, 256] to have 1 channels, but got 3 channels instead
I try to run the following programe for images classification problem in Pytorch: import torch import torch.nn as nn import torchvision import torchvision.transforms as transforms import torch.utils.data as data # Device configuration device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # Hyper parameters num_epochs = 5 num_classes = 10 batch_size = 100 learning_rate = 0.001 TRAIN_DATA_PATH = "train/" TEST_DATA_PATH = "test/" TRANSFORM_IMG = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) train_dataset = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG) train_loader = data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=4) test_dataset = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG) test_loader = data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=4) # Convolutional neural network (two convolutional layers) class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.fc = nn.Linear(7 * 7 * 32, num_classes) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out model = ConvNet(num_classes).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): images = images.to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (i + 1) % 100 == 0: print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch + 1, num_epochs, i + 1, total_step, loss.item())) # Test the model model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance) with torch.no_grad(): correct = 0 total = 0 for images, labels in test_loader: images = images.to(device) labels = labels.to(device) outputs = model(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total)) # Save the model checkpoint torch.save(model.state_dict(), 'model/model.ckpt') But I get a RuntimeError: Traceback (most recent call last): RuntimeError: Given groups=1, weight of size 16 1 5 5, expected input[100, 3, 256, 256] to have 1 channels, but got 3 channels instead Someone could help to fix the bug? Thanks a lot. Reference related: https://discuss.pytorch.org/t/given-groups-1-weight-16-1-5-5-so-expected-input-100-3-64-64-to-have-1-channels-but-got-3-channels-instead/28831/17 RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[3, 1, 224, 224] to have 3 channels, but got 1 channels instead
Your input layer self.layer1 starts with a 2d convolution nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2). This conv layer expects an input with two spatial dimensions and one channel, and outputs a tensor with the same spatial dimensions and 16 channels. However, your input has three channels and not one (an RGB image instead of a gray-level image). Make sure your net and data are in sync.
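Concretely, two sketches of a fix (note that with 256x256 inputs and two stride-2 poolings, the nn.Linear(7 * 7 * 32, ...) input size would also need to become 64*64*32):

# Option 1: let the first conv accept 3-channel (RGB) input
nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2)

# Option 2: keep the 1-channel net and gray-scale the data instead,
# by adding this before ToTensor() in the Compose
transforms.Grayscale(num_output_channels=1)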
https://stackoverflow.com/questions/58130237/
How the function nn.LSTM behaves within the batches/ seq_len?
I'm currently learning to use nn.LSTM with PyTorch and have to ask how the function works. Basically I'm trying to feed my dataset matrix (M x N). Since the dataset is a matrix, I wanted to feed the dataset recursively (as timesteps) into the LSTM network with a DataLoader (utils.data.Dataset). The point where I got confused was the size of the input (seq_len, batch, input_size). Let's say I'm getting my data_loader with batch_size=10. In order to generate the train_loader with the right form, I had to turn the previous size of (M x N) into a size including the sequence length, which can simply be transformed to (M/seq_len, seq_len, N). Then the input size of my nn.LSTM would be like: (M/seq_len/batch_size, seq_len, N) So, my main question comes: If I feed this data size into the LSTM model nn.LSTM(N, hidden_size), is the LSTM model already doing the recursive feed-forward within the whole batch? I'm also confused by seq_len: when seq_len>1, the output gets a dimension of seq_len. Would that mean the output contains the recursive operations over the sequences? I'm not sure I made the questions clear, but my understanding is getting quite messed up... lol. I hope somebody can help me organize the right understanding.
Yes, provided each sample's sequence length is the same (which seems to be the case here). If not, you have to pad with torch.nn.utils.rnn.pad_sequence for example. Yes, LSTM is expanded to each timestep and there is output for each timestep already. Hence you don't have to apply it for each element separately.
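A sketch matching the shapes described in the question (batch_first=True makes nn.LSTM accept (batch, seq_len, features); by default it expects (seq_len, batch, features)):

import torch
import torch.nn as nn

N, H = 20, 64              # feature size and hidden size, illustrative
rnn = nn.LSTM(input_size=N, hidden_size=H, batch_first=True)
x = torch.randn(10, 5, N)  # (batch, seq_len, features)
out, (h_n, c_n) = rnn(x)
print(out.shape)           # torch.Size([10, 5, 64]): one output per timestep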
https://stackoverflow.com/questions/58134311/
For a given condition, get indices of values in 2D tensor A, use those to index a 3D tensor B
For a given 2D tensor I want to retrieve all indices where the value is 1. I expected to be able to simply use torch.nonzero(a == 1).squeeze(), which would return tensor([1, 3, 2]). However, instead, torch.nonzero(a == 1) returns a 2D tensor (that's okay), with two values per row (that's not what I expected). The returned indices should then be used to index the second dimension (index 1) of a 3D tensor, again returning a 2D tensor. import torch a = torch.Tensor([[12, 1, 0, 0], [4, 9, 21, 1], [10, 2, 1, 0]]) b = torch.rand(3, 4, 8) print('a_size', a.size()) # a_size torch.Size([3, 4]) print('b_size', b.size()) # b_size torch.Size([3, 4, 8]) idxs = torch.nonzero(a == 1) print('idxs_size', idxs.size()) # idxs_size torch.Size([3, 2]) print(b.gather(1, idxs)) Evidently, this does not work, leading to a RuntimeError: RuntimeError: invalid argument 4: Index tensor must have same dimensions as input tensor at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensorEvenMoreMath.cpp:453 It seems that idxs is not what I expect it to be, nor can I use it the way I thought. idxs is tensor([[0, 1], [1, 3], [2, 2]]) but reading through the documentation I don't understand why I also get back the row indices in the resulting tensor. Now, I know I can get the correct idxs by slicing idxs[:, 1], but then still, I cannot use those values as indices for the 3D tensor because the same error as before is raised. Is it possible to use the 1D tensor of indices to select items across a given dimension?
You could simply slice them and pass it as the indices as in: In [193]: idxs = torch.nonzero(a == 1) In [194]: c = b[idxs[:, 0], idxs[:, 1]] In [195]: c Out[195]: tensor([[0.3411, 0.3944, 0.8108, 0.3986, 0.3917, 0.1176, 0.6252, 0.4885], [0.5698, 0.3140, 0.6525, 0.7724, 0.3751, 0.3376, 0.5425, 0.1062], [0.7780, 0.4572, 0.5645, 0.5759, 0.5957, 0.2750, 0.6429, 0.1029]]) Alternatively, an even simpler & my preferred approach would be to just use torch.where() and then directly index into the tensor b as in: In [196]: b[torch.where(a == 1)] Out[196]: tensor([[0.3411, 0.3944, 0.8108, 0.3986, 0.3917, 0.1176, 0.6252, 0.4885], [0.5698, 0.3140, 0.6525, 0.7724, 0.3751, 0.3376, 0.5425, 0.1062], [0.7780, 0.4572, 0.5645, 0.5759, 0.5957, 0.2750, 0.6429, 0.1029]]) A bit more explanation about the above approach of using torch.where(): It works based on the concept of advanced indexing. That is, when we index into the tensor using a tuple of sequence objects such as tuple of tensors, tuple of lists, tuple of tuples etc. # some input tensor In [207]: a Out[207]: tensor([[12., 1., 0., 0.], [ 4., 9., 21., 1.], [10., 2., 1., 0.]]) For basic slicing, we would need a tuple of integer indices: In [212]: a[(1, 2)] Out[212]: tensor(21.) To achieve the same using advanced indexing, we would need a tuple of sequence objects: # adv. indexing using a tuple of lists In [213]: a[([1,], [2,])] Out[213]: tensor([21.]) # adv. indexing using a tuple of tuples In [215]: a[((1,), (2,))] Out[215]: tensor([21.]) # adv. indexing using a tuple of tensors In [214]: a[(torch.tensor([1,]), torch.tensor([2,]))] Out[214]: tensor([21.]) And the dimension of the returned tensor would always be one dimension less than the dimension of the input tensor.
https://stackoverflow.com/questions/58136592/
Freezing Individual Weights in Pytorch
The following question is not a duplicate of How to apply layer-wise learning rate in Pytorch? because this question aims at freezing a subset of a tensor from training rather than the entire layer. I am trying out a PyTorch implementation of the Lottery Ticket Hypothesis. For that, I want to freeze the weights in a model that are zero. Is the following a correct way to implement it? for name, p in model.named_parameters(): if 'weight' in name: tensor = p.data.cpu().numpy() grad_tensor = p.grad.data.cpu().numpy() grad_tensor = np.where(tensor == 0, 0, grad_tensor) p.grad.data = torch.from_numpy(grad_tensor).to(device)
What you have seems like it would work provided you did it after loss.backward() and before optimizer.step() (referring to the common usage for these variable names). That said, it seems a bit convoluted. Also, if your weights are floating point values then comparing them to exactly zero is probably a bad idea, we could introduce an epsilon to account for this. IMO the following is a little cleaner than the solution you proposed: # locate zero-value weights before training loop EPS = 1e-6 locked_masks = {n: torch.abs(w) < EPS for n, w in model.named_parameters() if n.endswith('weight')} ... for ... #training loop ... optimizer.zero_grad() loss.backward() # zero the gradients of interest for n, w in model.named_parameters(): if w.grad is not None and n in locked_masks: w.grad[locked_masks[n]] = 0 optimizer.step()
https://stackoverflow.com/questions/58145727/
Why pytorch needs much more memory than it should?
I'm just playing around with pytorch and I'm wondering why it consumes so much of my GPU memory. I'm using Cuda 10.0 with pytorch 1.2.0 and torchvision 0.4.0. import torch gpu = torch.device("cuda") x = torch.ones(int(4e8), device=gpu) y = torch.ones(int(1e5), device=gpu) Running the above code I get the error: RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 1.49 GiB already allocated; 0 bytes free; 0 bytes cached) So, does pytorch need ~500MB of the GPU memory as overhead? Or what is the problem here?
More information and testing done by xymeng on GitHub can be seen in the given link. Quoting xymeng's words: PyTorch has its own cuda kernels. From my measurement the cuda runtime allocates ~1GB memory for them. If you compile pytorch with cudnn enabled the total memory usage is 1GB + 750M + others = 2GB+ Note that this is just my speculation as there is no official documentation about this. What puzzles me is that the cuda runtime allocates much more memory than the actual code size (they are approx. linearly correlated. If I remove half of pytorch's kernels the memory usage is also reduced by half). I suspect either the kernel binaries have been compressed or they have to be post-processed by the runtime. This seems to fit your situation.
https://stackoverflow.com/questions/58149401/
Why Pytorch officially use mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225] to normalize images?
In this page (https://pytorch.org/vision/stable/models.html), it says that "All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]". Shouldn't the usual mean and std of normalization be [0.5, 0.5, 0.5] and [0.5, 0.5, 0.5]? Why is it setting such strange values?
Using the mean and std of ImageNet is a common practice. They are calculated based on millions of images. If you want to train from scratch on your own dataset, you can calculate the new mean and std. Otherwise, using the ImageNet pretrained model with its own mean and std is recommended.
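If you do need your own statistics, here is a minimal sketch of computing per-channel mean and std; it assumes a DataLoader named loader that yields image batches of shape (B, 3, H, W) already scaled to [0, 1]:
import torch

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:
    b, c, h, w = images.shape
    n_pixels += b * h * w
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()  # population std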
https://stackoverflow.com/questions/58151507/
Should I transpose a Tensor when feeding it into a CNN
I am using a custom dataset with images of different sizes in the Lab format (Lightness, a, b) which are fed into a CNN. The input layer has 3 in-channels and so my idea was to split all 3 channels (L, a, b) and feed those into the network. Next I was wondering if each tensor needs to be transposed? My doubt is that it would lose its dimensions, which vary from image to image, and I would not be able to reconstruct the image in the end. Any thoughts or ideas on how I should normalize the image?
You can normalise without the need of transposing the image or splitting it based on its channels: torchvision.transforms.Normalize(mean=[l_channel_mean, a_channel_mean, b_channel_mean], std=[l_channel_std, a_channel_std, b_channel_std]) The only required transform is the one that converts the images to tensors: torchvision.transforms.ToTensor()
https://stackoverflow.com/questions/58155920/
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2' how to fix it?
import torch.nn as nn import torch import torch.optim as optim import itertools class net1(nn.Module): def __init__(self): super(net1,self).__init__() self.pipe = nn.Sequential( nn.Linear(10,10), nn.ReLU() ) def forward(self,x): return self.pipe(x.long()) class net2(nn.Module): def __init__(self): super(net2,self).__init__() self.pipe = nn.Sequential( nn.Linear(10,20), nn.ReLU(), nn.Linear(20,10) ) def forward(self,x): return self.pipe(x.long()) netFIRST = net1() netSECOND = net2() learning_rate = 0.001 opt = optim.Adam(itertools.chain(netFIRST.parameters(),netSECOND.parameters()), lr=learning_rate) epochs = 1000 x = torch.tensor([1,2,3,4,5,6,7,8,9,10],dtype=torch.long) y = torch.tensor([10,9,8,7,6,5,4,3,2,1],dtype=torch.long) for epoch in range(epochs): opt.zero_grad() prediction = netSECOND(netFIRST(x)) loss = (y.long() - prediction)**2 loss.backward() print(loss) print(prediction) opt.step() error: line 49, in prediction = netSECOND(netFIRST(x)) line 1371, in linear; output = input.matmul(weight.t()) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'mat2' I don't really see what I'm doing wrong. I have tried to turn everything into a Long in every possible way. I don't really understand how typing works in pytorch. Last time I tried something with just one layer and it forced me to use type int. Could someone explain how typing is established in pytorch and how to prevent and fix errors like this? An awful lot of thanks in advance; this problem really bothers me and I can't seem to fix it no matter what I try.
The weights are Floats, the inputs are Longs. This is not allowed. In fact, I don't think torch supports anything other than floating-point types in neural networks. If you remove all calls to long and define your input as floats, it will work (it does, I tried). (You will then get another, unrelated error: you need to sum your loss to a scalar.)
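A minimal sketch of the two fixes, applied to the code from the question:
# keep inputs and targets as floats (nn.Linear weights are float)
x = torch.tensor([1,2,3,4,5,6,7,8,9,10], dtype=torch.float)
y = torch.tensor([10,9,8,7,6,5,4,3,2,1], dtype=torch.float)

# in both networks, drop the .long() cast:
#     def forward(self, x):
#         return self.pipe(x)

# and reduce the loss to a scalar before calling backward():
loss = ((y - prediction)**2).sum()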
https://stackoverflow.com/questions/58157523/
Deep Learning methods for Text Generation (PyTorch)
Greetings to everyone, I want to design a system that is able to generate stories or poetry based on a large dataset of text, without needing to feed a text description/start/summary as input at inference time. So far I have done this using RNNs, but as you know they have a lot of flaws. My question is, what are the best methods to achieve this task at the moment? I searched for possibilities using attention mechanisms, but it turns out that they are suited to translation tasks. I know about GPT-2, Bert, Transformer, etc., but all of them need a text description as input before the generation, and this is not what I'm seeking. I want a system able to generate stories from scratch after training. Thanks a lot!
edit so the comment was: I want to generate text from scratch, not starting from a given sentence at inference time. I hope it makes sense. Yes, you can do that; it's just simple code manipulation on top of the ready models, be it BERT, GPT-2 or an LSTM-based RNN. How? You have to provide random input to the model. Such random input can be a randomly chosen word or phrase, or just a vector of zeroes. Hope it helps. You have mixed up several things here. You can achieve what you want using either an LSTM-based or a transformer-based architecture. When you said you did it with an RNN, you probably mean that you have tried an LSTM-based sequence-to-sequence model. Now, there is attention in your question. So you can use attention to improve your RNN, but it is not a required condition. However, if you use the transformer architecture, then it is built into the transformer blocks. GPT-2 is nothing but a transformer-based model. Its building block is the transformer architecture. BERT is also another transformer-based architecture. So to answer your question, you should and can try using an LSTM-based or transformer-based architecture to achieve what you want. Sometimes such an architecture is called GPT-2, sometimes BERT, depending on how it is realized. I encourage you to read this classic from Karpathy; if you understand it then you have cleared most of your questions: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
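As a concrete sketch of generating from scratch with a pretrained model, assuming the HuggingFace transformers package is available: seed GPT-2 with just its beginning-of-sequence token instead of a user prompt, so no text description is needed at inference time.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# seed with the BOS token only, then sample freely
input_ids = torch.tensor([[tokenizer.bos_token_id]])
output = model.generate(input_ids, max_length=100, do_sample=True, top_k=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))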
https://stackoverflow.com/questions/58166356/
Am I using polynomial regression right in pytorch?
I am trying to do polynomial regression with pytorch. First I just tried only linear regression (b + wx). model_1 = RegressionModel() W = torch.zeros(1, requires_grad=True) b = torch.zeros(1, requires_grad = True) optimizer_1 = torch.optim.SGD([W, b], lr = 0.001) x_train = torch.FloatTensor(dataset.x_data['LSTAT']) y_train = torch.FloatTensor(dataset.data['target']) nb_epochs = 10000 for epoch in range(nb_epochs + 1): hypothesis = x_train * W + b cost = torch.nn.functional.mse_loss(hypothesis, y_train.float()) optimizer_1.zero_grad() cost.backward() optimizer_1.step() print('Epoch {:4d}/{} W: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(epoch, nb_epochs, W.item(), b.item(), cost.item())) Then I changed and added some variables to do polynomial regression (b + w1x + w2x^2) model_2 = RegressionModel() W1 = torch.zeros(1, requires_grad=True) W2 = torch.zeros(1, requires_grad=True) b = torch.zeros(1, requires_grad = True) optimizer_2 = torch.optim.SGD([W2, W1, b], lr = 0.0000099) x_train = torch.FloatTensor(dataset.x_data['LSTAT']) y_train = torch.FloatTensor(dataset.data['target']) nb_epochs = 10000 for epoch in range(nb_epochs + 1): hypothesis = b + x_train * W1 + x_train * x_train * W2 cost = torch.nn.functional.mse_loss(hypothesis, y_train.float()) optimizer_2.zero_grad() cost.backward() optimizer_2.step() print('Epoch {:4d}/{} W1: {:.3f}, W2: {:.3f}, b: {:.3f}, Cost: {:.6f}'.format(epoch, nb_epochs, W1.item(), W2.item(), b.item(), cost.item())) Can I try polynomial regression like this? If not, I would really appreciate it if you let me know. I'm really a noob at pytorch...
Your code should work. When working with larger data, it will be more efficient if you do the regression in a single matrix operation. For that, you need to first pre-compute polynomials of your input features: x_train_polynomial = torch.stack([x_train, x_train ** 2], dim=1) To save some lines, you can rewrite the projection as a linear layer: import torch.nn as nn projection = nn.Linear(2, 1, bias=True) In the training loop, you can call: hypothesis = projection(x_train_polynomial)
https://stackoverflow.com/questions/58167146/
PyTorch loss function referencing model parameters
For an assignment, I need to create a movie recommender system that uses a supplied loss function: sum(i=1,M) sum(j=1,M) indicator[i != j] (viT vj - Xi,j)**2 What this means is, the dot product between two movie embeddings, Vi and Vj, should be pretty darn close to Xi,j, where Xi,j is the sum of users who liked both movie i and movie j. Entries where i == j are omitted by the indicator function (set to 0). The assignment deliverable is a weight matrix from the hidden layer. Its dimensions should be 9724x300, where there are 9724 unique movie IDs, and 300 neurons. 300 is an arbitrary choice, influenced by the use of 300 neurons in Google's word2vec. What I have: source_data: rows are users, columns are movies. A 1 in a given cell means that the corresponding user liked the corresponding movie (dislike = 0). preprocessed_data: matrix multiplication of source_data transposed on itself (i.e. every cell is the sum of users who liked BOTH movie i and j. The diagonal entries are not of use to me, as they're just the sum of users who liked a single movie). Where I'm stuck: Not sure how to define my own loss function in terms of i and j that can compare the sum of users who like movies i and j in preprocessed_data to rows i and j in the hidden layer weight tensor. Not sure how to organize my data into X and y tensors, such that the data is appropriate to the loss function.
Before you read further, please note that asking for and receiving direct help in your assignment from StackOverflow may be against the rules of your school and lead to consequences for you as a student! That being said, the way I would model this problem is as follows: import torch U = 300 # number of users M = 30 # number of movies D = 4 # dimension of embedding vectors source = torch.randint(0, 2, (U, M)) # users' ratings X = source.transpose(0, 1) @ source # your `preprocessed_data` # initial values for your embedding. This is what your algorithm needs to learn v = torch.randn(M, D, requires_grad=True) X = X.to(torch.float32) # necessary to be in line with `v` # this is the `(viT vj - Xi,j)**2` part loss_elementwise = (v @ v.transpose(0, 1) - X).pow(2) # now we need to get rid of the diagonal. Notice that we can equally # well get rid of the diagonal and the whole upper triangular part, # as well, since both V @ V.T and source.T @ source are symmetric, so # the upper triangular part contains just # a mirror reflection of the lower triangular part. # This means that we actually implement a bit different summation: # sum(i=1,M) sum(j=1,i-1) stuff(i, j) # instead of # sum(i=1,M) sum(j=1,M) indicator[i != j] stuff(i, j) # and get exactly half the original value masked = torch.tril(loss_elementwise, -1) # finally we sum it up, multiplying by 2 to make up # for the "lost" upper triangular part loss = 2 * masked.sum() Now what is left for you to implement is the optimization loop which will use gradient of loss to optimize the values of v.
https://stackoverflow.com/questions/58169916/
How to add L1 Regularization to PyTorch NN Model?
When searching for ways to implement L1 regularization in PyTorch Models, I came across this question, which is now 2 years old, so I was wondering if there's anything new on this topic? I also found this recent approach of dealing with the missing l1 function. However I don't understand how to use it for a basic NN as shown below. class FFNNModel(nn.Module): def __init__(self, input_dim, output_dim, hidden_dim, dropout_rate): super(FFNNModel, self).__init__() self.input_dim = input_dim self.output_dim = output_dim self.hidden_dim = hidden_dim self.dropout_rate = dropout_rate self.drop_layer = nn.Dropout(p=self.dropout_rate) self.fully = nn.ModuleList() current_dim = input_dim for h_dim in hidden_dim: self.fully.append(nn.Linear(current_dim, h_dim)) current_dim = h_dim self.fully.append(nn.Linear(current_dim, output_dim)) def forward(self, x): for layer in self.fully[:-1]: x = self.drop_layer(F.relu(layer(x))) x = F.softmax(self.fully[-1](x), dim=0) return x I was hoping simply putting this before training would work: model = FFNNModel(30,5,[100,200,300,100],0.2) regularizer = _Regularizer(model) regularizer = L1Regularizer(regularizer, lambda_reg=0.1) with out = model(inputs) loss = criterion(out, target) + regularizer.__add_l1() Does anyone understand how to apply these 'ready to use' classes?
I haven't run the code in question, so please reach back if something doesn't exactly work. Generally, I would say that the code you linked is needlessly complicated (it may be because it tries to be generic and allow all the following kinds of regularization). The way it is to be used is, I suppose model = FFNNModel(30,5,[100,200,300,100],0.2) regularizer = L1Regularizer(model, lambda_reg=0.1) and then out = model(inputs) loss = criterion(out, target) + regularizer.regularized_all_param(0.) You can check that regularized_all_param will just iterate over parameters of your model and if their name ends with weight, it will accumulate their sum of absolute values. For some reason the buffer is to be manually initialized, that's why we pass in the 0.. Really though, if you wish to efficiently regularize L1 and don't need any bells and whistles, the more manual approach, akin to your first link, will be more readable. It would go like this l1_regularization = 0. for param in model.parameters(): l1_regularization += param.abs().sum() loss = criterion(out, target) + l1_regularization This is really what is at heart of both approaches. You use the Module.parameters method to iterate over all model parameters and you sum up their L1 norms, which then becomes a term in your loss function. That's it. The repo you linked comes up with some fancy machinery to abstract it away but, judging by your question, fails :)
https://stackoverflow.com/questions/58172188/
How do you generate positive definite matrix in pytorch?
I am trying to define a Multivariate Gaussian distribution with a randomly generated covariance matrix: psi = torch.zeros(512).normal_(0., 1.).requires_grad_(True) # Generate random matrix Sigma_k = torch.rand(512, 512) # Make it symmetric positive Sigma_k = Sigma_k * Sigma_k.t() # Make it definite Sigma_k.add_(0.001, torch.eye(512)).requires_grad_(True) multivariate_normal.MultivariateNormal(psi, Sigma_k) But I end up getting an exception: RuntimeError: Lapack Error in potrf : the leading minor of order 2 is not positive definite at /Users/soumith/mc3build/conda-bld/pytorch_1549597882250/work/aten/src/TH/generic/THTensorLapack.cpp:658 What is the proper way to generate a positive definite square matrix?
The answer is that one should take the matrix product of matrix A and its transpose (A.t()) in order to obtain a positive semi-definite matrix. The last thing is to ensure that it is definite (strictly greater than zero). With Pytorch: Sigma_k = torch.rand(512, 512) Sigma_k = torch.mm(Sigma_k, Sigma_k.t()) Sigma_k.add_(torch.eye(512)) The formal algorithm is described here.
https://stackoverflow.com/questions/58176501/
How to use a different test batch size for RNN in PyTorch?
I want to train an RNN over 5 training points where each sequence also has a size of 5. At test time, I want to send in a single data point and compute the output. The task is to predict the next character in a sequence of five characters (all encoded as 1-hot vectors). I have tried duplicating the test data point five times. However, I am sure that this is not the right way to solve this problem. import numpy as np import torch import torch.nn as nn import torch.nn.functional as F # Define the parameters H = [ 1, 0, 0, 0 ] E = [ 0, 1, 0, 0 ] L = [ 0, 0, 1, 0 ] O = [ 0, 0, 0, 1 ] # Define the model net = nn.RNN(input_size=4, hidden_size=4, batch_first=True) # Generate data data = [[H,E,L,L,O], [E,L,L,O,H], [L,L,O,H,E], [L,O,H,E,L], [O,H,E,L,L]] inputs = torch.tensor(data).float() hidden = torch.randn(1,5,4) # Random initialization correct_outputs = torch.tensor(np.array(data[1:]+[data[0]]).astype(float).tolist(), requires_grad=True) # Set the loss function criterion = torch.nn.MSELoss() # Set the optimizer optimizer = torch.optim.SGD(net.parameters(), lr=0.1) # Perform gradient descent until convergence for epoch in range(1000): # Forward Propagation outputs, hidden = net(inputs, hidden) # Compute and print loss loss = criterion(nn.functional.softmax(outputs,2), correct_outputs) print('epoch: ', epoch,' loss: ', loss.item()) # Zero the gradients optimizer.zero_grad() # Backpropagation loss.backward(retain_graph=True) # Parameter update optimizer.step() # Predict net(torch.tensor([[H,E,L,L,O]]).float(),hidden) I get the following error: RuntimeError: Expected hidden size (1, 1, 4), got (1, 5, 4) I understand that torch wants a tensor of size (1,1,4) but I am not sure how I can convert the initial hidden state from (1, 5, 4) to (1, 1, 4). Any help would be highly appreciated!
You are getting the error because you are using: hidden = torch.randn(1,5,4) # Random initialization Instead, you should use: hidden = torch.randn(1,inputs.size(0),4) # Random initialization to match the batch size of the inputs. So, do the following: # Predict inputs = torch.tensor([[H,E,L,L,O]]).float() hidden = torch.randn(1,inputs.size(0),4) net(inputs, hidden) Suggestion: improve your coding style by following some good examples in PyTorch.
https://stackoverflow.com/questions/58176523/
"Runtimeerror: bool value of tensor with more than one value is ambiguous" fastai
I am doing semantic segmentation using the cityscapes dataset in fastai. I want to ignore some classes while calculating accuracy. This is how I defined accuracy according to the fastai deep learning course: name2id = {v:k for k,v in enumerate(classes)} unlabeled = name2id['unlabeled'] ego_v = name2id['ego vehicle'] rectification = name2id['rectification border'] roi = name2id['out of roi'] static = name2id['static'] dynamic = name2id['dynamic'] ground = name2id['ground'] def acc_cityscapes(input, target): target = target.squeeze(1) mask=(target!=unlabeled and target!= ego_v and target!= rectification and target!=roi and target !=static and target!=dynamic and target!=ground) return (input.argmax(dim=1)[mask]==target[mask]).float().mean() This code works if I ignore only one of the classes: mask=target!=unlabeled But when I try to ignore multiple classes like this: mask=(target!=unlabeled and target!= ego_v and target!= rectification and target!=roi and target !=static and target!=dynamic and target!=ground) I am getting this error: runtimeError: bool value of tensor with more than one value is ambiguous Any idea how I can solve this?
The problem is probably that your tensor contains more than one bool value, which leads to an error when doing logical operations (and, or). For example, >>> import torch >>> a = torch.zeros(2) >>> b = torch.ones(2) >>> a == b tensor([False, False]) >>> a == 0 tensor([True, True]) >>> a == 0 and True Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: bool value of Tensor with more than one value is ambiguous >>> if a == b: ... print (a) ... Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: bool value of Tensor with more than one value is ambiguous The potential solution is to use the logical operators directly. >>> (a != b) & (a == b) tensor([False, False]) >>> mask = (a != b) & (a == b) >>> c = torch.rand(2) >>> c[mask] tensor([]) Hope it helps.
https://stackoverflow.com/questions/58180153/
Extract sub tensor in PyTorch
For this tensor in PyTorch, tensor([[ 0.7646, 0.5573, 0.4000, 0.2188, 0.7646, 0.5052, 0.2042, 0.0896, 0.7667, 0.5938, 0.3167, 0.0917], [ 0.4271, 0.1354, 0.5000, 0.1292, 0.4260, 0.1354, 0.4646, 0.0917, -1.0000, -1.0000, -1.0000, -1.0000], [ 0.7208, 0.5656, 0.3000, 0.1688, 0.7177, 0.5271, 0.1521, 0.0667, 0.7198, 0.5948, 0.2438, 0.0729], [ 0.6292, 0.8250, 0.4000, 0.2292, 0.6271, 0.7698, 0.2083, 0.0812, 0.6281, 0.8604, 0.3604, 0.0917]], device='cuda:0') how can I extract those values into a new tensor? 0.7646, 0.5573, 0.4000, 0.2188 0.4271, 0.1354, 0.5000, 0.1292 That is, how do I get the first 4 elements of the first two rows into a new tensor?
Actually the question was answered by @zihaozhihao in the comments, but in case you are wondering where that comes from, it would be helpful if you structured your Tensor like this: x = torch.Tensor([ [ 0.7646, 0.5573, 0.4000, 0.2188, 0.7646, 0.5052, 0.2042, 0.0896, 0.7667, 0.5938, 0.3167, 0.0917], [ 0.4271, 0.1354, 0.5000, 0.1292, 0.4260, 0.1354, 0.4646, 0.0917, -1.0000, -1.0000, -1.0000, -1.0000], [ 0.7208, 0.5656, 0.3000, 0.1688, 0.7177, 0.5271, 0.1521, 0.0667, 0.7198, 0.5948, 0.2438, 0.0729], [ 0.6292, 0.8250, 0.4000, 0.2292, 0.6271, 0.7698, 0.2083, 0.0812, 0.6281, 0.8604, 0.3604, 0.0917] ]) so now it is clearer that you have a shape of (4, 12). You can think about it like an excel file: you have 4 rows and 12 columns. Now what you want is to extract the first 4 columns from the first two rows, and that's why your solution would be: x[:2, :4] # :2 takes all rows up to the second row, and :4 takes all columns up to the fourth column This code will also give the same result: x[0:2, 0:4]
https://stackoverflow.com/questions/58187686/
PyTorch matrix factorization embedding error
I'm trying to use a single hidden layer NN to perform matrix factorization. In general, I'm trying to solve for a tensor, V, with dimensions [9724x300] where there are 9724 items in inventory, and 300 is the arbitrary number of latent features. The data that I have is a [9724x9724] matrix, X, where columns and rows represent the number of mutual likes. (e.g. X[0,1] represents the sum of users who like both item 0 and item 1. Diagonal entries are not of importance. My goal is to use MSE loss, such that the dot product of V[i,:] on V[j,:] transposed is very, very close to X[i,j]. Below is code that I've adapted from the below link. https://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html import torch from torch.autograd import Variable class MatrixFactorization(torch.nn.Module): def __init__(self, n_items=len(movie_ids), n_factors=300): super().__init__() self.vectors = nn.Embedding(n_items, n_factors,sparse=True) def forward(self, i,j): return (self.vectors([i])*torch.transpose(self.vectors([j]))).sum(1) def predict(self, i, j): return self.forward(i, j) model = MatrixFactorization(n_items=len(movie_ids),n_factors=300) loss_fn = nn.MSELoss() optimizer = torch.optim.SGD(model.parameters(), lr=0.01) for i in range(len(movie_ids)): for j in range(len(movie_ids)): # get user, item and rating data rating = Variable(torch.FloatTensor([Xij[i, j]])) # predict # i = Variable(torch.LongTensor([int(i)])) # j = Variable(torch.LongTensor([int(j)])) prediction = model(i, j) loss = loss_fn(prediction, rating) # backpropagate loss.backward() # update weights optimizer.step() The error returned is: TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list I'm very new to embeddings. I had tried replacing the embeddings with a simple float tensor, however the MatrixFactorization class, which I defined, did not recognize the tensor as model parameters to be optimized over. Any thoughts on where I'm going wrong?
You are passing a list to self.vectors: return (self.vectors([i])*torch.transpose(self.vectors([j]))).sum(1) Try to convert it to a tensor before you call self.vectors().
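A minimal sketch of a corrected forward for the class in the question, assuming i and j arrive as Python ints; note that the elementwise product followed by .sum(1) already gives the dot product, so the transpose is not needed:
def forward(self, i, j):
    i = torch.LongTensor([i])   # indices must be tensors, not lists
    j = torch.LongTensor([j])
    vi = self.vectors(i)        # shape (1, n_factors)
    vj = self.vectors(j)
    return (vi * vj).sum(1)     # dot product V[i,:] . V[j,:]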
https://stackoverflow.com/questions/58188133/
What is the time-complexity of the pseudo-inverse in pytorch (i.e. torch.pinverse)?
Let's say I have a matrix X with n, m == X.shape in PyTorch. What is the time complexity of calculating the pseudo-inverse with torch.pinverse? In other words, what is the time complexity of X_p = torch.pinverse(X) ? Here is the documentation
The PyTorch documentation states that pinverse is calculated using SVD (singular value decomposition). The complexity of SVD is O(n m^2), where m is the larger dimension of the matrix and n the smaller. Thus this is the complexity. For more info, check out these pages on wikipedia: https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse#Singular_value_decomposition_(SVD) https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
https://stackoverflow.com/questions/58191604/
How to find built-in function source code in pytorch
I am trying to do research on batch normalization, and had to make some modifications to the pytorch BN code. I dug into the pytorch code and got stuck with torch.nn.functional.batch_norm, which references torch.batch_norm. The problem is that torch.batch_norm cannot be found any further in the torch library. Is there any way I can find the source code of this built-in function and re-implement it? Thanks!
It's there, but it's not defined in Python. They're defined in C++ in the aten/ directories. For CPU, the implementation (one of them, it depends on whether or not the input is contiguous) is here: https://github.com/pytorch/pytorch/blob/420b37f3c67950ed93cd8aa7a12e673fcfc5567b/aten/src/ATen/native/Normalization.cpp#L61-L126 For CUDA, the implementation is here: https://github.com/pytorch/pytorch/blob/7aae51cdedcbf0df5a7a8bf50a947237ac4b3ee8/aten/src/ATen/native/cudnn/BatchNorm.cpp#L52-L143
https://stackoverflow.com/questions/58193798/
Pytorch CUDA error: invalid configuration argument
I recently added a new component to my loss function. Running the new code works on a CPU, but I get the following error when I run it on a GPU, clearly relating to the backward pass: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-12-56dcbddd5230> in <module> 20 recall = Recall(N_RECALL_CAND, K) 21 #run the model ---> 22 train_loss, val_loss = fit(triplet_train_loader, triplet_test_loader, model, loss_fn, optimizer, scheduler, N_EPOCHS, cuda, LOG_INT) 23 #measure recall ~/thesis/trainer.py in fit(train_loader, val_loader, model, loss_fn, optimizer, scheduler, n_epochs, cuda, log_interval, metrics, start_epoch) 24 scheduler.step() 25 # Train stage ---> 26 train_loss, metrics, writer_train_index = train_epoch(train_loader, model, loss_fn, optimizer, cuda, log_interval, metrics, writer, writer_train_index) 27 28 message = 'Epoch: {}/{}. Train set: Average loss: {:.4f}'.format(epoch + 1, n_epochs, train_loss) ~/thesis/trainer.py in train_epoch(train_loader, model, loss_fn, optimizer, cuda, log_interval, metrics, writer, writer_train_index) 80 losses.append(loss.item()) 81 total_loss += loss.item() ---> 82 loss.backward() 83 optimizer.step() 84 /opt/anaconda3/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 116 products. Defaults to ``False``. 117 """ --> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph) 119 120 def register_hook(self, hook): /opt/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 91 Variable._execution_engine.run_backward( 92 tensors, grad_tensors, retain_graph, create_graph, ---> 93 allow_unreachable=True) # allow_unreachable flag 94 95 RuntimeError: CUDA error: invalid configuration argument Here is a copy of the code that causes it to break in the loss function: def forward(self, anchor, positive, negative, model, size_average=True): #regular triplet loss. This works on GPU and CPU distance_positive = (anchor - positive).pow(2).sum(1) # .pow(.5) distance_negative = (anchor - negative).pow(2).sum(1) # .pow(.5) losses = F.relu(distance_positive - distance_negative + self.margin) #the additional component that causes the error. This will run on CPU but fails on GPU anchor_dists = torch.cdist(model.embedding_net.anchor_net.anchors, model.embedding_net.anchor_net.anchors) t = (self.beta * F.relu(self.rho - anchor_dists)) regularization = t.sum() - torch.diag(t).sum() return regularization + losses.mean() if size_average else losses.sum() The error occurs with a batch size of 1 and on the first backward pass. Answers from here suggest that it has to do with a lack of memory, but my model isn't particularly large: TripletNet( (embedding_net): EmbeddingNet( (anchor_net): AnchorNet(anchors torch.Size([128, 192]), biases torch.Size([128])) (embedding): Sequential( (0): AnchorNet(anchors torch.Size([128, 192]), biases torch.Size([128])) (1): Tanh() ) ) ) The available memory on my GPU is the 8GB, which is much smaller than the model and the size of the cdist result which is 128x128. I have no idea how to begin debugging this. If it is the case that I'm running out of memory because its keeping track of intermediate states, how do I navigate around this? Any help is appreciated! EDIT: Monitoring the GPU memory usage shows that I’m well under the memory limit when it crashes.
As per this thread on the pytorch forums, upgrading to pytorch 1.5.0 should solve this issue.
https://stackoverflow.com/questions/58194148/
Speech Recognition - how to split a sentence into words?
I'm new to Speech Recognition, and I'm looking for an approach to split a sentence (or multiple sentences) in the form of audio/wav files, into individual words? This sounds like a standard problem, so I'm wondering how people in the industry approach it. ps: yes this question was asked three years ago, but I'm looking for an up-to-date answer using newer libraries (i.e. pytorch and tensorflow 2.0). Thanks!
This is not so trivial. What you want is called an alignment. I.e. where each audio frame is aligned to a word (or subword, character, or better individual phonemes). The most reasonable approach would need a standard conventional speech recognition system. The easiest would be to use a HMM system, either backed by old fashioned GMMs, or maybe by NNs (which is called hybrid HMM-NN model). This also requires a lexicon (mapping of phonemes to words). Usually you would use an existing implementation of all that, e.g. Kaldi or RASR, as this is not so simple to implement. I have not seen a pure TF implementation of that. This software then calculates the best possible alignment path through the HMM (i.e. which has the highest probability, according to the trained model). If you know the ground truth words, this is the Viterbi algorithm, to calculate this best path. Otherwise you would do some decoding (using beam search). What you can also do, but this will be more hacky, and less good (for this task of getting an alignment): Use some of the end-to-end models, e.g. encoder-decoder with attention, or CTC. For encoder-decoder with attention, you can use the attention weights to get a good guess where the words are (and then you can maybe guess where the boundaries are). Similar for CTC. But this will not be accurate. But this is something you can implement easily in pure TF. In any case, the implementation itself is not so much the hard part (although still not simple). You first need to understand all the theory behind that. And maybe StackOverflow is not the right place to ask about that. Read through the Kaldi or RASR documentation maybe, or watch some lecture about speech recognition, or read a book about that topic.
https://stackoverflow.com/questions/58194657/
Why is my Fully Convolutional Autoencoder not symmetric?
I am developing a Fully Convolutional Autoencoder which takes 3 channels as input and outputs 2 channels (in: LAB, out: AB). Because the output should be the same size as the input, I use Full Convolution. The Code: import torch.nn as nn class AE(nn.Module): def __init__(self): super(AE, self).__init__() self.encoder = nn.Sequential( # conv 1 nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 2 nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 3 nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(256), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 4 nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 5 nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(1024), nn.ReLU() ) self.decoder = nn.Sequential( # conv 6 nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(512), nn.ReLU(), # conv 7 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(256), nn.ReLU(), # conv 8 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU(), # conv 9 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=5, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU(), # conv 10 out nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=5, stride=1, padding=1), nn.Softmax() # multi-class classification # TODO softmax deprecated ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x The size the output tensor should be: torch.Size([1, 2, 199, 253]) The size the output tensor really has: torch.Size([1, 2, 190, 238]) My main problem is combining Conv2d and MaxPool2d and to set the correct parameter values in the ConvTranspose2d. Because of that, I treat those separately using the Upsample function for the MaxPool2d and ConvTranspose2d only for Conv2d. But I still have a little asymmetry and I really don't know why. Thank you for the help!
There are two issues. First is insufficient padding: with kernel_size=5 your convolutions are shrinking the image by 4 every time they are applied (2 pixels on each side), so you need padding=2, and not just 1, in all places. Second is the "uneven" input size. What I mean is that once your convolutions are properly padded, you are left with downsampling operations which at each point try to divide your image resolution in half. When they fail, they just return a smaller result (integer division discards the remainder). Since your network has 4 successive 2x downsampling operations, you need your input to have H, W dimensions which are multiples of 2^4=16. Then you will actually get equally shaped output. An example below import torch import torch.nn as nn class AE(nn.Module): def __init__(self): super(AE, self).__init__() self.encoder = nn.Sequential( # conv 1 nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 2 nn.Conv2d(in_channels=64, out_channels=128, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 3 nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(256), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 4 nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2), # conv 5 nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(1024), nn.ReLU() ) self.decoder = nn.Sequential( # conv 6 nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(512), nn.ReLU(), # conv 7 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(256), nn.ReLU(), # conv 8 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(128), nn.ReLU(), # conv 9 nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(64), nn.ReLU(), # conv 10 out nn.Upsample(scale_factor=2, mode='bilinear'), nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=5, stride=1, padding=2), nn.Softmax() # multi-class classification ) def forward(self, x): x = self.encoder(x) x = self.decoder(x) return x input = torch.randn(1, 3, 6*16, 7*16) output = AE()(input) print(input.shape) print(output.shape)
https://stackoverflow.com/questions/58198305/
Slightly adapt L1 loss to a weighted L1 loss in Pytorch, does gradient computation still work properly?
I implemented a neural network in Pytorch and I would like to use a weighted L1 loss function to train the network. The implementation with the regular L1 loss contains this code for each epoch: optimiser.zero_grad() net.train() _,forecast = net(torch.tensor(feature, dtype=torch.float).to(DEVICE)) loss = F.l1_loss(forecast, torch.tensor(target,dtype=torch.float).to(DEVICE),reduction='mean') loss.backward() params.append(net.parameters()) optimiser.step() Now I want to use a weighted L1 loss instead. So I thought to use the same standard Pytorch L1 function again and rescale the forecasts and targets with weights. Will the gradient computation still be done correctly? optimiser.zero_grad() net.train() _,forecast = net(torch.tensor(feature, dtype=torch.float).to(DEVICE)) loss = F.l1_loss(torch.t(torch.mul(torch.t(forecast), torch.tensor(weight,dtype=torch.float).to(DEVICE))) , torch.t(torch.mul(torch.t(torch.tensor(target,dtype=torch.float).to(DEVICE)), torch.tensor(weight,dtype=torch.float).to(DEVICE))),reduction='mean') loss.backward() params.append(net.parameters()) optimiser.step()
Yes, it will be correct. If you are not using in-place operations, the gradients will be computed correctly. Besides, in the current version of Pytorch, there will be an error reported if you accidentally involve some in-place operations into your program. Here is a related discussion. You can find more information here.
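For reference, an equivalent and slightly simpler formulation; this sketch assumes forecast and target are already float tensors on DEVICE and that the weights are non-negative (for w >= 0, |w*f - w*t| equals w*|f - t|). Depending on which dimension your weights run along, you may still need the transposes from your version so that broadcasting lines up:
w = torch.tensor(weight, dtype=torch.float, device=DEVICE)
loss = (w * (forecast - target).abs()).mean()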
https://stackoverflow.com/questions/58200833/
Multilabel classification with class imbalance in Pytorch
I have a multilabel classification problem, which I am trying to solve with CNNs in Pytorch. I have 80,000 training examples and 7900 classes; every example can belong to multiple classes at the same time, and the mean number of classes per example is 130. The problem is that my dataset is very imbalanced. For some classes, I have only ~900 examples, which is around 1%. For "overrepresented" classes I have ~12000 examples (15%). When I train the model I use BCEWithLogitsLoss from pytorch with a positive weights parameter. I calculate the weights the same way as described in the documentation: the number of negative examples divided by the number of positives. As a result, my model overestimates almost every class... For minor and major classes I get almost twice as many predictions as true labels. And my AUPRC is just 0.18, even though it's much better than no weighting at all, since in that case the model predicts everything as zero. So my question is, how do I improve the performance? Is there anything else I can do? I tried different batch sampling techniques (to oversample the minority classes), but they don't seem to work.
I would suggest either one of these strategies Focal Loss A very interesting approach for dealing with un-balanced training data through tweaking of the loss function was introduced in Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar Focal Loss for Dense Object Detection (ICCV 2017). They propose to modify the binary cross entropy loss in a way that decrease the loss and gradient of easily classified examples while "focusing the effort" on examples where the model makes gross errors. Hard Negative Mining Another popular approach is to do "hard negative mining"; that is, propagate gradients only for part of the training examples - the "hard" ones. see, e.g.: Abhinav Shrivastava, Abhinav Gupta and Ross Girshick Training Region-based Object Detectors with Online Hard Example Mining (CVPR 2016)
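For the first strategy, a minimal sketch of binary focal loss for the multilabel setting, assuming raw logits and multi-hot float targets; gamma and alpha are the usual hyperparameters from the paper:
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)  # the model's probability for the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()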
https://stackoverflow.com/questions/58206286/
RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool
As described in the issue I opened, I get the following error when running the Pytorch inverse-cooking model on CPU: RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool I have tried running the demo.ipynb file on both my laptop's Intel i7-4700HQ (8 threads) and my desktop's Ryzen 3700x. I was using Arch Linux on my laptop and Manjaro on my desktop. The model works fine when I run it on Google Colab's GPU. According to the demo.ipynb file the model should be able to run on CPU as well. Does anyone know if I have to tweak any parameters in order to make it work?
As stated by @iacolippo in the comments section and by myDennisCode, the problem really was dependency versions. I had torchvision==0.4.0 (which confused me) and torch==1.2.0. To fix the problem, simply install torch==0.4.1 and torchvision==0.2.1.
https://stackoverflow.com/questions/58206559/
How to use torchvision.transforms for data augmentation of segmentation task in Pytorch?
I am a little bit confused about the data augmentation performed in PyTorch. Because we are dealing with segmentation tasks, the image and its mask need the same data augmentation, but some of the transforms are random, such as random rotation. Keras provides a random seed to guarantee that data and mask do the same operation, as shown in the following code: data_gen_args = dict(featurewise_center=True, featurewise_std_normalization=True, rotation_range=25, horizontal_flip=True, vertical_flip=True) image_datagen = ImageDataGenerator(**data_gen_args) mask_datagen = ImageDataGenerator(**data_gen_args) seed = 1 image_generator = image_datagen.flow(train_data, seed=seed, batch_size=1) mask_generator = mask_datagen.flow(train_label, seed=seed, batch_size=1) train_generator = zip(image_generator, mask_generator) I didn't find a similar description in the official Pytorch documentation, so I don't know how to ensure that data and mask can be processed synchronously. Pytorch does provide such a function, but I want to apply it to a custom Dataloader. For example: def __getitem__(self, index): img = np.zeros((self.im_ht, self.im_wd, channel_size)) mask = np.zeros((self.im_ht, self.im_wd, channel_size)) temp_img = np.load(Image_path + '{:0>4}'.format(self.patient_index[index]) + '.npy') temp_label = np.load(Label_path + '{:0>4}'.format(self.patient_index[index]) + '.npy') for i in range(channel_size): img[:,:,i] = temp_img[self.count[index] + i] mask[:,:,i] = temp_label[self.count[index] + i] if self.transforms: img = np.uint8(img) mask = np.uint8(mask) img = self.transforms(img) mask = self.transforms(mask) return img, mask In this case, img and mask will be transformed separately, because some operations such as random rotation are random, so the correspondence between mask and image may be changed. In other words, the image may have been rotated but the mask may not have been.
EDIT 1 I used the method in augmentations.py, but I got an error: Traceback (most recent call last): File "test_transform.py", line 87, in <module> for batch_idx, image, mask in enumerate(train_loader): File "/home/dirk/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 314, in __next__ batch = self.collate_fn([self.dataset[i] for i in indices]) File "/home/dirk/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 314, in <listcomp> batch = self.collate_fn([self.dataset[i] for i in indices]) File "/home/dirk/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 103, in __getitem__ return self.dataset[self.indices[idx]] File "/home/dirk/home/data/dirk/segmentation_unet_pytorch/data.py", line 164, in __getitem__ img, mask = self.transforms(img, mask) File "/home/dirk/home/data/dirk/segmentation_unet_pytorch/augmentations.py", line 17, in __call__ img, mask = a(img, mask) TypeError: __call__() takes 2 positional arguments but 3 were given This is my code for __getitem__(): data_transforms = { 'train': Compose([ RandomHorizontallyFlip(), RandomRotate(degree=25), transforms.ToTensor() ]), } train_set = DatasetUnetForTestTransform(fold=args.fold, random_index=args.random_index,transforms=data_transforms['train']) # __getitem__ in class DatasetUnetForTestTransform def __getitem__(self, index): img = np.zeros((self.im_ht, self.im_wd, channel_size)) mask = np.zeros((self.im_ht, self.im_wd, channel_size)) temp_img = np.load(Label_path + '{:0>4}'.format(self.patient_index[index]) + '.npy') temp_label = np.load(Label_path + '{:0>4}'.format(self.patient_index[index]) + '.npy') temp_img, temp_label = crop_data_label_from_0(temp_img, temp_label) for i in range(channel_size): img[:,:,i] = temp_img[self.count[index] + i] mask[:,:,i] = temp_label[self.count[index] + i] if self.transforms: img = T.ToPILImage()(np.uint8(img)) mask = T.ToPILImage()(np.uint8(mask)) img, mask = self.transforms(img, mask) img = T.ToTensor()(img).copy() mask = T.ToTensor()(mask).copy() return img, mask EDIT 2 I found that after ToTensor, the dice between the same labels becomes 255 instead of 1; how do I fix it? # Dice computation def DSC_computation(label, pred): pred_sum = pred.sum() label_sum = label.sum() inter_sum = np.logical_and(pred, label).sum() return 2 * float(inter_sum) / (pred_sum + label_sum) Feel free to ask if more code is needed to explain the problem.
torchvision also provides similar functions (see the documentation). Here is a simple example, import torchvision from torchvision import transforms trans = transforms.Compose([transforms.CenterCrop((178, 178)), transforms.Resize(128), transforms.RandomRotation(20), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) dset = torchvision.datasets.MNIST(data_root, transform=trans) EDIT A brief example when customizing your own CelebA dataset. Note that, to apply transformations, you need to call the transform list in __getitem__. class CelebADataset(Dataset): def __init__(self, root, transforms=None, num=None): super(CelebADataset, self).__init__() self.img_root = os.path.join(root, 'img_align_celeba') self.attr_root = os.path.join(root, 'Anno/list_attr_celeba.txt') self.transforms = transforms df = pd.read_csv(self.attr_root, sep='\s+', header=1, index_col=0) #print(df.columns.tolist()) if num is None: self.labels = df.values self.img_name = df.index.values else: self.labels = df.values[:num] self.img_name = df.index.values[:num] def __getitem__(self, index): img = Image.open(os.path.join(self.img_root, self.img_name[index])) # only use blond_hair, eyeglass, male, smile indices = [9, 15, 20, 31] label = np.take(self.labels[index], indices) label[label==-1] = 0 if self.transforms is not None: img = self.transforms(img) return np.asarray(img), label def __len__(self): return len(self.labels) EDIT 2 I probably missed something at first glance. The main point of your problem is how to apply "the same" data preprocessing to img and labels. To my understanding, there is no available Pytorch built-in function. So, what I did before was to implement the augmentation myself. class RandomRotate(object): def __init__(self, degree): self.degree = degree def __call__(self, img, mask): rotate_degree = random.random() * 2 * self.degree - self.degree return img.rotate(rotate_degree, Image.BILINEAR), mask.rotate(rotate_degree, Image.NEAREST) Note that the input should be PIL format. See this for more information.
https://stackoverflow.com/questions/58215056/
Get total amount of free GPU memory and available using pytorch
I'm using Google Colab's free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the current GPU memory occupied, but how do we determine the total available memory using PyTorch?
In the recent version of PyTorch you can also use torch.cuda.mem_get_info: https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html#torch.cuda.mem_get_info
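For example:
import torch

free, total = torch.cuda.mem_get_info()  # in bytes, for the current CUDA device
print(f'free: {free / 1024**3:.2f} GiB out of {total / 1024**3:.2f} GiB')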
https://stackoverflow.com/questions/58216000/
How do I effectively parallelize AlphaZero on the GPU?
I'm implementing a version of AlphaZero (AlphaGo's most recent incarnation) to be applied to some other domain. The crux of the algorithm is a Monte Carlo Tree Search of the state space (CPU) interleaved with 'intuition' (probabilities) from a neural network in eval mode (GPU). The MCTS result is then used to train the neural network. I already parallelized the CPU execution by launching multiple processes which each build up their own tree. This is effective and has now led to a GPU bottleneck! (nvidia-smi showing the GPU at 100% all the time) I have devised 2 strategies to parallelize GPU evaluations, however both of them have problems. Each process evaluates the network only on batches from its own tree. In my initial naive implementation, this meant a batch size of 1. However, by refactoring some code and adding a 'virtual loss' to discourage (but not completely block) the same node from being picked twice, we can get larger batches of size 1-4. The problem here is that we cannot allow large delays until we evaluate the batch or accuracy suffers, so a small batch size is key here. Send the batches to a central "neural network worker" thread which combines and evaluates them. This could be done in a large batch of 32 or more, so the GPU could be used very efficiently. The problem here is that the tree workers send CUDA tensors 'round-trip' which is not supported by PyTorch. It is supported if I clone them first, but all that constant copying makes this approach slower than the first one. I was thinking maybe a clever batching scheme that I'm not seeing could make the first approach work. Using multiple GPUs could speed up the first approach too, but the kind of parallelism I want is not natively supported by PyTorch. Maybe keeping all tensors in the NN worker and only sending ids around could improve the second approach, however the difficulty here is how to synchronize effectively to get a large batch without making the CPU threads wait too long. I found next to no information on how AlphaZero or AlphaGo Zero were parallelized in their respective papers. I was able to find limited information online, however, which led me to improve the first approach. I would be grateful for any advice on this, particularly if there's some point or approach I missed.
Take TensorFlow Serving as an example: the prediction service can run in a different process, acting as a server that receives requests from the workers (each worker runs an MCTS process and sends prediction requests to the prediction service). We can keep a dict from each socket address to the socket itself.

The prediction service reads each query body and its header (which is different for each query), and we put those headers in a queue. The prediction runs once it has waited at most ~100 ms or once the current batch reaches the batch size. After the GPU gives the results, we loop over them, and since their order matches the headers in the queue, we can send back the responses via the sockets based on each header (looked up from the dict we kept above).

As each query comes with a different header, you cannot mix up the request, the response, and the socket. You can run TensorFlow Serving on a GPU while running multiple workers, keeping the batch size big enough to get larger throughput.

I also found a batching mechanism in this repo: https://github.com/richemslie/galvanise_zero
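A hedged, simplified in-process sketch of the same idea using Python queues instead of sockets (names such as MAX_BATCH, MAX_WAIT, request_q, response_qs, and the net argument are all assumptions, not part of any library API):

import queue
import time
import torch

MAX_BATCH, MAX_WAIT = 32, 0.1        # assumed tuning knobs
request_q = queue.Queue()            # workers put (worker_id, state_tensor) here
response_qs = {}                     # worker_id -> queue.Queue for replies

def prediction_worker(net, device):
    while True:
        headers, states = [], []
        wid, state = request_q.get()              # block for the first request
        headers.append(wid); states.append(state)
        deadline = time.time() + MAX_WAIT
        while len(states) < MAX_BATCH and time.time() < deadline:
            try:
                wid, state = request_q.get(timeout=max(0.0, deadline - time.time()))
                headers.append(wid); states.append(state)
            except queue.Empty:
                break
        with torch.no_grad():
            out = net(torch.stack(states).to(device)).cpu()
        for i, wid in enumerate(headers):         # same order as the queue
            response_qs[wid].put(out[i])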
https://stackoverflow.com/questions/58235790/
How to implement contractive autoencoder in Pytorch?
I'm trying to create a contractive autoencoder in Pytorch. I found this thread and tried according to that. This is the snippet I wrote based on the mentioned thread:

import datetime
import numpy as np
import torch
import torchvision
from torchvision import datasets, transforms
from torchvision.utils import save_image, make_grid
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline

dataset_train = datasets.MNIST(root='MNIST', train=True, transform=transforms.ToTensor(), download=True)
dataset_test = datasets.MNIST(root='MNIST', train=False, transform=transforms.ToTensor(), download=True)

batch_size = 128
num_workers = 2
dataloader_train = torch.utils.data.DataLoader(dataset_train, batch_size=batch_size,
                                               shuffle=True, num_workers=num_workers, pin_memory=True)
dataloader_test = torch.utils.data.DataLoader(dataset_test, batch_size=batch_size,
                                              num_workers=num_workers, pin_memory=True)

def view_images(imgs, labels, rows=4, cols=11):
    imgs = imgs.detach().cpu().numpy().transpose(0, 2, 3, 1)
    fig = plt.figure(figsize=(8, 4))
    for i in range(imgs.shape[0]):
        ax = fig.add_subplot(rows, cols, i + 1, xticks=[], yticks=[])
        ax.imshow(imgs[i].squeeze(), cmap='Greys_r')
        ax.set_title(labels[i].item())

# now let's view some
imgs, labels = next(iter(dataloader_train))
view_images(imgs, labels, 13, 10)

class Contractive_AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(784, 512)
        self.decoder = nn.Linear(512, 784)

    def forward(self, input):
        # flatten the input
        shape = input.shape
        input = input.view(input.size(0), -1)
        output_e = F.relu(self.encoder(input))
        output = F.sigmoid(self.decoder(output_e))
        output = output.view(*shape)
        return output_e, output

def loss_function(output_e, outputs, imgs, device):
    output_e.backward(torch.ones(output_e.size()).to(device), retain_graph=True)
    criterion = nn.MSELoss()
    assert outputs.shape == imgs.shape, f'outputs.shape : {outputs.shape} != imgs.shape : {imgs.shape}'

    imgs.grad.requires_grad = True
    loss1 = criterion(outputs, imgs)
    print(imgs.grad)
    loss2 = torch.mean(pow(imgs.grad, 2))
    loss = loss1 + loss2
    return loss

epochs = 50
interval = 2000
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Contractive_AutoEncoder().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)

for e in range(epochs):
    for i, (imgs, labels) in enumerate(dataloader_train):
        imgs = imgs.to(device)
        labels = labels.to(device)

        outputs_e, outputs = model(imgs)
        loss = loss_function(outputs_e, outputs, imgs, device)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if i % interval:
            print('')
            print(f'epoch/epochs: {e}/{epochs} loss : {loss.item():.4f} ')

For the sake of brevity I just used one layer for the encoder and the decoder. It should work regardless of the number of layers in either of them, obviously!

But the catch here is, aside from the fact that I don't know if this is the correct way of doing this (calculating gradients with respect to the input), I get an error which makes the former solution wrong/not applicable.

That is:

imgs.grad.requires_grad = True

produces the error:

AttributeError : 'NoneType' object has no attribute 'requires_grad'

I also tried the second method suggested in that thread, which is as follows:

class Contractive_Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(784, 512)

    def forward(self, input):
        # flatten the input
        input = input.view(input.size(0), -1)
        output_e = F.relu(self.encoder(input))
        return output_e

class Contractive_Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.decoder = nn.Linear(512, 784)

    def forward(self, input):
        # flatten the input
        output = F.sigmoid(self.decoder(input))
        output = output.view(-1, 1, 28, 28)
        return output

epochs = 50
interval = 2000
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model_enc = Contractive_Encoder().to(device)
model_dec = Contractive_Decoder().to(device)

optimizer = optim.Adam([{"params": model_enc.parameters()},
                        {"params": model_dec.parameters()}], lr=0.001)
optimizer_cond = optim.Adam(model_enc.parameters(), lr=0.001)

criterion = nn.MSELoss()

for e in range(epochs):
    for i, (imgs, labels) in enumerate(dataloader_train):
        imgs = imgs.to(device)
        labels = labels.to(device)

        outputs_e = model_enc(imgs)
        outputs = model_dec(outputs_e)
        loss_rec = criterion(outputs, imgs)
        optimizer.zero_grad()
        loss_rec.backward()
        optimizer.step()

        imgs.requires_grad_(True)
        y = model_enc(imgs)
        optimizer_cond.zero_grad()
        y.backward(torch.ones(imgs.view(-1, 28*28).size()))

        imgs.grad.requires_grad = True
        loss = torch.mean([pow(imgs.grad, 2)])
        optimizer_cond.zero_grad()
        loss.backward()
        optimizer_cond.step()

        if i % interval:
            print('')
            print(f'epoch/epochs: {e}/{epochs} loss : {loss.item():.4f} ')

but I face the error:

RuntimeError: invalid gradient at index 0 - got [128, 784] but expected shape compatible with [128, 512]

How should I go about this in Pytorch?
Summary

The final implementation for contractive loss that I wrote is as follows:

def loss_function(output_e, outputs, imgs, lamda=1e-4, device=torch.device('cuda')):
    criterion = nn.MSELoss()
    assert outputs.shape == imgs.shape, f'outputs.shape : {outputs.shape} != imgs.shape : {imgs.shape}'

    loss1 = criterion(outputs, imgs)
    output_e.backward(torch.ones(output_e.size()).to(device), retain_graph=True)
    # Frobenius norm: the square root of the sum of the squares of all
    # elements in the Jacobian matrix
    loss2 = torch.sqrt(torch.sum(torch.pow(imgs.grad, 2)))
    imgs.grad.data.zero_()
    loss = loss1 + (lamda * loss2)
    return loss

and inside the training loop you need to do:

for e in range(epochs):
    for i, (imgs, labels) in enumerate(dataloader_train):
        imgs = imgs.to(device)
        labels = labels.to(device)

        imgs.retain_grad()
        imgs.requires_grad_(True)

        outputs_e, outputs = model(imgs)
        loss = loss_function(outputs_e, outputs, imgs, lam, device)

        imgs.requires_grad_(False)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f'epoch/epochs: {e}/{epochs} loss: {loss.item():.4f}')

Full explanation

As it turns out, and as @akshayk07 rightfully pointed out in the comments, the implementation found on the Pytorch forum was wrong in multiple places. Notably, it wasn't implementing the actual contractive loss that was introduced in the paper Contractive Auto-Encoders: Explicit Invariance During Feature Extraction! And aside from that, the implementation wouldn't work at all, for obvious reasons that will be explained in a moment.

The changes are obvious, so I'll try to explain what's going on here. First of all, note that imgs is not a leaf node, so the gradients would not be retained in the image's .grad attribute. In order to retain gradients for non-leaf nodes, you should use retain_grad(); .grad is only populated for leaf tensors by default. Also, imgs.retain_grad() should be called before doing forward(), as it instructs autograd to store grads into non-leaf nodes.

Update

Thanks to @Michael for pointing out that the correct calculation of the Frobenius norm is actually (from ScienceDirect):

the square root of the sum of the squares of all the matrix entries

and not

the square root of the sum of the absolute values of all the matrix entries

as explained here.
https://stackoverflow.com/questions/58249160/
Size Mismatch using pytorch when trying to train data
I am really new to pytorch and just trying to use my own dataset to do a simple Linear Regression Model. I am only using the numeric values as inputs, too.

I have imported the data from the CSV:

dataset = pd.read_csv('mlb_games_overview.csv')

I have split the data into four parts X_train, X_test, y_train, y_test:

X = dataset.drop(['date', 'team', 'runs', 'win'], 1)
y = dataset['win']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=True)

I have converted the data to pytorch tensors:

X_train = torch.from_numpy(np.array(X_train))
X_test = torch.from_numpy(np.array(X_test))
y_train = torch.from_numpy(np.array(y_train))
y_test = torch.from_numpy(np.array(y_test))

I have created a LinearRegressionModel:

class LinearRegressionModel(torch.nn.Module):
    def __init__(self):
        super(LinearRegressionModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

I have initialized the optimizer and the loss function:

criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Now when I start to train the data I get the runtime size-mismatch error:

EPOCHS = 500
for epoch in range(EPOCHS):
    pred_y = model(X_train)  # RUNTIME ERROR HERE
    loss = criterion(pred_y, y_train)
    optimizer.zero_grad()  # zero out gradients to update parameters correctly
    loss.backward()  # backpropagation
    optimizer.step()  # update weights
    print('epoch {}, loss {}'.format(epoch, loss.data[0]))

Error log:

RuntimeError                              Traceback (most recent call last)
<ipython-input-40-c0474231d515> in <module>
      1 EPOCHS = 500
      2 for epoch in range(EPOCHS):
----> 3     pred_y = model(X_train)
      4     loss = criterion(pred_y, y_train)
      5     optimizer.zero_grad() # zero out gradients to update parameters correctly
RuntimeError: size mismatch, m1: [3540 x 8], m2: [1 x 1] at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensorMath.cpp:752
In your Linear Regression model, you have:

self.linear = torch.nn.Linear(1, 1)

But your training data (X_train) shape is 3540 x 8, which means you have 8 features representing each input example. So, you should define the linear layer as follows:

self.linear = torch.nn.Linear(8, 1)

A linear layer in PyTorch has parameters W and b. If you set in_features to 8 and out_features to 1, then the shape of the W matrix will be 1 x 8 and the length of the b vector will be 1.

Since your training data shape is 3540 x 8, you can perform the following operation:

linear_out = X_train @ W.T + b

I hope it clarifies your confusion.
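As a quick sanity check of the shapes, a standalone sketch with random data (not your CSV):

import torch
import torch.nn as nn

linear = nn.Linear(8, 1)          # in_features=8 matches the 8 feature columns
X_train = torch.randn(3540, 8)    # same shape as in the error message
print(linear(X_train).shape)      # torch.Size([3540, 1])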
https://stackoverflow.com/questions/58250257/
Pytorch variable changes numpy variable even though memory addresses are not same
I have the following bit of code:

a = torch.ones(10)
b = a.numpy()
a[0] += 1
print(a, b)

Both variables essentially hold the same values even though I only modified a. However, I checked the memory addresses of a and b using hex(id(a)) and they're different. So in this case, is b a pointer to a? What's going on?
Actually the raw data is at the same address. You can check like this:

a.storage().data_ptr()
Out[16]: 93866530123392

b.__array_interface__['data']
Out[17]: (93866530123392, False)
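If sharing is not what you want (an assumption about your goal), make an explicit copy; a small sketch:

import torch

a = torch.ones(10)
b = a.clone().numpy()     # copies the underlying data
# or: b = a.numpy().copy()
a[0] += 1
print(a[0].item(), b[0])  # 2.0 1.0 -> b is now independent of a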
https://stackoverflow.com/questions/58251122/
Create self attributes in python using for loop when using pytorch
Within pytorch, creating layers can be semi-automated, thus the reason for using a for loop. One of the main issues is that these layers cannot be stored within a list or dictionary, or else backpropagation will not work. Thus the reason for a workaround.

Within the object, assigning new self attributes: how do I replace this

self.res1 = 1
self.res2 = 2
self.res3 = 3

with this?

for i in range(2):
    res_name = 'res' + str(i+1)
    self.res_name = i

Now that I have created objects this way, how can I access them in the same way? For example, if we assume self.res_name is now an object:

for i in range(2):
    res_name = 'res' + str(i+1)
    out = self.res_name(out)
You probably should use a dict or list instead. But if you really want this for some reason, you can try setattr(x, attr, 'magic'). Thus, in your case, it's

for i in range(1, 4):
    res_name = 'res' + str(i)
    setattr(self, res_name, i)

See this related question for more info.
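For the read side of the question, getattr is the counterpart of setattr. A small sketch, assuming the attributes hold layers (callables) as in the question's second loop. Note also that, in PyTorch specifically, layers stored in an nn.ModuleList or nn.ModuleDict are registered properly, so backpropagation does work with those containers:

for i in range(1, 4):
    res_name = 'res' + str(i)
    out = getattr(self, res_name)(out)  # look up self.res1, self.res2, ... and call it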
https://stackoverflow.com/questions/58251275/
Efficiency of .to(device) in pytorch
For achieving maximum efficiency, which command would be more efficient?

x = torch.randn(100, 100).to(device)
x = torch.randn(100, 100, device=device)

Is there a benefit to using one versus the other when doing heavy tensor operations? I was told one of them is less efficient, but can't properly figure out how to compare the two. I'm assuming the second is better, as it directly creates the tensor on the device instead of first creating a tensor and then having to transfer it to the device.
well, I'm no expert in this. Here's the timing result for you.

%timeit torch.randn(100, 100).to(device)
The slowest run took 12.65 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 129 µs per loop

%timeit torch.randn(100, 100, device=device)
The slowest run took 88.54 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 11.6 µs per loop

P.S. I executed both these commands on Google Colab.
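One caveat: %timeit with CUDA can be misleading because kernel launches are asynchronous. A hedged manual benchmark with explicit synchronization (loop count is arbitrary):

import time
import torch

device = torch.device('cuda')
torch.cuda.synchronize()                 # make sure prior work is done
start = time.perf_counter()
for _ in range(1000):
    x = torch.randn(100, 100, device=device)
torch.cuda.synchronize()                 # wait for all kernels to finish
print((time.perf_counter() - start) / 1000, "s per call")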
https://stackoverflow.com/questions/58251453/
Error in ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
The input image size is 512*512. To suit the input of ResNet, in the dataloader I used:

_img = Image.open(self.images[index]).convert('RGB')

I used resnet50 as my network's backbone, without the fc. The output shape is [4, 2048, 16, 16]. Then I used two (conv-bn-relu) blocks and an interpolate:

def forward(self, input):
    x = self.backbone(input)
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.conv2(x)
    x = self.bn2(x)
    x = self.relu(x)
    x = F.interpolate(x, size=[512, 512], mode='bilinear', align_corners=True)
    return x

The training part:

self.criterion = nn.CrossEntropyLoss()
if self.args.cuda:
    image, target = image.cuda(), target.cuda()
self.scheduler(self.optimizer, i, epoch, self.best_pred)
self.optimizer.zero_grad()
output = self.model(image)
loss = self.criterion(output, target.long())
loss.backward()

But this error occurs:

File "E:/python_workspace/1006/train.py", line 135, in training
    loss = self.criterion(output, target.long())
  File "E:\python_workspace\1006\utils\loss.py", line 28, in CrossEntropyLoss
    loss = criterion(logit, target.long())
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 916, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "E:\anaconda3\lib\site-packages\torch\nn\functional.py", line 1995, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "E:\anaconda3\lib\site-packages\torch\nn\functional.py", line 1826, in nll_loss
    ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\w\1\s\tmp_conda_3.6_045031\conda\conda-bld\pytorch_1565412750030\work\aten\src\THNN/generic/SpatialClassNLLCriterion.c:111

image.shape is [4, 3, 512, 512], dtype is torch.float32
target.shape is [4, 512, 512], dtype is torch.float32
output.shape is [4, 3, 512, 512], dtype is torch.float32

The target images all only have three different colors, so I set the output to 3 channels, and the target image's mode is P. Where might the problem be in my code?
Judging by the sizes of your tensors, your batch_size=4. You are trying to predict one of three labels per pixel, that is n_classes=3. The error you got:

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.

means that the target.long() you provide to your loss function has values either negative or larger than n_classes. Check the way you read the ground truth labels. If it's a P type image, you need to read it as such and not convert it to RGB values.

PS,
Do not use align_corners=True in F.interpolate, it introduces distortions.
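A hedged sketch of reading a 'P'-mode ground-truth mask ('mask.png' is a placeholder path): keep the palette indices as the class labels and do not call .convert('RGB'):

from PIL import Image
import numpy as np
import torch

mask = Image.open('mask.png')                     # stays in 'P' mode
target = torch.from_numpy(np.array(mask)).long()  # palette indices, e.g. {0, 1, 2}
assert target.min() >= 0 and target.max() < 3     # sanity check against n_classes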
https://stackoverflow.com/questions/58258255/
What does a.sub_(lr*a.grad) actually do?
I am doing the fast.ai course, the SGD lesson, and I cannot understand.....

This subtracts the coefficients by (learning rate * gradient)... But why is it necessary to subtract?

Here is the code:

def update():
    y_hat = x@a
    loss = mse(y_hat, y)
    if t % 10 == 0: print(loss)
    loss.backward()
    with torch.no_grad():
        a.sub_(lr * a.grad)
Look at the image. It shows the loss function J as a function of the parameter W. Here it is a simplified representation, with W being the only parameter. So, for a convex loss function, the curve looks as shown.

Note that the learning rate is positive. On the left side, the gradient (the slope of the line tangent to the curve at that point) is negative, so the product of the learning rate and gradient is negative. Thus, subtracting the product from W will actually increase W (since two negatives make a positive). In this case, this is good because the loss decreases.

On the other hand (on the right side), the gradient is positive, so the product of the learning rate and gradient is positive. Thus, subtracting the product from W reduces W. In this case also, this is good because the loss decreases.

We can extend this same reasoning to a larger number of parameters (the graph will be higher dimensional and won't be easy to visualize, which is why we took a single parameter W initially) and to other loss functions (even non-convex ones, though it won't always converge to the global minimum, but definitely to the nearest local minimum).

Note: This explanation can be found in Andrew Ng's courses on deeplearning.ai, but I couldn't find a direct link, so I wrote this answer.
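A tiny numeric illustration of the update rule (values are made up):

lr = 0.1

W, grad = 3.0, 4.0     # right side: positive slope
W = W - lr * grad      # 3.0 - 0.4 = 2.6  -> W decreases, toward the minimum

W, grad = -3.0, -4.0   # left side: negative slope
W = W - lr * grad      # -3.0 - (-0.4) = -2.6 -> W increases, toward the minimum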
https://stackoverflow.com/questions/58265580/
Learning rate & gradient descent difference?
What is the difference between the two? Both serve to reach the minimum point (lowest loss) of a function, for example. I understand (I think) that the learning rate is multiplied by the gradient (slope) to make the gradient descent step, but is that so? Am I missing something? What is the difference between lr and gradient? Thanks
Deep learning neural networks are trained using the stochastic gradient descent algorithm. Stochastic gradient descent is an optimization algorithm that estimates the error gradient for the current state of the model using examples from the training dataset, then updates the weights of the model using the back-propagation of errors algorithm, referred to as simply backpropagation. The amount that the weights are updated during training is referred to as the step size or the "learning rate."

Specifically, the learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0. The learning rate controls how quickly the model is adapted to the problem. Smaller learning rates require more training epochs given the smaller changes made to the weights each update, whereas larger learning rates result in rapid changes and require fewer training epochs. A learning rate that is too large can cause the model to converge too quickly to a suboptimal solution, whereas a learning rate that is too small can cause the process to get stuck.

The challenge of training deep learning neural networks involves carefully selecting the learning rate. It may be the most important hyperparameter for the model.

The learning rate is perhaps the most important hyperparameter. If you have time to tune only one hyperparameter, tune the learning rate. (Page 429, Deep Learning, 2016.)

For more on what the learning rate is and how it works, see the post: How to Configure the Learning Rate Hyperparameter When Training Deep Learning Neural Networks

Also you can refer to here: Understand the Impact of Learning Rate on Neural Network Performance
https://stackoverflow.com/questions/58266988/
What is the meaning of a 'mini-batch' in deep learning?
I'm taking the fast-ai course, and in "Lesson 2 - SGD" it says:

Mini-batch: a random bunch of points that you use to update your weights

And it also says that gradient descent uses mini-batches.

What is a mini-batch? What's the difference between a mini-batch and a regular batch?
Both are approaches to gradient descent. But in a batch gradient descent you process the entire training set in one iteration. Whereas, in a mini-batch gradient descent you process a small subset of the training set in each iteration. Also compare stochastic gradient descent, where you process a single example from the training set in each iteration. Another way to look at it: they are all examples of the same approach to gradient descent with a batch size of m and a training set of size n. For stochastic gradient descent, m=1. For batch gradient descent, m = n. For mini-batch, m=b and b < n, typically b is small compared to n. Mini-batch adds the question of determining the right size for b, but finding the right b may greatly improve your results.
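A minimal sketch of the three regimes in code (sizes are made up):

import torch

n = 1000                      # training set size
X = torch.randn(n, 10)

for b in (1, 32, n):          # stochastic, mini-batch, full batch
    perm = torch.randperm(n)  # shuffle once per epoch
    for i in range(0, n, b):
        batch = X[perm[i:i + b]]   # one batch of size b (the last may be smaller)
        # ... forward pass, loss, backward, optimizer step on `batch`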
https://stackoverflow.com/questions/58269460/
In distributed computing, what are world size and rank?
I've been reading through some documentation and example code with the end goal of writing scripts for distributed computing (running PyTorch), but the concepts confuse me.

Let's assume that we have a single node with 4 GPUs, and we want to run our script on those 4 GPUs (i.e. one process per GPU). In such a scenario, what are the world size and the rank? I often find the explanation for world size: Total number of processes involved in the job, so I assume that is four in our example, but what about rank?

To explain it further, another example with multiple nodes and multiple GPUs could be useful, too.
These concepts are related to parallel computing. It would be helpful to learn a little about parallel computing, e.g., MPI.

You can think of world as a group containing all the processes for your distributed training. Usually, each GPU corresponds to one process. Processes in the world can communicate with each other, which is why you can train your model distributedly and still get the correct gradient update. So world size is the number of processes for your training, which is usually the number of GPUs you are using for distributed training.

Rank is the unique ID given to a process, so that other processes know how to identify a particular process. Local rank is a unique local ID for processes running on a single node; this is where my view differs from @zihaozhihao.

Let's take a concrete example. Suppose we run our training on 2 servers (some articles also call them nodes) and each server/node has 4 GPUs. The world size is 4*2=8. The ranks for the processes will be [0, 1, 2, 3, 4, 5, 6, 7]. In each node, the local rank will be [0, 1, 2, 3].

I have also written a post about MPI collectives and basic concepts. The link is here.
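A sketch of how these numbers typically relate under the common launcher convention (an assumption; e.g. torch.distributed.launch follows it):

nodes, gpus_per_node = 2, 4
world_size = nodes * gpus_per_node                    # 8

for node_rank in range(nodes):
    for local_rank in range(gpus_per_node):
        rank = node_rank * gpus_per_node + local_rank  # global rank
        print(f"node {node_rank}, local_rank {local_rank} -> rank {rank}")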
https://stackoverflow.com/questions/58271635/
Problem when converting Pytorch Image Classifier to mlmodel: Returns same softmax output regardless of img
I trained and tested an image classifier (Resnet34, Fast.ai, 3 classes) using pytorch and learn.predict() works as expected. When I convert pytorch -> onnx -> mlmodel it predicts the same softmax values regardless of the image I submit. Here's my pytorch model: Sequential( (0): Sequential( (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (4): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (3): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) ) ) (6): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (3): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (4): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (5): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (7): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): 
BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (1): Sequential( (0): AdaptiveConcatPool2d( (ap): AdaptiveAvgPool2d(output_size=1) (mp): AdaptiveMaxPool2d(output_size=1) ) (1): Flatten() (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): Dropout(p=0.25, inplace=False) (4): Linear(in_features=1024, out_features=512, bias=True) (5): ReLU(inplace=True) (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (7): Dropout(p=0.5, inplace=False) (8): Linear(in_features=512, out_features=3, bias=True) ) ) To convert it to .onnx, I need to first normalize the image data and flatten it. I found this tutorial, which worked on a previous version of fastai/onnx-coreml. I do this with the following class: sz = (960,540) class ImageScale(nn.Module): def __init__(self): super().__init__() self.denominator = torch.full((3, sz[0], sz[1]), 255.0, device=torch.device("cuda")) def forward(self, x): return torch.div(x, self.denominator).unsqueeze(0) To construct the entire model, I concatenate my ImageScale layer, the model, and a softmax function like this: final_model = [ImageScale()] + [learn.model] + [nn.Softmax(dim=-1)] final_model = nn.Sequential(*final_model) Which ends up looking like this: Sequential( (0): ImageScale() (1): Sequential( (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace=True) (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (4): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) 
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (3): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (6): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (3): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (4): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 
1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (5): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (7): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (2): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (2): Sequential( (0): AdaptiveConcatPool2d( (ap): AdaptiveAvgPool2d(output_size=1) (mp): AdaptiveMaxPool2d(output_size=1) ) (1): Flatten() (2): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): Dropout(p=0.25, inplace=False) (4): Linear(in_features=1024, out_features=512, bias=True) (5): ReLU(inplace=True) (6): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (7): Dropout(p=0.5, inplace=False) (8): Linear(in_features=512, out_features=3, bias=True) ) (3): Softmax(dim=-1) ) I convert to .onnx like this: dummy_input = Variable(torch.randn(3, sz[0], sz[1])).cuda() torch.onnx.export(final_model, dummy_input, 'model.onnx', input_names = ['input'], output_names =['output'], verbose=True) And I convert from .onnx to .mlmodel like this: model_file = open('model.onnx', 'rb') model_proto = onnx_pb.ModelProto() model_proto.ParseFromString(model_file.read()) coreml_model = convert(model_proto, image_input_names = ['image'], mode='classifier', class_labels="labels.txt") coreml_model.save('model.mlmodel') When I call predict using coremltools, I get the same output regardless of the image I input: import coremltools from PIL import Image model = coremltools.models.MLModel('model.mlmodel') img = Image.open('img.jpg') preds = model.predict({'image': img}) # preds: {'output': {'class1': 0.011085365898907185, 'class2': 0.9794686436653137, 'class2': 0.009446004405617714}, 'classLabel': 'class2'} img2 = Image.open('img2.jpg') preds = model.predict({'image': img2}) # preds: {'output': {'class1': 0.011085365898907185, 'class2': 0.9794686436653137, 
'class2': 0.009446004405617714}, 'classLabel': 'class2'}

Possible issues:
1. I'm not setting up the Sequential correctly before converting.
2. Core ML or ONNX cannot handle non-square images.

I've tried a bunch of different inputs, but keep getting the same output, so any help would be much appreciated! Here are screenshots of my model's head and tail from Netron:

Head: (screenshot)
Tail: (screenshot)
I added target_ios=13 to my list of parameters (which required updating to macOS version 10.15) and it worked.

from onnx_coreml import convert
ml_model = convert(model='model.onnx', target_ios='13')
https://stackoverflow.com/questions/58276161/
AttributeError: 'list' object has no attribute 'dim' when predicting in pytorch
I'm currently loading in a model and 11 input values. Then I'm sending those 11 values into a tensor and attempting to predict outputs. Here is my code:

# coding: utf-8

# In[5]:

import torch
import torchvision
from torchvision import transforms, datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as utils
import numpy as np

data_np = np.loadtxt('input_preds.csv', delimiter=',')
train_ds = utils.TensorDataset(torch.tensor(data_np, dtype=torch.float32).view(-1, 11))
trainset = torch.utils.data.DataLoader(train_ds, batch_size=1, shuffle=True)

# setting device on GPU if available, else CPU, replace .cuda() with .to(device)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        #self.bn = nn.BatchNorm2d(11)
        self.fc1 = nn.Linear(11, 22)
        self.fc2 = nn.Linear(22, 44)
        self.fc3 = nn.Linear(44, 22)
        self.fc4 = nn.Linear(22, 11)

    def forward(self, x):
        #x = x.view(-1, 11)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        #return F.log_softmax(x, dim=1)
        return x

model1 = torch.load('./1e-2')
model2 = torch.load('./1e-3')

for data in trainset:
    X = data
    X = X
    output = model1(X).to(device)
    print(output)

However, I get this error:

Traceback (most recent call last):
  File "inference.py", line 53, in <module>
    output = model1(X).to(device)
  File "C:\Users\Happy\Miniconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "inference.py", line 40, in forward
    x = F.relu(self.fc1(x))
  File "C:\Users\Happy\Miniconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Happy\Miniconda3\envs\torch\lib\site-packages\torch\nn\modules\linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\Happy\Miniconda3\envs\torch\lib\site-packages\torch\nn\functional.py", line 1022, in linear
    if input.dim() == 2 and bias is not None:
AttributeError: 'list' object has no attribute 'dim'

I've tried to convert the batch to a numpy array but that didn't help. How do I resolve this error? Thank you for your help.
It looks like your X (data) is a list of tensors, while a PyTorch tensor is expected. Try X = torch.stack(X).to(device) before sending to the model.
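For context (reusing the names from the question): a DataLoader over a TensorDataset yields a list of tensors per batch, here with a single element, so unwrapping it also works:

for data in trainset:
    X = data[0].to(device)   # unwrap the one-element batch list
    output = model1(X)
    print(output)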
https://stackoverflow.com/questions/58278247/
Word embeddings with multiple categorial features for a single word
I'm looking for a method to implement a word embedding network with LSTM layers in Pytorch such that the input to the nn.Embedding layer has a different form than vectors of word IDs. Each word in my case has a corresponding vector, and the sentence in my corpus is consequently a vector of vectors. So, for example, I may have the word "King" with vector [500, 3, 18] where 500 is the word ID, 3 is the word color, and 18 is the font size, etc. The embedding layer's role here is to do some automatic feature reduction/extraction. How can I feed the embedding layer with data of such form? Or do you have any better suggestions?
I am not sure what you mean by a word2vec algorithm with LSTM, because the original word2vec algorithm does not use LSTMs and uses the embeddings directly to predict surrounding words.

Anyway, it seems you have multiple categorical variables to embed. In the example, they are word ID, color ID, and font size (if you round it to integer values). You have two options:

1. You can create new IDs for all possible combinations of your features and use nn.Embedding for them. There is however a risk that most of the IDs will appear too sparsely in the data to learn reliable embeddings.

2. Have a separate embedding for each of the features. Then, you will need to combine the embeddings for the features together. You have basically three options for how to do it:

- Just concatenate the embeddings and let the following layers of the network resolve the combination.

- Choose the same embedding dimension for all features and average them. (I would start with this one probably.)

- Add an nn.Linear layer (or two, the first one with ReLU activation and the second without activation) that will explicitly combine the embeddings for your features.

If you need to include continuous features that cannot be discretized, you can always take the continuous features, apply a layer or two on top of them and combine them with the embeddings of the discrete features.
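A hedged sketch of option 2 with concatenation plus a linear combiner (vocabulary sizes and the class name are made up; the output would feed an LSTM):

import torch
import torch.nn as nn

class MultiFeatureEmbedding(nn.Module):
    def __init__(self, n_words=10000, n_colors=16, n_sizes=64, dim=50):
        super().__init__()
        self.word = nn.Embedding(n_words, dim)
        self.color = nn.Embedding(n_colors, dim)
        self.size = nn.Embedding(n_sizes, dim)
        self.combine = nn.Linear(3 * dim, dim)

    def forward(self, x):  # x: (batch, seq_len, 3) integer ids, e.g. [500, 3, 18]
        e = torch.cat([self.word(x[..., 0]),
                       self.color(x[..., 1]),
                       self.size(x[..., 2])], dim=-1)
        return torch.relu(self.combine(e))  # (batch, seq_len, dim)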
https://stackoverflow.com/questions/58281876/
What's the workaround for "ragged/jagged tensors" in PyTorch?
Tensorflow provides ragged tensors (https://www.tensorflow.org/guide/ragged_tensor). PyTorch however doesn't provide such a data structure. Is there a workaround to construct something similar in PyTorch?

import numpy as np
x = np.array([[0], [0, 1]])
print(x)
# [list([0]) list([0, 1])]

import tensorflow as tf
x = tf.ragged.constant([[0], [0, 1]])
print(x)
# <tf.RaggedTensor [[0], [0, 1]]>

import torch
# x = torch.Tensor([[0], [0, 1]])  # ValueError
PyTorch is implementing something called NestedTensors which seems to have pretty much the same purpose as RaggedTensors in Tensorflow. You can follow the RFC and progress here.
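Until NestedTensors land, a common workaround is padding plus an explicit mask (a sketch, assuming that representation suits your use case):

import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([0]), torch.tensor([0, 1])]
padded = pad_sequence(seqs, batch_first=True)    # tensor([[0, 0], [0, 1]])
lengths = torch.tensor([len(s) for s in seqs])   # tensor([1, 2])
mask = torch.arange(padded.size(1))[None, :] < lengths[:, None]  # valid positions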
https://stackoverflow.com/questions/58287925/
Convert 3D Tensor to 4D Tensor in Pytorch
I had difficulty finding information on reshaping in PyTorch; in Tensorflow it is quite easy. My tensor has shape torch.Size([3, 480, 480]). I want to convert it to a 4D tensor with shape [1, 3, 480, 480]. How do I do that?
You can use unsqueeze(). For example:

x = torch.zeros((4,4,4))  # Create 3D tensor
x = x.unsqueeze(0)        # Add dimension as the first axis (1,4,4,4)

I've seen a few people use indexing with None to add a singular dimension as well. For example:

x = torch.zeros((4,4,4))  # Create 3D tensor
print(x[None].shape)        # (1,4,4,4)
print(x[:,None,:,:].shape)  # (4,1,4,4)
print(x[:,:,None,:].shape)  # (4,4,1,4)
print(x[:,:,:,None].shape)  # (4,4,4,1)

Personally, I prefer unsqueeze(), but it's good to be familiar with both.
https://stackoverflow.com/questions/58296345/
How to change activation layer in Pytorch pretrained module?
How to change the activation layer of a Pytorch pretrained network? Here is my code:

print("All modules")
for child in net.children():
    if isinstance(child, nn.ReLU) or isinstance(child, nn.SELU):
        print(child)

print('Before changing activation')
for child in net.children():
    if isinstance(child, nn.ReLU) or isinstance(child, nn.SELU):
        print(child)
        child = nn.SELU()
        print(child)

print('after changing activation')
for child in net.children():
    if isinstance(child, nn.ReLU) or isinstance(child, nn.SELU):
        print(child)

Here is my output:

All modules
ReLU(inplace=True)
Before changing activation
ReLU(inplace=True)
SELU()
after changing activation
ReLU(inplace=True)
._modules solves the problem for me.

for name, child in net.named_children():
    if isinstance(child, nn.ReLU) or isinstance(child, nn.SELU):
        net._modules[name] = nn.SELU()
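One caveat: named_children() only iterates one level deep, while pretrained networks are usually nested. A hedged recursive variant (net is the pretrained network from the question):

import torch.nn as nn

def replace_relu(module):
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            module._modules[name] = nn.SELU()  # swap in place
        else:
            replace_relu(child)                # recurse into submodules

replace_relu(net)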
https://stackoverflow.com/questions/58297197/
Command line python and jupyter notebooks use two different versions of torch
On my conda environment, importing torch from command line Python and from a jupyter notebook yields two different results.

Command line Python:

$ source activate GNN
(GNN) $ python
>>> import torch
>>> print(torch.__file__)
/home/riccardo/.local/lib/python3.7/site-packages/torch/__init__.py
>>> print(torch.__version__)
0.4.1

Jupyter:

(GNN) $ jupyter notebook --no-browser --port=8890

import torch
print(torch.__file__)
/home/riccardo/.local/lib/python3.6/site-packages/torch/__init__.py
print(torch.__version__)
1.2.0+cu92

I tried the steps suggested in Conda environments not showing up in Jupyter Notebook:

$ conda install ipykernel
$ source activate GNN
(GNN) $ python -m ipykernel install --user --name GNN --display-name "Python (GNN)"
Installed kernelspec GNN in /home/riccardo/.local/share/jupyter/kernels/gnn

but that did not solve the problem.
You need to sort of make the Anaconda environment recognized in Jupyter using:

conda activate myenv
conda install -n myenv ipykernel
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

Replace myenv with the name of your environment. Later on, in your Jupyter Notebook, in the Select Kernel option, you will see this Python (myenv) option.
https://stackoverflow.com/questions/58301581/
Is there really no padding=same option for PyTorch's Conv2d?
I'm currently working on building a convolutional neural network (CNN) that will work on financial time series data. The input shape is (100, 40) - 100 time stamps by 40 features.

The CNN that I'm using uses asymmetric kernel sizes (i.e. 1 x 2 and 4 x 1) and also asymmetric strides (i.e. 1 x 2 for the 1 x 2 layers and 1 x 1 for the 4 x 1 layers). In order to keep the height dimension at 100, I needed to pad the data. In my research, I noticed that people who use TensorFlow or Keras simply use padding='same', but this option is apparently unavailable in PyTorch.

According to some answers in What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?, and also this answer on the PyTorch discussion forum, I can manually calculate how I need to pad my data, and use torch.nn.ZeroPad2d to solve the problem - since apparently normal torch.nn.Conv2d layers don't support asymmetric padding (I believe that the total padding I need is 3 in height and 0 in width).

I tried this code:

import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, kernel_size=(4, 1))
pad = nn.ZeroPad2d((0, 0, 2, 1))  # Add 2 to top and 1 to bottom.

x = torch.randint(low=0, high=9, size=(100, 40))
x = x.unsqueeze(0).unsqueeze(0)
y = pad(x)

x.shape  # (1, 1, 100, 40)
y.shape  # (1, 1, 103, 40)

print(conv(x.float()).shape)
print(conv(y.float()).shape)

# Output
# x -> (1, 1, 97, 40)
# y -> (1, 1, 100, 40)

It does work, in the sense that the data shape remains the same. However, is there really no padding='same' option available? Also, how can we decide which side to pad?
I had the same issue some time ago, so I implemented it myself using a ZeroPad2d layer, as you are trying to do. Here is the right formula:

from functools import reduce
from operator import __add__

kernel_sizes = (4, 1)

# Internal parameters used to reproduce Tensorflow "Same" padding.
# For some reason, padding dimensions are reversed wrt kernel sizes,
# first comes width then height in the 2D case.
conv_padding = reduce(__add__,
    [(k // 2 + (k - 2 * (k // 2)) - 1, k // 2) for k in kernel_sizes[::-1]])

pad = nn.ZeroPad2d(conv_padding)
conv = nn.Conv2d(1, 1, kernel_size=kernel_sizes)

print(x.shape)  # (1, 1, 103, 40)
print(conv(y.float()).shape)  # (1, 1, 103, 40)

Also, as mentioned by @akshayk07 and @Separius, I can confirm that it is the dynamic nature of pytorch that makes it hard. Here is a post about this point from a Pytorch developer.
https://stackoverflow.com/questions/58307036/
BERT sentence embedding by summing last 4 layers
I used Chris McCormick's tutorial on BERT using pytorch-pretrained-bert to get a sentence embedding as follows:

tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)

# Holds the list of 12 layer embeddings for each token
# Will have the shape: [# tokens, # layers, # features]
token_embeddings = []

# For each token in the sentence...
for token_i in range(len(tokenized_text)):
    # Holds 12 layers of hidden states for each token
    hidden_layers = []
    # For each of the 12 layers...
    for layer_i in range(len(encoded_layers)):
        # Lookup the vector for `token_i` in `layer_i`
        vec = encoded_layers[layer_i][batch_i][token_i]
        hidden_layers.append(vec)
    token_embeddings.append(hidden_layers)

Now, I am trying to get the final sentence embedding by summing the last 4 layers as follows:

summed_last_4_layers = [torch.sum(torch.stack(layer)[-4:], 0) for layer in token_embeddings]

But instead of getting a single torch vector of length 768, I get the following:

[tensor([-3.8930e+00, -3.2564e+00, -3.0373e-01, 2.6618e+00, 5.7803e-01, -1.0007e+00, -2.3180e+00, 1.4215e+00, 2.6551e-01, -1.8784e+00, -1.5268e+00, 3.6681e+00, ...., 3.9084e+00]), tensor([-2.0884e+00, -3.6244e-01, ....2.5715e+00]), tensor([ 1.0816e+00,...-4.7801e+00]), tensor([ 1.2713e+00,.... 1.0275e+00]), tensor([-6.6105e+00,..., -2.9349e-01])]

What did I get here? How do I pool the sum of the last four layers? Thank you!
You create a list using a list comprehension that iterates over token_embeddings. It is a list that contains one tensor per token - not one tensor per layer as you probably thought (judging from your for layer in token_embeddings). You thus get a list with a length equal to the number of tokens. For each token, you have a vector that is a sum of BERT embeddings from the last 4 layers.

It would be more efficient to avoid the explicit for loops and list comprehensions:

summed_last_4_layers = torch.stack(encoded_layers[-4:]).sum(0)

Now, the variable summed_last_4_layers contains the same data, but in the form of a single tensor of dimension: length of the sentence × 768.

To get a single (i.e., pooled) vector, you can do pooling over the first dimension of the tensor. Max-pooling or average-pooling might make much more sense in this case than summing all the token embeddings. When summing the values, vectors of differently long sentences are in different ranges and are not really comparable.
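For instance, continuing from summed_last_4_layers above, pooling over the token dimension gives one fixed 768-dim sentence vector:

sentence_embedding = summed_last_4_layers.mean(dim=0)      # average-pooling, shape (768,)
# or
sentence_embedding, _ = summed_last_4_layers.max(dim=0)    # max-pooling, shape (768,)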
https://stackoverflow.com/questions/58308257/