instruction | input | output | source
---|---|---|---|
RuntimeError: Unknown device when trying to run AlbertForMaskedLM on colab tpu | I am running the following code on colab taken from the example here: https://huggingface.co/transformers/model_doc/albert.html#albertformaskedlm
import os
import torch
import torch_xla
import torch_xla.core.xla_model as xm
assert os.environ['COLAB_TPU_ADDR']
dev = xm.xla_device()
from transformers import AlbertTokenizer, AlbertForMaskedLM
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2').to(dev)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
data = input_ids.to(dev)
outputs = model(data, masked_lm_labels=data)
loss, prediction_scores = outputs[:2]
I haven't done anything to the example code except move input_ids and model onto the TPU device using .to(dev). It seems everything is moved to the TPU no problem as when I input data I get the following output: tensor([[ 2, 10975, 15, 51, 1952, 25, 10901, 3]], device='xla:1')
However when I run this code I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-f756487db8f7> in <module>()
1
----> 2 outputs = model(data, masked_lm_labels=data)
3 loss, prediction_scores = outputs[:2]
9 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py in forward(self, hidden_states, attention_mask, head_mask)
277 attention_output = self.attention(hidden_states, attention_mask, head_mask)
278 ffn_output = self.ffn(attention_output[0])
--> 279 ffn_output = self.activation(ffn_output)
280 ffn_output = self.ffn_output(ffn_output)
281 hidden_states = self.full_layer_layer_norm(ffn_output + attention_output[0])
RuntimeError: Unknown device
Anyone know what's going on?
| Solution is here: https://github.com/pytorch/xla/issues/1909
Before calling model.to(dev), you need to call xm.send_cpu_data_to_device(model, xm.xla_device()):
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
model = xm.send_cpu_data_to_device(model, dev)
model = model.to(dev)
There are also some issues with getting the gelu activation function ALBERT uses to work on the TPU, so you need to use the following branch of transformers when working on TPU: https://github.com/huggingface/transformers/tree/fix-jit-tpu
See the following colab notebook (by https://github.com/jysohn23) for full solution: https://colab.research.google.com/gist/jysohn23/68d620cda395eab66289115169f43900/getting-started-with-pytorch-on-cloud-tpus.ipynb
| https://stackoverflow.com/questions/61157314/ |
How to apply image transform to a list of images and maintain the right dimensions? | I'm using the Omniglot dataset, which is a set of 19,280 images, each which is 105 x 105 (grayscale).
I defined a custom Dataset class with the following transform:
class OmniglotDataset(Dataset):
def __init__(self, X, transform=None):
self.X = X
self.transform = transform
def __len__(self):
return self.X.shape[0]
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img = self.X[idx]
if self.transform:
img = self.transform(img)
return img
img_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
X_train.shape
(19280, 105, 105)
train_dataset = OmniglotDataset(X_train, transform=img_transform)
When I index a single image, it returns the right dimensions:
train_dataset[0].shape
torch.Size([1, 105, 105])
But when I index several images, it returns the dimensions in the wrong order (I expect 3 x 105 x 105):
train_dataset[[1,2,3]].shape
torch.Size([105, 3, 105])
| You got the error because you tried to apply a transformation meant for a single image to a list of images:
A more convenient way to get a batch of any size is to use Dataloader:
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
img_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
omniglot = datasets.Omniglot(root='./data', background=True, download=True, transform = img_transform)
data_loader = DataLoader(omniglot, shuffle=False, batch_size = 8)
for image_batch in data_loader:
# now image_batch contain first eight samples
print(image_batch.shape) # torch.Size([8, 1, 105, 105])
break
If you really need to get images in arbitrary order:
from operator import itemgetter
indexes = [1,3,5]
selected_samples = itemgetter(*indexes)(omniglot)
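If you also want those arbitrarily ordered images back as a single batched tensor, you can index the dataset yourself and stack the image parts; a small sketch using the omniglot dataset defined above:
import torch
indexes = [1, 3, 5]
selected_images = torch.stack([omniglot[i][0] for i in indexes])  # omniglot[i] is an (image, label) pair
print(selected_images.shape)  # torch.Size([3, 1, 105, 105])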
| https://stackoverflow.com/questions/61165562/ |
Should we train the original data point when we do data augmentation? | I am confused about the definition of data augmentation. Should we train the original data points and the transformed ones or just the transformed? If we train both, then we will increase the size of the dataset while the second approach won't.
I got this question when using the function RandomResizedCrop.
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
If we resize and crop some of the dataset randomly, we don't actually increase the size of the dataset for data augmentation. Is that correct? Or data augmentation just requires the change/modification of original dataset rather than increase the size of it?
Thanks.
| By definition, or at least according to the influential AlexNet paper from 2012 that popularized data augmentation in computer vision, data augmentation increases the size of the training set. Hence the word augmentation. Go ahead and have a look at Section 4.1 of the AlexNet paper. But here is the gist of it, which I'm quoting from the paper:
The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations. The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent.
As for the specific implementation, it depends on your use case and, most importantly, on the size of your training data. If your training data is in short supply, you should consider training on both the original and the transformed images, taking sufficient care that the transformations preserve the labels.
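To make the on-the-fly behaviour concrete, here is a small sketch (using a randomly generated stand-in image instead of a real dataset): the random transforms are re-drawn every time the image is fetched, so across epochs the model sees different variants of each image while the stored dataset never grows.
import torch
from torchvision import transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
img = transforms.ToPILImage()(torch.rand(3, 256, 256))  # stand-in for one training image
a = train_tf(img)  # the view of this image in one epoch
b = train_tf(img)  # the view of the same image in another epoch
print(a.shape, torch.equal(a, b))  # torch.Size([3, 224, 224]) False (almost certainly)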
| https://stackoverflow.com/questions/61166307/ |
cant see android app after ./gradlew installDebug on device | I have recently started learning to build android apps, and I am trying to follow the instructions from the Pytorch website, which says:
git clone https://github.com/pytorch/android-demo-app.git
cd HelloWorldApp
./gradlew installDebug
I have my device plugged in and I ran the above and got:
BUILD SUCCESSFUL in 8s
26 actionable tasks: 1 executed, 25 up-to-date
But.. I cannot find the application on the phone..
I am baffled as to why this is.
I would REALLY appreciate any directions to troubleshoot this - spent the last two hours on this!
| The app is being installed normally, but not automatically opened after installation. You just need to find it amongst other apps on your device/emulator.
| https://stackoverflow.com/questions/61170478/ |
what does padding_idx do in nn.embeddings() | I'm learning pytorch and
I'm wondering what does the padding_idx attribute do in torch.nn.Embedding(n1, d1, padding_idx=0)?
I have looked everywhere and couldn't find something I can get.
Can you show example to illustrate this?
| As per the docs, padding_idx pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
What this means is that wherever you have an item equal to padding_idx, the output of the embedding layer at that index will be all zeros.
Here is an example:
Let us say you have word embeddings for 1000 words, each 50-dimensional, i.e. num_embeddings=1000, embedding_dim=50. Then torch.nn.Embedding works like a lookup table (the lookup table is trainable, though):
emb_layer = torch.nn.Embedding(1000,50)
x = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
y = emb_layer(x)
y will be a tensor of shape 2x4x50. I hope this part is clear to you.
Now if I specify padding_idx=2, ie
emb_layer = torch.nn.Embedding(1000,50, padding_idx=2)
x = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
y = emb_layer(x)
then the output will still be 2x4x50, but the 50-dim vectors at positions (0,1) and (1,2) will be all zeros, since x[0,1] and x[1,2] are 2, which is equal to the padding_idx.
You can think of it as the word at index 2 in the lookup table (the 3rd entry, since the lookup table is 0-indexed) not being used for training.
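You can verify this directly with the emb_layer and y defined above:
print(torch.all(y[0, 1] == 0))  # tensor(True) -> the position where x[0, 1] == 2
print(torch.all(y[1, 2] == 0))  # tensor(True) -> the position where x[1, 2] == 2
print(emb_layer.weight[2].abs().sum())  # 0, the padding row itself is initialised to zeros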
| https://stackoverflow.com/questions/61172400/ |
Choosing the learning_rate using fastai's learn.lr_find() | I am going over this Heroes Recognition ResNet34 notebook published on Kaggle.
The author uses fastai's learn.lr_find() method to find the optimal learning rate.
Plotting the loss function against the learning rate yields the following figure:
It seems that the loss reaches a minimum for 1e-1, yet in the next step the author passes 1e-2 as the max_lr in fit_one_cycle in order to train his model:
learn.fit_one_cycle(6,1e-2)
Why use 1e-2 over 1e-1 in this example? Wouldn't this only make the training slower?
| The idea for a learning rate range test as done in lr_find comes from this paper by Leslie Smith: https://arxiv.org/abs/1803.09820. That paper has a lot of other useful tuning tips; it's worth studying closely.
In lr_find, the learning rate is slowly ramped up (in a log-linear way). You don't want to pick the point at which the loss is lowest; you want to pick the point at which it is dropping fastest per step (= the net is learning as fast as possible). That happens somewhere around the middle of the downward slope, around 1e-2, so the author of the notebook has it about right. Anything between 0.5e-2 and 3e-2 has roughly the same slope and would be a reasonable choice; the smaller values would correspond to a bit slower learning (= more epochs needed, also less regularization) but with a bit less risk of reaching a plateau too early.
I'll try to add a bit of intuition about what is happening when loss is the lowest in this test, say learning rate=1e-1. At this point, the gradient descent algorithm is taking large steps in the direction of the gradient, but loss is not decreasing. How can this happen? Well, it would happen if the steps are consistently too large. Think of trying to get into a well (or canyon) in the loss landscape. If your step size is larger than the size of the well, you can consistently step over it every time and end up on the other side.
This picture from a nice blog post by Jeremy Jordan shows it visually:
The picture shows gradient descent climbing out of a well by taking steps that are too large (maybe lr=1e+0 in your test). I think this rarely happens exactly like that unless lr is truly excessive; more likely, the well is in a relatively flat landscape, and the gradient descent can step over it, never being able to get into the well in the first place. High-dimensional loss landscapes are hard to visualize and may be very irregular, but in a sense the lr_find test is looking for the scale of the typical features in the landscape and then picking a learning rate that gives you a step which is similarly sized but a bit smaller.
| https://stackoverflow.com/questions/61172627/ |
I am installing the Detecto library in anaconda environment but it fails while importing | As a source, I apply the guide here:
Object Detection
I applied the pip3 install detecto command as shown here and it was successfully installed. But when I run python and try to run the detecto command, I get an error. How can I solve it?
Operation: (screenshot of the failing import omitted)
| I don't think you can import detecto directly. Use from detecto import <lib>. Try from detecto.core import Model and see if it works in your terminal.
| https://stackoverflow.com/questions/61174808/ |
How do I convert 'the array saved as string to csv' back to float array? | I had to merge a lot of files (containing word embeddings and othe real valued vectors) based on some common attributes so I used Pandas DataFrame and saved the intermediate files as csv.
Currently I have a dataframe whose columns look something like this:
I want to merge all last 4 columns (t-1embedding1a,t-1embedding7b,t-2embedding1a,t-2embedding7b) into a single vector to pass to neural network.
I planned to iterate over the current dataframe and take 4 temporary tensors with value of each column and concatenate and write to new dataframe.
However torch.tensor doesn't work as it says:
torch_tensor = torch.tensor(final['t-1embedding1a'].astype(float).values)
could not convert string to float: '[-6.12873614e-01 -5.58319509e-01 -9.73452032e-01 3.66993636e-01\n
I also tried np.fromstring() but the original values are lost in this case.
Sorry, if the question is unnecessarily complicated, I am a newbie to pytorch. Any help is appreciated!
| First of all, the data type of the "t-lembeddingXX" columns is string, with values that look like "[-6.12873614e-01 -5.58319509e-01 -9.73452032e-01 3.66993636e-01]". You have to convert them to lists of floats.
final["t-lembeddingXX"] = final["t-lembeddingXX"].apply(lambda x : [float(x) for x in x.replace("[", "").replace ("]", "").split()])
Then, you have to check that each list in final.loc[i, "t-lembeddingXX"] has the same length.
If I'm not mistaken, you want to merge the 4 columns into one vector.
all_values = list(df["t-lembeddingX1"]) + list(df["t-lembeddingX2"]) + list(df["t-lembeddingX3"]) + list(df["t-lembeddingX4"])
# there is sureliy a better way
Then pass to tensor:
torch_tensor = torch.tensor(all_values)
------------
Finally, I advise you to take a look at the function of torch.cat. You can convert each column to a vector and then use this function to concatenate them together.
| https://stackoverflow.com/questions/61177480/ |
test accuracy fluctuate even train and test are always same | Even though my train, validation and test set are always the same, test accuracy fluctuate. The only reason I can think of is weight initialization. I am using PyTorch and I guess they use advance initialization technique (kaiming initialization).
What might be the reason for accuracy fluctuation even though train, validation and test data are the same?
| In addition to weight initialisation, dropout between layers also involves randomness and these can lead to different results on running again.
These random numbers are generally based on a seed value and fixing it can help reproduce the results. You can take a look here on how to fix seed value.
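For example, a common way to fix the seeds for the usual sources of randomness (Python, NumPy, PyTorch CPU and GPU, cuDNN) is a small helper like this:
import random
import numpy as np
import torch

def set_seed(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)  # call once before building the model and the data loaders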
| https://stackoverflow.com/questions/61177832/ |
How to solve size mismatch error in Python (Pytorch) | I am currently working on multivariate linear regression using PyTorch and I am getting the following error. I searched a lot about this error, and the only thing I found out is that there is a size mismatch between the data and the labels, but I don't know how to solve it. Please help me or show me the right way to solve this problem.
size mismatch, m1: [824 x 1], m2: [8 x 8]
import torch
import torch.nn as nn
import numpy as np
Xtr = np.loadtxt("TrainData.csv")
Ytr = np.loadtxt("TrainLabels.csv")
X_train = torch.FloatTensor(Xtr)
Y_train = torch.FloatTensor(Ytr)
#### MODEL ARCHITECTURE ####
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear = torch.nn.Linear(8,8)
self.lin2 = torch.nn.Linear(8,1)
def forward(self, x):
x = self.lin2(x)
y_pred = self.linear(x)
return y_pred
model = Model()
loss_func = nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
#print(len(list(model.parameters())))
def count_params(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
### TRAINING
for epoch in range(2):
y_pred = model(X_train)
loss = loss_func(y_pred, Y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
count = count_params(model)
print(count)
test_exp = torch.FloatTensor([[6.0]])
| It looks like the order of operations in your forward pass is incorrect. The short answer is swap them as shown below. More context on the various shapes below.
def forward(self, x):
x = self.lin2(x)
y_pred = self.linear(x)
return y_pred
Should be:
def forward(self, x):
x = self.linear(x)
y_pred = self.lin2(x)
return y_pred
Assuming that you have 8 features and some batch size N your input data to the forward pass will have size (N x 8). After you pass it through lin2 it will have shape (N x 1). The linear node expects an input with shape (N x 8) but it's getting (N x 1) hence the error.
| https://stackoverflow.com/questions/61186998/ |
Should Decoder Prediction Be Detached in PyTorch Training? | Hi guys I have recently started to use PyTorch for my research that needs the encoder-decoder framework. PyTorch's tutorials on this are wonderful, but there's a little problem: when training the decoder without teacher forcing, which means the prediction of the current time step is used as the input to the next, should the prediction be detached?
In this PyTorch tutorial, detach is used (decoder_input = topi.squeeze().detach() # detach from history as input), but it is not the case in this one (top1 = output.max(1)[1]; output = (trg[t] if teacher_force else top1)).
Both tutorials are RNN-based, so I am not sure about Transformer-based architectures. Would be grateful if someone could point out which one is the better practice :).
| Yes, you should detach it. Detaching a tensor removes it from the computational graph, so it's no longer tracked in respect to the gradient calculations, which is exactly what you want. Since the previous token can be seen as a constant defining the starting point, it will be discarded after one time step. However if you don't detach it, it will still be hanging around since it's tracked in the computational graph, which consumes unnecessary memory.
Realistically, the memory overhead is usually rather small, so you would only notice it if you have a lot of time steps and are at the upper limit of your GPU memory usage. Just regard it as a micro-optimisation.
There are instances where you absolutely need to detach a tensor to avoid unwanted backpropagation, but that generally happens when the same input is used in two different models, since backward consumes the graph by default and if two different backpropagations try to go through the same path it won't be available anymore and fail.
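As a rough sketch (decoder, hidden, sos_tokens and max_len are placeholder names here, not taken from either tutorial), a greedy decoding loop without teacher forcing would look like this:
decoder_input = sos_tokens  # (batch,) start-of-sequence token ids
for t in range(max_len):
    logits, hidden = decoder(decoder_input, hidden)
    top1 = logits.argmax(dim=1)  # greedy prediction for this step
    decoder_input = top1.detach()  # fed back as the next input, detached from the history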
| https://stackoverflow.com/questions/61187520/ |
How can we provide the output of a Linear layer to a Conv2D in PyTorch? | I am building an Autoencoder where I need to encode an image into a latent representation of length 100. I am using the following architecture for my model.
self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size=3,stride=2)
self.conv3 = nn.Conv2d(in_channels=64,out_channels=128,kernel_size=3,stride=2)
self.linear = nn.Linear(in_features=128*30*30,out_features=100)
self.conv1_transpose = nn.ConvTranspose2d(in_channels=128,out_channels=64,kernel_size=3,stride=2,output_padding=1)
self.conv2_transpose = nn.ConvTranspose2d(in_channels=64,out_channels=32,kernel_size=3,stride=2,output_padding=1)
self.conv3_transpose = nn.ConvTranspose2d(in_channels=32,out_channels=3,kernel_size=3,stride=1)
Is there any way I could give my Linear layer's output to a Conv2D or a ConvTranspose2D layer so that I can reconstruct my image? The output is restored if I remove the Linear layer. I want to know how I can reconstruct my image while keeping the Linear layer.
Any help would be appreciated. Thanks!
| You could use another linear layer:
self.linear2 = nn.Linear(in_features=100, out_features=128*30*30)
And then reshape the output into a 3D volume and pass it into your de-convolution layers.
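A rough sketch of what the decoding part of the forward pass could then look like (it assumes the layers from your module above plus import torch.nn.functional as F; the ReLU/sigmoid activations are just an assumption):
def decode(self, z):  # z: (N, 100) latent vectors
    h = self.linear2(z)  # (N, 128*30*30)
    h = h.view(-1, 128, 30, 30)  # reshape back into a conv feature map
    h = F.relu(self.conv1_transpose(h))
    h = F.relu(self.conv2_transpose(h))
    return torch.sigmoid(self.conv3_transpose(h))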
| https://stackoverflow.com/questions/61194941/ |
how to initialize Tensor for torch.cat | import torch
#Y_pred = ?
for xi in X_iter:
y_pred = net(xi).argmax(dim=1)
Y_pred = torch.cat([Y_pred, y_pred])
How do you initialize this tensor, or is there a better way to write it?
| You could do this instead:
Y_pred = torch.cat([net(xi).argmax(dim=1) for xi in X_iter])
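If you prefer to keep the explicit loop, you can initialise Y_pred as an empty tensor, which concatenates cleanly with the first batch (argmax returns long tensors, hence the dtype):
Y_pred = torch.empty(0, dtype=torch.long)
for xi in X_iter:
    y_pred = net(xi).argmax(dim=1)
    Y_pred = torch.cat([Y_pred, y_pred])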
| https://stackoverflow.com/questions/61195211/ |
Resize torch tensor channels | I have a torch tensor with 3 channels, and I want it to be 1 channel (all other dimensions should stay the same).
So if my current dimensions are torch.Size([6, 3, 512, 512]) I want it to be torch.Size([6, 1, 512, 512])
How can I do that?
| Does this solve your problem?
a = torch.ones(6, 3, 512, 512)
b = a[:, 0:1, :, :]
print(b.size()) # torch.Size([6, 1, 512, 512])
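Note that the slice keeps only the first of the three channels. If you would rather collapse the three channels into one (for example by averaging them, a simple grayscale-style reduction), you can reduce over the channel dimension instead:
b = a.mean(dim=1, keepdim=True)
print(b.size())  # torch.Size([6, 1, 512, 512])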
| https://stackoverflow.com/questions/61196528/ |
Torch optimisers with different scaled parameters | I am trying to optimise parameter values using a torch optimiser but the parameters are on vastly different scales. i.e., one parameter has values in the thousands while others are between 0 and 1. For example in this made up case there are two parameters - one has an optimal value of 0.1 and the other an optimal value of 20. How can I modify this code so it applies a sensible learning rate to each parameter say 1e-3 and 0.1?
import torch as pt
# Objective function
def f(x, y):
return (10 - 100 * x) ** 2 + (y - 20) ** 2
# Optimal parameters
print("Optimal value:", f(0.1, 20))
# Initial parameters
hp = pt.Tensor([1, 10])
print("Initial value", f(*hp))
# Optimiser
hp.requires_grad = True
optimizer = pt.optim.Adam([hp])
n = 5
for i in range(n):
optimizer.zero_grad()
loss = f(*hp)
loss.backward()
optimizer.step()
hp.requires_grad = False
print("Final parameters:", hp)
print("Final value:", f(*hp))
| torch.optim.Optimizer class accepts a list of dictionaries in the params argument as the parameter groups. In each dictionary, you need to define params and other arguments used for this parameter group. If you do not provide a specific argument in the dictionary, the original arguments passed to the Optimizer will be used instead. Refer to the official documentation for more information.
Here is the updated code:
import torch as pt
# Objective function
def f(x, y):
return (10 - 100 * x) ** 2 + (y - 20) ** 2
# Optimal parameters
print("Optimal value:", f(0.1, 20))
# Initial parameters
hp = pt.Tensor([1]), pt.Tensor([10])
print("Initial value", f(*hp))
# Optimiser
for param in hp:
param.requires_grad = True
# eps and betas are shared between the two groups
optimizer = pt.optim.Adam([{"params": [hp[0]], "lr": 1e-3}, {"params": [hp[1]], "lr": 0.1}])
# optimizer = pt.optim.Adam([{"params": [hp[0]], "lr": 1}, {"params": [hp[1]], "lr": 2.2}])
n = 5
for i in range(n):
optimizer.zero_grad()
loss = f(*hp)
loss.backward()
optimizer.step()
for param in hp:
param.requires_grad = False
print("Final parameters:", hp)
print("Final value:", f(*hp))
Try using {"lr": 1} and {"lr": 2.2} for the first and second parameters, respectively. It will result in the final value of 19.9713.
| https://stackoverflow.com/questions/61199034/ |
AWS EC2 Deep Learning instance cuda 3.0 | I just launched (and paid for) the Deep Learning AMI (Ubuntu 18.04) Version 27.0 (ami-0dbb717f493016a1a) instance type g2.2xlarge. I activated
the environment for PyTorch with Python3 (CUDA 10.1 and Intel MKL) with: source activate pytorch_p36
When I run my pytorch network I see a warning
/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/cuda/__init__.py:134: UserWarning:
Found GPU0 GRID K520 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
Is this real?
This is my code to put my neural net on the gpu
if torch.cuda.is_available():
device = torch.device("cuda:0") # you can continue going on here, like cuda:1 cuda:2....etc.
print("Running on the GPU")
else:
device = torch.device("cpu")
print("Running on the CPU")
net = Net(image_height, image_width)
net.to(device)
| I had to use a g3s.xlarge instance. I guess the g2 instances use older GPUs.
Also I had to make num_workers=0 on my dataloaders following this https://discuss.pytorch.org/t/oserror-errno-12-cannot-allocate-memory-but-memory-usage-is-actually-normal/56027.
And this is another pytorch gotcha https://stackoverflow.com/a/51606286/3614578 when adding tensors to a device.
| https://stackoverflow.com/questions/61200312/ |
Reduce the dimension of a tensor using max-pooling layer | My question is very simple:
How can I reduce the dimension of a list or a tensor using max-pooling layer to 512 elements in the list:
I'm trying the following code:
input_ids = tokenizer.encode(question, text)
print(input_ids) # input_ids is a list of 700 elements
m = nn.AdaptiveMaxPool1d(512)
input_ids = m(torch.tensor([[input_ids]])) # convert the list to tensor and apply max-pooling layer
But I get the following error:
RuntimeError: "adaptive_max_pool2d_cpu" not implemented for 'Long'
So, please help to figure out where is the error
| The problem is with your input_ids. You are passing a tensor of type long to AdaptiveMaxPool1d, just convert it to float.
input_ids = tokenizer.encode(question, text)
print(input_ids) # input_ids is a list of 700 elements
m = nn.AdaptiveMaxPool1d(512)
input_ids = m(torch.tensor([[input_ids]]).float())  # convert the list to a float tensor before pooling
| https://stackoverflow.com/questions/61206157/ |
PyTorch LSTM for multiclass classification: TypeError: '<' not supported between instances of 'Example' and 'Example' | I am trying to modify the code in this Tutorial to adapt it to a multiclass data (I have 55 distinct classes). An error is triggered and I am uncertain of the root cause. The changes I made to this tutorial have been annotated in same-line comments.
One of two solutions would satisfy this questions:
(A) Help identifying the root cause of the error, OR
(B) A boilerplate script for multiclass classification using PyTorch LSTM
import spacy
import torchtext
from torchtext import data
import re
TEXT = data.Field(tokenize = 'spacy', include_lengths = True)
LABEL = data.LabelField(dtype = torch.float)
fields = [(None,None),('text', TEXT), ('wage_label', LABEL)]
train_torch, test_torch = data.TabularDataset.splits(path='/Users/jdmoore7/Desktop/Python Projects/560_capstone/',
format='csv',
train='train_text_target.csv',
test='test_text_target.csv',
fields=fields,
skip_header=True)
import random
train_data, valid_data = train_torch.split(random_state = random.seed(SEED))
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_torch),
batch_size = BATCH_SIZE,
sort_within_batch = True,
device = device)
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers,
bidirectional, dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.rnn = nn.LSTM(embedding_dim,
hidden_dim,
num_layers=n_layers,
bidirectional=bidirectional,
dropout=dropout)
self.fc = nn.Linear(hidden_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text, text_lengths):
#text = [sent len, batch size]
embedded = self.dropout(self.embedding(text))
#embedded = [sent len, batch size, emb dim]
#pack sequence
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths)
packed_output, (hidden, cell) = self.rnn(packed_embedded)
#unpack sequence
output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
#output = [sent len, batch size, hid dim * num directions]
#output over padding tokens are zero tensors
#hidden = [num layers * num directions, batch size, hid dim]
#cell = [num layers * num directions, batch size, hid dim]
#concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
#and apply dropout
hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
#hidden = [batch size, hid dim * num directions]
return self.fc(hidden)
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = len(LABEL.vocab) ### changed from previous value (1)
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = RNN(INPUT_DIM,
EMBEDDING_DIM,
HIDDEN_DIM,
OUTPUT_DIM,
N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
PAD_IDX)
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss() # Previously: criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
All the above ran smoothly, it's the next code block which triggers the error:
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-888-c1b298b1eeea> in <module>
7 start_time = time.time()
8
----> 9 train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
10 valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
11
<ipython-input-885-9a57198441ec> in train(model, iterator, optimizer, criterion)
6 model.train()
7
----> 8 for batch in iterator:
9
10 optimizer.zero_grad()
~/opt/anaconda3/lib/python3.7/site-packages/torchtext/data/iterator.py in __iter__(self)
140 while True:
141 self.init_epoch()
--> 142 for idx, minibatch in enumerate(self.batches):
143 # fast-forward if loaded from state
144 if self._iterations_this_epoch > idx:
~/opt/anaconda3/lib/python3.7/site-packages/torchtext/data/iterator.py in pool(data, batch_size, key, batch_size_fn, random_shuffler, shuffle, sort_within_batch)
284 for p in batch(data, batch_size * 100, batch_size_fn):
285 p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn) \
--> 286 if sort_within_batch \
287 else batch(p, batch_size, batch_size_fn)
288 if shuffle:
TypeError: '<' not supported between instances of 'Example' and 'Example'
Lastly, the PyTorch forum has an issue opened for this error, however, the code that produced it is not similar so I understand that to be a separate issue.
| The BucketIterator sorts the data to make batches with examples of similar length to avoid having too much padding. For that it needs to know what the sorting criterion is, which should be the text length. Since it is not fixed to a specific data layout, you can freely choose which field it should use, but that also means you must provide that information to sort_key.
In your case, there are two possible fields, text and wage_label, and you want to sort it based on the length of the text.
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_torch),
batch_size = BATCH_SIZE,
sort_within_batch = True,
sort_key = lambda x: len(x.text),
device = device)
You might be wondering why it worked in the tutorial but doesn't in your example. The reason is that if sort_key is not specified, it defers it to the underlying dataset. In the tutorial they used the IMDB dataset, which defines the sort_key to be x.text. Your custom dataset did not define that, so you need to specify it manually.
| https://stackoverflow.com/questions/61213493/ |
Speed of L2 Regularization on Pytorch | I'm trying to manually implement L2 regularisation and a couple of its variations in a neural network. What I'm doing is the following:
l2_reg = 0.
for name, param in model.named_parameters():
if 'weight' in name:
l2_reg += torch.sum(param**2)
loss = cross_entropy(outputs, labels) + 0.0001*l2_reg
Is this equivalent to adding 'weight_decay = 0.0001' inside my optimizer? i.e.:
torch.optim.SGD(model.parameters(), lr=learning_rate , momentum=0.9, weight_decay = 0.0001)
My problem is that I thought they were equivalent, but the manual procedure is about 100x slower than adding 'weight_decay = 0.0001'. Why is that? How can I fix it?
Note that I need to also implement my own variation of L2 regularization, so just adding 'weight_decay = 0.0001' won't help.
| You can check the PyTorch implementation of SGD to get some tips and base your code off of it.
There are a few things going on which should speed up your custom regularization.
Below is a cleaned version (a little pseudo-code, refer to original) of the parts we are interested in:
for p in group['params']:
if p.grad is None:
continue
d_p = p.grad.data
if weight_decay != 0:
d_p.add_(weight_decay, p.data)
p.data.add_(-group['lr'], d_p)
return loss
BTW. It seems your implementation is mathematically sound (correct me if I missed anything) and equivalent to PyTorch but will be slow indeed.
Modify only gradient
Please notice you perform regularization explicitly during forward pass. This takes a lot of time, more or less because:
take parameters and iterate over them
take it to the power of 2
sum all of them
add to variable containing all previous parameters (all this while creating graph dynamically and creating new nodes).
What PyTorch does is focus only on the backward pass, as that's all that is needed. This is pretty handy because:
parameters have to be loaded and iterated over once anyway during corrections performed by optimizer (in your case they are taken out twice)
no power of 2, because the gradient of w**2 is simply 2*w (the factor 2 is usually dropped, and L2 is often expressed as 1/2 * w**2 to make it simpler and a little faster)
no accumulation and creation of additional graph nodes
Essentially, this line:
d_p.add_(weight_decay, p.data)
Modifies the gradient adding p.data (weight) multiplied by weight_decay all done in-place (notice d_p.add_), which is all you have to do to perform L2 regularization.
Finally this line:
p.data.add_(-group['lr'], d_p)
Updates weights with gradient (modified by weight decay) using standard SGD formula (once again, in-place to be as fast as possible, at least on Python level).
Your own implementation
I would advise you to follow similar logic for your own regularization if you want to make it faster.
You can copy PyTorch implementation of SGD and only change this one relevant line. This would also gives you functionality of PyTorch optimizer in case you need it in your experiments.
For L1 regularization (|w| instead of w**2) you would have to calculate the derivative of it (which is 1 for positive case, -1 for negative and undefined for 0 (we can't have that so it should be zero)).
With that in mind we can write the weight_decay like this:
if weight_decay != 0:
d_p.add_(weight_decay, torch.sign(p.data))
torch.sign returns 1 for positive values and -1 for negative and 0 for... yeah, 0.
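To make that concrete, here is a minimal sketch of such a custom optimizer: plain SGD without momentum, with the L1 penalty folded into the gradient. It is a stripped-down illustration rather than the full torch.optim.SGD:
import torch

class SGDL1(torch.optim.Optimizer):
    def __init__(self, params, lr=0.01, l1_decay=0.0):
        super().__init__(params, dict(lr=lr, l1_decay=l1_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                d_p = p.grad
                if group['l1_decay'] != 0:
                    # d|w|/dw is sign(w), so add it directly to the gradient
                    d_p = d_p.add(torch.sign(p), alpha=group['l1_decay'])
                p.add_(d_p, alpha=-group['lr'])

# usage: optimizer = SGDL1(model.parameters(), lr=0.01, l1_decay=0.0001)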
Hope this helps, exact implementation is left for you (hit me up in the comments in case you have any questions or troubles).
| https://stackoverflow.com/questions/61215600/ |
EOFError: Ran out of input, using torch.load() | I saw this error being posted a lot and often it was due to the file not being closed properly after opening. But since I'm using the integrated torch.load() function, I'm not sure what I could do different.
First the saving part:
torch.save({
'model_state_dict': agent.dqn.state_dict(),
...
'loss_history': agent.losshistory
}, modelpath)
and here the loading part, where I also get the error message:
if os.path.exists(modelpath):
checkpoint = torch.load(modelpath)
agent.dqn.load_state_dict(checkpoint['model_state_dict'])
...
agent.losshistory = checkpoint['loss_history']
and here the error:
Traceback (most recent call last):
File "c:/Users/levin/Desktop/programming/main.py", line 33, in <module>
checkpoint = torch.load(modelpath)
File "C:\Users\levin\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 529, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\levin\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\serialization.py", line 702, in _legacy_load
result = unpickler.load()
EOFError: Ran out of input
One more thing I want to mention is that I used this exact code several times without a problem. I can't remember changing anything that could have caused the error.
| According to this thread, torch.load seems to raise this exception when reading an empty file, so please check the size of the file before reading it, and post a response if that does not solve it.
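For example, you can guard the load with a size check (a small sketch reusing your modelpath and loading code):
import os

if os.path.exists(modelpath) and os.path.getsize(modelpath) > 0:
    checkpoint = torch.load(modelpath)
    agent.dqn.load_state_dict(checkpoint['model_state_dict'])
else:
    print('checkpoint file is missing or empty:', modelpath)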
| https://stackoverflow.com/questions/61215873/ |
Why doesn't torch.autograd compute the gradient in this case? | Why doesn't torch.autograd compute the gradient in this case?
import torch
x = torch.tensor([1., 2., ], requires_grad=True)
y = torch.tensor([x[0] + x[1], x[1] - x[0], ], requires_grad=True)
z = y[0] + y[1]
z.backward()
x.grad
Output is a blank line (None). The same occurs for x[0].grad. Why?
PS: In retrospect, I realize the motivation for making y a tensor with requires_grad was so I could examine its own gradient. I learned that one must use retain_grad for that here: Why does autograd not produce gradient for intermediate variables?
| When you use torch.tensor to build y, it just uses the values of x to initialize a new tensor; the gradient chain back to x is lost.
This works:
x = torch.tensor([1., 2., ], requires_grad=True)
y = [x[0] + x[1], x[1] - x[0], ]
z = y[0] + y[1]
z.backward()
x.grad
The result is tensor([0., 2.])
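If you do need y to be a tensor (for example to inspect its own gradient, as mentioned in the question), build it with torch.stack, which keeps the graph intact, and call retain_grad if you also want y.grad:
import torch

x = torch.tensor([1., 2.], requires_grad=True)
y = torch.stack([x[0] + x[1], x[1] - x[0]])
y.retain_grad()  # only needed if you want y.grad itself
z = y.sum()
z.backward()
print(x.grad)  # tensor([0., 2.])
print(y.grad)  # tensor([1., 1.])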
| https://stackoverflow.com/questions/61217336/ |
Getting the following error while using scikit-image to read images "AttributeError: 'PngImageFile' object has no attribute '_PngImageFile__frame' " | I am using scikit-image to load a random image from a folder. OpenCV is being used for operations later on..
Code is as follows (only relevant parts included)
import imageio
import cv2 as cv
import fileinput
from collections import Counter
from data.apple_dataset import AppleDataset
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
from torchvision.transforms import functional as F
import utility.utils as utils
import utility.transforms as T
from PIL import Image
import skimage.io
from skimage.viewer import ImageViewer
from matplotlib import pyplot as plt
%matplotlib inline
APPLE_IMAGE_PATH = r"__mypath__\samples\apples\images"
# Load a random image from the images folder
FILE_NAMES = next(os.walk(APPLE_IMAGE_PATH))[2]
random_apple_in_folder = os.path.join(APPLE_IMAGE_PATH, random.choice(FILE_NAMES))
apple_image = skimage.io.imread(random_apple_in_folder)
apple_image_cv = cv.imread(random_apple_in_folder)
apple_image_cv = cv.cvtColor(apple_image_cv, cv.COLOR_BGR2RGB)
Error is as follows
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-9575eed18f18> in <module>
11 FILE_NAMES = next(os.walk(APPLE_IMAGE_PATH))[2]
12 random_apple_in_folder = os.path.join(APPLE_IMAGE_PATH, random.choice(FILE_NAMES))
---> 13 apple_image = skimage.io.imread(random_apple_in_folder)
14 apple_image_cv = cv.imread(random_apple_in_folder)
AttributeError: 'PngImageFile' object has no attribute '_PngImageFile__frame'
How do i proceed from here? What should i change???
| This is a bug in Pillow 7.1.0. You can upgrade Pillow with pip install -U pillow. See this bug report for more information:
https://github.com/scikit-image/scikit-image/issues/4548
| https://stackoverflow.com/questions/61220974/ |
Confusion in Pre-processing text for Roberta Model | I want to apply the Roberta model for text similarity. Given a pair of sentences, the input should be in the format <s> A </s></s> B </s>. I figured out two possible ways to generate the input ids, namely
a)
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
list1 = tokenizer.encode('Very severe pain in hands')
list2 = tokenizer.encode('Numbness of upper limb')
sequence = list1+[2]+list2[1:]
In this case, sequence is [0, 12178, 3814, 2400, 11, 1420, 2, 2, 234, 4179, 1825, 9, 2853, 29654, 2]
b)
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
list1 = tokenizer.encode('Very severe pain in hands', add_special_tokens=False)
list2 = tokenizer.encode('Numbness of upper limb', add_special_tokens=False)
sequence = [0]+list1+[2,2]+list2+[2]
In this case, sequence is [0, 25101, 3814, 2400, 11, 1420, 2, 2, 487, 4179, 1825, 9, 2853, 29654, 2]
Here 0 represents <s> token and 2 represents </s> token. I'm not sure which is the correct way to encode the given two sentences for calculating sentence similarity using Roberta model.
| The easiest way is probably to directly use the functionality provided by HuggingFace's tokenizers themselves, namely the text_pair argument in the encode function, see here. This allows you to directly feed in two sentences, which will give you the desired output:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
sequence = tokenizer.encode(text='Very severe pain in hands',
text_pair='Numbness of upper limb',
add_special_tokens=True)
This is especially convenient if you are dealing with very long sequences, as the encode function automatically reduces your lengths according to the truncation_strategy argument. You obviously don't have to worry about this if you are only dealing with short sequences.
Alternatively, you can also make use of the more explicit build_inputs_with_special_tokens() function of the RobertaTokenizer, specifically, which could be added to your example like so:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
list1 = tokenizer.encode('Very severe pain in hands', add_special_tokens=False)
list2 = tokenizer.encode('Numbness of upper limb', add_special_tokens=False)
sequence = tokenizer.build_inputs_with_special_tokens(list1, list2)
Note that in that case, you have to generate the sequences list1 and list2 still without any special tokens, as you have already done correctly.
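If you want to double-check where the special tokens ended up, you can decode the ids back to text (the exact whitespace in the output may vary slightly):
print(tokenizer.decode(sequence))
# roughly: <s>Very severe pain in hands</s></s>Numbness of upper limb</s>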
| https://stackoverflow.com/questions/61221810/ |
Python : convert a float encoded as a byte string (from PyTorch) to an int | I have converted an output from PyTorch using .detach().numpy() which produces this kind of data :
b'0.06722715'
which is a byte type according to type() from Python. How can I convert this to an integer ?
| Try this (explanation in code comments). You can convert 0.06 to an integer but you'll get a zero. Did you mean float?
#byte
b = b'0.06722715'
# to string
s = b.decode()
# to float
f = float(s)
# to integer
i = int(f)
print("Float", f)
print("Integer", i)
or simply
be_float = float(b.decode())
print (be_float)
| https://stackoverflow.com/questions/61223589/ |
Model behaves differently after saving and loading | I want to use torch.save() to save a trained model for inference. However, with either torch.load_state_dict() or torch.load(), I can't get the saved model. The loss computed by the loaded model is just different from the loss computed by the saved model.
The relevant Libraries:
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn import functional as F
The model:
class nn_block(nn.Module):
def __init__(self, feats_dim):
super(nn_block, self).__init__()
self.linear = nn.Linear(feats_dim, feats_dim)
self.bn = nn.BatchNorm1d(feats_dim)
self.softplus1 = nn.Softplus()
self.softplus2 = nn.Softplus()
def forward(self, rep_mat):
transformed_mat = self.linear(rep_mat)
transformed_mat = self.bn(transformed_mat)
transformed_mat = self.softplus1(transformed_mat)
transformed_mat = self.softplus2(transformed_mat + rep_mat)
return transformed_mat
class test_nn(nn.Module):
def __init__(self, in_feats, feats_dim, num_conv, num_classes):
super(test_nn, self).__init__()
self.linear1 = nn.Linear(in_feats, feats_dim)
self.convs = [nn_block(feats_dim) for _ in range(num_conv)]
self.linear2 = nn.Linear(feats_dim, num_classes)
self.softmax = nn.Softmax()
def forward(self, rep_mat):
h = self.linear1(rep_mat)
for conv_func in self.convs:
h = conv_func(h)
h = self.linear2(h)
h = self.softmax(h)
return h
Train, save, and reload a model:
# fake a classification task
num_classes = 2; input_dim = 8
one = np.random.multivariate_normal(np.zeros(input_dim),np.eye(input_dim),20)
two = np.random.multivariate_normal(np.ones(input_dim),np.eye(input_dim),20)
inputs = np.concatenate([one, two], axis=0)
labels = np.concatenate([np.zeros(20), np.ones(20)])
inputs = Variable(torch.Tensor(inputs))
labels = torch.LongTensor(labels)
# build a model
net = test_nn(input_dim, 5, 2, num_classes)
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
net.train()
losses = []
best_score = 1e10
for epoch in range(25):
preds = net(inputs)
loss = F.cross_entropy(preds, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
state_dict = {'state_dict': net.state_dict()}
if loss.item()-best_score<-1e-4:
# save only parameters
torch.save(state_dict, 'model_params.torch')
# save the whole model
torch.save(net, 'whole_model.torch')
best_score = np.min([best_score, loss.item()])
losses.append(loss.item())
net_params = test_nn(input_dim, 5, 2, num_classes)
net_params.load_state_dict(torch.load('model_params.torch')['state_dict'])
net_params.eval()
preds_params = net_params(inputs)
loss_params = F.cross_entropy(preds_params, labels)
print('reloaded params %.4f %.4f' % (loss_params.item(), np.min(losses)))
net_whole = torch.load('whole_model.torch')
net_whole.eval()
preds_whole = net_whole(inputs)
loss_whole = F.cross_entropy(preds_whole, labels)
print('reloaded whole %.4f %.4f' % (loss_whole.item(), np.min(losses)))
As you can see by running the code, the losses computed by the two loaded models are different, while the two loaded models are exactly the same. Not just the two losses are different, they are also different from the loss computed by the best model that was saved in the first place.
Why this can happen?
| The state dict contains every parameter (nn.Parameter) and buffer (similar to a parameter, but one that should not be trained/optimised) that has been registered on the module and all of its submodules. Everything else will not be included in that state dict.
Your test_nn module uses a list for convs, therefore it is not included in the state dict:
self.convs = [nn_block(feats_dim) for _ in range(num_conv)]
Not only are they not contained in the state dict, they are also not visible to net.parameters(), which means they are not trained/optimised at all.
To register the modules from the list you can wrap it in nn.ModuleList, which is a module that acts like a list, while correctly registering the modules it contains:
self.convs = nn.ModuleList([nn_block(feats_dim) for _ in range(num_conv)])
With that change both models produce the same result.
Since you are calling the convs modules sequentially in the for-loop (the output of one module is the input of the next), you may consider using nn.Sequential, which you can call directly instead of having to use the for-loop. nn.Sequential is used a lot and it just makes things a little simpler; for example, if you want to replace the sequence of modules with a single module, you don't need to change anything in the forward method.
Not just the two losses are different, they are also different from the loss computed by the best model that was saved in the first place.
When you are training, you calculate the loss for the current input (batch) and then you optimise the parameters based on that input. This means your parameters differ from the ones used to calculate the loss. Because you are saving the model after that, it will also have a different loss (the one that would occur in the next iteration).
preds = net(inputs)
# Calculating the loss of the current model
loss = F.cross_entropy(preds, labels)
optimizer.zero_grad()
loss.backward()
# Updating the model's parameters based on the loss
optimizer.step()
# State of the model after it has been updated
state_dict = {'state_dict': net.state_dict()}
# Comparing the loss from BEFORE the update
# But saving the model from AFTER the update
if loss.item()-best_score<-1e-4:
# save only parameters
torch.save(state_dict, 'model_params.torch')
# save the whole model
torch.save(net, 'whole_model.torch')
It's important to evaluate the model after the updates have been made. For this reason a validation set should be used, which is run after each epoch to assess the model's accuracy.
| https://stackoverflow.com/questions/61223723/ |
How to output the accuracy alongside with the loss when training the MNIST dataset after each epoch | from __future__ import print_function
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
from tensorflow.examples.tutorials.mnist import input_data
import torch.optim as optim
import tensorflow.python.util.deprecation as deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False
import matplotlib.pyplot as plt
%matplotlib inline
from plot import plot_loss_and_acc
mnist = input_data.read_data_sets("MNIST_data", one_hot=False)
batch_size = 250
epoch_num = 10
lr = 0.0001
disp_freq = 20
def next_batch(train=True):
# Reads the next batch of MNIST images and labels and returns them
if train:
batch_img, batch_label = mnist.train.next_batch(batch_size)
else:
batch_img, batch_label = mnist.test.next_batch(batch_size)
batch_label = torch.from_numpy(batch_label).long() # convert the numpy array into torch tensor
batch_label = Variable(batch_label) # create a torch variable
batch_img = torch.from_numpy(batch_img).float() # convert the numpy array into torch tensor
batch_img = Variable(batch_img) # create a torch variable
return batch_img, batch_label
class MLP(nn.Module):
def __init__(self, n_features, n_classes):
super(MLP, self).__init__()
self.layer1 = nn.Linear(n_features, 128)
self.layer2 = nn.Linear(128, 128)
self.layer3 = nn.Linear(128, n_classes)
def forward(self, x, training=True):
# a neural network with 2 hidden layers
# x -> FC -> relu -> dropout -> FC -> relu -> dropout -> FC -> output
x = F.relu(self.layer1(x))
x = F.dropout(x, 0.5, training=training)
x = F.relu(self.layer2(x))
x = F.dropout(x, 0.5, training=training)
x = self.layer3(x)
return x
def predict(self, x):
# a function to predict the labels of a batch of inputs
x = F.softmax(self.forward(x, training=False))
return x
def accuracy(self, x, y):
# a function to calculate the accuracy of label prediction for a batch of inputs
# x: a batch of inputs
# y: the true labels associated with x
prediction = self.predict(x)
maxs, indices = torch.max(prediction, 1)
acc = 100 * torch.sum(torch.eq(indices.float(), y.float()).float())/y.size()[0]
print(acc.data)
return acc.data
# define the neural network (multilayer perceptron)
net = MLP(784, 10)
# calculate the number of batches per epoch
batch_per_ep = mnist.train.num_examples // batch_size
# define the loss (criterion) and create an optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=lr)
print(' ')
print("__________Training__________________")
xArray = []
yLoss = []
yAcc = []
for ep in range(epoch_num): # epochs loop
for batch_n in range(batch_per_ep): # batches loop
features, labels = next_batch()
# Reset gradients
optimizer.zero_grad()
# Forward pass
output = net(features)
loss = criterion(output, labels)
# Backward pass and updates
loss.backward() # calculate the gradients (backpropagation)
optimizer.step() # update the weights
if batch_n % disp_freq == 0:
print('epoch: {} - batch: {}/{} '.format(ep, batch_n, batch_per_ep))
xArray.append(ep)
yLoss.append(loss.data)
#yAcc.append(acc.data)
print('loss: ', loss.data)
print('__________________________________')
# test the accuracy on a batch of test data
features, labels = next_batch(train=False)
print("Result")
print('Test accuracy: ', net.accuracy(features, labels))
print('loss: ', loss.data)
accuracy = net.accuracy(features, labels)
#Loss Plot
# plotting the points
plt.plot(xArray, yLoss)
# naming the x axis
plt.xlabel('epoch')
# naming the y axis
plt.ylabel('loss')
# giving a title to my graph
plt.title('Loss Plot')
# function to show the plot
plt.show()
#Accuracy Plot
# plotting the points
plt.plot(xArray, yAcc)
# naming the x axis
plt.xlabel('epoch')
# naming the y axis
plt.ylabel(' accuracy')
# giving a title to my graph
plt.title('Accuracy Plot ')
# function to show the plot
plt.show()
I want to display the accuracy of my training dataset. I have managed to display and plot the loss but I didn't manage to do it for accuracy. I know I am missing 1 or 2 lines of code and I don't know how to do it.
I mean if I can display the accuracy alongside each epoch like the loss I can do the plotting myself.
| Hi, replace this line of code print('epoch: {} - batch: {}/{} '.format(ep, batch_n, batch_per_ep)) with
print('epoch: {} - batch: {}/{} - accuracy: {}'.format(ep, batch_n, batch_per_ep, net.accuracy(features,labels)))
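If you also want to collect the accuracy values for your plot (so that yAcc lines up with xArray and yLoss), you can restructure the same if block slightly so accuracy is computed once and stored, for example:
if batch_n % disp_freq == 0:
    acc = net.accuracy(features, labels)
    print('epoch: {} - batch: {}/{} - accuracy: {}'.format(ep, batch_n, batch_per_ep, acc))
    xArray.append(ep)
    yLoss.append(loss.data)
    yAcc.append(acc)
    print('loss: ', loss.data)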
Hope this helps.
| https://stackoverflow.com/questions/61224226/ |
Using annoy with Torchtext for nearest neighbor search | I'm using Torchtext for some NLP tasks, specifically using the built-in embeddings.
I want to be able to do an inverse vector search: generate a noisy vector, find the vector that is closest to it, then get back the word that is "closest" to the noisy vector.
From the torchtext docs, here's how to attach embeddings to a built-in dataset:
from torchtext.vocab import GloVe
from torchtext import data
embedding = GloVe(name='6B', dim=100)
# Set up fields
TEXT = data.Field(lower=True, include_lengths=True, batch_first=True)
LABEL = data.Field(sequential=False, is_target=True)
# make splits for data
train, test = datasets.IMDB.splits(TEXT, LABEL)
# build the vocabulary
TEXT.build_vocab(train, vectors=embedding, max_size=100000)
LABEL.build_vocab(train)
# Get an example vector
embedding.get_vecs_by_tokens("germany")
Then we can build the annoy index:
from annoy import AnnoyIndex
num_trees = 50
ann_index = AnnoyIndex(embedding_dims, 'angular')
# Iterate through each vector in the embedding and add it to the index
for vector_num, vector in enumerate(TEXT.vocab.vectors):
ann_index.add_item(vector_num, vector) # Here's the catch: will vector_num correspond to torchtext.vocab.Vocab.itos?
ann_index.build(num_trees)
Then say I want to retrieve a word using a noisy vector:
# Get an existing vector
original_vec = embedding.get_vecs_by_tokens("germany")
# Add some noise to it
noise = generate_noise_vector(ndims=100)
noisy_vector = original_vec + noise
# Get the vector closest to the noisy vector
closest_item_idx = ann_index.get_nns_by_vector(noisy_vector, 1)[0]
# Get word from noisy item
noisy_word = TEXT.vocab.itos[closest_item_idx]
My question comes in for the last two lines above: The ann_index was built using enumerate over the embedding object, which is a Torch tensor.
The [vocab][2] object has its own itos list that given an index returns a word.
My question is this: Can I be certain that the order in which words appear in the itos list, is the same as the order in TEXT.vocab.vectors? How can I map one index to the other?
|
Can I be certain that the order in which words appear in the itos list, is the same as the order in TEXT.vocab.vectors?
Yes.
The Field class will always instantiate a Vocab object (source), and since you are passing the pre-trained vectors to TEXT.build_vocab, the Vocab constructor will call the load_vectors function.
if vectors is not None:
self.load_vectors(vectors, unk_init=unk_init, cache=vectors_cache)
In the load_vectors, the vectors are filled by enumerating the words in the itos.
for i, token in enumerate(self.itos):
start_dim = 0
for v in vectors:
end_dim = start_dim + v.dim
self.vectors[i][start_dim:end_dim] = v[token.strip()]
start_dim = end_dim
assert(start_dim == tot_dim)
Therefore, you can be certain that itos and vectors will have the same order.
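As a quick sanity check of that mapping (assuming the word exists in your training vocab), you could verify, for example:
word = "germany"
idx = TEXT.vocab.stoi[word]
assert torch.equal(TEXT.vocab.vectors[idx], embedding.get_vecs_by_tokens(word))
So building the annoy index by enumerating TEXT.vocab.vectors and mapping results back through TEXT.vocab.itos, as in your code, is safe.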
| https://stackoverflow.com/questions/61235218/ |
TypeError: can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first (fastai) | I am following the code here:
https://www.kaggle.com/tanlikesmath/diabetic-retinopathy-with-resnet50-oversampling
However, during the metrics calculation, I am getting the following error:
File "main.py", line 50, in <module>
learn.fit_one_cycle(4,max_lr = 2e-3)
...
File "main.py", line 39, in quadratic_kappa
return torch.tensor(cohen_kappa_score(torch.argmax(y_hat,1), y, weights='quadratic'),device='cuda:0')
...
File "/pfs/work7/workspace/scratch/ul_dco32-conda-0/conda/envs/resnet50/lib/python3.8/site-packages/torch/tensor.py", line 486, in __array__
return self.numpy()
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
Here are the metrics and the model:
def quadratic_kappa(y_hat, y):
return torch.tensor(cohen_kappa_score(torch.argmax(y_hat,1), y, weights='quadratic'),device='cuda:0')
learn = cnn_learner(data, models.resnet50, metrics = [accuracy,quadratic_kappa])
learn.fit_one_cycle(4,max_lr = 2e-3)
As is being said in the discussion https://discuss.pytorch.org/t/typeerror-can-t-convert-cuda-tensor-to-numpy-use-tensor-cpu-to-copy-the-tensor-to-host-memory-first/32850/6, I have to bring the data back to the CPU. But I am slightly lost on how to do it.
I tried adding .cpu() all over the metrics but have not been able to solve it so far.
| I'm assuming that both y and y_hat are CUDA tensors, that means that you need to bring them both to the CPU for the cohen_kappa_score, not just one.
def quadratic_kappa(y_hat, y):
return torch.tensor(cohen_kappa_score(torch.argmax(y_hat.cpu(),1), y.cpu(), weights='quadratic'),device='cuda:0')
# ^^^ ^^^
Calling .cpu() on a tensor that is already on the CPU has no effect, so it's safe to use in any case.
| https://stackoverflow.com/questions/61236178/ |
Pytorch Custom Optimizer got an empty parameter list | new here. I am trying to create a custom optimizer in PyTorch, where the backprop takes place in a meta RL policy, with the policy receiving the model parameters, and outputting the desired model parameters. However, I am seeing the above error. My models work fine on Adam and SGD, but not my optimizer.
Code:
class MetaBackProp(torch.optim.Optimizer):
def __init__(self, params):
self.param_shape_list = np.array([])
for param in list(params):
np.append(self.param_shape_list, list(param.size()))
pseudo_lr = 1e-4
pseudo_defaults = dict(lr=pseudo_lr)
length = 100 #TODO: get shape, flatten, multiply...
self.policy = AEPolicy(length)
self.policy_optim = torch.optim.Adam(self.policy.parameters(), lr=pseudo_lr)
super(MetaBackProp, self).__init__(params, pseudo_defaults)
def step(self, closure=None):
params = torch.cat([p.view(-1) for p in self.param_groups])
self.policy_optim.zero_grad()
quit()
Traceback:
Traceback (most recent call last):
File "main.py", line 6, in <module>
gan = CycleGAN()
File "/home/ai/Projects_v2/R/cycle_gan.py", line 32, in __init__
self.discriminator2_optim = MetaBackProp(self.discriminator2.parameters())
File "/home/ai/Projects_v2/R/lr_schedule.py", line 34, in __init__
super(MetaBackProp, self).__init__(params, pseudo_defaults)
File "/home/ai/anaconda3/lib/python3.7/site-packages/torch/optim/optimizer.py", line 46, in __init__
raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
| You retrieve the parameters with self.discriminator2.parameters(), which returns an iterator. In your constructor you are converting them to a list for the for loop:
for param in list(params):
This consumes the iterator, but you are passing that same iterator to the constructor of the base class, hence it does not contain any parameter at all.
super(MetaBackProp, self).__init__(params, pseudo_defaults)
Instead of passing the iterator, you can use the list you created from the iterator, since the parameters just need to be iterable, which lists are.
# Convert parameters to a list to allow multiple iterations
params = list(params)
for param in params:
    np.append(self.param_shape_list, list(param.size()))
| https://stackoverflow.com/questions/61238514/ |
Add More Metrics to Ray Tune Status Table (Python, PyTorch) | When running tune.run() on a set of configs to search, is it possible to add more metrics columns (i.e. a, b, etc) to the status table being printed out?
tune.track.log(a=metric1, b=metric2)
will give the following table without columns for the metrics a and b:
== Status ==
Memory usage on this node: 22.1/125.8 GiB
Using FIFO scheduling algorithm.
Resources requested: 1/32 CPUs, 1/4 GPUs, 0.0/65.59 GiB heap, 0.0/22.13 GiB objects
Result logdir: /home/nyxynyx/ray_results/fooba
Number of trials: 4 (3 PENDING, 1 RUNNING)
+--------------+----------+-------+------+-----+
| Trial name | status | loc | lr | x |
|--------------+----------+-------+------+-----|
| fooba_00000 | RUNNING | | 0.01 | 1 |
| fooba_00001 | PENDING | | 0.1 | 1 |
| fooba_00002 | PENDING | | 0.01 | 5 |
| fooba_00003 | PENDING | | 0.1 | 5 |
+--------------+----------+-------+------+-----+
How can we include a column for every metric that we pass to tune.track.log() other than mean_accuracy?
Using Python 3.7.3 and Ray 0.8.4
| Yep! You should be able to do this with a reporter object: https://ray.readthedocs.io/en/latest/tune/api_docs/reporters.html
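For example, with a CLIReporter you can list the metric columns you want printed. This is a sketch assuming a more recent Ray version; the exact API in 0.8.4 may differ, so check the linked docs for your release. my_trainable and search_space stand in for your own trainable and config:
from ray import tune
from ray.tune import CLIReporter

reporter = CLIReporter(metric_columns=["a", "b", "training_iteration"])
tune.run(my_trainable, config=search_space, progress_reporter=reporter)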
| https://stackoverflow.com/questions/61238914/ |
How can I show multiple predictions of the next word in a sentence? | I am using the GPT-2 pre-trained model. The code I am working on takes a sentence and generates the next word for that sentence. I want to print multiple predictions, like the first three predictions with the best probabilities!
For example, if I put in the sentence "It's an interesting ...."
predictions: "Books" "story" "news"
is there a way I can modify this code to show these predictions instead of one?!
Also, there are two parts of the code I do not understand: what is the meaning of the indices in predictions[0, -1, :]? And why do we take [0] in predictions = outputs[0]?
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Encode a text inputs
text = "The fastest car in the "
indexed_tokens = tokenizer.encode(text)
# Convert indexed tokens in a PyTorch tensor
tokens_tensor = torch.tensor([indexed_tokens])
# Load pre-trained model (weights)
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Set the model in evaluation mode to deactivate the DropOut modules
model.eval()
# If you have a GPU, put everything on cuda
#tokens_tensor = tokens_tensor.to('cuda')
#model.to('cuda')
# Predict all tokens
with torch.no_grad():
outputs = model(tokens_tensor)
predictions = outputs[0]
#print(predictions)
# Get the predicted next sub-word
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
# Print the predicted word
#print(predicted_index)
print(predicted_text)
The result for the above code will be :
The fastest car in the world.
| You can use torch.topk as follows:
predicted_indices = [x.item() for x in torch.topk(predictions[0, -1, :], k=3).indices]
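To turn those indices back into words you could decode each one, reusing the tokenizer from your code (a small sketch):
top_k = torch.topk(predictions[0, -1, :], k=3)
for idx in top_k.indices.tolist():
    print(tokenizer.decode(indexed_tokens + [idx]))
Regarding your other questions: outputs[0] selects the prediction scores (logits) from the tuple the model returns, and predictions[0, -1, :] takes the scores over the whole vocabulary for the last token of the first (and only) sequence in the batch.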
| https://stackoverflow.com/questions/61239331/ |
Pytorch: AttributeError: 'function' object has no attribute 'copy' | I am trying to load a model state_dict I trained on Google Colab GPU, here is my code to load the model:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = models.resnet50()
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, n_classes)
model.load_state_dict(copy.deepcopy(torch.load("./models/model.pth",device)))
model = model.to(device)
model.eval()
Here is the error:
state_dict = state_dict.copy()
AttributeError: 'function' object has no attribute 'copy'
Pytorch :
>>> import torch
>>> print (torch.__version__)
1.4.0
>>> import torchvision
>>> print (torchvision.__version__)
0.5.0
Please help, I have searched everywhere to no avail.
Full error details: https://i.stack.imgur.com/s22DL.png
| I am guessing this is what you did by mistake.
You saved the function
torch.save(model.state_dict, 'model_state.pth')
instead of the state_dict()
torch.save(model.state_dict(), 'model_state.pth')
Otherwise, everything should work as expected. (I tested the following code on Colab)
Replace model.state_dict() with model.state_dict to reproduce error
import copy
model = TheModelClass()
torch.save(model.state_dict(), 'model_state.pth')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.load_state_dict(copy.deepcopy(torch.load("model_state.pth",device)))
| https://stackoverflow.com/questions/61242966/ |
Saving vocabulary object from pytorch's torchtext library | Building a text classification model using pytorch's torchtext . The vocabulary object is in the data.field :
def create_tabularDataset_object(self,csv_path):
self.TEXT = data.Field(tokenize=self.tokenizer,batch_first=True,include_lengths=True)
self.LABEL = data.LabelField(dtype = torch.float,batch_first=True)
def get_vocab_with_glov(self,data):
# initialize glove embeddings
self.TEXT.build_vocab(data,min_freq=100,vectors = "glove.6B.100d")
After training, when serving the model in production, how do I keep the TEXT object? At prediction time I need it for indexing the word tokens:
[TEXT.vocab.stoi[t] for t in tokenized_sentence]
Am I missing something and it is not necessary to keep that object? Do I need any other file other than the model weights?
| I've found that I can save it as a pickle:
Saving the TEXT.vocab as a .pkl worked:
def save_vocab(vocab, path):
import pickle
output = open(path, 'wb')
pickle.dump(vocab, output)
output.close()
Where
vocab = TEXT.vocab
and reading it as usual.
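For completeness, loading it back could look like this (a small sketch mirroring the save function):
def read_vocab(path):
    import pickle
    with open(path, 'rb') as f:
        return pickle.load(f)
At prediction time you can then use the returned object's stoi/itos directly, without rebuilding the Field.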
| https://stackoverflow.com/questions/61245215/ |
How to partition a neural network into sub-networks in Pytorch? | I'd like to partition a neural network into two sub-networks using Pytorch. To make things concrete, consider this image:
In the image, I have a 3x4x1 neural network. What I want is, for example, during epoch 1 I'd only like to update the weights in sub-network 1, i.e., the weights that appear in sub-network 2 must be frozen. Then, in epoch 2, I'd like to train the weights that appear in sub-network 2 while the rest are frozen.
How can I do that?
| You can do this easily if your subnet is a subset of layers. That is, you do not need to freeze any partial layers. It is all or nothing.
For your example that would mean dividing the hidden layer into two different 2-node layers. Each would belong to exactly one of the subnetworks, which gets us back to all or nothing.
With that done, you can toggle individual layers using requires_grad. Setting this to False on the parameters will disable training and freeze the weights. To do this for an entire model, sub-model, or Module, you loop through the model.parameters().
For your example, with 3 inputs, 1 output, and a now split 2x2 hidden layer, it might look something like this:
import torch.nn as nn
import torch.nn.functional as F
def set_grad(model, grad):
for param in model.parameters():
param.requires_grad = grad
class HalfFrozenModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.hid1 = torch.nn.Linear(3, 2)
self.hid2 = torch.nn.Linear(3, 2)
self.out = torch.nn.Linear(4, 1)
def set_freeze(self, hid1=False, hid2=False):
set_grad(self.hid1, not hid1)
set_grad(self.hid2, not hid2)
def forward(self, inp):
hid1 = self.hid1(inp)
hid2 = self.hid2(inp)
hidden = torch.cat([hid1, hid2], 1)
return self.out(F.relu(hidden))
Then you can train one half or the other like so:
model = HalfFrozenModel()
model.set_freeze(hid1=True)
# Do some training.
model.set_freeze(hid2=True)
# Do some more training.
# ...
If you happen to use fastai, then there is a concept of layer groups that is also used for this. The fastai documentation goes into some detail about how that works.
| https://stackoverflow.com/questions/61263103/ |
Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1] | I read this question but it doesn't seem to answer my question :(.
So basically I'm trying to vectorize the game snake so it can run faster.
Here is my code till now:
import torch
import torch.nn.functional as F
device = torch.device("cpu")
class SnakeBoard:
def __init__(self, board=None):
if board != None:
self.channels = board
else:
# 0 - Food, 1 - Head, 2 - Body
self.channels = torch.zeros(1, 3, 15, 17,
device=device)
# Initialize game channels
self.channels[:, 0, 7, 12] = 1
self.channels[:, 1, 7, 5] = 1
self.channels[:, 2, 7, 2:6] = torch.arange(1, 5)
self.move()
def move(self):
self.channels[:, 2] -= 1
F.relu(self.channels[:, 2], inplace=True)
# Up movement test
F.conv2d(self.channels[:, 1], torch.tensor([[[0,1,0],[0,0,0],[0,0,0]]]), padding=1)
SnakeBoard()
The first dimension in channels represents the batch size, the second dimension represents the 3 channels of the snake game (food, head, and body), and the third and fourth dimensions represent the height and width of the board.
Unfortunately, when running the code I get the error: Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]
How can I fix that?
| The dimensions of the inputs for the convolution are not correct for a 2D convolution. Let's have a look at the dimensions you're passing to F.conv2d:
self.channels[:, 1].size()
# => torch.Size([1, 15, 17])
torch.tensor([[[0,1,0],[0,0,0],[0,0,0]]]).size()
# => torch.Size([1, 3, 3])
The correct sizes should be
input: (batch_size, in_channels , height, width)
weight: (out_channels, in_channels , kernel_height, kernel_width)
Because your weight has only 3 dimensions, it is considered to be a 1D convolution, but since you called F.conv2d the stride and padding will be tuples and therefore it won't work.
For the input you indexed the second dimension, which selects that particular element across that dimensions and eliminates that dimensions. To keep that dimension you can index it with a slice of just one element.
And for the weight you are missing one dimension as well, which can just be added directly. Also your weight is of type torch.long, since you are only using integers in the tensor creation, but the weight needs to be of type torch.float.
F.conv2d(self.channels[:, 1:2], torch.tensor([[[[0,1,0],[0,0,0],[0,0,0]]]], dtype=torch.float), padding=1)
On a different note, I don't think that convolutions are appropriate for this use case, because you're not using a key property of the convolution, which is to capture the surroundings. Those are just too many unnecessary computations to achieve what you want, most of them are multiplications with 0.
For example, a move up is much easier to achieve by removing the first row and adding a new row of zeros at the end, so everything is shifted up (assuming that the first row is the top and the last row is the bottom of the board).
head = self.channels[:, 1:2]
batch_size, channels, height, width = head.size()
# Take everything but the first row of the head
# Add a row of zeros to the end by concatenating them across the height (dimension 2)
new_head = torch.cat([head[:, :, 1:], torch.zeros(batch_size, channels, 1, width)], dim=2)
# Or if you want to wrap it around the board, it's even simpler.
# Move the first row to the end
wrap_around_head = torch.cat([head[:, :, 1:], head[:, :, 0:1]], dim=2)
| https://stackoverflow.com/questions/61269421/ |
How To downgrade Torch Version for Google Colab | I would like to downgrade the Torch version used in my Google Colab notebooks. How could I do that?
| Run from your cell:
!pip install torch==version
Where version could be, for example, 1.3.0 (default is 1.4.0). You may have to downgrade torchvision appropriately as well so you would go with:
!pip install torch==1.3.0 torchvision==0.4.1
On the other hand PyTorch provides backward compatibility between major versions so there should be no need for downgrading.
Remember you have to restart your Google Colab for changes to take effect
| https://stackoverflow.com/questions/61273244/ |
Constant Training Loss and Validation Loss | I am running an RNN model with the PyTorch library to do sentiment analysis on movie reviews, but somehow the training loss and validation loss remain constant throughout training. I have looked up different online sources but am still stuck.
Can someone please help and take a look at my code?
Some parameters are specified by the assignment:
embedding_dim = 64
n_layers = 1
n_hidden = 128
dropout = 0.5
batch_size = 32
My main code
txt_field = data.Field(tokenize=word_tokenize, lower=True, include_lengths=True, batch_first=True)
label_field = data.Field(sequential=False, use_vocab=False, batch_first=True)
train = data.TabularDataset(path=part2_filepath+"train_Copy.csv", format='csv',
fields=[('label', label_field), ('text', txt_field)], skip_header=True)
validation = data.TabularDataset(path=part2_filepath+"validation_Copy.csv", format='csv',
fields=[('label', label_field), ('text', txt_field)], skip_header=True)
txt_field.build_vocab(train, min_freq=5)
label_field.build_vocab(train, min_freq=2)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
train_iter, valid_iter, test_iter = data.BucketIterator.splits(
(train, validation, test),
batch_size=32,
sort_key=lambda x: len(x.text),
sort_within_batch=True,
device=device)
n_vocab = len(txt_field.vocab)
embedding_dim = 64
n_hidden = 128
n_layers = 1
dropout = 0.5
model = Text_RNN(n_vocab, embedding_dim, n_hidden, n_layers, dropout)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
criterion = torch.nn.BCELoss().to(device)
N_EPOCHS = 15
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
train_loss, train_acc = RNN_train(model, train_iter, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iter, criterion)
My Model
class Text_RNN(nn.Module):
def __init__(self, n_vocab, embedding_dim, n_hidden, n_layers, dropout):
super(Text_RNN, self).__init__()
self.n_layers = n_layers
self.n_hidden = n_hidden
self.emb = nn.Embedding(n_vocab, embedding_dim)
self.rnn = nn.RNN(
input_size=embedding_dim,
hidden_size=n_hidden,
num_layers=n_layers,
dropout=dropout,
batch_first=True
)
self.sigmoid = nn.Sigmoid()
self.linear = nn.Linear(n_hidden, 2)
def forward(self, sent, sent_len):
sent_emb = self.emb(sent)
outputs, hidden = self.rnn(sent_emb)
prob = self.sigmoid(self.linear(hidden.squeeze(0)))
return prob
The training function
def RNN_train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
text, text_lengths = batch.text
predictions = model(text, text_lengths)
batch.label = batch.label.type(torch.FloatTensor).squeeze()
predictions = torch.max(predictions.data, 1).indices.type(torch.FloatTensor)
loss = criterion(predictions, batch.label)
loss.requires_grad = True
acc = binary_accuracy(predictions, batch.label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
The output I run on 10 testing reviews + 5 validation reviews
Epoch [1/15]: Train Loss: 15.351 | Train Acc: 44.44% Val. Loss: 11.052 | Val. Acc: 60.00%
Epoch [2/15]: Train Loss: 15.351 | Train Acc: 44.44% Val. Loss: 11.052 | Val. Acc: 60.00%
Epoch [3/15]: Train Loss: 15.351 | Train Acc: 44.44% Val. Loss: 11.052 | Val. Acc: 60.00%
Epoch [4/15]: Train Loss: 15.351 | Train Acc: 44.44% Val. Loss: 11.052 | Val. Acc: 60.00%
...
I'd appreciate it if someone could point me in the right direction. I believe it is something in the training code, since for the most part I followed this article:
https://www.analyticsvidhya.com/blog/2020/01/first-text-classification-in-pytorch/
| In your training loop you are using the indices from the max operation, which is not differentiable, so you cannot track gradients through it. Because it is not differentiable, everything afterwards does not track the gradients either. Calling
loss.backward() would fail.
# The indices of the max operation are not differentiable
predictions = torch.max(predictions.data, 1).indices.type(torch.FloatTensor)
loss = criterion(predictions, batch.label)
# Setting requires_grad to True to make .backward() work, although incorrectly.
loss.requires_grad = True
Presumably you wanted to fix that by setting requires_grad, but that does not do what you expect, because no gradients are propagated to your model, since the only thing in your computational graph would be the loss itself, and there is nowhere to go from there.
You used the indices to get either 0 or 1, since the output of your model is essentially two classes, and you wanted the one with the higher probability. For the Binary Cross Entropy loss, you only need one class that has a value between 0 and 1 (continuous), which you get by applying the sigmoid function.
So you need change the output channels of the final linear layer to 1:
self.linear = nn.Linear(n_hidden, 1)
and in your training loop you can remove the torch.max call and also the requires_grad.
# Squeeze the model's output to get rid of the single class dimension
predictions = model(text, text_lengths).squeeze()
batch.label = batch.label.type(torch.FloatTensor).squeeze()
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
optimizer.zero_grad()
loss.backward()
Since you have only 1 class at the end, an actual prediction would be either 0 or 1 (nothing in between), to achieve that you can simply use 0.5 as the threshold, so everything below is considered a 0 and everything above is considered a 1. If you are using the binary_accuracy function of the article you were following, that is done automatically for you. They do that by rounding it with torch.round.
| https://stackoverflow.com/questions/61276500/ |
How to apply the histogram function in pytorch to a specific axis? | I would like to use the torch.histc function to different samples in my training batch.
Here is an example:
>>> tt2 = torch.from_numpy(np.array([[-0.2, 1, 0.21], [-0.1, 0.32, 0.2]]))
>>> tt3 = torch.from_numpy(np.array([[-0.8, 0.6, 0.1], [-0.6, 0.5, 0.4]]))
>>> t = torch.cat((tt2, tt3), 1)
>>> t
tensor([[-0.2000, 1.0000, 0.2100, -0.8000, 0.6000, 0.1000],
[-0.1000, 0.3200, 0.2000, -0.6000, 0.5000, 0.4000]],
dtype=torch.float64)
>>> torch.histc(t, bins=1, min=0, max=5)
tensor([8.], dtype=torch.float64)
However, I don't want to apply the histogram function for all the values in t, I rather expect something like this:
>>> torch.histc(torch.tensor([[-0.2000, 1.0000, 0.2100, -0.8000, 0.6000, 0.1000]]), bins=1, min=0, max=5)
tensor([4.])
>>> torch.histc(torch.tensor([[-0.1000, 0.3200, 0.2000, -0.6000, 0.5000, 0.4000]]), bins=1, min=0, max=5)
tensor([4.])
>>>
And, finally, I want to aggregate all the histograms in the same tensor: tensor([[4.], [4.]]).
Thanks in advance
| The problem is solved with this function, but I'm not sure this is the most pythonic way to do it:
import numpy as np
def funct(semembs_as, semembs_bs):
t = torch.cat((semembs_as, semembs_bs), 1)
# make prediction a value between 0.0 and 5.0
l = [torch.histc(ti, bins=1, min=0, max=5) for ti in t]
y = [list(e) for e in l]
return torch.from_numpy(np.array(y))
t1 = torch.from_numpy(np.array([[-0.2, 1, 0.21], [-0.1, 0.32, 0.2]]))
t2 = torch.from_numpy(np.array([[0.7, 0.0, -0.6], [-0.6, 0.5, 0.4]]))
x = funct(t1, t2)
x
tensor([[4.],
[4.]], dtype=torch.float64)
If you have better solutions, don't hesitate to comment, please.
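For the single-bin case in this question (bins=1, min=0, max=5), histc just counts the values that fall into [0, 5], so one loop-free alternative could be (a sketch, counting in-range values per row of the concatenated tensor t):
y = ((t >= 0) & (t <= 5)).sum(dim=1, keepdim=True).double()
which gives the same tensor([[4.], [4.]]) as above. For more than one bin you would still need histc (or torch.histogram in newer PyTorch versions) per row.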
| https://stackoverflow.com/questions/61277170/ |
Converting keras code to pytorch code with Conv1D layer | I have some Keras code that I need to convert to PyTorch. I've done some research, but so far I am not able to reproduce the results I got from Keras. I have spent many hours on this; any tips or help are very appreciated.
Here is the keras code I am dealing with. The input shape is (None, 105, 768) where None is the batch size and I want to apply Conv1D to the input. The desire output in keras is (None, 105)
x = tf.keras.layers.Dropout(0.2)(input)
x = tf.keras.layers.Conv1D(1,1)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Activation('softmax')(x)
What I've tried, but worse in term of results:
self.conv1d = nn.Conv1d(768, 1, 1)
self.dropout = nn.Dropout(0.2)
self.softmax = nn.Softmax()
def forward(self, input):
x = self.dropout(input)
x = x.view(x.shape[0],x.shape[2],x.shape[1])
x = self.conv1d(x)
x = torch.squeeze(x, 1)
x = self.softmax(x)
| The culprit is your attempt to swap the dimensions of the input around, since Keras and PyTorch have different conventions for the dimension order.
x = x.view(x.shape[0],x.shape[2],x.shape[1])
.view() does not swap the dimensions, but changes which part of the data is part of a given dimension. You can consider it as a 1D array, then you decide how many steps you take to cover the dimension. An example makes it much simpler to understand.
# Let's start with a 1D tensor
# That's how the underlying data looks in memory.
x = torch.arange(6)
# => tensor([0, 1, 2, 3, 4, 5])
# How the tensor looks when using Keras' convention (expected input)
keras_version = x.view(2, 3)
# => tensor([[0, 1, 2],
# [3, 4, 5]])
# Vertical isn't swapped with horizontal, but the data is arranged differently
# The numbers are still incrementing from left to right
incorrect_pytorch_version = keras_version.view(3, 2)
# => tensor([[0, 1],
# [2, 3],
# [4, 5]])
To swap the dimensions you need to use torch.transpose.
correct_pytorch_version = keras_version.transpose(0, 1)
# => tensor([[0, 3],
# [1, 4],
# [2, 5]])
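Applied to your module, the forward pass could then look like this (a sketch of your code with the view replaced by a transpose):
def forward(self, input):
    x = self.dropout(input)
    x = x.transpose(1, 2)      # (batch, 105, 768) -> (batch, 768, 105)
    x = self.conv1d(x)         # -> (batch, 1, 105)
    x = torch.squeeze(x, 1)    # -> (batch, 105)
    x = self.softmax(x)
    return x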
| https://stackoverflow.com/questions/61279222/ |
Pytorch - Project each row of a tensor to the column space of another tensor | Currently, I have a tensor A, and a tensor U with full column rank and orthonormal columns (so its columns form a basis of U's column space, and every column, say u_i, has a norm of 1).
I am trying to compute the projection of each row of A onto the column space of U, using the formula from this post.
Which is, to compute Proj(A).
Are there any convenient functions or better operations to achieve this?
Thanks.
| Since the columns of U are orthonormal, the (U^T U)^(-1) term in the projection formula is the identity, so the projection of the rows of A onto the column space of U reduces to A U U^T:
torch.mm(torch.mm(A, U), U.t())
Note that torch.mm(A, U) on its own gives the coordinates of each row of A in the basis formed by U's columns, not the projected vectors in the original space.
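A quick numerical check of that (a sketch; on older PyTorch versions torch.qr can be used instead of torch.linalg.qr):
import torch
A = torch.randn(5, 8)
U, _ = torch.linalg.qr(torch.randn(8, 3))    # 8x3 matrix with orthonormal columns
P = torch.mm(torch.mm(A, U), U.t())          # projection of each row onto col(U)
P_again = torch.mm(torch.mm(P, U), U.t())
print(torch.allclose(P, P_again))            # True: projecting a second time changes nothing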
| https://stackoverflow.com/questions/61280201/ |
Binary DenseNet 121 Classifier only predicting positive with probability >0.5 | I borrowed code for training a DenseNet-121 from this github repo: https://github.com/gaetandi/cheXpert/blob/master/cheXpert_final.ipynb
The github code is for 14 class classification on the CheXpert chest X-ray dataset. I've revised it for binary classification.
# initialize and load the model
pathModel = "/ds2/images/model_ones_2epoch_densenet.tar"#"m-epoch0-07032019-213933.pth.tar"
I initialize the 14 class model so I can use the pretrained weights:
model = DenseNet121(nnClassCount).cuda()
model = torch.nn.DataParallel(model).cuda()
modelCheckpoint = torch.load(pathModel)
model.load_state_dict(modelCheckpoint['state_dict'])
And then convert to binary classification:
nnClassCount = 1
model.module.densenet121.classifier = nn.Sequential(
nn.Linear(1024, nnClassCount),
nn.Sigmoid()
).cuda()
model = torch.nn.DataParallel(model).cuda()
And then train via:
batch, losst, losse = CheXpertTrainer.train(model, dataLoaderTrain, dataLoaderVal, nnClassCount, 100, timestampLaunch, checkpoint = None, weight_path = weight_path)
My training data is laid out in a 2 column csv with column headers ('Path' and 'Class-Positive'), with path locations in the first column and 0 or 1 in the second column. I used oversampling when compiling the training list so paths in the csv are roughly a 50/50 split between 0's and 1's...shuffled.
I use livelossplot to monitor training/validation loss and accuracy. My loss plots look as expected, but the accuracy plots flatline around 0.5 (which makes sense given the 50/50 data if the net is predicting 100% positive or negative). I'm assuming I'm doing something wrong in how I'm making predictions, but maybe something in the training is incorrect.
For predictions and probabilities I'm running:
varOutput = model(varInput)
_, preds = torch.max(varOutput, 1)
print('varshape: ',varOutput.shape)
probs = torch.sigmoid(varOutput)
My issue: preds are all coming out as 0 and probs are all above 0.5
Here is the initial code from github:
import os
import numpy as np
import time
import sys
import csv
import cv2
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
import torch.nn.functional as tfunc
from torch.utils.data import Dataset
from torch.utils.data.dataset import random_split
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import ReduceLROnPlateau
from PIL import Image
import torch.nn.functional as func
from sklearn.metrics.ranking import roc_auc_score
import sklearn.metrics as metrics
import random
use_gpu = torch.cuda.is_available()
# Paths to the files with training, and validation sets.
# Each file contains pairs (path to image, output vector)
pathFileTrain = '../CheXpert-v1.0-small/train.csv'
pathFileValid = '../CheXpert-v1.0-small/valid.csv'
# Neural network parameters:
nnIsTrained = False #pre-trained using ImageNet
nnClassCount = 14 #dimension of the output
# Training settings: batch size, maximum number of epochs
trBatchSize = 64
trMaxEpoch = 3
# Parameters related to image transforms: size of the down-scaled image, cropped image
imgtransResize = (320, 320)
imgtransCrop = 224
# Class names
class_names = ['No Finding', 'Enlarged Cardiomediastinum', 'Cardiomegaly', 'Lung Opacity',
'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia', 'Atelectasis', 'Pneumothorax',
'Pleural Effusion', 'Pleural Other', 'Fracture', 'Support Devices']
class CheXpertDataSet(Dataset):
def __init__(self, image_list_file, transform=None, policy="ones"):
"""
image_list_file: path to the file containing images with corresponding labels.
transform: optional transform to be applied on a sample.
Upolicy: name the policy with regard to the uncertain labels
"""
image_names = []
labels = []
with open(image_list_file, "r") as f:
csvReader = csv.reader(f)
next(csvReader, None)
k=0
for line in csvReader:
k+=1
image_name= line[0]
label = line[5:]
for i in range(14):
if label[i]:
a = float(label[i])
if a == 1:
label[i] = 1
elif a == -1:
if policy == "ones":
label[i] = 1
elif policy == "zeroes":
label[i] = 0
else:
label[i] = 0
else:
label[i] = 0
else:
label[i] = 0
image_names.append('../' + image_name)
labels.append(label)
self.image_names = image_names
self.labels = labels
self.transform = transform
def __getitem__(self, index):
"""Take the index of item and returns the image and its labels"""
image_name = self.image_names[index]
image = Image.open(image_name).convert('RGB')
label = self.labels[index]
if self.transform is not None:
image = self.transform(image)
return image, torch.FloatTensor(label)
def __len__(self):
return len(self.image_names)
#TRANSFORM DATA
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transformList = []
#transformList.append(transforms.Resize(imgtransCrop))
transformList.append(transforms.RandomResizedCrop(imgtransCrop))
transformList.append(transforms.RandomHorizontalFlip())
transformList.append(transforms.ToTensor())
transformList.append(normalize)
transformSequence=transforms.Compose(transformList)
#LOAD DATASET
dataset = CheXpertDataSet(pathFileTrain ,transformSequence, policy="ones")
datasetTest, datasetTrain = random_split(dataset, [500, len(dataset) - 500])
datasetValid = CheXpertDataSet(pathFileValid, transformSequence)
#Problems with patient overlap and identical transforms?
dataLoaderTrain = DataLoader(dataset=datasetTrain, batch_size=trBatchSize, shuffle=True, num_workers=24, pin_memory=True)
dataLoaderVal = DataLoader(dataset=datasetValid, batch_size=trBatchSize, shuffle=False, num_workers=24, pin_memory=True)
dataLoaderTest = DataLoader(dataset=datasetTest, num_workers=24, pin_memory=True)
class CheXpertTrainer():
def train (model, dataLoaderTrain, dataLoaderVal, nnClassCount, trMaxEpoch, launchTimestamp, checkpoint):
#SETTINGS: OPTIMIZER & SCHEDULER
optimizer = optim.Adam (model.parameters(), lr=0.0001, betas=(0.9, 0.999), eps=1e-08, weight_decay=1e-5)
#SETTINGS: LOSS
loss = torch.nn.BCELoss(size_average = True)
#LOAD CHECKPOINT
if checkpoint != None and use_gpu:
modelCheckpoint = torch.load(checkpoint)
model.load_state_dict(modelCheckpoint['state_dict'])
optimizer.load_state_dict(modelCheckpoint['optimizer'])
#TRAIN THE NETWORK
lossMIN = 100000
for epochID in range(0, trMaxEpoch):
timestampTime = time.strftime("%H%M%S")
timestampDate = time.strftime("%d%m%Y")
timestampSTART = timestampDate + '-' + timestampTime
batchs, losst, losse = CheXpertTrainer.epochTrain(model, dataLoaderTrain, optimizer, trMaxEpoch, nnClassCount, loss)
lossVal = CheXpertTrainer.epochVal(model, dataLoaderVal, optimizer, trMaxEpoch, nnClassCount, loss)
timestampTime = time.strftime("%H%M%S")
timestampDate = time.strftime("%d%m%Y")
timestampEND = timestampDate + '-' + timestampTime
if lossVal < lossMIN:
lossMIN = lossVal
torch.save({'epoch': epochID + 1, 'state_dict': model.state_dict(), 'best_loss': lossMIN, 'optimizer' : optimizer.state_dict()}, 'm-epoch'+str(epochID)+'-' + launchTimestamp + '.pth.tar')
print ('Epoch [' + str(epochID + 1) + '] [save] [' + timestampEND + '] loss= ' + str(lossVal))
else:
print ('Epoch [' + str(epochID + 1) + '] [----] [' + timestampEND + '] loss= ' + str(lossVal))
return batchs, losst, losse
#--------------------------------------------------------------------------------
def epochTrain(model, dataLoader, optimizer, epochMax, classCount, loss):
batch = []
losstrain = []
losseval = []
model.train()
for batchID, (varInput, target) in enumerate(dataLoaderTrain):
varTarget = target.cuda(non_blocking = True)
#varTarget = target.cuda()
varOutput = model(varInput)
lossvalue = loss(varOutput, varTarget)
optimizer.zero_grad()
lossvalue.backward()
optimizer.step()
l = lossvalue.item()
losstrain.append(l)
if batchID%35==0:
print(batchID//35, "% batches computed")
#Fill three arrays to see the evolution of the loss
batch.append(batchID)
le = CheXpertTrainer.epochVal(model, dataLoaderVal, optimizer, trMaxEpoch, nnClassCount, loss).item()
losseval.append(le)
print(batchID)
print(l)
print(le)
return batch, losstrain, losseval
#--------------------------------------------------------------------------------
def epochVal(model, dataLoader, optimizer, epochMax, classCount, loss):
model.eval()
lossVal = 0
lossValNorm = 0
with torch.no_grad():
for i, (varInput, target) in enumerate(dataLoaderVal):
target = target.cuda(non_blocking = True)
varOutput = model(varInput)
losstensor = loss(varOutput, target)
lossVal += losstensor
lossValNorm += 1
outLoss = lossVal / lossValNorm
return outLoss
#--------------------------------------------------------------------------------
#---- Computes area under ROC curve
#---- dataGT - ground truth data
#---- dataPRED - predicted data
#---- classCount - number of classes
def computeAUROC (dataGT, dataPRED, classCount):
outAUROC = []
datanpGT = dataGT.cpu().numpy()
datanpPRED = dataPRED.cpu().numpy()
for i in range(classCount):
try:
outAUROC.append(roc_auc_score(datanpGT[:, i], datanpPRED[:, i]))
except ValueError:
pass
return outAUROC
#--------------------------------------------------------------------------------
def test(model, dataLoaderTest, nnClassCount, checkpoint, class_names):
cudnn.benchmark = True
if checkpoint != None and use_gpu:
modelCheckpoint = torch.load(checkpoint)
model.load_state_dict(modelCheckpoint['state_dict'])
if use_gpu:
outGT = torch.FloatTensor().cuda()
outPRED = torch.FloatTensor().cuda()
else:
outGT = torch.FloatTensor()
outPRED = torch.FloatTensor()
model.eval()
with torch.no_grad():
for i, (input, target) in enumerate(dataLoaderTest):
target = target.cuda()
outGT = torch.cat((outGT, target), 0).cuda()
bs, c, h, w = input.size()
varInput = input.view(-1, c, h, w)
out = model(varInput)
outPRED = torch.cat((outPRED, out), 0)
aurocIndividual = CheXpertTrainer.computeAUROC(outGT, outPRED, nnClassCount)
aurocMean = np.array(aurocIndividual).mean()
print ('AUROC mean ', aurocMean)
for i in range (0, len(aurocIndividual)):
print (class_names[i], ' ', aurocIndividual[i])
return outGT, outPRED
class DenseNet121(nn.Module):
"""Model modified.
The architecture of our model is the same as standard DenseNet121
except the classifier layer which has an additional sigmoid function.
"""
def __init__(self, out_size):
super(DenseNet121, self).__init__()
self.densenet121 = torchvision.models.densenet121(pretrained=True)
num_ftrs = self.densenet121.classifier.in_features
self.densenet121.classifier = nn.Sequential(
nn.Linear(num_ftrs, out_size),
nn.Sigmoid()
)
def forward(self, x):
x = self.densenet121(x)
return x
# initialize and load the model
model = DenseNet121(nnClassCount).cuda()
model = torch.nn.DataParallel(model).cuda()
timestampTime = time.strftime("%H%M%S")
timestampDate = time.strftime("%d%m%Y")
timestampLaunch = timestampDate + '-' + timestampTime
batch, losst, losse = CheXpertTrainer.train(model, dataLoaderTrain, dataLoaderVal, nnClassCount, trMaxEpoch, timestampLaunch, checkpoint = None)
print("Model trained")
| It looks like you have adapted the training correctly for the binary classification, but the prediction wasn't, as you are still trying it as if it were a multi-class prediction.
The output of your model (varOutput) has the size (batch_size, 1), since there is only one class. The maximum across that dimension will always be 0, since that is the only class available, there is no separate class for 1.
This single class represents both cases (0 and 1), so you can consider it is a the probability of it being positive (1). To get the distinct value of either 0 or 1, you simply use a threshold of 0.5, so everything below that receives the class 0 and above that 1. This can be easily done with torch.round.
But you also have another problem, you're applying the sigmoid function twice in a row, once in the classifier nn.Sigmoid() and then afterwards again torch.sigmoid(varOutput). That is problematic, because sigmoid(0) = 0.5, hence all your probabilities are over 0.5.
The output of your model are already the probabilities, the only thing left is to round them:
probs = model(varInput)
# The .squeeze(1) is to get rid of the singular class dimension
preds = torch.round(probs).squeeze(1)
| https://stackoverflow.com/questions/61282680/ |
Is it a good idea to Multiply loss().item by batch_size to get the loss of a batch when batch size is not a factor of train_size? | Suppose we have a problem where we have 100 images and a batch size of 15. We have 15 images in all of our batches except the last batch, which contains 10 images.
Suppose we train the network as follows:
network = Network()
optimizer = optim.Adam(network.parameters(),lr=0.001)
for epoch in range(5):
total_loss = 0
train_loader = torch.utils.data.DataLoader(train_set,batch_size=15)
for batch in train_loader:
images,labels = batch
pred = network(images)
loss = F.cross_entropy(pred,labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
total_loss+= loss.item()*15
Isn't the last batch always supposed to give us an increased value of loss, because we will be multiplying by 15 where we were supposed to multiply by 10 in the last batch? Shouldn't it be
total_loss+= loss.item()*len(images) in place of 15 or batch_size?
Can we use
for every epoch:
for every batch:
loss = F.cross_entropy(pred,labels,reduction='sum')
total_loss+=loss.item()
avg_loss_per_epoch = (total_loss/len(train_set))
Can someone please explain whether multiplying by batch_size is a good idea and where I am wrong?
| Yes, you're right. Usually, for running loss the term
total_loss+= loss.item()*15
is written instead as (as done in transfer learning tutorial)
total_loss+= loss.item()*images.size(0)
where images.size(0) gives the current batch size. Thus, it'll give 10 (in your case) instead of hard-coded 15 for the last batch. loss.item()*len(images) is also correct!
In your second example, since you're using reduction='sum', the loss won't be divided by the batch size as it's done by default (because, by default, the reduction='mean' i.e. losses are averaged across observations for each minibatch).
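Put side by side, the two bookkeeping styles accumulate the same total (a sketch, assuming pred, labels and images come from the batch loop in your question):
# style 1: mean loss times the actual batch size
loss = F.cross_entropy(pred, labels)                    # reduction='mean' by default
total_loss += loss.item() * images.size(0)

# style 2: summed loss, no multiplication needed
loss = F.cross_entropy(pred, labels, reduction='sum')
total_loss += loss.item()
Either way, total_loss / len(train_set) gives the average loss per sample, regardless of the size of the last batch. For the backward pass itself you would normally keep the default 'mean' reduction so the gradient scale does not depend on the batch size.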
| https://stackoverflow.com/questions/61284248/ |
How can I slice a PyTorch tensor with another tensor? | I have:
inp = torch.randn(4, 1040, 161)
and I have another tensor called indices with values:
tensor([[124, 583, 158, 529],
[172, 631, 206, 577]], device='cuda:0')
I want the equivalent of:
inp0 = inp[:,124:172,:]
inp1 = inp[:,583:631,:]
inp2 = inp[:,158:206,:]
inp3 = inp[:,529:577,:]
Except all added together, to have a .size of [4, 48, 161]. How can I accomplish this?
Currently, my solution is a for loop:
left_indices = torch.empty(inp.size(0), self.side_length, inp.size(2))
for batch_index in range(len(inp)):
print(left_indices_start[batch_index].item())
left_indices[batch_index] = inp[batch_index, left_indices_start[batch_index].item():left_indices_end[batch_index].item()]
| Here you go (EDIT: you probably need to copy tensors to cpu using tensor=tensor.cpu() before doing following operations):
index = tensor([[124, 583, 158, 529],
[172, 631, 206, 577]], device='cuda:0')
#create a concatenated list of ranges of indices you desire to slice
indexer = np.r_[tuple([np.s_[i:j] for (i,j) in zip(index[0,:],index[1,:])])]
#slice using numpy indexing
sliced_inp = inp[:, indexer, :]
Here is how it works:
np.s_[i:j] creates a slice object (simply a range) of indices from start=i to end=j.
np.r_[i:j, k:m] creates a list ALL indices in slices (i,j) and (k,m) (You can pass more slices to np.r_ to concatenate them all together at once. This is an example of concatenating only two slices.)
Therefore, indexer creates a list of ALL indices by concatenating a list of slices (each slice is a range of indices).
UPDATE: If you need to remove interval overlaps and sort intervals:
indexer = np.unique(indexer)
if you want to remove interval overlaps but not sort and keep original order (and first occurrences of overlaps)
uni = np.unique(indexer, return_index=True)[1]
indexer = [indexer[index] for index in sorted(uni)]
| https://stackoverflow.com/questions/61290287/ |
Breaking down a batch in pytorch leads to different results, why? | I was trying something with batch processing in pytorch. In my code below, you may think of x as a batch of batch size 2 (each sample is a 10d vector). I use x_sep to denote the first sample in x.
import torch
import torch.nn as nn
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.fc1 = nn.Linear(10,10)
def forward(self, x):
x = self.fc1(x)
return x
f = net()
x = torch.randn(2,10)
print(f(x[0])==f(x)[0])
Ideally, f(x[0])==f(x)[0] should give a tensor with all true entries. But the output on my computer is
tensor([False, False, True, True, False, False, False, False, True, False])
Why does this happen? Is it a computational error? Or is it related to how batch processing is implemented in pytorch?
Update: I simplified the code a bit. The question remains the same.
My reasoning:
I believe f(x)[0]==f(x[0]) should have all its entries True because the law of matrix multiplication says so. Let us think of x as a 2x10 matrix, and think of the linear transformation f() as represented by a matrix B (ignoring the bias for a moment). Then f(x)=xB in our notation. The law of matrix multiplication tells us that xB is equal to first multiplying the two rows by B on the right separately, and then stacking the two results back together. Translated back to the code, this is f(x[0])==f(x)[0] and f(x[1])==f(x)[1].
Even if we consider the bias, every row should have the same bias and the equality should still hold.
Also note that no training is done here. Hence how the weights are initialized shouldn't matter.
| TL;DR
Under the hood it uses a function named addmm that has some optimizations, and probably multiplies the vectors in a slightly different way.
I just understood what the real issue was, and I edited the answer.
After trying to reproduce and debug it on my machine, I found out that:
f(x)[0].detach().numpy()
>>>array([-0.5386441 , 0.4983463 , 0.07970242, 0.53507525, 0.71045876,
0.7791027 , 0.29027492, -0.07919329, -0.12045971, -0.9111403 ],
dtype=float32)
f(x[0]).detach().numpy()
>>>array([-0.5386441 , 0.49834624, 0.07970244, 0.53507525, 0.71045876,
0.7791027 , 0.29027495, -0.07919335, -0.12045971, -0.9111402 ],
dtype=float32)
f(x[0]).detach().numpy() == f(x)[0].detach().numpy()
>>>array([ True, False, False, True, True, True, False, False, True,
False])
If you look closely, you will find that for all the indices which are False, there is a slight numeric difference around the 5th decimal place.
After some more debugging, I saw that the linear function uses addmm:
def linear(input, weight, bias=None):
if input.dim() == 2 and bias is not None:
# fused op is marginally faster
ret = torch.addmm(bias, input, weight.t())
else:
output = input.matmul(weight.t())
if bias is not None:
output += bias
ret = output
return ret
addmm implements beta*mat + alpha*(mat1 @ mat2) and is supposedly faster (see here for example).
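In practice, floating-point outputs are compared with a tolerance rather than exact equality, e.g.:
torch.allclose(f(x[0]), f(x)[0], atol=1e-6)  # True
which confirms the two results only differ by floating-point rounding.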
Credit to Szymon Maszke
| https://stackoverflow.com/questions/61292150/ |
Cannot find in-place operation causing "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:" | I'm relatively new to PyTorch and am trying to reproduce an algorithm from an academic paper that approximates a term using the Hessian matrix. I've set up a toy problem so that I can compare the results of the full Hessian with the approximation. I found this gist and have been playing with it to compute the full Hessian part of the algorithm.
I am getting the error: "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation."
I've scoured through the simple example code, documentation, and many, many forum posts about this issue and cannot find any in-place operations. Any help would be greatly appreciated!
Here is my code:
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
torch.set_printoptions(precision=20, linewidth=180)
def jacobian(y, x, create_graph=False):
jac = []
flat_y = y.reshape(-1)
grad_y = torch.zeros_like(flat_y)
for i in range(len(flat_y)):
grad_y[i] = 1.
grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
jac.append(grad_x.reshape(x.shape))
grad_y[i] = 0.
return torch.stack(jac).reshape(y.shape + x.shape)
def hessian(y, x):
return jacobian(jacobian(y, x, create_graph=True), x)
def f(x):
return x * x
np.random.seed(435537698)
num_dims = 2
num_samples = 3
X = [np.random.uniform(size=num_dims) for i in range(num_samples)]
print('X: \n{}\n\n'.format(X))
mean = torch.Tensor(np.mean(X, axis=0))
mean.requires_grad = True
print('mean: \n{}\n\n'.format(mean))
cov = torch.Tensor(np.cov(X, rowvar=False))
print('cov: \n{}\n\n'.format(cov))
with autograd.detect_anomaly():
hessian_matrices = hessian(f(mean), mean)
print('hessian: \n{}\n\n'.format(hessian_matrices))
And here is the output with the stack trace:
X:
[array([0.81700949, 0.17141617]), array([0.53579366, 0.31141496]), array([0.49756485, 0.97495776])]
mean:
tensor([0.61678934097290039062, 0.48592963814735412598], requires_grad=True)
cov:
tensor([[ 0.03043144382536411285, -0.05357056483626365662],
[-0.05357056483626365662, 0.18426130712032318115]])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-5a1c492d2873> in <module>()
42
43 with autograd.detect_anomaly():
---> 44 hessian_matrices = hessian(f(mean), mean)
45 print('hessian: \n{}\n\n'.format(hessian_matrices))
2 frames
<ipython-input-3-5a1c492d2873> in hessian(y, x)
21
22 def hessian(y, x):
---> 23 return jacobian(jacobian(y, x, create_graph=True), x)
24
25 def f(x):
<ipython-input-3-5a1c492d2873> in jacobian(y, x, create_graph)
15 for i in range(len(flat_y)):
16 grad_y[i] = 1.
---> 17 grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
18 jac.append(grad_x.reshape(x.shape))
19 grad_y[i] = 0.
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
155 return Variable._execution_engine.run_backward(
156 outputs, grad_outputs, retain_graph, create_graph,
--> 157 inputs, allow_unused)
158
159
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
| I sincerely thought this was a bug in PyTorch, but after posting a bug, I got a good answer from albanD. https://github.com/pytorch/pytorch/issues/36903#issuecomment-616671247 He also pointed out that https://discuss.pytorch.org/ is available for asking questions.
The problem arises because we traverse the computation graph over and over again. Exactly what is going on here is beyond me though...
The in-place edits that your error message refers to are the obvious ones: grad_y[i] = 1. and grad_y[i] = 0.. Reusing grad_y over and over again in the computation is what causes trouble. Redefining jacobian(...) as below works for me.
def jacobian(y, x, create_graph=False):
jac = []
flat_y = y.reshape(-1)
for i in range(len(flat_y)):
grad_y = torch.zeros_like(flat_y)
grad_y[i] = 1.
grad_x, = torch.autograd.grad(flat_y, x, grad_y, retain_graph=True, create_graph=create_graph)
jac.append(grad_x.reshape(x.shape))
return torch.stack(jac).reshape(y.shape + x.shape)
An alternative that works, but is more like black magic to me, is leaving jacobian(...) as it is, and instead redefining f(x) as
def f(x):
return x * x * 1
That works too.
| https://stackoverflow.com/questions/61308237/ |
Artificial Neural Networks | Which loss function is used for the optimization of the model? | I have a code sample from github. Link is here: github code sample
My question is "Which loss function is used for the optimization of the model?"
Adam Optimizer or CrossEntropyLoss
What are the differences between them?
| A loss function determines how good the model is; it does this by comparing the predictions with the actual targets. CrossEntropyLoss is commonly used for classification tasks.
Once the loss is computed, we need to update the weights of the model to train the neural network.
This is done by calculating gradients from the loss; the optimizer then determines how those gradients are used to move the weights of the model. Adam is one of the popular optimizers, based on momentum and RMSprop.
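In PyTorch code the two play different roles. A minimal sketch (model, inputs and targets here are placeholders for whatever the linked repo defines):
import torch.nn as nn
import torch.optim as optim

criterion = nn.CrossEntropyLoss()                      # loss: measures how wrong the predictions are
optimizer = optim.Adam(model.parameters(), lr=1e-3)    # optimizer: decides how the weights get updated

optimizer.zero_grad()
loss = criterion(model(inputs), targets)               # compute the loss
loss.backward()                                        # gradients of the loss w.r.t. the weights
optimizer.step()                                       # Adam applies the update using those gradients
So CrossEntropyLoss is the loss function being optimized, and Adam is the optimizer doing the optimization.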
| https://stackoverflow.com/questions/61309456/ |
Problem with updating running_mean and running_var in a custom Batchnorm built in Pytorch? | I have been trying to implement a custom batch normalization function such that it can be extended to the multi-GPU setting, in particular the DataParallel module in PyTorch. The custom batchnorm works alright when using 1 GPU, but, when extended to 2 or more, the running mean and variance work inside the forward function; however, once it returns from the network, the mean and variance are reinitialized to 0 and 1.
The torch.nn.DataParallel documentation mentions in the warning section that "In each forward, module is replicated on each device, so any updates to the running module in forward will be lost. For example, if module has a counter attribute that is incremented in each forward, it will always stay at the initial value because the update is done on the replicas which are destroyed after forward." But I am not really sure how to retain the mean and variance from the default device.
I have provided code with the result obtained during multi GPU training. This code utilizes the Batchnorm provided here.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from torch.nn.parameter import Parameter
class ptrblck_BatchNorm2d(nn.BatchNorm2d):
def __init__(self, num_features, eps=1e-5, momentum=0.1,
affine=True, track_running_stats=True):
super(ptrblck_BatchNorm2d, self).__init__(
num_features, eps, momentum, affine, track_running_stats)
def forward(self, input):
self._check_input_dim(input)
exponential_average_factor = 0.0
if self.training and self.track_running_stats:
if self.num_batches_tracked is not None:
self.num_batches_tracked += 1
if self.momentum is None: # use cumulative moving average
exponential_average_factor = 1.0 / float(self.num_batches_tracked)
else: # use exponential moving average
exponential_average_factor = self.momentum
# calculate running estimates
if self.training:
mean = input.mean([0, 2, 3])
# use biased var in train
var = input.var([0, 2, 3], unbiased=False)
n = input.numel() / input.size(1)
with torch.no_grad():
self.running_mean = exponential_average_factor * mean\
+ (1 - exponential_average_factor) * self.running_mean
# update running_var with unbiased var
self.running_var = exponential_average_factor * var * n / (n - 1)\
+ (1 - exponential_average_factor) * self.running_var
else:
mean = self.running_mean
var = self.running_var
input = (input - mean[None, :, None, None]) / (torch.sqrt(var[None, :, None, None] + self.eps))
if self.affine:
input = input * self.weight[None, :, None, None] + self.bias[None, :, None, None]
return input
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
self.bn1 = ptrblck_BatchNorm2d(64)
print("==> printing bn1 mean when init")
print(self.bn1.running_mean)
print("==> printing bn1 when init")
print(self.bn1.running_mean)
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
self.classifier = nn.Linear(64, 10)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
x = self.pool(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
print("======================================================")
print("==> printing bn1 running mean from NET during forward")
print(net.module.bn1.running_mean)
print("==> printing bn1 running mean from SELF. during forward")
print(self.bn1.running_mean)
print("==> printing bn1 running var from NET during forward")
print(net.module.bn1.running_var)
print("==> printing bn1 running mean from SELF. during forward")
print(self.bn1.running_var)
return x
# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# Model
print('==> Building model..')
net = net()
net = torch.nn.DataParallel(net).cuda()
print('Number of GPU {}'.format(torch.cuda.device_count()))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
# Training
def train(epoch):
print('\nEpoch: %d' % epoch)
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.cuda(), targets.cuda()
outputs = net(inputs)
loss = criterion(outputs, targets)
print("====================================================")
print("==> printing bn1 running mean FROM net after forward")
print(net.module.bn1.running_mean)
print("==> printing bn1 running var FROM net after forward")
print(net.module.bn1.running_var)
break
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# train_loss += loss.item()
# _, predicted = outputs.max(1)
# total += targets.size(0)
# correct += predicted.eq(targets).sum().item()
# break
for epoch in range(0, 1):
train(epoch)
Result:
==> Preparing data..
Files already downloaded and verified
Files already downloaded and verified
==> Building model..
==> printing bn1 mean when init
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
==> printing bn1 when init
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
Number of GPU 2
Epoch: 0
======================================================
==> printing bn1 running mean from NET during forward
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
device='cuda:0')
==> printing bn1 running mean from SELF. during forward
tensor([ 0.0053, 0.0010, -0.0077, -0.0290, 0.0241, 0.0258, -0.0048, 0.0151,
-0.0133, 0.0080, 0.0197, -0.0042, -0.0188, 0.0233, 0.0310, -0.0230,
-0.0133, 0.0222, 0.0119, -0.0042, -0.0220, -0.0169, -0.0342, -0.0025,
0.0338, -0.0070, 0.0202, 0.0050, 0.0108, 0.0008, 0.0363, 0.0347,
-0.0106, 0.0082, 0.0128, 0.0074, 0.0111, -0.0030, -0.0089, 0.0070,
-0.0262, -0.0029, 0.0053, -0.0136, -0.0183, 0.0045, -0.0014, -0.0221,
0.0132, 0.0064, 0.0388, -0.0220, -0.0008, 0.0400, -0.0187, 0.0397,
-0.0131, -0.0176, 0.0035, 0.0055, -0.0270, 0.0066, -0.0149, 0.0135],
device='cuda:0')
==> printing bn1 running var from NET during forward
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0')
==> printing bn1 running mean from SELF. during forward
tensor([0.9665, 0.9073, 0.9220, 1.0947, 1.0687, 0.9624, 0.9252, 0.9131, 0.9066,
0.9536, 0.9258, 0.9203, 1.0359, 0.9690, 1.1066, 1.0636, 0.9135, 0.9644,
0.9373, 0.9846, 0.9696, 0.9454, 1.0459, 0.9245, 0.9778, 0.9709, 0.9352,
0.9995, 0.9657, 0.9510, 1.0943, 1.0171, 0.9298, 1.0747, 0.9341, 0.9635,
0.9978, 0.9303, 0.9261, 0.9137, 0.9569, 1.0066, 1.0463, 0.9955, 0.9621,
0.9172, 0.9836, 0.9817, 0.9086, 0.9576, 1.0905, 0.9861, 0.9661, 1.1773,
0.9345, 1.0904, 0.9133, 1.0660, 0.9164, 0.9058, 0.9446, 0.9225, 1.0914,
0.9292], device='cuda:0')
======================================================
==> printing bn1 running mean from NET during forward
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
device='cuda:0')
==> printing bn1 running mean from SELF. during forward
tensor([-0.0020, 0.0002, -0.0103, -0.0426, 0.0386, 0.0311, -0.0059, 0.0151,
-0.0140, 0.0145, 0.0218, -0.0029, -0.0281, 0.0284, 0.0449, -0.0329,
-0.0107, 0.0278, 0.0135, -0.0123, -0.0260, -0.0214, -0.0423, -0.0035,
0.0410, -0.0097, 0.0276, 0.0102, 0.0197, -0.0001, 0.0483, 0.0451,
-0.0078, 0.0190, 0.0135, -0.0004, 0.0196, -0.0028, -0.0140, 0.0070,
-0.0332, -0.0110, 0.0151, -0.0210, -0.0226, 0.0074, -0.0088, -0.0314,
0.0125, -0.0003, 0.0505, -0.0312, 0.0086, 0.0544, -0.0245, 0.0528,
-0.0086, -0.0290, 0.0063, 0.0042, -0.0339, 0.0061, -0.0277, 0.0092],
device='cuda:1')
==> printing bn1 running var from NET during forward
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0')
==> printing bn1 running mean from SELF. during forward
tensor([0.9665, 0.9072, 0.9211, 1.0999, 1.0714, 0.9610, 0.9209, 0.9125, 0.9063,
0.9553, 0.9260, 0.9189, 1.0386, 0.9706, 1.1139, 1.0610, 0.9121, 0.9660,
0.9366, 0.9886, 0.9683, 0.9454, 1.0511, 0.9227, 0.9792, 0.9704, 0.9330,
0.9989, 0.9657, 0.9476, 1.1008, 1.0191, 0.9294, 1.0814, 0.9320, 0.9642,
1.0006, 0.9287, 0.9254, 0.9128, 0.9559, 1.0100, 1.0521, 0.9972, 0.9621,
0.9168, 0.9849, 0.9803, 0.9083, 0.9556, 1.0946, 0.9865, 0.9651, 1.1880,
0.9330, 1.0959, 0.9116, 1.0706, 0.9149, 0.9057, 0.9450, 0.9215, 1.0972,
0.9261], device='cuda:1')
====================================================
==> printing bn1 running mean FROM net after forward
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
device='cuda:0')
==> printing bn1 running var FROM net after forward
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], device='cuda:0')
How can I make sure that the running estimates of the default device are used? Currently, I am not working towards synchronized BatchNorm.
| Replacing
self.running_mean = (...)
with
self.running_mean.copy_(...)
did the job.
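Concretely, the two running-stat assignments inside the custom forward become in-place copies; a sketch of the same update using copy_:
with torch.no_grad():
    self.running_mean.copy_(exponential_average_factor * mean
                            + (1 - exponential_average_factor) * self.running_mean)
    # update running_var with the unbiased variance, as before
    self.running_var.copy_(exponential_average_factor * var * n / (n - 1)
                           + (1 - exponential_average_factor) * self.running_var)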
Reference
| https://stackoverflow.com/questions/61311334/ |
Index a torch tensor with an array | I have the following torch tensor:
tensor([[-0.2, 0.3],
[-0.5, 0.1],
[-0.4, 0.2]])
and the following numpy array: (I can convert it to something else if necessary)
[1 0 1]
I want to get the following tensor:
tensor([0.3, -0.5, 0.2])
i.e. I want the numpy array to index each sub-element of my tensor. Preferably without using a loop.
Thanks in advance
| You may want to use torch.gather - "Gathers values along an axis specified by dim."
t = torch.tensor([[-0.2, 0.3],
[-0.5, 0.1],
[-0.4, 0.2]])
idxs = np.array([1,0,1])
idxs = torch.from_numpy(idxs).long().unsqueeze(1)
# or torch.from_numpy(idxs).long().view(-1,1)
t.gather(1, idxs)
tensor([[ 0.3000],
[-0.5000],
[ 0.2000]])
Here, your index is a numpy array, so you have to convert it to a LongTensor.
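If you want the flat result shown in the question rather than a column, just squeeze the extra dimension afterwards:
t.gather(1, idxs).squeeze(1)
# tensor([ 0.3000, -0.5000,  0.2000])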
| https://stackoverflow.com/questions/61311688/ |
PyTorch - what is the reason to resize my image and how do I decide the best size? | I have some malaria infected blood cell images (multiple blood cells per image) with details of:
num_features = 929
num_features_train = 743
num_features_test = 186
depth = 24
channels = 3
width = 1600
height = 1200
In the previous image classification I did on a malaria-infected blood cell image dataset (single blood cell per image), the image details were width = 100, height = 101, depth = 24. So resizing to 50x50 didn't seem an issue.
I need to resize this, obviously, but don't know how to choose the best size for resizing an image so large. I can't find anything in my online searching talking about this. Any advise/experience would be helpful and greatly appreciated. Thanks!!
p.s. I already figured out that if I don't resize something this large I will get a memory error. Yeah, probably. :)
MemoryError: Unable to allocate 6.64 GiB for an array with shape (929, 1600, 1600, 3) and data type uint8
p.p.s. resized to 100x100 and still got memory error
resized to 50x50 and it was OK
So, I guess my question is, doesn't reducing the size so much reduce the resolution? So how does the convolutional layer filters do proper filtering if the resolution is reduced so drastically?
| Reducing the size reduces the resolution, but it can still keep all the important features of the original image. Smaller images = fewer features = quicker training, less overfitting. However, a too drastic drop in size may cause images to lose the point of interest. For example, after resizing, a tumor may be smoothed out by the surrounding pixels and disappear.
Overall: if images keep the point of interest after resizing, it should be OK.
| https://stackoverflow.com/questions/61312746/ |
libtorch (PyTorch C++) weird class syntax | In the official PyTorch C++ examples on GitHub Here
you can witness a strange definition of a class:
class CustomDataset : public torch::data::datasets::Dataset<CustomDataset> {...}
My understanding is that this defines a class CustomDataset which "inherits from" or "extends" torch::data::datasets::Dataset<CustomDataset>. This is weird to me since the class we're creating is inheriting from another class which is parameterized by the class we're creating...How does this even work? What does it mean? This seems to me like an Integer class inheriting from vector<Integer>, which seems absurd.
This is the curiously-recurring template pattern, or CRTP for short. A major advantage of this technique is that it enables so-called static polymorphism, meaning that functions in torch::data::datasets::Dataset can call into functions of CustomDataset, without needing to make those functions virtual (and thus deal with the runtime mess of virtual method dispatch and so on). You can also perform compile-time metaprogramming such as compile-time enable_ifs depending on the properties of the custom dataset type.
In the case of PyTorch, BaseDataset (the superclass of Dataset) uses this technique heavily to support operations such as mapping and filtering:
template <typename TransformType>
MapDataset<Self, TransformType> map(TransformType transform) & {
return datasets::map(static_cast<Self&>(*this), std::move(transform));
}
Note the static cast of this to the derived type (legal as long as CRTP is properly applied); datasets::map constructs a MapDataset object which is also parametrized by the dataset type, allowing the MapDataset implementation to statically call methods such as get_batch (or encounter a compile-time error if they do not exist).
Furthermore, since MapDataset receives the custom dataset type as a type parameter, compile-time metaprogramming is possible:
/// The implementation of `get_batch()` for the stateless case, which simply
/// applies the transform to the output of `get_batch()` from the dataset.
template <
typename D = SourceDataset,
typename = torch::disable_if_t<D::is_stateful>>
OutputBatchType get_batch_impl(BatchRequestType indices) {
return transform_.apply_batch(dataset_.get_batch(std::move(indices)));
}
/// The implementation of `get_batch()` for the stateful case. Here, we follow
/// the semantics of `Optional.map()` in many functional languages, which
/// applies a transformation to the optional's content when the optional
/// contains a value, and returns a new optional (of a different type) if the
/// original optional returned by `get_batch()` was empty.
template <typename D = SourceDataset>
torch::enable_if_t<D::is_stateful, OutputBatchType> get_batch_impl(
BatchRequestType indices) {
if (auto batch = dataset_.get_batch(std::move(indices))) {
return transform_.apply_batch(std::move(*batch));
}
return nullopt;
}
Notice that the conditional enable is dependent on SourceDataset, which we only have available because the dataset is parametrized with this CRTP pattern.
| https://stackoverflow.com/questions/61314839/ |
Resize Vs CenterCrop Vs RandomResizedCrop Vs RandomCrop | Can anyone tell me in which situations the above functions are used and how they affect the image size?
I want to resize the Cat V Dogs images and i am a bit confuse about how to use them.
| There are lots of details in TorchVision documentation actually.
The typical use case is for object detection or image segmentation tasks, but other uses could exist.
Here is a non-exhaustive list of uses:
Resize is used in Convolutional Neural Networks to adapt the input image to the network input shape, in this case this is not data-augmentation but just pre-processing. It can also be used in Fully Convolutional Networks to emulate different scales for an input image, this is data-augmentation.
CenterCrop, RandomCrop and RandomResizedCrop are used in segmentation tasks to train a network on fine details without imposing too much of a burden during training. For example, with a database of 2048x2048 images you can train on 512x512 sub-images and then at test time infer on full-resolution images. They are also used in object detection networks as data augmentation. The resized variant lets you combine the crop with a resize in a single operation.
All of them potentially change the image resolution.
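For illustration, a typical split between training-time augmentation and deterministic test-time preprocessing (the sizes here are placeholders) could look like this:
import torchvision.transforms as T

train_tf = T.Compose([
    T.RandomResizedCrop(224),   # random scale + crop: data augmentation
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
test_tf = T.Compose([
    T.Resize(256),              # deterministic resize of the smaller edge
    T.CenterCrop(224),          # deterministic central crop
    T.ToTensor(),
])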
| https://stackoverflow.com/questions/61324483/ |
Gradient of the loss of DistilBERT for measuring token importance | I am trying to access the gradient of the loss in DistilBERT with respect to each attention weight in the first layer. I could access the computed gradient value of the output weight matrix via the following code when requires_grad=True
loss.backward()
for name, param in model.named_parameters():
if name == 'transformer.layer.0.attention.out_lin.weight':
print(param.grad) #shape is [768,768]
where model is the loaded distilbert model.
My question is how to get the gradient with respect to [SEP] or [CLS] or other tokens' attention? I need it to reproduce the figure about the "Gradient-based feature importance estimates for attention to [SEP]" in the following link:
https://medium.com/analytics-vidhya/explainability-of-bert-through-attention-7dbbab8a7062
A similar question for the same purpose has been asked in the following, but it is not my issue:
BERT token importance measuring issue. Grad is none
| By default, the gradients are retained only for parameters, basically just to save memory. If you need gradients of inner nodes of the computation graph, you need to have the respective tensor before calling backward() and add a hook that will be executed at the backward pass.
A minimum solution from PyTorch forum:
import torch
from torch.autograd import Variable  # Variable is deprecated; a plain tensor with requires_grad=True works the same way

yGrad = torch.zeros(1,1)
def extract(xVar):
global yGrad
yGrad = xVar
xx = Variable(torch.randn(1,1), requires_grad=True)
yy = 3*xx
zz = yy**2
yy.register_hook(extract)
#### Run the backprop:
print (yGrad) # Shows 0.
zz.backward()
print (yGrad) # Show the correct dzdy
In this case, the gradients are stored in a global variable where they persist after PyTorch gets rid of them in the graph itself.
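Applied to the DistilBERT case from the question, a sketch could look like this; the module path and the structure of the hook's outputs are assumptions that depend on your transformers version, so verify them first:
saved_grads = {}

def grad_hook(grad):
    saved_grads["layer0_attention"] = grad

def forward_hook(module, inputs, outputs):
    # assumption: the first element of the output tuple is the attention output
    out = outputs[0] if isinstance(outputs, tuple) else outputs
    out.register_hook(grad_hook)

handle = model.transformer.layer[0].attention.register_forward_hook(forward_hook)
# ... run the forward pass and loss.backward() exactly as before ...
# saved_grads["layer0_attention"] then holds d(loss)/d(attention output)
handle.remove()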
| https://stackoverflow.com/questions/61326892/ |
how to solve size mismatch for this model? | data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(256),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
batchsize = 4
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=5, padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(64, 128, kernel_size=5, padding=1)
self.fc1 = nn.Linear(256*6*6, 4096)
self.fc2 = nn.Linear(4096, 4096)
self.fc3 = nn.Linear(4096, 2)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.reshape(x.size(0), -1)
print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
RuntimeError: size mismatch, m1: [4 x 373248], m2: [9216 x 4096] at C:/w/1/s/tmp_conda_3.8_075429/conda/conda-bld/pytorch_1579852542185/work/aten/src\THC/generic/THCTensorMathBlas.cu:290
| With the provided debug information we can only tell that there is a size mismatch in one of the layers.
Looking at the error, it seems the error is in the first linear layer. You should change its size to 373248 instead of 256*6*6.
This should also be clear from the output of the print statement print(x.shape).
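For example (a sketch; verify against the printed shape rather than hard-coding blindly, and note the 'val' transform crops to 256, which would produce yet another flattened size):
# 373248 = 128 * 54 * 54 for the 224x224 crops produced by the train transform
self.fc1 = nn.Linear(373248, 4096)   # was nn.Linear(256*6*6, 4096)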
| https://stackoverflow.com/questions/61328638/ |
softmax dims and variable volatile in PyTorch | I have a code for previous version of PyTorch and I receive 2 warning for the 3nd line of it:
import torch.nn.functional as F
def select_action(self, state):
probabilities = F.softmax(self.model(Variable(state, volatile = True))*100) # T=100
action = probs.multinomial(num_samples=1)
return action.data[0,0]
UserWarning: volatile was removed and now has no effect. Use with
torch.no_grad(): instead.
UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
I found that:
Volatile is recommended for purely inference mode, when you’re sure
you won’t be even calling .backward(). It’s more efficient than any
other autograd setting - it will use the absolute minimal amount of
memory to evaluate the model. volatile also determines that
requires_grad is False.
Am I right that I should just remove it?
And because I want to get probabilities, should I therefore use dim=1?
And so the 3rd line of my code should look like:
probabilities = F.softmax(self.model(Variable(state), dim=1)*100) # T=100
state is created here:
def update(self, reward, new_signal):
new_state = torch.Tensor(new_signal).float().unsqueeze(0)
self.memory.push((self.last_state, new_state, torch.LongTensor([int(self.last_action)]), torch.Tensor([self.last_reward])))
action = self.select_action(new_state)
if len(self.memory.memory) > 100:
batch_state, batch_next_state, batch_action, batch_reward = self.memory.sample(100)
self.learn(batch_state, batch_next_state, batch_reward, batch_action)
self.last_action = action
self.last_state = new_state
self.last_reward = reward
self.reward_window.append(reward)
if len(self.reward_window) > 1000:
del self.reward_window[0]
return action
| You are right but not "fully" right.
Except changes you mentioned you should use torch.no_grad() as mentioned like this:
def select_action(self, state):
with torch.no_grad():
probabilities = F.softmax(self.model(state), dim=1)*100
action = probabilities.multinomial(num_samples=1)
return action.data[0,0]
This block turns off autograd engine for code within it (so you save the memory similarly to volatile).
Also please notice Variable is deprecated as well (check here) and state should be simply torch.tensor created with requires_grad=True.
BTW. You have probs and probabilities but I assume it's the same thing and merely a typo.
| https://stackoverflow.com/questions/61333684/ |
Is it safe to always use torch.tensor or torch.FloatTensor? Or do I need to treat Ints with care? | https://pytorch.org/docs/stable/tensors.html
I am trying to understand the difference between tensor, FloatTensor, IntTensor - and am wondering if I can just always stick to tensor... or maybe FloatTensor.
I am going to be using a mix of different tensors that will be:
{integers:labels, floats:continuous, one-hot-encoded:categoricals}
Do I need to explicitly set each of these as kinds of variables as different types of tensors? Will they all work as floats? Will they work in combination with each other?
Would this get me into trouble downstream?
l_o_l = [[1,2,3],[1,2,3],[1,2,3]]
int_tnz = th.FloatTensor(l_o_l)
int_tnz
tensor([[1., 2., 3.],
        [1., 2., 3.],
        [1., 2., 3.]])
int_tnz.dtype
torch.float32
l_o_fl = [[1.1,2.2,3.3],[1.1,2.2,3.3],[1.1,2.2,3.3]]
int_tnz = th.tensor(l_o_fl)
int_tnz
tensor([[1.1000, 2.2000, 3.3000],
        [1.1000, 2.2000, 3.3000],
        [1.1000, 2.2000, 3.3000]])
int_tnz.dtype
torch.float32
import torch.nn
criterion = torch.nn.CrossEntropyLoss()
predicted = torch.rand(10, 10, dtype=torch.float)
target = torch.rand(10) #.to(torch.long)
criterion(predicted, target)
# RuntimeError: expected scalar type Long but found Float
You need to uncomment the conversion to make it work. I cannot think of a bigger issue, but why bother converting the integers to float in the first place?
Regarding the use of torch.tensor and torch.FloatTensor, I prefer the former. torch.FloatTensor seems to be the legacy constructor, and it does not accept device as an argument. Again, I do not think this is a big concern, but still, using torch.tensor increases the readability of the code.
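A small sketch of keeping the dtypes explicit per role (the values are placeholders):
features = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32)  # continuous / one-hot inputs
labels = torch.tensor([0, 2], dtype=torch.long)                         # class labels, what CrossEntropyLoss expects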
| https://stackoverflow.com/questions/61334090/ |
bool value of Tensor with more than one value is ambiguous | I'm writing a neural network to do regression and here is my codes:
class Model(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
self.h1 = nn.Linear(input_size, hidden_size)
self.h2 = nn.Linear(hidden_size, hidden_size)
self.h3 = nn.Linear(hidden_size, num_classes)
def forward(self, x):
x = self.h1(x)
x = Fuc.tanh(x)
x = self.h2(x)
x = Fuc.relu(x)
x = self.h3(x)
return x
model = Model(input_size=input_size, hidden_size=hidden_size, num_classes=num_classes)
opt = optim.Adam(params=model.parameters(), lr=learning_rate)
for epoch in range(1000):
out = model(data)
print('target', target)
print('pred', out)
loss = torch.nn.MSELoss(out, target)
print('loss', loss)
model.zero_grad()
loss.backward()
opt.step()
my input is of shape (numberOfSample X 2) and the output is of the form [[2],[3],...], namely a list of lists where each inner list contains one number.
OK, so now I train the neural network and get this error:
...
[-0.1753],
[-0.1753],
[-0.1753]], grad_fn=<AddmmBackward>)
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1340: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-26-38e8026bfe54> in <module>()
68 print('target', target)
69 print('pred', out)
---> 70 loss = torch.nn.MSELoss(out, target)
71 print('loss', loss)
72
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py in legacy_get_string(size_average, reduce, emit_warning)
34 reduce = True
35
---> 36 if size_average and reduce:
37 ret = 'mean'
38 elif reduce:
RuntimeError: bool value of Tensor with more than one value is ambiguous
| The issue originates from calling torch.nn.MSELoss(out, target) which is the constructor for the MSELoss which accepts size_average and reduce as the first and second optional positional arguments.
loss = torch.nn.MSELoss(out, target)
Instead, you need to create an MSELoss object first and pass the out and the target to that object.
criterion = torch.nn.MSELoss()
for epoch in range(1000):
out = model(data)
loss = criterion(out, target)
loss.backward()
| https://stackoverflow.com/questions/61334483/ |
how to set values for layers in pytorch nn.module? | I have a model that I am trying to get working. I am working through the errors, but now I think it has come down to the values in my layers. I get this error:
RuntimeError: Given groups=1, weight of size 24 1 3 3, expected input[512, 50, 50, 3] to have 1 channels,
but got 50 channels instead
My parameters are:
LR = 5e-2
N_EPOCHS = 30
BATCH_SIZE = 512
DROPOUT = 0.5
My image information is:
depth=24
channels=3
original height = 1600
original width = 1200
resized to 50x50
This is the size of my data:
Train shape (743, 50, 50, 3) (743, 7)
Test shape (186, 50, 50, 3) (186, 7)
Train pixels 0 255 188.12228712427097 61.49539262385051
Test pixels 0 255 189.35559211469533 60.688278787628775
I looked here to try to understand what values each layer is expecting, but when I put in what it says here, https://towardsdatascience.com/pytorch-layer-dimensions-what-sizes-should-they-be-and-why-4265a41e01fd, it gives me errors about wrong channels and kernels.
I found torch_summary to give me more understanding about the outputs, but it only poses more questions.
This is my torch_summary code:
from torchvision import models
from torchsummary import summary
import torch
import torch.nn as nn
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(1,24, kernel_size=5) # output (n_examples, 16, 26, 26)
self.convnorm1 = nn.BatchNorm2d(24) # channels from prev layer
self.pool1 = nn.MaxPool2d((2, 2)) # output (n_examples, 16, 13, 13)
self.conv2 = nn.Conv2d(24,48,kernel_size=5) # output (n_examples, 32, 11, 11)
self.convnorm2 = nn.BatchNorm2d(48) # 2*channels?
self.pool2 = nn.AvgPool2d((2, 2)) # output (n_examples, 32, 5, 5)
self.linear1 = nn.Linear(400,120) # input will be flattened to (n_examples, 32 * 5 * 5)
self.linear1_bn = nn.BatchNorm1d(400) # features?
self.drop = nn.Dropout(DROPOUT)
self.linear2 = nn.Linear(400, 10)
self.act = torch.relu
def forward(self, x):
x = self.pool1(self.convnorm1(self.act(self.conv1(x))))
x = self.pool2(self.convnorm2(self.act(self.conv2(x))))
x = self.drop(self.linear1_bn(self.act(self.linear1(x.view(len(x), -1)))))
return self.linear2(x)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model=CNN().to(device)
summary(model, (3, 50, 50))
This is what it gave me:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 24 1 5 5, expected input[2, 3, 50, 50] to have 1 channels, but got 3 channels instead
When I run my whole code and unsqueeze_(0) my data, like so: x_train = torch.from_numpy(x_train).unsqueeze_(0), I get this error:
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 24 1 5 5, but got 5-dimensional input of size [1, 743, 50, 50, 3] instead
I don't know how to fill in the proper values in the layers. Will someone please help me find the correct values and understand how to work this out? I do know that the output of one layer should be the input of another layer. Nothing is matching up with what I thought I knew.
Thanks in advance!!
| It seems you have the wrong order of the input x tensor axes.
As you can see in the doc Conv2d input must be (N, C, H, W)
N is a batch size, C denotes a number of channels, H is a height of input planes in pixels, and W is width in pixels.
So, To make it right use torch.permute to swap axis in forward pass.
...
def forward(self, x):
x = x.permute(0, 3, 1, 2)
...
...
return self.linear2(x)
...
Example of permute:
t = torch.rand(512, 50, 50, 3)
t.size()
torch.Size([512, 50, 50, 3])
t = t.permute(0, 3, 1, 2)
t.size()
torch.Size([512, 3, 50, 50])
| https://stackoverflow.com/questions/61335217/ |
Pytorch: Increase indice of tensor to specific size | I have a tensor of [20, 3, 32, 32]
I want to increase the 3 to 64 ([20,64,32,32], where 20 is the batch size). I tried the repeat function. But that only gives me 63 or 66, because with repeat you can only tile (multiply) indices.
How can I solve this?
Thanks!
| You can repeat using torch.repeat and then filter with indices.
t = torch.rand(20,3,32,32)
t.shape
torch.Size([20, 3, 32, 32])
t = t.repeat(1,22,1,1)[:,:-2,:,:]
t.shape
torch.Size([20, 64, 32, 32])
| https://stackoverflow.com/questions/61339978/ |
How to know which gcc version to choose when compiling pytorch from source? | I am working on installing PyTorch from source but am unsure about the specific dependency versions to use for the version of PyTorch I want to install.
In particular, I noticed performance variations depending on the gcc version I use to compile PyTorch. Which compiler should I be using to get the best PyTorch performance?
Tensorflow doc provides such useful information. They call it "Tested build configurations":
https://www.tensorflow.org/install/source#tested_build_configurations.
| The README.md has instructions to build from source.
If you are installing from source, you will need a C++14 compiler. Also, we highly recommend installing an Anaconda environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.
Once you have Anaconda installed, here are the instructions.
If you want to compile with CUDA support, install
NVIDIA CUDA 9 or above
NVIDIA cuDNN v7 or above
Compiler compatible with CUDA
If you want to disable CUDA support, export environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.
If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), Instructions to install PyTorch for Jetson Nano are available here
You can found Latest, official Compiler requirements here
| https://stackoverflow.com/questions/61340939/ |
Train Accuracy increases, Train loss is stable, Validation loss Increases, Validation Accuracy is low and increases | My neural network trainign in pytorch is getting very wierd.
I am training a known dataset that came splitted into train and validation.
I'm shuffeling the data during training and do data augmentation on the fly.
I have those results:
Train accuracy start at 80% and increases
Train loss decreases and stays stable
Validation accuracy start at 30% but increases slowly
Validation loss increases
I have the following graphs to show:
How can you explain that the validation loss increases and the validation accuracy increases?
How can there be such a big difference in accuracy between the validation and training sets? 90% and 40%?
Update:
I balanced the data set.
It is binary classification. It now has 1700 examples from class 1 and 1200 examples from class 2. In total: 600 for validation and 2300 for training.
I still see similar behavior:
Can it be because I froze the weights in part of the network?
Can it be because of the hyperparameters, like lr?
| I found the solution:
I had different data augmentation for training set and validation set. Matching them also increased the validation accuracy!
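For illustration, "matching" usually means the validation pipeline keeps only the deterministic steps and shares the same normalization as training; a sketch (the normalization stats are placeholder ImageNet values):
import torchvision.transforms as T

mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
train_tf = T.Compose([T.RandomResizedCrop(224), T.RandomHorizontalFlip(),
                      T.ToTensor(), T.Normalize(mean, std)])
val_tf = T.Compose([T.Resize(256), T.CenterCrop(224),
                    T.ToTensor(), T.Normalize(mean, std)])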
| https://stackoverflow.com/questions/61342448/ |
Changing thresholds in the Sigmoid Activation in Neural Networks | Hi I am new to machine learning and I have a query about changing thresholds for sigmoid function.
I know the Sigmoid function's value is in the range [0;1], 0.5 is taken as a threshold, if h(theta) < 0.5 we assume that its value is 0, if h(theta) >= 0.5 then it's 1.
Thresholds are used only on the output layer of the network and it's only when classifying. So, if you're trying to classify between 3 classes can you give different thresholds for each class (0.2,0.4,0.4 - for each class)? Or can you specify a different threshold overall, like 0.8? I am unsure how to define this in the code below. Any guidance is appreciated.
# Hyper Parameters
input_size = 14
hidden_size = 40
hidden_size2 = 30
num_classes = 3
num_epochs = 600
batch_size = 34
learning_rate = 0.01
class Net(torch.nn.Module):
def __init__(self, n_input, n_hidden, n_hidden2, n_output):
super(Net, self).__init__()
# define linear hidden layer output
self.hidden = torch.nn.Linear(n_input, n_hidden)
self.hidden2 = torch.nn.Linear(n_hidden, n_hidden2)
# define linear output layer output
self.out = torch.nn.Linear(n_hidden, n_output)
def forward(self, x):
"""
In the forward function we define the process of performing
forward pass, that is to accept a Variable of input
data, x, and return a Variable of output data, y_pred.
"""
# get hidden layer input
h_input1 = self.hidden(x)
# define activation function for hidden layer
h_output1 = torch.sigmoid(h_input1)
# get hidden layer input
h_input2 = self.hidden2(h_output1)
# define activation function for hidden layer
h_output2 = torch.sigmoid(h_input2)
# get output layer output
out = self.out(h_output2)
return out
net = Net(input_size, hidden_size, hidden_size, num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
all_losses = []
for epoch in range(num_epochs):
total = 0
correct = 0
total_loss = 0
for step, (batch_x, batch_y) in enumerate(train_loader):
X = batch_x
Y = batch_y.long()
# Forward + Backward + Optimize
optimizer.zero_grad() # zero the gradient buffer
outputs = net(X)
loss = criterion(outputs, Y)
all_losses.append(loss.item())
loss.backward()
optimizer.step()
if epoch % 50 == 0:
_, predicted = torch.max(outputs, 1)
# calculate and print accuracy
total = total + predicted.size(0)
correct = correct + sum(predicted.data.numpy() == Y.data.numpy())
total_loss = total_loss + loss
if epoch % 50 == 0:
print(
"Epoch [%d/%d], Loss: %.4f, Accuracy: %.2f %%"
% (epoch + 1, num_epochs, total_loss, 100 * correct / total)
)
train_input = train_data.iloc[:, :input_size]
train_target = train_data.iloc[:, input_size]
inputs = torch.Tensor(train_input.values).float()
targets = torch.Tensor(train_target.values - 1).long()
outputs = net(inputs)
_, predicted = torch.max(outputs, 1)
| You can use any threshold you find suitable.
Neural networks are known to be often over-confident (e.g. applying 0.95 to one of 50 classes), so it may be beneficial to use different threshold in your case.
Your training is fine, but you should change predictions (last two lines) and use torch.nn.softmax like this:
outputs = net(inputs)
probabilities = torch.nn.functional.softmax(outputs, 1)
As mentioned in other answer you will get each row with probabilities summing to 1 (previously you had unnormalized probabilities a.k.a. logits).
Now, just use your desired threshold on those probabilities:
predictions = probabilities > 0.8
Please notice you may get only zeros in some cases (e.g. [0.2, 0.3, 0.5]).
This would mean neural network isn't confident enough according to your standards and would probably drop number of incorrect positive predictions (abstract, but say you are predicting whether a patient doesn't have one of mutually exclusive 3 diseases. It's better to say so only if you are really sure).
Different thresholds for each class
This could be done as well like this:
thresholds = torch.tensor([0.1, 0.1, 0.8]).unsqueeze(0)
predictions = probabilities > thresholds
Final comments
Please notice in case of softmax only one class should be the answer (as pointed out in another answer) and this approach (and mention of sigmoid) may indicate you are after multilabel classification.
If you want to train your network so it can simultaneously predict classes you should use sigmoid and change your loss to torch.nn.BCEWithLogitsLoss.
| https://stackoverflow.com/questions/61342716/ |
Why is PIL used so often with Pytorch? | I noticed that a lot of dataloaders use PIL to load and transform images, e.g. the dataset builders in torchvision.datasets.folder.
My question is: why use PIL? You would need to do an np.asarray operation before turning it into a tensor. OpenCV seems to load it directly as a numpy array, and is faster too.
One reason I can think of is because PIL has a rich transforms library, but I feel like several of those transforms can be quickly implemented.
| There is a discussion about adding OpenCV as one of possible backends in torchvision PR.
In summary, some reasons provided:
OpenCV2 loads images in BGR format, which would require a wrapper class to handle changing to RGB internally, or would make the format of loaded images backend-dependent
This in turn would lead to code duplication in functional transforms in torchvision many of which use PIL operations (as transformations to support multiple backends would be pretty convoluted)
OpenCV loads images as np.array, it's not really easier to do transformations on arrays
Different representation might lead to hard to catch by users bugs
PyTorch's modelzoo is dependent on RGB format as well and they would like to have it easily supported
Doesn't play well with Python's multiprocessing (but this is a non-issue now, as it was only a problem for Python 2)
To be honest I don't see much movement towards this idea as there exists albumentations which uses OpenCV and can be integrated with PyTorch rather smoothly.
A little off-topic, but one can choose faster backend via torchvision.set_image_backend to Intel's accimage. Also Pillow-SIMD can be used as a drop-in replacement for PIL (it is supposedly faster and recommended by fastai project).
When it comes to performance benchmarks they do not seem too reliable and it's not that easy to tell AFAIK.
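To illustrate the BGR point above, a dataset that loads images with OpenCV typically needs an explicit colour conversion before feeding an RGB-trained model; a rough sketch (paths are placeholders):
import cv2
import torch

class CvDataset(torch.utils.data.Dataset):
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = cv2.imread(self.paths[idx])            # BGR, uint8, HWC
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # match RGB-trained models
        return torch.from_numpy(img).permute(2, 0, 1).float() / 255.0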
| https://stackoverflow.com/questions/61346009/ |
What is the purpose of the dim parameter in torch.nn.Softmax | I don't understand to what does the dim parameter applies in torch.nn.Softmax. There is a warning that tells me to use it and I set it to 1, but I don't understand what I am setting. Where is it being used in the formula:
Softmax(xi)=exp(xi)/∑jexp(xj)
There is no dim here, so to what does it apply?
| The Pytorch documentation on torch.nn.Softmax states:
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
For example, if you have a matrix with two dimensions, you can choose whether you want to apply the softmax to the rows or the columns:
import torch
import numpy as np
softmax0 = torch.nn.Softmax(dim=0) # Applies along columns
softmax1 = torch.nn.Softmax(dim=1) # Applies along rows
v = np.array([[1,2,3],
[4,5,6]])
v = torch.from_numpy(v).float()
softmax0(v)
# Returns
#[[0.0474, 0.0474, 0.0474],
# [0.9526, 0.9526, 0.9526]])
softmax1(v)
# Returns
#[[0.0900, 0.2447, 0.6652],
# [0.0900, 0.2447, 0.6652]]
Note how for softmax0 the columns add to 1, and for softmax1 the rows add to 1.
| https://stackoverflow.com/questions/61350445/ |
While converting a PIL image into a tensor why the pixels are changing? | transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
Before applying the transformation
After applying the transformation
Q.1 Why the pixel values are changed?
Q.2 How to correct this?
| I was able to solve this problem by normalizing the input data before transforming it.
The problem was that ToPILImage() was discarding all the values which were greater than 1 hence the bright pixels became dark.
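A minimal sketch of that fix, assuming the source tensor holds 0-255 values:
from torchvision import transforms

x = x.float() / 255.0   # bring pixel values into [0, 1] before ToPILImage
transform = transforms.Compose([transforms.ToPILImage(), transforms.ToTensor()])
out = transform(x)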
| https://stackoverflow.com/questions/61351130/ |
Getting error when using print() or summary() in pytorch to see the layers and weight dimensions in a Pytorch model | When using print on an existing model,
it doesn't print the model. Instead it shows:
<function resnext101_32x8d at 0x00000178CC26BA68>
>>> import torch
>>> import torchvision.models as models
>>> m1 = models.resnext101_32x8d
>>> print(m1)
<function resnext101_32x8d at 0x00000178CC26BA68>
>>>
When using summary, it gives the following error:
AttributeError: 'function' object has no attribute 'apply'
>>> import torch
>>> import torchvision.models as models
>>> from torchsummary import summary
>>> m1 = models.resnext101_32x8d
>>>
>>> summary(m1, (3, 224, 224))
Traceback(most recent call last):
File "<stdin>", line 1, in <module>
File torchsummary.py, line 68, in summary
model.apply(register_hook)
AttributeError: 'function' object has no attribute 'apply'
How to fix these issues related to print and summary? Any other ways to easily see all pytorch layers and model topology?
| models.resnext101_32x8d is the class constructor, you need to call the constructor, just add parentheses at the end.
m1 = models.resnext101_32x8d()
print(m1)
| https://stackoverflow.com/questions/61352375/ |
Pass pretrained weights in CNN Pytorch to a CNN in Tensorflow | I have trained this network in Pytorch for 224x224 size images and 4 classes.
class CustomConvNet(nn.Module):
def __init__(self, num_classes):
super(CustomConvNet, self).__init__()
self.layer1 = self.conv_module(3, 64)
self.layer2 = self.conv_module(64, 128)
self.layer3 = self.conv_module(128, 256)
self.layer4 = self.conv_module(256, 256)
self.layer5 = self.conv_module(256, 512)
self.gap = self.global_avg_pool(512, num_classes)
#self.linear = nn.Linear(512, num_classes)
#self.relu = nn.ReLU()
#self.softmax = nn.Softmax()
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = self.layer5(out)
out = self.gap(out)
out = out.view(-1, 4)
#out = self.linear(out)
return out
def conv_module(self, in_num, out_num):
return nn.Sequential(
nn.Conv2d(in_num, out_num, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2), stride=None))
def global_avg_pool(self, in_num, out_num):
return nn.Sequential(
nn.Conv2d(in_num, out_num, kernel_size=3, stride=1, padding=1),
#nn.BatchNorm2d(out_num),
#nn.LeakyReLU(),
nn.ReLU(),
nn.Softmax(),
nn.AdaptiveAvgPool2d((1, 1)))
I got the weights from the first Conv2D and its size is torch.Size([64, 3, 3, 3])
I have saved it as:
weightsCNN = net.layer1[0].weight.data
np.save('CNNweights.npy', weightsCNN)
This is my model I built in Tensorflow. I would like to pass those weights I saved from the Pytorch model into this Tensorflow CNN.
model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(512, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(512, (3, 3), activation='relu'))
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Dense(4, activation='softmax'))
print(model.summary())
adam = optimizers.Adam(learning_rate=0.0001, amsgrad=False)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
nb_train_samples = 6596
nb_validation_samples = 1290
epochs = 10
batch_size = 256
history = model.fit_generator(
train_generator,
steps_per_epoch=np.ceil(nb_train_samples/batch_size),
epochs=epochs,
validation_data=validation_generator,
validation_steps=np.ceil(nb_validation_samples / batch_size)
)
How should I actually do that? What shape of weights does Tensorflow require? Thanks!
| You can check shapes of all weights of all keras layers quite simply:
for layer in model.layers:
print([tensor.shape for tensor in layer.get_weights()])
This would give you shapes of all weights (including biases), so you can prepare loaded numpy weights accordingly.
To set them, do something similar:
for layer, torch_weight in zip(model.layers, torch_weights):
layer.set_weights(torch_weight)
where torch_weights should be a list containing lists of np.array which you would have to load.
Usually each element of torch_weights would contain one np.array for weights and one for bias.
Remember shapes received from print have to be exactly the same as the ones you put in set_weights.
See documentation for more info.
BTW. Exact shapes are dependent on layers and operations performed by model, you may have to transpose some arrays sometimes to "fit them in".
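One concrete example of such a transpose: PyTorch stores Conv2d weights as (out_channels, in_channels, kH, kW), while Keras Conv2D expects (kH, kW, in_channels, out_channels). A sketch for the saved first-layer weights (the zero bias is a placeholder; save and reuse the real PyTorch bias if you need it):
import numpy as np

w = np.load('CNNweights.npy')                      # (64, 3, 3, 3) from the question
keras_w = np.transpose(w, (2, 3, 1, 0))            # -> (3, 3, 3, 64)
keras_b = np.zeros(w.shape[0], dtype=np.float32)   # placeholder bias
model.layers[0].set_weights([keras_w, keras_b])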
| https://stackoverflow.com/questions/61353171/ |
optimizer got an empty parameter list (skorch) | So, I am used to use PyTorch and now decided to give Skorch a shot.
Here they define the network as
class ClassifierModule(nn.Module):
def __init__(
self,
num_units=10,
nonlin=F.relu,
dropout=0.5,
):
super(ClassifierModule, self).__init__()
self.num_units = num_units
self.nonlin = nonlin
self.dropout = dropout
self.dense0 = nn.Linear(20, num_units)
self.nonlin = nonlin
self.dropout = nn.Dropout(dropout)
self.dense1 = nn.Linear(num_units, 10)
self.output = nn.Linear(10, 2)
def forward(self, X, **kwargs):
X = self.nonlin(self.dense0(X))
X = self.dropout(X)
X = F.relu(self.dense1(X))
X = F.softmax(self.output(X), dim=-1)
return X
I prefer inputting lists of neurons in each layer i.e num_units=[30,15,5,2] would have 2 hidden layers with 15 and 5 neurons. Furthermore we have 30 features and 2 classes, thus re-writing it to something like this
class Net(nn.Module):
def __init__(
self,
num_units=[30,15,5,2],
nonlin=[F.relu,F.relu,F.relu],
dropout=[0.5,0.5,0.5],
):
super(Net, self).__init__()
self.layer_units = num_units
self.nonlin = nonlin #Activation function
self.dropout = dropout #Drop-out rates in each layer
self.layers = [nn.Linear(i,p) for i,p in zip(num_units,num_units[1:])] #Dense layers
def forward(self, X, **kwargs):
print("Forwards")
for layer,func,drop in zip(self.layers[:-1],self.nonlin,self.dropout):
print(layer,func,drop)
X=drop(func(layer(X)))
X = F.softmax(X, dim=-1)
return X
should do the trick. The problem is that when calling
net = NeuralNetClassifier(Net,max_epochs=20,lr=0.1,device="cuda")
net.fit(X,y)
I get the error "ValueError: optimizer got an empty parameter list". I have narrowed it down to the removal of self.output = nn.Linear(10, 2) simply makes the net not enter forward i.e it seems like output is some kind of "trigger" variable. Is that really the case the network need a variable called output (being a layer) at the end, and that we are not free to define the variable-names ourself ?
| Pytorch will look for subclasses of nn.Module, so changing
self.layers = [nn.Linear(i,p) for i,p in zip(num_units,num_units[1:])]
to
self.layers = nn.ModuleList([nn.Linear(i,p) for i,p in zip(num_units,num_units[1:])])
should work fine, since nn.ModuleList registers the layers as sub-modules, so their parameters show up in net.parameters() and the optimizer no longer receives an empty parameter list
| https://stackoverflow.com/questions/61354265/ |
Convert a list of tensors to tensors of tensors pytorch | I have this code:
import torch
list_of_tensors = [ torch.randn(3), torch.randn(3), torch.randn(3)]
tensor_of_tensors = torch.tensor(list_of_tensors)
I am getting the error:
ValueError: only one element tensors can be converted to Python scalars
How can I convert the list of tensors to a tensor of tensors in pytorch?
| Here is a solution:
tensor_of_tensors = torch.stack((list_of_tensors))
print(tensor_of_tensors) #shape (3,3)
| https://stackoverflow.com/questions/61359162/ |
PyTorch augmentation | I'm new to machine learning and PyTorch. I'm using the imgaug library for image augmentation (https://github.com/aleju/imgaug)
I have this code:
class ImgAugTransform:
def __init__(self):
self.aug = seq = iaa.Sequential(
[
# Apply the following augmenters to most images
iaa.Fliplr(0.5), # horizontally flip 50% of all images
iaa.Flipud(0.2), # vertically flip 20% of all images
random_aug_use(iaa.CropAndPad( # crop images by -5% to 10% of their height/width
percent=(-0.1, 0.2),
pad_mode=ia.ALL,
pad_cval=(0.,255)
)),
random_aug_use(iaa.Affine(
scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}, # scale images to 80-120% of their size, individually per axis
translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)}, # translate by -20 to +20 percent (per axis)
rotate=(-45, 45), # rotate by -45 to +45 degrees
shear=(-16, 16), # shear by -16 to +16 degrees
order=[0, 1], # use nearest neighbour or bilinear interpolation (fast)
cval=(0, 255), # if mode is constant, use a cval between 0 and 255
mode=ia.ALL # use any of scikit-image's warping modes (see 2nd image from the top for examples)
))
],
random_order=True)
def __call__(self, img):
img = np.array(img)
return self.aug.augment_image(img)
train_transforms = ImgAugTransform()
train_dataset = torchvision.datasets.ImageFolder(train_dir, train_transforms)
train_dataloader = torch.utils.data.DataLoader(
train_dataset, batch_size=batch_size, shuffle=True, num_workers=batch_size)
So now i cant do this:
X_batch, y_batch = next(iter(train_dataloader))
I get error:
ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future releases.
| You should make your augmented numpy arrays contiguous again.
try modifying your augmenter code to:
def __call__(self, img):
img = np.array(img)
return np.ascontiguousarray(self.aug.augment_image(img))
| https://stackoverflow.com/questions/61361461/ |
Fit a Gaussian curve with a neural network using Pytorch | Suppose the following model :
import torch.nn as nn
class PGN(nn.Module):
def __init__(self, input_size):
super(PGN, self).__init__()
self.linear = nn.Sequential(
nn.Linear(in_features=input_size, out_features=128),
nn.ReLU(),
nn.Linear(in_features=128, out_features=1)
)
def forward(self, x):
return self.linear(x)
I figure I have to modify the model to fit a 2-dimensional curve.
Is there a way to fit a Gaussian curve with mu=0 and sigma=0 using Pytorch? If so, can you show me?
| A neural network can approximate an arbitrary function of any number of parameters to a space of any dimension.
To fit a 2 dimensional curve your network should be fed with vectors of size 2, that is a vector of x and y coordinates. The output is a single value of size 1.
For training you must generate ground truth data, that is a mapping between coordinates (x and y) and the value (z). The loss function should compare this ground truth value with the estimate of your network.
If it is just a tutorial to learn PyTorch and not a real application, you can define a function that, for a given x and y, outputs the Gaussian value according to your parameters.
Then during training you randomly choose an x and y, feed them to the network, and do backprop against the true value.
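A minimal sketch of that recipe, reusing the PGN model from the question with input_size=2 (the target here is an unnormalized 2-D Gaussian with mu=0 and sigma=1, as placeholders for your own parameters):
import torch

def gaussian(xy):                       # ground-truth z = f(x, y)
    return torch.exp(-(xy ** 2).sum(dim=1, keepdim=True) / 2)

model = PGN(input_size=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

for step in range(5000):
    xy = torch.rand(64, 2) * 6 - 3      # random points in [-3, 3] x [-3, 3]
    loss = criterion(model(xy), gaussian(xy))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()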
| https://stackoverflow.com/questions/61364250/ |
'Sequential' object has no attribute 'classifier' | How to find in_features of a Pytorch model? model.classifier.in_features is working on densenet121 but not on vgg18, is there any function which may work on all torchvision models?
| classifier is a Sequential module in the VGG's implementation, so, if you want to access the in_features passed to the classifier, you have to check the in_features of the first layer.
models.vgg19().classifier[0].in_features
It seems like different implementations follow different patterns, so the best way to determine in_features for all the models would be to check the source code directly.
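There is no single attribute that works for every torchvision model, but a small helper that walks the classification head and returns the first Linear layer's in_features covers most of them (a heuristic sketch, not guaranteed for heads without a Linear layer):
import torch.nn as nn

def head_in_features(model):
    head = getattr(model, "classifier", None) or getattr(model, "fc", None)
    if head is None:
        raise ValueError("model has neither a 'classifier' nor an 'fc' head")
    if isinstance(head, nn.Linear):
        return head.in_features
    for m in head.modules():
        if isinstance(m, nn.Linear):
            return m.in_features
    raise ValueError("no Linear layer found in the classification head")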
| https://stackoverflow.com/questions/61367204/ |
Display image in a PIL format from torch.Tensor | I’m quite new to Pytorch. I was wondering how I could convert my tensor of size torch.Size([1, 3, 224, 224]) to display in an image format on a Jupyter notebook. A PIL format or a CV2 format should be fine.
I tried using transforms.ToPILImage(x) but it resulted in a different format like this: ToPILImage(mode=ToPILImage(mode=tensor([[[[1.3034e-16, 1.3034e-16, 1.3034e-16, ..., 1.4475e-16,.
Maybe I’m doing something wrong :no_mouth:
| Since your image is normalized, you need to unnormalize it. You have to do the reverse operations that you did during normalization. One way is
class UnNormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor):
"""
Args:
tensor (Tensor): Tensor image of size (C, H, W) to be normalized.
Returns:
Tensor: Normalized image.
"""
for t, m, s in zip(tensor, self.mean, self.std):
t.mul_(s).add_(m)
# The normalize code -> t.sub_(m).div_(s)
return tensor
To use this, you'll need the mean and standard deviation (which you used to normalize the image). Then,
unorm = UnNormalize(mean = [0.35675976, 0.37380189, 0.3764753], std = [0.32064945, 0.32098866, 0.32325324])
image = unorm(normalized_image)
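To actually display it in a notebook, the unnormalized tensor can then be converted to PIL (a sketch; squeeze the batch dimension first if your tensor is still (1, C, H, W)):
import torchvision.transforms as T

pil_image = T.ToPILImage()(image.squeeze(0).cpu().clamp(0, 1))
pil_image   # evaluating this in a Jupyter cell renders the image inline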
| https://stackoverflow.com/questions/61368632/ |
Using ModuleDict, I have: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | I'm trying in my __init__ function:
self.downscale_time_conv = np.empty(8, dtype=object)
for i in range(8):
self.downscale_time_conv[i] = torch.nn.ModuleDict({})
But in my forward, I have:
down_out = False
for i in range(8):
if not down_out:
down_out = self.downscale_time_conv[i][side](inputs)
else:
down_out += self.downscale_time_conv[i][side](inputs)
and I get:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
| self.downscale_time_conv = torch.nn.ModuleList()
for i in range(8):
self.downscale_time_conv.append(torch.nn.ModuleDict({}))
this solved it. Apparently I needed to use a ModuleList so that the sub-modules are properly registered and get moved to the GPU together with the rest of the model; modules stored in a plain numpy array are invisible to PyTorch, so their weights stay on the CPU.
| https://stackoverflow.com/questions/61370554/ |
How to implement time-distributed dense (TDD) layer in PyTorch | In some deep learning models which analyse temporal data (e.g. audio, or video), we use a "time-distributed dense" (TDD) layer. What this creates is a fully-connected (dense) layer which is applied separately to every time-step.
In Keras this can be done using the TimeDistributed wrapper, which is actually slightly more general. In PyTorch it's been an open feature request for a couple of years.
How can we implement time-distributed dense manually in PyTorch?
| Specifically for time-distributed dense (and not time-distributed anything else), we can hack it by using a convolutional layer.
Look at the diagram you've shown of the TDD layer. We can re-imagine it as a convolutional layer, where the convolutional kernel has a "width" (in time) of exactly 1, and a "height" that matches the full height of the tensor. If we do this, while also making sure that our kernel is not allowed to move beyond the edge of the tensor, it should work:
self.tdd = nn.Conv2d(1, num_of_output_channels, (num_of_input_channels, 1))
You may need to do some rearrangement of tensor axes. The "input channels" for this line of code are in fact coming from the "freq" axis (the "image's y axis") of your tensor, and the "output channels" will indeed be arranged on the "channel" axis. (The "y axis" of the output will be a singleton dimension of height 1.)
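A quick shape check of this trick (the sizes are arbitrary placeholders):
import torch
import torch.nn as nn

x = torch.randn(8, 40, 100)          # (batch, features, time)
tdd = nn.Conv2d(1, 64, (40, 1))      # kernel spans the whole feature axis
y = tdd(x.unsqueeze(1))              # -> (8, 64, 1, 100)
y = y.squeeze(2)                     # -> (8, 64, 100): 64 outputs per time step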
| https://stackoverflow.com/questions/61372645/ |
With PyTorch, how is my Conv1d dimension reducing when I have padding? | My conv module is:
return torch.nn.Sequential(
torch.nn.Conv1d(
in_channels=in_channels,
out_channels=in_channels,
kernel_size=2,
stride=1,
dilation=1,
padding=1
),
torch.nn.ReLU(),
torch.nn.Conv1d(
in_channels=in_channels,
out_channels=in_channels,
kernel_size=2,
stride=1,
dilation=2,
padding=1
),
torch.nn.ReLU(),
torch.nn.Conv1d(
in_channels=in_channels,
out_channels=in_channels,
kernel_size=2,
stride=1,
dilation=4,
padding=1
),
torch.nn.ReLU()
)
And in forward, I have:
down_out = self.downscale_time_conv(inputs)
inputs has a .size of torch.Size([8, 161, 24]). I'd expect down_out to have the same size, but instead it has: torch.Size([8, 161, 23])
Where did that last element go?
| The answer can be found on Pytorch documentation online (here). For every operation the output shape is expressed with respect to the input parameters:
For each conv1D:
- L1 = 25 → int((24 + 2*1 - 1*(2 - 1) - 1) / 1 + 1)
- L2 = 25 → int((25 + 2*1 - 2*(2 - 1) - 1) / 1 + 1)
- L3 = 23 → int((25 + 2*1 - 4*(2 - 1) - 1) / 1 + 1)
Do not forget that Lin is the previous size.
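Written out as a small helper (the same formula as in the docs), so you can check each layer:
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# conv1d_out_len(24, 2, padding=1, dilation=1) -> 25
# conv1d_out_len(25, 2, padding=1, dilation=2) -> 25
# conv1d_out_len(25, 2, padding=1, dilation=4) -> 23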
| https://stackoverflow.com/questions/61375457/ |
Can I program a GPU when it is the system's only graphics card? | Can I run PyTorch or Tensorflow on Windows on a GPU that is also acting as the system's graphics card (e.g. there is no graphics built-in to a Ryzen 3600 CPU)? If so, is there any downside, or would I be better off getting a CPU with built-in graphics?
| Yes, it is possible to run e.g. TensorFlow on the GPU while also using the same GPU for your system. You do not need a 2nd graphics card or integrated GPU.
Keep in mind, your graphics card will share memory and processing power between all your programs. GPU-intensive work might slow down the fps on your system and the other way around. Also keep an eye on the memory usage.
I had tensorflow_gpu with a multi-layer CNN running while playing AAA games (e.g. GTA V) using a Ryzen 3600. It worked on my super old NVIDIA GTX 260 (2GB memory) but it crashed quite often because of the limited memory. Upgraded to a GTX 1080 with 8GB and it worked quite well. Needless to say, you can always fill up your GPU memory and crash, no matter the size.
| https://stackoverflow.com/questions/61387332/ |
Loading image data from pandas to pytorch | I am completely new to pytorch and have previously worked on keras and fastai.
Currently trying an image regression task and the challenge is I have to load the data from pandas dataframe.
Data frame structure:
ID Path Score
fig1 /folder/fig1.jpg 2
fig2 /folder/fig2.jpg 3
.....
I have previously worked on loading images to pytorch directly from folders because it was a simple classification task but kind of stuck now.
I looked into the pytorch forums but didn't quite understand how to implement it.
Any help would be appreciated.
| Datasets
You have to use torch.utils.data.Dataset structure to define it.
Here is how you can do it in plain pytorch (I'm using pillow to load the images and torchvision to transform them to torch.Tensor objects):
import torch
import torchvision
from PIL import Image
class MyDataset(torch.utils.data.Dataset):
def __init__(self, dataframe):
self.dataframe = dataframe
def __len__(self):
return len(self.dataframe)
def __getitem__(self, index):
row = self.dataframe.iloc[index]
return (
torchvision.transforms.functional.to_tensor(Image.open(row["Path"])),
row["Score"],
)
dataset = MyDataset(dataframe)
Alternatively, you can use torchdata (disclaimer: shameless self-promotion as I'm the author...) which allows you to decouple Path and Scores like this:
import torchvision
from PIL import Image
import torchdata
class ImageDataset(torchdata.datasets.FilesDataset):
def __getitem__(self, index):
return Image.open(self.files[index])
class Labels(torchdata.Dataset):
def __init__(self, scores):
super().__init__()
self.scores = scores
def __len__(self):
return len(self.scores)
def __getitem__(self, index):
return self.scores[index]
# to_numpy for convenience
# I assume all your images are in /folder and have *.jpg extension
dataset = ImageDataset.from_folder("/folder", regex="*.jpg").map(
torchvision.transforms.ToTensor()
) | Labels(dataframe["Score"].to_numpy())
(or you could implement it just like in regular pytorch but inheriting from torchdata.Dataset and calling super().__init__() in the constructor).
torchdata allows you to cache your images easily or apply some other transformations via .map as shown there, check github repository for more info or ask in the comment.
DataLoader
Either way you choose you should wrap your dataset in torch.utils.data.DataLoader to create batches and iterate over them, like this:
dataloader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
for images, scores in dataloader:
# Rest of your code to train neural network or smth
...
Do with those images and scores what you want in the loop.
| https://stackoverflow.com/questions/61391919/ |
PyTorch: Speed up data loading | I am using densenet121 to do cat/dog detection from a Kaggle dataset. I enabled cuda and it appears that training is very fast. However, the data loading (or perhaps processing) appears to be very slow. Are there some ways to speed it up? I tried to play with batch size, but that didn't provide much help. I also changed num_workers from 0 to some positive numbers. Going from 0 to 2 reduces loading time by perhaps 1/3, but increasing it further has no additional effect. Are there some other ways I can speed loading things up?
This is my rough code (I am focused on learning, so it's not very organized):
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
data_dir = 'Cat_Dog_data'
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5],
[0.5, 0.5, 0.5])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor()])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train',
transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64,
num_workers=16, shuffle=True,
pin_memory=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64,
num_workers=16)
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
model.cuda()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
import time
device = torch.device('cuda:0')
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
count = 0
total_start = time.time()
for images, labels in trainloader:
start = time.time()
images = images.cuda()
labels = labels.cuda()
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
elapsed = time.time() - start
if count % 20 == 0:
print("Optimized elapsed: ", elapsed, "count:", count)
print("Total elapsed ", time.time() - total_start)
total_start = time.time()
count += 1
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
for images, labels in testloader:
images = images.cuda()
labels = labels.cuda()
with torch.no_grad():
model.eval()
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
compare = top_class == labels.view(*top_class.shape)
accuracy += compare.type(torch.FloatTensor).mean()
model.train()
train_losses.append(running_loss / len(trainloader))
test_losses.append(test_loss / len(testloader))
print("Epoch: {}/{}.. ".format(e + 1, epochs),
"Training Loss: {:.3f}.. ".format(
running_loss / len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss / len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy / len(testloader)))
| torchvision 0.8.0 version or greater
Actually torchvision now supports batches and GPU when it comes to transformations (this is done on torch.Tensors instead of PIL images), so one should use it as an initial improvement.
See here for more info about this release. Also those act as torch.nn.Module, hence can be used inside a model, for example:
import torch
from torchvision import transforms as T

transforms = torch.nn.Sequential(
T.RandomCrop(224),
T.RandomHorizontalFlip(p=0.3),
T.ConvertImageDtype(torch.float),
T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
Furthermore, those operations could be JITed possibly improving the performance even further.
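For example, since the block above is an nn.Module, it can in principle be scripted and applied to whole batches; treat this as a sketch based on the 0.8.0 release notes rather than a guaranteed recipe for every transform:
import torch

scripted_transforms = torch.jit.script(transforms)
batch = torch.rand(32, 3, 256, 256)   # transforms now work on batched tensors
out = scripted_transforms(batch)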
torchvision < 0.8.0 (original answer)
Increasing batch_size won't help as torchvision performs transform on single image while it's loaded from your disk.
There are a couple of ways one could speed up data loading with increasing level of difficulty:
Improve image loading times
Load & normalize images and cache in RAM (or on disk)
Produce transformations and save them to disk
Apply non-cache'able transforms (rotations, flips, crops) in batched manner
Prefetching
1. Improve image loading
Easy improvements can be gained by installing Pillow-SIMD instead of original pillow. It is a drop-in replacement and could be faster (or so is claimed at least for Resize which you are using).
Alternatively, you could create your own data loading and processing with OpenCV, as some say it's faster, or check albumentations (though I can't tell you whether those will improve performance; it might be a lot of time spent for no gain except the learning experience).
2. Load & normalize images & cache
You can use Python's LRU Cache functionality to cache some outputs.
You can also use torchdata which acts almost exactly like PyTorch's torch.utils.data.Dataset but allows caching to disk or in RAM (or mixed modes) with simple cache() on torchdata.Dataset (see github repository, disclaimer: i'm the author).
Remember: you have to load and normalize images, cache and after that use RandomRotation, RandomResizedCrop and RandomHorizontalFlip (as those change each time they are run).
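A minimal sketch of the caching idea using functools.lru_cache (the paths, sizes and the split between the cached deterministic part and the uncached random part are assumptions, not your exact pipeline):
import functools
import torch
from PIL import Image
from torchvision import transforms

class CachedCatDog(torch.utils.data.Dataset):
    def __init__(self, paths, labels):
        self.paths, self.labels = paths, labels
        # random transforms must run on every access, so they are NOT cached
        self.augment = transforms.Compose([
            transforms.RandomCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
        ])

    @functools.lru_cache(maxsize=None)
    def _load(self, index):
        # deterministic part: decode + resize once, keep the PIL image in RAM
        return Image.open(self.paths[index]).convert("RGB").resize((256, 256))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        return self.augment(self._load(index)), self.labels[index]
Note that with num_workers > 0 each worker process keeps its own copy of the cache, so RAM usage multiplies accordingly.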
3. Produce transformations and save them to disk
You would have to perform a lot of transformations on images, save them to disk and use this enhanced dataset afterwards. Once again that could be done with torchdata, but it's really wasteful in terms of I/O and disk space, and a rather inelegant solution. Furthermore it's "static", so the data would only last you for X epochs; it wouldn't be an "infinite" generator with augmentations.
4. Batched transformations
torchvision does not support it so you would have to write those functions on your own. See this issue for justification. AFAIK no other 3rd party provides it either. For large batches it should speed up things but implementation is open question I think (correct me if I'm wrong).
5. Prefetch
IMO would be hardest to implement (though a really good idea for the project come to think about it). Basically you load data for the next iteration when your model trains. torch.utils.data.DataLoader does provide it, though there are some concerns (like workers pausing after their data got loaded). You can read PyTorch thread about it (not sure about it as I didn't verify on my own). Also, a lot of valuable insight provided by this comment and this blog post (though not sure how up to date those are).
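On recent PyTorch versions the built-in DataLoader already covers a basic form of prefetching via worker processes; combined with pin_memory and non_blocking transfers it can overlap loading with compute. A sketch (prefetch_factor and persistent_workers exist only in newer releases, roughly 1.7+):
trainloader = torch.utils.data.DataLoader(
    train_data,
    batch_size=64,
    shuffle=True,
    num_workers=8,
    pin_memory=True,
    prefetch_factor=2,        # batches preloaded per worker
    persistent_workers=True,  # keep workers alive between epochs
)

for images, labels in trainloader:
    images = images.cuda(non_blocking=True)
    labels = labels.cuda(non_blocking=True)
    ...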
All in all, to substantially improve data loading you would need to get your hands quite dirty (or maybe there are libraries doing some of this for PyTorch already; if so, I would love to know about them).
Also remember to profile your changes, see torch.utils.bottleneck
EDIT: DALI project might be worth checking out, though AFAIK it has some problems with RAM memory growing linearly with number of epochs.
| https://stackoverflow.com/questions/61393613/ |
Pytorch: RGB value ranges 0-1 after rescaling, How do I normalize images? | I wrote a class to rescale images, but the RGB values ended up ranging from 0 to 1 after processing. What happened to the RGB values, which intuitively should range from 0-255? Following are the Rescale class and the RGB values after rescaling.
Question:
Do I still need a Min-Max Normalization, map the RGB value to 0-1?
How do I apply transforms.Normalize? Where do I put the normalization, before or after the Rescale? And how do I calculate the mean and variance: using RGB values in the 0-255 range or the 0-1 range?
Thanks for your time!
class Rescale(object):
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, anno = sample['image'], sample['anno']
# get orginal width and height of image
h, w = image.shape[0:2]
# if output_size is an integer
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = h * self.output_size / w, self.output_size
else:
new_h, new_w = self.output_size, w * self.output_size / h
# if output size is a tuple (a, b)
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
image = transform.resize(image, (new_h, new_w))
return {'image': image, 'anno': anno}
[[[0.67264216 0.50980392 0.34503034]
[0.67243905 0.51208121 0.34528431]
[0.66719145 0.51817184 0.3459951 ]
...
[0.23645098 0.2654311 0.3759458 ]
[0.24476471 0.28003857 0.38963938]
[0.24885877 0.28807445 0.40935877]]
[[0.67465196 0.50994608 0.3452402 ]
[0.68067157 0.52031373 0.3531848 ]
[0.67603922 0.52732436 0.35839216]
...
[0.23458333 0.25195098 0.36822142]
[0.2461343 0.26886127 0.38314558]
[0.2454384 0.27233056 0.39977664]]
[[0.67707843 0.51237255 0.34766667]
[0.68235294 0.5219951 0.35553024]
[0.67772059 0.52747687 0.35659176]
...
[0.24485294 0.24514568 0.36592999]
[0.25407436 0.26205475 0.38063318]
[0.2597007 0.27202914 0.40214216]]
...
[[[172 130 88]
[172 130 88]
[172 130 88]
...
[ 63 74 102]
[ 65 76 106]
[ 67 77 112]]
[[173 131 89]
[173 131 89]
[173 131 89]
...
[ 65 74 103]
[ 64 75 105]
[ 63 73 108]]
[[173 131 89]
[174 132 90]
[174 132 90]
...
[ 63 72 101]
[ 62 71 102]
[ 61 69 105]]
...
| You can use torchvision to accomplish this.
transform = transforms.Compose([
transforms.Resize(output_size),
transforms.ToTensor(),
])
This requires a PIL image as input. It will return the tensor in the [0, 1] range. You may also add mean-standard normalization as below
transform = transforms.Compose([
transforms.Resize(output_size),
transforms.ToTensor(),
transforms.Normalize(mean, std),
])
Here mean and std are the per-channel mean and standard deviation of all pixels of all images in the training set. You need to calculate them after resizing all images and converting them to torch Tensors. One way to do this would be to apply the first two transformations (resize and ToTensor) and then calculate mean and std over all training images like this
x = torch.stack([train_data[i] for i in range(len(train_data))])  # shape (N, C, H, W)
mean = torch.mean(x, dim=(0, 2, 3))
std = torch.std(x, dim=(0, 2, 3))
Then you use this mean and std value with Normalize transorm above.
| https://stackoverflow.com/questions/61397564/ |
How to apply conditions for rows in a tensor where there is boolean values | I have the following tensor:
predictions = torch.tensor([[ True, False, False],
[False, False, True],
[False, True, True],
[ True, False, False]])
I applied conditions along the axis like below.
new_pred= []
if predictions == ([True,False,False]):
new_pred = torch.Tensor(0)
if predictions == ([False,False,True]):
new_pred = torch.Tensor(2)
if predictions == ([False,True,True]):
new_pred = torch.Tensor(2)
So I want the final output (new_pred) to be:
tensor([0, 2, 2, 0])
But I am getting a blank [] for the new_pred tensor. I think my logic must be flawed since nothing is getting stored in the new_pred. Can someone help me write this logic accurately?
| The type of predictions is torch.Tensor, while ([True, False, False]) is a list; first, you have to make sure both sides have the same type.
predictions == torch.tensor([True,False,False])
>>> tensor([[ True, True, True],
[False, True, False],
[False, False, False],
[True, True, True]])
Then, you are still comparing a 2d tensor to a 1d tensor, which is ambiguous in an if statement. An easy way to fix this would be to write a for loop, compare each row of the predictions to the conditions and append the result to the new_pred list. Note that you will be comparing two boolean tensors of size three, therefore you have to make sure the result of the comparison is True for all of the cells.
predictions = torch.tensor([[ True, False, False],
[False, False, True],
[False, True, True],
[ True, False, False]])
conditions = torch.tensor([[True,False,False],
[False,False,True],
[False,True,True]])
new_predict = []
for index in range(predictions.size(0)):
if (predictions[index] == conditions[0]).all():
new_predict.append(0)
# ...
Alternatively, you can use slicing to achieve your expected result without any for loop.
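For instance, one possible vectorized version using broadcasting (the labels tensor [0, 2, 2], mapping each condition row to its output value, is an assumption based on your example):
labels = torch.tensor([0, 2, 2])
# compare every prediction row against every condition row at once
matches = (predictions.unsqueeze(1) == conditions.unsqueeze(0)).all(dim=2)  # shape (4, 3)
new_pred = labels[matches.long().argmax(dim=1)]
print(new_pred)  # tensor([0, 2, 2, 0])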
| https://stackoverflow.com/questions/61398117/ |
Unable to update PyTorch 1.4.0 to 1.5.0 using Conda | When I tried to udpdate PyTorch from 1.4.0 to 1.5.0, Anaconda says that all the packages are already installed.
$ conda install -c pytorch pytorch torchvision
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
$ conda list | grep -i torch
_pytorch_select 0.2 gpu_0
pytorch 1.4.0 py3.7_cuda10.0.130_cudnn7.6.3_0 pytorch
torchvision 0.5.0 py37_cu100 pytorch
I believe 1.5.0 is available in the pytorch channel
$ conda search -c pytorch pytorch=1.5.0
Loading channels: done
# Name Version Build Channel
pytorch 1.5.0 py3.5_cpu_0 pytorch
pytorch 1.5.0 py3.5_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.5_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch 1.5.0 py3.5_cuda9.2.148_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.6_cpu_0 pytorch
pytorch 1.5.0 py3.6_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.6_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch 1.5.0 py3.6_cuda9.2.148_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.7_cpu_0 pytorch
pytorch 1.5.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.7_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch 1.5.0 py3.7_cuda9.2.148_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.8_cpu_0 pytorch
pytorch 1.5.0 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
pytorch 1.5.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
pytorch 1.5.0 py3.8_cuda9.2.148_cudnn7.6.3_0 pytorch
Why is conda not updating PyTorch to 1.5.0?
Using Python 3.7.3 & conda 4.8.3 on Ubuntu 18.04
Thanks!
| Install Validates Constraints
The Conda install first checks to see if a constraint is satisfied, rather than blindly trying to install the latest of everything. A better reading of the command:
conda install -c pytorch pytorch torchvision
would be
With the pytorch channel prioritized, ensure that the currently activated environment has some version of pytorch and torchvision installed.
Your environment already satisfies this constraint, so there is nothing to do.
Updating Packages, or Constraints
If you want to update a package, then look into the conda update command or, if you know a minimum version you require, then specify it:
conda install -c pytorch pytorch[version='>=1.5'] torchvision
which effectively changes the constraint.
Better Practice (Recommended)
Best practice though is to simply make a new env when you require changes to packages. Every time one changes the packages in an env, one risks breaking/invalidating existing code.
conda create -n pytorch_1_5 -c pytorch pytorch torchvision
And this will grab the latest possible versions by default.
| https://stackoverflow.com/questions/61412874/ |
question with fast.ai course lesson 8 g attribute | In the course fast.ai 2019 lesson 8, there is a weird g attribute used in back propagation; I checked, and for torch.Tensor this attribute doesn't exist. I tried to print the value of inp.g/out.g in the call method but I got AttributeError: 'Tensor' object has no attribute 'g', yet I am able to obtain the inp.g/out.g value before the assignment in backward. How does this g attribute work?
class Linear():
def __init__(self, w, b):
self.w, self.b = w, b
def __call__(self, inp):
print('in lin call')
self.inp = inp
self.out = [email protected] + self.b
try:
print('out.g', self.out.g)
except Exception as e:
print('out.g dne yet')
return self.out
def backward(self):
print('out.g', self.out.g)
self.inp.g = self.out.g @ self.w.t()
self.w.g = (self.inp.unsqueeze(-1) * self.out.g.unsqueeze(1)).sum(0)
self.b.g = self.out.g.sum(0)
link to full code from the course
-update-
I was able to figure out that the self.out.g value is exactly the same as the cost function MSE's self.inp.g, but I am still unable to figure out how the value is passed into the last linear layer.
class MSE():
def __call__(self, inp, targ):
self.inp = inp
self.targ = targ
self.out = (inp.squeeze() - targ).pow(2).mean()
return self.out
def backward(self):
self.inp.g = 2. * (self.inp.squeeze() - self.targ).unsqueeze(-1) \
/ self.targ.shape[0]
print('in mse backward', self.inp.g)
class Model():
def __init__(self, w1, b1, w2, b2):
self.layers = [Lin(w1, b1), Relu(), Lin(w2, b2)]
self.loss = Mse()
def __call__(self, x, targ):
for l in self.layers:
x = l(x)
return self.loss(x, targ)
def backward(self):
self.loss.backward()
for l in reversed(self.layers):
l.backward()
| Basically this has to do with how Python assignment works (by reference, similar to how C pointers work). After tracing the variables with id(variable_name) I was able to figure out where the g attribute comes from.
# ... in model (forward pass)...
x = layer(x) # from linear layer >> return self.out and is assigned to x
# ...
return self.loss(x, targ) # x is the same x (id) obtained from the model
# ========
# ... in model (backward pass) ...
self.loss.backward() # this is how the self.inp.g came by
# ... in linear ...
self.inp.g = self.out.g @ self.w.t()
# this self.out.g is the same instance as self.inp.g from loss
| https://stackoverflow.com/questions/61429085/ |
Understanding LSTM with a simple dataset | I wanted to make sure I understand LSTM so I implemented a dummy example using Pytorch framework.
As an input, I use sequences of consecutive numbers of length 10 and the value to predict is always the last number of sequence + 1. For instance:
x = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y = 16
Since it's a very simple forecasting task, I expected the model to work well but I observe very poor performances. The model predicts a constant value by batch that keeps increasing during the training process.
I am wondering what I am missing. Below is the code I've made - any help would be highly appreciated.
from torch.utils.data import Dataset, TensorDataset, DataLoader, RandomSampler, SequentialSampler
import torch.nn as nn
import torch
class MyDataset(Dataset):
def __init__(self):
pass
def __getitem__(self, index):
x = torch.tensor([index-9,index-8,index-7,index-6,index-5,index-4,index-3,index-2,index-1,index])
y = torch.tensor(index + 1)
return x,y
def __len__(self):
return 1000
class LSTM(nn.Module):
def __init__(self, hidden_layer_size=1, batch_size = 1):
super().__init__()
self.hidden_layer_size = hidden_layer_size
self.batch_size = batch_size
self.lstm = nn.LSTM(1, hidden_layer_size)
self.linear = nn.Linear(10, 1)
self.hidden_cell = (torch.zeros(1,self.batch_size,self.hidden_layer_size),
torch.zeros(1,self.batch_size,self.hidden_layer_size))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(10 ,self.batch_size, -1), self.hidden_cell)
predictions = self.linear(lstm_out.squeeze().T)
return predictions
batch_size = 32
epochs = 1000
train = MyDataset()
sampler = RandomSampler(train)
train_dataloader = DataLoader(train, sampler=sampler, batch_size= batch_size , drop_last = True)
model = LSTM(batch_size = batch_size)
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
for e in range(epochs):
for step, batch in enumerate(train_dataloader) :
seq, labels = batch
optimizer.zero_grad()
model.hidden_cell = (torch.zeros(1, batch_size, model.hidden_layer_size),
torch.zeros(1, batch_size, model.hidden_layer_size))
y_pred = model(seq.float())
print(y_pred)
single_loss = loss_function(y_pred, labels.float())
single_loss.backward()
optimizer.step()
| There are multiple issues in your forward function. Take a look at the input that you are passing to the LSTM:
input_seq = input_seq.view(10 ,self.batch_size, -1)
print(input_seq[:, 0])
>>> tensor([[168.],
[ 21.],
[450.],
[436.],
[789.],
[941.],
[ -7.],
[811.],
[789.],
[992.]])
This is a series of random numbers. You either have to transpose the input_seq or even better, pass batch_first=True to the LSTM constructor and just unsqueeze the input_seq before passing it to the LSTM.
You also have to update the lstm_out, the only operation that is needed now is to reshape it to [batch_size x (10 * hidden_size)].
Lastly, you need to squeeze the output of the linear layer.
Aside from those, the hidden size of the LSTM is too small, use 10 (or even 100) instead of one, only then the model converges in 1000 epochs.
Here is the updated code:
class LSTM(nn.Module):
def __init__(self, hidden_layer_size=100, batch_size = 1):
super().__init__()
self.hidden_layer_size = hidden_layer_size
self.batch_size = batch_size
self.lstm = nn.LSTM(1, hidden_layer_size, batch_first=True)
self.linear = nn.Linear(10 * hidden_layer_size, 1)
self.hidden_cell = (torch.zeros(1,self.batch_size,self.hidden_layer_size),
torch.zeros(1,self.batch_size,self.hidden_layer_size))
def forward(self, input_seq):
batch_size = input_seq.size(0)
input_seq = input_seq.unsqueeze(2)
lstm_out, self.hidden_cell = self.lstm(input_seq, self.hidden_cell)
lstm_out = lstm_out.reshape(batch_size, -1)
predictions = self.linear(lstm_out).squeeze()
return predictions
| https://stackoverflow.com/questions/61435747/ |
SSD’s loss not decreasing in PyTorch | I am implementing SSD(Single shot detector) to study in PyTorch.
However, my custom training loss didn't decrease...
I've searched and tried various solutions for a week, but the problem still remains.
What should I do?
Is my loss function incorrect?
Here is my SSD300 model
SSD300(
(feature_layers): ModuleDict(
(conv1_1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu1_1): ReLU()
(conv1_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu1_2): ReLU()
(pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
(conv2_1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu2_1): ReLU()
(conv2_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu2_2): ReLU()
(pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
(conv3_1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu3_1): ReLU()
(conv3_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu3_2): ReLU()
(conv3_3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu3_3): ReLU()
(pool3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=True)
(conv4_1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu4_1): ReLU()
(conv4_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu4_2): ReLU()
(conv4_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu4_3): ReLU()
(pool4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
(conv5_1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu5_1): ReLU()
(conv5_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu5_2): ReLU()
(conv5_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu5_3): ReLU()
(pool5): MaxPool2d(kernel_size=(3, 3), stride=(1, 1), padding=1, dilation=1, ceil_mode=False)
(conv6): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))
(relu6): ReLU()
(conv7): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))
(relu7): ReLU()
(conv8_1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(relu8_1): ReLU()
(conv8_2): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(relu8_2): ReLU()
(conv9_1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
(relu9_1): ReLU()
(conv9_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(relu9_2): ReLU()
(conv10_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
(relu10_1): ReLU()
(conv10_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
(relu10_2): ReLU()
(conv11_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
(relu11_1): ReLU()
(conv11_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
(relu11_2): ReLU()
)
(localization_layers): ModuleDict(
(loc1): Sequential(
(l2norm_loc1): L2Normalization()
(conv_loc1): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc1): ReLU()
)
(loc2): Sequential(
(conv_loc2): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc2): ReLU()
)
(loc3): Sequential(
(conv_loc3): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc3): ReLU()
)
(loc4): Sequential(
(conv_loc4): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc4): ReLU()
)
(loc5): Sequential(
(conv_loc5): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc5): ReLU()
)
(loc6): Sequential(
(conv_loc6): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_loc6): ReLU()
)
)
(confidence_layers): ModuleDict(
(conf1): Sequential(
(l2norm_conf1): L2Normalization()
(conv_conf1): Conv2d(512, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf1): ReLU()
)
(conf2): Sequential(
(conv_conf2): Conv2d(1024, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf2): ReLU()
)
(conf3): Sequential(
(conv_conf3): Conv2d(512, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf3): ReLU()
)
(conf4): Sequential(
(conv_conf4): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf4): ReLU()
)
(conf5): Sequential(
(conv_conf5): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf5): ReLU()
)
(conf6): Sequential(
(conv_conf6): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(relu_conf6): ReLU()
)
)
(predictor): Predictor()
)
My loss function is defined as;
class SSDLoss(nn.Module):
def __init__(self, alpha=1, matching_func=None, loc_loss=None, conf_loss=None):
super().__init__()
self.alpha = alpha
self.matching_strategy = matching_strategy if matching_func is None else matching_func
self.loc_loss = LocalizationLoss() if loc_loss is None else loc_loss
self.conf_loss = ConfidenceLoss() if conf_loss is None else conf_loss
def forward(self, predicts, gts, dboxes):
"""
:param predicts: Tensor, shape is (batch, total_dbox_nums, 4+class_nums=(cx, cy, w, h, p_class,...)
:param gts: Tensor, shape is (batch*bbox_nums(batch), 1+4+class_nums) = [[img's_ind, cx, cy, w, h, p_class,...],..
:param dboxes: Tensor, shape is (total_dbox_nums, 4=(cx,cy,w,h))
:return:
loss: float
"""
# get predict's localization and confidence
pred_loc, pred_conf = predicts[:, :, :4], predicts[:, :, 4:]
# matching
pos_indicator, gt_loc, gt_conf = self.matching_strategy(gts, dboxes, batch_num=predicts.shape[0], threshold=0.5)
# calculate ground truth value considering default boxes
gt_loc = gt_loc_converter(gt_loc, dboxes)
# Localization loss
loc_loss = self.loc_loss(pos_indicator, pred_loc, gt_loc)
# Confidence loss
conf_loss = self.conf_loss(pos_indicator, pred_conf, gt_conf)
return conf_loss + self.alpha * loc_loss
class LocalizationLoss(nn.Module):
def __init__(self):
super().__init__()
self.smoothL1Loss = nn.SmoothL1Loss(reduction='none')
def forward(self, pos_indicator, predicts, gts):
N = pos_indicator.sum()
total_loss = self.smoothL1Loss(predicts, gts).sum(dim=-1) # shape = (batch num, dboxes num)
loss = total_loss.masked_select(pos_indicator)
return loss.sum() / N
class ConfidenceLoss(nn.Module):
def __init__(self, neg_factor=3):
"""
:param neg_factor: int, the ratio(1(pos): neg_factor) to learn pos and neg for hard negative mining
"""
super().__init__()
self.logsoftmax = nn.LogSoftmax(dim=-1)
self._neg_factor = neg_factor
def forward(self, pos_indicator, predicts, gts):
loss = (-gts * self.logsoftmax(predicts)).sum(dim=-1) # shape = (batch num, dboxes num)
N = pos_indicator.sum()
neg_indicator = torch.logical_not(pos_indicator)
pos_loss = loss.masked_select(pos_indicator)
neg_loss = loss.masked_select(neg_indicator)
neg_num = neg_loss.shape[0]
neg_num = min(neg_num, self._neg_factor * N)
_, topk_indices = torch.topk(neg_loss, neg_num)
neg_loss = neg_loss.index_select(dim=0, index=topk_indices)
return (pos_loss.sum() + neg_loss.sum()) / N
loss output is below;
Training... Epoch: 1, Iter: 1, [32/21503 (0%)] Loss: 28.804445
Training... Epoch: 1, Iter: 10, [320/21503 (1%)] Loss: 12.880742
Training... Epoch: 1, Iter: 20, [640/21503 (3%)] Loss: 15.932519
Training... Epoch: 1, Iter: 30, [960/21503 (4%)] Loss: 14.624641
Training... Epoch: 1, Iter: 40, [1280/21503 (6%)] Loss: 16.301014
Training... Epoch: 1, Iter: 50, [1600/21503 (7%)] Loss: 15.710087
Training... Epoch: 1, Iter: 60, [1920/21503 (9%)] Loss: 12.441727
Training... Epoch: 1, Iter: 70, [2240/21503 (10%)] Loss: 12.283393
Training... Epoch: 1, Iter: 80, [2560/21503 (12%)] Loss: 12.272835
Training... Epoch: 1, Iter: 90, [2880/21503 (13%)] Loss: 12.273635
Training... Epoch: 1, Iter: 100, [3200/21503 (15%)] Loss: 12.273409
Training... Epoch: 1, Iter: 110, [3520/21503 (16%)] Loss: 12.266172
Training... Epoch: 1, Iter: 120, [3840/21503 (18%)] Loss: 12.272820
Training... Epoch: 1, Iter: 130, [4160/21503 (19%)] Loss: 12.274920
Training... Epoch: 1, Iter: 140, [4480/21503 (21%)] Loss: 12.275247
Training... Epoch: 1, Iter: 150, [4800/21503 (22%)] Loss: 12.273258
Training... Epoch: 1, Iter: 160, [5120/21503 (24%)] Loss: 12.277486
Training... Epoch: 1, Iter: 170, [5440/21503 (25%)] Loss: 12.266512
Training... Epoch: 1, Iter: 180, [5760/21503 (27%)] Loss: 12.265674
Training... Epoch: 1, Iter: 190, [6080/21503 (28%)] Loss: 12.265306
Training... Epoch: 1, Iter: 200, [6400/21503 (30%)] Loss: 12.269717
Training... Epoch: 1, Iter: 210, [6720/21503 (31%)] Loss: 12.274122
Training... Epoch: 1, Iter: 220, [7040/21503 (33%)] Loss: 12.263970
Training... Epoch: 1, Iter: 230, [7360/21503 (34%)] Loss: 12.267252
| It turned out I had to normalize the encoded boxes before calculating the loss function.
The word "variance" in the reference implementation had misled me...
link
class Encoder(nn.Module):
def __init__(self, norm_means=(0, 0, 0, 0), norm_stds=(0.1, 0.1, 0.2, 0.2)):
super().__init__()
# shape = (1, 1, 4=(cx, cy, w, h)) or (1, 1, 1)
self.norm_means = torch.tensor(norm_means, requires_grad=False).unsqueeze(0).unsqueeze(0)
self.norm_stds = torch.tensor(norm_stds, requires_grad=False).unsqueeze(0).unsqueeze(0)
def forward(self, gt_boxes, default_boxes):
"""
:param gt_boxes: Tensor, shape = (batch, default boxes num, 4)
:param default_boxes: Tensor, shape = (default boxes num, 4)
Note that 4 means (cx, cy, w, h)
:return:
encoded_boxes: Tensor, calculate ground truth value considering default boxes. The formula is below;
gt_cx = (gt_cx - dbox_cx)/dbox_w, gt_cy = (gt_cy - dbox_cy)/dbox_h,
gt_w = log(gt_w / dbox_w), gt_h = log(gt_h / dbox_h)
shape = (batch, default boxes num, 4)
"""
assert gt_boxes.shape[1:] == default_boxes.shape, "gt_boxes and default_boxes must be same shape"
gt_cx = (gt_boxes[:, :, 0] - default_boxes[:, 0]) / default_boxes[:, 2]
gt_cy = (gt_boxes[:, :, 1] - default_boxes[:, 1]) / default_boxes[:, 3]
gt_w = torch.log(gt_boxes[:, :, 2] / default_boxes[:, 2])
gt_h = torch.log(gt_boxes[:, :, 3] / default_boxes[:, 3])
encoded_boxes = torch.cat((gt_cx.unsqueeze(2),
gt_cy.unsqueeze(2),
gt_w.unsqueeze(2),
gt_h.unsqueeze(2)), dim=2)
# normalization
return (encoded_boxes - self.norm_means.to(gt_boxes.device)) / self.norm_stds.to(gt_boxes.device) <<<<<<<<<<<<< answer!!
| https://stackoverflow.com/questions/61436770/ |
is crossentropy loss of pytorch different than "categorical_crossentropy" of keras? | I am trying to mimic a pytorch neural network in keras.
I am confident that my keras version of the neural network is very close to the one in pytorch, but during training I see that the loss values of the pytorch network are much lower than the loss values of the keras network. I wonder if this is because I have not properly copied the pytorch network in keras or if the loss computation is different in the two frameworks.
Pytorch loss definition:
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)
Keras loss definition:
sgd = optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
resnet.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['categorical_accuracy'])
Note that all the layers in the keras network have been implemented with L2 regularization kernel_regularizer=regularizers.l2(5e-4), also I used he_uniform initialization which I believe is default in pytorch, according to the source code.
The batch size for the two networks are the same: 128.
In the pytorch version, I get loss values around 4.1209 which decreases to around 0.5. In keras it starts around 30 and decreases to 2.5.
| PyTorch CrossEntropyLoss accepts unnormalized scores for each class i.e., not probability (source). Keras categorical_crossentropy by default uses from_logits=False which means it assumes y_pred contains probabilities (not raw scores) (source).
In PyTorch, if you use CrossEntropyLoss, you should not use the softmax/sigmoid layer at the end. In keras you can use it or not use it but set the from_logits accordingly.
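A small sketch illustrating the difference (the numbers are made up): PyTorch's CrossEntropyLoss applies log-softmax internally, so it must be fed raw logits, whereas Keras' categorical_crossentropy by default expects probabilities:
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw, unnormalized scores
target = torch.tensor([0])

loss = nn.CrossEntropyLoss()(logits, target)   # pass logits directly

# equivalent "probability-based" computation (what Keras assumes by default):
probs = torch.softmax(logits, dim=1)
manual = -torch.log(probs[0, target[0]])
print(loss.item(), manual.item())  # identical values
On the Keras side you would either keep a softmax output layer with loss='categorical_crossentropy', or drop the softmax and use tf.keras.losses.CategoricalCrossentropy(from_logits=True).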
| https://stackoverflow.com/questions/61437961/ |
Add tensor to another by repeating along axis | I have a tensor out with shape:
torch.Size([4, 644, 25])
and another one x with shape:
torch.Size([4, 161, 25])
I want to add x to out 4 times, something like:
out[:, 0:161] += x
out[:, 161:322] += x
out[:, 322:483] += x
out[:, 483:644] += x
Is there some one-liner that I can use to do this?
| We can use np.tile here:
out += np.tile(x, (1,out.shape[1]//x.shape[1],1))
Or using pytorch's repeat:
out += x.repeat(1,out.shape[1]//x.shape[1],1)
| https://stackoverflow.com/questions/61443972/ |
Pytorch: How to optimize multiple variables with respect to multiple losses? | I want different losses to have their gradients computed with respect to different variables, and those variables to then all step together.
Here's a simple example demonstrating what I want:
import torch as T
x = T.randn(3, requires_grad = True)
y = T.randn(4, requires_grad = True)
z = T.randn(5, requires_grad = True)
x_opt = T.optim.Adadelta([x])
y_opt = T.optim.Adadelta([y])
z_opt = T.optim.Adadelta([z])
for i in range(n_iter):
x_opt.zero_grad()
y_opt.zero_grad()
z_opt.zero_grad()
shared_computation = foobar(x, y, z)
x_loss = f(x, y, z, shared_computation)
y_loss = g(x, y, z, shared_computation)
z_loss = h(x, y, z, shared_computation)
x_loss.backward_with_respect_to(x)
y_loss.backward_with_respect_to(y)
z_loss.backward_with_respect_to(z)
x_opt.step()
y_opt.step()
z_opt.step()
My question is how do we do that backward_with_respect_to part in PyTorch? I only want x's gradient w.r.t. x_loss, etc.. And then I want all the optimizers to step together (based on the current values of x, y, and z).
| I've written a function to do just this. The two key components are (1) using retain_graph=True for all but the final call to .backward() and (2) saving grads after each call to .backward(), and restoring them at the end before .step()ing.
def multi_step(losses, optms):
# optimizers each take a step, with `optms[i]`'s variables being
# optimized w.r.t. `losses[i]`.
grads = [None]*len(losses)
for i, (loss, optm) in enumerate(zip(losses, optms)):
retain_graph = i != (len(losses)-1)
optm.zero_grad()
loss.backward(retain_graph=retain_graph)
grads[i] = [
[
p.grad+0 for p in group['params']
] for group in optm.param_groups
]
for optm, grad in zip(optms, grads):
for p_group, g_group in zip(optm.param_groups, grad):
for p, g in zip(p_group['params'], g_group):
p.grad = g
optm.step()
In the example code stated in the question, multi_step would be used as follows:
for i in range(n_iter):
shared_computation = foobar(x, y, z)
x_loss = f(x, y, z, shared_computation)
y_loss = g(x, y, z, shared_computation)
z_loss = h(x, y, z, shared_computation)
multi_step([x_loss, y_loss, z_loss], [x_opt, y_opt, z_opt])
| https://stackoverflow.com/questions/61451021/ |
PyTorch LSTM crashing on colab gpu (works fine on cpu) | Hello I have following LSTM which runs fine on a CPU.
import torch
class LSTMForecast(torch.nn.Module):
"""
A very simple baseline LSTM model that returns
an output sequence given a multi-dimensional input seq. Inspired by the StackOverflow link below.
https://stackoverflow.com/questions/56858924/multivariate-input-lstm-in-pytorch
"""
def __init__(self, seq_length: int, n_time_series: int, output_seq_len=1, hidden_states:int=20, num_layers=2, bias=True, batch_size=100):
super().__init__()
self.forecast_history = seq_length
self.n_time_series = n_time_series
self.hidden_dim = hidden_states
self.num_layers = num_layers
self.lstm = torch.nn.LSTM(n_time_series, hidden_states, num_layers, bias, batch_first=True)
self.final_layer = torch.nn.Linear(seq_length*hidden_states, output_seq_len)
self.init_hidden(batch_size)
def init_hidden(self, batch_size)->None:
# This is what we'll initialise our hidden state
self.hidden = (torch.zeros(self.num_layers, batch_size, self.hidden_dim), torch.zeros(self.num_layers, batch_size, self.hidden_dim))
def forward(self, x: torch.Tensor) -> torch.Tensor:
batch_size = x.size()[0]
self.init_hidden(batch_size)
out_x, self.hidden = self.lstm(x, self.hidden)
x = self.final_layer(out_x.contiguous().view(batch_size, -1))
return x
However, when I try to run on colab GPU it crashes without even an error message.
model = LSTMForecast(1, 1, batch_size=1).to('cuda')
a = torch.rand(1, 1, 1).to('cuda')
model(a)
The logs don't tell me anything either. I'm really at a loss.
| I had to explicitly call CUDA. Once I did that it worked.
def init_hidden(self, batch_size)->None:
# This is what we'll initialise our hidden state
self.hidden = (torch.zeros(self.num_layers, batch_size, self.hidden_dim).to('cuda'), torch.zeros(self.num_layers, batch_size, self.hidden_dim).to('cuda'))
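A slightly more device-agnostic variant (just a sketch) is to build the hidden state on whatever device the input lives on, so 'cuda' doesn't need to be hard-coded:
def init_hidden(self, batch_size, device) -> None:
    self.hidden = (
        torch.zeros(self.num_layers, batch_size, self.hidden_dim, device=device),
        torch.zeros(self.num_layers, batch_size, self.hidden_dim, device=device),
    )

def forward(self, x: torch.Tensor) -> torch.Tensor:
    batch_size = x.size(0)
    self.init_hidden(batch_size, x.device)
    out_x, self.hidden = self.lstm(x, self.hidden)
    return self.final_layer(out_x.contiguous().view(batch_size, -1))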
| https://stackoverflow.com/questions/61451339/ |
How can I calculate accuracy for keypoints detection CNN model in pytorch? | Can someone help me with this please,
def train_net(n_epochs):
valid_loss_min = np.Inf
history = {'train_loss': [], 'valid_loss': [], 'epoch': []}
for epoch in range(n_epochs):
train_loss = 0.0
valid_loss = 0.0
net.train()
running_loss = 0.0
for batch_i, data in enumerate(train_loader):
images = data['image']
key_pts = data['keypoints']
key_pts = key_pts.view(key_pts.size(0), -1)
key_pts = key_pts.type(torch.FloatTensor).to(device)
images = images.type(torch.FloatTensor).to(device)
output_pts = net(images)
loss = criterion(output_pts, key_pts)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.item()*images.data.size(0)
net.eval()
with torch.no_grad():
for batch_i, data in enumerate(test_loader):
images = data['image']
key_pts = data['keypoints']
key_pts = key_pts.view(key_pts.size(0), -1)
key_pts = key_pts.type(torch.FloatTensor).to(device)
images = images.type(torch.FloatTensor).to(device)
output_pts = net(images)
loss = criterion(output_pts, key_pts)
valid_loss += loss.item()*images.data.size(0)
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(test_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch+1,train_loss,valid_loss))
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(valid_loss_min,valid_loss))
torch.save(net,f'X:\\xxxx\\xxx\\xxx\\epoch{epoch + 1}_loss{valid_loss}.pth')
valid_loss_min = valid_loss
history['epoch'].append(epoch + 1)
history['train_loss'].append(train_loss)
history['valid_loss'].append(valid_loss)
print('Finished Training')
return history
'''
Above is the training code for reference!
| This is funny, I was just working on this minutes ago myself! As you probably realise, simply calculating the Euclidean distance between 2 sets of keypoints doesn't generalise well to cases where you need to compare across body shapes and sizes. So I would recommend using the Object Keypoint Similarity Score, which measures the body-joint distances normalised by the scale of the person. As represented in this blog, OKS is defined as:
OKS = Σ_i [exp(-d_i² / (2 · s² · k_i²)) · δ(v_i > 0)] / Σ_i δ(v_i > 0)
where d_i is the Euclidean distance between detected and ground-truth keypoint i, s is the object scale, k_i is a per-keypoint constant, and v_i is the visibility flag.
Here (line 313 function computeOKS) is Facebook research's implementation:
def computeOks(self, imgId, catId):
p = self.params
# dimention here should be Nxm
gts = self._gts[imgId, catId]
dts = self._dts[imgId, catId]
inds = np.argsort([-d['score'] for d in dts], kind='mergesort')
dts = [dts[i] for i in inds]
if len(dts) > p.maxDets[-1]:
dts = dts[0:p.maxDets[-1]]
# if len(gts) == 0 and len(dts) == 0:
if len(gts) == 0 or len(dts) == 0:
return []
ious = np.zeros((len(dts), len(gts)))
sigmas = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62,.62, 1.07, 1.07, .87, .87, .89, .89])/10.0
vars = (sigmas * 2)**2
k = len(sigmas)
# compute oks between each detection and ground truth object
for j, gt in enumerate(gts):
# create bounds for ignore regions(double the gt bbox)
g = np.array(gt['keypoints'])
xg = g[0::3]; yg = g[1::3]; vg = g[2::3]
k1 = np.count_nonzero(vg > 0)
bb = gt['bbox']
x0 = bb[0] - bb[2]; x1 = bb[0] + bb[2] * 2
y0 = bb[1] - bb[3]; y1 = bb[1] + bb[3] * 2
for i, dt in enumerate(dts):
d = np.array(dt['keypoints'])
xd = d[0::3]; yd = d[1::3]
if k1>0:
# measure the per-keypoint distance if keypoints visible
dx = xd - xg
dy = yd - yg
else:
# measure minimum distance to keypoints in (x0,y0) & (x1,y1)
z = np.zeros((k))
dx = np.max((z, x0-xd), axis=0) + np.max((z, xd-x1), axis=0)
dy = np.max((z, y0-yd), axis=0) + np.max((z, yd-y1), axis=0)
e = (dx**2 + dy**2) / vars / (gt['area'] + np.spacing(1)) / 2
if k1 > 0:
e=e[vg > 0]
ious[i, j] = np.sum(np.exp(-e)) / e.shape[0]
return ious
| https://stackoverflow.com/questions/61453186/ |
convert sess.run to pytorch | I am trying to convert a code from tf to pytorch.
The part of the code where I am stuck is this sess.run. As fas as I know, pytorch doesn't need it, but I don't find the way to replicate it. I attach you the code.
TF:
ebnos_db = np.linspace(1,6, 6)
bers_no_training = np.zeros(shape=[ebnos_db.shape[0]])
for j in range(epochs):
for i in range(ebnos_db.shape[0]):
ebno_db = ebnos_db[i]
bers_no_training[i] += sess.run(ber, feed_dict={
batch_size: samples,
noise_var: ebnodb2noisevar(ebno_db, coderate)
})
bers_no_training /= epochs
samples is an int32 and ebnodb2noisevar() returns a float32.
BER in TF is calculated as:
ber = tf.reduce_mean(tf.cast(tf.not_equal(x, x_hat), dtype=tf.float32))
and in PT:
wrong_bits = ( torch.eq(x, x_hat).type(torch.float32) * -1 ) + 1
ber = torch.mean(wrong_bits)
I think BER is well computed, but the main problem is that I don't know how to convert sess.run into PyTorch, nor I completely understand its function.
Can anybody help me?
Thanks
| You can do the same in PyTorch but easier when it comes to ber:
ber = torch.mean((x != x_hat).float())
would be enough.
Yes, PyTorch doesn't need it, as it's based on dynamic graph construction (unlike Tensorflow with its static approach).
In tensorflow sess.run is used to feed values into created graph; here tf.Placeholder (variable in graph which represents a node where a user can "inject" his data) named batch_size will be fed with samples and noise_var with ebnodb2noisevar(ebno_db, coderate).
Translating this to PyTorch is usually straightforward as you don't need any graph-like approaches with session. Just use your neural network (or a-like) with correct input (like samples and noise_var) and you are fine. You have to check your graph (so how ber is constructed from batch_size and noise_var) and reimplement it in PyTorch.
Also, please check PyTorch introductory tutorials to get a feel of the framework before diving into it.
| https://stackoverflow.com/questions/61455950/ |
How to use a Batchsampler within a Dataloader | I have a need to use a BatchSampler within a pytorch DataLoader instead of calling __getitem__ of the dataset multiple times (remote dataset, each query is pricy). I cannot understand how to use the batchsampler with any given dataset.
e.g
class MyDataset(Dataset):
def __init__(self, remote_ddf, ):
self.ddf = remote_ddf
def __len__(self):
return len(self.ddf)
def __getitem__(self, idx):
return self.ddf[idx] --------> This is as expensive as a batch call
def get_batch(self, batch_idx):
return self.ddf[batch_idx]
my_loader = DataLoader(MyDataset(remote_ddf),
batch_sampler=BatchSampler(Sampler(), batch_size=3))
The thing I do not understand, neither found any example online or in torch docs, is how do I use my get_batch function instead of the __getitem__ function.
Edit:
Following the answer of Szymon Maszke, this is what I tried and yet, \_\_get_item__ gets one index each call, instead of a list of size batch_size
class Dataset(Dataset):
def __init__(self):
...
def __len__(self):
...
def __getitem__(self, batch_idx): ------> here I get only one index
return self.wiki_df.loc[batch_idx]
loader = DataLoader(
dataset=dataset,
batch_sampler=BatchSampler(
SequentialSampler(dataset), batch_size=self.hparams.batch_size, drop_last=False),
num_workers=self.hparams.num_data_workers,
)
| You can't use get_batch instead of __getitem__ and I don't see a point to do it like that.
torch.utils.data.BatchSampler takes indices from your Sampler() instance (in this case 3 of them) and returns them as a list, so those can be used in your MyDataset __getitem__ method (check the source code; most of the samplers and data-related utilities are easy to follow in case you need it).
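A quick check of what BatchSampler actually yields:
from torch.utils.data import BatchSampler, SequentialSampler

print(list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False)))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]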
I assume your self.ddf supports list slicing (e.g. self.ddf[[25, 44, 115]] returns values correctly and uses only one expensive call). In this case simply switch get_batch into __getitem__ and you are good to go.
class MyDataset(Dataset):
def __init__(self, remote_ddf, ):
self.ddf = remote_ddf
def __len__(self):
return len(self.ddf)
def __getitem__(self, batch_idx):
return self.ddf[batch_idx] -> batch_idx is a list
EDIT: You have to specify batch_sampler as sampler, otherwise the batch will be divided into single indices. This should be fine:
loader = DataLoader(
dataset=dataset,
# This line below!
sampler=BatchSampler(
SequentialSampler(dataset), batch_size=self.hparams.batch_size, drop_last=False
),
num_workers=self.hparams.num_data_workers,
)
| https://stackoverflow.com/questions/61458305/ |
Minimal (light version) PyTorch and Numpy packages in production | I am putting a model into production and I am required to scan all dependencies (Pytorch and Numpy) beforehand via VeraCode Scan.
I noticed that the majority of the flaws are coming from test scripts and caffe2 modules in Pytorch and numpy.
Is there any way to build/install only part of these packages that I use in my application? (e.g. I won't use testing and caffe2 in the application so there's no need to have them in my PyTorch / Numpy source code)
| 1. PyInstaller
You could package your application using pyinstaller. This tool packages your app with Python and its dependencies and uses only the parts you need (simplifying; in reality it's hard to trace your package exactly, so some other stuff would be bundled as well).
Also you might be in for some quirks and workarounds to make it work with pytorch and numpy as those dependencies are quite heavy (especially pytorch).
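For reference, the basic invocation looks roughly like this (a sketch; your_app.py is a placeholder, and bundling torch usually needs extra hook/hidden-import options on top of this):
pip install pyinstaller
pyinstaller --onefile your_app.py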
2. Use only PyTorch
numpy and pytorch are pretty similar feature-wise (as PyTorch tries to be compatible with it), hence maybe you could use only one of them, which would simplify the whole thing further
3. Use C++
Depending on other parts of your app you may write it (at least the neural network) in C++ using PyTorch's C++ frontend, which has been stable since the 1.5.0 release.
Going this route would allow you to compile PyTorch's .cpp source code statically (so all dependencies are linked), which allows for a relatively small binary size (30Mb when compared to PyTorch's 1GB+), but requires a lot of work.
| https://stackoverflow.com/questions/61463593/ |
persistent pip install in rapids.ai docker container | This is probably a really stupid question, but one has got to start somewhere. I am playing with NVDIA's rapids.ai gpu-enhanced docker container, but this (presumably by design) does not come with pytorch. Now, of course, I can do a pip install torch torch-ignite every time, but this is both annoying and resource-consuming (and pytorch is a large download). What is the approved method for persisting a pip install in a container?
| Create a new Dockerfile that builds a new image based on the existing one:
FROM the/rapids-ai/image
RUN pip install torch torch-ignite
And then
$ ls Dockerfile
Dockerfile
$ docker build -t myimage .
You can now do:
$ docker run myimage
| https://stackoverflow.com/questions/61463624/ |
How to keep model fixed during training? | I am trying to implement a model that uses encoding from multiple pre-trained BERT models on different datasets and gets a combined representation using a fully-connected layer. In this, I want that BERT models should remain fixed and only fully-connected layers should get trained. Is it possible to achieve this in huggingface-transformers? I don't see any flag which allows me to do that.
PS: I don't want to go by the way of dumping the encoding of inputs for each BERT model and use them as inputs.
| A simple solution to this is to just exclude the parameters related to the BERT model while passing to the optimizer.
param_optimizer = [x for x in model.named_parameters() if 'bert' not in x[0]]
optimizer = AdamW([p for n, p in param_optimizer], lr=lr)
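Alternatively (a sketch, assuming the BERT submodules' parameter names all contain 'bert'), you can freeze the encoders explicitly with requires_grad = False, which also avoids computing gradients for them:
for name, param in model.named_parameters():
    if 'bert' in name:
        param.requires_grad = False

optimizer = AdamW((p for p in model.parameters() if p.requires_grad), lr=lr)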
| https://stackoverflow.com/questions/61464726/ |
Pytorch: looking for a function that let me to manually set learning rates for specific epochs intervals | For example, set lr = 0.01 for the first 100 epochs, lr = 0.001 from epoch 101 to epoch 1000, lr = 0.0005 for epoch 1001-4000. Basically my learning rate plan is not letting it decay exponentially with a fixed number of steps. I know it can be achieved by self-defined functions, just curious if there are already developed functions to do that.
| torch.optim.lr_scheduler.LambdaLR is what you are looking for. It returns a multiplier of the initial learning rate, so you can specify any value for any given epoch. For your example it would be:
def lr_lambda(epoch: int):
    if epoch <= 100:
        return 1.0   # lr = 0.01
    if epoch <= 1000:
        return 0.1   # lr = 0.001
    return 0.05      # lr = 0.0005

# Optimizer has lr set to 0.01
scheduler = LambdaLR(optimizer, lr_lambda=lr_lambda)
for epoch in range(4000):
train(...)
validate(...)
optimizer.step()
scheduler.step()
In PyTorch there are common schedulers (like MultiStepLR or ExponentialLR), but for a custom use case like yours, LambdaLR is the easiest.
| https://stackoverflow.com/questions/61473193/ |
Remove RELU activation from Resnet model in Pytorch | How to remove all RELU activation layers from Resnet model in pytorch and if possible replace it by a linear activation layer?
| model = models.resnet50()
names = []
for name, module in model.named_modules():
if hasattr(module, 'relu'):
module.relu = nn.Sigmoid()  # or nn.Identity() accordingly
print(model)
This works for either replacing activations or making it identity
| https://stackoverflow.com/questions/61474514/ |
Python matplotlib, invalid shape for image data | Currently I have this code to show three images:
imshow(image1, title='1')
imshow(image2, title='2')
imshow(image3, title='3')
And it works fine. But I am trying to put them all three in a row instead of column.
Here is the code I have tried:
f = plt.figure()
f.add_subplot(1,3,1)
plt.imshow(image1)
f.add_subplot(1,3,2)
plt.imshow(image2)
f.add_subplot(1,3,3)
plt.imshow(image3)
It throws
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
If I do
f = plt.figure()
f.add_subplot(1,3,1)
plt.imshow(image1.cpu())
f.add_subplot(1,3,2)
plt.imshow(image2.cpu())
f.add_subplot(1,3,3)
plt.imshow(image3.cpu())
It throws
TypeError: Invalid shape (1, 3, 128, 128) for image data
How should I fix this or is there an easier way to implement it?
| The matplotlib function 'imshow' gets 3-channel pictures as (h, w, 3) as you can see in the documentation.
It seems that you passed a "batch" of a single image (the first dimension) with three channels (the second dimension); h and w are the third and fourth dimensions.
You need to reshape or view your image (after converting it to cpu); try to use:
image1.squeeze().permute(1,2,0)
The result will be an image of the desired shape (128, 128, 3).
The squeeze() function will remove the first dimension, and the permute() function will transpose the dimensions: the first (channel) dimension shifts to the last position and the other two shift towards the beginning.
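Putting it together, something like this should work (a sketch; if the tensors require grad you will also need .detach() before converting):
f = plt.figure(figsize=(12, 4))
for i, img in enumerate([image1, image2, image3], start=1):
    f.add_subplot(1, 3, i)
    plt.imshow(img.cpu().squeeze().permute(1, 2, 0))
plt.show()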
Also, have a look here for further talk on the GPU and CPU issues:
link
Hope that helps.
| https://stackoverflow.com/questions/61480762/ |
I am trying to use pytorch's implementation of XLNet and got 'Trying to create tensor with negative dimension -1: [-1, 768]' when loading XLNet | I started working on this about two months ago on Google Colab for a midterm project and everything worked perfectly. Now I am modifying it for a final project and keep getting the error 'RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768]'. It looks like pytorch recently pushed a new version 1.5, so I downgraded to version 1.4 and still got the same error. Same with 1.3, and I know I wasn't using anything lower since that came out last year. I checked it with my midterm code and still got the same error, so I don't know what's going on. Here is the chunk of code related to downloading and using the model.
train_inputs, validation_inputs, train_labels, validation_labels = train_test_split(inputIds,
labels,
random_state=2020,
test_size=0.2)
train_masks, validation_masks, _, _ = train_test_split(attention_masks, inputIds, random_state=2020,
test_size=0.2)
# Turn data into torch tensors
train_inputs = torch.tensor(train_inputs)
validation_inputs = torch.tensor(validation_inputs)
train_labels = torch.tensor(train_labels)
validation_labels = torch.tensor(validation_labels)
train_masks = torch.tensor(train_masks)
validation_masks = torch.tensor(validation_masks)
# Create Iterators of the datasets
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=2)
# Loads model into GPU memory
model.cuda()
param_optimizer = list(model.named_parameters())
no_decay = ['bias','gamma','beta']
optimizer_grouped_parameters = [
{'params':[p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate':0.01},
{'params':[p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate':0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)
The error happens on the line model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=2). The packages I am using:
from pandas import to_datetime
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
# MUST INSTALL PYTORCH-TRANSFORMERS
from pytorch_transformers import XLNetTokenizer, XLNetForSequenceClassification, AdamW
from tqdm import trange
from numpy import argmax, sum
import nltk
nltk.download('punkt')
Thank you to anyone who tries to help.
| You can try transformers instead of pytorch_transformers.
! pip install transformers (Google Colab)
In terminal,
pip install transformers
import torch
from transformers import XLNetForSequenceClassification
model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=2)
model.cuda()
param_optimizer = list(model.named_parameters())
no_decay = ['bias','gamma','beta']
optimizer_grouped_parameters = [
{'params':[p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate':0.01},
{'params':[p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate':0.0}
]
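If you were also importing the tokenizer and AdamW from pytorch_transformers, the same names can be imported from transformers instead — a hedged sketch, assuming they are exposed at the package's top level in your installed version:
from transformers import XLNetTokenizer, AdamW
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)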
Here's the code without any error in google colab: https://colab.research.google.com/drive/1A8edGYyFuE7d1Z-ZJusz5zU_7yCevF2L
| https://stackoverflow.com/questions/61493753/ |
Input dimension for CrossEntropy Loss in PyTorch | For a binary classification problem with batch_size = 1, I have logit and label values using which I need to calculate loss.
logit: tensor([0.1198, 0.1911], device='cuda:0', grad_fn=<AddBackward0>)
label: tensor([1], device='cuda:0')
# calculate loss
loss_criterion = nn.CrossEntropyLoss()
loss_criterion.cuda()
loss = loss_criterion( b_logits, b_labels )
However, this always results in the following error,
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
What input dimensions is the CrossEntropyLoss actually asking for?
| You are passing tensors of the wrong shape.
The expected shapes are (from the docs):
Input: (N,C) where C = number of classes
Target: (N) where each value is 0 ≤ targets[i] ≤ C−1
So here, b_logits should have shape ([1, 2]) instead of ([2]). To get it into the right shape you can use torch.view, e.g. b_logits.view(1, -1).
And b_labels shape should be ([1]).
Ex.:
b_logits = torch.tensor([0.1198, 0.1911], requires_grad=True)
b_labels = torch.tensor([1])
loss_criterion = nn.CrossEntropyLoss()
loss = loss_criterion( b_logits.view(1,-1), b_labels )
loss
tensor(0.6581, grad_fn=<NllLossBackward>)
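An equivalent way to add the missing batch dimension (a small sketch, not part of the original answer) is torch.unsqueeze:
loss = loss_criterion(b_logits.unsqueeze(0), b_labels)  # logits now have shape (1, 2)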
| https://stackoverflow.com/questions/61501417/ |
Filter out np.nan values from pytorch 1d tensor | I have a 1d tensor looking kinda like this:
import numpy as np
import torch
my_list = [0, 1, 2, np.nan, np.nan, 4]
tensor = torch.Tensor(my_list)
How do I filter out the NaN values, so it becomes a tensor of size 4?
| You can use torch.isnan
my_list = [0, 1, 2, np.nan, np.nan, 4]
tensor = torch.Tensor(my_list)
tensor[~torch.isnan(tensor)]
tensor([0., 1., 2., 4.])
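If you also want to know which positions were kept, a minimal sketch building on the same mask (assuming the tensor from the question):
mask = ~torch.isnan(tensor)
kept_indices = torch.nonzero(mask, as_tuple=True)[0]  # tensor([0, 1, 2, 5])
filtered = tensor[mask]                                # tensor([0., 1., 2., 4.])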
| https://stackoverflow.com/questions/61503138/ |