instruction | input | output | source
---|---|---|---|
Runtime error, shape is invalid for input | I am trying to get the input and output information of a network. When debugging, I got this error: RuntimeError: shape '[-1, 400]' is invalid for input of size 384. I tried different values, but I can't find the correct one. Is there a way to solve this issue? Thanks.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
input_shape = (3, 21,21)
dummy_input = torch.randn(6,*input_shape)
graph = torch.jit._get_trace_graph(model, args=dummy_input, _force_outplace=False, _return_inputs_states=False)
Error message:
RuntimeError: shape '[-1, 400]' is invalid for input of size 384
| The shape of the tensor after the convolutional layers is [6, 16, 2, 2], so you cannot reshape it to 16*5*5 before feeding it to the linear layers. You should change your network to the one given below if you want to keep the same filter sizes as the original in the convolutional layers.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*2*2, 120)  # changed the size
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*2*2)  # changed the size
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
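If the input resolution changes again, a quick way to find the right flattened size is to push a dummy batch through the convolutional part and inspect its shape — a minimal sketch, assuming the Net definition above and the imports from the question:
import torch
import torch.nn.functional as F

net = Net()
with torch.no_grad():
    dummy = torch.randn(1, 3, 21, 21)  # one image at the resolution you intend to use
    feats = net.pool(F.relu(net.conv2(net.pool(F.relu(net.conv1(dummy))))))
    print(feats.shape)  # torch.Size([1, 16, 2, 2]) -> use 16*2*2 in fc1 and in view()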
| https://stackoverflow.com/questions/61794587/ |
Weak optimizers in Pytorch | Consider a simple line fitting a * x + b = x, where a, b are the optimized parameters and x is the observed vector given by
import torch
X = torch.randn(1000,1,1)
One can immediately see that the exact solution is a=1, b=0 for any x and it can be found as easily as:
import numpy as np
np.polyfit(X.numpy().flatten(), X.numpy().flatten(), 1)
I am trying now to find this solution by means of gradient descent in PyTorch, where the mean square error is used as an optimization criterion.
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
from torch.optim import Adam, SGD, Adagrad, ASGD
X = torch.randn(1000,1,1) # Sample data
class SimpleNet(nn.Module): # Trivial neural network containing two weights
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.f1 = nn.Linear(1,1)

    def forward(self, x):
        x = self.f1(x)
        return x
# Testing default setting of 3 basic optimizers
K = 500
net = SimpleNet()
optimizer = Adam(params=net.parameters())
Adam_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
    for b in range(1): # single batch
        loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
        loss.backward()
        optimizer.step()
        Adam_losses.append(float(loss.detach()))
net = SimpleNet()
optimizer = SGD(params=net.parameters(), lr=0.0001)
SGD_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
    for b in range(1): # single batch
        loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
        loss.backward()
        optimizer.step()
        SGD_losses.append(float(loss.detach()))
net = SimpleNet()
optimizer = Adagrad(params=net.parameters())
Adagrad_losses = []
optimizer.zero_grad() # zero the gradient buffers
for k in range(K):
    for b in range(1): # single batch
        loss = torch.mean((net.forward(X[b,:,:]) - X[b,:, :])**2)
        loss.backward()
        optimizer.step()
        Adagrad_losses.append(float(loss.detach()))
The training progress in terms of loss evolution can be shown as
What is surprising for me is a very slow convergence of the algorithms in default setting. I have thus 2 questions:
1) Is it possible to achieve an arbitrary small error (loss) purely by means of some Pytorch optimizer? Since the loss function is convex, it should be definitely possible, however, I am not able to figure out, how to achieve this using PyTorch. Note that the above 3 optimizers cannot do that - see the loss progress in log scale for 20000 iterations:
2) I am wondering how the optimizers can work well in complex examples, when they do not work well even in this extremely simple example. Or (and that is the second question) is there something wrong in how I applied them above?
| The place where you called zero_grad is wrong. During each epoch, the gradient is added to the previous one and backpropagated. This makes the loss oscillate as it gets closer, but the accumulated gradient throws it off the solution again.
Code below will easily perform the task:
import torch
from torch.optim import Adam

X = torch.randn(1000, 1, 1)
EPOCHS = 20000  # upper bound; the loop breaks early once the loss is small enough
net = SimpleNet()  # SimpleNet as defined in the question
optimizer = Adam(params=net.parameters())
for epoch in range(EPOCHS):
    optimizer.zero_grad()  # zero the gradient buffers
    loss = torch.mean((net.forward(X) - X) ** 2)
    if loss < 1e-8:
        print(epoch, loss)
        break
    loss.backward()
    optimizer.step()
1) Is it possible to achieve an arbitrary small error (loss) purely by
means of some Pytorch optimizer?
Yeah, the precision above is reached in around ~1500 epochs; you can go lower, down to the machine precision (float in this case).
2) I am wondering how the optimizers can work well in complex
examples, when they does not work well even in this extremely simple
example.
Currently, we don't have anything better (at least anything widespread) for network optimization than first-order methods. Those are used because it's much faster to calculate gradients than the Hessians required by higher-order methods. Moreover, complex, non-convex functions may have a lot of minima which kinda fulfill the task we threw at them; there is no need for a global minimum per se (although one may be reached under some conditions, see this paper).
| https://stackoverflow.com/questions/61801066/ |
Deep Learning NLP: "Efficient" BERT-like Implementations? | I work in a legacy corporate setting where I only have 16 core 64GB VM to work with on an NLP project. I have a multi-label NLP text classification problem where I would really like to utilize a deep representation learning model like BERT, RoBERTa, ALBERT, etc.
I have approximately 200,000 documents that need to be labeled, and I have an annotated set of about 2,000 to use as ground truth for training/testing/fine-tuning. I also have a much larger volume of domain-related documents to use for pre-training. I will most likely need to do the pre-training from scratch, since this is in a clinical domain. I am also open to pre-trained models if they have a chance of working with just fine-tuning, like the ones from Hugging Face, etc.
What models and their implementations that are PyTorch or Keras compatible would folks suggest as a starting point? Or is this a computational non-starter with my existing compute resources?
| If you want to use your current setup, it will have no problem running a transformer model. You can reduce memory use by reducing the batch size, but at the cost of slower runs.
Alternatively, test your algorithm on Google Colab, which is free. Then open a GCP account; Google will provide $300 of free credits. Use this to create a GPU cloud instance and then run your algorithm there.
You probably want to use ALBERT or DistilBERT from HuggingFace Transformers. Both are compute- and memory-optimized, and HuggingFace has lots of excellent examples.
As a rule of thumb, you want to avoid language-model pre-training from scratch. If possible, fine-tune the language model, or better yet skip it and go straight to training the classifier. Also, HuggingFace and others host specialized pretrained models for clinical and scientific text, such as BioBERT, ClinicalBERT, and SciBERT.
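For illustration, a minimal fine-tuning sketch with HuggingFace Transformers (the checkpoint name, number of labels, and example text are placeholders, not part of the original answer; recent versions of the library expose the logits on the output object):
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # placeholder; swap in a clinical/biomedical checkpoint if one fits your domain
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=8)  # num_labels is a placeholder

batch = tokenizer(["example clinical note ..."], padding=True, truncation=True, max_length=512, return_tensors="pt")
logits = model(**batch).logits  # for multi-label targets, train these logits with BCEWithLogitsLoss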
| https://stackoverflow.com/questions/61806293/ |
Having trouble with input dimensions for Pytorch LSTM with torchtext | Problem
I'm trying to build a text classifier network using LSTM. The error I'm getting is:
RuntimeError: Expected hidden[0] size (4, 600, 256), got (4, 64, 256)
Details
The data is json and looks like this:
{"cat": "music", "desc": "I'm in love with the song's intro!", "sent": "h"}
I'm using torchtext to load the data.
from torchtext import data
from torchtext import datasets
TEXT = data.Field(fix_length = 600)
LABEL = data.Field(fix_length = 10)
BATCH_SIZE = 64
fields = {
'cat': ('c', LABEL),
'desc': ('d', TEXT),
'sent': ('s', LABEL),
}
My LSTM looks like this
EMBEDDING_DIM = 64
HIDDEN_DIM = 256
N_LAYERS = 4
MyLSTM(
(embedding): Embedding(11967, 64)
(lstm): LSTM(64, 256, num_layers=4, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.3, inplace=False)
(fc): Linear(in_features=256, out_features=8, bias=True)
(sig): Sigmoid()
)
I end up with the following dimensions for the inputs and labels
batch = list(train_iterator)[0]
inputs, labels = batch
print(inputs.shape) # torch.Size([600, 64])
print(labels.shape) # torch.Size([100, 2, 64])
And my initialized hidden tensor looks like:
hidden # [torch.Size([4, 64, 256]), torch.Size([4, 64, 256])]
Question
I'm trying to understand what the dimensions at each step should be.
Should the hidden dimension be initialized to (4, 600, 256) or (4, 64, 256)?
| The documentation of nn.LSTM - Inputs explains what the dimensions are:
h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. If the LSTM is bidirectional, num_directions should be 2, else it should be 1.
Therefore, your hidden state should have size (4, 64, 256), so you did that correctly. On the other hand, you are not providing the correct size for the input.
input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
While it says that the size of the input needs to be (seq_len, batch, input_size), you've set batch_first=True in your LSTM, which swaps batch and seq_len. Therefore your input should have size (batch_size, seq_len, input_size), but that is not the case as your input has seq_len first (600) and batch second (64), which is the default in torchtext because that's the more common representation, which also matches the default behaviour of LSTM.
You need to set batch_first=False in your LSTM.
Alternatively, if you prefer having batch as the first dimension in general, torchtext.data.Field also has a batch_first option.
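A small sketch of the two consistent configurations (it assumes the imports and constants from the question; names are illustrative):
# Option 1: keep torchtext's default seq_len-first batches and tell the LSTM so
lstm = nn.LSTM(EMBEDDING_DIM, HIDDEN_DIM, num_layers=N_LAYERS, batch_first=False, dropout=0.5)

# Option 2: make batch the first dimension everywhere instead
TEXT = data.Field(fix_length=600, batch_first=True)
lstm = nn.LSTM(EMBEDDING_DIM, HIDDEN_DIM, num_layers=N_LAYERS, batch_first=True, dropout=0.5)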
| https://stackoverflow.com/questions/61808266/ |
I am having a hard time understanding BCEWithLogitsLoss | Hello everyone. Recently I have been working on an assignment where I have to predict the mask of an image using a foreground+background image and a background image. I used BCEWithLogitsLoss because I changed my target values to a combination of 1s and 0s, where 1 is foreground and 0 is background. My results are pretty good, but I am still not convinced about why I used BCEWithLogitsLoss. It is a reconstruction problem where I am using a probability distribution (BCEWithLogitsLoss). So can anyone please help me understand why I can use BCEWithLogitsLoss in this example? Thank you.
| The task you're describing is semantic segmentation, where the model predicts a mask for the image.
In the mask, for every input pixel, there are 2 possible categories. Either 0 (black) for background, and 1 (white) for foreground.
So, technically, it's a classification problem with 2 classes.
We commonly use binary cross-entropy for binary classification problems (why), hence BCE is a good fit here. Finally, the WithLogits part means the sigmoid activation is applied internally, so your model won't need a separate activation layer (details).
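A minimal runnable sketch of how that is typically wired up (the shapes and random tensors below are illustrative stand-ins for the model output and ground-truth mask):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(4, 1, 64, 64, requires_grad=True)   # raw, un-activated model outputs
target = torch.randint(0, 2, (4, 1, 64, 64)).float()     # 0/1 ground-truth mask of the same shape
loss = criterion(logits, target)
loss.backward()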
| https://stackoverflow.com/questions/61809209/ |
Split a multiple dimensional pytorch tensor into "n" smaller tensors | Let's say I have a 5D tensor which has this shape for example : (1, 3, 10, 40, 1). I want to split it into smaller equal tensors (if possible) according to a certain dimension with a step equal to 1 while preserving the other dimensions.
Let's say for example I want to split it according to the fourth dimension (=40) where each tensor will have a size equal to 10. So the first tensor_1 will have values from 0->9, tensor_2 will have values from 1->10 and so on.
The 31 tensors will have these shapes:
Shape of tensor_1 : (1, 3, 10, 10, 1)
Shape of tensor_2 : (1, 3, 10, 10, 1)
Shape of tensor_3 : (1, 3, 10, 10, 1)
...
Shape of tensor_31 : (1, 3, 10, 10, 1)
Here's what I have tried :
a = torch.randn(1, 3, 10, 40, 1)
chunk_dim = 10
a_split = torch.chunk(a, chunk_dim, dim=3)
This gives me non-overlapping chunks of size 4 instead. How can I edit this so I'll have the 31 overlapping tensors with a step = 1 like I explained?
| This creates overlapping tensors, which is what I wanted:
Tensor.unfold(dimension, size, step)
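For the tensor in the question, this looks like the following (unfold is a method on the tensor, and the windows it returns are overlapping views):
import torch

a = torch.randn(1, 3, 10, 40, 1)
windows = a.unfold(3, 10, 1)                 # dim=3, size=10, step=1 -> shape (1, 3, 10, 31, 1, 10)
windows = windows.permute(3, 0, 1, 2, 5, 4)  # move the 31 windows to the front -> (31, 1, 3, 10, 10, 1)
print(windows[0].shape)                      # torch.Size([1, 3, 10, 10, 1])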
| https://stackoverflow.com/questions/61811257/ |
Train-Valid-Test split for custom dataset using PyTorch and TorchVision | I have some image data for a binary classification task and the images are organised into 2 folders as data/model_data/class-A and data/model_data/class-B.
There are a total of N images. I want to have a 70/20/10 split for train/val/test.
I am using PyTorch and Torchvision for the task. Here is the code I have so far.
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils, datasets, models
data_transform = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
model_dataset = datasets.ImageFolder(root, transform=data_transform)
train_count = int(0.7 * total_count)
valid_count = int(0.2 * total_count)
test_count = total_count - train_count - valid_count
train_dataset, valid_dataset, test_dataset = torch.utils.data.random_split(model_dataset, (train_count, valid_count, test_count))
train_dataset_loader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKER)
valid_dataset_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKER)
test_dataset_loader = torch.utils.data.DataLoader(test_dataset , batch_size=BATCH_SIZE, shuffle=False,num_workers=NUM_WORKER)
dataloaders = {'train': train_dataset_loader, 'val': valid_dataset_loader, 'test': test_dataset_loader}
I feel that this isn't the correct way to be doing this because of 2 reasons.
I am applying the same transform to all the splits. (This is not what I want to do, obviously! The solution for this is most probably the answer here.)
Usually people first separate the original data into test/train and then they
separate train into train/val, whereas I am directly separating the
original data into train/val/test. (Is this correct?)
So, my question is, is what I am doing correct? (Probably not)
And if it is not correct, how do I go about writing the data loaders to achieve the required splits, so that I can apply separate transforms to each of train/test/val?
|
Usually people first separate the original data into test/train and
then they separate train into train/val, whereas I am directly
separating the original data into train/val/test. (Is this correct?)
Yes, it's fully correct, readable and totally fine all in all
I am applying the same transform to all the splits. (This is not what
I want to do, obviously! The solution for this is most probably the
answer here.)
Yes, that answer is a possibility, but it's pointlessly verbose tbh. You can use the third-party tool torchdata, simply installable with:
pip install torchdata
Documentation can be found here (also disclaimer: I'm the author).
It allows you to map your transformations to any torch.utils.data.Dataset easily (in this case to train). Your code would look like this (only two lines have to change, check the comments; I also reformatted your code so it is easier to follow):
import torch
import torchvision
import torchdata as td
data_transform = torchvision.transforms.Compose(
[
torchvision.transforms.RandomResizedCrop(224),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
),
]
)
# Single change, makes an instance of torchdata.Dataset
# Works just like PyTorch's torch.utils.data.Dataset, but has
# additional capabilities like .map, cache etc., see project's description
model_dataset = td.datasets.WrapDataset(torchvision.datasets.ImageFolder(root))
# Also you shouldn't use transforms here but below
train_count = int(0.7 * total_count)
valid_count = int(0.2 * total_count)
test_count = total_count - train_count - valid_count
train_dataset, valid_dataset, test_dataset = torch.utils.data.random_split(
model_dataset, (train_count, valid_count, test_count)
)
# Apply transformations here only for train dataset
train_dataset = train_dataset.map(data_transform)
# Rest of the code goes the same
train_dataset_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKER
)
valid_dataset_loader = torch.utils.data.DataLoader(
valid_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKER
)
test_dataset_loader = torch.utils.data.DataLoader(
test_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=NUM_WORKER
)
dataloaders = {
"train": train_dataset_loader,
"val": valid_dataset_loader,
"test": test_dataset_loader,
}
And yeah, I agree that specifying transform before splitting isn't too clear and IMO this is way more readable.
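If you would rather stay with plain PyTorch instead of a third-party package, the same effect can be had with a small wrapper dataset applied only to the split you want — a rough sketch (name and structure are illustrative, not from the original answer):
class TransformedSubset(torch.utils.data.Dataset):
    """Applies a transform on top of an already-split subset."""
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, index):
        image, label = self.subset[index]
        if self.transform is not None:
            image = self.transform(image)
        return image, label

    def __len__(self):
        return len(self.subset)

train_dataset = TransformedSubset(train_dataset, transform=data_transform)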
| https://stackoverflow.com/questions/61811946/ |
Is it required to clear GPU tensors in PyTorch? | I am new to PyTorch, and I am exploring the functionality of .to() method. As per the documentation for the CUDA tensors, I see that it is possible to transfer the tensors between the CPU and GPU memory.
# let us run this cell only if CUDA is available
if torch.cuda.is_available():
# creates a LongTensor and transfers it to GPU as torch.cuda.LongTensor
a = torch.full((10,), 3, device=torch.device("cuda"))
# transfers it to CPU, back to being a torch.LongTensor
b = a.to(torch.device("cpu"))
In this context, I would like to know if it is always necessary to transfer the tensors back from the GPU to the CPU, perhaps to free the GPU memory? Doesn't the runtime automatically clear the GPU memory?
Apart from its use of transferring data between CPU and GPU, I would like to know the recommended usage for .to() method (from the memory perspective). Thanks in advance.
|
In this context, I would like to know if it always necessary to
transfer back the tensors from the GPU to CPU, perhaps to free the GPU
memory?
No, it's not always necessary. Memory should be freed when there are no more references to GPU tensor. Tensor should be cleared automatically in this case:
def foo():
    my_tensor = torch.tensor([1.2]).cuda()
    return "whatever"
smth = foo()
but it won't in this case:
def bar():
    return torch.tensor([1.2]).cuda()
tensor = bar()
In the second case (tensor being passed around, possibly accumulated or added to list), you should cast it to CPU in order not to waste GPU memory.
Apart from its use of transferring data between CPU and GPU, I would
like to know the recommended usage for .to() method (from the memory
perspective)
Not sure what you mean here. What you should be after is the fewest .to() calls, as they require copying the array (O(n) complexity), but they shouldn't be too costly anyway (in comparison to pushing data through a neural network, for example), and it's probably not worth being too hardcore about this micro-optimization.
Usually data loading is done on CPU (transformations, augmentations) and each batch copied to GPU (possibly with pinned memory) just before it is passed to neural network.
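A typical version of that pattern (dataset and model are placeholders here):
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4, pin_memory=True)
for batch, labels in loader:
    # non_blocking=True lets the host-to-GPU copy overlap with computation when memory is pinned
    batch = batch.to("cuda", non_blocking=True)
    labels = labels.to("cuda", non_blocking=True)
    output = model(batch)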
Also, as of the 1.5.0 release, PyTorch provides a memory_format argument in the .to method. This allows you to specify whether (N, C, H, W) (the PyTorch default) or channels-last (N, H, W, C) should be used for the tensor and model (convolutional models with torch.nn.Conv2d, to be precise). This could further speed up your models (for torchvision models a speedup of 16% was reported IIRC); see here for more info and usage.
| https://stackoverflow.com/questions/61813730/ |
How to divide the dataset when it is distributed | Now I want to divide a dataset into two parts: the train set and validation set. I know that on a single GPU I can do this using a sampler:
indices = list(range(len(train_data)))
train_loader = torch.utils.data.DataLoader(
train_data, batch_size=args.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]),
pin_memory=True, num_workers=2)
But when I want to train it in a parallel way using torch.distributed, I have to use another sampler, namely, sampler = torch.utils.data.distributed.DistributedSampler(train_data)
So what should I do to use the two samplers, so that I can split the dataset and distribute it at the same time?
Thank you very much for any help!
| You can split torch.utils.data.Dataset before creating torch.utils.data.DataLoader.
Simply use torch.utils.data.random_split like this:
train, validation = torch.utils.data.random_split(
    dataset,
    (len(dataset) - val_length, val_length)
)
This would give you two separate datasets which could be used with dataloaders however you wish.
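The distributed sampler is then built on top of the training split, e.g. (a sketch that assumes the process group is already initialised and reuses args from your code):
train_sampler = torch.utils.data.distributed.DistributedSampler(train)
train_loader = torch.utils.data.DataLoader(
    train, batch_size=args.batch_size, sampler=train_sampler,
    pin_memory=True, num_workers=2)
validation_loader = torch.utils.data.DataLoader(validation, batch_size=args.batch_size)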
| https://stackoverflow.com/questions/61816158/ |
Passing weights to cross entropy loss | I am trying to assign different weights to different classes, so I have modified my loss criterion as such:
I had to convert the weight tensor to double torch.DoubleTensor(weight) since my model is already moved to double(). Am I doing it correctly ?
weights = [0.4,0.8,1.0]
class_weights = torch.DoubleTensor(weights).cuda()
criterion = nn.CrossEntropyLoss(weight=class_weights)
| According to the documentation, the weight parameter to CrossEntropyLoss should be:
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C.
I assume you have 3 classes (C=3). By the way, are you sure your model is moved to double()? Otherwise, you should prefer using FloatTensor. For example,
nn.CrossEntropyLoss(weight=torch.FloatTensor([0.4, 0.8, 1.0]))
Otherwise, your example looks ok!
| https://stackoverflow.com/questions/61821500/ |
Can you train a BERT model from scratch with task specific architecture? | BERT pre-training of the base-model is done by a language modeling approach, where we mask certain percent of tokens in a sentence, and we make the model learn those missing mask. Then, I think in order to do downstream tasks, we add a newly initialized layer and we fine-tune the model.
However, suppose we have a gigantic dataset for sentence classification. Theoretically, can we initialize the BERT base architecture from scratch, train both the additional downstream task-specific layer and the base model weights from scratch with this sentence classification dataset only, and still achieve a good result?
Thanks.
| BERT can be viewed as a language encoder, which is trained on a humongous amount of data to learn the language well. As we know, the original BERT model was trained on the entire English Wikipedia and Book corpus, which sums to 3,300M words. BERT-base has 109M model parameters. So, if you think you have large enough data to train BERT, then the answer to your question is yes.
However, when you said "still achieve a good result", I assume you are comparing against the original BERT model. In that case, the answer lies in the size of the training data.
I am wondering why you would prefer to train BERT from scratch instead of fine-tuning it. Is it because you are afraid of the domain adaptation issue? If not, pre-trained BERT is perhaps a better starting point.
Please note, if you want to train BERT from scratch, you may consider a smaller architecture. You may find the following papers useful.
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
| https://stackoverflow.com/questions/61826824/ |
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 4, 160, 40] to have 3 channels, but got 4 channels instead | I am trying to use a pre-trained model. Here's where the problem occurs
Isn't the model supposed to take in a simple colored image? Why is it expecting a 4-dimensional input?
The eval() method is:
def eval(file):
    image = io.imread(file)
    plt.imshow(image)
    image = cv2.resize(image, (160,40)).transpose((2,1,0))
    output = model(torch.tensor(image[np.newaxis,...]).float())[0].squeeze().detach().numpy()
    return decode_prob(output)
And the output of eval('image.png') :
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-66-0b951c2596f8> in <module>()
----> 1 eval('/content/image.png')
5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
344 _pair(0), self.dilation, self.groups)
345 return F.conv2d(input, weight, self.bias, self.stride,
--> 346 self.padding, self.dilation, self.groups)
347
348 def forward(self, input):
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 4, 160, 40] to have 3 channels, but got 4 channels instead
| The image you are loading is a 4-channel image.
You can read it with cv2 and convert it to a 3-channel RGB image as follows:
def eval(file):
    image = cv2.imread(file, cv2.IMREAD_UNCHANGED)  # keep all 4 channels so the BGRA2RGB conversion below is valid
    image = cv2.cvtColor(image, cv2.COLOR_BGRA2RGB)
    plt.imshow(image)
    image = cv2.resize(image, (160,40)).transpose((2,1,0))
    output = model(torch.tensor(image[np.newaxis,...]).float())[0].squeeze().detach().numpy()
    return decode_prob(output)
| https://stackoverflow.com/questions/61827738/ |
Pytorch transformation on MNIST dataset | I currently have a project with Weak Supervision where I need to put a "masking" in front of a dataset. My issue right now is that I don't exactly know how to do it. Let me explain further with some code and images.
I am using the MNIST dataset that I have to edit in this way. As you can see a middle square is cut out. The code below is used to edit the MNIST using a for loop.
for i in range(int(image_size/2-5),int(image_size/2+3)):
    for j in range(int(image_size/2-5),int(image_size/2+3)):
        image[i][j] = 0
However, I am currently not sure how I should use this in a dataloader transform. The code for the dataloader and transform is shown here:
transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
train_dataset = torchvision.datasets.MNIST(
root="~/torch_datasets", train=True, transform=transform, download=True
)
test_dataset = torchvision.datasets.MNIST(
root="~/torch_datasets", train=False, transform=transform, download=True
)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=128, shuffle=True, num_workers=4, pin_memory=True
)
test_loader = torch.utils.data.DataLoader(
test_dataset, batch_size=32, shuffle=False, num_workers=4
)
def imshow(img):
    #img = img / 2 + 0.5 # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
dataiter = iter(train_loader)
images, labels = dataiter.next()
imshow(torchvision.utils.make_grid(images))
So is there a straightforward way to apply the transform to the full dataset in the torchvision.transforms.Compose?
| You can define any custom transformation and as a function and use torchvision.transforms.Lambda in the transformation pipeline.
def erase_middle(image: torch.Tensor) -> torch.Tensor:
    for i in range(int(image_size/2-5),int(image_size/2+3)):
        for j in range(int(image_size/2-5),int(image_size/2+3)):
            image[:, i, j] = 0
    return image
transform = torchvision.transforms.Compose(
[
# First transform it to a tensor
torchvision.transforms.ToTensor(),
# Then erase the middle
torchvision.transforms.Lambda(erase_middle),
]
)
erase_middle can be made more generic, such that it works for images with varying sizes and that aren't necessarily square.
def erase_middle(image: torch.Tensor) -> torch.Tensor:
    _, height, width = image.size()
    x_start = width // 2 - 5
    x_end = width // 2 + 3
    y_start = height // 2 - 5
    y_end = height // 2 + 3
    # Using slices achieves the same as the for loops
    image[:, y_start:y_end, x_start:x_end] = 0
    return image
| https://stackoverflow.com/questions/61830423/ |
Access lower dimensional encoded data of autoencoder | Here is an autoencoder I’m working on from tutorial:https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/
I’m just learning about autoencoders and I’ve modified the source encode a custom small dataset which consists of:
[0,1,0,1,0,1,0,1,0],[0,1,1,0,0,1,0,1,0],[0,1,1,0,0,1,0,1,0],[0,1,1,0,0,1,0,1,0]
It seems to work ok, but I’m unsure how to access the lower dimensional embedding values of dimension 2 (set by parameter out_features).
I've added a method to the Autoencoder class to return the embedding. Is this the recommended way of accessing the embeddings?
Code:
import torch
import torchvision
from torch import nn
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.utils import save_image
import warnings
import os
# import packages
import os
import torch
import torchvision
import torch.nn as nn
import torchvision.transforms as transforms
import torch.optim as optim
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torchvision import datasets
from torch.utils.data import DataLoader
from torchvision.utils import save_image
import numpy as np
# utility functions
def get_device():
    if torch.cuda.is_available():
        device = 'cuda:0'
    else:
        device = 'cpu'
    return device
device = get_device()
features = torch.tensor(np.array([ [0,1,0,1,0,1,0,1,0],[0,1,1,0,0,1,0,1,0],[0,1,1,0,0,1,0,1,0],[0,1,1,0,0,1,0,1,0] ])).float()
tic_tac_toe_data_loader = torch.utils.data.DataLoader(features, batch_size=1, shuffle=True)
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.fc1 = nn.Linear(in_features=9, out_features=2)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(in_features=2, out_features=9)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.fc1 = Encoder()
        self.fc2 = Decoder()

    def forward(self, x):
        return self.fc2(self.fc1(x))
net = Autoencoder()
net.to(device)
NUM_EPOCHS = 50
LEARNING_RATE = 1e-3
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
# image transformations
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
outputs = None
def train(net, trainloader, NUM_EPOCHS):
    train_loss = []
    for epoch in range(NUM_EPOCHS):
        running_loss = 0.0
        for data in trainloader:
            img = data
            img = img.to(device)
            img = img.view(img.size(0), -1)
            # print('img.shape' , img.shape)
            optimizer.zero_grad()
            outputs = net(img)
            loss = criterion(outputs, img)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        loss = running_loss / len(trainloader)
        train_loss.append(loss)
    return train_loss
# train the network
train_loss = train(net, tic_tac_toe_data_loader, NUM_EPOCHS)
I can access the lower dimensional embedding using
print(Encoder().forward( torch.tensor(np.array([0,1,0,1,0,1,0,1,0])).float()))
But is this using the trained weight values for the embedding ? If I call Encoder multiple times with same values:
print(Encoder().forward( torch.tensor(np.array([0,1,0,1,0,1,0,1,0])).float()))
print(Encoder().forward( torch.tensor(np.array([0,1,0,1,0,1,0,1,0])).float()))
different results are returned:
tensor([0.5083, 0.5020], grad_fn=<SigmoidBackward>)
tensor([0.4929, 0.6940], grad_fn=<SigmoidBackward>)
Why is this the case ? Is an extra training step being invoked as a result of calling Encoder ?
| By calling Encoder() you are basically creating a new instance of the encoder every time, and its weights are randomly initialized each time.
Generally, you make one instance of it and train it, save the weights, and infer on it.
Also, for PyTorch, you need not call .forward(), but call the instance directly. Forward is called by it implicitly, including other hook methods if any.
enc = Encoder()
input = torch.from_numpy(np.asarray([0, 1, 0, 1, 0, 1, 0, 1, 0])).float()
print(enc(input))
print(enc(input))
A training pass only happens when you pass the model instance to the train function; calling Encoder() merely creates a new object.
Since each object has its own weights, and the weights are initialized randomly (see Xavier and Kaiming initialization), you get different outputs. Even when you move to a single object, you still have to explicitly train it with the train function.
training and evaluating an stacked auto-encoder model in pytorch | I am trying to train a model in pytorch.
input: 686-array
first layer: 64-array
second layer: 2-array
output: prediction, either 1 or 0
this is what I have so far:
class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.encoder_softmax = nn.Sequential(
            nn.Linear(686, 256),
            nn.ReLU(True),
            nn.Linear(256, 2),
            nn.Softmax()
        )

    def forward(self, x):
        x = self.encoder_softmax(x)
        return x
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
net = net.to(device)
iterations = 10
learning_rate = 0.98
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
net.parameters(), lr=learning_rate, weight_decay=1e-5)
for epoch in range(iterations):
    loss = 0.0
    print("train_dl len: ", len(train_dl))
    # net.train()
    for i, data in enumerate(train_dl, 0):
        inputs, labels, vectorize = data
        labels = labels.long().to(device)
        inputs = inputs.float().to(device)

        optimizer.zero_grad()
        outputs = net(inputs)
        train_loss = criterion(outputs, labels)
        train_loss.backward()
        optimizer.step()

        loss += train_loss.item()
    loss = loss / len(train_dl)
but when I train the model, the loss is not going down. What am I doing wrong?
| You're using nn.CrossEntropyLoss as the loss function, which applies log-softmax, but you also apply softmax in the model:
self.encoder_softmax = nn.Sequential(
nn.Linear(686, 256),
nn.ReLU(True),
nn.Linear(256, 2),
nn.Softmax() # <- needs to be removed
)
The output of your model should be the raw logits, without the nn.Softmax.
You should also lower the learning rate, because a learning rate of 0.98 is very high, which makes the training much less stable, and you'll likely see the loss oscillate. A more appropriate learning rate would be on the order of 0.01 or 0.001.
| https://stackoverflow.com/questions/61837275/ |
How can I resize Olivetti Dataset images 64x64 to 32x32 ?? I am getting error | batch_boyut = 2
train_loader = torch.utils.data.DataLoader(
X_egitim, batch_size=batch_boyut)
val_loader = torch.utils.data.DataLoader(
X_val, batch_size=batch_boyut)
class CNNModule(nn.Module):
    def __init__(self):
        super(CNNModule, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 40)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
model = CNNModule()
print(model)
model.forward(X_egitim)
I am getting this error. How can I solve it? How can I resize the Olivetti dataset images from 64x64 to 32x32?
I think I should change x.view, but I don't know how.
x = x.view(-1, 16 * 5 * 5)
RuntimeError: shape '[-1, 400]' is invalid for input of size 865280
| I assume your input shape is 320 x 1 x 64 x 64.
I think you need to understand what is the output shape of convolution and max-pool operations. In your model, you have 2 CNN layers, followed by the max-pooling layer.
First CNN and max-pool layer:
x = self.pool(F.relu(self.conv1(x)))
Since conv1 has a 2d kernel of size 5 x 5, the output shape of conv1 will be (6 x 60 x 60). After passing through the max-pooling layer with kernel size 2 x 2, you will get 6 x 30 x 30 shaped tensor. You can easily compute the output shape of tensor as follows.
64 x 64 -> 60 x 60 -> 30 x 30
Where, 60 = (64 - cnn_kernel_size + 1) and, 30 = 60 // max_pool_kernel_size.
Similarly, after the second CNN and max-pool layer:
x = self.pool(F.relu(self.conv2(x)))
The shape of x will be: 320 x 16 x 13 x 13. Here, 13 = 26 // 2 and 26 = 30 - 5 + 1. Now, when you are trying to reshape x as x.view(-1, 16 * 5 * 5), it is throwing the error.
So, either you should revise the following layers which would take input of shape -1, 16 * 13 * 13, or you revise the CNN layers to finally get output tensor of shape -1, 16 * 5 * 5.
Note, if you have an input tensor of shape 32 x 32, you won't face the problem. Why?
Because: 32 -> (32 - 5 + 1) // 2 -> 14 -> (14 - 5 + 1) // 2 -> 5
So, if you want to convert size 64 x 64 to 32 x 32, you may consider performing downscaling using some operations, e.g., max-pooling 2d (with kernel_size 2 x 2), a linear transformation, etc.
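For the concrete question in the title — shrinking the 64x64 Olivetti faces to 32x32 before feeding the network — one option is a single interpolation over the whole tensor. This is a sketch that assumes X_egitim is a float tensor of shape N x 1 x 64 x 64 (if it is N x 64 x 64, unsqueeze a channel dimension first):
import torch.nn.functional as F

X_egitim_32 = F.interpolate(X_egitim, size=(32, 32), mode='bilinear', align_corners=False)
print(X_egitim_32.shape)  # torch.Size([N, 1, 32, 32])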
| https://stackoverflow.com/questions/61844103/ |
HDF5 dataloading very slow. Causes zero % volatility in GPUs | I am using a custom PyTorch dataclass to load instances from a H5 dataset I created. However, it appears to be incredibly slow when loading samples. I have followed several bits of advice on dealing with large HDF5 datasets, but I am wondering whether I am doing something that is obviously wrong. I am deploying my code on Linux if that makes a difference. I am running the code on 4 GPUs with nn.dataparallel in place for my model. As the dataloading is very slow, the GPU Volatility is at 0%. Here is my dataclass loader:
import h5py
from torch.utils import data
class Features_Dataset(data.Dataset):
    def __init__(self, archive, phase):
        self.archive = archive
        self.phase = phase

    def __getitem__(self, index):
        with h5py.File(self.archive, 'r', libver='latest', swmr=True) as archive:
            datum = archive[str(self.phase) + '_all_arrays'][index]
            label = archive[str(self.phase) + '_labels'][index]
            path = archive[str(self.phase) + '_img_paths'][index]
            return datum, label, path

    def __len__(self):
        with h5py.File(self.archive, 'r', libver='latest', swmr=True) as archive:
            datum = archive[str(self.phase) + '_all_arrays']
            return len(datum)

if __name__ == '__main__':
    train_dataset = Features_Dataset(archive="featuresdata/train.hdf5", phase='train')
    trainloader = data.DataLoader(train_dataset, num_workers=8, batch_size=128)
    print(len(trainloader))
    for i, (data, label, path) in enumerate(trainloader):
        print(path)
Am I missing something obvious? Is there a better way of loading instances rapidly?
EDIT:
Here is the updated dataclass; however, I now get a pickling error when trying to use multiprocessing.
import h5py
from torch.utils import data
import torch.multiprocessing as mp
mp.set_start_method('spawn')
class Features_Dataset(data.Dataset):
    def __init__(self, archive, phase):
        self.archive = h5py.File(archive, 'r')
        self.labels = self.archive[str(phase) + '_labels']
        self.data = self.archive[str(phase) + '_all_arrays']
        self.img_paths = self.archive[str(phase) + '_img_paths']

    def __getitem__(self, index):
        datum = self.data[index]
        label = self.labels[index]
        path = self.img_paths[index]
        return datum, label, path

    def __len__(self):
        return len(self.data)

    def close(self):
        self.archive.close()

if __name__ == '__main__':
    train_dataset = Features_Dataset(archive="featuresdata/train.hdf5", phase='train')
    trainloader = data.DataLoader(train_dataset, num_workers=2, batch_size=4)
    print(len(trainloader))
    for i, (data, label, path) in enumerate(trainloader):
        print(path)
| Can you not open the file once in __init__ and store the file handle? Currently, whenever __getitem__ or __len__ is called, you open the file again on each call.
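One common workaround for the pickling error with num_workers > 0 is to keep only the path in __init__ and open the HDF5 file lazily, on the first __getitem__ call inside each worker process — a rough sketch, not from the original answer:
import h5py
from torch.utils import data

class Features_Dataset(data.Dataset):
    def __init__(self, archive, phase):
        self.archive_path = archive
        self.phase = phase
        self.archive = None  # opened lazily, once per worker process

    def _ensure_open(self):
        if self.archive is None:
            self.archive = h5py.File(self.archive_path, 'r', libver='latest', swmr=True)
            self.data = self.archive[self.phase + '_all_arrays']
            self.labels = self.archive[self.phase + '_labels']
            self.img_paths = self.archive[self.phase + '_img_paths']

    def __getitem__(self, index):
        self._ensure_open()
        return self.data[index], self.labels[index], self.img_paths[index]

    def __len__(self):
        with h5py.File(self.archive_path, 'r', libver='latest', swmr=True) as f:
            return len(f[self.phase + '_all_arrays'])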
| https://stackoverflow.com/questions/61845837/ |
How to decide which mode to use for 'kaiming_normal' initialization | I have read several codes that do layer initialization using nn.init.kaiming_normal_() of PyTorch. Some codes use the fan in mode which is the default. Of the many examples, one can be found here and shown below.
init.kaiming_normal(m.weight.data, a=0, mode='fan_in')
However, sometimes I see people using the fan out mode as seen here and shown below.
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
Can someone give me some guidelines or tips to help me decide which mode to select? Further I am working on image super resolutions and denoising tasks using PyTorch and which mode will be more beneficial.
| According to documentation:
Choosing 'fan_in' preserves the magnitude of the variance of the
weights in the forward pass. Choosing 'fan_out' preserves the
magnitudes in the backwards pass.
and according to Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015):
We note that it is sufficient to use either Eqn.(14) or
Eqn.(10)
where Eqn.(10) and Eqn.(14) correspond to fan_in and fan_out respectively. Furthermore:
This means that if the initialization properly scales the backward
signal, then this is also the case for the forward signal; and vice
versa. For all models in this paper, both forms can make them converge
so all in all it doesn't matter much but it's more about what you are after. I assume that if you suspect your backward pass might be more "chaotic" (greater variance) it is worth changing the mode to fan_out. This might happen when the loss oscillates a lot (e.g. very easy examples followed by very hard ones).
Correct choice of nonlinearity is more important, where nonlinearity is the activation you are using after the layer you are currently initializing. The current defaults set it to leaky_relu with a=0, which is effectively the same as relu. If you are using leaky_relu, you should change a to its slope.
| https://stackoverflow.com/questions/61848635/ |
Could not find a version that satisfies the requirement torch~=1.4.0 (from syft) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) | I get the above error while installing syft package through Anaconda command. I followed the below link exactly as is,
https://medium.com/secure-and-private-ai-writing-challenge/installing-pysyft-package-ffa1ff0ad83c
Following commands were used:
conda create -n pysyft python=3
conda activate pysyft
pip install syft
Some of the links suggested updating the PyTorch version. I already have version 1.5.0+cpu and updating didn't help. I also tried using pip install syft without creating a conda environment, but that didn't solve the problem either.
I also went through the StackOverflow link below, which explains a similar error, but it didn't help either:
Issues installing PyTorch 1.4 - "No matching distribution found for torch===1.4.0"
Any advice? Thanks in advance
| Not all PyTorch versions are available on Python's package registry PyPI. For instance, the CPU only version or any Windows version is only available on PyTorch's custom registry. Selecting one of these versions on PyTorch - Get Started Locally will give you an installation command including the custom registry. Installing PySyft also installs PyTorch and the specific version you're getting, requires adding the custom registry:
pip install syft -f https://download.pytorch.org/whl/torch_stable.html
You might need to add --user if you don't have write access to the system wide package location.
| https://stackoverflow.com/questions/61850455/ |
torch.optim returns "ValueError: can't optimize a non-leaf Tensor" for multidimensional tensor | I am trying to optimize the translations of the vertices of a scene with torch.optim.adam. It is a code piece from the redner tutorial series, which works fine with the initial setting. It tries to optimize a scene with shifting all the vertices by the same value called translation. Here is the original code:
vertices = []
for obj in base:
    vertices.append(obj.vertices.clone())

def model(translation):
    for obj, v in zip(base, vertices):
        obj.vertices = v + translation
    # Assemble the 3D scene.
    scene = pyredner.Scene(camera = camera, objects = objects)
    # Render the scene.
    img = pyredner.render_albedo(scene)
    return img
# Initial guess
# Set requires_grad=True since we want to optimize them later
translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)
init = model(translation)
# Visualize the initial guess
t_optimizer = torch.optim.Adam([translation], lr=0.5)
I tried to modify the code such that it calculates an individual translation for each of the vertices. For this I applied the following modifications to the code above, that makes the shape of the translation from torch.Size([3]) to torch.Size([43380, 3]):
# translation = torch.tensor([10.0, -10.0, 10.0], device = pyredner.get_device(), requires_grad=True)
translation = base[0].vertices.clone().detach().requires_grad_(True)
translation[:] = 10.0
This introduces the ValueError: can't optimize a non-leaf Tensor. Could you please help me work around the problem.
PS: I am sorry for the long text, I am very new to this subject, and I wanted to state the problem as comprehensive as possible.
| Only leaf tensors can be optimised. A leaf tensor is a tensor that was created at the beginning of a graph, i.e. there is no operation tracked in the graph to produce it. In other words, when you apply any operation to a tensor with requires_grad=True it keeps track of these operations to do the back propagation later. You cannot give one of these intermediate results to the optimiser.
An example shows that more clearly:
weight = torch.randn((2, 2), requires_grad=True)
# => tensor([[ 1.5559, 0.4560],
# [-1.4852, -0.8837]], requires_grad=True)
weight.is_leaf # => True
result = weight * 2
# => tensor([[ 3.1118, 0.9121],
# [-2.9705, -1.7675]], grad_fn=<MulBackward0>)
# grad_fn defines how to do the back propagation (kept track of the multiplication)
result.is_leaf # => False
The result in this example, cannot be optimised, since it's not a leaf tensor. Similarly, in your case translation is not a leaf tensor because of the operation you perform after it was created:
translation[:] = 10.0
translation.is_leaf # => False
This has grad_fn=<CopySlices> therefore it's not a leaf and you cannot pass it to the optimiser. To avoid that, you would have to create a new tensor from it that is detached from the graph.
# Not setting requires_grad, so that the next operation is not tracked
translation = base[0].vertices.clone().detach()
translation[:] = 10.0
# Now setting requires_grad so it is tracked in the graph and can be optimised
translation = translation.requires_grad_(True)
What you're really doing here is creating a new tensor filled with the value 10.0, with the same size as the vertices tensor. This can be achieved much more easily with torch.full_like:
translation = torch.full_like(base[0].vertices, 10.0, requires_grad=True)
| https://stackoverflow.com/questions/61851506/ |
Why must use DataParallel when testing? | Train on the GPU, num_gpus is set to 1:
device_ids = list(range(num_gpus))
model = NestedUNet(opt.num_channel, 2).to(device)
model = nn.DataParallel(model, device_ids=device_ids)
Test on the CPU:
model = NestedUNet_Purn2(opt.num_channel, 2).to(dev)
device_ids = list(range(num_gpus))
model = torch.nn.DataParallel(model, device_ids=device_ids)
model_old = torch.load(path, map_location=dev)
pretrained_dict = model_old.state_dict()
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
This will get the correct result, but when I delete:
device_ids = list(range(num_gpus))
model = torch.nn.DataParallel(model, device_ids=device_ids)
the result is wrong.
| nn.DataParallel wraps the model, where the actual model is assigned to the module attribute. That also means that the keys in the state dict have a module. prefix.
Let's look at a very simplified version with just one convolution to see the difference:
class NestedUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
model = NestedUNet()
model.state_dict().keys() # => odict_keys(['conv1.weight', 'conv1.bias'])
# Wrap the model in DataParallel
model_dp = nn.DataParallel(model, device_ids=range(num_gpus))
model_dp.state_dict().keys() # => odict_keys(['module.conv1.weight', 'module.conv1.bias'])
The state dict you saved with nn.DataParallel does not line up with the regular model's state. You are merging the current state dict with the loaded state dict, which means the loaded state is ignored, because the model does not have any attributes matching those keys, and instead you are left with the randomly initialised model.
To avoid making that mistake, you shouldn't merge the state dicts, but rather directly apply it to the model, in which case there will be an error if the keys don't match.
RuntimeError: Error(s) in loading state_dict for NestedUNet:
Missing key(s) in state_dict: "conv1.weight", "conv1.bias".
Unexpected key(s) in state_dict: "module.conv1.weight", "module.conv1.bias".
To make the state dict that you have saved compatible, you can strip off the module. prefix:
pretrained_dict = {key.replace("module.", ""): value for key, value in pretrained_dict.items()}
model.load_state_dict(pretrained_dict)
You can also avoid this issue in the future by unwrapping the model from nn.DataParallel before saving its state, i.e. saving model.module.state_dict(). So you can always load the model first with its state and then later decide to put it into nn.DataParallel if you wanted to use multiple GPUs.
| https://stackoverflow.com/questions/61854165/ |
Convert from .npy to MLMultiArray for CoreML prediction in swift | I have exported a PyTorch model to CoreML and want to do inference in swift. I have my input data stored on disk as a 2D float32 numpy ndarray .npy and need load into a MLMultiArray in swift. Is there a convenient way to do this?
| Instead of saving as .npy (which is pickled), save the raw data from NumPy:
array.astype(np.float32).tofile(filename)
Now you can simply load this into a Data object in Swift and copy that into the MLMultiArray.
| https://stackoverflow.com/questions/61854333/ |
NameError: name 'transforms' is not defined, while using google colab | train = datasets.MNIST("", train=True, download=True,transform = transforms.Compose([transforms.ToTensor()]))
test = datasets.MNIST("", train=False, download=True,transform = transforms.Compose([transforms.ToTensor()]))
After executing this on colab notebook, I am getting this error:
Traceback (most recent call last)
<ipython-input-6-b81aa6cf1cbe> in <module>()
----> 1 train = datasets.MNIST("", train=True, download=True,transform = transforms.Compose([transforms.ToTensor()]))
2 test = datasets.MNIST("", train=False, download=True,transform = transforms.Compose([transforms.ToTensor()]))
NameError: name 'transforms' is not defined
| I'm guessing from the context that you're using Pytorch, in which case you need to make sure you have:
from torchvision import transforms
In your imports. By the looks of things you have already imported datasets from the same library.
| https://stackoverflow.com/questions/61856895/ |
PyTorch: one dimension in input data | I don't understand why this code works:
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
print(model)
# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
The size of an image is (images.shape[0], 1, 784), but our network has input_size = 784. How does the network handle the extra dimension in an input image? I tried to change images.resize_(images.shape[0], 1, 784) to images = images.view(images.shape[0], -1), but I got an error:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
For reference, data loader is created the next way:
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
| PyTorch networks take inputs as [batch_size, input_dimensions].
In your case, images[0,:] has shape [1, 784], where "1" acts as the batch size, and that is why your code works. With images = images.view(images.shape[0], -1), indexing images[0,:] instead yields a single image of shape [784] with no batch dimension, and nn.Softmax(dim=1) then raises the IndexError because there is no dimension 1 to operate on; keeping an explicit batch dimension (e.g. images[0:1, :]) avoids that.
| https://stackoverflow.com/questions/61857018/ |
pytorch object too deep for array when saving image | I am trying to run the code from the following github rep:
https://github.com/iamkrut/image_inpainting_resnet_unet
I havent changed anything in the code and it is causing a ValueError, that the object is too deep, when the code tries to save the image. The error seems to come from these two lines.
images = img_tensor.cpu().detach().permute(0,2,3,1)
plt.imsave(join(data_dir, 'samples', image), images[index,:,:,:3])
Here is the error statement
File "train.py", line 205, in <module>
data_dir=args.data_dir)
File "train.py", line 94, in train_net
plt.imsave(join(data_dir, 'samples', image), images[index,:,:,:]);
File "C:\ProgramData\Anaconda3\envs\torch2\lib\site-packages\matplotlib\pyplot.py", line 2140, in imsave
return matplotlib.image.imsave(fname, arr, **kwargs)
File "C:\ProgramData\Anaconda3\envs\torch2\lib\site-packages\matplotlib\image.py", line 1498, in imsave
_png.write_png(rgba, fname, dpi=dpi)
ValueError: object too deep for desired array
Anyone know what could be causing this or how to fix it?
Thank you
| The matplotlib package does not understand the PyTorch tensor datatype; you should convert the tensor to a NumPy array and then use the matplotlib functions.
a = torch.rand(10, 3, 20, 20)
plt.imsave("test.jpg", a.cpu().detach().permute(0, 2, 3, 1)[0, ...]) # Error
plt.imsave("test.jpg", a.cpu().detach().permute(0, 2, 3, 1).numpy()[0, ...])
| https://stackoverflow.com/questions/61857079/ |
RuntimeError: shape '[1, 1024]' is invalid for input of size 50176 | I am trying to use Alexnet over the CIFAR-10 dataset. I have resized my image to 224x224 which I'm guessing is the issue. Nevertheless, I get the following error:
<ipython-input-11-34884668038d> in forward(self, x)
37 def forward(self, x):
38 x = self.features(x)
---> 39 x = x.view(x.size(0), 256 * 2 * 2)
40 x = self.classifier(x)
41 return x
RuntimeError: shape '[1, 1024]' is invalid for input of size 50176
My Alexnet model code is as follows:
NUM_CLASSES = 10
class AlexNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(64, 192, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 2 * 2, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 2 * 2)
        x = self.classifier(x)
        return x
Any help would be appreciated :)
| CIFAR nets expect input data much smaller than 224x224, usually 32x32. With a 224x224 input, self.features produces a 256 x 14 x 14 feature map (256 * 14 * 14 = 50176, exactly the size in the error message), while your classifier expects 256 * 2 * 2 = 1024. Either keep the CIFAR-10 images at 32x32, which yields the expected 256 x 2 x 2 feature map, or change the view call and the first nn.Linear layer to use 256 * 14 * 14.
| https://stackoverflow.com/questions/61857393/ |
How to manage the hidden state dims when using pad_sequence? | Using a PyTorch LSTM architecture, I am trying to build a text generation model. For every batch, I'm using pad_sequence to have minimal padding for every sequence, therefore I have a variable-dims batch (batch_size * seq_len). I'm also applying pack_padded_sequence to give only the non-zero (non-padding) tokens to the LSTM. But the variable-dims batch throws an error while being fed to the LSTM, as follows: Expected hidden[0] size (1, 8, 16), got (1, 16, 16). In this error, I have provided batch size 16 with 8 tokens for every sequence, but the hidden state is 16 * 16.
I have tried to create the hidden state in the forward function, but that did not work well. How can I create the hidden state such that it can accept a variable-dims batch and will not be lost for the whole epoch?
class RNNModule(nn.Module):
def __init__(self, n_vocab, seq_size, embedding_size, lstm_size):
super(RNNModule, self).__init__()
self.seq_size = seq_size
self.lstm_size = lstm_size
self.embedding, num_embeddings, embedding_dim = create_emb_layer(weight_matrix, False)
self.lstm = nn.LSTM(embedding_size,
lstm_size,
num_layers=flags.n_layers,
batch_first=True
)
self.dense = nn.Linear(lstm_size, n_vocab)
def forward(self, x,length,prev_state):
embed = self.embedding(x)
packed_input = db.pack_src(embed,length)
packed_output, state = self.lstm(packed_input,prev_state)
padded,_ = db.pad_pack(packed_output)
logits = self.dense(padded)
return logits, state
def zero_state(self, batch_size = flags.batch_size):
return (torch.zeros(flags.n_layers, batch_size, self.lstm_size),
torch.zeros(flags.n_layers, batch_size, self.lstm_size))
input: tensor([[ 19, 9, 4, 3, 68, 8, 6, 2],
[ 19, 9, 4, 3, 7, 8, 6, 2],
[ 3, 12, 17, 10, 6, 40, 2, 0],
[ 4, 3, 109, 7, 6, 2, 0, 0],
[ 188, 6, 7, 18, 3, 2, 0, 0],
[ 4, 3, 12, 6, 7, 2, 0, 0],
[ 6, 7, 3, 13, 2, 0, 0, 0],
[ 3, 28, 17, 69, 2, 0, 0, 0],
[ 6, 3, 12, 11, 2, 0, 0, 0],
[ 3, 13, 6, 7, 2, 0, 0, 0],
[ 3, 6, 7, 13, 2, 0, 0, 0],
[ 6, 3, 23, 7, 2, 0, 0, 0],
[ 3, 28, 10, 2, 0, 0, 0, 0],
[ 6, 3, 23, 2, 0, 0, 0, 0],
[ 3, 6, 37, 2, 0, 0, 0, 0],
[1218, 2, 0, 0, 0, 0, 0, 0]])
Zero tokens are padding.
Embedding size: 64
LSTM size: 16
batch size: 16
| The hidden state you create has the correct size, but your input does not match it. When you pack the input with nn.utils.rnn.pack_padded_sequence you left batch_first=False (the default), but your data has size [batch_size, seq_len, embedding_size] when you pass it to the packing, i.e. batch_size is the first dimension. For the LSTM itself you use batch_first=True, which is appropriate for your data.
You only need to pack it correctly by setting batch_first=True as well, to match the order of your data.
rnn_utils.pack_padded_sequence(embed,length,batch_first=True)
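Assuming db.pack_src and db.pad_pack just wrap the standard rnn utilities, the relevant part of your forward would look roughly like this (enforce_sorted=False is only needed if the sequences in the batch are not sorted by length):
import torch.nn.utils.rnn as rnn_utils
def forward(self, x, length, prev_state):
    embed = self.embedding(x)  # [batch_size, seq_len, embedding_size]
    packed = rnn_utils.pack_padded_sequence(embed, length, batch_first=True, enforce_sorted=False)
    packed_output, state = self.lstm(packed, prev_state)
    padded, _ = rnn_utils.pad_packed_sequence(packed_output, batch_first=True)
    logits = self.dense(padded)
    return logits, state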
| https://stackoverflow.com/questions/61858738/ |
Understanding the parameters of a simple neural network in Pytorch | I have defined a simple NN as follows in Pytorch:
my_model = nn.Sequential(
nn.Linear(1, 5),
nn.Tanh(),
nn.Linear(5, 1))
I then iterate through the parameters and check their sizes:
[parameter.shape for parameter in my_model.parameters()]
I get:
[torch.Size([5, 1]), torch.Size([5]), torch.Size([1, 5]), torch.Size([1])]
I'm confused as to why the last Size is 1. Shouldn't there be 5 bias values as well, going out from the hidden layer into the output layer?
| Um, so it looks like you are defining a two-layer network.
The input size is [1], the hidden layer size is [5], and the hidden layer is connected to the output layer, whose size is [1].
It should look like:
* <- input
/ / | \ \
* * * * * <- hidden layer
\ \ | / /
* <- output
So for the hidden layer there are 5 bias values, and for the output layer there is only one bias value. Sounds reasonable?
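To make this concrete: nn.Linear(in_features, out_features) stores a weight of shape [out_features, in_features] and a bias of shape [out_features], so:
import torch.nn as nn
fc1 = nn.Linear(1, 5)   # input -> hidden
fc2 = nn.Linear(5, 1)   # hidden -> output
print(fc1.weight.shape, fc1.bias.shape)  # torch.Size([5, 1]) torch.Size([5])
print(fc2.weight.shape, fc2.bias.shape)  # torch.Size([1, 5]) torch.Size([1])
The bias belongs to a layer's output units, so the output layer, having a single output unit, has exactly one bias value rather than five.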
| https://stackoverflow.com/questions/61860818/ |
regarding one code segment in computing log_sum_exp | In this tutorial on using Pytorch to implement BiLSTM-CRF, author implements the following function. In specific, I am not quite understand what does max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1]) try to do?, or which kind of math formula it corresponds to?
# Compute log sum exp in a numerically stable way for the forward algorithm
def log_sum_exp(vec):
max_score = vec[0, argmax(vec)]
max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
return max_score + \
torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
| Looking at the code, it seems like vec has a shape of (1, n).
Now we can follow the code line by line:
max_score = vec[0, argmax(vec)]
Indexing vec at row 0 with argmax(vec), i.e. vec[0, argmax(vec)], is just a fancy way of taking the maximal value of vec (vec has shape (1, n), so row 0 is the only row). So, max_score is (as the name suggests) the maximal value of vec.
max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1])
Next, we want to subtract max_score from each of the elements of vec.
To do so the code creates a vector of the same shape as vec with all elements equal to max_score.
First, max_score is reshaped to have two dimensions using the view command, then the reshaped 2D tensor is "stretched" to have length n using the expand command.
Finally, the log sum exp is computed robustly:
return max_score + \
torch.log(torch.sum(torch.exp(vec - max_score_broadcast)))
The validity of this computation follows from the identity log(sum_i exp(x_i)) = m + log(sum_i exp(x_i - m)), which holds for any constant m; here m is chosen as the maximum of vec.
The rationale behind it is that exp(x) can "explode" for x > 0, therefore, for numerical stability, it is best to subtract the maximal value before taking exp.
As a side note, I think a slightly more elegant way to do the same computation, taking advantage of broadcasting, would be
max_score, _ = vec.max(dim=1, keepdim=True) # take max along second dimension
lse = max_score + torch.log(torch.sum(torch.exp(vec - max_score), dim=1))
return lse
Also note that log sum exp is already implemented by pytorch: torch.logsumexp.
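For completeness, the built-in call would simply be:
lse = torch.logsumexp(vec, dim=1)  # same result, numerically stable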
| https://stackoverflow.com/questions/61862579/ |
Custom Dataset, Dataloader, Sampler, or something else? | I'm working on a project that requires training a PyTorch NN on a very large dataset of images. Some of these images are completely irrelevant to the problem, but these irrelevant images are not labelled as such. However, there are some metrics I can use to calculate if they are irrelevant (e.g. summing all the pixel values would give me a good sense of which are the relevant images and which are not). What I would ideally like to do is have a Dataloader that can take in a Dataset class, and create batches only with the relevant images. The Dataset class would just know the list of images and their labels, and the Dataloader would interpret whether or not the image it is making a batch with is relevant or not, and would then only make batches with relevant images.
To apply this to an example, lets say I have a dataset of black and white images. The white images are irrelevant, but they are not labelled as such. I want to be able to load batches from a file location, and have these batches only contain the black images. I could filter at some point by summing all the pixels and finding it equals to 0.
What I am wondering is if a custom Dataset, Dataloader, or Sampler would be able to solve this task for me? I already have written a custom Dataset that stores the directory of all the saved images, and a list of all the images in that directory, and can return an image with its label in the getitem function. Is there something more I should add there to filter out certain images? Or should that filter be applied in a custom Dataloader, or Sampler?
Thank you!
| I'm assuming that your image dataset belongs to two classes (0 or 1) but it's unlabeled. As @PranayModukuru mentioned, you can determine the similarity by using some measure (e.g. aggregating all the pixel intensity values of an image, as you mentioned) in the __getitem__ function of your custom Dataset class.
However, determining the similarity in the __getitem__ function while training your model will make the training process very slow. So I would recommend approximating the similarity before starting training (not in the __getitem__ function). Moreover, if your image dataset is comprised of complex images (not black and white images), it's better to use a pretrained deep learning model (e.g. a ResNet or an autoencoder) for dimensionality reduction, followed by a clustering approach (e.g. agglomerative clustering) to label your images.
In the second approach you only need to label your images once, and if you apply augmentation on the images while training you don't need to re-determine the similarity (label) in the __getitem__ function. On the other hand, in the first approach you need to determine the similarity (label) every time (after applying transformations on images) in the __getitem__ function, which is redundant, unnecessary and time consuming.
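If the pixel-sum check you mentioned is enough, a minimal sketch of the "filter once, up front" idea could look like this (all names here are placeholders, and the sum() == 0 test just follows your black/white example):
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class FilteredImageDataset(Dataset):
    def __init__(self, image_dir, labels, transform=None):
        self.image_dir = image_dir
        self.labels = labels                  # e.g. a dict mapping file name -> label
        self.transform = transform
        # Decide relevance once, at construction time, not in __getitem__
        self.files = [f for f in sorted(os.listdir(image_dir)) if self._is_relevant(f)]

    def _is_relevant(self, fname):
        img = np.array(Image.open(os.path.join(self.image_dir, fname)))
        return img.sum() == 0                 # keep the all-black ("relevant") images

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        fname = self.files[idx]
        img = Image.open(os.path.join(self.image_dir, fname))
        if self.transform is not None:
            img = self.transform(img)
        return img, self.labels[fname]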
Hope this will help.
| https://stackoverflow.com/questions/61863541/ |
Why my accuracy values don't change on my train model | When I start to train my model, the loss values decrease but the accuracy values never change. I don't know why?
# -*- coding: utf-8 -*-
#Libraries
import torch
import torch.nn.functional as F
from torch import autograd, nn
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
from torchvision import transforms, datasets
from torch.utils import data
"""
Olivetti face dataset
"""
from sklearn.datasets import fetch_olivetti_faces
# Olivetti dataset download
olivetti = fetch_olivetti_faces()
train = olivetti.images
label = olivetti.target
X = train
Y = label
print("Format for X:", X.shape)
print("Format for Y: ", Y.shape)
print("\nDownload Ok")
"""
Set for train
"""
train_rate = 0.8
X_train = np.zeros([int(train_rate * X.shape[0]),64,64], dtype=float)
Y_train = np.zeros([int(train_rate * X.shape[0])], dtype=int)
X_val = np.zeros([int((1-train_rate) * X.shape[0]+1),64,64], dtype=float)
Y_val = np.zeros([int((1-train_rate) * X.shape[0]+1)], dtype=int)
#Split data for train and validation
for i in range(X.shape[0]):
ie=0
iv=0
if (i%10)/10 >= train_rate:
X_train[ie] = X[i]
Y_train[ie] = Y[i]
ie += 1
else:
X_val[iv] = X[i]
Y_val[iv] = Y[i]
iv += 1
X_train = X_train.reshape(320,-1,64,64)
X_val = X_val.reshape(80,-1,64,64)
print(Y_train.shape)
X_train = torch.Tensor(X_train)
Y_train = torch.Tensor(Y_train)
X_val = torch.Tensor(X_val)
Y_val = torch.Tensor(Y_val)
batch_size = 20
train_loader = torch.utils.data.DataLoader(X_train,
batch_size=batch_size,
)
val_loader = torch.utils.data.DataLoader(X_val,
batch_size=batch_size,
)
class CNNModule(nn.Module):
def __init__(self):
super(CNNModule, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 13 * 13, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 40)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 13 * 13)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def make_train(model,dataset,n_iters,gpu):
# Organize data
X_train,Y_train,X_val,Y_val = dataset
kriter = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.01)
#Arrays to save loss and accuracy
tl=np.zeros(n_iters) #For train loss
ta=np.zeros(n_iters) #For train accuracy
vl=np.zeros(n_iters) #For validation loss
va=np.zeros(n_iters) #For validation accuracy
# Convert labels to long
Y_train = Y_train.long()
Y_val = Y_val.long()
# GPU control
if gpu:
X_train,Y_train = X_train.cuda(),Y_train.cuda()
X_val,Y_val = X_val.cuda(),Y_val.cuda()
model = model.cuda() # Parameters to GPU!
print("Using GPU")
else:
print("Using CPU")
# print(X_train.shape)
# print(Y_train.shape)
for i in range(n_iters):
# train forward
train_out = model.forward(X_train)
train_loss = kriter(train_out,Y_train)
# Backward and optimization
train_loss.backward()
optimizer.step()
optimizer.zero_grad()
# Compute train accuracy
train_predict = train_out.cpu().detach().argmax(dim=1)
train_accuracy = (train_predict.cpu().numpy()==Y_train.cpu().numpy()).mean()
# For validation
val_out = model.forward(X_val)
val_loss = kriter(val_out,Y_val)
# Compute validation accuracy
val_predict = val_out.cpu().detach().argmax(dim=1)
val_accuracy = (val_predict.cpu().numpy()==Y_val.cpu().numpy()).mean()
tl[i] = train_loss.cpu().detach().numpy()
ta[i] = train_accuracy
vl[i] = val_loss.cpu().detach().numpy()
va[i] = val_accuracy
# Show result each 5 loop
if i%5==0:
print("Loop --> ",i)
print("Train Loss :",train_loss.cpu().detach().numpy())
print("Train Accuracy :",train_accuracy)
print("Validation Loss :",val_loss.cpu().detach().numpy())
print("Validation Accuracy :",val_accuracy)
model = model.cpu()
#Print result
plt.subplot(2,2,1)
plt.plot(np.arange(n_iters), tl, 'r-')
plt.subplot(2,2,2)
plt.plot(np.arange(n_iters), ta, 'b--')
plt.subplot(2,2,3)
plt.plot(np.arange(n_iters), vl, 'r-')
plt.subplot(2,2,4)
plt.plot(np.arange(n_iters), va, 'b--')
dataset = X_train,Y_train,X_val,Y_val
gpu = True
gpu = gpu and torch.cuda.is_available()
model = CNNModule()
make_train(model,dataset,100,gpu)
OUTPUT:
Using CPU
Loop --> 0
Train Loss : 3.6302185
Train Accuracy : 0.0
Validation Loss : 3.6171098
Validation Accuracy : 0.0
Loop --> 5
Train Loss : 3.557933
Train Accuracy : 0.996875
Validation Loss : 3.545982
Validation Accuracy : 0.9875
.
.
.
Loop --> 95
Train Loss : 0.04211783
Train Accuracy : 0.996875
Validation Loss : 0.13397054
Validation Accuracy : 0.9875
| From your code,
train_accuracy = (train_predict.cpu().numpy()==Y_train.cpu().numpy()).mean()
you are taking the mean of the correct values, which is why you are getting the same answer in every loop. Instead, you should divide the total number of correct predictions by the total number of examples to find the accuracy.
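A minimal sketch of that suggestion, on the same tensors your training loop already has:
correct = (train_predict == Y_train.cpu()).sum().item()  # total number of correct predictions
train_accuracy = correct / Y_train.size(0)                # divided by the total number of examples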
| https://stackoverflow.com/questions/61867231/ |
Python Dataset Class + PyTorch Dataloader: Stuck at __getitem__, how to get Index, Label and so on during Testing? | I have a (maybe small) problem, but I have been stuck on it for quite a while now. I hope someone can help me with that. I am currently working on the KDDCup99 dataset, which I would like to train via deep learning (a CNN network).
I have a "Dataset" class which includes the Pandas DataFrame. Thus I split it up into a normal (train) and a validation dataset. So far, no problem.
I load it into a NumPy vector, convert it to a tensor and then pass it to the DataLoader.
The Dataset class has these two important methods for iterating through it:
def __len__(self):
return len(self.val_df)
def __getitem__(self, index):
img, target = self.val_df[index][:-1], self.val_df[index][-1]
return img, target, index
Outside the class there is the DataLoader call:
test_dataloader = DataLoader(datat.val_df, batch_size=10, shuffle=True)
In my Trainer Class i have a for loop which should iterate through the Dataloader:
with torch.no_grad():
for data in dataloader:
inputs, labels, idx = data
inputs = inputs.to(self.device)
But it won't. I can't access the labels, index and such.
My question is now: Why?
How can I access Labels, Index from the given Dataset via the Dataloader?
Thank you all for your help!
Much appreciate it.
| The first argument to DataLoader is the dataset from which you want to load the data, that's usually a Dataset, but it's not restricted to any instance of Dataset. As long as it defines the length (__len__) and can be indexed (__getitem__ allows that) it is acceptable.
You are passing datat.val_df to the DataLoader, which is presumably a NumPy array. A NumPy array has a length and can be indexed, so it can be used in the DataLoader. Since you pass that array directly, your dataset's __getitem__ is never called, but the array itself is indexed, so every item is just datat.val_df[index].
Instead of using the underlying data for the DataLoader, you have to use the dataset itself (datat):
test_dataloader = DataLoader(datat, batch_size=10, shuffle=True)
| https://stackoverflow.com/questions/61868754/ |
PyTorch: Row-wise Dot Product | Suppose I have two tensors:
a = torch.randn(10, 1000, 1, 4)
b = torch.randn(10, 1000, 6, 4)
Where the third index is the index of a vector.
I want to take the dot product between each vector in b with respect to the vector in a.
To illustrate, this is what I mean:
dots = torch.Tensor(10, 1000, 6, 1)
for i in range(10):
    for j in range(1000):
        for v in range(6):
            dots[i, j, v] = torch.dot(b[i, j, v], a[i, j, 0])
How would I achieve this using torch functions?
| a = torch.randn(10, 1000, 1, 4)
b = torch.randn(10, 1000, 6, 4)
c = torch.sum(a * b, dim=-1)
print(c.shape)
torch.Size([10, 1000, 6])
c = c.unsqueeze(-1)
print(c.shape)
torch.Size([10, 1000, 6, 1])
| https://stackoverflow.com/questions/61875963/ |
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 32, 32] instead | I am new to PyTorch and neural networks in general. I was trying to implement the resnet-50 model from torchvision on the CIFAR-10 dataset.
import torchvision
import torch
import torch.nn as nn
from torch import optim
import os
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
from collections import OrderedDict
import matplotlib.pyplot as plt
transformations=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
trainset=torchvision.datasets.CIFAR10(root='./CIFAR10',download=True,transform=transformations,train=True)
testset=torchvision.datasets.CIFAR10(root='./CIFAR10',download=True,transform=transformations,train=False)
trainloader=DataLoader(dataset=trainset,batch_size=4)
testloader=DataLoader(dataset=testset,batch_size=4)
inputs,labels=next(iter(trainset))
inputs.size()
resnet=torchvision.models.resnet50(pretrained=True)
if torch.cuda.is_available():
resnet=resnet.cuda()
inputs,labels=inputs.cuda(),torch.Tensor(labels).cuda()
outputs=resnet(inputs)
OUTPUT
--------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-904acb410fe4> in <module>()
6 inputs,labels=inputs.cuda(),torch.Tensor(labels).cuda()
7
----> 8 outputs=resnet(inputs)
5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
344 _pair(0), self.dilation, self.groups)
345 return F.conv2d(input, weight, self.bias, self.stride,
--> 346 self.padding, self.dilation, self.groups)
347
348 def forward(self, input):
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 32, 32] instead
Is there a problem with the dataset for some reason, and if not, how do I give a 4-dimensional input? Is the torchvision implementation of ResNet-50 not usable for CIFAR-10?
| Currently you are iterating over the dataset, which is why you are getting a (3-dimensional) single image. You actually need to iterate over the dataloader to get a 4-dimensional image batch. Therefore, you just need to change the following line:
inputs,labels=next(iter(trainset))
to
inputs,labels=next(iter(trainloader))
| https://stackoverflow.com/questions/61877098/ |
How to pass an intermediate layer of one model to another model for skip connection in PyTorch | I want to define an encoder decoder architecture as two separate models and later connect them using nn.Sequential() as shown in the code below. Now, let's say that I want to connect/concatenate the output of Encoder conv4 block to the deconv1 block of Decoder as a skip connection. Is there a way to achieve that without combining the two models (encoder and decoder) into one. I want to keep them separate to be able to use output of the same encoder as input of multiple decoders.
class Encoder(nn.Module):
def __init__(self, conv_dim=64, n_res_blocks=2):
super(Encoder, self).__init__()
# Define the encoder
self.conv1 = conv(3, conv_dim, 4)
self.conv2 = conv(conv_dim, conv_dim*2, 4)
self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
self.conv4 = conv(conv_dim*4, conv_dim*4, 4)
# Define the resnet part of the encoder
# Residual blocks
res_layers = []
for layer in range(n_res_blocks):
res_layers.append(ResidualBlock(conv_dim*4))
# use sequential to create these layers
self.res_blocks = nn.Sequential(*res_layers)
# leaky relu function
self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)
def forward(self, x):
# define feedforward behavior, applying activations as necessary
conv1 = self.leaky_relu(self.conv1(x))
conv2 = self.leaky_relu(self.conv2(conv1))
conv3 = self.leaky_relu(self.conv3(conv2))
conv4 = self.leaky_relu(self.conv4(conv3))
out = self.res_blocks(conv4)
return out
# Define the Decoder Architecture
class Decoder(nn.Module):
def __init__(self, conv_dim=64, n_res_blocks=2):
super(Decoder, self).__init__()
# Define the resnet part of the decoder
# Residual blocks
res_layers = []
for layer in range(n_res_blocks):
res_layers.append(ResidualBlock(conv_dim*4))
# use sequential to create these layers
self.res_blocks = nn.Sequential(*res_layers)
# Define the decoder
self.deconv1 = deconv(conv_dim*4, conv_dim*4, 4)
self.deconv2 = deconv(conv_dim*4, conv_dim*2, 4)
self.deconv3 = deconv(conv_dim*2, conv_dim, 4)
self.deconv4 = deconv(conv_dim, conv_dim, 4)
# no batch norm on last layer
self.out_layer = deconv(conv_dim, 3, 1, stride=1, padding=0, normalization=False)
# leaky relu function
self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)
def forward(self, x):
# define feedforward behavior, applying activations as necessary
res = self.res_blocks(x)
deconv1 = self.leaky_relu(self.deconv1(res))
deconv2 = self.leaky_relu(self.deconv2(deconv1))
deconv3 = self.leaky_relu(self.deconv3(deconv2))
deconv4 = self.leaky_relu(self.deconv4(deconv3))
# tanh applied to last layer
out = F.tanh(self.out_layer(deconv4))
out = torch.clamp(out, min=-0.5, max=0.5)
return out
def model():
enc = Encoder(conv_dim=64, n_res_blocks=2)
dec = Decoder(conv_dim=64, n_res_blocks=2)
return nn.Sequential(enc, dec)
| Instead of returning only the latent feature (the output of the last layer) from the encoder, you can return the outputs of intermediate layers along with the latent feature, e.g. as a tuple or list. Afterwards, in the decoder's forward function, you can accept those values as additional parameters and use them in the corresponding decoder layers, as sketched below.
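A rough sketch, reusing the conv/deconv/ResidualBlock helpers and layer names from your code (only the changed forward methods are shown; note that deconv1 in the decoder's __init__ must then be built with doubled input channels, e.g. deconv(conv_dim*4 + conv_dim*4, conv_dim*4, 4)):
# Encoder.forward: also return the feature you want to skip-connect
def forward(self, x):
    conv1 = self.leaky_relu(self.conv1(x))
    conv2 = self.leaky_relu(self.conv2(conv1))
    conv3 = self.leaky_relu(self.conv3(conv2))
    conv4 = self.leaky_relu(self.conv4(conv3))
    out = self.res_blocks(conv4)
    return out, conv4
# Decoder.forward: accept the skip feature and concatenate it along the channel dim
def forward(self, x, skip):
    res = self.res_blocks(x)
    res = torch.cat([res, skip], dim=1)   # skip connection from encoder conv4
    deconv1 = self.leaky_relu(self.deconv1(res))
    deconv2 = self.leaky_relu(self.deconv2(deconv1))
    deconv3 = self.leaky_relu(self.deconv3(deconv2))
    deconv4 = self.leaky_relu(self.deconv4(deconv3))
    out = torch.tanh(self.out_layer(deconv4))
    return torch.clamp(out, min=-0.5, max=0.5)
Since the decoder now takes two inputs, nn.Sequential(enc, dec) no longer works directly; either call the two models explicitly (out, skip = enc(x); y = dec(out, skip)) or wrap the pair in a thin nn.Module, which still lets you reuse the same encoder with several decoders.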
Hope this bit helps.
| https://stackoverflow.com/questions/61877573/ |
torch stack producing wrong dimensions | I am quite new to pytorch, and I am trying to create data within a dataloader and my code looks like so:
a=[]
.. within a for loop
self.a.append(torch.stack([b[ith_idx][j], \
b[ith_idx][rnd_dist], \
b[rnd_cls_idx][rnd_dist_rnd_cls]]\
))
self.c.append([1,0])
where, b is a python list of tensors. For example, the first element of b has shape torch.Size([46, 3, 512, 512]) (3 channels 512 x 512).
self.a = torch.stack(self.a)
self.c = torch.tensor(self.c)
and I notice I have shapes of [500,3,3,512,512] and [500,2] for a and b, while I was expecting 500,3,3,512,512 and 500 as tensor shapes.
Any pointers as to why this is happening would be helpful.
| The shape of c is [500, 2] because you are appending a list of two values, [1, 0], to c each time inside the loop. If instead you append a single 1 or 0 on each iteration, you will get a list of 500 elements.
For instance, you can append 1 or 0 randomly like follows:
self.c.append(np.random.randint(2)) # import numpy as np
or
self.c.append(torch.randint(0,2,(1,)))
| https://stackoverflow.com/questions/61880270/ |
masked_scatter but rowwise? | Assuming a mask as follows:
mask = torch.tensor([
[True, True, False, True, False],
[True, False, True, True, True ],
])
I would like to number the True values with sequential values in each row separately. I don't care what's in the False spots, so 0 for simplicity. Thus the desired result is
tensor([[0, 1, 0, 2, 0], # 0 1 _ 2 _
[0, 0, 1, 2, 3]]) # 0 _ 1 2 3
I hoped this would work:
replacements = torch.arange(mask.size(1)).expand(mask.size())
target = torch.zeros(mask.size(), dtype=int)
target.masked_scatter(mask, replacements)
Unfortunately, masked_scatter ignores the shape of replacements, so this code results in:
tensor([[0, 1, 0, 2, 0], # 0 1 _ 2 _
[3, 0, 4, 0, 1]]) # 3 _ 4 0 1
What would I need to do instead?
| I would try something with torch.cumsum: (torch.cumsum(mask, dim=1) - 1) * mask
The complete example
import torch
mask = torch.tensor([
[True, True, False, True, False],
[True, False, True, True, True ],
])
result = (torch.cumsum(mask, dim=1) - 1) * mask
print(result)
That would print:
tensor([[0, 1, 0, 2, 0],
[0, 0, 1, 2, 3]])
| https://stackoverflow.com/questions/61883535/ |
Best practices to benchmark deep models on CPU (and not GPU) in PyTorch? | I am a little uncertain about how to measure the execution time of deep models on CPU in PyTorch, ONLY FOR INFERENCE. I list some approaches here, but they may be inaccurate. Please correct them if required and mention more if needed. I am running PyTorch version 1.3.1 on an Intel Xeon with 64GB RAM, a 3.5GHz processor and 8 cores.
Should we use time.time()?
I know that for GPU this is a very bad idea. For GPU I do as follows
with torch.no_grad():
wTime = 0
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
out = model(input) # JUST FOR WARMUP
start.record()
for i in range(200):
input = torch.rand(1,1,2048,2048).to(device)
# beg = time.time() DO NOT USE FOR GPU
got = net_amplifier(low,for_amplifier)
# wTime+=time.time()-beg DO NOT USE FOR GPU
end.record()
torch.cuda.synchronize()
print('execution time in MILLISECONDS: {}'.format(start.elapsed_time(end)/200))
For this code execution was done on GPU. If I have to run them on CPU what changes should be made? Will time.time() do?
Should we use volatile?
I think the use of volatile is now discouraged after v0.3. But will it still help if I use eval mode and no_grad()?
input = Variable(torch.randn(1, 3, 227, 227), volatile=True)
model(input)
Should the page cache be cleared?
One way of doing this that I know is using sudo sh -c "/bin/echo 1 > /proc/sys/vm/drop_caches"
Should I remove nn.Sequential() and directly put in forward part
According to this link
All the methods using copy_ take some time to execute, especially on CPU this might be slow. Also the nn.Sequential() modules are slower than just executing them on the forward pass. I think this is due to some overhead that needs to be created when executing the Sequential module.
Another thing which i do not understand on the same link is
If you are running into performance issues with these small numbers, you might try to use torch.set_flush_denormal(True) to disable denormal floating point numbers on the CPU.
Should torch.set_num_threads(int) be used? If yes can a demo code be provided?
What does "These context managers are thread local, so they won't work if you send work to another thread using the threading module, etc." mean, as given in the documentation?
Please list any more issues for calculating execution time in CPU.
Thankyou
|
Should we use time.time()?
Yes, it's fine for CPU
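A minimal CPU sketch, assuming the same model and 1x1x2048x2048 input as in your GPU snippet (time.perf_counter is a slightly better clock than time.time, but either works on CPU):
import time
import torch
model.eval()
with torch.no_grad():
    model(torch.rand(1, 1, 2048, 2048))             # warm-up
    start = time.perf_counter()
    for _ in range(200):
        out = model(torch.rand(1, 1, 2048, 2048))
    elapsed = (time.perf_counter() - start) / 200
print('average execution time: {:.4f} s'.format(elapsed))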
Should we use volatile?
As you said it's deprecated. Since 0.4.0 torch.Tensor was merged with torch.Variable (it's deprecated as well) and torch.no_grad context manager should be used.
Should the page cache be cleared?
I don't think so unless you know it's a problem
Should I remove nn.Sequential() and directly put in forward part
No, torch.nn.Sequential should have no or negligible performance burden on your model. It's forward is only:
def forward(self, input):
for module in self:
input = module(input)
return input
If you are running into performance issues with these small numbers,
you might try to use torch.set_flush_denormal(True) to disable
denormal floating point numbers on the CPU.
Flushing denormal numbers (numbers which underflow) means replacing them strictly by 0.0 which might help with your performance if you have a lot of really small numbers. Example given by PyTorch docs:
>>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
[ 1.0000], dtype=torch.float64)
Should torch.set_num_threads(int) be used? If yes can a demo code be
provided?
According to this document it might help if you don't allocate too many threads (probably at most as many as the number of cores in your CPU, so you might try 8).
So this piece at the beginning of your code might help:
torch.set_num_threads(8)
You may want to check numbers out and see whether and how much each value helps.
What does These context managers are thread local, so they won’t work
if you send work to another thread using the :module:threading
module, etc. mean as given in the documentation.
If you use a module like torch.multiprocessing and run torch.multiprocessing.spawn (or alike), and one of your processes does not enter the context manager block, the gradient won't be turned off (in the case of torch.no_grad). Also, if you use Python's threading, only the threads where the block was entered will have gradients turned off (or on, it depends).
This code will make it clear for you:
import threading
import torch
def myfunc(i, tensor):
if i % 2 == 0:
with torch.no_grad():
z = tensor * 2
else:
z = tensor * 2
print(i, z.requires_grad)
if __name__ == "__main__":
tensor = torch.randn(5, requires_grad=True)
with torch.no_grad():
for i in range(10):
t = threading.Thread(target=myfunc, args=(i, tensor))
t.start()
Which outputs (order may vary):
0 False
1 True
2 False
3 True
4 False
6 False
5 True
7 True
8 False
9 True
Also notice that torch.no_grad() in __main__ has no effect on spawned threads (neither would torch.enable_grad).
Please list any more issues for calculating execution time in CPU.
Converting to TorchScript (see here) might help, as might building PyTorch from source targeted at your architecture and its capabilities, and tons of other things; this question is too wide.
| https://stackoverflow.com/questions/61884380/ |
Accuracy/train and Loss/train graph by tensorboard | I used TensorBoard for my PyTorch project and got this result for accuracy/train and loss/train, but I don't understand what it means.
| Your loss does not decrease and your accuracy does not increase during training. Not significantly.
First thing to try is adjusting the learning rate:
- One possibility is that the learning rate is too small, and therefore the weight updates are tiny and insignificant. Try increasing the learning rate by factor of x10 or even x100.
- On the other hand, your loss/accuracy do seem to oscillate, which may suggest update steps are too large. Try decreasing the learning rate by x10 and see if this oscillation subsides.
| https://stackoverflow.com/questions/61886303/ |
Getting an Error in Pytorch: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | I seem to be having a problem with my code. The error occurs at:
x, predicted = torch.max(net(value).data.squeeze(), 1)
I'm not sure what the issue is, and I've tried everything to fix it. From my understanding, there seems to be a problem with the tensor dimensions. I'm not sure what else to do. Can anyone give me any suggestions or solutions on how to fix this problem? Thank you in advance.
class Network(nn.Module): #Class for the neural network
def __init__(self):
super(Network, self).__init__()
self.layer1 = nn.Linear(6, 10) #First number in the number of inputs(784 since 28x28 is 784.) Second number indicates the number of inputs for the hidden layer(can be any number).
self.hidden = nn.Softmax() #Activation Function
self.layer2 = nn.Linear(10, 1) #First number is the hidden layer number(same as first layer), second number is the number of outputs.
self.layer3 = nn.Sigmoid()
def forward(self, x): #Feed-forward part of the neural network. We will will feed the input through every layer of our network.
y = self.layer1(x)
y = self.hidden(y)
y = self.layer2(y)
y = self.layer3(y)
return y #Returns the result
net = Network()
loss_function = nn.BCELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)
for x in range(1): #Amount of epochs over the dataset
for index, value in enumerate(new_train_loader):
print(value)#This loop loops over every image in the dataset
#actual = value[0]
actual_value = value[5]
#print(value.size())
#print(net(value).size())
print("ACtual", actual_value)
net(value)
loss = loss_function(net(value), actual_value.unsqueeze(0)) #Updating our loss function for every image
#Backpropogation
optimizer.zero_grad() #Sets gradients to zero.
loss.backward() #Computes gradients
optimizer.step() #Updates gradients
print("Loop #: ", str(x+1), "Index #: ", str(index+1), "Loss: ", loss.item())
right = 0
total = 0
for value in new_test_loader:
actual_value = value[5]
#print(torch.max(net(value).data, 1))
print(net(value).shape)
x, predicted = torch.max(net(value).data.squeeze(), 1)
total += actual_value.size(0)
right += (predicted==actual_value).sum().item()
print("Accuracy: " + str((100*right/total)))
I should also mention that i'm using the latest versions.
| You are calling .squeeze() on the model's output, which removes all singular dimensions (dimensions that have size 1). Your model's output has size [batch_size, 1], so
.squeeze() removes the second dimension entirely, resulting in size [batch_size]. After, you're trying to take the maximum value across dimension 1, but the only dimension you have is the 0th dimension.
You don't need to take the maximum value in this case, since you have only one class as the output, and with the sigmoid at the end of your model you get values between [0, 1]. Since you are doing a binary classification, that single class acts as two, namely it's either 0 or 1. So it can be seen as the probability that it is class 1. Then you just need to use a threshold of 0.5, meaning when the probability is over 0.5 it's class 1 and if the probability is under 0.5 it's class 0. That's exactly what rounding does, therefore you can use torch.round.
output = net(value)
predicted = torch.round(output.squeeze())
On a side note, you are calling net(value) multiple times with the same value, and that means that its output is calculated multiple times as well, because it needs to go through the entire network again. That is unnecessary and you should just save the output in a variable. With this small network it isn't noticeable, but with larger networks that will take a lot of unnecessary time to recalculate the output.
| https://stackoverflow.com/questions/61893786/ |
Getting Bad Images After Data Augmentation in PyTorch | I'm working on a nuclear segmentation problem where I'm trying to identify the positions of nuclei in images of stained tissues. The given training dataset has a picture of the stained tissue and a mask with the nuclei positions. Since the dataset was small, I wanted to try data augmentation in PyTorch, but after doing that, for some reason, when I output my mask image, it looks fine, but the corresponding tissue image is incorrect.
All my training images are in X_train with shape (128, 128, 3), the corresponding masks in Y_train with shape (128, 128, 1) and similarly the cross-validation images and masks in X_val and Y_val respectively.
Y_train and Y_val have dtype = np.bool, X_train and X_val have dtype = np.uint8.
Before data augmentation, I check my images like this:
fig, axis = plt.subplots(2, 2)
axis[0][0].imshow(X_train[0].astype(np.uint8))
axis[0][1].imshow(np.squeeze(Y_train[0]).astype(np.uint8))
axis[1][0].imshow(X_val[0].astype(np.uint8))
axis[1][1].imshow(np.squeeze(Y_val[0]).astype(np.uint8))
The output is as follows:
Before Data Augmentation
For the data augmentation, I define a custom class as follows:
Here I have imported torchvision.transforms.functional as TF and torchvision.transforms as transforms. images_np and masks_np are the inputs which are numpy arrays.
class Nuc_Seg(Dataset):
def __init__(self, images_np, masks_np):
self.images_np = images_np
self.masks_np = masks_np
def transform(self, image_np, mask_np):
ToPILImage = transforms.ToPILImage()
image = ToPILImage(image_np)
mask = ToPILImage(mask_np.astype(np.int32))
angle = random.uniform(-10, 10)
width, height = image.size
max_dx = 0.2 * width
max_dy = 0.2 * height
translations = (np.round(random.uniform(-max_dx, max_dx)), np.round(random.uniform(-max_dy, max_dy)))
scale = random.uniform(0.8, 1.2)
shear = random.uniform(-0.5, 0.5)
image = TF.affine(image, angle = angle, translate = translations, scale = scale, shear = shear)
mask = TF.affine(mask, angle = angle, translate = translations, scale = scale, shear = shear)
image = TF.to_tensor(image)
mask = TF.to_tensor(mask)
return image, mask
def __len__(self):
return len(self.images_np)
def __getitem__(self, idx):
image_np = self.images_np[idx]
mask_np = self.masks_np[idx]
image, mask = self.transform(image_np, mask_np)
return image, mask
This is followed by:
I have used from torch.utils.data import DataLoader
train_dataset = Nuc_Seg(X_train, Y_train)
train_loader = DataLoader(train_dataset, batch_size = 16, shuffle = True)
val_dataset = Nuc_Seg(X_val, Y_val)
val_loader = DataLoader(val_dataset, batch_size = 16, shuffle = True)
After this step, I try to check on my first set training image and mask using this:
%matplotlib inline
for ex_img, ex_mask in train_loader:
img = ex_img[0]
img = img.reshape(128, 128, 3)
mask = ex_mask[0]
mask = mask.reshape(128, 128)
img = img.numpy()
mask = mask.numpy()
fig, (axis_1, axis_2) = plt.subplots(1, 2)
axis_1.imshow(img.astype(np.uint8))
axis_2.imshow(mask.astype(np.uint8))
break
I get this as my output:
After Data Augmentation 1
When I change the line axis_1.imshow(img.astype(np.uint8)) to axis_1.imshow(img),
I get this image:
After Data Augmentation 2
The images of the mask are correct, but for some reason, the images of nuclei are wrong. With the .astype(np.uint8), the tissue image is completely black.
Without the .astype(np.uint8), the positions of the nuclei are correct, but the color scheme is all messed up (I expect images like those seen before data augmentation, either gray-ish or pink-ish), plus 9 copies of the same image in a grid are displayed for some reason. Can you please help me get the correct output of the tissue images?
| You are converting the images to PyTorch tensors, and in PyTorch the images have size [C, H, W]. When you are visualising them, you are converting the tensors back to NumPy arrays, where images have size [H, W, C]. Therefore you are trying to rearrange the dimensions, but you are using torch.reshape, which doesn't swap the dimensions, but only partitions the data in a different way.
An example makes this clearer:
# Incrementing numbers with size 2 x 3 x 3
image = torch.arange(2 * 3 * 3).reshape(2, 3, 3)
# => tensor([[[ 0, 1, 2],
# [ 3, 4, 5],
# [ 6, 7, 8]],
#
# [[ 9, 10, 11],
# [12, 13, 14],
# [15, 16, 17]]])
# Reshape keeps the same order of elements but for a different size
# The numbers are still incrementing from left to right
image.reshape(3, 3, 2)
# => tensor([[[ 0, 1],
# [ 2, 3],
# [ 4, 5]],
#
# [[ 6, 7],
# [ 8, 9],
# [10, 11]],
#
# [[12, 13],
# [14, 15],
# [16, 17]]])
To reorder the dimensions you can use permute:
# Dimensions are swapped
# Now the numbers increment from top to bottom
image.permute(1, 2, 0)
# => tensor([[[ 0, 9],
# [ 1, 10],
# [ 2, 11]],
#
# [[ 3, 12],
# [ 4, 13],
# [ 5, 14]],
#
# [[ 6, 15],
# [ 7, 16],
# [ 8, 17]]])
With the .astype(np.uint8), the tissue image is completely black.
PyTorch images are represented as floats with values between [0, 1], but NumPy uses integer values between [0, 255]. Casting the float values to np.uint8 will result in only 0s and 1s, where everything that was not equal to 1, will be set to 0, therefore the whole image is black.
You need to multiply the values by 255 to bring them into range of [0, 255].
img = img.permute(1, 2, 0) * 255
img = img.numpy().astype(np.uint8)
This conversion is also automatically done when you are converting a tensor to a PIL image with transforms.ToPILImage (or with TF.to_pil_image if you prefer the functional version) and the PIL image can be converted directly to a NumPy array. With that you don't have to worry about the dimensions, value ranges or type and the code above can be replaced with:
img = np.array(TF.to_pil_image(img))
| https://stackoverflow.com/questions/61894592/ |
Multiplying matrices of different sizes | I have a matrix full of zeros except for one zone, where I have values other than 0:
i=20
j=30
z = np.zeros((50,50))
while i < 30:
while j < 40:
z[j, i] = np.random.rand(1,1)
j+=1
j=30
i+=1
matshow(z)
np.where(z == z.max())
Then I have another matrix, smaller than the first one. The distribution of values in this matrix follows a Gaussian curve:
import numpy as np
from scipy import signal
def gkern(kernlen=21, std=3):
"""Returns a 2D Gaussian kernel array."""
gkern1d = signal.gaussian(kernlen, std=std).reshape(kernlen, 1)
gkern2d = np.outer(gkern1d, gkern1d)
return gkern2d
gkern(21, 3)
What I want is the following:
Identify where the maximum value of the 50x50 matrix is located. On this value, I want to center the maximum value of the smallest matrix, and then do the multiplication between the value of my small matrix and the values of the big matrices that are convered by the smaller matrix.
| First use the original code to make the big matrix:
i=20
j=30
z = np.zeros((50,50))
while i < 30:
while j < 40:
z[j, i] = np.random.rand(1,1)
j+=1
j=30
i+=1
Then figure out where zmax is:
[x,y] = np.where(z==z.max());
Now we know where the matrix should be centered. Get the shape of the g_kern matrix. This will tell us the size of the submatrix that we need to construct by taking the values of our larger matrix "covered" by the shape of the g_kern matrix, centered around the max of the larger matrix
[a,b] = np.shape(g_kern_matrix)
Now we center the submatrix of the larger matrix at the position of z_max. Its shape is determined by the second smaller matrix:
#x range of larger matrix slice
[x_min,x_max] = [int(x-a/2),int(x+a/2)]
#y range of larger matrix slice
[y_min,y_max] = [int(y-b/2),int(y+b/2)]
#construct the submatrix of the larger matrix
submatrix = z[x_min:x_max,y_min:y_max]
#can take a look at the submatrix to see that it seems correct
plt.matshow(submatrix)
plt.show()
And now we can do the multiplication
#now element wise multiply(hadamard product) this with the gaussian curve matrix
z[x_min:x_max,y_min:y_max] = submatrix*g_kern_matrix
plt.matshow(z)
plt.show()
Here I have assumed that when you ask to multiply the g_kern matrix and the submatrix, you mean element wise multiplication, as you cannot multiply two nonsquare matrices of the same shape.
| https://stackoverflow.com/questions/61896421/ |
net.zero_grad() vs optim.zero_grad() pytorch | Here they mention the need to include optim.zero_grad() when training to zero the parameter gradients. My question is: Could I do as well net.zero_grad() and would that have the same effect? Or is it necessary to do optim.zero_grad(). Moreover, what happens if I do both? If I do none, then the gradients get accumulated, but what does that exactly mean? do they get added? In other words, what's the difference between doing optim.zero_grad() and net.zero_grad(). I am asking because here, line 115 they use net.zero_grad() and it is the first time I see that, that is an implementation of a reinforcement learning algorithm, where one has to be especially careful with the gradients because there are multiple networks and gradients, so I suppose there is a reason for them to do net.zero_grad() as opposed to optim.zero_grad().
| net.zero_grad() sets the gradients of all its parameters (including parameters of submodules) to zero. If you call optim.zero_grad() that will do the same, but for all parameters that have been specified to be optimised. If you are using only net.parameters() in your optimiser, e.g. optim = Adam(net.parameters(), lr=1e-3), then both are equivalent, since they contain the exact same parameters.
You could have other parameters that are being optimised by the same optimiser, which are not part of net, in which case you would either have to manually set their gradients to zero and therefore keep track of all the parameters, or you can simply call optim.zero_grad() to ensure that all parameters that are being optimised, had their gradients set to zero.
Moreover, what happens if I do both?
Nothing, the gradients would just be set to zero again, but since they were already zero, it makes absolutely no difference.
If I do none, then the gradients get accumulated, but what does that exactly mean? do they get added?
Yes, they are being added to the existing gradients. In the backward pass the gradients in respect to every parameter are calculated, and then the gradient is added to the parameters' gradient (param.grad). That allows you to have multiple backward passes, that affect the same parameters, which would not be possible if the gradients were overwritten instead of being added.
For example, you could accumulate the gradients over multiple batches, if you need bigger batches for training stability but don't have enough memory to increase the batch size. This is trivial to achieve in PyTorch, which is essentially leaving off optim.zero_grad() and delaying optim.step() until you have gathered enough steps, as shown in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups.
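A bare-bones sketch of that pattern (accum_steps, loader, criterion, net and optim are placeholders here):
accum_steps = 4
optim.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = criterion(net(x), y) / accum_steps  # scale so the accumulated gradient matches one big batch
    loss.backward()                            # gradients are added onto param.grad
    if (i + 1) % accum_steps == 0:
        optim.step()
        optim.zero_grad()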
That flexibility comes at the cost of having to manually set the gradients to zero. Frankly, one line is a very small cost to pay, even though many users won't make use of it and especially beginners might find it confusing.
| https://stackoverflow.com/questions/61898668/ |
PyTorch "Caught IndexError in DataLoader worker process 0", "IndexError: too many indices for array" | I am trying to implement a detection model based on "finetuning object detection" official tutorial of PyTorch.
It seemed to have worked with minimal data, (for 10 of images). However I uploaded my whole dataset to Drive and checked the index-data-label correspondences. There are not unmatching items in my setup, I have all the errors in that part solved. (I deleted extra items from the labels on GDrive)
class SomeDataset(torch.utils.data.Dataset):
def __init__(self, root_path, transforms):
self.root_path = root_path
self.transforms = transforms
# load all image files, sorting them to
# ensure that they are aligned
self.imgs = list(sorted(os.listdir(os.path.join(root_path, "images"))))
self.labels = list(sorted(os.listdir(os.path.join(root_path, "labels"))))
def __getitem__(self, idx):
# load images ad masks
img_path = os.path.join(self.root_path, "images", self.imgs[idx])
label_path = os.path.join(self.root_path, "labels", self.labels[idx])
img = Image.open(img_path).convert("RGB")
# get labels and boxes
label_data = np.loadtxt(label_path, dtype=str, delimiter=' ');
print(f"{len(label_data)} is the length of label data")
num_objs = label_data.shape[0];
if num_objs != 0:
print(f"number of objects {num_objs}")
# label values should start from 1
for i,label_name in enumerate(classnames):
label_data[np.where(label_name==label_data)] = i;
label_data = label_data.astype(np.float);
print(f"label data {label_data}")
xs = label_data[:,0:8:2];
ys = label_data[:,1:8:2];
x_min = np.min(xs, axis=1)[...,np.newaxis];
x_max = np.max(xs, axis=1)[...,np.newaxis];
y_min = np.min(ys, axis=1)[...,np.newaxis];
y_max = np.max(ys, axis=1)[...,np.newaxis];
boxes = np.hstack((x_min,y_min,x_max,y_max));
labels = label_data[:,8];
else:
# if there is no label add background whose label is 0
boxes = [[0,0,1,1]];
labels = [0];
num_objs = 1;
boxes = torch.as_tensor(boxes, dtype=torch.float32)
labels = torch.as_tensor(labels, dtype=torch.int64)
image_id = torch.tensor([idx])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
# suppose all instances are not crowd
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
target["image_id"] = image_id
target["area"] = area
target["iscrowd"] = iscrowd
if self.transforms is not None:
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
My main method is like the following,
def main():
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# our dataset has 16 classes - background and others
num_classes = 16
# use our dataset and defined transformations
dataset = SomeDataset('trainImages', get_transform(train=True))
print(f"{len(dataset)} number of images in training dataset")
dataset_validation = SomeDataset('valImages', get_transform(train=True))
print(f"{len(dataset_validation)} number of images in validation dataset")
# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=20, shuffle=True, num_workers=4,
collate_fn=utils.collate_fn)
data_loader_val = torch.utils.data.DataLoader(
dataset_validation, batch_size=10, shuffle=False, num_workers=4,
collate_fn=utils.collate_fn)
# get the model using our helper function
#model = get_model_instance_segmentation(num_classes)
model = get_rcnn(num_classes);
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
#optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005);
optimizer = torch.optim.Adam(params, lr=0.0005);
# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# let's train it for 10 epochs
num_epochs = 5
for epoch in range(num_epochs):
# train for one epoch, printing every 10 iterations
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=100)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
#evaluate(model, data_loader_test, device=device)
print("That's it!")
return model;
When I run my code, it runs for a few number of data (for example 10 of them) and then stops and gives out this error.
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-114-e0ccd94603fd>", line 31, in __getitem__
xs = label_data[:,0:8:2];
IndexError: too many indices for array
The error propagates from model = main() into train_one_epoch() and so on.
I do not understand why this is happening.
Also, this is an example from one instance of dataset,
(<PIL.Image.Image image mode=RGB size=1024x1024 at 0x7F46FC0A94A8>, {'boxes': tensor([[ 628., 6., 644., 26.],
[ 633., 50., 650., 65.],
[ 620., 27., 637., 44.],
[ 424., 193., 442., 207.],
[ 474., 188., 496., 204.],
[ 383., 226., 398., 236.],
[ 399., 218., 418., 231.],
[ 42., 189., 63., 203.],
[ 106., 159., 129., 169.],
[ 273., 17., 287., 34.],
[ 225., 961., 234., 980.],
[ 220., 1004., 230., 1024.]]), 'labels': tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5]), 'image_id': tensor([0]), 'area': tensor([320., 255., 289., 252., 352., 150., 247., 294., 230., 238., 171., 200.]), 'iscrowd': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])})
| When using the np.loadtxt() method, make sure to add ndmin=2 as a parameter.
Without it, a label file that contains a single object is loaded as a 1-D array, so num_objs becomes the number of columns of that single row instead of the number of objects.
It also means the 2-D indexing that follows (label_data[:,0:8:2]) fails with "too many indices for array", which is exactly the reported error.
ndmin=2 makes sure that the output of np.loadtxt() is always at least 2-dimensional, never a bare row or column vector.
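Concretely, the loading line in your __getitem__ becomes (only ndmin=2 is added to your existing call):
label_data = np.loadtxt(label_path, dtype=str, delimiter=' ', ndmin=2)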
| https://stackoverflow.com/questions/61900138/ |
ResNet50 torchvision implementation gives low accuracy on CIFAR-10 | I am new to Deep Learning and PyTorch. I am using the resnet-50 model in the torchvision module on cifar10. I have imported the CIFAR-10 dataset from torchvision. The accuracy is very low on testing and I have tried configuring the classification layers but there is no change in the accuracy. Is there something wrong with my code? Am I making a mistake in calculating the accuracy?
import torchvision
import torch
import torch.nn as nn
from torch import optim
import os
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np
from collections import OrderedDict
import matplotlib.pyplot as plt
transformations=transforms.Compose([transforms.ToTensor(),transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225])])
trainset=torchvision.datasets.CIFAR10(root='./CIFAR10',download=True,transform=transformations,train=True)
testset=torchvision.datasets.CIFAR10(root='./CIFAR10',download=True,transform=transformations,train=False)
trainloader=DataLoader(dataset=trainset,batch_size=4)
testloader=DataLoader(dataset=testset,batch_size=4)
inputs,labels=next(iter(trainloader))
labels=labels.float()
inputs.size()
print(labels.type())
resnet=torchvision.models.resnet50(pretrained=True)
if torch.cuda.is_available():
resnet=resnet.cuda()
inputs,labels=inputs.cuda(),torch.Tensor(labels).cuda()
outputs=resnet(inputs)
outputs.size()
for param in resnet.parameters():
param.requires_grad=False
numft=resnet.fc.in_features
print(numft)
resnet.fc=torch.nn.Sequential(nn.Linear(numft,1000),nn.ReLU(),nn.Linear(1000,10))
resnet.cuda()
resnet.train(True)
optimizer=torch.optim.SGD(resnet.parameters(),lr=0.001,momentum=0.9)
criterion=nn.CrossEntropyLoss()
for epoch in range(5):
resnet.train(True)
trainloss=0
correct=0
for x,y in trainloader:
x,y=x.cuda(),y.cuda()
optimizer.zero_grad()
yhat=resnet(x)
loss=criterion(yhat,y)
loss.backward()
optimizer.step()
trainloss+=loss.item()
print('Epoch: {} Loss: {}'.format(epoch,(trainloss/len(trainloader))))
accuracy=[]
running_corrects=0.0
for x_test,y_test in testloader:
x_test,y_test=x_test.cuda(),y_test.cuda()
yhat=resnet(x_test)
_,z=yhat.max(1)
running_corrects += torch.sum(y_test == z)
accuracy.append(running_corrects/len(testloader))
print(running_corrects/len(testloader))
accuracy=max(accuracy)
print(accuracy)
OUTPUT AFTER TRAINING/TESTING
Epoch: 0 Loss: 1.9808503997325897
Epoch: 1 Loss: 1.7917569598436356
Epoch: 2 Loss: 1.624434965057373
Epoch: 3 Loss: 1.4082191940283775
Epoch: 4 Loss: 1.1343850775527955
tensor(1.1404, device='cuda:0')
tensor(1.1404, device='cuda:0')
| Couple of my observations:
You may want to fine-tune learning-rate and number of epochs and batch size. For example, currently you are training your model for only five epochs which might not be sufficient to achieve high accuracy. you can try with lager value of epochs.
Have you tried adapting the backbone (feature extractor) model to the CIFAR-10 dataset by setting param.requires_grad=True? The original model is trained on ImageNet, so it might need to adapt to CIFAR-10.
Before evaluation/testing you may like to set resnet.train(False) or resnet.eval() to let the model know that you are in eval mode. Furthermore, you may want to evaluate your model under the scope of no_grad() by using with torch.no_grad(): that will speed up inference time and reduce memory usage.
[CIFAR-10 is a balanced dataset so it's an optional (EDA) task here.] Have you checked the class distribution of CIFAR10 in terms of whether it's an imbalanced dataset or not? If it's an imbalanced dataset you may want to employ weighted cross entropy for you loss calculation. There are other strategies to tackle class-imbalance like over-sampling or under-sampling.
Regarding test accuracy, you need to divide the total number of correct predictions by the total number of samples in the dataset, i.e. len(testloader.dataset) instead of len(testloader). If you want your accuracy in the range of [0, 100], just multiply by 100. You can print the test accuracy for each epoch to check how it's changing, whereas you are currently showing only the maximum accuracy.
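In code, that last point is roughly (running_corrects is the tensor produced by torch.sum in your loop, so .item() turns it into a Python number):
test_accuracy = 100.0 * running_corrects.item() / len(testloader.dataset)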
| https://stackoverflow.com/questions/61901144/ |
How to set target in cross entropy loss for pytorch multi-class problem | Problem Statement: I have an image and a pixel of the image can belong to only(either) one of Band5','Band6', 'Band7' (see below for details). Hence, I have a pytorch multi-class problem but I am unable to understand how to set the targets which needs to be in form [batch, w, h]
My dataloader return two values:
x = chips.loc[:, :, :, self.input_bands]
y = chips.loc[:, :, :, self.output_bands]
x = x.transpose('chip','channel','x','y')
y_ohe = y.transpose('chip','channel','x','y')
Also, I have defined:
input_bands = ['Band1','Band2', 'Band3', 'Band3', 'Band4'] # input classes
output_bands = ['Band5','Band6', 'Band7'] #target classes
model = ModelName(num_classes = 3, depth=default_depth, in_channels=5, merge_mode='concat').to(device)
loss_new = nn.CrossEntropyLoss()
In my training function:
#get values from dataloader
X = normalize_zero_to_one(X) #input
y = normalize_zero_to_one(y) #target
images = Variable(torch.from_numpy(X)).to(device) # [batch, channel, H, W]
masks = Variable(torch.from_numpy(y)).to(device)
optim.zero_grad()
outputs = model(images)
loss = loss_new(outputs, masks) # (preds, target)
loss.backward()
optim.step() # Update weights
I know the the target (here masks) should be [batch_size, w, h]. However, it is currently [batch_size, channels, w, h].
I read a lot of posts including 1, 2 and they say the target should only contain the target class indices. I don't understand how can I concatenate indices of three classes and still set target as [batch_size, w, h].
Right now, I get the error:
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4
To the best of my understanding, I don't need to do any one-hot encoding. Similar errors and explanations I found on the internet are here:
Reference 1
Reference 2
Reference 3
Reference 4
Any help will be appreciated! Thank you.
| If I understand correctly, your current "target" is [batch_size, channels, w, h] with channels==3 as you have three possible targets.
What do the values in your target represent? You basically have a 3-vector target for each pixel - are these the expected class probabilities? Are they "one-hot vectors" indicating the correct "band"?
If so, you can get the target indices by simply taking the argmax along the target channel dimension:
proper_target = torch.argmax(masks, dim=1) # make sure keepdim=False
loss = loss_new(outputs, proper_target)
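A quick self-contained sanity check of the shapes involved (dummy sizes only, not your real data):
import torch
outputs = torch.randn(4, 3, 64, 64)         # model output: [batch, num_classes, H, W]
masks = torch.zeros(4, 3, 64, 64)           # one-hot target: [batch, channels, H, W]
masks[:, 0] = 1                             # pretend every pixel belongs to class 0
proper_target = torch.argmax(masks, dim=1)  # [batch, H, W], values in {0, 1, 2}
loss = torch.nn.CrossEntropyLoss()(outputs, proper_target)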
| https://stackoverflow.com/questions/61904987/ |
Impact of data shuffling on results reproducibility in Pytorch | Pytorch dataloader class has the following constructor:
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
batch_sampler=None, num_workers=0, collate_fn=None,
pin_memory=False, drop_last=False, timeout=0,
worker_init_fn=None)
When shuffle is set to True, data is reshuffled at every epoch. Shuffling the order in which examples are fed to the classifier is helpful so that batches between epochs do not look alike. Doing so will eventually make our model more robust.
However, I'm unable to understand: by setting shuffle=True, can we get the same accuracy value across different runs?
| The main algorithm/principle deep learning is based on is weight optimization using stochastic gradient descent (and its variants). Being a stochastic algorithm, you cannot expect to get exactly the same results if you run your algorithm multiple times.
In fact, you should see some variations, but they should be "roughly the same".
If you need to have exactly the same results when running your algorithm multiple times, you should look into reproducibility of results - which is a very delicate subject.
In summary:
1. If you do not shuffle at all, you will have perfect reproducibility, but the resulting accuracy is expected to be very low.
2. If you randomly shuffle (what most of the world does), you should expect slightly different accuracy values for each run, but they should all be significantly larger than the values of (1) "no shuffle".
3. If you follow the guidelines of reproducible results, you should have the exact same accuracy values for each run and they should be close to the values of (2) "shuffle".
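For point (3), a minimal sketch of the usual seeding steps (the exact set of flags required may differ between PyTorch versions and hardware, so treat this as a starting point rather than a guarantee):
import random
import numpy as np
import torch

random.seed(0)
np.random.seed(0)
torch.manual_seed(0)                       # seeds the CPU and current-GPU RNGs
torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning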
| https://stackoverflow.com/questions/61907618/ |
torch.nn.DataParallel and to(device) does not support nested modules | I have a torch.nn.module class defined in the following way:
class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.sub_module_a = .... # nn.module
self.sub_module_b_dict = {
'B': .... # nn.module
}
However after I call torch.nn.DataParallel(MyModule) and MyModule.to(device) only sub_module_a is put on cuda. The 'B' inside self.sub_module_b_dict is still on CPU.
Looks like DataParallel and to(device) only support first level variables inside a torch.nn.Module class. The modules nested inside a customized structure (in this case, a dictionary) seem to be ignored.
Am I missing some caveats here?
| You MUST use proper nn containers for all nn.Module's methods to act recursively on sub modules.
In your case, the 'B' module is stored in a simple pythonic dictionary. Replace this with nn.ModuleDict and you should be fine:
self.sub_module_b_dict = nn.ModuleDict({'B': ...})
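For illustration, a minimal sketch of the module rewritten with nn.ModuleDict (the nn.Linear layers are just placeholders for your actual sub-modules):
import torch

class MyModule(torch.nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.sub_module_a = torch.nn.Linear(10, 10)        # placeholder sub-module
        self.sub_module_b_dict = torch.nn.ModuleDict({
            'B': torch.nn.Linear(10, 10)                   # placeholder sub-module
        })

model = torch.nn.DataParallel(MyModule()).to('cuda')
# 'B' is now properly registered, so its parameters move to the GPU as well
print(next(model.module.sub_module_b_dict['B'].parameters()).device)  # cuda:0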
See related threads:
a, b, c, d, e, to name just a few...
| https://stackoverflow.com/questions/61907804/ |
Why can't I get this Runge-Kutta solver to converge as the time step decreases? | For reasons, I need to implement the Runge-Kutta4 method in PyTorch (so no, I'm not going to use scipy.odeint). I tried and I get weird results on the simplest test case, solving x'=x with x(0)=1 (analytical solution: x=exp(t)). Basically, as I reduce the time step, I cannot get the numerical error to go down. I'm able to do it with a simpler Euler method, but not with the Runge-Kutta 4 method, which makes me suspect some floating point issue here (maybe I'm missing some hidden conversion from double precision to single)?
import torch
import numpy as np
import matplotlib.pyplot as plt
def Euler(f, IC, time_grid):
y0 = torch.tensor([IC])
time_grid = time_grid.to(y0[0])
values = y0
for i in range(0, time_grid.shape[0] - 1):
t_i = time_grid[i]
t_next = time_grid[i+1]
y_i = values[i]
dt = t_next - t_i
dy = f(t_i, y_i) * dt
y_next = y_i + dy
y_next = y_next.unsqueeze(0)
values = torch.cat((values, y_next), dim=0)
return values
def RungeKutta4(f, IC, time_grid):
y0 = torch.tensor([IC])
time_grid = time_grid.to(y0[0])
values = y0
for i in range(0, time_grid.shape[0] - 1):
t_i = time_grid[i]
t_next = time_grid[i+1]
y_i = values[i]
dt = t_next - t_i
dtd2 = 0.5 * dt
f1 = f(t_i, y_i)
f2 = f(t_i + dtd2, y_i + dtd2 * f1)
f3 = f(t_i + dtd2, y_i + dtd2 * f2)
f4 = f(t_next, y_i + dt * f3)
dy = 1/6 * dt * (f1 + 2 * (f2 + f3) +f4)
y_next = y_i + dy
y_next = y_next.unsqueeze(0)
values = torch.cat((values, y_next), dim=0)
return values
# differential equation
def f(T, X):
return X
# initial condition
IC = 1.
# integration interval
def integration_interval(steps, ND=1):
return torch.linspace(0, ND, steps)
# analytical solution
def analytical_solution(t_range):
return np.exp(t_range)
# test a numerical method
def test_method(method, t_range, analytical_solution):
numerical_solution = method(f, IC, t_range)
L_inf_err = torch.dist(numerical_solution, analytical_solution, float('inf'))
return L_inf_err
if __name__ == '__main__':
Euler_error = np.array([0.,0.,0.])
RungeKutta4_error = np.array([0.,0.,0.])
indices = np.arange(1, Euler_error.shape[0]+1)
n_steps = np.power(10, indices)
for i, n in np.ndenumerate(n_steps):
t_range = integration_interval(steps=n)
solution = analytical_solution(t_range)
Euler_error[i] = test_method(Euler, t_range, solution).numpy()
RungeKutta4_error[i] = test_method(RungeKutta4, t_range, solution).numpy()
plots_path = "./plots"
a = plt.figure()
plt.xscale('log')
plt.yscale('log')
plt.plot(n_steps, Euler_error, label="Euler error", linestyle='-')
plt.plot(n_steps, RungeKutta4_error, label="RungeKutta 4 error", linestyle='-.')
plt.legend()
plt.savefig(plots_path + "/errors.png")
The result:
As you can see, the Euler method converges (slowly, as expected of a first order method). However, the Runge-Kutta4 method does not converge as the time step gets smaller and smaller. The error goes down initially, and then up again. What's the issue here?
| The reason is indeed a floating point precision issue. torch defaults to single precision, so once the truncation error becomes small enough, the total error is basically determined by the roundoff error, and reducing the truncation error further by increasing the number of steps <=> decreasing the time step doesn't lead to any decrease in the total error.
To fix this, we need to enforce double precision 64-bit floats for all floating point torch tensors and numpy arrays. Note that the right way to do this is to use torch.float64 and np.float64 respectively, rather than, e.g., torch.double and np.double, because the former are fixed-size float values (always 64-bit), while the latter depend on the machine and/or compiler. Here's the fixed code:
import torch
import numpy as np
import matplotlib.pyplot as plt
def Euler(f, IC, time_grid):
y0 = torch.tensor([IC], dtype=torch.float64)
time_grid = time_grid.to(y0[0])
values = y0
for i in range(0, time_grid.shape[0] - 1):
t_i = time_grid[i]
t_next = time_grid[i+1]
y_i = values[i]
dt = t_next - t_i
dy = f(t_i, y_i) * dt
y_next = y_i + dy
y_next = y_next.unsqueeze(0)
values = torch.cat((values, y_next), dim=0)
return values
def RungeKutta4(f, IC, time_grid):
y0 = torch.tensor([IC], dtype=torch.float64)
time_grid = time_grid.to(y0[0])
values = y0
for i in range(0, time_grid.shape[0] - 1):
t_i = time_grid[i]
t_next = time_grid[i+1]
y_i = values[i]
dt = t_next - t_i
dtd2 = 0.5 * dt
f1 = f(t_i, y_i)
f2 = f(t_i + dtd2, y_i + dtd2 * f1)
f3 = f(t_i + dtd2, y_i + dtd2 * f2)
f4 = f(t_next, y_i + dt * f3)
dy = 1/6 * dt * (f1 + 2 * (f2 + f3) +f4)
y_next = y_i + dy
y_next = y_next.unsqueeze(0)
values = torch.cat((values, y_next), dim=0)
return values
# differential equation
def f(T, X):
return X
# initial condition
IC = 1.
# integration interval
def integration_interval(steps, ND=1):
return torch.linspace(0, ND, steps, dtype=torch.float64)
# analytical solution
def analytical_solution(t_range):
return np.exp(t_range, dtype=np.float64)
# test a numerical method
def test_method(method, t_range, analytical_solution):
numerical_solution = method(f, IC, t_range)
L_inf_err = torch.dist(numerical_solution, analytical_solution, float('inf'))
return L_inf_err
if __name__ == '__main__':
Euler_error = np.array([0.,0.,0.], dtype=np.float64)
RungeKutta4_error = np.array([0.,0.,0.], dtype=np.float64)
indices = np.arange(1, Euler_error.shape[0]+1)
n_steps = np.power(10, indices)
for i, n in np.ndenumerate(n_steps):
t_range = integration_interval(steps=n)
solution = analytical_solution(t_range)
Euler_error[i] = test_method(Euler, t_range, solution).numpy()
RungeKutta4_error[i] = test_method(RungeKutta4, t_range, solution).numpy()
plots_path = "./plots"
a = plt.figure()
plt.xscale('log')
plt.yscale('log')
plt.plot(n_steps, Euler_error, label="Euler error", linestyle='-')
plt.plot(n_steps, RungeKutta4_error, label="RungeKutta 4 error", linestyle='-.')
plt.legend()
plt.savefig(plots_path + "/errors.png")
Result:
Now, as we decrease the time step, the error of the RungeKutta4 approximation decreases at the correct rate.
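As a side note (an addition to the fix above, not part of it), PyTorch also lets you switch the default floating point type globally, which avoids passing dtype=torch.float64 to every tensor constructor; numpy arrays still need their dtype set explicitly. A minimal sketch, reusing IC from the code above:
torch.set_default_dtype(torch.float64)
y0 = torch.tensor([IC])             # now float64 by default
t_range = torch.linspace(0, 1, 10)  # also float64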
| https://stackoverflow.com/questions/61909430/ |
pytorch load _IncompatibleKeys | I trained an EfficientNet-b6 model (architecture is as follows):
https://github.com/lukemelas/EfficientNet-PyTorch
Now, I tried to load a model I trained with it:
checkpoint = torch.load('model.pth', map_location=torch.device('cpu'))
model.load_state_dict(checkpoint, strict=False)
but then I got the following error:
_IncompatibleKeys
missing_keys=['_conv_stem.weight', '_bn0.weight', '_bn0.bias', ...]
unexpected_keys=['module._conv_stem.weight', 'module._bn0.weight', 'module._bn0.bias', ...]
Please let me know how can I fix that, what am I missing?
Thank you!
| If you compare the missing_keys and unexpected_keys, you may realize what is happening.
missing_keys=['_conv_stem.weight', '_bn0.weight', '_bn0.bias', ...]
unexpected_keys=['module._conv_stem.weight', 'module._bn0.weight', 'module._bn0.bias', ...]
As you can see, the model weights are saved with a module. prefix.
And this is because you have trained the model with DataParallel.
Now, to load the model weights without using DataParallel, you can do the following.
# original saved file with DataParallel
checkpoint = torch.load(path, map_location=torch.device('cpu'))
# create new OrderedDict that does not contain `module.`
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in checkpoint.items():
name = k.replace("module.", "") # remove the `module.` prefix
new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict, strict=False)
OR, if you wrap the model using DataParallel, then you do not need the above approach.
checkpoint = torch.load('model.pth', map_location=torch.device('cpu'))
model = torch.nn.DataParallel(model)
model.load_state_dict(checkpoint, strict=False)
Although the second approach is not encouraged (since you may not need DataParallel in many cases).
| https://stackoverflow.com/questions/61909973/ |
Does Pytorch 1.4 support cuda10.1? | I tried to install pytorch 1.5+cu101, but it can't find caffe2_nvrtc.dll (it does exist in lib\caffe2_nvrtc.dll). So I tried to install pytorch 1.4+cu101, but I cannot find such a wheel (only pytorch 1.4+cu100/cu92) at https://download.pytorch.org/whl/torch_stable.html.
Any help?
| PyTorch 1.4.0 shipped with CUDA 10.1 by default, so there is no separate package with the cu101 suffix, those are only for alternative versions. You just need to install the regular torch package:
pip install torch==1.4.0 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/61911837/ |
convert float image array to int in PIL via Image.fromarray | I have a PyTorch tensor of shape 3,256,256 where 3 is the number of channels and the image dimensions are 256, with all float values.
I am trying to feed this into the network and use PIL to do some transforms. To this end, I do:
img = Image.fromarray((255*imgs[i]).numpy().astype(np.uint8))
but I get:
TypeError: Cannot handle this data type: (1, 1, 256), |u1
When I check the output of (255*imgs[i]).numpy().astype(np.uint8), I do however see, for example:
[[[ 62 57 59 ... 63 46 36]
[ 72 71 67 ... 80 76 82]
[ 58 63 63 ... 145 152 169]
...
[238 240 243 ... 7 7 7]
[241 239 240 ... 5 5 6]
[241 243 242 ... 4 3 5]]
[[ 83 78 80 ... 86 70 61]
[ 91 90 85 ... 95 93 98]
[ 80 83 80 ... 141 150 168]
...
[176 178 181 ... 14 14 14]
[177 176 178 ... 15 15 17]
[179 180 180 ... 13 13 15]]
[[147 141 143 ... 150 136 128]
[147 149 148 ... 154 149 154]
[141 149 148 ... 178 182 196]
...
[129 131 134 ... 43 43 43]
[130 130 131 ... 45 45 47]
[133 134 133 ... 44 44 46]]]
I am not an image expert by a long shot, and I am struggling to troubleshoot this issue now.
| You need to match the order of dimensions, not only the dtype. PIL.Image expects its RGB images to be of shape hxwx3 - channel dimension last, while pytorch prefers to represent images as 3xhxw - channel dimension first.
Thus you need:
img = Image.fromarray((255*imgs[i]).numpy().astype(np.uint8).transpose(1, 2, 0))
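A self-contained sketch of the whole conversion, with clamping added as a safety measure against values slightly outside [0, 1] (the clamp is an addition, not part of the original snippet):
import torch
from PIL import Image

t = torch.rand(3, 256, 256)                    # CHW float tensor with values in [0, 1]
arr = (255 * t).clamp(0, 255).byte().numpy()   # uint8, still channel-first
img = Image.fromarray(arr.transpose(1, 2, 0))  # HWC order, as PIL expects
print(img.size)                                # (256, 256)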
| https://stackoverflow.com/questions/61912083/ |
Pytorch isn't found by my code using import torch | I have installed torch using pip. And when I try to run
import torch
I get an error saying
Traceback (most recent call last):
File "C:\Users\leoga\Documents\AI Image Classifier\Classifier.py", line 2, in <module>
import torch
File "C:\Python38\lib\site-packages\torch\__init__.py", line 81, in <module>
ctypes.CDLL(dll)
File "C:\Python38\lib\ctypes\__init__.py", line 373, in __init__
self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Python38\lib\site-packages\torch\lib\caffe2_nvrtc.dll'. Try using the full path with constructor syntax.
I am using python 3.8
| Have you had python 2.x installed on your computer? It is possible that you need to use
pip3 install torch
If you use just pip instead of pip3, it could install the library into the wrong version of Python. If you are on a Mac, this is probably the case, as Python 2 comes preinstalled on the machine.
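A related check (an addition to the answer, not from it): invoking pip through the exact interpreter you run your script with guarantees the package lands in that interpreter's site-packages:
python -m pip install torch
python -c "import torch; print(torch.__version__)"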
| https://stackoverflow.com/questions/61914525/ |
Is there a PyTorch tensor function that reduces dimension in regular patterns | Sorry for the poor wording of the title. What I wanted to do is like this:
Matrix 1 is the original matrix, and matrix 2 is matrix 1 but with every even column and row taken out. Matrix 3 is matrix 1 but only has 1 (mod 3) columns and rows. Matrix 4 is the same, with 1 (mod 4) columns and rows. Matrix 5 has 1 (mod 2) columns and all rows.
Is there a PyTorch function that manipulates tensors in this way that is fast and can utilize the GPU? This is sort of like MaxPool2d, however I just need the first value and not the max. If there aren't any functions like that, is there a way to do it manually but still fast?
| Matrix 5 is the easiest to show, because you only need to slice along one dimension. But you can slice along both to get the other results.
matrix5 = matrix1[:, ::2]
This notation takes every second column, starting at the zeroth.
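The other matrices from the question can be produced the same way, by slicing with a step along both dimensions. A minimal sketch (assuming the kept rows/columns start at index 0; shift the start index if you want them to start at 1) - slices are views, so they are cheap and run on the GPU if the tensor is on the GPU:
import torch
matrix1 = torch.arange(12 * 12).view(12, 12)
matrix2 = matrix1[::2, ::2]  # every 2nd row and column
matrix3 = matrix1[::3, ::3]  # every 3rd row and column
matrix4 = matrix1[::4, ::4]  # every 4th row and column
matrix5 = matrix1[:, ::2]    # every 2nd column, all rows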
| https://stackoverflow.com/questions/61920798/ |
How to load your image dataset in python | I have a folder (on my windows desktop) containing the images I want to use to build my deep learning classifier. I also have one .csv file which has the image number (for example img_1035) and the corresponding class label. How do I load the dataset with the labels into python/jupyter notebooks?
This is the link to the dataset on kaggle (https://www.kaggle.com/debdoot/bdrw).
I would preferably like to use PyTorch to do this but any other ways would also be highly appreciated.
| Luckily, PyTorch has a convenient "ImageFolder" class that you can extend to create your own dataset.
Here's an example of a dataset that uses ImageFolder:
class MyDataset(torchvision.datasets.ImageFolder):
def __init__(self, train_folder_path='.', transform=None, target_transform=None):
super().__init__(train_folder_path, transform, target_transform)
# [ Some functions omitted ]
Then you load your set using PyTorch's "DataLoader".
Here's an example for a training set:
training_set = MyDataset(root_path, transform)
train_loader = torch.utils.data.DataLoader(training_set, batch_size=batch_size, shuffle=True)
Using the train loader you can get batches from your dataset. You can then use these batches to train / validate and so on:
batch = next(iter(train_loader))
images, labels = batch
Training is a rather involved process so I'm not entirely sure how deep you want to dive here. I hope this was a nudge in the right direction.
| https://stackoverflow.com/questions/61929181/ |
Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision' | I've been trying to deploy my model to the AI platform for Prediction through the console on my vm instance, but I've gotten the error "(gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision' (Error code: 0)"
I need to include both torch and torchvision. I followed the steps in this question Cannot deploy trained model to Google Cloud Ai-Platform with custom prediction routine: Model requires more memory than allowed, but I couldn't fetch the files pointed to by user gogasca. I tried downloading this .whl file from Pytorch website and uploading it to my cloud storage but got the same error that there is no module torchvision, even though this version is supposed to include both torch and torchvision. Also tried using Cloud AI compatible packages here, but they don't include torchvision.
I tried pointing to two separate .whl files for torch and torchvision in the --package-uris arguments, those point to files in my cloud storage, but then I got the error that the memory capacity was exceeded. This is strange, because collectively their size is around 130Mb. An example of my command that resulted in absence of torchvision looked like this:
gcloud beta ai-platform versions create version_1 \
--model online_pred_1 \
--runtime-version 1.15 \
--python-version 3.7 \
--origin gs://BUCKET/model-dir \
--package-uris gs://BUCKET/staging-dir/my_package-0.1.tar.gz,gs://BUCKET/torchvision-dir/torch-1.4.0+cpu-cp37-cp37m-linux_x86_64.whl \
--prediction-class predictor.MyPredictor
I've tried pointing to different combinations of .whl files that I obtained from different sources, but got either the no module error or not enough memory. I don't understand how the modules interact in this case and why the compiler thinks there is no such module. How can I resolve this? Or alternatively, how can I compile a package myself that include both torch and torchvision. Can you please give detailed answers because I'm not very familiar with package management and bash scripting.
Here's the code I used, torch_model.py:
from torch import nn
class EthnicityClassifier44(nn.Module):
def __init__(self, num_classes=2):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=1, padding=3)
self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv22 = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)
self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
self.maxpool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv4 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
self.maxpool4 = nn.MaxPool2d(kernel_size=2, stride=2)
self.relu = nn.ReLU(inplace=False)
self.fc1 = nn.Linear(8*8*128, 128)
self.fc2 = nn.Linear(128, 128)
self.fc4 = nn.Linear(128, num_classes)
def forward(self, x):
x = self.relu(self.conv1(x))
x = self.maxpool1(x)
x = self.relu(self.conv22(x))
x = self.maxpool2(x)
x = self.maxpool3(self.relu(self.conv3(x)))
x = self.maxpool4(self.relu(self.conv4(x)))
x = self.relu(self.fc1(x.view(x.shape[0], -1)))
x = self.relu(self.fc2(x))
x = self.fc4(x)
return x
This is predictor_py:
from facenet_pytorch import MTCNN, InceptionResnetV1, extract_face
import torch
import torchvision
from torchvision import transforms
from torch.nn import functional as F
from PIL import Image
from sklearn.externals import joblib
import numpy as np
import os
import torch_model
class MyPredictor(object):
import torch
import torchvision
def __init__(self, model, preprocessor, device):
"""Stores artifacts for prediction. Only initialized via `from_path`.
"""
self._resnet = model
self._mtcnn_mult = preprocessor
self._device = device
self.get_std_tensor = transforms.Compose([
np.float32,
np.uint8,
transforms.ToTensor(),
])
self.tensor2pil = transforms.ToPILImage(mode='RGB')
self.trans_resnet = transforms.Compose([
transforms.Resize((100, 100)),
np.float32,
transforms.ToTensor()
])
def predict(self, instances, **kwargs):
pil_transform = transforms.Resize((512, 512))
imarr = np.asarray(instances)
pil_im = Image.fromarray(imarr)
image = pil_im.convert('RGB')
pil_im_512 = pil_transform(image)
boxes, _ = self._mtcnn_mult(pil_im_512)
box = boxes[0]
face_tensor = extract_face(pil_im_512, box, margin=40)
std_tensor = self.get_std_tensor(face_tensor.permute(1, 2, 0))
cropped_pil_im = self.tensor2pil(std_tensor)
face_tensor = self.trans_resnet(cropped_pil_im)
face_tensor4d = face_tensor.unsqueeze(0)
face_tensor4d = face_tensor4d.to(self._device)
prediction = self._resnet(face_tensor4d)
preds = F.softmax(prediction, dim=1).detach().numpy().reshape(-1)
print('probability of (class1, class2) = ({:.4f}, {:.4f})'.format(preds[0], preds[1]))
return preds.tolist()
@classmethod
def from_path(cls, model_dir):
import torch
import torchvision
import torch_model
model_path = os.path.join(model_dir, 'class44_M40RefinedExtra_bin_no_norm_7860.joblib')
classifier = joblib.load(model_path)
mtcnn_path = os.path.join(model_dir, 'mtcnn_mult.joblib')
mtcnn_mult = joblib.load(mtcnn_path)
device_path = os.path.join(model_dir, 'device_cpu.joblib')
device = joblib.load(device_path)
return cls(classifier, mtcnn_mult, device)
And setup.py:
from setuptools import setup
REQUIRED_PACKAGES = ['opencv-python-headless', 'facenet-pytorch']
setup(
name="my_package",
version="0.1",
include_package_data=True,
scripts=["predictor.py", "torch_model.py"],
install_requires=REQUIRED_PACKAGES
)
| The solution was to place the following packages in the setup.py file for the custom prediction code:
REQUIRED_PACKAGES = ['torchvision==0.5.0', 'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl', 'opencv-python', 'facenet-pytorch']
I then had a different problem with custom class instantiation, but this article explains it well. So I was able to successfully deploy my model to the AI Platform for prediction.
| https://stackoverflow.com/questions/61933879/ |
Where is the PatchGAN implementation in the official PyTorch CycleGAN repo? | There's mention in the paper and in the code comments of a 70x70 PatchGAN, although I can't find anywhere in the code this is explicitly implemented.
Does anyone know how it is implemented?
| I've figured it out. After the last conv layer of the PatchGAN (before average pool) the receptive field size is 70. So each neuron on the single channel feature map (which is 30x30) coming out of that conv layer has information from a 70x70 patch of the input. The corresponding patches overlap one another on the input.
| https://stackoverflow.com/questions/61938742/ |
Two questions on DCGAN: data normalization and fake/real batch | I am analyzing a meta-learning class that uses DCGAN + Reptile within the image generation.
I have two questions about this code.
First question: why during DCGAN training (line 74)
training_batch = torch.cat ([real_batch, fake_batch])
is a training_batch created that is made up of real examples (real_batch) and fake examples (fake_batch)? Why is training done by mixing real and fake images? I have seen many DCGANs, but never with training done in this way.
The second question: why is the normalize_data function (line 49) and the unnormalize_data function (line 55) used during training?
def normalize_data(data):
data *= 2
data -= 1
return data
def unnormalize_data(data):
data += 1
data /= 2
return data
The project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations?
| Training GANs involves giving the discriminator real and fake examples. Usually, you will see that they are given on two separate occasions. By default torch.cat concatenates the tensors on the first dimension (dim=0), which is the batch dimension. Therefore it just doubles the batch size, where the first half are the real images and the second half the fake images.
To calculate the loss, they adapt the targets, such that the first half (original batch size) is classified as real, and the second half is classified as fake. From initialize_gan:
self.discriminator_targets = torch.tensor([1] * self.batch_size + [-1] * self.batch_size, dtype=torch.float, device=device).view(-1, 1)
Images are represented with float values between [0, 1]. The normalisation changes that to produce values between [-1, 1]. GANs generally use tanh in the generator, therefore the fake images have values between [-1, 1], hence the real images should be in the same range, otherwise it would be trivial for the discriminator to distinguish the fake images from the real ones.
If you want to display these images, you need to unnormalise them first, i.e. convert them to values between [0, 1].
The project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations?
No, you don't need to change them, because images in colour also have their values between [0, 1], there are simply more values, representing the 3 channels (RGB).
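For example, a minimal sketch of displaying one generated sample, assuming a fake_batch tensor in [-1, 1] with shape [batch, channels, height, width] and the unnormalize_data function from the question (for a 3-channel dataset such as CIFAR10):
import matplotlib.pyplot as plt

img = unnormalize_data(fake_batch[0].detach().cpu())  # back to [0, 1]
plt.imshow(img.permute(1, 2, 0).numpy())              # channels last for imshow
plt.show()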
| https://stackoverflow.com/questions/61939491/ |
Pad torch tensors of different sizes to be equal | I am looking for a way to take an image/target batch for segmentation and return the batch where the image dimensions have been changed to be equal for the whole batch. I have tried this using the code below:
def collate_fn_padd(batch):
'''
Padds batch of variable length
note: it converts things ToTensor manually here since the ToTensor transform
assume it takes in images rather than arbitrary tensors.
'''
# separate the image and masks
image_batch,mask_batch = zip(*batch)
# pad the images and masks
image_batch = torch.nn.utils.rnn.pad_sequence(image_batch, batch_first=True)
mask_batch = torch.nn.utils.rnn.pad_sequence(mask_batch, batch_first=True)
# rezip the batch
batch = list(zip(image_batch, mask_batch))
return batch
However, I get this error:
RuntimeError: The expanded size of the tensor (650) must match the existing size (439) at non-singleton dimension 2. Target sizes: [3, 650, 650]. Tensor sizes: [3, 406, 439]
How do I efficiently pad the tensors to be of equal dimensions and avoid this issue?
| rnn.pad_sequence only pads the sequence dimension, it requires all other dimensions to be equal. You cannot use it to pad images across two dimensions (height and width).
To pad an image torch.nn.functional.pad can be used, but you need to manually determine the height and width it needs to get padded to.
import torch.nn.functional as F
# Determine maximum height and width
# The mask's have the same height and width
# since they mask the image.
max_height = max([img.size(1) for img in image_batch])
max_width = max([img.size(2) for img in image_batch])
image_batch = [
# The needed padding is the difference between the
# max width/height and the image's actual width/height.
F.pad(img, [0, max_width - img.size(2), 0, max_height - img.size(1)])
for img in image_batch
]
mask_batch = [
# Same as for the images, but there is no channel dimension
# Therefore the mask's width is dimension 1 instead of 2
F.pad(mask, [0, max_width - mask.size(1), 0, max_height - mask.size(0)])
for mask in mask_batch
]
The padding lengths are specified in reverse order of the dimensions, where every dimension has two values, one for the padding at the beginning and one for the padding at the end. For an image with the dimensions [channels, height, width] the padding is given as: [width_beginning, width_end, height_beginning, height_end], which can be reworded to [left, right, top, bottom]. Therefore the code above pads the images to the right and bottom. The channels are left out, because they are not being padded, which also means that the same padding could be directly applied to the masks.
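To tie this back to the collate_fn from the question, a minimal sketch could stack the padded lists into batch tensors before returning them (returning stacked tensors instead of a zipped list is a design choice; adjust to whatever your training loop expects):
# inside collate_fn_padd, after the padding shown above
image_batch = torch.stack(image_batch)  # [batch, channels, max_height, max_width]
mask_batch = torch.stack(mask_batch)    # [batch, max_height, max_width]
return image_batch, mask_batch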
| https://stackoverflow.com/questions/61943896/ |
PyTorch LSTM dimension | I am trying to teach my first LSTM with numeric sequence data to predict a singular value.
My training set is a numpy matrix shape=207x7 where each input vector has 7 features.
I am not sure how to set up and train my LSTM properly. I have CNN experience, but this is my first LSTM.
class LSTM_NN(nn.Module):
"""
Stores the network format
"""
def __init__(self, input_size=7, hidden_layer_size=100, output_size=1):
super(self.__class__, self).__init__()
self._hidden_layer_size = hidden_layer_size
self._lstm = nn.LSTM(input_size, hidden_layer_size)
self._hidden_cell = (torch.zeros(1, 1, self._hidden_layer_size),
torch.zeros(1, 1, self._hidden_layer_size))
self._linear = nn.Linear(hidden_layer_size, output_size)
def forward(self, input_data):
"""
Forward propagation
"""
lstm_out, self._hidden_cell = self._lstm(
torch.tensor(np.expand_dims(input_data, 0)),
self._hidden_cell)
training_data.shape
# (200, 7)
model = LSTM_NN(input_size=training_data.shape[1])
model.forward(training_data)
But I get this error:
Expected hidden[0] size (1, 200, 100), got (1, 1, 100)
File "train_lstm.py", line 44, in forward
| Your input has size [batch_size, seq_len, num_features] = [1, 200, 7]. The LSTM on the other hand, expects the input to have size [seq_len, batch_size, num_features] (as described in nn.LSTM - Inputs).
You can either change the dimensions of your input, or you can set batch_first=True when creating the LSTM, if you prefer having batch size as the first dimension, in which case batch_size and seq_len are swapped and your current input size would be the expected one.
self._lstm = nn.LSTM(input_size, hidden_layer_size, batch_first=True)
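Alternatively, a minimal sketch of the first option (keeping batch_first=False and reshaping the input instead; names follow the question's code and this is meant to live inside the forward method):
# input_data has shape [200, 7]; the default LSTM layout is [seq_len, batch, features]
sequence = torch.tensor(np.expand_dims(input_data, 1), dtype=torch.float32)    # [200, 1, 7]
lstm_out, self._hidden_cell = self._lstm(sequence, self._hidden_cell)          # hidden stays [1, 1, 100]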
| https://stackoverflow.com/questions/61945936/ |
CPU version of "torch._C._nn.nll_loss" function | Is there a function for torch._C._nn.nll_loss that takes in a CPU input? I don't have enough GPU memory to run my function so I'm trying to run everything on CPU.
This is my specific error (look at the anaconda files)
Traceback (most recent call last):
File "plot_parametric_pytorch.py", line 395, in <module>
val_result = validate(val_loader, model, criterion, 0)
File "plot_parametric_pytorch.py", line 228, in validate
training=False, optimizer=None)
File "plot_parametric_pytorch.py", line 169, in forward
loss = criterion(output, target_var)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2115, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _thnn_nll_loss_forward
| nll_loss works for both CPU and GPU, but the input and the target need to be on the same device. Yours are on different devices, where the first one (output) is on the CPU, but the second (target_var) is on the GPU.
You need to put target_var onto the CPU.
loss = criterion(output, target_var.cpu())
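Since the goal is to run everything on the CPU, another minimal sketch is to move the model and both tensors to the CPU up front, so no per-call .cpu() is needed (the names follow the traceback and are placeholders for your actual variables):
device = torch.device('cpu')
model = model.to(device)
output = model(images.to(device))
loss = criterion(output, target_var.to(device))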
| https://stackoverflow.com/questions/61948655/ |
Is the forward definition of a model executed sequentially in PyTorch or in parallel? | I wanted to know if instructions in forward definition of deep models class are executed sequentially? For example:
class Net(nn.Module):
...
def forward(self,x):
#### Group 1
y = self.conv1(x)
y = self.conv2(y)
y = self.conv3(y)
### Group 2
z = self.conv4(x)
z = self.conv5(z)
z = self.conv6(z)
out = torch.cat((y,z),dim=1)
return out
In this case Group1 and Group2 instructions can be parallelized. But will the forward definition understand this automatically or will they be executed sequentially? If no, then how to run them in parallel?
I am running PyTorch 1.3.1
Thank you very much
| They are executed sequentially, only the calculations of the operations are parallelised. As far as I'm aware, there is no direct way to let them run in parallel by PyTorch.
I'm assuming that you are expecting a performance improvement from running them in parallel, but that would be at best minimal and at worst a lot slower, because operations like convolutions are already heavily parallelised and unless the input is extremely small, all cores will be used permanently. Running multiple convolutions in parallel would result in a lot of context switches, except if you would distribute the available cores evenly, but that wouldn't really make it any faster than doing them sequentially with all cores instead.
You can observe the same behaviour if you run two PyTorch programs at the same time, for example running the following, which has 3 relatively common convolutions and uses 224x224 images (like ImageNet), which is small compared to what other models (e.g. object detection) use:
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
def forward(self, input):
out = self.conv1(input)
out = self.conv2(out)
out = self.conv3(out)
return out
input = torch.randn((10, 3, 224, 224))
model = Model().eval()
# Running it 100 times just to create a microbenchmark
for i in range(100):
out = model(input)
To obtain information about context switches, /usr/bin/time can be used (not the built in time).
/usr/bin/time -v python bench.py
Single run:
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:22.68
Involuntary context switches: 857
Running two instances at the same time:
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:43.69
Involuntary context switches: 456753
To clarify, each of the instances took about 43 seconds, that's not the accumulated time.
| https://stackoverflow.com/questions/61949309/ |
"Can't use matmul on the given tensors" error when converting pytorch to onnx JS | I made a simple pytorch MLP(GAN generator) and converted it to onnx using the tutorial (https://www.youtube.com/watch?v=Vs730jsRgO8), my code is a bit different but I cant catch the error.
class Generator(nn.Module):
def __init__(self, g_input_dim, g_output_dim):
super(Generator, self).__init__()
# g_input = 100
self.net = nn.Sequential(
nn.Linear(g_input_dim, 256),
nn.LeakyReLU(.2),
nn.Linear(256, 512),
nn.LeakyReLU(.2),
nn.Linear(512, 1024),
nn.LeakyReLU(.2),
nn.Linear(1024, 784),
nn.Tanh()
)
# forward method
def forward(self, x):
return self.net(x)
After training I export the model to onnx.
torch.save(G.state_dict(), "pytorch_model.pth")
import torch.onnx
model = Generator(z_dim,mnist_dim)
state_dict = torch.load("pytorch_model.pth")
model.load_state_dict(state_dict)
model.eval()
dummy_input = torch.zeros(100)
torch.onnx.export(model, dummy_input, "onnx_model.onnx", verbose=True)
Which gives the following onnx graph, which seems accurate.
graph(%input.1 : Float(100),
%net.0.bias : Float(256),
%net.2.bias : Float(512),
%net.4.bias : Float(1024),
%net.6.bias : Float(784),
%25 : Float(100, 256),
%26 : Float(256, 512),
%27 : Float(512, 1024),
%28 : Float(1024, 784)):
%10 : Float(256) = onnx::MatMul(%input.1, %25) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1612:0
%11 : Float(256) = onnx::Add(%10, %net.0.bias)
%12 : Float(256) = onnx::LeakyRelu[alpha=0.20000000000000001](%11) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1239:0
%14 : Float(512) = onnx::MatMul(%12, %26) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1612:0
%15 : Float(512) = onnx::Add(%14, %net.2.bias)
%16 : Float(512) = onnx::LeakyRelu[alpha=0.20000000000000001](%15) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1239:0
%18 : Float(1024) = onnx::MatMul(%16, %27) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1612:0
%19 : Float(1024) = onnx::Add(%18, %net.4.bias)
%20 : Float(1024) = onnx::LeakyRelu[alpha=0.20000000000000001](%19) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1239:0
%22 : Float(784) = onnx::MatMul(%20, %28) # /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1612:0
%23 : Float(784) = onnx::Add(%22, %net.6.bias)
%24 : Float(784)
Then I imported the code into javascript.
<html>
<body>
<script src="./onnx.min.js"></script>
<script>
async function test() {
const sess = new onnx.InferenceSession()
await sess.loadModel('./onnx_model.onnx')
const input = new onnx.Tensor(new Float32Array(100), 'float32', [100])
const outputMap = await sess.run([input])
const outputTensor = outputMap.values().next().value
console.log(`Output tensor: ${outputTensor.data}`)
}
test()
</script>
</body>
</html>
I know the input dimension is correct but onnx gives me the following error.
onnx.min.js:8 Uncaught (in promise) Error: Can't use matmul on the given tensors
at e.createProgramInfo (onnx.min.js:8)
at t.run (onnx.min.js:8)
at e.run (onnx.min.js:8)
at t.<anonymous> (onnx.min.js:14)
at onnx.min.js:14
at Object.next (onnx.min.js:14)
at onnx.min.js:14
at new Promise (<anonymous>)
at r (onnx.min.js:14)
at onnx.min.js:14
I also know that matmul is a supported operator with onnx, but I can't figure out how or if my input tensor is correct.
| I think the matmul operator expects the input to be 2 dimensional. It seems to work when I add a batch size dimension to the input (a batch size of 1):
Before: dummy_input = torch.zeros(100)
After: dummy_input = torch.zeros(1, 100)
Before: const input = new onnx.Tensor(new Float32Array(100), 'float32', [100])
After: const input = new onnx.Tensor(new Float32Array(100), 'float32', [1, 100])
| https://stackoverflow.com/questions/61955851/ |
Change input shape dimensions for ResNet model (pytorch) | I want to feed my 3,320,320 pictures into an existing ResNet model. The model actually expects input of size 3,32,32. As I am afraid of losing information, I don't simply want to resize my pictures.
What is the best way to preprocess my images, so that they are able to run on the ResNet34?
Should I add additional layers in the forward method of ResNet? If yes, what would be a suitable combination in my case?
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_fitmodule import FitModule
from torch.autograd import Variable
import numpy as np
def conv3x3(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
class BasicBlock(FitModule):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(in_planes, planes, stride)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = conv3x3(planes, planes)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion * planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(FitModule):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = conv3x3(3, 64)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512 * block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1] * (num_blocks - 1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x): # add additional layers here?
x = x.float()
out = F.relu(self.bn1(self.conv1(x).float()).float())
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def ResNet34():
return ResNet(BasicBlock, [3, 4, 6, 3])
Thanks plenty!
Regards,
Fabian
| If you change your avg_pool operation to 'AdaptiveAvgPool2d' your model will work for any image size.
However with your current setup, your 320x320 images would be 40x40 going into the pooling stage, which is a large feature map to pool over. Consider adding more conv layers.
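For reference, a minimal sketch of that change in the forward method (only the pooling line differs; with a global 1x1 output the flattened size stays 512 * block.expansion, so the final linear layer does not need to change):
def forward(self, x):
    x = x.float()
    out = F.relu(self.bn1(self.conv1(x).float()).float())
    out = self.layer1(out)
    out = self.layer2(out)
    out = self.layer3(out)
    out = self.layer4(out)
    out = F.adaptive_avg_pool2d(out, (1, 1))  # works for any input resolution
    out = out.view(out.size(0), -1)
    out = self.linear(out)
    return out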
| https://stackoverflow.com/questions/61955991/ |
How to mask a 3D tensor with 2D mask and keep the dimensions of original vector? | Suppose, I have a 3D tensor A
A = torch.arange(24).view(4, 3, 2)
print(A)
and require masking it using 2D tensor
mask = torch.zeros((4, 3), dtype=torch.int64) # or dtype=torch.ByteTensor
mask[0, 0] = 1
mask[1, 1] = 1
mask[3, 0] = 1
print('Mask: ', mask)
Using masked_select functionality from PyTorch leads to the following error.
torch.masked_select(X, (mask == 1))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-72-fd6809d2c4cc> in <module>
12
13 # Select based on new mask
---> 14 Y = torch.masked_select(X, (mask == 1))
15 #Y = X * mask_
16 print(Y)
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 2
How to mask a 3D tensor with a 2D mask and keep the dimensions of the original vector? Any hints will be appreciated.
| Essentially, we need to match the dimension of the tensor mask with the tensor being masked.
There are two ways to do it.
Approach 1: Does not preserve original tensor dimensions.
X = torch.arange(24).view(4, 3, 2)
print(X)
mask = torch.zeros((4, 3), dtype=torch.int64) # or dtype=torch.ByteTensor
mask[0, 0] = 1
mask[1, 1] = 1
mask[3, 0] = 1
print('Mask: ', mask)
# Add a dimension to the mask tensor and expand it to the size of original tensor
mask_ = mask.unsqueeze(-1).expand(X.size())
print(mask_)
# Select based on the new expanded mask
Y = torch.masked_select(X, (mask_ == 1)) # does not preserve the dims
print(Y)
The output for approach 1:
tensor([ 0, 1, 8, 9, 18, 19])
Approach 2: Preserves the original tensor dimensions (by padding).
X = torch.arange(24).view(4, 3, 2)
print(X)
mask = torch.zeros((4, 3), dtype=torch.int64) # or dtype=torch.ByteTensor
mask[0, 0] = 1
mask[1, 1] = 1
mask[3, 0] = 1
print('Mask: ', mask)
# Add a dimension to the mask tensor and expand it to the size of original tensor
mask_ = mask.unsqueeze(-1).expand(X.size())
print(mask_)
# Select based on the new expanded mask
Y = X * mask_
print(Y)
The output for approach 2:
tensor([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]],
[[18, 19],
[20, 21],
[22, 23]]])
Mask: tensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 0],
[1, 0, 0]])
tensor([[[1, 1],
[0, 0],
[0, 0]],
[[0, 0],
[1, 1],
[0, 0]],
[[0, 0],
[0, 0],
[0, 0]],
[[1, 1],
[0, 0],
[0, 0]]])
tensor([[[ 0, 1],
[ 0, 0],
[ 0, 0]],
[[ 0, 0],
[ 8, 9],
[ 0, 0]],
[[ 0, 0],
[ 0, 0],
[ 0, 0]],
[[18, 19],
[ 0, 0],
[ 0, 0]]]
| https://stackoverflow.com/questions/61956893/ |
concatenating two tensors in pytorch (with a twist) | I have a word embedding (of type tensor) of size torch.Size([8, 768]) stored in the variable embeddings, which looks something like this:
tensor([[-0.0687, -0.1327, 0.0112, ..., 0.0715, -0.0297, -0.0477],
[ 0.0115, -0.0029, 0.0323, ..., 0.0277, -0.0297, -0.0599],
[ 0.0760, 0.0788, 0.1640, ..., 0.0574, -0.0805, 0.0066],
...,
[-0.0110, -0.1773, 0.1143, ..., 0.1397, 0.3021, 0.1670],
[-0.1379, -0.0294, -0.0026, ..., -0.0966, -0.0726, 0.1160],
[ 0.0466, -0.0113, 0.0283, ..., -0.0735, 0.0496, 0.0963]],
grad_fn=<IndexBackward>)
Now, I wish to take mean of some embeddings and place the mean back in the tensor.
For example,(I'll explain with the help of list and not tensor)
a = [1,2,3,4,5]
output = [1.5, 3, 4, 5]
So, here I have taken mean of 1 and 2 and then placed in the list output by shifting the elements to the left in the list. I want to do the same thing for tensors as well.
I have the index stored in variable i from where I need to take average and j variable is being used for the stopping index. Now, let's look into the code :-
if i != len(embeddings):
sum = 0
count = 0
#Calculating sum
for x in range(i-1, j):
sum += text_index[x]
count += 1
avg = sum/count
#Inserting the average in place of the other embeddings
embeddings = embeddings[:i-1] + [avg] + embeddings[j:]
else :
pass
Now, I am getting an error at this line embeddings = embeddings[:i-1] + [avg] + embeddings[j:]
The error is :-
TypeError: unsupported operand type(s) for +: 'Tensor' and 'list'
Now, I understand that the above code would have worked well if embeddings was a list but it is a tensor. How do I do it?
NOTE:
1. embeddings.shape: torch.Size([8, 768])
2. avg is of type float
| To concatenate multiple tensors you can use torch.cat, where the list of tensors is concatenated across the specified dimension. That requires that all tensors have the same number of dimensions, and all dimensions except the one that they are concatenated on need to have the same size.
Your embeddings has size [8, 768], therefore the left part and the right part will have size [num_left, 768] and [num_right, 768] respectively. And avg has size [768] (it's a tensor, not a single float) since you averaged multiple embeddings into one. In order to concatenate them with the two other parts, it needs to have size [1, 768], so that it can be concatenated on the first dimension to create a tensor of size [num_left + 1 + num_right, 768]. The singular first dimension can be added with torch.unsqueeze.
embeddings = torch.cat([embeddings[:i-1], avg.unsqueeze(0), embeddings[j:]], dim=0)
The for loop can also be replaced by slicing the tensor and taking the mean with torch.mean.
# keepdim=True keeps the dimension that the average is taken on
# So the output has size [1, 768] instead of [768]
avg = torch.mean(embeddings[i-1:j], dim=0, keepdim=True)
embeddings = torch.cat([embeddings[:i-1], avg, embeddings[j:]], dim=0)
| https://stackoverflow.com/questions/61956923/ |
Using LSTM stateful for passing context b/w batches; may be some error in context passing, not getting good results? | I have checked the data before giving it to the network. The data is correct.
Using LSTM and passing the context between batches. per_class_accuracy is changing, but the loss is not going down. I've been stuck for a while and am not sure if there is an error in the code.
I have multi-class classification problem based upon an imbalanced dataset
Dataset_type: CSV
Dataset_size: 20000
Based upon CSV data of sensors
X = 0.6986111111111111,0,0,1,0,1,0,0,0,1,0,0,0,0,1,0,0,0,1,1,0,0,0
Y = leaveHouse
Per class accuracy:
{'leaveHouse': 0.34932855, 'getDressed': 1.0, 'idle': 0.8074534, 'prepareBreakfast': 0.8, 'goToBed': 0.35583413, 'getDrink': 0.0, 'takeShower': 1.0, 'useToilet': 0.0, 'eatBreakfast': 0.8857143}
Training:
# Using loss weights, the inverse of class frequency
criterion = nn.CrossEntropyLoss(weight = class_weights)
hn, cn = model.init_hidden(batch_size)
for i, (input, label) in enumerate(trainLoader):
hn.detach_()
cn.detach_()
input = input.view(-1, seq_dim, input_dim)
if torch.cuda.is_available():
input = input.float().cuda()
label = label.cuda()
else:
input = input.float()
label = label
# Forward pass to get output/logits
output, (hn, cn) = model((input, (hn, cn)))
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(output, label)#weig pram
running_loss += loss
loss.backward() # Backward pass
optimizer.step() # Now we can do an optimizer step
optimizer.zero_grad() # Reset gradients tensors
Network
class LSTMModel(nn.Module):
def init_hidden(self, batch_size):
self.batch_size = batch_size
if torch.cuda.is_available():
hn = torch.zeros(self.layer_dim, self.batch_size, self.hidden_dim).cuda()
# Initialize cell state
cn = torch.zeros(self.layer_dim, self.batch_size, self.hidden_dim).cuda()
else:
hn = torch.zeros(self.layer_dim, self.batch_size, self.hidden_dim)
# Initialize cell state
cn = torch.zeros(self.layer_dim, self.batch_size, self.hidden_dim)
return hn, cn
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, seq_dim):
super(LSTMModel, self).__init__()
# Hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.layer_dim = layer_dim
self.input_dim = input_dim
# Building your LSTM
# batch_first=True causes input/output tensors to be of shape
# (batch_dim, seq_dim, feature_dim)
self.lstm = nn.LSTM(self.input_dim, hidden_dim, layer_dim, batch_first=True)
# Readout layer
self.fc = nn.Linear(hidden_dim, output_dim)
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
self.seq_dim = seq_dim
def forward(self, inputs):
# Initialize hidden state with zeros
input, (hn, cn) = inputs
input = input.view(-1, self.seq_dim, self.input_dim)
# time steps
out, (hn, cn) = self.lstm(input, (hn, cn))
# Index hidden state of last time step
out = self.fc(out[:, -1, :])
out = self.softmax(out)
return out, (hn,cn)
| One problem you might have is CrossEntropyLoss combines a log softmax operation with negative log likelihood loss, but you're applying a softmax in your model. You should pass the raw logits out of the final layer to CrossEntropyLoss.
Also, I can't say for certain without seeing the model's forward pass, but it looks like you're applying the softmax on dimension 1 to a tensor that (I'm inferring) has shape batch_size, sequence_length, output_dim, when you should be applying it along the output dim.
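A minimal sketch of the first fix (return raw logits and let CrossEntropyLoss apply log-softmax internally; names follow the question's code):
def forward(self, inputs):
    input, (hn, cn) = inputs
    input = input.view(-1, self.seq_dim, self.input_dim)
    out, (hn, cn) = self.lstm(input, (hn, cn))
    out = self.fc(out[:, -1, :])  # raw logits - no softmax before CrossEntropyLoss
    return out, (hn, cn)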
| https://stackoverflow.com/questions/61960385/ |
Does the reduction of the dimensions over multiple layers allow more details to be stored within the final representation? | From : https://debuggercafe.com/implementing-deep-autoencoder-in-pytorch/
the following autoencoder is defined
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder, self).__init__()
# encoder
self.enc1 = nn.Linear(in_features=784, out_features=256)
self.enc2 = nn.Linear(in_features=256, out_features=128)
self.enc3 = nn.Linear(in_features=128, out_features=64)
self.enc4 = nn.Linear(in_features=64, out_features=32)
self.enc5 = nn.Linear(in_features=32, out_features=16)
# decoder
self.dec1 = nn.Linear(in_features=16, out_features=32)
self.dec2 = nn.Linear(in_features=32, out_features=64)
self.dec3 = nn.Linear(in_features=64, out_features=128)
self.dec4 = nn.Linear(in_features=128, out_features=256)
self.dec5 = nn.Linear(in_features=256, out_features=784)
def forward(self, x):
x = F.relu(self.enc1(x))
x = F.relu(self.enc2(x))
x = F.relu(self.enc3(x))
x = F.relu(self.enc4(x))
x = F.relu(self.enc5(x))
x = F.relu(self.dec1(x))
x = F.relu(self.dec2(x))
x = F.relu(self.dec3(x))
x = F.relu(self.dec4(x))
x = F.relu(self.dec5(x))
return x
net = Autoencoder()
From the Autoencoder class, we can see that 784 features are passed through a series of transformations and are converted to 16 features.
The transformations (in_features to out_features) for each layer are:
784 to 256
256 to 128
128 to 64
64 to 32
32 to 16
Why do we perform this sequence of operations? For example, why don't we perform the following sequence of operations instead?
784 to 256
256 to 128
Or maybe
784 to 512
512 to 256
256 to 128
Or maybe just encode in two layers:
784 to 16
Does the reduction of the dimensions over multiple layers (instead of a single layer) allow more details to be stored within the final representation? For example, if we used only the transformation 784 to 16, might this cause some detail not to be encoded? If so, why is this the case?
| Well yes. Reduction of dimensions over multiple layers allows the model to learn more information than it would otherwise. You could do an experiment with it too; change the autoencoder to be much shallower and see what representation it learns!
In any case, think of it in simpler terms: Let's say you have a regression system. As opposed to a simple linear transformation like y = a0 + a1.x, a regression that uses higher degree polynomials to approximate the task has much more approximation power, for a lack of better terms. Which is why they tend to overfit easily; they're approximating the training data too perfectly and thus have trouble generalizing later. However, as a rule of thumb, an overfitting model may be considered as a more preferable scenario than an underfitting one: because you can regularize the overfitting model to generalize better, but the underfitting model may simply not be capable of approximating your task.
Now let's think of a deep neural network. At it's core, a neural network is also a highly complicated function that helps you approximate a task. If you have a linear layer that maps 784 input features to 16 output features, it's basically doing a simple transformation: ReLU(X.W+B). But if you're passing through multiple layers, your transformation becomes something like ReLU((ReLU((ReLU(X.W1+B1)).W2+B2)).W3+B3) and so on. It's a much more complex function with higher approximation power, which means it's liable to better approximate your task, and learn a better representation. This concept holds true for Deep Learning in general, and even truer for convolutional layers in particular. You'll notice most deep learning model architectures are basically layers or blocks repeatedly stacked one after another.
| https://stackoverflow.com/questions/61960784/ |
PyTorch tensor slice and memory usage | import torch
T = torch.FloatTensor(range(0,10 ** 6)) # 1M
#case 1:
torch.save(T, 'junk.pt')
# results in a 4 MB file size
#case 2:
torch.save(T[-20:], 'junk2.pt')
# results in a 4 MB file size
#case 3:
torch.save(torch.FloatTensor(T[-20:]), 'junk3.pt')
# results in a 4 MB file size
#case 4:
torch.save(torch.FloatTensor(T[-20:].tolist()), 'junk4.pt')
# results in a 405 Bytes file size
My questions are:
In case 3 the resulting file size seems surprising as we are creating a new tensor. Why is this new tensor not just the slice?
Is case 4, the optimal method for saving just part (slice) of a tensor?
More generally, if I want to 'trim' a very large 1-dimensional tensor by removing the first half of its values in order to save memory, do I have to proceed as in case 4, or is there a more direct and less computationally costly way that does not involve creating a python list.
|
(i) In case 3 the resulting file size seems surprising as we are creating a new tensor. Why is this new tensor not just the slice?
Slicing creates a view of the tensor, which shares the underlying data but contains information about the memory offsets used for the visible data. This avoids having to copy the data frequently, which makes a lot of operations much more efficient. See PyTorch - Tensor Views for a list of affected operations.
You are dealing with one of the few cases, where the underlying data matters. To save the tensor, it needs to save the underlying data, otherwise the offsets would no longer be valid.
torch.FloatTensor does not create a copy of the tensor, if it's not necessary. You can verify that their underlying data is still the same (they have the exact same memory location):
torch.FloatTensor(T[-20:]).storage().data_ptr() == T.storage().data_ptr()
# => True
(ii) Is case 4, the optimal method for saving just part (slice) of a tensor?
(iii) More generally, if I want to 'trim' a very large 1-dimensional tensor by removing the first half of its values in order to save memory, do I have to proceed as in case 4, or is there a more direct and less computationally costly way that does not involve creating a python list.
You will most likely not get around copying the data of the slice, but at least you can avoid creating a Python list from it and creating a new tensor from the list, by using torch.Tensor.clone instead:
torch.save(T[-20:].clone(), 'junk5.pt')
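You can verify that the clone no longer shares its underlying data with T, which is why only the 20 values end up in the file:
T[-20:].clone().storage().data_ptr() == T.storage().data_ptr()
# => False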
| https://stackoverflow.com/questions/61964164/ |
Passing embedded sequence to LSTM and getting TypeError: 'int' object is not subscriptable | I have some really basic pytorch code here where I'm trying to test running an input tensor through what will eventually become my forward function.
Goal: Treat the a sentence as a single input sequence after embedding each word number.
Embed a tensor
Convert that embedding back to a float32 tensor
Reshape embedding to shape (batch_size, seq_len, input_size)
Pass through lstm.
I've converted back to a float32 tensor after embedding so idk why I'm getting this error.
hidden_size=10
embedding = nn.Embedding(VOC.n_words, hidden_size)
lstm = nn.LSTM(hidden_size, hidden_size, # Will output 2x hidden size
num_layers=1, dropout=0.5,
bidirectional=True, batch_first=True)
print("Input tensor",idx_sentence)
# Forward test
embedded = embedding(idx_sentence.long())
embedded = torch.tensor(embedded, dtype=torch.float32)
print(f"embedding: {embedded.size()}")
# reshape to (batch_size, seq_len, input_size)
sequence = embedded.view(1,-1,hidden_size)
print(f"sequence shape: {sequence.size()}")
output, hidden = lstm(sequence, hidden_size)
print(f"output shape: {output.size()}")
Input tensor tensor([ 3., 20., 21., 90., 9.])
embedding: torch.Size([5, 10])
sequence shape: torch.Size([1, 5, 10])
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:10: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
# Remove the CWD from sys.path while we load stuff.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-116-ab3d6ed0e51c> in <module>()
16
17 # Input have shape (seq_len, batch, input_size)
---> 18 output, hidden = lstm(sequence, hidden_size)
19 print(f"output shape: {output.size()}")
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
520 expected_hidden_size = self.get_expected_hidden_size(input, batch_sizes)
521
--> 522 self.check_hidden_size(hidden[0], expected_hidden_size,
523 'Expected hidden[0] size {}, got {}')
524 self.check_hidden_size(hidden[1], expected_hidden_size,
TypeError: 'int' object is not subscriptable
| The LSTM accepts two inputs, as described in nn.LSTM - Inputs:
input: the input sequences
(h_0, c_0): a tuple with the initial hidden state h_0 and the initial cell state c_0.
But you are passing hidden_size as the second argument, which is an int and not a tuple. When the tuple is unpacked it fails, since hidden_size[0] does not work, as integers cannot be indexed.
The second argument is optional and if you don't provide it, the hidden and cell states will default to zero. That's usually what you want, therefore you can leave it off:
output, hidden = lstm(sequence)
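If you do want to pass explicit initial states, they must be a tuple of two tensors of shape (num_layers * num_directions, batch, hidden_size); for the bidirectional, single-layer, batch-first LSTM defined above that would look roughly like this:
h_0 = torch.zeros(2, 1, hidden_size)  # 1 layer * 2 directions, batch of 1
c_0 = torch.zeros(2, 1, hidden_size)
output, (h_n, c_n) = lstm(sequence, (h_0, c_0))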
| https://stackoverflow.com/questions/61964672/ |
Generating pytorch's theta from affine transform matrix | I'm trying to perform a rigid + scale transformation on a 3D volume with pytorch, but I can't seem to understand how the theta required for torch.nn.functional.affine_grid works.
I have a transformation matrix of size (1,4,4) generated by multiplying the matrices Translation * Scale * Rotation. If I use this matrix in, for example, scipy.ndimage.affine_transform, it works with no issues. However, the same matrix (cropped to size (1,3,4)) fails completely with torch.nn.functional.affine_grid.
I have managed to understand how the translation works (range -1 to 1) and I have confirmed that the Translation matrix works by simply normalizing the values to that range. As for the other two, I am lost.
I tried using a basic Scaling matrix alone (below) as a most basic comparison but the results in pytorch are different than that of scipy
Scaling =
[[0.75, 0, 0, 0],
 [0, 0.75, 0, 0],
 [0, 0, 0.75, 0],
 [0, 0, 0, 1]]
How can I convert the (1,4,4) affine matrix to work the same with torch.nn.functional.affine_grid? Alternatively, is there a way to generate the correct matrix based on the transformation parameters (shift, euler angles, scaling)?
| To anyone that comes across a similar issue in the future, the problem with scipy vs pytorch affine transforms is that scipy applies the transforms around (0, 0, 0) while pytorch applies it around the middle of the image/volume.
For example, let's take the parameters:
euler_angles = [ea0, ea1, ea2]
translation = [tr0, tr1, tr2]
scale = [sc0, sc1, sc2]
and create the following transformation matrices:
# Rotation matrices (only the corresponding angle is used in each); assumes numpy (np) and math are imported
def R_x(ea0, ea1, ea2):
    return np.array([[1, 0, 0, 0],
                     [0, math.cos(ea0), -math.sin(ea0), 0],
                     [0, math.sin(ea0), math.cos(ea0), 0],
                     [0, 0, 0, 1]])
def R_y(ea0, ea1, ea2):
    return np.array([[math.cos(ea1), 0, math.sin(ea1), 0],
                     [0, 1, 0, 0],
                     [-math.sin(ea1), 0, math.cos(ea1), 0],
                     [0, 0, 0, 1]])
def R_z(ea0, ea1, ea2):
    return np.array([[math.cos(ea2), -math.sin(ea2), 0, 0],
                     [math.sin(ea2), math.cos(ea2), 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])
# Full rotation matrix
R = R_x(ea0, ea1, ea2).dot(R_y(ea0, ea1, ea2)).dot(R_z(ea0, ea1, ea2))
# Translation matrix
def T(tr0, tr1, tr2):
    return np.array([[1, 0, 0, -tr0],
                     [0, 1, 0, -tr1],
                     [0, 0, 1, -tr2],
                     [0, 0, 0, 1]])
# Scaling matrix
def S(sc0, sc1, sc2):
    return np.array([[1/sc0, 0, 0, 0],
                     [0, 1/sc1, 0, 0],
                     [0, 0, 1/sc2, 0],
                     [0, 0, 0, 1]])
If you have a volume of size (100, 100, 100), the scipy transform around the centre of the volume requires moving the centre of the volume to (0, 0, 0) first, and then moving it back to (50, 50, 50) after S, T, and R have been applied. Defining:
T_zero = np.array([[1, 0, 0, 50],
[0, 1, 0, 50],
[0, 0, 1, 50],
[0, 0, 0, 1]])
T_centre = np.array([[1, 0, 0, -50],
[0, 1, 0, -50],
[0, 0, 1, -50],
[0, 0, 0, 1]])
The scipy transform around the centre is then:
transform_scipy_centre = T_zero.dot(T(tr0, tr1, tr2)).dot(S(sc0, sc1, sc2)).dot(R).dot(T_centre)
In pytorch, there are some slight differences to the parameters.
The translation is defined between -1 and 1. Their order is also different. Using the same (100, 100, 100) volume as an example, the translation parameters in pytorch are given by:
# Note the order difference
translation_pytorch = [tr0_p, tr1_p, tr2_p] = [tr0/50, tr2/50, tr1/50]
T_p = T(tr0_p, tr1_p, tr2_p)
The scale parameters are in a different order:
scale_pytorch = [sc0_p, sc1_p, sc2_p] = [sc2, sc0, sc1]
S_p = S(sc0_p, sc1_p, sc2_p)
The euler angles are the biggest difference. To get the equivalent transform, first the parameters are negative and in a different order:
# Note the order difference
euler_angles_pytorch = [ea0_p, ea1_p, ea2_p] = [-ea0, -ea2, -ea1]
R_x_p = R_x(ea0_p, ea1_p, ea2_p)
R_y_p = R_y(ea0_p, ea1_p, ea2_p)
R_z_p = R_z(ea0_p, ea1_p, ea2_p)
The order in which the rotation matrix is calculated is also different:
# Note the order difference
R_p = R_x_p.dot(R_z_p).dot(R_y_p)
With all these considerations, the scipy transform with:
transform_scipy_centre = T_zero.dot(T(tr0, tr1, tr2)).dot(S(sc0, sc1, sc2)).dot(R).dot(T_centre)
is equivalent to the pytorch transform with:
transform_pytorch = T_p.dot(S_p).dot(R_p)
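To actually feed this into torch.nn.functional.affine_grid, only the top three rows of the 4x4 matrix are used; a rough sketch for a (1, 1, 100, 100, 100) volume tensor (the name volume below is just an assumed placeholder) would be:
theta = torch.from_numpy(transform_pytorch[:3, :]).float().unsqueeze(0)  # shape (1, 3, 4)
grid = torch.nn.functional.affine_grid(theta, size=(1, 1, 100, 100, 100))
out = torch.nn.functional.grid_sample(volume, grid)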
I hope this helps!
| https://stackoverflow.com/questions/61964914/ |
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) throws IndexError: Target 42 is out of bounds | After loading CIFAR 100, I try to train my neural network. But I don't know why I get the out of bounds error shown below
Optimizing the network with batch size 25
Epoch: 0 of 30 Average loss: -
/mnt_home/klee/LBSBGenGapSharpnessResearch/vgg.py:43: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
return F.log_softmax(x)
Traceback (most recent call last):
File "plot_parametric_pytorch_cifar100.py", line 130, in <module>
loss_fn = F.nll_loss(ops, tgts)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2115, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 42 is out of bounds.
Here is the script I'm running:
cudnn.benchmark = True
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32')
X_train = np.transpose(X_train, axes=(0, 3, 1, 2))
X_test = X_test.astype('float32')
X_test = np.transpose(X_test, axes=(0, 3, 1, 2))
X_train /= 255
X_test /= 255
device = torch.device('cuda:0')
# This is where you can load any model of your choice.
# I stole PyTorch Vision's VGG network and modified it to work on CIFAR-10.
# You can take this line out and add any other network and the code
# should run just fine.
model = vgg.vgg11_bn()
#model.to(device)
# Forward pass
opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))
# Forward pass through the network given the input
predsfun = lambda op: np.argmax(op.data.numpy(), 1)
# Do the forward pass, then compute the accuracy
accfun = lambda op, y: np.mean(np.equal(predsfun(op), y.squeeze()))*100
# Initial point
x0 = deepcopy(model.state_dict())
# Number of epochs to train for
# Choose a large value since LB training needs higher values
# Changed from 150 to 30
nb_epochs = 30
batch_range = [25, 40, 50, 64, 80, 128, 256, 512, 625, 1024, 1250, 1750, 2048, 2500, 3125, 4096, 4500, 5000]
# parametric plot (i.e., don't train the network)
hotstart = False
if not hotstart:
for batch_size in batch_range:
optimizer = torch.optim.Adam(model.parameters())
model.load_state_dict(x0)
#model.to(device)
average_loss_over_epoch = '-'
print('Optimizing the network with batch size %d' % batch_size)
np.random.seed(1337) #So that both networks see same sequence of batches
for e in range(nb_epochs):
model.eval()
print('Epoch:', e, ' of ', nb_epochs, 'Average loss:', average_loss_over_epoch)
average_loss_over_epoch = 0
# Checkpoint the model every epoch
torch.save(model.state_dict(), "./models/30EpochCIFAR100ExperimentBatchSize" + str(batch_size) + ".pth")
array = np.random.permutation(range(X_train.shape[0]))
slices = X_train.shape[0] // batch_size
beginning = 0
end = 1
# Training loop!
for _ in range(slices):
start_index = batch_size * beginning
end_index = batch_size * end
smpl = array[start_index:end_index]
model.train()
optimizer.zero_grad()
ops = opfun(X_train[smpl])
tgts = Variable(torch.from_numpy(y_train[smpl]).long().squeeze())
loss_fn = F.nll_loss(ops, tgts) <--- erroring line
average_loss_over_epoch += loss_fn.data.numpy() / (X_train.shape[0] // batch_size)
loss_fn.backward()
optimizer.step()
beginning += 1
end += 1
and here's the VGG model:
__all__ = [
'VGG', 'vgg11', 'vgg11_bn', 'vgg13', 'vgg13_bn', 'vgg16', 'vgg16_bn',
'vgg19_bn', 'vgg19',
]
model_urls = {
'vgg11': 'https://s3.amazonaws.com/pytorch/models/vgg11-fb7e83b2.pth',
'vgg13': 'https://s3.amazonaws.com/pytorch/models/vgg13-58758d87.pth',
'vgg16': 'https://s3.amazonaws.com/pytorch/models/vgg16-82412952.pth',
'vgg19': 'https://s3.amazonaws.com/pytorch/models/vgg19-341d7465.pth',
}
class VGG(nn.Module):
def __init__(self, features):
super(VGG, self).__init__()
self.features = features
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(512, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Linear(4096, 10),
)
self._initialize_weights()
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return F.log_softmax(x)
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
n = m.weight.size(1)
m.weight.data.normal_(0, 0.01)
m.bias.data.zero_()
def make_layers(cfg, batch_norm=False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
return nn.Sequential(*layers)
cfg = {
'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
def vgg11(pretrained=False, **kwargs):
"""VGG 11-layer model (configuration "A")
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = VGG(make_layers(cfg['A']), **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['vgg11']))
return model
def vgg11_bn(**kwargs):
"""VGG 11-layer model (configuration "A") with batch normalization"""
return VGG(make_layers(cfg['A'], batch_norm=True), **kwargs)
I'm not sure how to fix the index error. I think it has to do with the number of classes, but I'm not sure where to fix that (in the above code): I read https://discuss.pytorch.org/t/indexerror-target-2-is-out-of-bounds/69614 but not sure where to go from here.
| You are using CIFAR-100, which has 100 classes (hence the name). But your model only predicts 10 classes. Naturally, any class above 10 will result in an error.
The output of the last linear layer in the model's classifier needs to be changed to 100:
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(512, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Linear(4096, 100),
)
| https://stackoverflow.com/questions/61968540/ |
huggingface bert showing poor accuracy / f1 score [pytorch] | I am trying BertForSequenceClassification for a simple article classification task.
No matter how I train it (freeze all layers but the classification layer, all layers trainable, last k layers trainable), I always get an almost randomized accuracy score. My model doesn't go above 24-26% training accuracy (I only have 5 classes in my dataset).
I'm not sure what did I do wrong while designing/training the model. I tried the model with multiple datasets, every time it gives the same random baseline accuracy.
Dataset I used: BBC Articles (5 classes)
https://github.com/zabir-nabil/pytorch-nlp/tree/master/bbc
Consists of 2225 documents from the BBC news website corresponding to
stories in five topical areas from 2004-2005. Natural Classes: 5
(business, entertainment, politics, sport, tech)
I added the model part and the training part which are the most important portion (to avoid any irrelevant details). I added the full source-code + data too if that's useful for reproducibility.
My guess is there is something wrong with the I way I designed the network or the way I'm passing the attention_masks/ labels to the model. Also, the token length 512 should not be a problem as most of the texts has length < 512 (the mean length is < 300).
Model code:
import torch
from torch import nn
class BertClassifier(nn.Module):
def __init__(self):
super(BertClassifier, self).__init__()
self.bert = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels = 5)
# as we have 5 classes
# we want our output as probability so, in the evaluation mode, we'll pass the logits to a softmax layer
self.softmax = torch.nn.Softmax(dim = 1) # last dimension
def forward(self, x, attn_mask = None, labels = None):
if self.training == True:
# print(x.shape)
loss = self.bert(x, attention_mask = attn_mask, labels = labels)
# print(x[0].shape)
return loss
if self.training == False: # in evaluation mode
x = self.bert(x)
x = self.softmax(x[0])
return x
def freeze_layers(self, last_trainable = 1):
# we freeze all the layers except the last classification layer + few transformer blocks
for layer in list(self.bert.parameters())[:-last_trainable]:
layer.requires_grad = False
# create our model
bertclassifier = BertClassifier()
Training code:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # cuda for gpu acceleration
# optimizer
optimizer = torch.optim.Adam(bertclassifier.parameters(), lr=0.001)
epochs = 15
bertclassifier.to(device) # taking the model to GPU if possible
# metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
train_losses = []
train_metrics = {'acc': [], 'f1': []}
test_metrics = {'acc': [], 'f1': []}
# progress bar
from tqdm import tqdm_notebook
for e in tqdm_notebook(range(epochs)):
train_loss = 0.0
train_acc = 0.0
train_f1 = 0.0
batch_cnt = 0
bertclassifier.train()
print(f'epoch: {e+1}')
for i_batch, (X, X_mask, y) in tqdm_notebook(enumerate(bbc_dataloader_train)):
X = X.to(device)
X_mask = X_mask.to(device)
y = y.to(device)
optimizer.zero_grad()
loss, y_pred = bertclassifier(X, X_mask, y)
train_loss += loss.item()
loss.backward()
optimizer.step()
y_pred = torch.argmax(y_pred, dim = -1)
# update metrics
train_acc += accuracy_score(y.cpu().detach().numpy(), y_pred.cpu().detach().numpy())
train_f1 += f1_score(y.cpu().detach().numpy(), y_pred.cpu().detach().numpy(), average = 'micro')
batch_cnt += 1
print(f'train loss: {train_loss/batch_cnt}')
train_losses.append(train_loss/batch_cnt)
train_metrics['acc'].append(train_acc/batch_cnt)
train_metrics['f1'].append(train_f1/batch_cnt)
test_loss = 0.0
test_acc = 0.0
test_f1 = 0.0
batch_cnt = 0
bertclassifier.eval()
with torch.no_grad():
for i_batch, (X, y) in enumerate(bbc_dataloader_test):
X = X.to(device)
y = y.to(device)
y_pred = bertclassifier(X) # in eval model we get the softmax output so, don't need to index
y_pred = torch.argmax(y_pred, dim = -1)
# update metrics
test_acc += accuracy_score(y.cpu().detach().numpy(), y_pred.cpu().detach().numpy())
test_f1 += f1_score(y.cpu().detach().numpy(), y_pred.cpu().detach().numpy(), average = 'micro')
batch_cnt += 1
test_metrics['acc'].append(test_acc/batch_cnt)
test_metrics['f1'].append(test_f1/batch_cnt)
Full source-code with the dataset is available here: https://github.com/zabir-nabil/pytorch-nlp/blob/master/bert-article-classification.ipynb
Update:
After observing the prediction, it seems model almost always predicts 0:
bertclassifier.eval()
with torch.no_grad():
for i_batch, (X, y) in enumerate(bbc_dataloader_test):
X = X.to(device)
y = y.to(device)
y_pred = bertclassifier(X) # in eval model we get the softmax output so, don't need to index
y_pred = torch.argmax(y_pred, dim = -1)
print(y)
print(y_pred)
print('--------------------')
tensor([4, 2, 2, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 3, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 0, 0, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 4, 4, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 3, 2, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 3, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 1, 4, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 0, 0, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 1, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 2, 4, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 1, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 1, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 0, 1, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 3, 1, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 2, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 1, 2, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 4, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 0, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 3, 2, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 1, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 2, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 3, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 4, 2, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 4, 4, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 1, 3, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 2, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 0, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 1, 4, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 4, 3, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 2, 1, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 4, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 1, 1, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 2, 4, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 2, 3, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 0, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 1, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 2, 2, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 2, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 3, 2, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 0, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 1, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 4, 0, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 3, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 2, 0, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 0, 0, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 0, 2, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 2, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 2, 3, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 3, 0, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 0, 0, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 2, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 4, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 0, 4, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 3, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 2, 0, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 3, 1, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 1, 3, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 3, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 3, 0, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 2, 3, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 0, 0, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 0, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 1, 1, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 1, 0, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 4, 1, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 2, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 3, 4, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([3, 0, 4, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 1, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 4, 3, 1], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 0, 3, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 3, 3, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 0, 3, 4], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 0, 1, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([1, 2, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([2, 0, 4, 2], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([4, 2, 4, 0], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
tensor([0, 0, 3, 3], device='cuda:0')
tensor([0, 0, 0, 0], device='cuda:0')
--------------------
...
...
Actually, the model is always predicting the same output [0.2270, 0.1855, 0.2131, 0.1877, 0.1867] for any input, it's like it didn't learn anything at all.
It's weird because my dataset is not imbalanced.
Counter({'politics': 417,
'business': 510,
'entertainment': 386,
'tech': 401,
'sport': 511})
| After some digging I found out that the main culprit was the learning rate: for fine-tuning BERT, 0.001 is extremely high. When I reduced my learning rate from 0.001 to 1e-5, both my training and test accuracy reached 95%.
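In code, the only change to the training setup in the question is the optimizer's learning rate:
optimizer = torch.optim.Adam(bertclassifier.parameters(), lr=1e-5)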
When BERT is fine-tuned, all layers are trained - this is quite different from fine-tuning in a lot of other ML models, but it matches what was described in the paper and works quite well (as long as you only fine-tune for a few epochs - it's very easy to overfit if you fine-tune the whole model for a long time on a small amount of data!)
src: https://github.com/huggingface/transformers/issues/587
Best result is found when all the layers are trained with a really small learning rate.
src: https://github.com/uzaymacar/comparatively-finetuning-bert
| https://stackoverflow.com/questions/61969783/ |
How to put image uploaded in tkinter into a function? | I am trying to create a Python tkinter application where the user can upload an image from file and the image is put through a image segmentation function which outputs an matplotlib plot.
I have the image segmentation function, it takes two parameters: neural network, image file pathway.
from torchvision import models
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torchvision.transforms as T
import numpy as np
fcn = models.segmentation.fcn_resnet101(pretrained=True).eval()
# Define the helper function
def decode_segmap(image, nc=21):
label_colors = np.array([(0, 0, 0), # 0=background
# 1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle
(128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128),
# 6=bus, 7=car, 8=cat, 9=chair, 10=cow
(0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0),
# 11=dining table, 12=dog, 13=horse, 14=motorbike, 15=person
(192, 128, 0), (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128),
# 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor
(0, 64, 0), (128, 64, 0), (0, 192, 0), (128, 192, 0), (0, 64, 128)])
r = np.zeros_like(image).astype(np.uint8)
g = np.zeros_like(image).astype(np.uint8)
b = np.zeros_like(image).astype(np.uint8)
for l in range(0, nc):
idx = image == l
r[idx] = label_colors[l, 0]
g[idx] = label_colors[l, 1]
b[idx] = label_colors[l, 2]
rgb = np.stack([r, g, b], axis=2)
return rgb
def segment(net, path):
img = Image.open(path)
plt.imshow(img); plt.axis('off'); plt.show()
# Comment the Resize and CenterCrop for better inference results
trf = T.Compose([T.Resize(256),
T.CenterCrop(224),
T.ToTensor(),
T.Normalize(mean = [0.485, 0.456, 0.406],
std = [0.229, 0.224, 0.225])])
inp = trf(img).unsqueeze(0)
out = net(inp)['out']
om = torch.argmax(out.squeeze(), dim=0).detach().cpu().numpy()
rgb = decode_segmap(om)
plt.imshow(rgb); plt.axis('off'); plt.show()
I have not made the Tkinter GUI as I am unsure how I can take the image uploaded from file and convert it into a file path (string) and put it through the function. There is only one neural network for now, being fcn.
| Firstly import filedialog and PIL:
from tkinter import filedialog
from PIL import Image
Now use a variable, path here (any name works), to hold the file path returned when you choose a file in the GUI.
path = filedialog.askopenfilename(initialdir='/Downloads', title='Select Photo', filetypes=(('JPEG files', '*.jpg'), ('PNG files', '*.png')))
You can use any filetype you want by specifying it in the argument there with the title and '*.extension'. NOTE THAT IT IS CASE-SENSITIVE. Now your variable path has the path for the image and all you have to do is open it with PIL using
img = Image.open(path)
Just like you did in your code
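Putting it together with the segmentation code from the question, a minimal sketch (fcn and segment are the network and function you already defined) could look like this:
import tkinter as tk
from tkinter import filedialog
root = tk.Tk()
root.withdraw()  # hide the empty root window, we only need the file dialog
path = filedialog.askopenfilename(initialdir='/Downloads', title='Select Photo', filetypes=(('JPEG files', '*.jpg'), ('PNG files', '*.png')))
if path:  # an empty string means the user cancelled the dialog
    segment(fcn, path)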
| https://stackoverflow.com/questions/61970972/ |
L-BFGS-B code, Scipy (sciopt.fmin_l_bfgs_b(func, init_guess, maxiter=10, bounds=list(bounds), disp=1, iprint=101)) | I'm using the L-BFGS-B optimizer to find the minima of a function. This will help me calculate sharpness for the function. However, I'm not sure if this following message is considered a normal message (i.e. Is there something wrong with my program or is this message typical?) See below:
RUNNING THE L-BFGS-B CODE
* * *
Machine precision = 2.220D-16
N = 28149514 M = 10
At X0 0 variables are exactly at the bounds
^[[C
At iterate 0 f= -3.59325D+00 |proj g|= 2.10249D-03
At iterate 1 f= -2.47853D+01 |proj g|= 4.20499D-03
Bad direction in the line search;
refresh the lbfgs memory and restart the iteration.
At iterate 2 f= -2.53202D+01 |proj g|= 4.17686D-03
At iterate 3 f= -2.53202D+01 |proj g|= 4.17686D-03
* * *
Tit = total number of iterations
Tnf = total number of function evaluations
Tnint = total number of segments explored during Cauchy searches
Skip = number of BFGS updates skipped
Nact = number of active bounds at final generalized Cauchy point
Projg = norm of the final projected gradient
F = final function value
* * *
N Tit Tnf Tnint Skip Nact Projg F
***** 3 43 ****** 0 ***** 4.177D-03 -2.532D+01
F = -25.320247650146484
CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
Warning: more than 10 function and gradient
evaluations in the last line search. Termination
may possibly be caused by a bad search direction.
I got the following sharpness anyway which is relatively consistent with the paper I'm trying to reproduce: It's just that I'm a bit concerned with the above message.
tensor(473.0201)
Here is my code for computing sharpness:
def get_sharpness(data_loader, model, criterion, epsilon, manifolds=0):
# extract current x0
x0 = None
for p in model.parameters():
if x0 is None:
x0 = p.data.view(-1)
else:
x0 = torch.cat((x0, p.data.view(-1)))
x0 = x0.cpu().numpy()
# get current f_x
f_x0, _ = get_minus_cross_entropy(x0, data_loader, model, criterion)
f_x0 = -f_x0
logging.info('min loss f_x0 = {loss:.4f}'.format(loss=f_x0))
# find the minimum
if 0==manifolds:
x_min = np.reshape(x0 - epsilon * (np.abs(x0) + 1), (x0.shape[0], 1))
x_max = np.reshape(x0 + epsilon * (np.abs(x0) + 1), (x0.shape[0], 1))
bounds = np.concatenate([x_min, x_max], 1)
func = lambda x: get_minus_cross_entropy(x, data_loader, model, criterion, training=True)
init_guess = x0
else:
warnings.warn("Small manifolds may not be able to explore the space.")
assert(manifolds<=x0.shape[0])
#transformer = rp.GaussianRandomProjection(n_components=manifolds)
#transformer.fit(np.random.rand(manifolds, x0.shape[0]))
#A_plus = transformer.components_
#A = np.linalg.pinv(A_plus)
A_plus = np.random.rand(manifolds, x0.shape[0])*2.-1.
# normalize each column to unit length
A_plus_norm = np.linalg.norm(A_plus, axis=1)
A_plus = A_plus / np.reshape(A_plus_norm, (manifolds,1))
A = np.linalg.pinv(A_plus)
abs_bound = epsilon * (np.abs(np.dot(A_plus, x0))+1)
abs_bound = np.reshape(abs_bound, (abs_bound.shape[0], 1))
bounds = np.concatenate([-abs_bound, abs_bound], 1)
def func(y):
floss, fg = get_minus_cross_entropy(x0 + np.dot(A, y), data_loader, model, criterion, training=True)
return floss, np.dot(np.transpose(A), fg)
#func = lambda y: get_minus_cross_entropy(x0+np.dot(A, y), data_loader, model, criterion, training=True)
init_guess = np.zeros(manifolds)
#rand_selections = (np.random.rand(bounds.shape[0])+1e-6)*0.99
#init_guess = np.multiply(1.-rand_selections, bounds[:,0])+np.multiply(rand_selections, bounds[:,1])
minimum_x, f_x, d = sciopt.fmin_l_bfgs_b(func, init_guess, maxiter=10, bounds=list(bounds), disp=1, iprint=101)
#factr=10.,
#pgtol=1.e-12,
f_x = -f_x
logging.info('max loss f_x = {loss:.4f}'.format(loss=f_x))
sharpness = (f_x - f_x0)/(1+f_x0)*100
print(sharpness)
# recover the model
x0 = torch.from_numpy(x0).float()
x0 = x0.cuda()
x_start = 0
for p in model.parameters():
psize = p.data.size()
peltnum = 1
for s in psize:
peltnum *= s
x_part = x0[x_start:x_start + peltnum]
p.data = x_part.view(psize)
x_start += peltnum
return sharpness
Which was taken from this repository:
https://github.com/wenwei202/smoothout/blob/master/measure_sharpness.py
I'm concerned about exact accuracy.
| First, l-bfgs-b will only give a global minimum for a convex function.
the message
CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
is the normal convergence message.
The warning you are getting says that there are a lot of function/gradient evaluations in the line search - this can often happen when you use l-bfgs-b on non convex functions. So if the thing you're minimizing is non convex (and it seems like it might be just by glancing at the code), I would say this is normal.
| https://stackoverflow.com/questions/61978831/ |
How can I keep a PyTorch submodule in eval mode? | I have a pretrained model that I'm using in conjunction with a model being trained. I want the pretrained model to always be in eval mode, but the other model will be moving back and forth between eval and train mode. I'd still like the pretrained model to be a submodule of the other one, though (e.g. so that all parameters stay on the same device). Is there a way to do this? Here's a minimal example:
from torch import nn
class FixedModule(nn.Module):
pass
class TrainableModule(nn.Module):
def __init__(self, fixed_module):
super().__init__()
self.fixed_module = fixed_module
fixed = FixedModule().eval()
assert not fixed.training
trainable = TrainableModule(fixed)
assert trainable.training and not trainable.fixed_module.training
trainable.train()
assert trainable.fixed_module.training # I'd like this to give an error
I know I can work around this by, e.g., always doing
trainable.train()
trainable.fixed_module.eval()
but that's error-prone and doesn't work well with existing code.
| You can override train in FixedModule to prevent it from changing modes. Note that eval just calls train(False), so you don't need to override that too. But calling FixedModule.eval won't do anything now, so you have to set training = False in init.
from torch import nn
class FixedModule(nn.Module):
def __init__(self):
super().__init__()
self.training = False
# add any other nn.Module attributes here before calling self.children
# you could override `train` in each child too if you really wanted,
# but that seems like overkill unless there are external references
# to any submodules of FixedModule
for module in self.children():
module.eval()
    def train(self, mode=True):
return self
class TrainableModule(nn.Module):
def __init__(self, fixed_module):
super().__init__()
self.fixed_module = fixed_module
fixed = FixedModule().eval()
assert not fixed.training
trainable = TrainableModule(fixed)
assert trainable.training and not trainable.fixed_module.training
trainable.train()
assert not trainable.fixed_module.training # passes
| https://stackoverflow.com/questions/61980943/ |
Is maxpooling on odd number possible? | I am going through the Udacity DeepLearning Nanodegree and working on the autoencoder mini project. I do not understand the solution, nor how to check it myself. So this is 2 questions.
We start with 28*28 images. These are fed through 3 convolutional layers, each with padding of 1, and each with a maxpooling to half the original dimensions. What I don't understand is the last element? Surely 2 rounds of maxpooling (28/2)/2 gives 7 and therefore a further maxpooling shouldn't be possible as it results in an odd number. Can someone explain why this is the case to me? The code to replicate is here:
'''
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvDenoiser(nn.Module):
def __init__(self):
super(ConvDenoiser, self).__init__()
## encoder layers ##
# conv layer (depth from 1 --> 32), 3x3 kernels
self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
# conv layer (depth from 32 --> 16), 3x3 kernels
self.conv2 = nn.Conv2d(32, 16, 3, padding=1)
# conv layer (depth from 16 --> 8), 3x3 kernels
self.conv3 = nn.Conv2d(16, 8, 3, padding=1)
# pooling layer to reduce x-y dims by two; kernel and stride of 2
self.pool = nn.MaxPool2d(2, 2)
## decoder layers ##
# transpose layer, a kernel of 2 and a stride of 2 will increase the spatial dims by 2
self.t_conv1 = nn.ConvTranspose2d(8, 8, 3, stride=2) # kernel_size=3 to get to a 7x7 image output
# two more transpose layers with a kernel of 2
self.t_conv2 = nn.ConvTranspose2d(8, 16, 2, stride=2)
self.t_conv3 = nn.ConvTranspose2d(16, 32, 2, stride=2)
# one, final, normal conv layer to decrease the depth
self.conv_out = nn.Conv2d(32, 1, 3, padding=1)
def forward(self, x):
## encode ##
# add hidden layers with relu activation function
# and maxpooling after
x = F.relu(self.conv1(x))
x = self.pool(x)
# add second hidden layer
x = F.relu(self.conv2(x))
x = self.pool(x)
# add third hidden layer
x = F.relu(self.conv3(x))
x = self.pool(x) # compressed representation
## decode ##
# add transpose conv layers, with relu activation function
x = F.relu(self.t_conv1(x))
x = F.relu(self.t_conv2(x))
x = F.relu(self.t_conv3(x))
# transpose again, output should have a sigmoid applied
x = F.sigmoid(self.conv_out(x))
return x
# initialize the NN
model = ConvDenoiser()
print(model)
I wanted to try to understand this by passing a single image through the layers manually and see what the result was but this resulted in an error. Can someone explain to me how I can see the shapes that pass through the layers? Code is a bit messy but I left it there so you can see what I tried.
dataiter = iter(train_loader)
images, labels = dataiter.next()
# images = images.numpy()
# get one image from the batch
# img = np.squeeze(images[0])
img=images[0]
#create hidden layer
conv1 = nn.Conv2d(1, 32, 3, padding=1)
# z=torch.from_numpy(images[0])
z1=conv1(img)
Appreciate any insights you can give me.
Thanks,
J
| Regarding your first question:
You can read in the documentation how the output shape of max-pooling is computed. You can max-pool odd-shaped tensors with even strides with or without padding. You need to be careful about the boundaries where some pixels may be lost.
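For the encoder in the question this is easy to check directly: with the same nn.MaxPool2d(2, 2), a 7x7 feature map comes out as 3x3 (the leftover last row and column are simply dropped):
pool = nn.MaxPool2d(2, 2)
x = torch.randn(1, 8, 7, 7)   # batch of 1, 8 channels, 7x7 spatial size
print(pool(x).shape)          # torch.Size([1, 8, 3, 3])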
Regarding your second question:
Your model expects a 4D input: batch-channel-height-width.
By selecting only one image from the batch (img=images[0]) you eliminate the batch dimension ending up with only a 3D tensor.
To fix this:
img=images[0:1, ...] # select first image, but leave batch dimension as a singleton
| https://stackoverflow.com/questions/61983630/ |
Bring any PyTorch cuda tensor in the range [0,1] | Suppose I have a PyTorch Cuda Float tensor x of the shape [b,c,h,w] taking on any arbitrary value allowed by Float Tensor range. I want to normalise it in the range [0,1].
I think of the following algorithm (but any other will also do).
Step1: Find minimum in each batch. Call it min and having shape [b,1,1,1].
Step2: Similarly find the maximum and call it max.
Step3: Use y = (x-min)/max. Alternatively use y = (x-min)/(max-min). I don't know which one will be better. y should have the same shape as that of x.
I am using PyTorch 1.3.1.
Specifically I am unable to get the desired min using torch.min(). Same goes for max.
I am going to use it for feeding it to pre-trained VGG for calculating perceptual loss (after the above normalisation i will additionally bring them to ImageNet mean and std). Due to some reason I cannot enforce [0,1] range during data loading part because the previous works in my area have a very specific normalisation algorithm which has to be used but some times does not ensures [0,1] bound but will be somewhere in its vicinity. That is why at the time computing perceptual loss I have to do this explicit normalisation as a precaution. All out of the box implementation of perceptual loss I am aware assume data is in [0,1] or [-1,1] range and so do not do this transformation.
Thankyou very much
| Not the most elegant way, but you can do that using keepdim=True and specifying each of the dimensions:
channel_min = x.min(dim=1, keepdim=True)[0].min(dim=2,keepdim=True)[0].min(dim=3, keepdim=True)[0]
channel_max = x.max(dim=1, keepdim=True)[0].max(dim=2,keepdim=True)[0].max(dim=3, keepdim=True)[0]
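These per-sample minima and maxima broadcast against x, so the last step is just the formula from the question (the small epsilon is my own addition to avoid division by zero for constant samples):
y = (x - channel_min) / (channel_max - channel_min + 1e-8)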
| https://stackoverflow.com/questions/61983657/ |
How to calculate perplexity for a language model using Pytorch | I'm fine-tuning a GPT-2 model for a language generation task using the huggingface Transformers library (PyTorch), and I need to calculate an evaluation score (perplexity) for the fine-tuned model. But I'm not sure how that can be done using the loss. I would like to know how perplexity can be calculated for the model with sum_loss or mean loss; any other suggestions are also welcome. Any help is appreciated.
Edit:
outputs = model(article_tens, labels=article_tens)
loss, prediction_scores = outputs[:2]
loss.backward()
sum_loss = sum_loss + loss.detach().data
given above is how I calculate loss for each batch of data for the fine-tuning task.
sum loss 1529.43408203125
loss 4.632936000823975
prediction_scores tensor([[[-11.2291, -9.2614, -11.8575, ..., -18.1927, -17.7286, -11.9215],
[-67.2786, -63.5928, -70.7110, ..., -75.3516, -73.8672, -67.6637],
[-81.1397, -80.0295, -82.9357, ..., -83.7913, -85.7201, -78.9877],
...,
[-85.3213, -82.5135, -86.5459, ..., -90.9160, -90.4393, -82.3141],
[-44.2260, -43.1702, -49.2296, ..., -58.9839, -55.2058, -42.3312],
[-63.2842, -59.7334, -61.8444, ..., -75.0798, -75.7507, -54.0160]]],
device='cuda:0', grad_fn=<UnsafeViewBackward>)
The above given is when loss is printed for a batch only
| As shown in Wikipedia - Perplexity of a probability model, the formula to calculate the perplexity of a probability model is:
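For a sequence of N tokens with model probabilities q(x_i), it is PPL(q) = b^(-(1/N) * sum_i log_b q(x_i)).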
The exponent is the cross-entropy. While logarithm base 2 (b = 2) is traditionally used in cross-entropy, deep learning frameworks such as PyTorch use the natural logarithm (b = e).
Therefore, to get the perplexity from the cross-entropy loss, you only need to apply torch.exp to the loss.
perplexity = torch.exp(loss)
The mean loss is used in this case (the 1 / N part of the exponent) and if you were to use the sum of the losses instead of the mean, the perplexity would get out of hand (exceedingly large), which can easily surpass the maximum floating point number, resulting in infinity.
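Applied to the training loop in the question, that could look like the sketch below (num_batches is my own placeholder for the number of batch losses summed into sum_loss):
mean_loss = sum_loss / num_batches
perplexity = torch.exp(mean_loss)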
| https://stackoverflow.com/questions/61988776/ |
RMSE loss for multi output regression problem in PyTorch | I'm training a CNN architecture to solve a regression problem using PyTorch where my output is a tensor of 20 values. I planned to use RMSE as my loss function for the model and tried to use PyTorch's nn.MSELoss() and took the square root for it using torch.sqrt(), but got confused after obtaining the results. I'll try my best to explain why. It's obvious that for a batch-size bs my output tensor's dimensions would be [bs, 20]. I tried to implement an RMSE function of my own:
def loss_function (predicted_x , target ):
loss = torch.sum(torch.square(predicted_x - target) , axis= 1)/(predicted_x.size()[1]) #Taking the mean of all the squares by dividing it with the number of outputs i.e 20 in my case
loss = torch.sqrt(loss)
loss = torch.sum(loss)/predicted_x.size()[0] #averaging out by batch-size
return loss
But the output of my loss_function() and how PyTorch implements it with nn.MSELoss() differed . I'm not sure whether my implementation is wrong or am I using nn.MSELoss() in the wrong way.
| The MSE loss is the mean of the squares of the errors. Your custom function computes an RMSE per sample (square root of each row's mean squared error) and then averages those values over the batch, while taking the square root after nn.MSELoss() gives the square root of the overall mean, so the two compute different values and cannot be compared directly.
However, you could just use the nn.MSELoss() to create your own RMSE loss function as:
loss_fn = nn.MSELoss()
RMSE_loss = torch.sqrt(loss_fn(prediction, target))
RMSE_loss.backward()
Hope that helps.
| https://stackoverflow.com/questions/61990363/ |
Do ReLU1 in PyTorch | I want to use ReLU1 non-linear activation. ReLU1 is linear in [0,1] but clamps values less than 0 to 0 and clamps values more than 1 to 1.
It will be used only for the last layer of my deep net in PyTorch having a really high definition output of 2048x4096. Since the code has to be highly optimized in terms of speed and memory I do not know which of the following will be the best implementation.
Following are the two implementations I can think of for the tensor x:-
x.clamp_(min=0.0, max=1.0)
For this I am unable to see the source code given in its docs. So do not know if its the best choice. I will prefer in place operation since backpropagation can happen through it.
The second alternative I have is to use torch.nn.functional.hardtanh_(x, min_val=0.0, max_val=1.0). This is definitely a in place function and the source code says that it uses the C++ file torch._C._nn.hardtanh(input, min_val, max_val) so I think it will be fast.
Please suggest which is the most efficient implementation and another one if possible.
Thankyou
| Without trying it, my guess is that clamp and hardtanh will have the same speed, and it will be hard to do this operation any faster if you optimize it in isolation. The arithmetic is trivial so this operation will be bottlenecked by GPU memory bandwidth. To run faster, you'd want to fuse this operation with the operation that produced x. If you don't want to write a custom kernel for the combined operation, you can try using TorchScript.
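A rough sketch of that fusion idea (the surrounding scale/shift ops and names are only illustrative, not from your network, and torch is assumed to be imported): TorchScript can compile a chain of pointwise ops, clamp included, into a single fused kernel on the GPU.
@torch.jit.script
def scale_shift_relu1(x: torch.Tensor, scale: float, shift: float) -> torch.Tensor:
    return (x * scale + shift).clamp(0.0, 1.0)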
| https://stackoverflow.com/questions/62002130/ |
How to change some value of a tensor into zero according to another tensor's value in pytorch? | I have two tensors: tensor a and tensor b. How can I change some value of tensor a according to the value of tensor b?
I know the codes following are right, but it runs pretty slow when the tensor is big. Is there any other method?
import torch
a = torch.rand(10).cuda()
b = torch.rand(10).cuda()
a[b > 0.5] = 0.
| For this exact use case also consider
a * (b <= 0.5)
which seems to be the fastest out of the following
In [1]: import torch
...: a = torch.rand(3**10)
...: b = torch.rand(3**10)
In [2]: %timeit a[b > 0.5] = 0.
553 µs ± 17.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [3]: a = torch.rand(3**10)
In [4]: %timeit temp = torch.where(b > 0.5, torch.tensor(0.), a)
...:
49 µs ± 391 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: a = torch.rand(3**10)
In [6]: %timeit temp = (a * (b <= 0.5))
44 µs ± 381 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [7]: %timeit a.masked_fill_(b > 0.5, 0.)
244 µs ± 3.48 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| https://stackoverflow.com/questions/62002348/ |
Got `UnboundLocalError` error when measuring time for `torch.where` | When try to %timeit in Jupyter Notebook got error; Without it working fine.
UnboundLocalError: local variable 'a' referenced before assignment
import torch
a = torch.rand(10)
b = torch.rand(10)
%timeit a = torch.where(b > 0.5, torch.tensor(0.), a)
What is happening here?
| At first, I thought that's because %timeit only evaluates the time run by functions. But thanks to @Shiva who told me it can calculate the execution time of other things. And I checked the documentation here, and I found out this is true.
So, according to this answer, %timeit has a problem with re-assignment: assigning to a inside the timed statement makes a a local variable of the generated timing function, hiding the global one. In other words, you can use any variable other than a to hold the result of torch.where:
#this works
%timeit c = torch.where(b > 0.5, torch.tensor(0.), a) #c instead of a
# this works
%timeit torch.where(b > 0.5, torch.tensor(0.), a)
# this doesn't work
%timeit a = torch.where(b > 0.5, torch.tensor(0.), a)
| https://stackoverflow.com/questions/62004680/ |
How to find c++ source code of torch.bmm of pytorch | I have trouble finding the source code of torch.bmm(), which is defined in https://pytorch.org/cppdocs/api/function_namespaceat_1aac51f71f807ca70fd210814114520c34.html#exhale-function-namespaceat-1aac51f71f807ca70fd210814114520c34.
I have confidence it is located in namespace at, since it is referenced as at::bmm in other places.
What I have searched through is :
The directory Aten https://github.com/pytorch/pytorch/tree/34877448216149024f44cbcab830169fdb2fa7fb/aten/src/ATen
The directory of caffe2 https://github.com/pytorch/pytorch/tree/74b65c32be68b15dc7c9e8bb62459efbfbde33d8/caffe2/core
direct search in github with keyword bmm of c++ file
but have found nothing. Is there any systematic way to locate a function(in this case, bmm) in such a large project ?
| There is no (single) source for bmm per se. From ATen's Readme:
ATen "native" functions are the modern mechanism for adding operators and functions to ATen (they are "native" in contrast to legacy functions, which are bound via TH/THC cwrap metadata). Native functions are declared in native_functions.yaml and have implementations defined in one of the cpp files in this directory.
bmm is declared in aten\src\ATen\native\native_functions.yaml:
- func: bmm(Tensor self, Tensor mat2) -> Tensor
use_c10_dispatcher: full
variants: function, method
dispatch:
CPU: bmm_cpu
CUDA: bmm_cuda
SparseCPU: bmm_sparse_cpu
SparseCUDA: bmm_sparse_cuda
supports_named_tensor: True
The implementations (like bmm_cpu) are to be found in aten\src\ATen\native\LinearAlgebra.cpp.
| https://stackoverflow.com/questions/62015172/ |
Improving accuracy on a multi-class image classifier | I am building a classifier using the Food-101 dataset. The dataset has predefined training and test sets, both labeled. It has a total of 101,000 images. I’m trying to build a classifier model with >=90% accuracy for top-1. I’m currently sitting at 75%. The training set was provided unclean. But now, I would like to know some of the ways I can improve my model and what are some of the things I’m doing wrong.
I’ve partitioned the train and test images into their respective folders. Here, I am using 0.2 of the training dataset to validate the learner by running 5 epochs.
np.random.seed(42)
data = ImageList.from_folder(path).split_by_rand_pct(valid_pct=0.2).label_from_re(pat=file_parse).transform(size=224).databunch()
top_1 = partial(top_k_accuracy, k=1)
learn = cnn_learner(data, models.resnet50, metrics=[accuracy, top_1], callback_fns=ShowGraph)
learn.fit_one_cycle(5)
epoch train_loss valid_loss accuracy top_k_accuracy time
0 2.153797 1.710803 0.563498 0.563498 19:26
1 1.677590 1.388702 0.637096 0.637096 18:29
2 1.385577 1.227448 0.678746 0.678746 18:36
3 1.154080 1.141590 0.700924 0.700924 18:34
4 1.003366 1.124750 0.707063 0.707063 18:25
And here, I’m trying to find the learning rate. Pretty standard to how it was in the lectures:
learn.lr_find()
learn.recorder.plot(suggestion=True)
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
Min numerical gradient: 1.32E-06
Min loss divided by 10: 6.31E-08
Using the learning rate of 1e-06 to run another 5 epochs. Saving it as stage-2
learn.fit_one_cycle(5, max_lr=slice(1.e-06))
learn.save('stage-2')
epoch train_loss valid_loss accuracy top_k_accuracy time
0 0.940980 1.124032 0.705809 0.705809 18:18
1 0.989123 1.122873 0.706337 0.706337 18:24
2 0.963596 1.121615 0.706733 0.706733 18:38
3 0.975916 1.121084 0.707195 0.707195 18:27
4 0.978523 1.123260 0.706403 0.706403 17:04
Previously I ran 3 stages altogether but the model wasn’t improving beyond 0.706403 so I didn’t want to repeat it. Below is my confusion matrix. I apologize for the terrible resolution. Its the doing of Colab.
Since I’ve created an additional validation set, I decided to use the test set to validate the saved model of stage-2 to see how well it was performing:
path = '/content/food-101/images'
data_test = ImageList.from_folder(path).split_by_folder(train='train', valid='test').label_from_re(file_parse).transform(size=224).databunch()
learn.load('stage-2')
learn.validate(data_test.valid_dl)
This is the result:
[0.87199837, tensor(0.7584), tensor(0.7584)]
|
Try augmentations like RandomHorizontalFlip, RandomResizedCrop, RandomRotation, Normalize etc. from torchvision transforms (a minimal sketch is shown after this list). These always help a lot in classification problems.
Label smoothing and/or Mixup, and mixed-precision training.
Simply try using a more optimized architecture, like EfficientNet.
Instead of OneCycle, a longer, more manual training approach may help. Try Stochastic Gradient Descent with a weight decay of 5e-4 and a Nesterov Momentum of 0.9. Use Warmup Training of around 1-3 epochs, and then regular training of around 200 epochs. You could set a manual learning rate schedule or cosine annealing or some other scheme. This entire method will consume a lot more time and effort than the usual onecycle training, and should be explored only if other methods don't show considerable gains.
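A minimal torchvision sketch for the augmentation point above (the crop size, rotation angle and ImageNet normalization statistics are only illustrative choices, not taken from your setup):
import torchvision.transforms as T
train_tfms = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.RandomRotation(15),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])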
| https://stackoverflow.com/questions/62015284/ |
Convert a Keras NN to a Pytorch NN | I'm a beginner with constructing neural networks in pytorch and Keras. I have the following Keras code for a variation of the AlexNet model:
def shallownet(nb_classes):
global img_size
model = Sequential()
model.add(Conv2D(64, (5, 5), input_shape=img_size, data_format='channels_first'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))
model.add(Conv2D(64, (5, 5), padding='same', data_format='channels_first'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))
model.add(Flatten())
model.add(Dense(384))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(192))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
return model
This is based on the C1/C3 model described on page 12 of this paper:
https://arxiv.org/pdf/1609.04836.pdf
And I want to convert this to the Pytorch version of the NN. Specifically I'm trying to get it in the form of:
class AlexNetOWT_BN(nn.Module):
def __init__(self, num_classes=1000):
super(AlexNetOWT_BN, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2,
bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2, bias=False),
nn.BatchNorm2d(192),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1, bias=False),
nn.ReLU(inplace=True),
nn.BatchNorm2d(384),
nn.Conv2d(384, 256, kernel_size=3, padding=1, bias=False),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256)
)
self.classifier = nn.Sequential(
nn.Linear(256 * 6 * 6, 4096, bias=False),
nn.BatchNorm1d(4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, 4096, bias=False),
nn.BatchNorm1d(4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, num_classes)
)
#self.regime = {
# 0: {'optimizer': 'SGD', 'lr': 1e-2,
# 'weight_decay': 5e-4, 'momentum': 0.9},
# 10: {'lr': 5e-3},
# 15: {'lr': 1e-3, 'weight_decay': 0},
# 20: {'lr': 5e-4},
# 25: {'lr': 1e-4}
#}
self.regime = {
0: {'optimizer': 'SGD', 'lr': 1e-2,
'weight_decay': 5e-4, 'momentum': 0.9},
20: {'lr': 1e-3},
40: {'lr': 1e-4}
}
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
self.input_transform = {
'train': transforms.Compose([
transforms.Scale(256),
transforms.RandomCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
]),
'eval': transforms.Compose([
transforms.Scale(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])
}
def forward(self, x):
x = self.features(x)
x = x.view(-1, 256 * 6 * 6)
x = self.classifier(x)
return x
The original AlexNet code for self.features is here (I switched the order of things above; not sure if that's allowed or not):
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2,
bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2, bias=False),
nn.BatchNorm2d(192),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1, bias=False),
nn.ReLU(inplace=True),
nn.BatchNorm2d(384),
nn.Conv2d(384, 256, kernel_size=3, padding=1, bias=False),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256),
nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.ReLU(inplace=True),
nn.BatchNorm2d(256)
)
I am specifically confused about how to change the features and the classifier. The description of the model I want is here:
https://arxiv.org/pdf/1609.04836.pdf on page 12, section B.3.
Thanks so much
| Here's the code:
class AlexNet(nn.Module):
def __init__(self, num_classes=10):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=5, padding=2,
bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=5, padding=2, bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
)
self.classifier = nn.Sequential(
nn.Linear(64 * 7 * 7, 384, bias=False),
nn.BatchNorm1d(384),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(384, 192, bias=False),
nn.BatchNorm1d(192),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(192, num_classes)
)
self.regime = {
0: {'optimizer': 'SGD', 'lr': 1e-3,
'weight_decay': 5e-4, 'momentum': 0.9},
60: {'lr': 1e-2},
120: {'lr': 1e-3},
180: {'lr': 1e-4}
}
def forward(self, x):
x = self.features(x)
x = x.view(-1, 64 * 7 * 7)
x = self.classifier(x)
return F.log_softmax(x)
def cifar10_shallow(**kwargs):
num_classes = kwargs.get('num_classes', 10)  # kwargs is a dict, so use .get() rather than getattr()
return AlexNet(num_classes)
def cifar100_shallow(**kwargs):
num_classes = kwargs.get('num_classes', 100)
return AlexNet(num_classes)
Written by Wei Wen. smoothout repository
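As a hedged sanity check (assuming the CIFAR-sized 3x32x32 inputs the paper uses, plus the usual torch and torch.nn.functional as F imports), the flattened size 64 * 7 * 7 can be verified like this:
net = AlexNet(num_classes=10)
dummy = torch.randn(2, 3, 32, 32)
print(net.features(dummy).shape)  # torch.Size([2, 64, 7, 7]) -> 64 * 7 * 7 per sample
print(net(dummy).shape)           # torch.Size([2, 10])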
| https://stackoverflow.com/questions/62016037/ |
How to transform a 2D and index tensors for torch.nn.utils.rnn.pack_sequence | I have a sequences collection in the following form:
sequences = torch.tensor([[2,1],[5,6],[3,0]])
indexes = torch.tensor([1,0,1])
that is, the sequence 0 is made of just [5,6], and the sequence 1 is made of [2,1] , [3,0]. Mathematically sequence[i] = { sequences[j] such that i = indexes[j] }
I need to feed these sequences into an LSTM. Since these are variable-length sequences, pytorch documentation states to use something like torch.nn.utils.rnn.pack_sequence.
Sadly, this method and its like want, as input, a list of tensors where each of them is a L x *, with L being the length of the single sequence.
How can I build something that can be fed into a pytorch LSTM?
P.s. throughout the code I work with these tensors using scatter and gather functionalities but I can't find a way to use them to achieve this goal.
| I found an alternative and more efficient way to separate the sequences:
sequences = torch.tensor([[2,1],[5,6],[3,0]])
indexes = torch.tensor([1,0,1])
sorted_seq = sequences[indexes.argsort()]
indexes_count = torch.unique(indexes, return_counts=True)[1]
splitted = torch.split(sorted_seq, indexes_count.tolist(), dim=0)
This method is almost 3 times faster than the one proposed by @Mercury.
Measured using timeit module with sequences being a (5000,256) tensor and indexes being (1500)
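From here, a hedged sketch of feeding the result into an LSTM (the hidden size and the use of enforce_sorted=False are assumptions, not part of the original answer):
from torch.nn.utils.rnn import pack_sequence

packed = pack_sequence(list(splitted), enforce_sorted=False)
lstm = torch.nn.LSTM(input_size=2, hidden_size=8)
output, (h_n, c_n) = lstm(packed)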
| https://stackoverflow.com/questions/62029901/ |
How to fix documentation format in VS Code for python kite? | I am using pytorch in VSCode but the format of the documentation is just terrible (see image). Do you have any pointers on how we can set the documentation format in VSCode? I believe that documentation contains LaTeX formulas.... Check the image in the link
| There's currently nothing you can do. The documentation contains LaTeX and the extension does not render it as such.
| https://stackoverflow.com/questions/62031935/ |
In PyTorch, how to convert the cuda() related codes into CPU version? | I have some existing PyTorch codes with cuda() as below, while net is a MainModel.KitModel object:
net = torch.load(model_path)
net.cuda()
and
im = cv2.imread(image_path)
im = Variable(torch.from_numpy(im).unsqueeze(0).float().cuda())
I want to test the code in a machine without any GPU, so I want to convert the cuda-code into CPU version. I tried to look at some relevant posts regarding the CPU/GPU switch of PyTorch, but they are related to the usage of device and thus doesn't apply to my case.
| As pointed out by kHarshit in his comment, you can simply replace .cuda() call with .cpu():
net.cpu()
# ...
im = torch.from_numpy(im).unsqueeze(0).float().cpu()
However, this requires changing the code in multiple places every time you want to move from GPU to CPU and vice versa.
To alleviate this difficulty, pytorch has a more "general" method .to().
You may have a device variable defining where you want pytorch to run, this device can also be the CPU (!).
for instance:
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
Once you determined once in your code where you want/can run, simply use .to() to send your model/variables there:
net.to(device)
# ...
im = torch.from_numpy(im).unsqueeze(0).float().to(device)
BTW,
You can use .to() to control the data type (.float()) as well:
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)
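One more caveat, which goes beyond the snippet in the question and assumes the checkpoint was saved from a GPU run: loading it on a CPU-only machine usually also needs map_location, e.g.
net = torch.load(model_path, map_location=torch.device('cpu'))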
PS,
Note that the Variable API has been deprecated and is no longer required.
| https://stackoverflow.com/questions/62035811/ |
How are Vocab and Integer (one hot) representations stored and what does the ('string', int) tuple mean in torchtext.vocab()? | I am trying to train an RNN for binary classification. I have my vocab made from 1000000 words; please find the outputs below...
text_field = torchtext.data.Field(tokenize=word_tokenize)
print(text_field.vocab.freqs.most_common(15))
>>
[('.', 516822), (',', 490533), ('the', 464796), ('to', 298670), ("''", 264416), ('of', 226307), ('I', 224927), ('and', 215722), ('a', 211773), ('is', 180965), ('you', 180359), ('``', 165889), ('that', 156425), ('in', 138038), (':', 132294)]
print(text_field.vocab.itos[:15])
>>
['<unk>', '<pad>', '.', ',', 'the', 'to', "''", 'of', 'I', 'and', 'a', 'is', 'you', '``', 'that']
text_field.vocab.stoi
>>
{'<unk>': 0,'<pad>': 1,'.': 2,',': 3,'the': 4,'to': 5,"''": 6,'of': 7,'I': 8,'and': 9,'a': 10, 'is': 11,'you': 12,'``': 13,'that': 14,'in': 15,....................
The documentation says:
freqs – A collections.Counter object holding the frequencies of tokens in the data used to build the Vocab.
stoi – A collections.defaultdict instance mapping token strings to numerical identifiers.
itos – A list of token strings indexed by their numerical identifiers.
These are not comprehensible to me.
Can someone please explain what these are by giving the intuition behind each of them?
For example, if the is represented by 4, then does it mean that if a sentence contains the word the,
Is it going to be a 1 at location 4? OR
Is it going to be a 1 at position 464796 OR
Is it going to be a 4 at position 464796??
What happens when there are multiple occurrences of the?
| If "the" is represented by 4, then that means that
itos[4] is "the"
stoi["the"] is 4
there is a tuple ('the', <count>) somewhere in freqs, where count is the number of times that 'the' appears in your input text. That count has nothing to do with its numerical identifier 4.
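A hedged illustration using the mappings printed above: when a Field numericalizes a sentence it replaces each token with its index (so the becomes the value 4 at that token's position in the sequence); it does not write anything at position 464796.
idx = text_field.vocab.stoi["the"]      # 4
word = text_field.vocab.itos[idx]       # "the"
count = text_field.vocab.freqs["the"]   # 464796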
| https://stackoverflow.com/questions/62036994/ |
In PyTorch, in what situations does the loss function need to inherit nn.Module? | I am confused about the loss function in PyTorch. Some people define the loss function as a normal Python function while others define it as a class that inherits nn.Module. So I want to know in what situations we need to define the loss function by inheriting nn.Module? Many thanks.
| Generally, inheritance from nn.Module is only necessary when you want to have trainable variables in this module, otherwise it's optional to inherit it.
So same applies to loss functions, if it contains no such variables (which I assume is the major case), no inheritance is needed.
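A minimal sketch of both styles (the weighted variant and its dimensionality are made up purely for illustration):
import torch
import torch.nn as nn

def plain_mse(pred, target):
    # no trainable state, so a plain function is enough
    return ((pred - target) ** 2).mean()

class WeightedMSE(nn.Module):
    # trainable per-dimension weights, so nn.Module is needed
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, pred, target):
        return (self.weight * (pred - target) ** 2).mean()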
| https://stackoverflow.com/questions/62038560/ |
Batchnorm2d Pytorch - Why pass number of channels to batchnorm? | Why do I need to pass the previous number of channels to the batchnorm? The batchnorm should normalize over each datapoint in the batch, so why does it need the number of channels?
| Batch normalisation has learnable parameters, because it includes an affine transformation.
From the documentation of nn.BatchNorm2d:
The mean and standard-deviation are calculated per-dimension over the mini-batches and γ and β are learnable parameter vectors of size C (where C is the input size). By default, the elements of γ are set to 1 and the elements of β are set to 0.
Since the norm is calculated per channel, the parameters γ and β are vectors of size num_channels (one element per channel), which results in an individual scale and shift per channel. As with any other learnable parameter in PyTorch, they need to be created with a fixed size, hence you need to specify the number of channels
batch_norm = nn.BatchNorm2d(10)
# γ
batch_norm.weight.size()
# => torch.Size([10])
# β
batch_norm.bias.size()
# => torch.Size([10])
Note: Setting affine=False does not use any parameters and the number of channels wouldn't be needed, but it is still required in order to have a consistent interface.
| https://stackoverflow.com/questions/62041724/ |
Using Softmax Activation function after calculating loss from BCEWithLogitLoss (Binary Cross Entropy + Sigmoid activation) | I am going through a Binary Classification tutorial using PyTorch and here, the last layer of the network is torch.Linear() with just one neuron (makes sense), which will give us a single output, as pred=network(input_batch)
After that the choice of Loss function is loss_fn=BCEWithLogitsLoss() (which is numerically stable than using the softmax first and then calculating loss) which will apply Softmax function to the output of last layer to give us a probability. so after that, it'll calculate the binary cross entropy to minimize the loss.
loss=loss_fn(pred,true)
My concern is that after all this, the author used torch.round(torch.sigmoid(pred))
Why would that be? I mean I know it'll get the prediction probabilities in the range [0,1] and then round off the values with the default threshold of 0.5.
Isn't it better to use the sigmoid once after the last layer within the network rather using a softmax and a sigmoid at 2 different places given it's a binary classification??
Wouldn't it be better to just
out = self.linear(batch_tensor)
return self.sigmoid(out)
and then calculate the BCE loss and use the argmax() for checking accuracy??
I am just curious whether that can be a valid strategy?
| You seem to be thinking of the binary classification as a multi-class classification with two classes, but that is not quite correct when using the binary cross-entropy approach. Let's start by clarifying the goal of the binary classification before looking at any implementation details.
Technically, there are two classes, 0 and 1, but instead of considering them as two separate classes, you can see them as opposites of each other. For example, you want to classify whether a StackOverflow answer was helpful or not. The two classes would be "helpful" and "not helpful". Naturally, you would simply ask "Was the answer helpful?", the negative aspect is left off, and if that wasn't the case, you could deduce that it was "not helpful". (Remember, it's a binary case, there is no middle ground).
Therefore, your model only needs to predict a single class, but to avoid confusion with the actual two classes, that can be expressed as: The model predicts the probability that the positive case occurs. In context of the previous example: What is the probability that the StackOverflow answer was helpful?
Sigmoid gives you values in the range [0, 1], which are the probabilities. Now you need to decide when the model is confident enough for it to be positive by defining a threshold. To make it balanced, the threshold is 0.5, therefore as long as the probability is greater than 0.5 it is positive (class 1: "helpful") otherwise it's negative (class 0: "not helpful"), which is achieved by rounding (i.e. torch.round(torch.sigmoid(pred))).
After that the choice of Loss function is loss_fn=BCEWithLogitsLoss() (which is numerically stable than using the softmax first and then calculating loss) which will apply Softmax function to the output of last layer to give us a probability.
Isn't it better to use the sigmoid once after the last layer within the network rather using a softmax and a sigmoid at 2 different places given it's a binary classification??
BCEWithLogitsLoss applies Sigmoid not Softmax, there is no Softmax involved at all. From the nn.BCEWithLogitsLoss documentation:
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
By not applying Sigmoid in the model you get a more numerically stable version of the binary cross-entropy, but that means you have to apply the Sigmoid manually if you want to make an actual prediction outside of training.
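A short sketch of that workflow (targets and the usual torch / torch.nn as nn imports are assumed; network and input_batch are the names from the question): train on the raw logits and only apply the Sigmoid when you need actual predictions.
criterion = nn.BCEWithLogitsLoss()

logits = network(input_batch)               # shape [batch_size, 1], no Sigmoid inside the model
loss = criterion(logits, targets.float())   # targets also [batch_size, 1], values 0. or 1.

probs = torch.sigmoid(logits)               # only when predicting
preds = torch.round(probs)                  # threshold at 0.5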
[...] and use the argmax() for checking accuracy??
Again, you're thinking of the multi-class scenario. You only have a single output class, i.e. output has size [batch_size, 1]. Taking argmax of that, will always give you 0, because that is the only available class.
| https://stackoverflow.com/questions/62045186/ |
how to get the index of the subarray in pytorch? | a and b are torch tensors with no repeating elements.
a shape is [n,2] like:
[[1,2]
[2,3]
[4,6]
...]
b is[m,2] like:
[[1,2]
[4,6]
....
]
how to get the index of b in a, example:
a = [[1,2]
[2,4]
[6,7]
]
b = [[1,2]
[6,7]]
the index should be (0, 2); we can use the GPU.
| Here, @jpp's numpy solution is almost your answer. After applying it,
you just need to get the indices using nonzero and flatten the tensor using flatten to get the expected shape.
a = torch.tensor([[1, 2], [2, 4], [6, 7]])
b = torch.tensor([[1, 2], [6, 7]])
# broadcast-compare every row of a against every row of b, keep rows of a that match any row of b
(a[:, None] == b).all(-1).any(-1).nonzero().flatten()
tensor([0, 2])
| https://stackoverflow.com/questions/62047605/ |
Print the values of torch.data.dataset object | I have made my pandas dataframe X_train into a torchtext Dataset z, but the output is
user_id=torchtext.data.RawField()
fields=[('user_id',user_id)]
from torchtext.data import Dataset,Example
z=torchtext.data.Dataset(X_train.user_id,fields)
print(len(z))
print(z)
Output is:
426018
<torchtext.data.dataset.Dataset object at 0x7feffb6a8f98>
How should I print the actual data in my variable object z?
| Could print(list(torch.utils.data.DataLoader(z))) be what you're looking for? The DataLoader signature is:
DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
batch_sampler=None, num_workers=0, collate_fn=None,
pin_memory=False, drop_last=False, timeout=0,
worker_init_fn=None)
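A hedged sketch of that idea applied to the dataset z from the question (the collate_fn simply returns the raw items, since they are not tensors):
from torch.utils.data import DataLoader

loader = DataLoader(z, batch_size=4, collate_fn=lambda batch: batch)
for batch in loader:
    print(batch)  # the raw examples stored in z
    break         # only print the first batch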
| https://stackoverflow.com/questions/62055185/ |
How conv1d pytorch operates on a sequence of characters or frames? | I understand convolution filters when applied to an image (e.g. an 224x224 image with 3 in-channels transformed by 56 total filters of 5x5 conv to a 224x224 image with 56 out-channels). The key is that there are 56 different filters each with 5x5x3 weights that end up producing output image 224x224, 56 (term after comma is output channels).
But I can't seem to understand how conv1d filter works in seq2seq models on a sequence of characters. One of the models i was looking at https://arxiv.org/pdf/1712.05884.pdf has a "post-net layer is comprised of 512 filters with shape 5×1" that operates on a spectrogram frame 80-d (means 80 different float values in the frame), and the result of filter is a 512-d frame.
I don't understand what in_channels, out_channels mean in pytorch conv1d definition as in images I can easily understand what in-channels/out-channels mean, but for sequence of 80-float values frames I'm at loss. What do they mean in the context of seq2seq model like this above?
How do 512, 5x1 filters on 80 float values produce 512 float values?
Wouldn't a 5x1 filter when operating on 80 float values just produce 80 float values (by just taking 5 consecutive values at a time of those 80)? How many weights in total do these 512 filters have?
The layer when printed in pytorch shows up as:
(conv): Conv1d(80, 512, kernel_size=(5,), stride=(1,), padding=(2,))
and the parametes in this layer show up as:
postnet.convolutions.0.0.conv.weight : 512x80x5 = 204800
Shouldn't the weights in this layer instead be 512*5*1 as it only has 512 filters each of which is 5x1?
| Intro explanation
Basically Conv1d is just like Conv2d but instead of "sliding" the rectangle window across the image (say 3x3 for kernel_size=3) you "slide" across the vector (say of length 256) with kernel (say of size 3). This is the case for in_channels and out_channels equal to 1 which is the basic one.
Imagine Conv1d sliding across 3 in_channels (x-axis, y-axis, z-axis) over the time steps.
You could add depth to the kernel (just like you did for 2D convolution with 5x5x3 cube), which would be 5x3 as well (5 is the kernel size, 3 is the number of in_channels). Now there could be out_channels of those squares (e.g. 56 out_channels) so the final produced sequence is 56 x sequence_length.
Questions
[...] post-net layer is comprised of 512 filters with shape 5×1" that
operates on a spectrogram frame 80-d (means 80 different float values
in the frame), and the result of filter is a 512-d frame.
So your input is 80d (instead of 3 axes like above), kernel_size is the same (5) and out_channels is 512. So the input could look something like this: [64, 80, 256] (for [batch, in_channels, length]) and output would be [64, 512, 256] (provided padding of 2 was used on both sides).
I don't understand what in_channels, out_channels mean in pytorch
conv1d definition as in images I can easily understand what
in-channels/out-channels mean, but for sequence of 80-float values
frames I'm at loss. What do they mean in the context of seq2seq model
like this above?
I guess that was answered above. Main point is: the sequence isn't 80-float values! Sequence can be of any length (just like image can be of any size when you pass it to the convolution), here in_channels is 80.
How do 512, 5x1 filters on 80 float values produce 512 float values?
512 x sequence_length values are produced on 80 x sequence_length inputs.
Shouldn't the weights in this layer instead be 512*5*1 as it only has
512 filters each of which is 5x1?
In PyTorch, in your case, weights would be of shape torch.Size([512, 80, 5]). They could be torch.Size([512, 1, 5]) if you have one input channel, but in this case there are 80 of them.
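A quick shape check makes this concrete (the batch size and sequence length are arbitrary illustration values):
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=80, out_channels=512, kernel_size=5, stride=1, padding=2)
x = torch.randn(64, 80, 256)   # [batch, in_channels, length]
print(conv(x).shape)           # torch.Size([64, 512, 256])
print(conv.weight.shape)       # torch.Size([512, 80, 5])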
| https://stackoverflow.com/questions/62055697/ |
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation? | I am using pytorch-1.5 to do some GAN tests. My code is a very simple GAN that just fits the sin(x) function:
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# Hyper Parameters
BATCH_SIZE = 64
LR_G = 0.0001
LR_D = 0.0001
N_IDEAS = 5
ART_COMPONENTS = 15
PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)])
def artist_works(): # painting from the famous artist (real target)
r = 0.02 * np.random.randn(1, ART_COMPONENTS)
paintings = np.sin(PAINT_POINTS * np.pi) + r
paintings = torch.from_numpy(paintings).float()
return paintings
G = nn.Sequential( # Generator
nn.Linear(N_IDEAS, 128), # random ideas (could from normal distribution)
nn.ReLU(),
nn.Linear(128, ART_COMPONENTS), # making a painting from these random ideas
)
D = nn.Sequential( # Discriminator
nn.Linear(ART_COMPONENTS, 128), # receive art work either from the famous artist or a newbie like G
nn.ReLU(),
nn.Linear(128, 1),
nn.Sigmoid(), # tell the probability that the art work is made by artist
)
opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)
for step in range(10000):
artist_paintings = artist_works() # real painting from artist
G_ideas = torch.randn(BATCH_SIZE, N_IDEAS) # random ideas
G_paintings = G(G_ideas) # fake painting from G (random ideas)
prob_artist0 = D(artist_paintings) # D try to increase this prob
prob_artist1 = D(G_paintings) # D try to reduce this prob
D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
G_loss = torch.mean(torch.log(1. - prob_artist1))
opt_D.zero_grad()
D_loss.backward(retain_graph=True) # reusing computational graph
opt_D.step()
opt_G.zero_grad()
G_loss.backward()
opt_G.step()
But when i runing it got this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Is there something wrong with my code?
| This happens because the opt_D.step() modifies the parameters of your discriminator inplace. But these parameters are required to compute the gradient for the generator. You can fix this by changing your code to:
for step in range(10000):
artist_paintings = artist_works() # real painting from artist
G_ideas = torch.randn(BATCH_SIZE, N_IDEAS) # random ideas
G_paintings = G(G_ideas) # fake painting from G (random ideas)
prob_artist1 = D(G_paintings) # G tries to fool D
G_loss = torch.mean(torch.log(1. - prob_artist1))
opt_G.zero_grad()
G_loss.backward()
opt_G.step()
prob_artist0 = D(artist_paintings) # D try to increase this prob
# detach here to make sure we don't backprop in G that was already changed.
prob_artist1 = D(G_paintings.detach()) # D try to reduce this prob
D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
opt_D.zero_grad()
D_loss.backward(retain_graph=True) # reusing computational graph
opt_D.step()
You can find more about this issue here https://github.com/pytorch/pytorch/issues/39141
| https://stackoverflow.com/questions/62061703/ |
Choice of loss function | Here is a word2vec implementation:
%reset -f
import torch
from torch.autograd import Variable
import numpy as np
import torch.functional as F
import torch.nn.functional as F
corpus = [
'this test',
'this separate test'
]
def get_input_layer(word_idx):
x = torch.zeros(vocabulary_size).float()
x[word_idx] = 1.0
return x
def tokenize_corpus(corpus):
tokens = [x.split() for x in corpus]
return tokens
tokenized_corpus = tokenize_corpus(corpus)
vocabulary = []
for sentence in tokenized_corpus:
for token in sentence:
if token not in vocabulary:
vocabulary.append(token)
word2idx = {w: idx for (idx, w) in enumerate(vocabulary)}
idx2word = {idx: w for (idx, w) in enumerate(vocabulary)}
window_size = 2
idx_pairs = []
# for each sentence
for sentence in tokenized_corpus:
indices = [word2idx[word] for word in sentence]
# for each word, threated as center word
for center_word_pos in range(len(indices)):
# for each window position
for w in range(-window_size, window_size + 1):
context_word_pos = center_word_pos + w
# make soure not jump out sentence
if context_word_pos < 0 or context_word_pos >= len(indices) or center_word_pos == context_word_pos:
continue
context_word_idx = indices[context_word_pos]
idx_pairs.append((indices[center_word_pos], context_word_idx))
idx_pairs = np.array(idx_pairs) # it will be useful to have this as numpy array
vocabulary_size = len(vocabulary)
embedding_dims = 4
W1 = Variable(torch.randn(embedding_dims, vocabulary_size).float(), requires_grad=True)
W2 = Variable(torch.randn(vocabulary_size, embedding_dims).float(), requires_grad=True)
num_epochs = 1
learning_rate = 0.001
for epo in range(num_epochs):
loss_val = 0
for data, target in idx_pairs:
x = Variable(get_input_layer(data)).float()
y_true = Variable(torch.from_numpy(np.array([target])).long())
z1 = torch.matmul(W1, x)
z2 = torch.matmul(W2, z1)
log_softmax = F.log_softmax(z2, dim=0)
loss = F.nll_loss(log_softmax.view(1,-1), y_true)
print(float(loss))
loss_val += loss.data.item()
loss.backward()
W1.data -= learning_rate * W1.grad.data
W2.data -= learning_rate * W2.grad.data
W1.grad.data.zero_()
W2.grad.data.zero_()
print(W1.shape)
print(W2.shape)
print(f'Loss at epo {epo}: {loss_val/len(idx_pairs)}')
This prints:
0.33185482025146484
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.041481852531433105
3.302438735961914
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.45428669452667236
2.3144636154174805
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.7435946464538574
0.33418864011764526
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.7853682264685631
1.0644199848175049
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.9184207245707512
0.4970806837081909
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 0.980555810034275
3.2861199378967285
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 1.3913208022713661
6.170125961303711
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 2.16258654743433
Modifying the code to use mse_loss, change y_true to float:
y_true = Variable(torch.from_numpy(np.array([target])).float())
Use mse_loss :
loss = F.mse_loss(log_softmax.view(1,-1), y_true)
Incorporated updates:
for epo in range(num_epochs):
loss_val = 0
for data, target in idx_pairs:
x = Variable(get_input_layer(data)).float()
y_true = Variable(torch.from_numpy(np.array([target])).float())
z1 = torch.matmul(W1, x)
z2 = torch.matmul(W2, z1)
log_softmax = F.log_softmax(z2, dim=0)
loss = F.mse_loss(log_softmax.view(1,-1), y_true)
print(float(loss))
loss_val += loss.data.item()
loss.backward()
W1.data -= learning_rate * W1.grad.data
W2.data -= learning_rate * W2.grad.data
W1.grad.data.zero_()
W2.grad.data.zero_()
print(W1.shape)
print(W2.shape)
print(f'Loss at epo {epo}: {loss_val/len(idx_pairs)}')
Output is now:
41.75048828125
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 5.21881103515625
16.929386138916016
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 7.334984302520752
50.63690948486328
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 13.664597988128662
36.21110534667969
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 18.190986156463623
5.304859638214111
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 18.854093611240387
9.802173614501953
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 20.07936531305313
15.515325546264648
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 22.018781006336212
30.408292770385742
torch.Size([4, 3])
torch.Size([3, 4])
Loss at epo 0: 25.81981760263443
-c:12: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 3])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
Why does mse loss not work as well as nll loss ? Is it related to the PyTorch warning:
Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 3])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
?
| The input and target must have the same size for the nn.MSELoss, because it is calculated by squaring the difference of the i-th element of the input and the i-th element of the target, i.e. mse_i = (input_i - target_i) ** 2.
Furthermore, your targets are non-negative integers in the range [0, vocabulary_size - 1], but you use log-softmax, which has values in the range [-∞, 0]. With MSE the idea is to get the prediction value to the same value as the target, but the only overlap of the two intervals is 0. That means that every class besides 0 is unreachable.
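Just to make the size requirement concrete, here is a hedged sketch of what a shape-matched target would look like (variables taken from the question); it silences the warning but does not fix the range mismatch, so it is still not a good idea:
y_onehot = torch.zeros(1, vocabulary_size)
y_onehot[0, target] = 1.0
loss = F.mse_loss(log_softmax.view(1, -1), y_onehot)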
MSE is a regression loss function and is simply not appropriate in this situation, since you are dealing with categorical data.
| https://stackoverflow.com/questions/62062380/ |
TypeError: forward() takes 1 positional argument but 2 were given | class DeConv2d(nn.Module):
def __init__(self, in_channel, out_channel, kernel_size, stride, padding, dilation):
super().__init__()
self.up = nn.Upsample(scale_factor=2, mode='nearest')
self.conv = nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, \
stride=stride, padding=padding, dilation=dilation)
def forward(self, x):
output = self.up(x)
output = self.conv(output)
return output
class EncoderDecoder(nn.Module):
def __init__(self, pretrained_net, n_class):
super().__init__()
self.n_class = n_class
self.pretrained_net = pretrained_net
self.relu = nn.ReLU(inplace=True)
self.deconv1 = DeConv2d(512, 512, kernel_size=3, stride=1, padding=1, dilation=1)
self.bn1 = nn.BatchNorm2d(512)
self.deconv2 = DeConv2d(512, 256, kernel_size=3, stride=1, padding=1, dilation=1)
self.bn2 = nn.BatchNorm2d(256)
self.deconv3 = DeConv2d(256, 128, kernel_size=3, stride=1, padding=1, dilation=1)
self.bn3 = nn.BatchNorm2d(128)
self.deconv4 = DeConv2d(128, 64, kernel_size=3, stride=1, padding=1, dilation=1)
self.bn4 = nn.BatchNorm2d(64)
self.classifier = nn.Conv2d(64, n_class, kernel_size=1)
def forward(self, x):
output=self.pretrained_net.layers(x)
output=self.relu(self.deconv1(output))
output=self.bn1(output)
output=self.relu(self.deconv2(output))
output=self.bn2(output)
output=self.relu(self.deconv3(output))
output=self.bn3(output)
output=self.relu(self.deconv4(output))
output=self.bn4(output)
output=self.classifier(output)
return output
This is my code and I don't know why the type error exists. Does somebody know how to fix this problem?
| When you create a class and define a method with a self argument inside it, self is automatically filled with the instance. For example:
class item():  # create a random class
    def __init__(self):
        self.var = 0
    def fun(self, x):
        self.var += x
n = item()
And you can try adding:
n.fun(3)
print(n.var)
returns 3
The self argument is automatically filled with the instance itself.
| https://stackoverflow.com/questions/62064539/ |
Unknown behaviour of HOOKS in PyTorch | I have a straightforward and simple CNN below,
# creat a dummy deep net
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,2, kernel_size=3, stride=1, padding=1, bias=True)
self.conv2 = nn.Conv2d(2,3, kernel_size=3, stride=1, padding=1, bias=True)
self.conv3 = nn.Conv2d(3,1, kernel_size=3, stride=1, padding=1, bias=True)
self.seq = nn.Sequential(
nn.Conv2d(1,5, kernel_size=3, stride=1, padding=1, bias=True),
nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(5,1, kernel_size=3, stride=1, padding=1, bias=True),
)
self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
out = self.relu(self.conv1(x))
out = self.conv3(self.conv2(out))
out = out + x
out = self.seq(x)
return out
5 hooks have been applied, one to each layer, for the forward pass.
Hooked 0 to Conv2d(1, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Hooked 1 to Conv2d(2, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Hooked 2 to Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
Hooked 3 to Sequential(
(0): Conv2d(1, 5, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): LeakyReLU(negative_slope=0.2, inplace=True)
(2): Conv2d(5, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
Hooked 4 to LeakyReLU(negative_slope=0.2, inplace=True)
These hooks have been created using the following class
# ------------------The Hook class begins to calculate each layer stats
class Hook():
def __init__(self, module, backward=False):
if backward==False:
self.hook = module.register_forward_hook(self.hook_fn)
else:
self.hook = module.register_backward_hook(self.hook_fn)
self.inputMean = []
self.outputMean = []
def hook_fn(self, module, input, output):
self.inputMean.append(input[0][0,...].mean().item())#calculate only for 1st image in the batch
print('\nIn hook class input {}'.format(input[0].size()))
self.outputMean.append(output[0][0,...].mean().item())
print('In hook class outout {}'.format(output[0].size()))
# create hooks on each layer
hookF = []
for i,layer in enumerate(list(net.children())):
print('Hooked to {}'.format(layer))
hookF.append(Hook(layer))
Please note that between Hook 1 and Hook 2 there is no ReLU: the computation is
self.conv3(self.conv2(out)). Thus the OUTPUT of HOOK 1 is the INPUT to HOOK 2 and they should be identical. BUT THIS DOES NOT TURN OUT TO BE THE CASE. WHY? Below is the output for HOOK 1 and HOOK 2
Hook of layer 1 (HOOK on layer 1 which is self.conv2)
... OutputMean: [0.2381615787744522, 0.2710852324962616, 0.30706286430358887, 0.26064932346343994, 0.24395985901355743]
Hook of layer 2 (HOOK on layer 2 which is self.conv3)
InputMean: [0.13127394020557404, 0.1611362248659134, 0.1457807868719101, 0.17380955815315247, 0.1537724733352661], OutputMean: ...
These two values should have been the same but do not turn out to be.
------ The Full code is shown below -------
import torch
import torch.nn as nn
# creat a dummy deep net
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1,2, kernel_size=3, stride=1, padding=1, bias=True)
self.conv2 = nn.Conv2d(2,3, kernel_size=3, stride=1, padding=1, bias=True)
self.conv3 = nn.Conv2d(3,1, kernel_size=3, stride=1, padding=1, bias=True)
self.seq = nn.Sequential(
nn.Conv2d(1,5, kernel_size=3, stride=1, padding=1, bias=True),
nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(5,1, kernel_size=3, stride=1, padding=1, bias=True),
)
self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
out = self.relu(self.conv1(x))
out = self.conv3(self.conv2(out))
out = out + x
out = self.seq(x)
return out
net = Net()
print(net)
criterion = nn.MSELoss()
# ------------------The Hook class begins to calculate each layer stats
class Hook():
def __init__(self, module, backward=False):
if backward==False:
self.hook = module.register_forward_hook(self.hook_fn)
else:
self.hook = module.register_backward_hook(self.hook_fn)
self.inputMean = []
self.outputMean = []
def hook_fn(self, module, input, output):
self.inputMean.append(input[0][0,...].mean().item())#calculate only for 1st image in the batch
print('\nIn hook class input {}'.format(input[0].size()))
self.outputMean.append(output[0][0,...].mean().item())
print('In hook class outout {}'.format(output[0].size()))
# create hooks on each layer
hookF = []
for i,layer in enumerate(list(net.children())):
print('Hooked to {}'.format(layer))
hookF.append(Hook(layer))
optimizer = torch.optim.Adam(net.parameters())
# Do 5 forward pass
for _ in range(5):
print('Iteration --------')
data = torch.rand(2,1,10,10)*10
print('Input mean is {}'.format(data[0,...].mean()))
target = data.clone()
out = net(data)
loss = criterion(out, target)
print('backward')
loss.backward()
optimizer.step()
optimizer.zero_grad()
for i,h in enumerate(hookF):
print('\n Hook of layer {}'.format(i))
print('InputMean: {}, OutputMean: {}'.format(h.inputMean, h.outputMean))
h.hook.remove()
| The problem is that in your Conv2d layer input is a tuple and output is a torch.Tensor. Therefore output[0][0,...] is selecting the first item from dim 0 in the tensor whereas input[0][0,...] is selecting the first item from the tuple.
You just need to change output[0][0,...] to output[0,...].
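For reference, a sketch of the corrected hook body (names kept from the question):
def hook_fn(self, module, input, output):
    self.inputMean.append(input[0][0, ...].mean().item())   # input is a tuple of tensors
    self.outputMean.append(output[0, ...].mean().item())    # output is already a tensor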
| https://stackoverflow.com/questions/62065848/ |