instruction | input | output | source
---|---|---|---|
Differences in SciKit Learn, Keras, or Pytorch | Are these libraries fairly interchangeable?
Looking here, https://stackshare.io/stackups/keras-vs-pytorch-vs-scikit-learn, it seems the major difference is the underlying framework (at least for PyTorch).
| Yes, there is a major difference.
SciKit Learn is a general machine learning library, built on top of NumPy. It features a lot of machine learning algorithms such as support vector machines and random forests, as well as many utilities for general pre- and postprocessing of data. It is not a neural network framework.
PyTorch is a deep learning framework, consisting of
A vectorized math library similar to NumPy, but with GPU support and a lot of neural network related operations (such as softmax or various kinds of activations)
Autograd - an algorithm which can automatically calculate gradients of your functions, defined in terms of the basic operations
Gradient-based optimization routines for large scale optimization, dedicated to neural network optimization
Neural-network related utility functions
Keras is a higher-level deep learning framework, which abstracts many details away, making code simpler and more concise than in PyTorch or TensorFlow, at the cost of limited hackability. It abstracts away the computation backend, which can be TensorFlow, Theano or CNTK. It does not support a PyTorch backend, but that is not an inconceivable idea - you can think of Keras as a simplified and streamlined subset of what the lower-level frameworks above provide.
In short, if you are going with "classic", non-neural algorithms, neither PyTorch nor Keras will be useful for you. If you're doing deep learning, scikit-learn may still be useful for its utility part; aside from it you will need the actual deep learning framework, where you can choose between Keras and PyTorch but you're unlikely to use both at the same time. This is very subjective, but in my view, if you're working on a novel algorithm, you're more likely to go with PyTorch (or TensorFlow or some other lower-level framework) for flexibility. If you're adapting a known and tested algorithm to a new problem setting, you may want to go with Keras for its greater simplicity and lower entry level.
| https://stackoverflow.com/questions/54527439/ |
How to Tensorize loss of multiple 3D Keypoints | I have a tensor of ground truth values of 3D points of G=[18000x3], and an output from my network of the same size O=[18000x3].
I need to compute a loss so that I basically have the square root of the distance between each 3D point, summed over all keypoints and normalized over 18000. How do I write this efficiently?
| Just write the expression you propose using the vectorized operations provided by PyTorch. In this case
loss = (O - G).pow(2).sum(axis=1).sqrt().mean()
Check out pow, sum, sqrt and mean.
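A minimal runnable sketch of the same idea, with random tensors standing in for G and O (the 18000x3 shape is taken from the question):
import torch

G = torch.randn(18000, 3)  # ground-truth keypoints
O = torch.randn(18000, 3)  # network output
# per-keypoint Euclidean distance, averaged over all 18000 keypoints
loss = (O - G).pow(2).sum(dim=1).sqrt().mean()
print(loss)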
| https://stackoverflow.com/questions/54543595/ |
Need to change GPU option to CPU in a python pytorch based code | The code basically trains the usual MNIST image dataset but it does the training on a GPU. I need to change this option so the code trains the model using my laptop computer. I need to substitute the .cuda() at the second line for the equivalent in CPU.
I know there are many examples online on how to train neural networks using the MNIST database but what is special about this code is that it does the optimization using a PID controller (commonly used in industry) and I need the code as part of my research.
net = Net(input_size, hidden_size, num_classes)
net.cuda()
net.train()
#Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = PIDOptimizer(net.parameters(), lr=learning_rate, weight_decay=0.0001, momentum=0.9, I=I, D=D)
# Train the Model
for epoch in range(num_epochs):
train_loss_log = AverageMeter()
train_acc_log = AverageMeter()
val_loss_log = AverageMeter()
val_acc_log = AverageMeter()
for i, (images, labels) in enumerate(train_loader):
# Convert torch tensor to Variable
images = Variable(images.view(-1, 28*28).cuda())
labels = Variable(labels.cuda())
Would need to be able to run the code without using the .cuda() option which is for training using a GPU. Need to run it on my PC.
Here's the source code in case needed.
https://github.com/tensorboy/PIDOptimizer
Many thanks, community!
| It is better to move up to the latest PyTorch (1.0.x).
With the latest PyTorch, it is easier to manage the "device".
Below is a simple example.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#Now send existing model to device.
model_ft = model_ft.to(device)
#Now send input to device and so on.
inputs = inputs.to(device)
With this construct, your code automatically uses the appropriate device.
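Applied to the snippet from the question, a sketch could look like the following (Net, PIDOptimizer, train_loader and the hyper-parameters are assumed to be defined as in the question; Variable wrappers are no longer needed in PyTorch 1.x):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

net = Net(input_size, hidden_size, num_classes).to(device)  # replaces net.cuda()
net.train()

criterion = nn.CrossEntropyLoss()
optimizer = PIDOptimizer(net.parameters(), lr=learning_rate, weight_decay=0.0001, momentum=0.9, I=I, D=D)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.view(-1, 28*28).to(device)  # replaces .cuda()
        labels = labels.to(device)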
Hope this helps!
| https://stackoverflow.com/questions/54544986/ |
Element wise calculation breaks autograd | I am using PyTorch to calculate the loss for a logistic regression (I know PyTorch can do this automatically, but I have to implement it myself). My function is defined below, but the cast to torch.tensor breaks autograd and gives me w.grad = None. I'm new to PyTorch, so I'm sorry.
logistic_loss = lambda X,y,w: torch.tensor([torch.log(1 + torch.exp(-y[i] * torch.matmul(w, X[i,:]))) for i in range(X.shape[0])], requires_grad=True)
| Your post isn't very clear on details and this is a monster of a one-liner. I first reworked it to make a minimal, complete, verifiable example. Please correct me if I misunderstood your intentions and please do it yourself next time.
import torch
# unroll the one-liner to have an easier time understanding what's going on
def logistic_loss(X, y, w):
elementwise = []
for i in range(X.shape[0]):
mm = torch.matmul(w, X[i, :])
exp = torch.exp(-y[i] * mm)
elementwise.append(torch.log(1 + exp))
return torch.tensor(elementwise, requires_grad=True)
# I assume that's the expected dimensions of your input
X = torch.randn(5, 30, requires_grad=True)
y = torch.randn(5)
w = torch.randn(30)
# I assume you backpropagate from a reduced version
# of your sum, because you can't call .backward on multi-dimensional
# tensors
loss = logistic_loss(X, y, w).mean()
loss.backward()
print(X.grad)
The simplest solution to your problem is to replace torch.tensor(elementwise, requires_grad=True) with torch.stack(elementwise). You can think of torch.tensor as a constructor for entirely new tensors, if your tensor is more of a result of some mathematical expression, you should use operations like torch.stack or torch.cat.
That being said, this code is still wildly inefficient because you do manual looping over i. Instead, you could write simply
def logistic_loss_vectorized(X, y, w):
mm = torch.matmul(X, w)
exp = torch.exp(-y * mm)
return torch.log(1 + exp)
which is mathematically equivalent, but will be much faster in practice, because it allows for better parallelization due to lack of explicit looping.
Note that there is still a numerical issue with this code - you're taking a logarithm of an exponential, but the intermediate result, called exp, is likely to attain very high values, causing loss of precision. There are workarounds for that, which is why the loss functions provided by PyTorch are preferable.
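One common workaround, sketched below, is to note that log(1 + exp(z)) is exactly the softplus function, which torch.nn.functional.softplus evaluates in a numerically stable way:
import torch.nn.functional as F

def logistic_loss_stable(X, y, w):
    mm = torch.matmul(X, w)
    # softplus(z) = log(1 + exp(z)), computed without overflow for large z
    return F.softplus(-y * mm)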
| https://stackoverflow.com/questions/54546058/ |
Have I implemented the learning rate finder correctly? | Using the implementation of lr_finder from https://github.com/davidtvs/pytorch-lr-finder, based on the paper https://arxiv.org/abs/1506.01186
Without the learning rate finder :
from __future__ import print_function, with_statement, division
import torch
from tqdm.autonotebook import tqdm
from torch.optim.lr_scheduler import _LRScheduler
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.utils.data as data_utils
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from matplotlib import pyplot
from pandas import DataFrame
import torchvision.datasets as dset
import os
import torch.nn.functional as F
import time
import random
import pickle
from sklearn.metrics import confusion_matrix
import pandas as pd
import sklearn
class LRFinder(object):
"""Learning rate range test.
The learning rate range test increases the learning rate in a pre-training run
between two boundaries in a linear or exponential manner. It provides valuable
information on how well the network can be trained over a range of learning rates
and what is the optimal learning rate.
Arguments:
model (torch.nn.Module): wrapped model.
optimizer (torch.optim.Optimizer): wrapped optimizer where the defined learning rate
is assumed to be the lower boundary of the range test.
criterion (torch.nn.Module): wrapped loss function.
device (str or torch.device, optional): a string ("cpu" or "cuda") with an
optional ordinal for the device type (e.g. "cuda:X", where X is the ordinal).
Alternatively, can be an object representing the device on which the
computation will take place. Default: None, uses the same device as `model`.
Example:
>>> lr_finder = LRFinder(net, optimizer, criterion, device="cuda")
>>> lr_finder.range_test(dataloader, end_lr=100, num_iter=100)
Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
fastai/lr_find: https://github.com/fastai/fastai
"""
def __init__(self, model, optimizer, criterion, device=None):
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.history = {"lr": [], "loss": []}
self.best_loss = None
# Save the original state of the model and optimizer so they can be restored if
# needed
self.model_state = model.state_dict()
self.model_device = next(self.model.parameters()).device
self.optimizer_state = optimizer.state_dict()
# If device is None, use the same as the model
if device:
self.device = device
else:
self.device = self.model_device
def reset(self):
"""Restores the model and optimizer to their initial states."""
self.model.load_state_dict(self.model_state)
self.model.to(self.model_device)
self.optimizer.load_state_dict(self.optimizer_state)
def range_test(
self,
train_loader,
val_loader=None,
end_lr=10,
num_iter=100,
step_mode="exp",
smooth_f=0.05,
diverge_th=5,
):
"""Performs the learning rate range test.
Arguments:
train_loader (torch.utils.data.DataLoader): the training set data loader.
val_loader (torch.utils.data.DataLoader, optional): if `None` the range test
will only use the training loss. When given a data loader, the model is
evaluated after each iteration on that dataset and the evaluation loss
is used. Note that in this mode the test takes significantly longer but
generally produces more precise results. Default: None.
end_lr (float, optional): the maximum learning rate to test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
step_mode (str, optional): one of the available learning rate policies,
linear or exponential ("linear", "exp"). Default: "exp".
smooth_f (float, optional): the loss smoothing factor within the [0, 1[
interval. Disabled if set to 0, otherwise the loss is smoothed using
exponential smoothing. Default: 0.05.
diverge_th (int, optional): the test is stopped when the loss surpasses the
threshold: diverge_th * best_loss. Default: 5.
"""
# Reset test results
self.history = {"lr": [], "loss": []}
self.best_loss = None
# Move the model to the proper device
self.model.to(self.device)
# Initialize the proper learning rate policy
if step_mode.lower() == "exp":
lr_schedule = ExponentialLR(self.optimizer, end_lr, num_iter)
elif step_mode.lower() == "linear":
lr_schedule = LinearLR(self.optimizer, end_lr, num_iter)
else:
raise ValueError("expected one of (exp, linear), got {}".format(step_mode))
if smooth_f < 0 or smooth_f >= 1:
raise ValueError("smooth_f is outside the range [0, 1[")
# Create an iterator to get data batch by batch
iterator = iter(train_loader)
for iteration in tqdm(range(num_iter)):
# Get a new set of inputs and labels
try:
inputs, labels = next(iterator)
except StopIteration:
iterator = iter(train_loader)
inputs, labels = next(iterator)
# Train on batch and retrieve loss
loss = self._train_batch(inputs, labels)
if val_loader:
loss = self._validate(val_loader)
# Update the learning rate
lr_schedule.step()
self.history["lr"].append(lr_schedule.get_lr()[0])
# Track the best loss and smooth it if smooth_f is specified
if iteration == 0:
self.best_loss = loss
else:
if smooth_f > 0:
loss = smooth_f * loss + (1 - smooth_f) * self.history["loss"][-1]
if loss < self.best_loss:
self.best_loss = loss
# Check if the loss has diverged; if it has, stop the test
self.history["loss"].append(loss)
if loss > diverge_th * self.best_loss:
print("Stopping early, the loss has diverged")
break
print("Learning rate search finished. See the graph with {finder_name}.plot()")
def _train_batch(self, inputs, labels):
# Set model to training mode
# self.model.train()
# Move data to the correct device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# Forward pass
self.optimizer.zero_grad()
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
# Backward pass
loss.backward()
self.optimizer.step()
return loss.item()
def _validate(self, dataloader):
# Set model to evaluation mode and disable gradient computation
running_loss = 0
self.model.eval()
with torch.no_grad():
for inputs, labels in dataloader:
# Move data to the correct device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# Forward pass and loss computation
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
running_loss += loss.item() * inputs.size(0)
return running_loss / len(dataloader.dataset)
def plot(self, skip_start=10, skip_end=5, log_lr=True):
"""Plots the learning rate range test.
Arguments:
skip_start (int, optional): number of batches to trim from the start.
Default: 10.
skip_end (int, optional): number of batches to trim from the end.
Default: 5.
log_lr (bool, optional): True to plot the learning rate in a logarithmic
scale; otherwise, plotted in a linear scale. Default: True.
"""
if skip_start < 0:
raise ValueError("skip_start cannot be negative")
if skip_end < 0:
raise ValueError("skip_end cannot be negative")
# Get the data to plot from the history dictionary. Also, handle skip_end=0
# properly so the behaviour is the expected
lrs = self.history["lr"]
losses = self.history["loss"]
if skip_end == 0:
lrs = lrs[skip_start:]
losses = losses[skip_start:]
else:
lrs = lrs[skip_start:-skip_end]
losses = losses[skip_start:-skip_end]
# Plot loss as a function of the learning rate
plt.plot(lrs, losses)
if log_lr:
plt.xscale("log")
plt.xlabel("Learning rate")
plt.ylabel("Loss")
plt.show()
class LinearLR(_LRScheduler):
"""Linearly increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the final learning rate which is the upper
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(LinearLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr + r * (self.end_lr - base_lr) for base_lr in self.base_lrs]
class ExponentialLR(_LRScheduler):
"""Exponentially increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the final learning rate which is the upper
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(ExponentialLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]
trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
root = './data'
if not os.path.exists(root):
os.mkdir(root)
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)
batch_size = 64
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
dataset=test_set,
batch_size=batch_size,
shuffle=True)
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(28*28, 500)
self.fc2 = nn.Linear(500, 256)
self.fc3 = nn.Linear(256, 10)
def forward(self, x):
x = x.view(-1, 28*28)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
num_epochs = 2
random_sample_size = 200
# Hyper-parameters
input_size = 100
hidden_size = 100
num_classes = 10
learning_rate = .0001
# Device configuration
device = 'cpu'
model = NeuralNet().to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
# lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
# lr_finder.plot()
# optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])
# print(lr_finder.history['lr'])
predicted_test = []
labels_l = []
actual_values = []
predicted_values = []
N = len(train_loader)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Move tensors to the configured device
# images = images.reshape(-1, 50176).to(device)
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
predicted = outputs.data.max(1)[1]
predicted_test.append(predicted.cpu().numpy())
labels_l.append(labels.cpu().numpy())
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
predicted_values.append(np.concatenate(predicted_test).ravel())
actual_values.append(np.concatenate(labels_l).ravel())
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
print('training accuracy : ', 100 * len((np.where(np.array(predicted_values[0])==(np.array(actual_values[0])))[0])) / len(actual_values[0]))
Results :
Epoch [1/2], Step [938/938], Loss: 0.5374
training accuracy : 84.09833333333333
Epoch [2/2], Step [938/938], Loss: 0.2055
training accuracy : 84.09833333333333
With the learning rate finder code being uncommented :
The previously commented-out code is now uncommented:
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
lr_finder.plot()
optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])
print(lr_finder.history['lr'])
the model achieves results after two epochs :
Epoch [1/2], Step [938/938], Loss: 3.7311
training accuracy : 9.93
Epoch [2/2], Step [938/938], Loss: 3.5106
training accuracy : 9.93
You can see the training accuracy is much lower with the learning rate finder: 9.93 versus 84.09833333333333. Shouldn't the learning rate finder find a learning rate that allows the model to achieve greater training set accuracy?
| The code looks like it's using the implementation correctly. To answer your last question,
You can see the training accuracy is much lower with the learning rate finder: 9.93 versus 84.09833333333333. Shouldn't the learning rate finder find a learning rate that allows the model to achieve greater training set accuracy?
Not really. A few points
You are using Adam, which scales the learning rate adaptively for each parameter in the network. The initial learning rate will matter less, as opposed to traditional SGD, for example. The original authors of Adam write
The hyper-parameters have intuitive interpretations and typically require little tuning. [1]
A well tuned learning rate should make your network converge faster (i.e in less epochs). It might still find the same local minima as a higher learning rate, but faster. The risk with too high learning rates is that you overshoot the local minima and instead find a poor one. With a tiny learning rate you should get the best training accuracy, but it will take very long.
You are training your model for only 2 epochs. If I had to guess, the algorithm has found that a small learning rate leads to good optima, but since it is small, it requires more time to converge. To test this theory, I would recommend running your training longer.
All that said, your time is probably better spent using Adam with default parameters and directing your attention elsewhere, such as modelling choices (layers, nodes, activations, etc). In my experience standard Adam works really well in most cases.
[1] https://arxiv.org/abs/1412.6980
| https://stackoverflow.com/questions/54553388/ |
LSTM's expected hidden state dimensions doesn't take batch size into account | I have this decoder model, which is supposed to take batches of sentence embeddings (batchsize = 50, hidden size=300) as input and output a batch of one hot representation of predicted sentences:
class DecoderLSTMwithBatchSupport(nn.Module):
# Your code goes here
def __init__(self, embedding_size,batch_size, hidden_size, output_size):
super(DecoderLSTMwithBatchSupport, self).__init__()
self.hidden_size = hidden_size
self.batch_size = batch_size
self.lstm = nn.LSTM(input_size=embedding_size,num_layers=1, hidden_size=hidden_size, batch_first=True)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, my_input, hidden):
print(type(my_input), type(hidden))
output, hidden = self.lstm(my_input, hidden)
output = self.softmax(self.out(output[0]))
return output, hidden
def initHidden(self):
return Variable(torch.zeros(1, self.batch_size, self.hidden_size)).cuda()
However, when I run it using:
decoder=DecoderLSTMwithBatchSupport(vocabularySize,batch_size, 300, vocabularySize)
decoder.cuda()
decoder_input=np.zeros([batch_size,vocabularySize])
for i in range(batch_size):
decoder_input[i] = embeddings[SOS_token]
decoder_input=Variable(torch.from_numpy(decoder_input)).cuda()
decoder_hidden = (decoder.initHidden(),decoder.initHidden())
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input.view(1,batch_size,-1), decoder_hidden)
I get the following error:
Expected hidden[0] size (1, 1, 300), got (1, 50, 300)
What am I missing in order to make the model expect batched hidden states?
| When you create the LSTM, the flag batch_first is not necessary, because it assumes a different shape of your input. From the docs:
If True, then the input and output tensors are provided as (batch,
seq, feature). Default: False
change the LSTM creation to:
self.lstm = nn.LSTM(input_size=embedding_size, num_layers=1, hidden_size=hidden_size)
Also, there is a type error. When you create decoder_input using torch.from_numpy() it has dtype=torch.float64, while the model's parameters have the default dtype=torch.float32. Change the line where you create decoder_input to something like
decoder_input = Variable(torch.from_numpy(decoder_input)).cuda().float()
With both changes, it is supposed to work fine :)
| https://stackoverflow.com/questions/54566209/ |
Result of auto-encoder dimensions are incorrect | Using the code below I'm attempting to encode images from MNIST into a lower-dimensional representation:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import metrics
import datetime
from sklearn.preprocessing import MultiLabelBinarizer
import seaborn as sns
sns.set_style("darkgrid")
from ast import literal_eval
import numpy as np
from sklearn.preprocessing import scale
import seaborn as sns
sns.set_style("darkgrid")
import torch
import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable
%matplotlib inline
low_dim_rep = 32
epochs = 2
cuda = torch.cuda.is_available() # True if cuda is available, False otherwise
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
print('Training on %s' % ('GPU' if cuda else 'CPU'))
# Loading the MNIST data set
transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.1307,), (0.3081,))])
mnist = torchvision.datasets.MNIST(root='../data/', train=True, transform=transform, download=True)
# Loader to feed the data batch by batch during training.
batch = 100
data_loader = torch.utils.data.DataLoader(mnist, batch_size=batch, shuffle=True)
encoder = nn.Sequential(
# Encoder
nn.Linear(28 * 28, 64),
nn.PReLU(64),
nn.BatchNorm1d(64),
# Low-dimensional representation
nn.Linear(64, low_dim_rep),
nn.PReLU(low_dim_rep),
nn.BatchNorm1d(low_dim_rep))
decoder = nn.Sequential(
# Decoder
nn.Linear(low_dim_rep, 64),
nn.PReLU(64),
nn.BatchNorm1d(64),
nn.Linear(64, 28 * 28))
autoencoder = nn.Sequential(encoder, decoder)
encoder = encoder.type(FloatTensor)
decoder = decoder.type(FloatTensor)
autoencoder = autoencoder.type(FloatTensor)
optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.00001)
data_size = int(mnist.train_labels.size()[0])
print('data_size' , data_size)
for i in range(epochs):
for j, (images, _) in enumerate(data_loader):
images = images.view(images.size(0), -1) # flatten from (batch, 1, 28, 28) to (batch, 28*28)
images = Variable(images).type(FloatTensor)
autoencoder.zero_grad()
reconstructions = autoencoder(images)
loss = torch.dist(images, reconstructions)
loss.backward()
optimizer.step()
print('Epoch %i/%i loss %.2f' % (i + 1, epochs, loss.data[0]))
print('Optimization finished.')
# Get the encoded images here
encoded_images = []
for j, (images, _) in enumerate(data_loader):
images = images.view(images.size(0), -1)
images = Variable(images).type(FloatTensor)
encoded_images.append(encoder(images))
Upon completion of this code,
len(encoded_images) is 600, when I expect the length to match the number of images in mnist: len(mnist) = 60,000.
How do I encode the images to a lower-dimensional representation of 32 (low_dim_rep = 32)? Have I defined the network parameters incorrectly?
| You have 60000 images in mnist and your batch = 100. That is why len(encoded_images) is 600: you do 60000/100 = 600 iterations when generating the encoded images. You end up with a list of 600 elements, where each element has shape [100, 32]. You can do the following
encoded_images = torch.zeros(len(mnist), 32)
for j, (images, _) in enumerate(data_loader):
images = images.view(images.size(0), -1)
images = Variable(images).type(FloatTensor)
encoded_images[j * batch : (j+1) * batch] = encoder(images)
| https://stackoverflow.com/questions/54568113/ |
Store weight updates for momentum | I am trying to implement momentum in my implementation of SGD with momentum.
From my understanding this update looks like this:
parameters -= (lr * (p.grad*0.1 + p_delta_prev*0.9))
My question is how I should store my previous deltas from every update
Here is what I have in my update function:
#we now want to do the update with momentum
#momentum takes derivative, multiplies it by 0.1, then takes the previous update,
#multiplies it by 0.9 and we add the two together
#alpha = 0.1, beta = 0.9; p-=grad*0.1 + p*0.9
def update(x,y,lr):
wd = 1e-5
y_hat = model(x)
# weight decay
w2 = 0.
for p in model.parameters(): w2 += (p**2).sum()
# add to regular loss
loss = loss_func(y_hat, y) + w2*wd
loss.backward()
with torch.no_grad():
for p in model.parameters():
#p.grad is the slope of the line of that parameter
#current_p-previous_p to get difference
p_update = (lr * (p.grad*0.1 + p*0.9))
p.sub_(p_update)
p.grad.zero_()
return loss.item()
Here the p*0.9 should be replaced by p_delta_prev. But how should I store these deltas for every parameter? If I save them to a tensor, wouldn't I effectively be copying the weight deltas to memory, making my model twice the size? What would be a good way to accomplish this? I do not want to use a built-in function that does this for me. I did look into PyTorch's sgd.py and it looks like they store the states.
I have updated the code:
#we now want to do the update with momentum
#momentum takes the derivative, multiplies it by 0.1, then takes the previous update,
#multiplies it by 0.9 and we add the two together
#alpha = 0.1, beta = 0.9; p-=grad*0.1 + p*0.9
p_delta = {}
def update(x,y,lr):
wd = 1e-5
y_hat = model(x)
# weight decay
w2 = 0.
for p in model.parameters(): w2 += (p**2).sum()
# add to regular loss
loss = loss_func(y_hat, y) + w2*wd
loss.backward()
with torch.no_grad():
i = 0
for p in model.parameters():
#p.grad is the slope of the line of that parameter
if i not in p_delta:#check if key exists
p_delta[i] = torch.zeros_like(p)
p_update = (lr *p.grad) + (p_delta[i]*0.9)
p_delta[i] = p_update.clone()
p.sub_(p_update)
p.grad.zero_()
print((p_delta[i]))
i+=1
return loss.item()
I think the code in the Excel spreadsheet is incorrect. Jeremy seems to show: lr * ((p.grad*0.1) + (p_delta[i]*0.9)), but many tutorials seem to show: (lr * p.grad) + (p_delta[i]*0.9). If we implement Jeremy's code, the loss actually decreases more slowly than with vanilla GD. The relevant part of the video is here: https://youtu.be/CJKnDu2dxOE?t=6581
| Yes, it does store the parameter momenta in a dictionary, indexed by their names, as returned by model.named_parameters(). I don't know how to rigorously prove this, but I strongly believe it's impossible to apply momentum without using additional memory twice the size of your model.
That being said, I wouldn't worry, because model size is rarely a big factor in memory consumption of the whole algorithm - keeping the intermediate network activations for the backpropagation algorithm is far more expensive. Taking the VGG-16 network as an example, it has 138 million parameters (figure taken from here), which amounts to slightly more than 0.5 GB if stored in single precision. Compare this with the 6 GB+ found on reasonably modern GPUs.
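For illustration only, here is a rough sketch of such per-parameter momentum buffers keyed by name, using one common convention (alpha scaling the gradient and beta the previous update); it assumes model already has gradients populated by loss.backward():
velocities = {name: torch.zeros_like(p) for name, p in model.named_parameters()}

def momentum_step(lr, alpha=0.1, beta=0.9):
    with torch.no_grad():
        for name, p in model.named_parameters():
            v = beta * velocities[name] + alpha * p.grad  # combine previous update with new gradient
            velocities[name] = v  # remember it for the next step
            p.sub_(lr * v)
            p.grad.zero_()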
| https://stackoverflow.com/questions/54578255/ |
How to reset Colab after the following CUDA error 'Cuda assert fails: device-side assert triggered'? | I'm running my Jupyter Notebook using Pytorch on Google Colab. After I received the 'Cuda assert fails: device-side assert triggered' I am unable to run any other code that uses my pytorch module. Does anyone know how to reset my code so that my Pytorch functions that were working before can still run?
I've already tried implementing CUDA_LAUNCH_BLOCKING=1 but my code still doesn't work, as the assert is still triggered!
| You need to reset the Colab notebook. To run existing Pytorch modules that used to work before, you have to do the following:
Go to 'Runtime' in the tool bar
Click 'Restart and Run all'
This will reset your CUDA assert and flush out the module so that you can have another shot at avoiding the error!
| https://stackoverflow.com/questions/54585685/ |
How to wrap PyTorch functions and implement autograd? | I'm working through the PyTorch tutorial on Defining new autograd functions. The autograd function I want to implement is a wrapper around torch.nn.functional.max_pool1d. Here is what I have so far:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as tag
class SquareAndMaxPool1d(tag.Function):
@staticmethod
def forward(ctx, input, kernel_size, stride=None, padding=0, dilation=1, \
return_indices=False, ceil_mode=False):
ctx.save_for_backward( input )
inputC = input.clone() #copy input
inputC *= inputC
output = F.max_pool1d(inputC, kernel_size, stride=stride, \
padding=padding, dilation=dilation, \
return_indices=return_indices, \
ceil_mode=ceil_mode)
return output
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
grad_input = get_max_pool1d_grad_somehow(grad_output)
return 2.0*input*grad_input
My question is: how do I get the gradient of the wrapped function? I know that there are probably other ways to do this given how simple the example I present is, but what I want to do fits this framework and requires me to implement an autograd function.
Edit: After examining this blog post I decided to try the following for backward:
def backward(ctx, grad_output):
input, output = ctx.saved_tensors
grad_input = output.backward(grad_output)
return 2.0*input*grad_input
with output added to the saved variables. I then run the following code:
x = np.random.randn(1,1,5)
xT = torch.from_numpy(x)
xT.requires_grad=True
f = SquareAndMaxPool1d.apply
s = torch.sum(f(xT,2))
s.backward()
and I get Bus error: 10.
Say, xT is tensor([[[ 1.69533562, -0.21779421, 2.28693953, -0.86688095, -1.01033497]]], dtype=torch.float64), then I would expect to find that xT.grad is tensor([[[ 3.39067124, -0. , 9.14775812, -0. , -2.02066994]]], dtype=torch.float64) after calling s.backward() (that is 2*x*grad_of_max_pool, with grad_of_max_pool containing tensor([[[1., 0., 2., 0., 1.]]], dtype=torch.float64)).
I've figured out why I get a Bus error: 10. It appears that the above code leads to a recursive call of my backward at grad_input = output.backward(grad_output). So I need to find some other way to get the gradient of max_pool1d. I know how to implement this in pure Python, but the result would be much slower than if I could wrap the library code.
| You have picked a rather unlucky example. torch.nn.functional.max_pool1d is not an instance of torch.autograd.Function, because it's a PyTorch built-in, defined in C++ code and with an autogenerated Python binding. I am not sure if it's possible to get the backward property via its interface.
Firstly, in case you haven't noticed, you don't need to write any custom code for backpropagation of this formula because both power operation and max_pool1d already have it defined, so their composition also is covered by the autograd. Assuming your goal is an exercise, I would suggest you do it more manually (without falling back to backward of max_pool1d). An example is below
import torch
import torch.nn.functional as F
import torch.autograd as tag
class SquareAndMaxPool1d(tag.Function):
@staticmethod
def forward(ctx, input, kernel_size, **kwargs):
# we're gonna need indices for backward. Currently SquareAnd...
# never actually returns indices, I left it out for simplicity
kwargs['return_indices'] = True
input_sqr = input ** 2
output, indices = F.max_pool1d(input_sqr, kernel_size, **kwargs)
ctx.save_for_backward(input, indices)
return output
@staticmethod
def backward(ctx, grad_output):
input, indices = ctx.saved_tensors
# first we need to reconstruct the gradient of `max_pool1d`
# by putting all the output gradient elements (corresponding to
# input elements which made it through the max_pool1d) in their
# respective places, the rest has gradient of 0. We do it by
# scattering it against a tensor of 0s
grad_output_unpooled = torch.zeros_like(input)
grad_output_unpooled.scatter_(2, indices, grad_output)
# then incorporate the gradient of the "square" part of your
# operator
grad_input = 2. * input * grad_output_unpooled
# the docs for backward
# https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function.backward
# say that "it should return as many tensors, as there were inputs
# to forward()". It fails to mention that if an argument was not a
# tensor, it should return None (I remember reading this somewhere,
# but can't find it anymore). Anyway, we need to
# return a (grad_input, None) tuple to avoid a complaint that two
# outputs were expected
return grad_input, None
We can then use the numerical gradient checker to verify that the operation works as expected.
f = SquareAndMaxPool1d.apply
xT = torch.randn(1, 1, 6, requires_grad=True, dtype=torch.float64)
tag.gradcheck(lambda t: f(t, 2), xT)
I'm sorry if this doesn't address your question of how to get the backward of max_pool1d, but hopefully you find my answer useful enough.
| https://stackoverflow.com/questions/54586938/ |
Pytorch equivalent of Numpy's logical_and and kin? | Does Pytorch have an equivalent of Numpy's element-wise logical operators (logical_and, logical_or, logical_not, and logical_xor)? Calling the Numpy functions on Pytorch tensors seems to work well enough when using the CPU, even producing a Pytorch tensor as output. I mainly ask because I assume this would not work so well if the pytorch calculation were running in the GPU.
I've looked through Pytorch's documentation index at all functions containing the string "and" and none seem relevant.
| Update: With Pytorch 1.2, PyTorch introduced torch.bool datatype, which can be used using torch.BoolTensor:
>>> a = torch.BoolTensor([False, True, True, False]) # or pass [0, 1, 1, 0]
>>> b = torch.BoolTensor([True, True, False, False])
>>> a & b # logical and
tensor([False, True, False, False])
PyTorch supports logical operations on ByteTensor. You can use logical operations using &, |, ^, ~ operators as follows:
>>> a = torch.ByteTensor([0, 1, 1, 0])
>>> b = torch.ByteTensor([1, 1, 0, 0])
>>> a & b # logical and
tensor([0, 1, 0, 0], dtype=torch.uint8)
>>> a | b # logical or
tensor([1, 1, 1, 0], dtype=torch.uint8)
>>> a ^ b # logical xor
tensor([1, 0, 1, 0], dtype=torch.uint8)
>>> ~a # logical not
tensor([1, 0, 0, 1], dtype=torch.uint8)
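In more recent PyTorch releases there are also dedicated functions mirroring NumPy's names (torch.logical_and, torch.logical_or, torch.logical_xor, torch.logical_not); a short sketch:
>>> a = torch.tensor([False, True, True, False])
>>> b = torch.tensor([True, True, False, False])
>>> torch.logical_and(a, b)
tensor([False,  True, False, False])
>>> torch.logical_or(a, b)
tensor([ True,  True,  True, False])
>>> torch.logical_xor(a, b)
tensor([ True, False,  True, False])
>>> torch.logical_not(a)
tensor([ True, False, False,  True])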
| https://stackoverflow.com/questions/54590661/ |
Why/how can model.forward() succeed both on input being mini-batch vs single item? | Why and how does this work?
When I run the forward phase on input
being mini-batch tensor
or alternatively being a single input item
model.__call__() (which AFAIK is calling forward() ) swallows that and spills out adequate output (i.e. a tensor of mini-batch of estimates or a single item of estimate)
Adopting testcode from the Pytorch NN example shows what I mean, but I don't get it.
I would expect it to create problems and force me to transform the single-item input into a mini-batch of size 1 (reshape(1, xxx)) or the like, as I did in the code below.
( I did variations of the test to be sure it is e.g. not depending on execution order )
# -*- coding: utf-8 -*-
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
#N, D_in, H, D_out = 64, 1000, 100, 10
N, D_in, H, D_out = 64, 10, 4, 3
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(1):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
model.eval()
print ("###########")
print ("x[0]",x[0])
print ("x[0].size()", x[0].size())
y_1pred = model(x[0])
print ("y_1pred.size()", y_1pred.size())
print (y_1pred)
model.eval()
print ("###########")
print ("x.size()", x.size())
y_pred = model(x)
print ("y_pred.size()", y_pred.size())
print ("y_pred[0]", y_pred[0])
print ("###########")
model.eval()
input_item = x[0]
batch_len1_shape = torch.Size([1,*(input_item.size())])
batch_len1 = input_item.reshape(batch_len1_shape)
y_pred_batch_len1 = model(batch_len1)
print ("input_item",input_item)
print ("input_item.size()", input_item.size())
print ("y_pred_batch_len1.size()", y_pred_batch_len1.size())
print (y_1pred)
raise Exception
This is the output it generates:
###########
x[0] tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
x[0].size() torch.Size([10])
y_1pred.size() torch.Size([3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
###########
x.size() torch.Size([64, 10])
y_pred.size() torch.Size([64, 3])
y_pred[0] tensor([-0.5366, -0.4826, 0.0538], grad_fn=<SelectBackward>)
###########
input_item tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
input_item.size() torch.Size([10])
y_pred_batch_len1.size() torch.Size([1, 3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
| The docs on nn.Linear state that
Input: (N,∗,in_features) where ∗ means any number of additional dimensions
so one would naturally expect that at least two dimensions are necessary. However, if we look under the hood we will see that Linear is implemented in terms of nn.functional.linear, which dispatches to torch.addmm or torch.matmul (depending on whether bias == True), which broadcast their arguments.
So this behavior is likely a bug (or an error in documentation) and I would not depend on it working in the future, if I were you.
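A small sketch showing both call styles against the same layer - the 1-D call happens to work through broadcasting, while the explicit batch dimension is the documented form:
lin = torch.nn.Linear(10, 3)
x = torch.randn(10)
print(lin(x).shape)               # torch.Size([3])    - single item, relies on broadcasting
print(lin(x.unsqueeze(0)).shape)  # torch.Size([1, 3]) - explicit mini-batch of size 1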
| https://stackoverflow.com/questions/54591124/ |
When to use learning rate finder | Reading the paper 'Cyclical Learning Rates for Training Neural Networks' (https://arxiv.org/abs/1506.01186):
Does it make sense to use the learning rate finder if the model is over-fitting? Other than reducing the number of iterations before the model overfits, will using the learning rate finder prevent over-fitting?
From reading the paper there is no suggestion that this method reduces over-fitting. Is my interpretation correct?
| I don't think changing the learning rate reduces over-fitting. To avoid over-fitting you might want to use L1/L2 regularization and drop-out or one of its variants.
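As a rough sketch of those options in PyTorch (the layer sizes below are just placeholders): nn.Dropout implements drop-out, and the optimizer's weight_decay argument adds an L2 penalty:
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # drop-out regularization
    nn.Linear(256, 10),
)
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 penalty on the weights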
| https://stackoverflow.com/questions/54607530/ |
"RuntimeError: Found 0 files in subfolders of ".. Error about subfolder in Pytorch | I'm based on Window 10, Jupyter Notebook, Pytorch 1.0, Python 3.6.x currently.
At first I confirm to the correct path of files using this code : print(os.listdir('./Dataset/images/')).
and I could check that this path is correct.
but I met Error :
RuntimeError: Found 0 files in subfolders of: ./Dataset/images/
Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif"
What is the matter?
Could you suggest a solution?
I also tried ./dataset/1/images following that method, but the result was the same.
img_dir = './Dataset/images/'
img_data = torchvision.datasets.ImageFolder(os.path.join(img_dir), transforms.Compose([
transforms.Scale(256),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
]))
img_batch = data.DataLoader(img_data, batch_size=batch_size,
shuffle = True, drop_last=True)
| Can you post the structure of your files? In your case, it is supposed to be:
img_dir
|_class1
|_a.jpg
|_b.jpg
|_class2
|_a.jpg
|_b.jpg
...
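If the images sit directly under ./Dataset/images/ with no class sub-folders, one possible fix (a sketch, assuming a single dummy class is acceptable) is to point ImageFolder one level up, so that images/ itself is treated as the class folder:
img_data = torchvision.datasets.ImageFolder('./Dataset', transforms.Compose([
    transforms.Resize(256),  # Resize replaces the deprecated Scale
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
]))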
| https://stackoverflow.com/questions/54613573/ |
simple CNN model training using CIFAR-10 dataset STUCK at a low accuracy | Hi, I have just learned to implement NN models in PyTorch through the Udacity course and created a simple model with a few CNN and FC layers. After much struggle I got the model to work, but it seems to be stuck at the same loss even after repeated executions. I don't know where I am going wrong; it must be some logical error that I can't see.
Here is the code.
model
class cifar_clasify(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3,16,3)
self.BNorm1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16,32,3)
self.BNorm2 = nn.BatchNorm2d(32)
self.fc1 = nn.Linear(32*6*6,256)
self.fc2 = nn.Linear(256,512)
self.fc3 = nn.Linear(512,10)
self.drop = nn.Dropout(p =0.2)
def forward(self,x):
out = self.conv1(x)
out = F.relu(out)
#print(out.shape)
out = F.max_pool2d(out,2)
out = self.BNorm1(out)
#print(out.shape)
out = self.conv2(out)
out = F.relu(out)
#print(out.shape)
out = F.max_pool2d(out,2)
out = self.BNorm2(out)
#print(out.shape)
out = out.view(out.shape[0],-1)
out = self.fc1(out)
out = self.drop(F.relu(out))
out = self.fc2(out)
out = self.drop(F.relu(out))
final = F.log_softmax(F.relu(self.fc3(out)) , dim = 1)
return final
training code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
model = cifar_clasify()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr =0.03)
epoch =2
step = 2
running_loss = 0
accuracy = 0
print_every = 5
model.to(device)
for e in range(epoch):
for inputs,label_ in zip(train_X,train_labels):
step +=1
inputs = inputs.view((-1,3,32,32))
inputs,label_ = inputs.to(device) , label_.to(device)
#inputs.cuda()
#label.cuda()
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps , label_.reshape(1))
loss.backward()
optimizer.step()
running_loss += loss.item()
if step % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for testx, lab in zip(test_X , test_labels):
testx = testx.view((-1,3,32,32))
testx,lab = testx.to(device) , lab.to(device)
lab = lab.reshape(1)
logps = model.forward(testx)
batch_loss = criterion(logps , lab)
#print(batch_loss.item())
test_loss += batch_loss.item()
ps = torch.exp(logps)
top_p , topclass = ps.topk(1,dim = 1)
equals = topclass == lab.view(*topclass.shape)
accuracy += torch.mean(torch.mean(equals.type(torch.FloatTensor))).item()
print(f"Epoch {e+1}/{epoch}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(test_X):.3f}.. "
f"Test accuracy: {accuracy/len(test_X):.3f}")
running_loss = 0
model.train()
Here is the result; I had to stop the run as it was not improving:
Epoch 1/2.. Train loss: 1.396.. Test loss: 5.288.. Test accuracy: 0.104
step = 5
Epoch 1/2.. Train loss: 3.038.. Test loss: 2.303.. Test accuracy: 0.104
step = 10
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 15
Epoch 1/2.. Train loss: 2.669.. Test loss: 2.318.. Test accuracy: 0.105
step = 20
Epoch 1/2.. Train loss: 3.652.. Test loss: 2.303.. Test accuracy: 0.104
step = 25
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 30
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 35
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 40
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 45
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 50
Epoch 1/2.. Train loss: 2.303.. Test loss: 2.303.. Test accuracy: 0.104
step = 55
Here is the code if you want any other information:
Simple CNN for CIFAR 10 classification in google colab
| Since batch size is 1, use a lower learning rate like 1e-4 or increase the batch size.
I recommend making batch size 16 or larger though.
EDIT: To create a batch of data you can do something like this.
N = input.shape[0] #know the total size/samples in input
for i in range(n_epochs):
# this is to shuffle data
indices = torch.randperm(N)
for idx in range(0, N, batch_size):
batch_input = input[indices[idx:idx+batch_size]] # this will get you a shuffled batch of size batch_size
# do whatever you want with the batch_input
# ....
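Alternatively - as a sketch, assuming train_X and train_labels are whole tensors as in the question - a TensorDataset wrapped in a DataLoader handles the shuffling and batching for you:
from torch.utils.data import TensorDataset, DataLoader

train_loader = DataLoader(TensorDataset(train_X, train_labels), batch_size=16, shuffle=True)
for inputs, labels in train_loader:
    # run the same training step as before, now on batches of 16
    ...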
| https://stackoverflow.com/questions/54621042/ |
Using a generator with pickled data in a Dataloader for PyTorch | I have done some preprocessing and feature selection beforehand, and I have pickled training input data that consists of lists of lists, e.g. (but pickled)
[[1,5,45,13], [23,256,4,2], [1,12,88,78], [-1]]
[[12,45,77,325], [23,257,5,28], [3,7,48,178], [12,77,89,99]]
[[13,22,78,89], [12,33,97], [-1], [-1]]
[-1] is a padding token, but I don't think that matters.
Because the file is many gigabytes in size, I wish to spare memory and use a generator to read in the pickle line by line (list by list). I already found this answer that could be helpful. It would look as follows:
def yield_from_pickle(pfin):
with open(pfin, 'rb') as fhin:
while True:
try:
yield pickle.load(fhin)
except EOFError:
break
The next thing is that I wish to use this data in a PyTorch (1.0.1) DataLoader. From what I found in other answers, I must feed it a Dataset subclass, which must implement __len__ and __getitem__. It could look like this:
class TextDataset(Dataset):
def __init__(self, pfin):
self.pfin = pfin
def __len__(self):
# memory-lenient way but exhaust generator?
return sum(1 for _ in self.yield_from_pickle())
def __getitem__(self, index):
# ???
pass
def yield_from_pickle(self):
with open(self.pfin, 'rb') as fhin:
while True:
try:
yield pickle.load(fhin)
except EOFError:
break
But I am not at all sure if this is even possible. How can I implement __len__ and __getitem__ in a sensible way? I don't think what I am doing with __len__ is a good idea because that'll exhaust the generator, and I have no idea at all how to safely implement __getitem__ while retaining the generator.
Is there a better way? To summarize, I want to build a Dataset that can be fed to PyTorch's Dataloader (because of its multiprocessing abilities) but in a memory-efficient way where I don't have to read the whole file into memory.
| See my other answer for your options.
In short, you need to either preprocess each sample into separate files, or use a data format that does not need to be loaded fully into memory for reading.
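A rough sketch of the first option, assuming a one-off preprocessing pass has already written one small pickle per sample (the file layout here is invented for illustration):
import pickle
from pathlib import Path
from torch.utils.data import Dataset

class PerSamplePickleDataset(Dataset):
    def __init__(self, folder):
        # e.g. folder contains 000000.pkl, 000001.pkl, ... with one sample each
        self.paths = sorted(Path(folder).glob("*.pkl"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        with open(self.paths[index], "rb") as fh:
            return pickle.load(fh)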
| https://stackoverflow.com/questions/54621447/ |
PyTorch: accuracy of validation set greater than 100% during training | 1 ) Problem
I observe an odd behaviour during training where my validation-accuracy is above 100% right from the start.
Epoch 0/3
----------
100%|██████████| 194/194 [00:50<00:00, 3.82it/s]
train Loss: 1.8653 Acc: 0.4796
100%|██████████| 194/194 [00:32<00:00, 5.99it/s]
val Loss: 1.7611 Acc: 1.2939
Epoch 1/3
----------
100%|██████████| 194/194 [00:42<00:00, 4.61it/s]
train Loss: 0.8704 Acc: 0.7467
100%|██████████| 194/194 [00:31<00:00, 6.11it/s]
val Loss: 1.0801 Acc: 1.4694
The output indicates that one epoch iterates over 194 batches, which does seem to be correct for the training data (which has a length of 6186, batch_size is 32, hence 32*194 = 6208 and this is ≈6186) but does not match the size of the validation-data (length of 3447, batch_size = 32).
Hence I would expect my validation-loop to generate 108 (3447 / 32 ≈ 108) batches instead of 194.
I thought this behaviour is handled within my for loop at:
for dataset in tqdm(dataloaders[phase]):
But somehow I can't figure out what is wrong here. See point 3) below for my entire code.
2 ) Question
If my assumption above is correct i.e. that this error stems from the for-loop within in my code then I would like to know the following:
How do I need to adjust the for-loop during the validation phase to handle the number of batches that are being used for validation correctly?
3 ) Background:
Following two tutorials, one on how to do transfer-learning (https://discuss.pytorch.org/t/transfer-learning-using-vgg16/20653) and one on how to do data-loading (https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) in pytorch, I am trying to customize the code such that I can perform transfer-learning on a new custom dataset which I want to provide via pandas dataframes.
As such, my training- and validation-data is provided via two dataframes (df_train & df_val) which both contain two columns, one for the path and one for the target. E.g. like this:
url target
0 C:/Users/aaron/Desktop/pics/4ebd... 9
1 C:/Users/aaron/Desktop/pics/7153... 3
2 C:/Users/aaron/Desktop/pics/3ee6... 3
3 C:/Users/aaron/Desktop/pics/4652... 16
4 C:/Users/aaron/Desktop/pics/28ce... 15
...
And their respective length:
print(len(df_train))
print(len(df_val))
>> 6186
>> 3447
My pipeline looks like this:
class CustomDataset(Dataset):
def __init__(self, df, transform=None):
self.dataframe = df_train
self.transform = transform
def __len__(self):
return len(self.dataframe)
def __getitem__(self, idx):
img_name = self.dataframe.iloc[idx, 0]
img = Image.open(img_name)
img_normalized = self.transform(img)
landmarks = self.dataframe.iloc[idx, 1]
sample = {'data': img_normalized, 'label': int(landmarks)}
return sample
train_dataset = CustomDataset(df_train,transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))
val_dataset = CustomDataset(df_val,transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))
train_loader = torch.utils.data.DataLoader(train_dataset,batch_size=32,shuffle=True, num_workers=0)
val_loader = torch.utils.data.DataLoader(val_dataset,batch_size=32,shuffle=True, num_workers=0)
dataloaders = {'train': train_loader, 'val': val_loader}
dataset_sizes = {'train': len(df_train) ,'val': len(df_val)}
################### Training
from tqdm import tqdm
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for dataset in tqdm(dataloaders[phase]):
inputs, labels = dataset["data"], dataset["label"]
#print(inputs.type())
inputs = inputs.to(device, dtype=torch.float)
labels = labels.to(device,dtype=torch.long)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, len(le.classes_))
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=4)
| Your problem appears to be here:
class CustomDataset(Dataset):
def __init__(self, df, transform=None):
>>>>> self.dataframe = df_train
This should be
self.dataframe = df
In your case, you are inadvertently setting both the train and val CustomDataset to df_train ...
| https://stackoverflow.com/questions/54636288/ |
Torch Dataset Looping too far | Why does this dataset try to iterate past the final element
from torch.utils.data.dataset import Dataset
class DumbDataset(Dataset):
def __init__(self, dct):
self.dct = dct
self.mapping = dict(enumerate(dct))
def __getitem__(self, index):
return self.dct[self.mapping[index]]
def __len__(self):
print('called')
return len(self.dct)
ds = DumbDataset({'a': 'aword', 'b': 'another_words'})
for k in ds: print(k)
This raises KeyError: 2, which I don't understand since length of the object is 2. Shouldn't the iterator get StopIteration once it is exhausted?
| The reason why your code raises KeyError is that Dataset does not implement __iter__() and thus when used in a for-loop Python falls back to starting at index 0 and calling __getitem__ until IndexError is raised, as discussed here. You can modify DumbDataset to work like this by having it raise an IndexError when the index is out of bounds
def __getitem__(self, index):
if index >= len(self): raise IndexError
return self.dct[self.mapping[index]]
and then your loop
for k in ds:
print(k)
will work as you expected. On the other hand, the typical template for torch datasets is that you can either loop through them with indexing
for i in range(len(ds)):
    k = ds[i]
print(k)
or that you wrap them in a DataLoader which returns elements in batches
generator = DataLoader(ds)
for k in generator:
print(k)
| https://stackoverflow.com/questions/54640906/ |
How to combine/stack tensors and combine dimensions in PyTorch? | I need to combine 4 tensors, representing greyscale images, of size [1,84,84], into a stack of shape [4,84,84], representing four greyscale images with each image represented as a "channel" in tensor style CxWxH.
I am using PyTorch.
I've tried using torch.stack and torch.cat but if one of these is the solution, I am not having luck figuring out the correct prep/methodology to get my results.
Thank you for any help.
import torchvision.transforms as T
class ReplayBuffer:
def __init__(self, buffersize, batchsize, framestack, device, nS):
self.buffer = deque(maxlen=buffersize)
self.phi = deque(maxlen=framestack)
self.batchsize = batchsize
self.device = device
self._initialize_stack(nS)
def get_stack(self):
#t = torch.cat(tuple(self.phi),dim=0)
t = torch.stack(tuple(self.phi),dim=0)
return t
def _initialize_stack(self, nS):
while len(self.phi) < self.phi.maxlen:
self.phi.append(torch.tensor([1,nS[1], nS[2]]))
a = ReplayBuffer(buffersize=50000, batchsize=64, framestack=4, device='cuda', nS=[1,84,84])
print(a.phi)
s = a.get_stack()
print(s, s.shape)
The above code returns:
print(a.phi)
deque([tensor([ 1, 84, 84]), tensor([ 1, 84, 84]), tensor([ 1, 84, 84]), tensor([ 1, 84, 84])], maxlen=4)
print(s, s.shape)
tensor([[ 1, 84, 84],
[ 1, 84, 84],
[ 1, 84, 84],
[ 1, 84, 84]]) torch.Size([4, 3])
But what I would like is the return to simply be [4, 84, 84]. I suspect this is quite simple but it's escaping me.
| It seems you have misunderstood what torch.tensor([1, 84, 84]) is doing. Let's take a look:
x = torch.tensor([1, 84, 84])
print(x, x.shape) #tensor([ 1, 84, 84]) torch.Size([3])
You can see from the example above, it gives you a tensor with only one dimension.
From your problem statement, you need a tensor of shape [1,84,84].
Here's how it look like:
from collections import deque
import torch
import torchvision.transforms as T
class ReplayBuffer:
def __init__(self, buffersize, batchsize, framestack, device, nS):
self.buffer = deque(maxlen=buffersize)
self.phi = deque(maxlen=framestack)
self.batchsize = batchsize
self.device = device
self._initialize_stack(nS)
def get_stack(self):
t = torch.cat(tuple(self.phi),dim=0)
# t = torch.stack(tuple(self.phi),dim=0)
return t
def _initialize_stack(self, nS):
while len(self.phi) < self.phi.maxlen:
# self.phi.append(torch.tensor([1,nS[1], nS[2]]))
self.phi.append(torch.zeros([1,nS[1], nS[2]]))
a = ReplayBuffer(buffersize=50000, batchsize=64, framestack=4, device='cuda', nS=[1,84,84])
print(a.phi)
s = a.get_stack()
print(s, s.shape)
Note that torch.cat gives you a tensor of shape [4, 84, 84] and torch.stack gives you a tensor of shape [4, 1, 84, 84]. Their difference can be found at What's the difference between torch.stack() and torch.cat() functions?
| https://stackoverflow.com/questions/54643076/ |
Training (DC)GAN, D(G(z)) goes to 0.5 while D(x) stays 0.9 and G(z) becomes corrupt | I'm currently training a DCGAN for 1x32x32 (channel, height, width) images.
Quite soon in training G(z) becomes reasonably realistic apart from a problem with the 'chessboard' artifacts being visible, but this should go away after lots of training?
However, after a long training session D(G(z)) goes to 0.5000 (and no longer changes) while D(x) stays between 0.8 and 0.9. Whenever D(G(z)) goes to 0.5 it also starts outputting fully black & white images. Hence, the generator no longer produces anything that looks close to what's in the training dataset. G(z) just becomes a black or white square.
The network used is from the original DCGAN paper, adapter for 1x32x32 images. With relu already replaced to leaky relu.
| Solved the problem by switching to WGAN-GP (https://arxiv.org/abs/1704.00028).
Turns out it is more stable while training.
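For reference, a rough sketch of the gradient-penalty term that WGAN-GP adds to the critic loss (this is not my exact code; netD, real and fake are placeholders for the critic and for batches of real/generated images):
import torch
def gradient_penalty(netD, real, fake):
    # interpolate between real and fake samples
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_interp = netD(interp)
    # gradient of the critic output w.r.t. the interpolated images
    grads = torch.autograd.grad(outputs=d_interp, inputs=interp,
                                grad_outputs=torch.ones_like(d_interp),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
# critic loss: Wasserstein estimate plus the penalty (lambda = 10 in the paper)
# d_loss = netD(fake).mean() - netD(real).mean() + 10.0 * gradient_penalty(netD, real, fake)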
| https://stackoverflow.com/questions/54647599/ |
How to set gradients to Zero without optimizer? | Between mutliple .backward() passes I'd like to set the gradients to zero. Right now I have to do this for every component seperately (here these are x and t), is there a way to do this "globally" for all affected variables? (I imagine something like z.set_all_gradients_to_zero().)
I know there is optimizer.zero_grad() if you use an optimizer, but is there also a direct way without using an optimizer?
import torch
x = torch.randn(3, requires_grad = True)
t = torch.randn(3, requires_grad = True)
y = x + t
z = y + y.flip(0)
z.backward(torch.tensor([1., 0., 0.]), retain_graph = True)
print(x.grad)
print(t.grad)
x.grad.data.zero_() # both gradients need to be set to zero
t.grad.data.zero_()
z.backward(torch.tensor([0., 1., 0.]), retain_graph = True)
print(x.grad)
print(t.grad)
| You can also use nn.Module.zero_grad(). In fact, optim.zero_grad() just calls nn.Module.zero_grad() on all parameters which were passed to it.
There is no reasonable way to do it globally. You can collect your variables in a list
grad_vars = [x, t]
for var in grad_vars:
var.grad = None
or create some hacky function based on vars(). Perhaps it's also possible to inspect the computation graph and zero the gradient of all leaf nodes, but I am not familiar with the graph API. Long story short, you're expected to use the object-oriented interface of torch.nn instead of manually creating tensor variables.
| https://stackoverflow.com/questions/54648053/ |
pytorch torch.jit.trace returns function instead of torch.jit.ScriptModule | I need to run in c++ a pre-trained pytorch nn model (trained in python) to make predictions.
To do so, I'm following the instructions on how to load a pytorch model in c++ given here: https://pytorch.org/tutorials/advanced/cpp_export.html
But when I try to get the torch.jit.ScriptModule via tracing as stated in the first step of the tutorial:
traced_script_module =
torch.jit.trace(model, (input_tensor_1, input_tensor_2))
Instead of returning a torch.jit.ScriptModule, it returns a function:
print(type(traced_script_module))
<type 'function'>
Which, when I run:
traced_script_module.save("model.pt")
then leads into the following error:
Traceback (most recent call last):
File "serialize_model.py", line 60, in <module>
traced_script_module.save("model.pt")
AttributeError: 'function' object has no attribute 'save'
Any ideas on what I'm doing wrong?
| Thanks for asking Jatentaki. I was using PyTorch 0.4 in Python and when I updated to 1.0 it worked.
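For reference, a minimal sketch of the working flow on PyTorch 1.0 (the ResNet model and dummy input below are just placeholders):
import torch
import torchvision
model = torchvision.models.resnet18()
example = torch.rand(1, 3, 224, 224)            # dummy input with the right shape
traced_script_module = torch.jit.trace(model, example)
print(type(traced_script_module))               # a traced ScriptModule, not a plain function
traced_script_module.save("model.pt")           # save() now exists and works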
| https://stackoverflow.com/questions/54650423/ |
Expected input to torch Embedding layer with pre_trained vectors from gensim | I would like to use pre-trained embeddings in my neural network architecture. The pre-trained embeddings are trained by gensim. I found this informative answer which indicates that we can load pre_trained models like so:
import gensim
from torch import nn
model = gensim.models.KeyedVectors.load_word2vec_format('path/to/file')
weights = torch.FloatTensor(model.vectors)
emb = nn.Embedding.from_pretrained(weights)
This seems to work correctly, also on 1.0.1. My question is, that I don't quite understand what I have to feed into such a layer to utilise it. Can I just feed the tokens (segmented sentence)? Do I need a mapping, for instance token-to-index?
I found that you can access a token's vector simply by something like
print(weights['the'])
# [-1.1206588e+00 1.1578362e+00 2.8765252e-01 -1.1759659e+00 ... ]
What does that mean for an RNN architecture? Can we simply load in the tokens of the batch sequences? For instance:
for seq_batch, y in batch_loader():
# seq_batch is a batch of sequences (tokenized sentences)
# e.g. [['i', 'like', 'cookies'],['it', 'is', 'raining'],['who', 'are', 'you']]
output, hidden = model(seq_batch, hidden)
This does not seem to work so I am assuming you need to convert the tokens to its index in the final word2vec model. Is that true? I found that you can get the indices of words by using the word2vec model's vocab:
weights.vocab['world'].index
# 147
So as an input to an Embedding layer, should I provide a tensor of int for a sequence of sentences that consist of a sequence of words? Example use with dummy dataloader (cf. example above) and dummy RNN welcome.
| The documentation says the following
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
So if you want to feed in a sentence, you give a LongTensor of indices, each corresponding to a word in the vocabulary, which the nn.Embedding layer will map into word vectors going forward.
Here's an illustration
test_voc = ["ok", "great", "test"]
# The word vectors for "ok", "great" and "test"
# are at indices, 0, 1 and 2, respectively.
my_embedding = torch.rand(3, 50)
e = nn.Embedding.from_pretrained(my_embedding)
# LongTensor of indicies corresponds to a sentence,
# reshaped to (1, 3) because batch size is 1
my_sentence = torch.tensor([0, 2, 1]).view(1, -1)
res = e(my_sentence)
print(res.shape)
# => torch.Size([1, 3, 50])
# 1 is the batch dimension, and there's three vectors of length 50 each
In terms of RNNs, next you can feed that tensor into your RNN module, e.g
lstm = nn.LSTM(input_size=50, hidden_size=5, batch_first=True)
output, h = lstm(res)
print(output.shape)
# => torch.Size([1, 3, 5])
I also recommend you look into torchtext. It can automatate some of the stuff you will have to do manually otherwise.
| https://stackoverflow.com/questions/54655604/ |
how to load images data into pytorch dataLoader? | i am new to deep learning I want to use an algorithm written by pytorch, the example in pytorch tutorial is very specific . i have dataset in my Pc and i want to preprocess them .
thanks
 | import os
import cv2
import numpy as np
import torch
from tqdm import tqdm
from torch.utils.data import Dataset, DataLoader
class Get_Dataset(Dataset):
    def __init__(self):  # load and preprocess every image/mask pair up front
        super(Get_Dataset, self).__init__()
        scale = 255
        path = '/home/singhv/data/train/'
        image = os.listdir(path + 'image')  # input image file names
        mask = os.listdir(path + 'mask')    # target mask file names
        self.len = min([len(image), len(mask)])
        self.object = np.ones((self.len, 128, 128, 3))
        self.target = np.ones((self.len, 128, 128, 3))
        print("Loading Dataset...")
        for i in tqdm(range(self.len)):
            self.object[i] = cv2.resize(cv2.imread(path + 'image/' + image[i]), (128, 128))
            self.target[i] = cv2.resize(cv2.imread(path + 'mask/' + mask[i]), (128, 128))
        # scale pixels to [-1, 1] and swap to channels-first layout
        self.object = torch.from_numpy((self.object / (scale / 2)) - 1).transpose_(3, 1).double()
        self.target = torch.from_numpy((self.target / (scale / 2)) - 1).transpose_(3, 1).double()
    def __getitem__(self, index):  # return one (image, mask) pair
        return self.object[index], self.target[index]
    def __len__(self):  # dataset size; required by DataLoader
        return self.len
train_dataset = Get_Dataset()
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True)
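Iterating over the loader then yields channels-first batches; a quick check (the shapes assume the 128x128 setup above):
for image_batch, mask_batch in train_loader:
    print(image_batch.shape, mask_batch.shape)  # torch.Size([1, 3, 128, 128]) for both
    break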
| https://stackoverflow.com/questions/54658108/ |
Under macOS using pip install pytorch failure | When I'm using pip to install pytorch, some exception appeared.
Env:
Sys: MaxOS High Sierra
python version : 3.6
pip version : 19.0.2
input: pip install pytorch
output:
Collecting pytorch
Using cached https://files.pythonhosted.org/packages/a9/41/4487bc23e3ac4d674943176f5aa309427b011e00607eb98899e9d951f67b/pytorch-0.1.2.tar.gz
Building wheels for collected packages: pytorch
Building wheel for pytorch (setup.py) ... error
Complete output from command /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-wheel-p0npaj5r --python-tag cp36:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/setup.py", line 17, in <module>
raise Exception(message)
Exception: You should install pytorch from http://pytorch.org
----------------------------------------
Failed building wheel for pytorch
Running setup.py clean for pytorch
Failed to build pytorch
Installing collected packages: pytorch
Running setup.py install for pytorch ... error
Complete output from command /Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-record-us45ly0z/install-record.txt --single-version-externally-managed --compile:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/setup.py", line 13, in <module>
raise Exception(message)
Exception: You should install pytorch from http://pytorch.org
----------------------------------------
Command "/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 -u -c "import setuptools, tokenize;__file__='/private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-record-us45ly0z/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/lw/s7b4_22d1v30nfm0wkys878w0000gn/T/pip-install-ygd1rucx/pytorch/
| You're installing an old package named pytorch on PyPI i.e. pytorch 0.1.2. That's why you're receiving the exception.
You're supposed to install it from the pytorch website. There you'll find an option to select your system configuration, and it'll give you the command to install it. Also, the latest version of pytorch is named torch on PyPI. So, just do
pip3 install torch # or with pip
If it fails due to cache, try with the --no-cache-dir option.
| https://stackoverflow.com/questions/54662230/ |
Confused about torch.nn.Sequential | Supposing we want to add a new layer, say a linear layer, to the end of the classifier of another model, such as VGG16, why exactly do these two implementations lead to different results? More specifically, I don't understand why the first implementation produces 2 classfiers:
vgg = torchvision.models.vgg16(pretrained=True)
vgg.classifer=nn.Sequential(vgg.classifier, nn.Linear(4096,300))
print(vgg)
output:
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
(classifer): Sequential(
(0): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
(1): Linear(in_features=4096, out_features=300, bias=True)
)
)
implementation2:
vgg = models.vgg16(pretrained=True)
vgg=nn.Sequential(vgg, nn.Linear(4096,300))
print(vgg)
output:
Sequential(
(0): VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
(1): Linear(in_features=4096, out_features=300, bias=True)
)
 | It's because you have a typo in the spelling of classifier. You have written it as
vgg.classifer=nn.Sequential(vgg.classifier, nn.Linear(4096,300))
Note the missing i after f in classifier on LHS. So, you're inadvertently creating a new group of layers named classifer by this line.
After correction:
vgg.classifier=nn.Sequential(vgg.classifier, nn.Linear(4096,300))
Moreover, in the first example, you're replacing the existing classifier layer with a sequential network having classifier portion of original vgg and Linear layer as the last layer.
In the second example, you're recreating the variable vgg with a new sequential network which contains the original vgg network with the addition of Linear layer as the last layer.
vgg=nn.Sequential(vgg, nn.Linear(4096,300))
Note the difference between the above two.
| https://stackoverflow.com/questions/54662346/ |
Pytorch: Why loss functions are implemented both in nn.modules.loss and nn.functional module? | Many loss functions in Pytorch are implemented both in nn.modules.loss and nn.functional.
For example, the two lines of the below return same results.
import torch.nn as nn
import torch.nn.functional as F
nn.L1Loss()(x,y)
F.l1_loss(x,y)
Why are there two implementations?
My guesses: consistency with other parametric loss functions, or that instantiating a loss function object brings something good otherwise.
 | I think of it as a partial application situation - it's useful to be able to "bundle" many of the configuration variables with the loss function object. In most cases, your loss function has to take prediction and ground_truth as its arguments. This makes for a fairly uniform basic API of loss functions. However, they differ in details. For instance, not every loss function has a reduction parameter. BCEWithLogitsLoss has weight and pos_weight parameters; PoissonNLLLoss has log_input, eps. It's handy to write a function like
def one_epoch(model, dataset, loss_fn, optimizer):
for x, y in dataset:
model.zero_grad()
y_pred = model(x)
loss = loss_fn(y_pred, y)
loss.backward()
optimizer.step()
which can work with instantiated BCEWithLogitsLoss equally well as with PoissonNLLLoss. But it cannot work with their functional counterparts, because of the bookkeeping necessary. You would instead have to first create
loss_fn_packed = functools.partial(F.binary_cross_entropy_with_logits, weight=my_weight, reduction='sum')
and only then you can use it with one_epoch defined above. But this packing is already provided with the object-oriented loss API, along with some bells and whistles (since losses subclass nn.Module, you can use forward and backward hooks, move stuff between cpu and gpu, etc).
| https://stackoverflow.com/questions/54662984/ |
How to give a batch of frames to the model in pytorch c++ api? | I've written a code to load the pytorch model in C++ with help of the PyTorch C++ Frontend api. I want to give a batch of frames to a pretrained model in the C++ by using module->forward(batch_frames). But it can forward through a single input.
How can I give a batch of inputs to the model?
A part of code that I want to give the batch is shown below:
cv::Mat frame;
vector<torch::jit::IValue> frame_batch;
// do some pre-processes on each frame and then add it to the frame_batch
//forward through the batch frames
torch::Tensor output = module->forward(frame_batch).toTensor();
| Finally, I used a function in c++ to concatenate images and make a batch of images. Then convert the batch into the torch::tensor and feed the model using the batch. A part of code is given below:
// cat 2 or more images to make a batch
cv::Mat batch_image;
cv::vconcat(image_2, image_1, batch_image);
// do some pre-process on image
auto input_tensor_batch = torch::from_blob(batch_image.data, {size_of_batch, image_height, image_width, 3});
input_tensor_batch = input_tensor_batch.permute({0, 3, 1, 2});
//forward through the batch frames
torch::Tensor output = module->forward({input_tensor_batch}).toTensor();
Note the { } wrapped around the input tensor in the forward call!
| https://stackoverflow.com/questions/54665137/ |
pytorch: How does loss behave when coming from two networks? | I am trying to implement the following algorithm in this book, section 13.5, in pytorch.
This would require two separate neural networks, (in this question, model1 and model2). One's loss is dependent only on its own output [via delta] (parameterized by w), the other (parameterized by theta), dependent both on its own output [via ln(pi)], and on the other's output [again, via delta].
I want to update each one separately
Assume the following models implement nn.Module:
model1 = Mynet1()
model2 = Mynet2()
val1 = model1(input1)
val2 = model2(input2)
optim1 = optim.Adam(model1.parameters(), lr1)
optim2 = optim.Adam(model2.parameters(), lr2)
loss1 = f(val1)
loss2 = f(val1, val2)#THIS IS THE INTERESTING PART
optim1.zero_grad()
loss1.backward()
optim1.step()
optim2.zero_grad()
loss2.backward()
optim2.step()
I understand that applying backward on loss1, then stepping its optimizer would update model1's parameters.
My question is what happens when activating the same on loss2, model2, optimizer2, where loss 2 is dependant on outputs both from model1 and model2?
How can I make the loss2 update not affect model1 parameters?
 | Since optim2 only has model2's parameters, it will only update model2 when you do optim2.step(), as is being done.
However, loss2.backward() will compute gradients for both model1 and model2's params and if you do optim1.step() after that it will update model1's params. If you don't want to compute gradients for model1's param, you can do val1.detach() to detach it from the computational graph.
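For instance, a minimal sketch of computing the second loss from a detached val1 so that loss2.backward() leaves model1 untouched (f is the same loss function as in the question):
loss2 = f(val1.detach(), val2)  # val1.detach() is cut off from model1's graph
optim2.zero_grad()
loss2.backward()                # gradients flow only into model2's parameters
optim2.step()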
| https://stackoverflow.com/questions/54677774/ |
Pytorch ValueError: optimizer got an empty parameter list | When trying to create a neural network and optimize it using Pytorch, I am getting
ValueError: optimizer got an empty parameter list
Here is the code.
import torch.nn as nn
import torch.nn.functional as F
from os.path import dirname
from os import getcwd
from os.path import realpath
from sys import argv
class NetActor(nn.Module):
def __init__(self, args, state_vector_size, action_vector_size, hidden_layer_size_list):
super(NetActor, self).__init__()
self.args = args
self.state_vector_size = state_vector_size
self.action_vector_size = action_vector_size
self.layer_sizes = hidden_layer_size_list
self.layer_sizes.append(action_vector_size)
self.nn_layers = []
self._create_net()
def _create_net(self):
prev_layer_size = self.state_vector_size
for next_layer_size in self.layer_sizes:
next_layer = nn.Linear(prev_layer_size, next_layer_size)
prev_layer_size = next_layer_size
self.nn_layers.append(next_layer)
def forward(self, torch_state):
activations = torch_state
for i,layer in enumerate(self.nn_layers):
if i != len(self.nn_layers)-1:
activations = F.relu(layer(activations))
else:
activations = layer(activations)
probs = F.softmax(activations, dim=-1)
return probs
and then the call
self.actor_nn = NetActor(self.args, 4, 2, [128])
self.actor_optimizer = optim.Adam(self.actor_nn.parameters(), lr=args.learning_rate)
gives the very informative error
ValueError: optimizer got an empty parameter list
I find it hard to understand what exactly in the network's definition makes the network have parameters.
I am following and expanding the example I found in Pytorch's tutorial code.
I can't really tell the difference between my code and theirs that makes mine think it has no parameters to optimize.
How to make my network have parameters like the linked example?
| Your NetActor does not directly store any nn.Parameter. Moreover, all other layers it eventually uses in forward are stored as a simple list in self.nn_layers.
If you want self.actor_nn.parameters() to know that the items stored in the list self.nn_layers may contain trainable parameters, you should work with containers.
Specifically, making self.nn_layers to be a nn.ModuleList instead of a simple list should solve your problem:
self.nn_layers = nn.ModuleList()
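As a quick sanity check (a sketch, not part of the original code), layers appended to an nn.ModuleList are registered as submodules and therefore show up in .parameters():
import torch.nn as nn
class Tiny(nn.Module):
    def __init__(self):
        super(Tiny, self).__init__()
        self.nn_layers = nn.ModuleList()
        self.nn_layers.append(nn.Linear(4, 2))  # registered as a submodule
net = Tiny()
print(len(list(net.parameters())))  # 2 (weight and bias), so the optimizer gets a non-empty list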
| https://stackoverflow.com/questions/54678896/ |
How to find the max index for each row in a tensor object? | So I'm creating a pytorch model and for the forward pass, I'm applying my forward pass method to get the scores tensor which contains the prediction scores for each class. The shape of this tensor is [100, 10]. Now, I want to get the accuracy by comparing it to y which contains the actual scores. This tensor has the shape [100]. To compare the two I'll be using torch.mean(scores == y) and I'll count how many are the same.
The problem is that I need to convert the scores tensor so that each row simply contains the index of the highest value in each row. For example if the tensor looked like this,
tensor(
[[0.3232, -0.2321, 0.2332, -0.1231, 0.2435, 0.6728],
[0.2323, -0.1231, -0.5321, -0.1452, 0.5435, 0.1722],
[0.9823, -0.1321, -0.6433, 0.1231, 0.023, 0.0711]]
)
Then I'd want it to be converted so that it looks like this.
tensor([5, 4, 0])
How could I do that?
| Use argmax with desired dim (a.k.a. axis)
a = tensor(
[[0.3232, -0.2321, 0.2332, -0.1231, 0.2435, 0.6728],
[0.2323, -0.1231, -0.5321, -0.1452, 0.5435, 0.1722],
[0.9823, -0.1321, -0.6433, 0.1231, 0.023, 0.0711]]
)
a.argmax(1)
# tensor([ 5, 4, 0])
| https://stackoverflow.com/questions/54681798/ |
requires_grad of params is True even with torch.no_grad() | I am experiencing a strange problem with PyTorch today.
When checking network parameters in the with scope, I am expecting requires_grad to be False, but apparently this is not the case unless I explicitly set all params myself.
Code
Link to Net -> Gist
net = InceptionResnetV2()
with torch.no_grad():
for name, param in net.named_parameters():
print("{} {}".format(name, param.requires_grad))
The above code will tell me all the params are still requiring grad, unless I explicitly specify param.requires_grad = False.
My torch version: 1.0.1.post2
| torch.no_grad() will disable gradient information for the results of operations involving tensors that have their requires_grad set to True. So consider the following:
import torch
net = torch.nn.Linear(4, 3)
input_t = torch.randn(4)
with torch.no_grad():
for name, param in net.named_parameters():
print("{} {}".format(name, param.requires_grad))
out = net(input_t)
print('Output: {}'.format(out))
print('Output requires gradient: {}'.format(out.requires_grad))
print('Gradient function: {}'.format(out.grad_fn))
This prints
weight True
bias True
Output: tensor([-0.3311, 1.8643, 0.2933])
Output requires gradient: False
Gradient function: None
If you remove with torch.no_grad(), you get
weight True
bias True
Output: tensor([ 0.5776, -0.5493, -0.9229], grad_fn=<AddBackward0>)
Output requires gradient: True
Gradient function: <AddBackward0 object at 0x7febe41e3240>
Note that in both cases the module parameters have requires_grad set to True, but in the first case the out tensor doesn't have a gradient function associated with it whereas it does in the second case.
| https://stackoverflow.com/questions/54682457/ |
PyTorch VAE fails conversion to onnx | I'm trying to convert a PyTorch VAE to onnx, but I'm getting: torch.onnx.symbolic.normal does not exist
The problem appears to originate from a reparametrize() function:
def reparametrize(self, mu, logvar):
std = logvar.mul(0.5).exp_()
if self.have_cuda:
eps = torch.normal(torch.zeros(std.size()),torch.ones(std.size())).cuda()
else:
eps = torch.normal(torch.zeros(std.size()),torch.ones(std.size()))
return eps.mul(std).add_(mu)
I also tried:
eps = torch.cuda.FloatTensor(std.size()).normal_()
which produced the error:
Schema not found for node. File a bug report.
Node: %173 : Float(1, 20) = aten::normal(%169, %170, %171, %172), scope: VAE
Input types:Float(1, 20), float, float, Generator
and
eps = torch.randn(std.size()).cuda()
which produced the error:
builtins.TypeError: i_(): incompatible function arguments. The following argument types are supported:
1. (self: torch._C.Node, arg0: str, arg1: int) -> torch._C.Node
Invoked with: %137 : Tensor = onnx::RandomNormal(), scope: VAE, 'shape', 133 defined in (%133 : int[] = prim::ListConstruct(%128, %132), scope: VAE) (occurred when translating randn)
I am using cuda.
Any thoughts appreciated. Perhaps I need to approach the z/latent differently for onnx?
NOTE: Stepping through, I can see that it's finding RandomNormal() for torch.randn(), which should be correct. But I don't really have access to the arguments at that point, so how can I fix it?
 | In short, the code below may work (at least in my environment it ran without errors).
It seems that the .size() operator may return a variable rather than a constant, which causes an error during ONNX compilation (I got the same error when I changed the code to use .size()).
import torch
import torch.utils.data
from torch import nn
from torch.nn import functional as F
IN_DIMS = 28 * 28
BATCH_SIZE = 10
FEATURE_DIM = 20
class VAE(nn.Module):
def __init__(self):
super(VAE, self).__init__()
self.fc1 = nn.Linear(784, 400)
self.fc21 = nn.Linear(400, FEATURE_DIM)
self.fc22 = nn.Linear(400, FEATURE_DIM)
self.fc3 = nn.Linear(FEATURE_DIM, 400)
self.fc4 = nn.Linear(400, 784)
def encode(self, x):
h1 = F.relu(self.fc1(x))
return self.fc21(h1), self.fc22(h1)
def reparameterize(self, mu, logvar):
std = torch.exp(0.5*logvar)
eps = torch.randn(BATCH_SIZE, FEATURE_DIM, device='cuda')
return eps.mul(std).add_(mu)
def decode(self, z):
h3 = F.relu(self.fc3(z))
return torch.sigmoid(self.fc4(h3))
def forward(self, x):
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
recon_x = self.decode(z)
return recon_x
model = VAE().cuda()
dummy_input = torch.randn(BATCH_SIZE, IN_DIMS, device='cuda')
torch.onnx.export(model, dummy_input, "vae.onnx", verbose=True)
| https://stackoverflow.com/questions/54699201/ |
Moving member tensors with module.to() in PyTorch | I am building a Variational Autoencoder (VAE) in PyTorch and have a problem writing device agnostic code. The Autoencoder is a child of nn.Module with an encoder and decoder network, which are too. All weights of the network can be moved from one device to another by calling net.to(device).
The problem I have is with the reparametrization trick:
encoding = mu + noise * sigma
The noise is a tensor of the same size as mu and sigma and saved as a member variable of the autoencoder module. It is initialized in the constructor and resampled in-place each training step. I do it that way to avoid constructing a new noise tensor each step and pushing it to the desired device. Additionally, I want to fix the noise in the evaluation. Here is the code:
class VariationalGenerator(nn.Module):
def __init__(self, input_nc, output_nc):
super(VariationalGenerator, self).__init__()
self.input_nc = input_nc
self.output_nc = output_nc
embedding_size = 128
self._train_noise = torch.randn(batch_size, embedding_size)
self._eval_noise = torch.randn(1, embedding_size)
self.noise = self._train_noise
# Create encoder
self.encoder = Encoder(input_nc, embedding_size)
# Create decoder
self.decoder = Decoder(output_nc, embedding_size)
def train(self, mode=True):
super(VariationalGenerator, self).train(mode)
self.noise = self._train_noise
def eval(self):
super(VariationalGenerator, self).eval()
self.noise = self._eval_noise
def forward(self, inputs):
# Calculate parameters of embedding space
mu, log_sigma = self.encoder.forward(inputs)
# Resample noise if training
if self.training:
self.noise.normal_()
# Reparametrize noise to embedding space
inputs = mu + self.noise * torch.exp(0.5 * log_sigma)
# Decode to image
inputs = self.decoder(inputs)
return inputs, mu, log_sigma
When I now move the autoencoder to the GPU with net.to('cuda:0') I get an error in forwarding because the noise tensor is not moved.
I don't want to add a device parameter to the constructor, because then it is still not possible to move it to another device later. I also tried to wrap the noise into nn.Parameter so that it is affected by net.to(), but that gives an error from the optimizer, as the noise is flagged as requires_grad=False.
Anyone has a solution to move all of the modules with net.to()?
| A better version of tilman151's second approach is probably to override _apply, rather than to. That way net.cuda(), net.float(), etc will all work as well, since those all call _apply rather than to (as can be seen in the source, which is simpler than you might think):
def _apply(self, fn):
super(VariationalGenerator, self)._apply(fn)
self._train_noise = fn(self._train_noise)
self._eval_noise = fn(self._eval_noise)
return self
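With that in place, moving or casting the module also moves the noise tensors; a short usage sketch (assuming the VariationalGenerator from the question with this override added):
net = VariationalGenerator(3, 3)
net.to('cuda:0')   # _train_noise and _eval_noise now move together with the parameters
net.float()        # dtype casts go through _apply as well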
| https://stackoverflow.com/questions/54706146/ |
Add channel to MNIST via transform? | I'm trying to use the MNIST dataset from torchvision.datasets.It seems to be provided as an N x H x W (uint8) (batch dimension, height, width) tensor. All the pytorch classes for working on images (for instance Conv2d) however require a N x C x H x W (float32) tensor where C is the number of colour channels. I've tried to add add the ToTensor transform but that didn't add a color channel.
Is there a way using torchvision.transforms to add this additional dimension? For a raw tensor we could just do .unsqueeze(1) but that doesn't look like a very elegant solution. I'm just trying to do it the "proper" way.
Here is the failed conversion.
import torchvision
dataset = torchvision.datasets.MNIST("~/PyTorchDatasets/MNIST/", train=True, transform=torchvision.transforms.ToTensor(), download=True)
print(dataset.train_data[0])
| I had a misconception: dataset.train_data is not affected by the specified transform, only the output of a DataLoader(dataset,...) will be. After checking data from
for data, _ in DataLoader(dataset):
break
we can see that ToTensor actually does exactly what is desired.
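Concretely (a quick check; the shape assumes default MNIST with the ToTensor transform):
from torch.utils.data import DataLoader
data, target = next(iter(DataLoader(dataset, batch_size=1)))
print(data.shape, data.dtype)  # torch.Size([1, 1, 28, 28]) torch.float32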
| https://stackoverflow.com/questions/54707186/ |
How to do gradient clipping in pytorch? | What is the correct way to perform gradient clipping in pytorch?
I have an exploding gradients problem.
| clip_grad_norm (which is actually deprecated in favor of clip_grad_norm_ following the more consistent syntax of a trailing _ when in-place modification is performed) clips the norm of the overall gradient by concatenating all parameters passed to the function, as can be seen from the documentation:
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
From your example it looks like you want clip_grad_value_ instead, which has a similar syntax and also modifies the gradients in-place:
clip_grad_value_(model.parameters(), clip_value)
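For context, a minimal sketch of where the clipping call sits in a training loop (model, optimizer, loss_fn and a batch x, y are assumed to already exist):
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                                                       # gradients are computed here
torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0)   # clip them in-place
optimizer.step()                                                      # update with the clipped gradients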
Another option is to register a backward hook. This takes the current gradient as an input and may return a tensor which will be used in place of the previous gradient, i.e. modifying it. This hook is called each time after a gradient has been computed, i.e. there's no need to clip manually once the hook has been registered:
for p in model.parameters():
p.register_hook(lambda grad: torch.clamp(grad, -clip_value, clip_value))
| https://stackoverflow.com/questions/54716377/ |
PyTorch: What does @weak_script_method decorator do? | In the torch.nn.Linear class (and other classes too), the forward method includes a @weak_script_method decorator as follows:
@weak_script_method
def forward(self, input):
return F.linear(input, self.weight, self.bias)
What does this decorator do? Should I include it if I'm overriding the forward method in my own subclass of the Linear module?
| You can find the exact decorator location to get the idea.
def weak_script_method(fn):
weak_script_methods[fn] = {
"rcb": createResolutionCallback(frames_up=2),
"original_method": fn
}
return fn
But, you shouldn't need to worry about that decorator. This decorator is internal to JIT.
Technically, a method decorated with @weak_script_method is added to the weak_script_methods dictionary defined just before, like this:
weak_script_methods = weakref.WeakKeyDictionary()
That dict keeps track of the methods in order to avoid circular dependency problems: methods calling other methods while the PyTorch graph is being created.
This won't make much sense unless you understand the concept of TorchScript in general.
The idea of TorchScript is to train models in PyTorch and export them to another, non-Python production environment (read: C++/C/CUDA) that supports static typing.
The PyTorch team built TorchScript on a limited subset of Python to support static typing.
By default Python is a dynamically typed language, but with a few tricks (read: checks) it can effectively be made statically typed.
So TorchScript functions are a statically typed subset of Python that contains all of PyTorch's built-in tensor operations. This difference allows TorchScript module code to run without the need for a Python interpreter.
You can either convert existing PyTorch methods to TorchScript using tracing (the torch.jit.trace() method), or create your TorchScript by hand using the @torch.jit.script decorator.
If you use tracing you will get a single-class module at the end. Here is an example:
import inspect
import torch
def foo(x, y):
return x + y
traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))
print(type(traced_foo)) #<class 'torch.jit.TopLevelTracedModule'>
print(traced_foo) #foo()
print(traced_foo.forward) #<bound method TopLevelTracedModule.forward of foo()>
lines = inspect.getsource(traced_foo.forward)
print(lines)
Output:
<class 'torch.jit.TopLevelTracedModule'>
foo()
<bound method TopLevelTracedModule.forward of foo()>
def forward(self, *args, **kwargs):
return self._get_method('forward')(*args, **kwargs)
You can investigate further using the inspect module. This was just a showcase how to convert one function using tracing.
| https://stackoverflow.com/questions/54718027/ |
CoreML: creating a custom layer for ONNX RandomNormal | I've trainined a VAE that in PyTorch that I need to convert to CoreML. From this thread PyTorch VAE fails conversion to onnx I was able to get the ONNX model to export, however, this just pushed the problem one step further to the ONNX-CoreML stage.
The original function that contains the torch.randn() call is the reparametrize func:
def reparametrize(self, mu, logvar):
std = logvar.mul(0.5).exp_()
if self.have_cuda:
eps = torch.randn(self.bs, self.nz, device='cuda')
else:
eps = torch.randn(self.bs, self.nz)
return eps.mul(std).add_(mu)
The solution is, of course, to create a custom layer, but I'm having problems creating a layer with no inputs (i.e., it's just a randn() call).
I can get the CoreML conversion to complete with this def:
def convert_randn(node):
params = NeuralNetwork_pb2.CustomLayerParams()
params.className = "RandomNormal"
params.description = "Random normal distribution generator"
params.parameters["dtype"].intValue = node.attrs.get('dtype', 1)
params.parameters["bs"].intValue = node.attrs.get("shape")[0]
params.parameters["nz"].intValue = node.attrs.get("shape")[1]
return params
I do the conversion with:
coreml_model = convert(onnx_model, add_custom_layers=True,
image_input_names = ['input'],
custom_conversion_functions={"RandomNormal": convert_randn})
I should also note that, at the completion of the mlmodel export, the following is printed:
Custom layers have been added to the CoreML model corresponding to the
following ops in the onnx model:
1/1: op type: RandomNormal, op input names and shapes: [], op output
names and shapes: [('62', 'Shape not available')]
Bringing the .mlmodel into Xcode complains that Layer '62' of type 500 has 0 inputs but expects at least 1. So I'm wondering how to specify a kind of "dummy" input to the layer, since it doesn't actually have an input -- it's just a wrapper around torch.randn() (or, more specifically, the onnx RandonNormal op). I should clarify that I do need the whole VAE, not just the decoder, as I'm actually using the entire process to "error correct" my inputs (i.e., the encoder estimates my z vector, based on an input, then the decoder generates the closest generalizable prediction of the input).
Any help greatly appreciated.
UPDATE: Okay, I finally got a version to load in Xcode (thanks to @MattijsHollemans and his book!). The originalConversion.mlmodel is the initial output of converting my model from ONNX to CoreML. To this, I had to manually insert the input for the RandomNormal layer. I made it (64, 28, 28) for no great reason — I know my batch size is 64, and my inputs are 28 x 28 (but presumably it could also be (1, 1, 1), since it's a "dummy"):
spec = coremltools.utils.load_spec('originalConversion.mlmodel')
nn = spec.neuralNetwork
layers = {l.name:i for i,l in enumerate(nn.layers)}
layer_idx = layers["62"] # '62' is the name of the layer -- see above
layer = nn.layers[layer_idx]
layer.input.extend(["dummy_input"])
inp = spec.description.input.add()
inp.name = "dummy_input"
inp.type.multiArrayType.SetInParent()
spec.description.input[1].type.multiArrayType.shape.append(64)
spec.description.input[1].type.multiArrayType.shape.append(28)
spec.description.input[1].type.multiArrayType.shape.append(28)
spec.description.input[1].type.multiArrayType.dataType = ft.ArrayFeatureType.DOUBLE
coremltools.utils.save_spec(spec, "modelWithInsertedInput.mlmodel")
This loads in Xcode, but I have yet to test the functioning of the model in my app. Since the additional layer is simple, and the input is literally a bogus, non-functional input (just to keep Xcode happy), I don't imagine it will be a problem, but I'll post again if it doesn't run properly.
UPDATE 2: Unfortunately, the model doesn't load at runtime. It fails with [espresso] [Espresso::handle_ex_plan] exception=Failed in 2nd reshape after missing custom layer info. What I find very strange and confusing is that, inspecting model.espresso.shape, I see that almost every node has a shape like:
"62" : {
"k" : 0,
"w" : 0,
"n" : 0,
"seq" : 0,
"h" : 0
}
I have two question/concerns: 1) Most obviously, why are all the values zero (this is the case with all but the input nodes), and 2) Why does it appear to be a sequential model, when it's just a fairly conventional VAE? Opening model.espresso.shape for a fully-functioning GAN in the same app, I see that the nodes are of the format:
"54" : {
"k" : 256,
"w" : 16,
"n" : 1,
"h" : 16
}
That is, they contain reasonable shape info, and they don't have seq fields.
Very, very confused...
UPDATE 3: I've also just noticed in the compiler report the error: IMPORTANT: new sequence length computation failed, falling back to old path. Your compilation was sucessful, but please file a radar on Core ML | Neural Networks and attach the model that generated this message.
Here's the original PyTorch model:
class VAE(nn.Module):
def __init__(self, bs, nz):
super(VAE, self).__init__()
self.nz = nz
self.bs = bs
self.encoder = nn.Sequential(
# input is (nc) x 28 x 28
nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
nn.LeakyReLU(0.2, inplace=True),
# size = (ndf) x 14 x 14
nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 2),
nn.LeakyReLU(0.2, inplace=True),
# size = (ndf*2) x 7 x 7
nn.Conv2d(ndf * 2, ndf * 4, 3, 2, 1, bias=False),
nn.BatchNorm2d(ndf * 4),
nn.LeakyReLU(0.2, inplace=True),
# size = (ndf*4) x 4 x 4
nn.Conv2d(ndf * 4, 1024, 4, 1, 0, bias=False),
nn.LeakyReLU(0.2, inplace=True),
)
self.decoder = nn.Sequential(
# input is Z, going into a convolution
nn.ConvTranspose2d( 1024, ngf * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(ngf * 8),
nn.ReLU(True),
# size = (ngf*8) x 4 x 4
nn.ConvTranspose2d(ngf * 8, ngf * 4, 3, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 4),
nn.ReLU(True),
# size = (ngf*4) x 8 x 8
nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(ngf * 2),
nn.ReLU(True),
# size = (ngf*2) x 16 x 16
nn.ConvTranspose2d(ngf * 2, nc, 4, 2, 1, bias=False),
nn.Sigmoid()
)
self.fc1 = nn.Linear(1024, 512)
self.fc21 = nn.Linear(512, nz)
self.fc22 = nn.Linear(512, nz)
self.fc3 = nn.Linear(nz, 512)
self.fc4 = nn.Linear(512, 1024)
self.lrelu = nn.LeakyReLU()
self.relu = nn.ReLU()
def encode(self, x):
conv = self.encoder(x);
h1 = self.fc1(conv.view(-1, 1024))
return self.fc21(h1), self.fc22(h1)
def decode(self, z):
h3 = self.relu(self.fc3(z))
deconv_input = self.fc4(h3)
deconv_input = deconv_input.view(-1,1024,1,1)
return self.decoder(deconv_input)
def reparametrize(self, mu, logvar):
std = logvar.mul(0.5).exp_()
eps = torch.randn(self.bs, self.nz, device='cuda') # needs custom layer!
return eps.mul(std).add_(mu)
def forward(self, x):
# print("x", x.size())
mu, logvar = self.encode(x)
z = self.reparametrize(mu, logvar)
decoded = self.decode(z)
return decoded, mu, logvar
| To add an input to your Core ML model, you can do the following from Python:
import coremltools
spec = coremltools.utils.load_spec("YourModel.mlmodel")
nn = spec.neuralNetworkClassifier # or just spec.neuralNetwork
layers = {l.name:i for i,l in enumerate(nn.layers)}
layer_idx = layers["your_custom_layer"]
layer = nn.layers[layer_idx]
layer.input.extend(["dummy_input"])
inp = spec.description.input.add()
inp.name = "dummy_input"
inp.type.doubleType.SetInParent()
coremltools.utils.save_spec(spec, "NewModel.mlmodel")
Here, "your_custom_layer" is the name of the layer you want to add the dummy input to. In your model it looks like it's called 62. You can look at the layers dictionary to see the names of all the layers in the model.
Notes:
If your model is not a classifier, use nn = spec.neuralNetwork instead of neuralNetworkClassifier.
I made the new dummy input have the type "double". That means your custom layer gets a double value as input.
You need to specify a value for this dummy input when using the model.
| https://stackoverflow.com/questions/54718662/ |
Activation gradient penalty | Here's a simple neural network, where I’m trying to penalize the norm of activation gradients:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU()
self.linear = nn.Linear(64 * 5 * 5, 10)
def forward(self, input):
conv1 = self.conv1(input)
pool1 = self.pool(conv1)
self.relu1 = self.relu(pool1)
self.relu1.retain_grad()
        conv2 = self.conv2(self.relu1)
pool2 = self.pool(conv2)
relu2 = self.relu(pool2)
self.relu2 = relu2.view(relu2.size(0), -1)
self.relu2.retain_grad()
        return self.linear(self.relu2)
model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
for i in range(1000):
output = model(input)
loss = nn.CrossEntropyLoss()(output, label)
optimizer.zero_grad()
loss.backward(retain_graph=True)
grads = torch.autograd.grad(loss, [model.relu1, model.relu2], create_graph=True)
grad_norm = 0
for grad in grads:
grad_norm += grad.pow(2).sum()
grad_norm.backward()
optimizer.step()
However, it does not produce the desired regularization effect. If I do the same thing for weights (instead of activations), it works well. Am I doing this right (in terms of pytorch machinery)? Specifically, what happens in grad_norm.backward() call? I just want to make sure the weight gradients are updated, and not activation gradients. Currently, when I print out gradients for weights and activations immediately before and after that line, both change - so I’m not sure what’s going on.
| I think your code ends up computing some of the gradients twice in each step. I also suspect it actually never zeroes out the activation gradients, so they accumulate across steps.
In general:
x.backward() computes gradient of x wrt. computation graph leaves (e.g. weight tensors and other variables), as well as wrt. nodes explicitly marked with retain_grad(). It accumulates the computed gradient in tensors' .grad attributes.
autograd.grad(x, [y, z]) returns gradient of x wrt. y and z regardless of whether they would normally retain grad or not. By default, it will also accumulate gradient in all leaves' .grad attributes. You can prevent this by passing only_inputs=True.
I prefer to use backward() only for the optimization step, and autograd.grad() whenever my goal is to obtain "reified" gradients as intermediate values for another computation. This way, I can be sure that no unwanted gradients remain lying around in tensors' .grad attributes after I'm done with them.
import torch
from torch import nn
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU()
self.linear = nn.Linear(64 * 5 * 5, 10)
def forward(self, input):
conv1 = self.conv1(input)
pool1 = self.pool(conv1)
self.relu1 = self.relu(pool1)
conv2 = self.conv2(self.relu1)
pool2 = self.pool(conv2)
self.relu2 = self.relu(pool2)
relu2 = self.relu2.view(self.relu2.size(0), -1)
return self.linear(relu2)
model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
grad_penalty_weight = 10.
for i in range(1000000):
# Random input and labels; we're not really learning anything
input = torch.rand(1, 3, 32, 32)
label = torch.randint(0, 10, (1,))
output = model(input)
loss = nn.CrossEntropyLoss()(output, label)
# This is where the activation gradients are computed
# only_inputs is optional here, since we're going to call optimizer.zero_grad() later
# But it makes clear that we're *only* interested in the activation gradients at this point
grads = torch.autograd.grad(loss, [model.relu1, model.relu2], create_graph=True, only_inputs=True)
grad_norm = 0
for grad in grads:
grad_norm += grad.pow(2).sum()
optimizer.zero_grad()
loss = loss + grad_norm * grad_penalty_weight
loss.backward()
optimizer.step()
This code appears to work, in that the activation gradients do get smaller.
I cannot comment on the viability of this technique as a regularization method.
| https://stackoverflow.com/questions/54727099/ |
My numpy and pytorch codes have totally different results | I wanted to calculate the sum of 1st to K-th power of an array and equally calculate the sum of 1st to k-th power of a tensor. I found out that the following codes and their results are totally different and I don't know why.
I debugged the code and I know that the results are equal in the first round.
Numpy code:
adj_k_prob = adj_prob
adj_k_pow = adj_prob
for i in range(K):
adj_k_pow = np.matmul(adj_prob, adj_k_pow)
adj_k_prob += adj_k_pow
Pytorch code:
adj_k_prob = adj_prob_tensor
adj_k_pow = adj_prob_tensor
for i in range(K):
adj_k_pow = torch.matmul(adj_prob_tensor, adj_k_pow)
adj_k_prob += adj_k_pow
The value of adj_prob_tensor and adj_prob at the start of loop are as follow:
tensor([[0.0000, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429],
[0.2500, 0.0000, 0.2500, 0.2500, 0.0000, 0.0000, 0.0000, 0.2500],
[0.2500, 0.2500, 0.0000, 0.2500, 0.0000, 0.0000, 0.0000, 0.2500],
[0.2500, 0.2500, 0.2500, 0.0000, 0.0000, 0.0000, 0.0000, 0.2500],
[0.5000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.5000, 0.0000],
[0.5000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.5000, 0.0000],
[0.3333, 0.0000, 0.0000, 0.0000, 0.3333, 0.3333, 0.0000, 0.0000],
[0.2500, 0.2500, 0.2500, 0.2500, 0.0000, 0.0000, 0.0000, 0.0000]])
Is there anything that I should check for it?
| The problem was in using assignment. I should have used .clone() for pytorch and .copy() in numpy.
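Concretely, the fix only changes the initial assignments, so nothing aliases the original array/tensor before the in-place += (a sketch):
# NumPy: copy, so the in-place `adj_k_prob += adj_k_pow` cannot mutate adj_prob
adj_k_prob = adj_prob.copy()
adj_k_pow = adj_prob.copy()
# PyTorch: same idea with .clone()
adj_k_prob = adj_prob_tensor.clone()
adj_k_pow = adj_prob_tensor.clone()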
| https://stackoverflow.com/questions/54727465/ |
Concat tensors in PyTorch | I have a tensor called data of the shape [128, 4, 150, 150] where 128 is the batch size, 4 is the number of channels, and the last 2 dimensions are height and width. I have another tensor called fake of the shape [128, 1, 150, 150].
I want to drop the last list/array from the 2nd dimension of data; the shape of data would now be [128, 3, 150, 150]; and concatenate it with fake giving the output dimension of the concatenation as [128, 4, 150, 150].
Basically, in other words, I want to concatenate the first 3 dimensions of data with fake to give a 4-dimensional tensor.
I am using PyTorch and came across the functions torch.cat() and torch.stack()
Here is a sample code I've written:
fake_combined = []
for j in range(batch_size):
fake_combined.append(torch.stack((data[j][0].to(device), data[j][1].to(device), data[j][2].to(device), fake[j][0].to(device))))
fake_combined = torch.tensor(fake_combined, dtype=torch.float32)
fake_combined = fake_combined.to(device)
But I am getting an error in the line:
fake_combined = torch.tensor(fake_combined, dtype=torch.float32)
The error is:
ValueError: only one element tensors can be converted to Python scalars
Also, if I print the shape of fake_combined, I get the output as [128,] instead of [128, 4, 150, 150]
And when I print the shape of fake_combined[0], I get the output as [4, 150, 150], which is as expected.
So my question is, why am I not able to convert the list to tensor using torch.tensor(). Am I missing something? Is there any better way to do what I intend to do?
Any help will be appreciated! Thanks!
| You could also just assign to that particular dimension.
orig = torch.randint(low=0, high=10, size=(2,3,2,2))
fake = torch.randint(low=111, high=119, size=(2,1,2,2))
orig[:,[2],:,:] = fake
Original Before
tensor([[[[0, 1],
[8, 0]],
[[4, 9],
[6, 1]],
[[8, 2],
[7, 6]]],
[[[1, 1],
[8, 5]],
[[5, 0],
[8, 6]],
[[5, 5],
[2, 8]]]])
Fake
tensor([[[[117, 115],
[114, 111]]],
[[[115, 115],
[118, 115]]]])
Original After
tensor([[[[ 0, 1],
[ 8, 0]],
[[ 4, 9],
[ 6, 1]],
[[117, 115],
[114, 111]]],
[[[ 1, 1],
[ 8, 5]],
[[ 5, 0],
[ 8, 6]],
[[115, 115],
[118, 115]]]])
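Alternatively, a hedged one-liner with torch.cat that drops the last channel of data and appends fake along the channel dimension (no Python loop needed):
fake_combined = torch.cat((data[:, :3, :, :], fake), dim=1)   # -> [128, 4, 150, 150]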
Hope this helps! :)
| https://stackoverflow.com/questions/54727686/ |
Calculating Variance in Pytorch on Spatial axis | I am trying to calculate variance in Pytorch but unable to do on multiple axis.
I have similar thing done in Tensorflow but unable to do it on Pytorch as torch.var function takes int as dimension instead of axes.
Below code is channel last code, I expect axes=[2,3]
Lambda(lambda x: tf.nn.moments(x, axes=[1, 2]))
For example, if input_dims = (5, 10, 25, 25) then output_dims should be (5,10, 1, 1).
| One thing you can do is to use tensor.view() to flatten all the dimensions that you want to calculate the variance for into one dimension before you apply the var() method:
torch.var(x.view(x.shape[0], x.shape[1], 1, -1,), dim=3, keepdim=True)
I used keepdim=True to keep the dimension that we calculate the variance for to get the desired output shape.
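In more recent PyTorch releases torch.var also appears to accept a tuple of dims, which would make this a one-liner (hedged, version-dependent):
out = torch.var(x, dim=(2, 3), keepdim=True)   # (5, 10, 25, 25) -> (5, 10, 1, 1)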
| https://stackoverflow.com/questions/54732466/ |
Pytorch: How to create an update rule that doesn't come from derivatives? | I want to implement the following algorithm, taken from this book, section 13.6:
I don't understand how to implement the update rule in pytorch (the rule for w is quite similar to that of theta).
As far as I know, torch requires a loss for loss.backwward().
This form does not seem to apply for the quoted algorithm.
I'm still certain there is a correct way of implementing such update rules in pytorch.
Would greatly appreciate a code snippet of how the w weights should be updated, given that V(s,w) is the output of the neural net, parameterized by w.
EDIT: Chris Holland suggested a way to implement, and I implemented it. It does not converge on Cartpole, and I wonder if I did something wrong.
The critic does converge on the solution to the function gamma*f(n)=f(n)-1 which happens to be the sum of the series gamma+gamma^2+...+gamma^inf
meaning, gamma=1 diverges. gamma=0.99 converges on 100, gamma=0.5 converges on 2 and so on. Regardless of the actor or policy.
The code:
def _update_grads_with_eligibility(self, is_critic, delta, discount, ep_t):
gamma = self.args.gamma
if is_critic:
params = list(self.critic_nn.parameters())
lamb = self.critic_lambda
eligibilities = self.critic_eligibilities
else:
params = list(self.actor_nn.parameters())
lamb = self.actor_lambda
eligibilities = self.actor_eligibilities
is_episode_just_started = (ep_t == 0)
if is_episode_just_started:
eligibilities.clear()
for i, p in enumerate(params):
if not p.requires_grad:
continue
eligibilities.append(torch.zeros_like(p.grad, requires_grad=False))
# eligibility traces
for i, p in enumerate(params):
if not p.requires_grad:
continue
eligibilities[i][:] = (gamma * lamb * eligibilities[i]) + (discount * p.grad)
p.grad[:] = delta.squeeze() * eligibilities[i]
and
expected_reward_from_t = self.critic_nn(s_t)
probs_t = self.actor_nn(s_t)
expected_reward_from_t1 = torch.tensor([[0]], dtype=torch.float)
if s_t1 is not None: # s_t is not a terminal state, s_t1 exists.
expected_reward_from_t1 = self.critic_nn(s_t1)
delta = r_t + gamma * expected_reward_from_t1.data - expected_reward_from_t.data
negative_expected_reward_from_t = -expected_reward_from_t
self.critic_optimizer.zero_grad()
negative_expected_reward_from_t.backward()
self._update_grads_with_eligibility(is_critic=True,
delta=delta,
discount=discount,
ep_t=ep_t)
self.critic_optimizer.step()
EDIT 2:
Chris Holland's solution works. The problem originated from a bug in my code that caused the line
if s_t1 is not None:
expected_reward_from_t1 = self.critic_nn(s_t1)
to always get called, thus expected_reward_from_t1 was never zero, and thus no stopping condition was specified for the Bellman equation recursion.
With no reward engineering, gamma=1, lambda=0.6, and a single hidden layer of size 128 for both actor and critic, this converged on a rather stable optimal policy within 500 episodes.
Even faster with gamma=0.99, as the graph shows (best discounted episode reward is about 86.6).
BIG thank you to @Chris Holland, who "gave this a try"
| I am gonna give this a try.
.backward() does not need a loss function, it just needs a differentiable scalar output. It computes the gradient of that output with respect to the model parameters. Let's just look at the first case, the update for the value function.
We have one gradient appearing for v, which we can compute by
v = model(s)
v.backward()
This gives us a gradient of v which has the dimension of your model parameters. Assuming we already calculated the other parameter updates, we can calculate the actual optimizer update:
for i, p in enumerate(model.parameters()):
z_theta[i][:] = gamma * lamda * z_theta[i] + l * p.grad
p.grad[:] = alpha * delta * z_theta[i]
We can then use opt.step() to update the model parameters with the adjusted gradient.
| https://stackoverflow.com/questions/54734556/ |
Column-dependent bounds in torch.clamp | I would like to do something similar to np.clip on PyTorch tensors on a 2D array. More specifically, I would like to clip each column in a specific range of value (column-dependent). For example, in numpy, you could do:
x = np.array([-1,10,3])
low = np.array([0,0,1])
high = np.array([2,5,4])
clipped_x = np.clip(x, low, high)
clipped_x == np.array([0,5,3]) # True
I found torch.clamp, but unfortunately it does not support multidimensional bounds (only one scalar value for the entire tensor). Is there a "neat" way to extend that function to my case?
Thanks!
| Not as neat as np.clip, but you can use torch.max and torch.min:
In [1]: x
Out[1]:
tensor([[0.9752, 0.5587, 0.0972],
[0.9534, 0.2731, 0.6953]])
Setting the lower and upper bound per column
l = torch.tensor([[0.2, 0.3, 0.]])
u = torch.tensor([[0.8, 1., 0.65]])
Note that the lower bound l and upper bound u are 1-by-3 tensors (2D with singleton dimension). We need these dimensions for l and u to be broadcastable to the shape of x.
Now we can clip using min and max:
clipped_x = torch.max(torch.min(x, u), l)
Resulting with
tensor([[0.8000, 0.5587, 0.0972],
[0.8000, 0.3000, 0.6500]])
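In newer PyTorch releases torch.clamp itself also appears to accept tensor bounds (hedged, version-dependent), which would reduce this to a single call:
clipped_x = torch.clamp(x, min=l, max=u)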
| https://stackoverflow.com/questions/54738045/ |
PyTorch: What's the difference between state_dict and parameters()? | In order to access a model's parameters in pytorch, I saw two methods:
using state_dict and using parameters()
I wonder what's the difference, or if one is good practice and the other is bad practice.
Thanks
| The parameters() only gives the module parameters i.e. weights and biases.
Returns an iterator over module parameters.
You can check the list of the parameters as follows:
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
On the other hand, state_dict returns a dictionary containing a whole state of the module. Check its source code that contains not just the call to parameters but also buffers, etc.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are the corresponding parameter and buffer names.
Check all keys that state_dict contains using:
model.state_dict().keys()
For example, in state_dict, you'll find entries like bn1.running_mean and running_var, which are not present in .parameters().
If you only want to access parameters, you can simply use .parameters(), while for purposes like saving and loading model as in transfer learning, you'll need to save state_dict not just parameters.
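A minimal, hedged sketch of the usual save/restore pattern (the file name and model class here are placeholders):
# save everything needed to restore the module's state
torch.save(model.state_dict(), 'checkpoint.pth')

# later: rebuild the architecture first, then load the saved state
model = TheModelClass(*args)
model.load_state_dict(torch.load('checkpoint.pth'))
model.eval()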
| https://stackoverflow.com/questions/54746829/ |
Input dimension error on pytorch's forward check | I am creating an RNN with pytorch, it looks like this:
class MyRNN(nn.Module):
def __init__(self, batch_size, n_inputs, n_neurons, n_outputs):
super(MyRNN, self).__init__()
self.n_neurons = n_neurons
self.batch_size = batch_size
self.n_inputs = n_inputs
self.n_outputs = n_outputs
self.basic_rnn = nn.RNN(self.n_inputs, self.n_neurons)
self.FC = nn.Linear(self.n_neurons, self.n_outputs)
def init_hidden(self, ):
# (num_layers, batch_size, n_neurons)
return torch.zeros(1, self.batch_size, self.n_neurons)
def forward(self, X):
self.batch_size = X.size(0)
self.hidden = self.init_hidden()
lstm_out, self.hidden = self.basic_rnn(X, self.hidden)
out = self.FC(self.hidden)
return out.view(-1, self.n_outputs)
My input x looks like this:
tensor([[-1.0173e-04, -1.5003e-04, -1.0218e-04, -7.4541e-05, -2.2869e-05,
-7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
-9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01],
[-1.1193e-04, -1.6928e-04, -1.0218e-04, -7.4541e-05, -2.2869e-05,
-7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
-9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01],
...
[-6.9490e-05, -8.9197e-05, -1.0218e-04, -7.4541e-05, -2.2869e-05,
-7.7171e-02, -4.4630e-03, -5.0750e-05, -1.7911e-04, -2.8082e-04,
-9.2992e-06, -1.5608e-05, -3.5471e-05, -4.9127e-05, -3.2883e-01]],
dtype=torch.float64)
and is a batch of 64 vectors with size 15.
When trying to test this model by doing:
BATCH_SIZE = 64
N_INPUTS = 15
N_NEURONS = 150
N_OUTPUTS = 1
model = MyRNN(BATCH_SIZE, N_INPUTS, N_NEURONS, N_OUTPUTS)
model(x)
I get the following error:
File "/home/tt/anaconda3/envs/venv/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 126, in check_forward_args
expected_input_dim, input.dim()))
RuntimeError: input must have 3 dimensions, got 2
How can I fix it?
| You are missing one of the required dimensions for the RNN layer.
Per the documentation, your input size needs to be of shape (sequence length, batch, input size).
So - with the example above, you are missing one of these. Based on your variable names, it appears you are trying to pass 64 examples of 15 inputs each... if that’s true, you are missing sequence length.
With an RNN, the sequence length is the number of times you want the layer to recur. For example, in NLP your sequence length might be equal to the number of words in a sentence, while batch size would be the number of sentences you are passing, and input size would be the vector size of each word.
You might not need an RNN here if you are just trying to use 64 samples of size 15.
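As a hedged illustration (not necessarily the asker's intended fix): with the default batch_first=False, nn.RNN expects input shaped (seq_len, batch, input_size), so a batch of 64 vectors of size 15 treated as length-1 sequences would be prepared like this, with the forward pass reading the batch size from dimension 1:
x = x.float().unsqueeze(0)   # (64, 15) -> (1, 64, 15): (seq_len, batch, input_size)
# inside forward(): self.batch_size = X.size(1) instead of X.size(0)
out = model(x)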
| https://stackoverflow.com/questions/54749244/ |
Stacking up of LSTM outputs in pytorch | I was going through some tutorial about the sentiment analysis using lstm network.
The below code said that its stacks up the lstm output. I Don't know how it works.
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
| It indeed stacks the output, the comment by kHarshit is misleading here!
To visualize this, let us review the output of the previous line in the tutorial (accessed May 1st, 2019):
lstm_out, hidden = self.lstm(embeds, hidden)
The output dimension of this will be [sequence_length, batch_size, hidden_size*2], as per the documentation. Here, the factor of two in the last dimension comes from the LSTM being bidirectional. Therefore, the first half of the last dimension will always be the forward output, followed by the backward output (I'm not entirely sure on the direction of that, but it seems to me that it is already in the right direction).
Then, the actual line that you are concerned about:
We're ignoring the specifics of .contiguous() here, but you can read up on it in this excellent answer on Stackoverflow. In summary, it basically makes sure that your torch.Tensor is in the right alignment in memory.
Lastly, .view() allows you to reshape a resulting tensor in a specific way. Here, we're aiming for a shape that has two dimensions (as defined by the number of input arguments to .view()). Specifically, the second dimension is supposed to have size hidden_dim. -1 for the first dimension simply means that we're redistributing the vector dimension in such a way that we don't care about the exact size, but simply satisfy the other dimension's requirements.
So, if you have a vector of, say, length 40, and want to reshape that one into a 2D-Tensor of (-1, 10), then the resulting tensor would have shape (4, 10).
As we've previously said, the first half of the vector (length hidden_dim) is the forward output and the latter half is the backward output, so the resulting split into a tensor of (-1, hidden_dim) gives a tensor of (2, hidden_dim), where the first row contains the forward output, "stacked" on top of the second row, which holds the reverse layer's output.
Visual example:
lstm_out, hidden = self.lstm(embeds, hidden)
print(lstm_out) # imagine a sample output like [1,0 , 2,0]
# forward out | backward out
stacked = lstm_out.contiguous().view(-1,hidden_dim) # hidden_dim = 2
print(stacked) # torch.Tensor([[1,0],
# [2,0]])
| https://stackoverflow.com/questions/54749665/ |
How does Module.parameters() find the parameters? | I noticed that whenever you create a new net extending torch.nn.Module, you can immediately call net.parameters() to find the parameters relevant for backpropagation.
import torch
class MyNet(torch.nn.Module):
def __init__(self):
super(MyNet, self).__init__()
self.fc = torch.nn.Linear(5, 5)
def forward(self, x):
return self.fc(x)
net = MyNet()
print(list(net.parameters()))
But then I wondered, how is this even possible? I just assigned this Linear layer object to a member variable but it is not recorded anywhere else (or is it?). Somehow MyNet must be able to keep track of the parameters used but how?
| It's simple really, just go through attributes via meta-programming and check their type
class Example():
def __init__(self):
self.special_thing = nn.Parameter(torch.rand(2))
self.something_else = "ok"
def get_parameters(self):
for key, value in self.__dict__.items():
if type(value) == nn.Parameter:
print(key, "is a parameter!")
e = Example()
e.get_parameters()
# => special_thing is a parameter!
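For the real nn.Module the mechanism is a bit more involved: assigning an nn.Parameter or a submodule (like the Linear layer above) triggers nn.Module.__setattr__, which registers it in internal dictionaries, and parameters() then recurses through all registered submodules. A quick way to see this recursion (output shown for the MyNet above):
for name, p in net.named_parameters():
    print(name, p.shape)
# fc.weight torch.Size([5, 5])
# fc.bias torch.Size([5])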
| https://stackoverflow.com/questions/54751318/ |
Autograd.grad() for Tensor in pytorch | I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back and output Y tensor(batch size x n).
I’m creating a new loss and I would like to know the gradient of Y w.r.t. X. Something that in tensorflow would be like:
tf.gradients(ys=Y, xs=X)
Unfortunately, I’ve been making tests with torch.autograd.grad(), but I could not figure out how to do it. I get errors like: “RuntimeError: grad can be implicitly created only for scalar outputs”.
What should be the inputs in torch.autograd.grad() if I want to know the gradient of Y w.r.t. X?
|
Let's start from simple working example with plain loss function and regular backward. We will build short computational graph and do some grad computations on it.
Code:
import torch
from torch.autograd import grad
import torch.nn as nn
# Create some dummy data.
x = torch.ones(2, 2, requires_grad=True)
gt = torch.ones_like(x) * 16 - 0.5 # "ground-truths"
# We will use MSELoss as an example.
loss_fn = nn.MSELoss()
# Do some computations.
v = x + 2
y = v ** 2
# Compute loss.
loss = loss_fn(y, gt)
print(f'Loss: {loss}')
# Now compute gradients:
d_loss_dx = grad(outputs=loss, inputs=x)
print(f'dloss/dx:\n {d_loss_dx}')
Output:
Loss: 42.25
dloss/dx:
(tensor([[-19.5000, -19.5000], [-19.5000, -19.5000]]),)
Ok, this works! Now let's try to reproduce the error "grad can be implicitly created only for scalar outputs". As you can notice, the loss in the previous example is a scalar. backward() and grad() by default deal with a single scalar value: loss.backward(torch.tensor(1.)). If you try to pass a tensor with more values, you will get an error.
Code:
v = x + 2
y = v ** 2
try:
dy_hat_dx = grad(outputs=y, inputs=x)
except RuntimeError as err:
print(err)
Output:
grad can be implicitly created only for scalar outputs
Therefore, when using grad() you need to specify grad_outputs parameter as follows:
Code:
v = x + 2
y = v ** 2
dy_dx = grad(outputs=y, inputs=x, grad_outputs=torch.ones_like(y))
print(f'dy/dx:\n {dy_dx}')
dv_dx = grad(outputs=v, inputs=x, grad_outputs=torch.ones_like(v))
print(f'dv/dx:\n {dv_dx}')
Output:
dy/dx:
(tensor([[6., 6.],[6., 6.]]),)
dv/dx:
(tensor([[1., 1.], [1., 1.]]),)
NOTE: If you are using backward() instead, simply do y.backward(torch.ones_like(y)).
| https://stackoverflow.com/questions/54754153/ |
Using autograd.grad() as a parameter for a loss function (pytorch) | I want to compute the gradient between two tensors in a net. The input X tensor is sent through a set of convolutional layers which give me back and output Y tensor.
I’m creating a new loss and I would like to know the MSE between gradient of norm(Y) w.r.t. each element of X. Here the code:
# Staring tensors
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)
# Define loss
loss_fn = nn.MSELoss()
#Make some calculations
V = Y*X+2
# Compute the norm
V_norm = V.norm()
# Computing gradient to calculate the loss
for i in range(len(V)):
if i == 0:
grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X[i])
else:
grad_tensor_ = torch.autograd.grad(outputs=V_norm, inputs=X[i])
grad_tensor = torch.cat((grad_tensor, grad_tensor_), dim=0)
# Grund truth
gt = grad_tensor * 0 + 1
#Loss
loss_g = loss_fn(grad_tensor, gt)
print(loss_g)
Unfortunately, I’ve been making tests with torch.autograd.grad(), but I could not figure out how to do it. I get the following error: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
Setting allow_unused=True gives me back None which is not an option. Not sure how to compute the loss between the gradients and the norm. Any idea about how to code this loss?
| You are receiving the mentioned error because you are trying to feed a slice of the tensor X, i.e. X[i], to grad(); slicing appears to return a new tensor, so X[i] is considered a separate tensor outside of your main computational graph.
But you don't need a for loop to compute gradients:
Code:
import torch
import torch.nn as nn
torch.manual_seed(42)
# Create some data.
X = torch.rand(40, requires_grad=True)
Y = torch.rand(40, requires_grad=True)
# Define loss.
loss_fn = nn.MSELoss()
# Do some computations.
V = Y * X + 2
# Compute the norm.
V_norm = V.norm()
print(f'V norm: {V_norm}')
# Computing gradient to calculate the loss
grad_tensor = torch.autograd.grad(outputs=V_norm, inputs=X)[0] # [0] - Because grad returs tuple, so we need to unpack it
print(f'grad_tensor:\n {grad_tensor}')
# Grund truth
gt = grad_tensor * 0 + 1
loss_g = loss_fn(grad_tensor, gt)
print(f'loss_g: {loss_g}')
Output:
V norm: 14.54827
grad_tensor:
tensor([0.1116, 0.0584, 0.1109, 0.1892, 0.1252, 0.0420, 0.1194, 0.1000, 0.1404,
0.0272, 0.0007, 0.0460, 0.0168, 0.1575, 0.1097, 0.1120, 0.1168, 0.0771,
0.1371, 0.0208, 0.0783, 0.0226, 0.0987, 0.0512, 0.0929, 0.0573, 0.1464,
0.0286, 0.0293, 0.0278, 0.1896, 0.0939, 0.1935, 0.0123, 0.0006, 0.0156,
0.0236, 0.1272, 0.1109, 0.1456])
loss_g: 0.841885
Loss between the grads and the norm
You also mentioned that you want to compute loss between the gradients and the norm, it is possible. And there are two possible options of it:
You want to include your loss calculation to your computational graph, in this case use:
loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
You want just to compute loss and you don't want to start backward path from the loss, in this case don't forget to use torch.no_grad(), otherwise autograd will track this changes and add loss computation to your computational graph.
with torch.no_grad():
loss_norm_vs_grads = loss_fn(torch.ones_like(grad_tensor) * V_norm, grad_tensor)
| https://stackoverflow.com/questions/54763081/ |
Why aren't torch.nn.Parameter listed when net is printed? | I recently had to construct a module that required a tensor to be included. While back propagation worked perfectly using torch.nn.Parameter, it did not show up when printing the net object. Why isn't this parameter included in contrast to other modules like layer? (Shouldn't it behave just like layer?)
import torch
import torch.nn as nn
class MyNet(torch.nn.Module):
def __init__(self):
super(MyNet, self).__init__()
self.layer = nn.Linear(10, 10)
self.parameter = torch.nn.Parameter(torch.zeros(10,10, requires_grad=True))
net = MyNet()
print(net)
Output:
MyNet(
(layer): Linear(in_features=10, out_features=10, bias=True)
)
| When you call print(net), the __repr__ method is called. __repr__ gives the “official” string representation of an object.
In PyTorch's nn.Module (base class of your MyNet model), the __repr__ is implemented like this:
def __repr__(self):
# We treat the extra repr like the sub-module, one item per line
extra_lines = []
extra_repr = self.extra_repr()
# empty string will be split into list ['']
if extra_repr:
extra_lines = extra_repr.split('\n')
child_lines = []
for key, module in self._modules.items():
mod_str = repr(module)
mod_str = _addindent(mod_str, 2)
child_lines.append('(' + key + '): ' + mod_str)
lines = extra_lines + child_lines
main_str = self._get_name() + '('
if lines:
# simple one-liner info, which most builtin Modules will use
if len(extra_lines) == 1 and not child_lines:
main_str += extra_lines[0]
else:
main_str += '\n ' + '\n '.join(lines) + '\n'
main_str += ')'
return main_str
Note that the above method returns main_str which contains call to only _modules and extra_repr, thus it prints only modules by default.
PyTorch also provides extra_repr() method which you can implement yourself for extra representation of the module.
To print customized extra information, you should reimplement this method in your own modules. Both single-line and multi-line strings are acceptable.
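A hedged sketch of how MyNet from the question could surface the raw parameter in its printout by overriding extra_repr:
class MyNet(torch.nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.layer = nn.Linear(10, 10)
        self.parameter = torch.nn.Parameter(torch.zeros(10, 10))

    def extra_repr(self):
        # whatever is returned here is included in the printed representation
        return 'parameter: Parameter of shape {}'.format(tuple(self.parameter.shape))

print(MyNet())
# MyNet(
#   parameter: Parameter of shape (10, 10)
#   (layer): Linear(in_features=10, out_features=10, bias=True)
# )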
| https://stackoverflow.com/questions/54770249/ |
How to upscale image in pytorch? | how to upscale an image in Pytorch without defining height and width using transforms?
('--upscale_factor', type=int, required=True, help="super resolution upscale factor")
| This might do the job (note that the transform class is Resize, with a capital R):
transforms.Compose([transforms.Resize(ImageSize * Scaling_Factor)])
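If the input is already a tensor rather than a PIL image, an alternative sketch uses the interpolation API; img_batch here is an assumed 4D (N, C, H, W) tensor and upscale_factor is the command-line argument above:
import torch.nn.functional as F

upscaled = F.interpolate(img_batch, scale_factor=upscale_factor, mode='bilinear', align_corners=False)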
| https://stackoverflow.com/questions/54771426/ |
Simple way to load specific sample using Pytorch dataloader | I am currently training a 3D CNN for binary classification with relatively sparse labels (~ 1% of voxels in label data correspond to target class).
In order to perform basic sanity checks during the training (e.g. does the network learn at all?) it would be handy to present the network with a small, handpicked subset of training examples having an above-average fraction of target class labels.
As suggested by the Pytorch documentation, I implemented my own dataset class (inheriting from torch.utils.data.Dataset) which provides training examples via its __getitem__ method to the torch.utils.data.DataLoader.
In the pytorch tutorials I found, the DataLoader is used as an iterator to generate the training loop like so:
for i, data in enumerate(self.dataloader):
# Get training data
inputs, labels = data
# Train the network
# [...]
What I am wondering now is whether there exists a simple way to load a single or a couple of specific training examples (using the linear index understood by the Dataset's __getitem__ method). However, DataLoader does not have a __getitem__ method and repeatedly calling __next__ until I reach the desired index does not seem elegant.
Apparently one possible way to solve this would be to define a custom sampler or batch_sampler inheriting from the abstract torch.utils.data.Sampler. But this seems over the top to retrieve a few specific samples.
I suppose I am overlooking something very simple and obvious here. Any advice appreciated!
| Just in case anyone with a similar question comes across this at some point:
The quick-and-dirty workaround I ended up using was to bypass the dataloader in the training loop by directly accessing its associated dataset attribute. Suppose we want to quickly check if our network learns at all by repeatedly presenting it with a single, handpicked training example with linear index sample_idx (as defined by the dataset class).
Then one can do something like this:
for i, _ in enumerate(self.dataloader):
# Get training data
# inputs, labels = data
inputs, labels = self.dataloader.dataset[sample_idx]
inputs = inputs.unsqueeze(0)
labels = labels.unsqueeze(0)
# Train the network
# [...]
EDIT:
One brief remark, since some people seem to be finding this workaround helpful: When using this hack I found it to be crucial to instantiate the DataLoader with num_workers = 0. Otherwise, memory segmentation errors might occur in which case you could end up with very weird looking training data.
| https://stackoverflow.com/questions/54773106/ |
How to change default optimization in spotlight from pytorch e.g. torch.optim.SGD? | I'm currently using spotlight https://github.com/maciejkula/spotlight/tree/master/spotlight
to implement matrix factorization in a recommender system. Spotlight is based on PyTorch; it's an integrated platform for implementing recommender systems. In spotlight/factorization/explicit, it uses torch.optim.Adam as the optimizer, and I want to change it to torch.optim.SGD.
I tried
emodel = ExplicitFactorizationModel(n_iter=15,
embedding_dim=32,
use_cuda=False,
loss='regression',
l2=0.00005,
optimizer_func=optim.SGD(lr=0.001, momentum=0.9))
but it gives: TypeError: __init__() missing 1 required positional argument: 'params'
Any suggestions?
| You could use partial from functools to first set the learning rate and momentum and then pass this class to ExplicitFactorizationModel. Something like:
from functools import partial
SDG_fix_lr_momentum = partial(torch.optim.SGD, lr=0.001, momentum=0.9)
emodel = ExplicitFactorizationModel(n_iter=15,
embedding_dim=32,
use_cuda=False,
loss='regression',
l2=0.00005,
optimizer_func=SDG_fix_lr_momentum)
| https://stackoverflow.com/questions/54775494/ |
Does pytorch do eager pruning of its computational graph? | This is a very simple example:
import torch
x = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
y = torch.tensor([2., 2., 2., 2., 2.], requires_grad=True)
z = torch.tensor([1., 1., 0., 0., 0.], requires_grad=True)
s = torch.sum(x * y * z)
s.backward()
print(x.grad)
This will print,
tensor([2., 2., 0., 0., 0.]),
since, of course, ds/dx is zero for the entries where z is zero.
My question is: Is pytorch smart and stop the computations when it reaches a zero? Or does in fact do the calculation "2*5", only to later do "10 * 0 = 0"?
In this simple example it doesn't make a big difference, but in the (bigger) problem I am looking at, this will make a difference.
Thank you for any input.
| No, pytorch does no such thing as pruning any subsequent calculations when zero is reached. Even worse, due to how float arithmetic works all subsequent multiplication by zero will take roughly the same time as any regular multiplication.
For some cases there are ways around it though, for example if you want to use a masked loss you can just set the masked outputs to be zero, or detach them from gradients.
This example makes the difference clear:
def time_backward(do_detach):
x = torch.tensor(torch.rand(100000000), requires_grad=True)
y = torch.tensor(torch.rand(100000000), requires_grad=True)
s2 = torch.sum(x * y)
s1 = torch.sum(x * y)
if do_detach:
s2 = s2.detach()
s = s1 + 0 * s2
t = time.time()
s.backward()
print(time.time() - t)
time_backward(do_detach= False)
time_backward(do_detach= True)
outputs:
0.502875089645
0.198422908783
| https://stackoverflow.com/questions/54781966/ |
What kind of tricks we can play with to further refine the trained neural network model so that it has lower objective function value? | I ask this question because many deep learning frameworks, such as Caffe, supports model refining function. For example, in Caffe, we can use snapshot to initialling the neural network parameters and then continue performing training as the following command shows:
./caffe train -solver solver_file.prototxt -snapshot snap_file.solverstate
In order to further train the model, the following tricks I can play with:
use smaller learning rate
change optimisation method. For example, change stochastic gradient descent to ADAM algorithm
Any other tricks I can play with?
ps: I understand that reducing the loss function value of the training samples does not mean that we can get a better model.
| The question is way too broad, I think. However, this is a common practice, especially in case of a small training set. I would rank possible methods like this:
smaller learning rate
more/different data augmentation
add noise to train set (related to data augmentation, indeed)
fine-tune on subset of the training set.
The very last one is indeed a very powerful method to finalize the model that performs poor on some corner cases. You can then make a 'difficult' train subset in order to bias model towards it. I personally use it very often.
| https://stackoverflow.com/questions/54789401/ |
Selecting second dim of tensor using an index tensor | I have a 2D tensor and an index tensor. The 2D tensor has a batch dimension, and a dimension with 3 values. I have an index tensor that selects exactly 1 element of the 3 values. What is the "best" way to product a slice containing just the elements in the index tensor?
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9]])
t = tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
i = torch.tensor([0,0,1], dtype=torch.int64)
tensor([0, 0, 1])
Expected output...
tensor([1, 4, 8])
| An example of the answer is as follows.
import torch
t = torch.tensor([[1,2,3], [4,5,6], [7,8,9]])
col_i = [0, 0, 1]
row_i = range(3)
print(t[row_i, col_i])
# tensor([1, 4, 8])
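An equivalent that avoids building the row index explicitly is gather (hedged sketch, same t and i as above):
print(t.gather(1, i.unsqueeze(1)).squeeze(1))
# tensor([1, 4, 8])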
| https://stackoverflow.com/questions/54799650/ |
Pytorch resumes training after every training session | I have a dataset which is partitioned into smaller datasets.
I want to train 3 models, one for each partition of the dataset, but I need all training sessions to start from the same initialised network parameters.
so it looks like this:
modelList = []
thisCNN = NNet()
for x in range(3):
train = torch.utils.data.DataLoader(Subset(train_set, indexes[x]), batch_size=32)
bb = trainMyNet(thisCNN, train, test)
modelList.append(list(bb.parameters()))
print modelList[0][1]
print modelList[1][1]
print modelList[2][1]
When printing, I get the exact same parameters for every saved model, which is strange; I also noticed that after every iteration the model in fact resumes training from the previous iteration, as the loss keeps decreasing from one iteration to the next.
What I am trying to achieve, is, per iteration to get a new model on the current subset x but the training should start with the same initial thisCNN = NNet() weights.
| When you call bb = trainMyNet(thisCNN, train, test) you are not taking a copy of thisCNN, but it is the same model you are updating in each iteration. To get your code working you should probably pass a copy of this model:
from copy import deepcopy
modelList = []
thisCNN = NNet()
for x in range(3):
train = torch.utils.data.DataLoader(Subset(train_set, indexes[x]), batch_size=32)
bb = trainMyNet(deepcopy(thisCNN), train, test)
modelList.append(list(bb.parameters()))
print modelList[0][1]
print modelList[1][1]
print modelList[2][1]
This should initialize all models as thisCNN and make sure that they are different after training.
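Another common pattern (hedged sketch) is to snapshot the freshly initialised weights once and reload them before every run:
init_state = deepcopy(thisCNN.state_dict())

for x in range(3):
    thisCNN.load_state_dict(init_state)          # reset to the same starting point
    train = torch.utils.data.DataLoader(Subset(train_set, indexes[x]), batch_size=32)
    bb = trainMyNet(thisCNN, train, test)
    modelList.append(deepcopy(bb.state_dict()))  # snapshot the trained weights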
| https://stackoverflow.com/questions/54808117/ |
PyTorch version of as simple Keras LSTM model | Trying to translate a simple LSTM model in Keras to PyTorch code. The Keras model converges after just 200 epochs, while the PyTorch model:
needs many more epochs to reach the same loss level (200 vs. ~8000)
seems to overfit the inputs because the predicted value is not near 100
This is the Keras code:
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
X = array([10,20,30,20,30,40,30,40,50,40,50,60,50,60,70,60,70,80]).reshape((6,3,1))
y = array([40,50,60,70,80,90])
model = Sequential()
model.add(LSTM(50, activation='relu', recurrent_activation='sigmoid', input_shape=(3, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=200, verbose=1)
x_input = array([70, 80, 90]).reshape((1, 3, 1))
yhat = model.predict(x_input, verbose=0)
print(yhat)
And this is the equivalent PyTorch code:
from numpy import array
import torch
import torch.nn as nn
import torch.nn.functional as F
X = torch.tensor([10,20,30,20,30,40,30,40,50,40,50,60,50,60,70,60,70,80]).float().reshape(6,3,1)
y = torch.tensor([40,50,60,70,80,90]).float().reshape(6,1)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.lstm = nn.LSTM(input_size=1, hidden_size=50, num_layers=1, batch_first=True)
self.fc = nn.Linear(50, 1)
def forward(self, x):
batches = x.size(0)
h0 = torch.zeros([1, batches, 50])
c0 = torch.zeros([1, batches, 50])
(x, _) = self.lstm(x, (h0, c0))
x = x[:,-1,:] # Keep only the output of the last iteration. Before shape (6,3,50), after shape (6,50)
x = F.relu(x)
x = self.fc(x)
return x
model = Model()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())
n_epochs = 8000
for epoch in range(n_epochs):
model.train()
optimizer.zero_grad()
y_ = model(X)
loss = criterion(y_, y)
loss.backward()
optimizer.step()
print(f"Epoch {epoch+1}/{n_epochs}, loss = {loss.item()}")
model.eval()
x_input = torch.tensor([70, 80, 90]).float().reshape((1, 3, 1))
yhat = model(x_input)
print(yhat)
The only possible difference is the initial weight and bias values, but I don't think that slightly different weights and biases can account for such a big difference in behavior.
What am I missing in the PyTorch code?
| The behaviour difference is because of the activation function in the LSTM API. By changing the activation to tanh, I can reproduce the problem in Keras too.
model.add(LSTM(50, activation='tanh', recurrent_activation='sigmoid', input_shape=(3, 1)))
There is no option to change the activation function to 'relu' in the pytorch LSTM API.
https://pytorch.org/docs/stable/nn.html#lstm
Taking the LSTM implementation from here, https://github.com/huggingface/torchMoji/blob/master/torchmoji/lstm.py
and changing hardsigmoid/tanh to sigmoid/relu, the model converges in pytorch as well.
| https://stackoverflow.com/questions/54815899/ |
pytorch 0.4.1 - ‘LSTM’ object has no attribute ‘weight_ih_l’ | Simple question. I’d like to see the initialized parameter of LSTM. How do I see it?
Do I need to always put lstm in the model to see the params?
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
torch.__version__
lstm = nn.LSTM(3, 3)
lstm.weight_ih_l
--------------------------------------------------------------------------- AttributeError Traceback (most recent call
last) in ()
9 lstm = nn.LSTM(3, 3)
10
---> 11 lstm.weight_ih_l
~/anaconda3/envs/pytorch0.41/lib/python3.6/site-packages/torch/nn/modules/module.py
in getattr(self, name)
516 return modules[name]
517 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 518 type(self).name, name))
519
520 def setattr(self, name, value):
AttributeError: 'LSTM' object has no attribute 'weight_ih_l'
| nn.LSTM is implemented with nn.RNNBase which puts all the parameters inside the OrderedDict: _parameters. So to 'see' the initialized parameters, you can simply do:
import torch
torch.manual_seed(1)
import torch.nn as nn
lstm = nn.LSTM(3, 3)
print(lstm._parameters['weight_ih_l0'])
Also, to know what are the keys value in that OrderedDict, you can simply do:
print(lstm._all_weights).
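Since these weights are registered parameters, the more usual accessors work too (shapes shown for nn.LSTM(3, 3)):
print(lstm.weight_ih_l0)   # direct attribute access, note the trailing layer index 0

for name, param in lstm.named_parameters():
    print(name, param.shape)
# weight_ih_l0 torch.Size([12, 3])
# weight_hh_l0 torch.Size([12, 3])
# bias_ih_l0 torch.Size([12])
# bias_hh_l0 torch.Size([12])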
| https://stackoverflow.com/questions/54817864/ |
tensorflow equivalent of pytorch ReplicationPad2d | I'm trying to figure out how to do the tensorflow equivalent of the following padding in pytorch:
nn.ReplicationPad2d((1, 0, 1, 0))
I've tried the following, but this only seems to work if the input tensor is actually 2x2:
tf.pad(my_tensor, [[1, 0], [1, 0]], "SYMMETRIC")
| The equivalent for Tensorflow is tf.pad(my_tensor,[[0,0],[0,0],[1,0],[1,0]],"SYMMETRIC"). (This assumes that you are interested in operating on 4D tensors, with the first two dimensions being batch and channel).
In Tensorflow, you need to explicitly give the padding for all of the four dimensions. If you don't want the batch and channel dimensions to be padded (in convolutional networks you typically do not need them padded), you need to explicitly ask for zero padding in both of these dimensions, on both sides of the tensor. This is why I added the [0,0],[0,0] before your [1,0],[1,0].
In Pytorch, an instance of nn.ReplicationPad2d is already assumed to be padding a 4D tensor, without padding the the first two dimensions. That's why you initialize the instance by specifying the padding only in the two additional dimensions.
| https://stackoverflow.com/questions/54818515/ |
RNN model (GRU) of word2vec to regression not learning | I am converting Keras code into PyTorch because I am more familiar with the latter than the former. However, I found that it is not learning (or only barely).
Below I have provided almost all of my PyTorch code, including the initialisation code so that you can try it out yourself. The only thing you would need to provide yourself, is the word embeddings (I'm sure you can find many word2vec models online). The first input file should be a file with tokenised text, the second input file should be a file with floating-point numbers, one per line. Because I have provided all the code, this question may seem huge and too broad. However, my question is specific enough I think: what is wrong in my model or training loop that causes my model to not or barely improve. (See below for results.)
I have tried to provide many comments where applicable, and I have provided the shape transformations as well so you do not have to run the code to see what is going on. The data prep methods are not important to inspect.
The most important parts are the forward method of the RegressorNet, and the training loop of RegressionRNN (admittedly, these names were badly chosen). I think the mistake is there somewhere.
from pathlib import Path
import time
import numpy as np
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
import gensim
from scipy.stats import pearsonr
from LazyTextDataset import LazyTextDataset
class RegressorNet(nn.Module):
def __init__(self, hidden_dim, embeddings=None, drop_prob=0.0):
super(RegressorNet, self).__init__()
self.hidden_dim = hidden_dim
self.drop_prob = drop_prob
# Load pretrained w2v model, but freeze it: don't retrain it.
self.word_embeddings = nn.Embedding.from_pretrained(embeddings)
self.word_embeddings.weight.requires_grad = False
self.w2v_rnode = nn.GRU(embeddings.size(1), hidden_dim, bidirectional=True, dropout=drop_prob)
self.dropout = nn.Dropout(drop_prob)
self.linear = nn.Linear(hidden_dim * 2, 1)
# LeakyReLU rather than ReLU so that we don't get stuck in a dead nodes
self.lrelu = nn.LeakyReLU()
def forward(self, batch_size, sentence_input):
# shape sizes for:
# * batch_size 128
# * embeddings of dim 146
# * hidden dim of 200
# * sentence length of 20
# sentence_input: torch.Size([128, 20])
# Get word2vec vector representation
embeds = self.word_embeddings(sentence_input)
# embeds: torch.Size([128, 20, 146])
# embeds.view(-1, batch_size, embeds.size(2)): torch.Size([20, 128, 146])
# Input vectors into GRU, only keep track of output
w2v_out, _ = self.w2v_rnode(embeds.view(-1, batch_size, embeds.size(2)))
# w2v_out = torch.Size([20, 128, 400])
# Leaky ReLU it
w2v_out = self.lrelu(w2v_out)
# Dropout some nodes
if self.drop_prob > 0:
w2v_out = self.dropout(w2v_out)
# w2v_out: torch.Size([20, 128, 400])
# w2v_out[-1, :, :]: torch.Size([128, 400])
# Only use the last output of a sequence! Supposedly that cell outputs the final information
regression = self.linear(w2v_out[-1, :, :])
# regression: torch.Size([128, 1])
return regression
class RegressionRNN:
def __init__(self, train_files=None, test_files=None, dev_files=None):
print('Using torch ' + torch.__version__)
self.datasets, self.dataloaders = RegressionRNN._set_data_loaders(train_files, test_files, dev_files)
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.model = self.w2v_vocab = self.criterion = self.optimizer = self.scheduler = None
@staticmethod
def _set_data_loaders(train_files, test_files, dev_files):
# labels must be the last input file
datasets = {
'train': LazyTextDataset(train_files) if train_files is not None else None,
'test': LazyTextDataset(test_files) if test_files is not None else None,
'valid': LazyTextDataset(dev_files) if dev_files is not None else None
}
dataloaders = {
'train': DataLoader(datasets['train'], batch_size=128, shuffle=True, num_workers=4) if train_files is not None else None,
'test': DataLoader(datasets['test'], batch_size=128, num_workers=4) if test_files is not None else None,
'valid': DataLoader(datasets['valid'], batch_size=128, num_workers=4) if dev_files is not None else None
}
return datasets, dataloaders
@staticmethod
def prepare_lines(data, split_on=None, cast_to=None, min_size=None, pad_str=None, max_size=None, to_numpy=False,
list_internal=False):
""" Converts the string input (line) to an applicable format. """
out = []
for line in data:
line = line.strip()
if split_on:
line = line.split(split_on)
line = list(filter(None, line))
else:
line = [line]
if cast_to is not None:
line = [cast_to(l) for l in line]
if min_size is not None and len(line) < min_size:
# pad line up to a number of tokens
line += (min_size - len(line)) * ['@pad@']
elif max_size and len(line) > max_size:
line = line[:max_size]
if list_internal:
line = [[item] for item in line]
if to_numpy:
line = np.array(line)
out.append(line)
if to_numpy:
out = np.array(out)
return out
def prepare_w2v(self, data):
idxs = []
for seq in data:
tok_idxs = []
for word in seq:
# For every word, get its index in the w2v model.
# If it doesn't exist, use @unk@ (available in the model).
try:
tok_idxs.append(self.w2v_vocab[word].index)
except KeyError:
tok_idxs.append(self.w2v_vocab['@unk@'].index)
idxs.append(tok_idxs)
idxs = torch.tensor(idxs, dtype=torch.long)
return idxs
def train(self, epochs=10):
valid_loss_min = np.Inf
train_losses, valid_losses = [], []
for epoch in range(1, epochs + 1):
epoch_start = time.time()
train_loss, train_results = self._train_valid('train')
valid_loss, valid_results = self._train_valid('valid')
# Calculate Pearson correlation between prediction and target
try:
train_pearson = pearsonr(train_results['predictions'], train_results['targets'])
except FloatingPointError:
train_pearson = "Could not calculate Pearsonr"
try:
valid_pearson = pearsonr(valid_results['predictions'], valid_results['targets'])
except FloatingPointError:
valid_pearson = "Could not calculate Pearsonr"
# calculate average losses
train_loss = np.mean(train_loss)
valid_loss = np.mean(valid_loss)
train_losses.append(train_loss)
valid_losses.append(valid_loss)
# print training/validation statistics
print(f'----------\n'
f'Epoch {epoch} - completed in {(time.time() - epoch_start):.0f} seconds\n'
f'Training Loss: {train_loss:.6f}\t Pearson: {train_pearson}\n'
f'Validation loss: {valid_loss:.6f}\t Pearson: {valid_pearson}')
# validation loss has decreased
if valid_loss <= valid_loss_min and train_loss > valid_loss:
print(f'!! Validation loss decreased ({valid_loss_min:.6f} --> {valid_loss:.6f}). Saving model ...')
valid_loss_min = valid_loss
if train_loss <= valid_loss:
print('!! Training loss is lte validation loss. Might be overfitting!')
# Optimise with scheduler
if self.scheduler is not None:
self.scheduler.step(valid_loss)
print('Done training...')
def _train_valid(self, do):
""" Do training or validating. """
if do not in ('train', 'valid'):
raise ValueError("Use 'train' or 'valid' for 'do'.")
results = {'predictions': np.array([]), 'targets': np.array([])}
losses = np.array([])
self.model = self.model.to(self.device)
if do == 'train':
self.model.train()
torch.set_grad_enabled(True)
else:
self.model.eval()
torch.set_grad_enabled(False)
for batch_idx, data in enumerate(self.dataloaders[do], 1):
# 1. Data prep
sentence = data[0]
target = data[-1]
curr_batch_size = target.size(0)
# Returns list of tokens, possibly padded @pad@
sentence = self.prepare_lines(sentence, split_on=' ', min_size=20, max_size=20)
# Converts tokens into w2v IDs as a Tensor
sent_w2v_idxs = self.prepare_w2v(sentence)
# Converts output to Tensor of floats
target = torch.Tensor(self.prepare_lines(target, cast_to=float))
# Move input to device
sent_w2v_idxs, target = sent_w2v_idxs.to(self.device), target.to(self.device)
# 2. Predictions
pred = self.model(curr_batch_size, sentence_input=sent_w2v_idxs)
loss = self.criterion(pred, target)
# 3. Optimise during training
if do == 'train':
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# 4. Save results
pred = pred.detach().cpu().numpy()
target = target.cpu().numpy()
results['predictions'] = np.append(results['predictions'], pred, axis=None)
results['targets'] = np.append(results['targets'], target, axis=None)
losses = np.append(losses, float(loss))
torch.set_grad_enabled(True)
return losses, results
if __name__ == '__main__':
HIDDEN_DIM = 200
# Load embeddings from pretrained gensim model
embed_p = Path('path-to.w2v_model').resolve()
w2v_model = gensim.models.KeyedVectors.load_word2vec_format(str(embed_p))
# add a padding token with only zeros
w2v_model.add(['@pad@'], [np.zeros(w2v_model.vectors.shape[1])])
embed_weights = torch.FloatTensor(w2v_model.vectors)
# Text files are used as input. Every line is one datapoint.
# *.tok.low.*: tokenized (space-separated) sentences
# *.cross: one floating point number per line, which we are trying to predict
regr = RegressionRNN(train_files=(r'train.tok.low.en',
r'train.cross'),
dev_files=(r'dev.tok.low.en',
r'dev.cross'),
test_files=(r'test.tok.low.en',
r'test.cross'))
regr.w2v_vocab = w2v_model.vocab
regr.model = RegressorNet(HIDDEN_DIM, embed_weights, drop_prob=0.2)
regr.criterion = nn.MSELoss()
regr.optimizer = optim.Adam(list(regr.model.parameters())[0:], lr=0.001)
regr.scheduler = optim.lr_scheduler.ReduceLROnPlateau(regr.optimizer, 'min', factor=0.1, patience=5, verbose=True)
regr.train(epochs=100)
For the LazyTextDataset, you can refer to the class below.
from torch.utils.data import Dataset
import linecache
class LazyTextDataset(Dataset):
def __init__(self, paths):
# labels are in the last path
self.paths, self.labels_path = paths[:-1], paths[-1]
with open(self.labels_path, encoding='utf-8') as fhin:
lines = 0
for line in fhin:
if line.strip() != '':
lines += 1
self.num_entries = lines
def __getitem__(self, idx):
data = [linecache.getline(p, idx + 1) for p in self.paths]
label = linecache.getline(self.labels_path, idx + 1)
return (*data, label)
def __len__(self):
return self.num_entries
As I wrote before, I am trying to convert a Keras model to PyTorch. The original Keras code does not use an embedding layer, and uses pre-built word2vec vectors per sentence as input. In the model below, there is no embedding layer. The Keras summary looks like this (I don't have access to the base model setup).
Layer (type) Output Shape Param # Connected to
====================================================================================================
bidirectional_1 (Bidirectional) (200, 400) 417600
____________________________________________________________________________________________________
dropout_1 (Dropout) (200, 800) 0 merge_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (200, 1) 801 dropout_1[0][0]
====================================================================================================
The issue is that with identical input, the Keras model works and gets a +0.5 Pearson correlation between predicted and actual labels. The PyTorch model above, though, does not seem to work at all. To give you an idea, here is the loss (mean squared error) and Pearson (correlation coefficient, p-value) after the first epoch:
Epoch 1 - completed in 11 seconds
Training Loss: 1.684495 Pearson: (-0.0006077809280690612, 0.8173368901481127)
Validation loss: 1.708228 Pearson: (0.017794288315261794, 0.4264098054188664)
And after the 100th epoch:
Epoch 100 - completed in 11 seconds
Training Loss: 1.660194 Pearson: (0.0020315421756790806, 0.4400929436716754)
Validation loss: 1.704910 Pearson: (-0.017288118524826892, 0.4396865964324158)
The loss is plotted below (when you look at the Y-axis, you can see the improvements are minimal).
A final indicator that something may be wrong is that for my 140K lines of input, each epoch only takes 10 seconds on my GTX 1080TI. I feel that this is not much and I would guess that the optimisation is not working/running. I cannot figure out why, though. The issue will probably be in my train loop or the model itself, but I cannot find it.
Again, something must be going wrong because:
- the Keras model does perform well;
- the training speed is 'too fast' for 140K sentences
- almost no improvements after training.
What am I missing? The issue is more than likely present in the training loop or in the network structure.
| TL;DR: Use permute instead of view when swapping axes, see the end of answer to get an intuition about the difference.
About RegressorNet (neural network model)
No need to freeze embedding layer if you are using from_pretrained. As documentation states, it does not use gradient updates.
This part:
self.w2v_rnode = nn.GRU(embeddings.size(1), hidden_dim, bidirectional=True, dropout=drop_prob)
and especially dropout without providable num_layers is totally pointless (as no dropout can be specified with shallow one layer network).
BUG AND MAIN ISSUE: in your forward function you are using view instead of permute, here:
w2v_out, _ = self.w2v_rnode(embeds.view(-1, batch_size, embeds.size(2)))
See this answer and appropriate documentation for each of those functions and try to use this line instead:
w2v_out, _ = self.w2v_rnode(embeds.permute(1, 0, 2))
You may consider using batch_first=True argument during w2v_rnode creation, you won't have to permute indices that way.
Check documentation of torch.nn.GRU, you are after last step of the sequence, not after all of the sequences you have there, so you should be after:
_, last_hidden = self.w2v_rnode(embeds.permute(1, 0, 2))
but I think this part is fine otherwise.
Data preparation
No offence, but prepare_lines is very unreadable and seems pretty hard to maintain as well, not to mention hard to spot an eventual bug in (I suppose it lies in here).
First of all, it seems like you are padding manually. Please don't do it that way, use torch.nn.utils.rnn.pad_sequence to work with batches!
In essence, first you encode each word in every sentence as an index pointing into the embedding (as you seem to do in prepare_w2v); after that you use torch.nn.utils.rnn.pad_sequence and torch.nn.utils.rnn.pack_padded_sequence, or torch.nn.utils.rnn.pack_sequence if the lines are already sorted by length.
Proper batching
This part is very important and it seems you are not doing that at all (and likely this is the second error in your implementation).
PyTorch's RNN cells take inputs not as padded tensors, but as torch.nn.PackedSequence objects. This is an efficient object storing indices which specify unpadded length of each sequence.
See more informations on the topic here, here and in many other blog posts throughout the web.
First sequence in batch has to be the longest, and all others have to be provided in the descending length. What follows is:
You have to sort your batch each time by sequences length and sort your targets in an analogous way OR
Sort your batch, push it through the network and unsort it afterwards to match with your targets.
Either is fine, it's your call what seems to be more intuitive for you.
What I like to do is more or less the following, hope it helps:
Create unique indices for each word and map each sentence appropriately (you've already done it).
Create a regular torch.utils.data.Dataset object returning a single sentence from each __getitem__ call, where it is returned as a tuple consisting of features (torch.Tensor) and a label (single value); it seems like you're doing that as well.
Create custom collate_fn for use with torch.utils.data.DataLoader, which is responsible for sorting and padding each batch in this scenario (+ it returns lengths of each sentence to be passed into neural network).
Using the sorted and padded features and their lengths I'm using torch.nn.utils.rnn.pack_sequence inside the neural network's forward method (do it after embedding!) to push it through the RNN layer.
Depending on the use-case I unpack them using torch.nn.utils.rnn.pad_packed_sequence. In your case, you only care about the last hidden state, hence you don't have to do that. If you were using all of the hidden outputs (like is the case with, say, attention networks), you would add this part.
When it comes to the third point, here is a sample implementation of collate_fn, you should get the idea:
import torch
def length_sort(features):
# Get length of each sentence in batch
sentences_lengths = torch.tensor(list(map(len, features)))
# Get indices which sort the sentences based on descending length
_, sorter = sentences_lengths.sort(descending=True)
# Pad batch as you have the lengths and sorter saved already
padded_features = torch.nn.utils.rnn.pad_sequence(features, batch_first=True)
return padded_features, sentences_lengths, sorter
def pad_collate_fn(batch):
# DataLoader return batch like that unluckily, check it on your own
features, labels = (
[element[0] for element in batch],
[element[1] for element in batch],
)
padded_features, sentences_lengths, sorter = length_sort(features)
# Sort by length features and labels accordingly
sorted_padded_features, sorted_labels = (
padded_features[sorter],
torch.tensor(labels)[sorter],
)
return sorted_padded_features, sorted_labels, sentences_lengths
Use those as collate_fn in DataLoaders and you should be just about fine (maybe with minor adjustments, so it's essential you understand the idea standing behind it).
Other possible problems and tips
Training loop: a great place for a lot of small errors, you may want to minimize those by using PyTorch Ignite. I am having an unbelievably hard time going through your Tensorflow-like-Estimator-like-API-like training loop (e.g. self.model = self.w2v_vocab = self.criterion = self.optimizer = self.scheduler = None). Please, don't do it this way; separate each task (data creation, data loading, data preparation, model setup, training loop, logging) into its own respective module. All in all there is a reason why PyTorch/Keras is more readable and sanity-preserving than Tensorflow.
Make the first row of your embedding equal to a vector containing zeros: By default, torch.nn.functional.embedding expects the first row to be used for padding. Hence you should start your unique indexing for each word at 1 or specify an argument padding_idx to a different value (though I highly discourage this approach, confusing at best).
I hope this answer helps you at least a little bit, if something is unclear post a comment below and I'll try to explain it from a different perspective/more detail.
Some final comments
This code is not reproducible, nor the question's specific. We don't have the data you are using, neither we got your word vectors, random seed is not fixed etc.
PS. One last thing: check your performance on a really small subset of your data (say 96 examples); if it does not converge, it is very likely you indeed have a bug in your code.
About the times: they are probably off (due to not sorting and not padding I suppose), usually Keras and PyTorch's times are quite similar (if I understood this part of your question as intended) for correct and efficient implementations.
Permute vs view vs reshape explanation
This simple example shows the differences between permute() and view(). The first one swaps axes, while the second does not change the memory layout, just chunks the array into the desired shape (if possible).
import torch
a = torch.tensor([[1, 2], [3, 4], [5, 6]])
print(a)
print(a.permute(1, 0))
print(a.view(2, 3))
And the output would be:
tensor([[1, 2],
[3, 4],
[5, 6]])
tensor([[1, 3, 5],
[2, 4, 6]])
tensor([[1, 2, 3],
[4, 5, 6]])
reshape is almost like view, was added for those coming from numpy, so it's easier and more natural for them, but it has one important difference:
view never copies data and works only on contiguous memory (so after a permutation like the one above your data may not be contiguous, hence access to it might be slower)
reshape can copy data if needed, so it would work for non-contiguous arrays as well.
| https://stackoverflow.com/questions/54824768/ |
Compute maxima and minima of a 4D tensor in PyTorch | Suppose that we have a 4-dimensional tensor, for instance
import torch
X = torch.rand(2, 3, 4, 4)
tensor([[[[-0.9951, 1.6668, 1.3140, 1.4274],
[ 0.2614, 2.6442, -0.3041, 0.7337],
[-1.2690, 0.0125, -0.3885, 0.0535],
[ 1.5270, -0.1186, -0.4458, 0.1389]],
[[ 0.9125, -1.2998, -0.4277, -0.2688],
[-1.6917, -0.8855, -0.2784, -0.6717],
[ 1.1417, 0.4574, 0.4803, -1.6637],
[ 0.7322, 0.2654, -0.1525, 1.7285]],
[[ 1.8310, -1.5765, 0.1392, 1.3431],
[-0.6641, -1.5090, -0.4893, -1.4110],
[ 0.5875, 0.7528, -0.6482, -0.2547],
[-2.3133, 0.3888, 2.1428, 0.2331]]]])
I want to compute the maximum and the minimum values of X over the dimensions 2 and 3, that is, to compute two tensors of size (2,3,1,1), one for the maximum and one for the minimum values of the 4x4 blocks.
I started by trying to do that with torch.max() and torch.min(), but I had no luck. I would expect the dim argument of the above functions to be able to take tuple values, but it can take only an integer. So I don't know how to proceed.
However, specifically for the maximum values, I decided to use torch.nn.MaxPool2d() with kernel_size=4 and stride=4. This indeed did the job:
max_pool = nn.MaxPool2d(kernel_size=4, stride=4)
X_max = max_pool(X)
tensor([[[[2.6442]],
[[1.7285]],
[[2.1428]]]])
But, afaik, there's no similar layer for "min"-pooling. Could you please help me on how to compute the minima similarly to the maxima?
Thank you.
| Just calculate the max over both dimensions sequentially; it gives the same result as the pooling approach:
tup = (2,3)
for dim in tup:
X = torch.max(X,dim=dim,keepdim=True)[0]
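The minima can be computed the same way with torch.min; a small sketch (run it on the original (2, 3, 4, 4) tensor, since X is overwritten by the loop above):
X_min = X
for dim in (2, 3):
    X_min = torch.min(X_min, dim=dim, keepdim=True)[0]
# X_min now has shape (2, 3, 1, 1)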
| https://stackoverflow.com/questions/54833289/ |
Given list of image filenames for each set, split a large dataset into train/valid/test directories? | I am trying to split a large dataset into train/valid/test sets from the Food101 dataset for image classification.
The structure of the dataset is as follows, with all images in one folder:
Structure:
----------
pec/
  images/
    <class_name>/
      <image_id>.jpg
  meta/
    classes.txt
    labels.txt
    test.json
    test.txt
    train.json
    train.txt

All images can be found in the "images" folder and are organized per class. All
image ids are unique and correspond to the foodspotting.com review ids.

The test/train splitting used in the experiment of our paper can be found in
the "meta" directory.
I want to divide the images into train/valid/test using the lists of filenames given in train.txt and test.txt, which the author used.
The shapes of the train, valid and test lists are (101, 600), (101, 150) and 25250 respectively.
In Colab, I ran the following code:
for x in range(train.shape[0]):
for y in range(train.shape[1]):
temp = train[x,y] + ".jpg"
foldername = temp.split('/')[0]
!mv /content/food-101/images/$temp /content/food101/train/$foldername/
Moving the images individually by running a nested loop over the filename lists takes forever, since there are about 101,000 images in total.
I have lists of filenames for the train/valid/test sets, but how do I arrange them into folders so they can be fed to an image classifier in PyTorch's ImageFolder format (i.e. train/valid/test are three separate folders and each has one subfolder per class)?
Please do tell if anyone knows how to do this; I really need your help here. Thanks :smile:
| It seems I was going about the solution all wrong: I don't need to copy the images one by one with shell commands; all I need to do is rearrange them into the required folder structure using the os module.
Below is the code for doing it. Say you have the list of filenames for the valid set in a valid array:
import os
from tqdm import tqdm

# for the valid set
v = valid.reshape(15150,)                  # 101 classes * 150 filenames
or_fpath = '/content/food-101/images/'     # path of the original folder
cp_fpath = '/content/food101/valid/'       # path of the destination folder

for y in tqdm(v):
    foldername = y.split('/')[0]
    img = y.split('/')[1] + '.jpg'
    ip_path = or_fpath + foldername
    op_path = cp_fpath + foldername
    if not os.path.exists(op_path):
        os.mkdir(op_path)
    os.rename(os.path.join(ip_path, img), os.path.join(op_path, img))
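Once the folders are arranged as <split>/<class>/<image>.jpg, they can be consumed directly with ImageFolder (the path below is just the destination folder assumed above):
from torchvision import datasets, transforms
valid_data = datasets.ImageFolder('/content/food101/valid/', transform=transforms.ToTensor())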
Thanks!
Note: if you have an even better answer, please share. Thanks!
| https://stackoverflow.com/questions/54838339/ |
Import error on Windows10 with pytorch0.4 | Description
I am trying to install pytorch 0.4 on Windows10.
My environment settings:
- Windows10
- cuda9.0
- python 3.6
- pytorch 0.4
- anaconda
I tried both conda install -n myenv and pip install <path-to-whl>, and both failed.
Error
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Anaconda\envs\py3.6_pytorch0.4\lib\site-packages\torch\__init__.py", line 80, in <module>
from torch._C import *
ImportError: DLL load failed:
I found the related issue #4518 under pytorch, but the answers under that issue do not work for me.
What I have tried
add all conda related path to environment path
change the directory (cd)
install vs_runtime under this conda env
None of those works.
But if I install pytorch under conda's base environment, it works well.
So what's going on here?
Update:
When installing pytorch on Windows, a lot of packages need to be installed at the same time. We can simply follow the steps on the official website (https://pytorch.org/). For Python 3.6 and CUDA 9.0 the installation command is:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
If we want to install an earlier version of pytorch, we can pin the version, e.g. 0.4:
conda install pytorch=0.4 torchvision cudatoolkit=9.0 -c pytorch
| After you create the new environment with conda, execute conda install -c pytorch pytorch to install pytorch.
pip does not work that well with external, non-Python dependencies. It is not unlikely that in your case the path to the DLL is not set correctly (just a guess).
| https://stackoverflow.com/questions/54839446/ |
No module named "Torch" | I successfully installed pytorch via conda:
conda install pytorch-cpu torchvision-cpu -c pytorch
I also successfully installed pytorch via pip:
pip3 install https://download.pytorch.org/whl/cpu/torch-1.0.1-cp36-cp36m-win_amd64.whl
pip3 install torchvision
But, it only works in a jupyter notebook. Whenever I try to execute a script from the console, I get the error message:
No module named "torch"
| Try to install PyTorch using pip:
First create a Conda environment using:
conda create -n env_pytorch python=3.6
Activate the environment using:
conda activate env_pytorch
Now install PyTorch using pip:
pip install torchvision
Note: This will install both torch and torchvision.
Now go to Python shell and import using the command:
import torch
import torchvision
| https://stackoverflow.com/questions/54843067/ |
PyTorch get all layers of model | What's the easiest way to take a pytorch model and get a list of all the layers without any nn.Sequential groupings? For example, is there a better way to do this?
import torch.nn as nn
import pretrainedmodels

def unwrap_model(model):
    for i in model.children():
        if isinstance(i, nn.Sequential): unwrap_model(i)
        else: l.append(i)

model = pretrainedmodels.__dict__['xception'](num_classes=1000, pretrained='imagenet')
l = []
unwrap_model(model)
print(l)
| You can iterate over all modules of a model (including those inside each Sequential) with the modules() method. Here's a simple example:
>>> model = nn.Sequential(nn.Linear(2, 2),
nn.ReLU(),
nn.Sequential(nn.Linear(2, 1),
nn.Sigmoid()))
>>> l = [module for module in model.modules() if not isinstance(module, nn.Sequential)]
>>> l
[Linear(in_features=2, out_features=2, bias=True),
ReLU(),
Linear(in_features=2, out_features=1, bias=True),
Sigmoid()]
| https://stackoverflow.com/questions/54846905/ |
element 0 of tensors does not require grad and does not have a grad_fn | I am trying to apply a reinforcement learning mechanism to classification tasks.
I know this is of limited practical use, since supervised deep learning can outperform RL on these tasks; I am doing it for research purposes.
I reward the agent with +1 if its prediction is correct and with -1 otherwise,
and compute the loss function from the predicted action (predicted class) and the reward.
But I get an error:
element 0 of tensors does not require grad and does not have a grad_fn
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from collections import deque

# creating model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.pipe = nn.Sequential(nn.Linear(9, 120),
nn.ReLU(),
nn.Linear(120, 64),
nn.ReLU(),
nn.Linear(64,2),
nn.Softmax()
)
def forward(self, x):
return self.pipe(x)
def env_step(action, label, size):
total_reward = []
for i in range(size):
reward = 0
if action[i] == label[i]:
total_reward.append(reward+1)
continue
else:
total_reward.append(reward-1)
continue
return total_reward
if __name__=='__main__':
epoch_size = 100
net = Net()
criterion = nn.MSELoss()
optimizer = optim.Adam(params=net.parameters(), lr=0.01)
total_loss = deque(maxlen = 50)
for epoch in range(epoch_size):
batch_index = 0
for i in range(13):
# batch sample
batch_xs = torch.FloatTensor(train_state[batch_index: batch_index+50]) # make tensor
batch_ys = torch.from_numpy(train_label[batch_index: batch_index+50]).type('torch.LongTensor') # make tensor
# action_prob; e.g classification prob
actions_prob = net(batch_xs)
#print(actions_prob)
action = torch.argmax(actions_prob, dim=1).unsqueeze(1)
#print(action)
reward = np.array(env_step(action, batch_ys, 50))
#print(reward)
reward = torch.from_numpy(reward).unsqueeze(1).type('torch.FloatTensor')
#print(reward)
action = action.type('torch.FloatTensor')
optimizer.zero_grad()
loss = criterion(action, reward)
loss.backward()
optimizer.step()
batch_index += 50
| action is produced by the argmax function, which is not differentiable. You instead want to take the loss between the responsible probability for the action taken and the reward.
Often, the "loss" chosen for the policy in reinforcement learning is the so-called score function: log(pi(a|s)) * R,
which is the product of the log of the responsible probability for the action a taken and the reward gained (in practice you minimize its negative with gradient descent).
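A minimal sketch of how that loss could be computed with the tensors from your training loop (actions_prob and reward are the names from your code; this is an illustration, not a drop-in fix):
from torch.distributions import Categorical
dist = Categorical(probs=actions_prob)        # distribution over the 2 classes
action = dist.sample()                        # action taken by the policy
log_prob = dist.log_prob(action)              # log of the responsible probability
loss = -(log_prob * reward.squeeze()).mean()  # score-function / REINFORCE loss
loss.backward()                               # gradients now flow through log_prob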
| https://stackoverflow.com/questions/54849812/ |
List not populated correctly unless use PyTorch clone() | I'm attempting to add the final weights of each trained model to a list using the code below:
%reset -f
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.utils.data as data_utils
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from matplotlib import pyplot
from pandas import DataFrame
import torchvision.datasets as dset
import os
import torch.nn.functional as F
import time
import random
import pickle
from sklearn.metrics import confusion_matrix
import pandas as pd
import sklearn
trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
root = './data'
if not os.path.exists(root):
os.mkdir(root)
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)
batch_size = 64
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
dataset=test_set,
batch_size=batch_size,
shuffle=True)
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(28*28, 500)
self.fc2 = nn.Linear(500, 256)
self.fc3 = nn.Linear(256, 2)
def forward(self, x):
x = x.view(-1, 28*28)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
num_epochs = 2
random_sample_size = 200
values_0_or_1 = [t for t in train_set if (int(t[1]) == 0 or int(t[1]) == 1)]
values_0_or_1_testset = [t for t in test_set if (int(t[1]) == 0 or int(t[1]) == 1)]
print(len(values_0_or_1))
print(len(values_0_or_1_testset))
train_loader_subset = torch.utils.data.DataLoader(
dataset=values_0_or_1,
batch_size=batch_size,
shuffle=True)
test_loader_subset = torch.utils.data.DataLoader(
dataset=values_0_or_1_testset,
batch_size=batch_size,
shuffle=False)
train_loader = train_loader_subset
# Hyper-parameters
input_size = 100
hidden_size = 100
num_classes = 2
# learning_rate = 0.00001
learning_rate = .0001
# Device configuration
device = 'cpu'
print_progress_every_n_epochs = 1
model = NeuralNet().to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
N = len(train_loader)
# Train the model
total_step = len(train_loader)
most_recent_prediction = []
test_actual_predicted_dict = {}
rm = random.sample(list(values_0_or_1), random_sample_size)
train_loader_subset = data_utils.DataLoader(rm, batch_size=4)
weights_without_clone = []
weights_with_clone = []
for i in range(0 , 2) :
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader_subset):
# Move tensors to the configured device
images = images.reshape(-1, 2).to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (epoch) % print_progress_every_n_epochs == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
print('model fc2 weights ' , model.fc2.weight.data)
weights_without_clone.append(model.fc2.weight.data)
weights_with_clone.append(model.fc2.weight.data.clone())
Output of model :
12665
2115
Epoch [1/2], Step [50/198], Loss: 0.0968
Epoch [2/2], Step [50/198], Loss: 0.0082
model fc2 weights tensor([[-3.9507e-02, -4.0454e-02, 3.5576e-03, ..., 6.2181e-03,
4.1372e-02, -6.2960e-03],
[ 1.8778e-02, 2.7049e-02, -3.5624e-02, ..., 2.6797e-02,
2.2041e-03, -4.2284e-02],
[ 1.9571e-02, -3.2545e-02, 2.6618e-02, ..., -1.6139e-02,
4.1192e-02, -2.3458e-02],
...,
[-4.6123e-03, 2.6943e-02, 3.9979e-02, ..., -3.3848e-02,
3.6096e-02, 2.4211e-02],
[-1.4698e-02, 9.7528e-04, -2.5244e-03, ..., -3.3145e-02,
1.0888e-02, 3.1091e-02],
[-1.7451e-02, -2.1646e-02, 2.5885e-02, ..., 4.0453e-02,
-6.5324e-03, -3.5410e-02]])
Epoch [1/2], Step [50/198], Loss: 0.0025
Epoch [2/2], Step [50/198], Loss: 0.0013
model fc2 weights tensor(1.00000e-02 *
[[-3.9891, -4.0454, 0.3558, ..., 0.7168, 4.1902, -0.6253],
[ 1.8766, 2.7049, -3.5632, ..., 2.6785, 0.2192, -4.2297],
[ 2.1426, -3.2545, 2.6621, ..., -1.6285, 4.1196, -2.2653],
...,
[-0.4930, 2.6943, 3.9971, ..., -3.2940, 3.6641, 2.4248],
[-1.5160, 0.0975, -0.2524, ..., -3.1938, 1.1753, 3.1065],
[-1.8116, -2.1646, 2.5883, ..., 4.1355, -0.5921, -3.5416]])
Printing the values of weights_without_clone :
print(weights_without_clone[0])
print(weights_without_clone[1])
outputs :
tensor(1.00000e-02 *
[[-3.9891, -4.0454, 0.3558, ..., 0.7168, 4.1902, -0.6253],
[ 1.8766, 2.7049, -3.5632, ..., 2.6785, 0.2192, -4.2297],
[ 2.1426, -3.2545, 2.6621, ..., -1.6285, 4.1196, -2.2653],
...,
[-0.4930, 2.6943, 3.9971, ..., -3.2940, 3.6641, 2.4248],
[-1.5160, 0.0975, -0.2524, ..., -3.1938, 1.1753, 3.1065],
[-1.8116, -2.1646, 2.5883, ..., 4.1355, -0.5921, -3.5416]])
tensor(1.00000e-02 *
[[-3.9891, -4.0454, 0.3558, ..., 0.7168, 4.1902, -0.6253],
[ 1.8766, 2.7049, -3.5632, ..., 2.6785, 0.2192, -4.2297],
[ 2.1426, -3.2545, 2.6621, ..., -1.6285, 4.1196, -2.2653],
...,
[-0.4930, 2.6943, 3.9971, ..., -3.2940, 3.6641, 2.4248],
[-1.5160, 0.0975, -0.2524, ..., -3.1938, 1.1753, 3.1065],
[-1.8116, -2.1646, 2.5883, ..., 4.1355, -0.5921, -3.5416]])
Printing the values of weights_with_clone :
print(weights_with_clone[0])
print(weights_with_clone[1])
outputs :
tensor([[-3.9507e-02, -4.0454e-02, 3.5576e-03, ..., 6.2181e-03,
4.1372e-02, -6.2960e-03],
[ 1.8778e-02, 2.7049e-02, -3.5624e-02, ..., 2.6797e-02,
2.2041e-03, -4.2284e-02],
[ 1.9571e-02, -3.2545e-02, 2.6618e-02, ..., -1.6139e-02,
4.1192e-02, -2.3458e-02],
...,
[-4.6123e-03, 2.6943e-02, 3.9979e-02, ..., -3.3848e-02,
3.6096e-02, 2.4211e-02],
[-1.4698e-02, 9.7528e-04, -2.5244e-03, ..., -3.3145e-02,
1.0888e-02, 3.1091e-02],
[-1.7451e-02, -2.1646e-02, 2.5885e-02, ..., 4.0453e-02,
-6.5324e-03, -3.5410e-02]])
tensor(1.00000e-02 *
[[-3.9891, -4.0454, 0.3558, ..., 0.7168, 4.1902, -0.6253],
[ 1.8766, 2.7049, -3.5632, ..., 2.6785, 0.2192, -4.2297],
[ 2.1426, -3.2545, 2.6621, ..., -1.6285, 4.1196, -2.2653],
...,
[-0.4930, 2.6943, 3.9971, ..., -3.2940, 3.6641, 2.4248],
[-1.5160, 0.0975, -0.2524, ..., -3.1938, 1.1753, 3.1065],
[-1.8116, -2.1646, 2.5883, ..., 4.1355, -0.5921, -3.5416]])
Why is 1.00000e-02 * prepended to the final weight values of the second model?
Why is clone() required in order to record the final weights of each iteration? When clone() is omitted, the same weights end up being appended for every iteration:
weights_without_clone.append(model.fc2.weight.data)
weights_with_clone.append(model.fc2.weight.data.clone())
|
First of all, I am going to reproduce your case. I will use very simple model:
Code:
import torch
import torch.nn as nn
import torch.optim as optim
torch.manual_seed(42)
# Some dummy data:
X = torch.randn(100, 5, requires_grad=True, dtype=torch.float)
Y = torch.randn(100, 5, requires_grad=True, dtype=torch.float)
class Model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(5, 5, bias=False)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(5, 5, bias=False)
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
return x
def train(model, x, y, loss_fn, optimizer, n_epochs=1000, print_loss=True):
weights = []
for i in range(n_epochs):
y_hat = model(x)
loss = loss_fn(y_hat, y)
optimizer.zero_grad()
loss.backward()
if print_loss:
print(f'| {i+1} | Loss: {loss.item():.4f}')
optimizer.step()
print('W:\n', model.fc2.weight.data)
weights.append(model.fc2.weight.data)
return weights
torch.manual_seed(42)
model = Model()
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
n_epochs = 2
weights = train(model=model,
x=X,
y=Y,
loss_fn=loss_fn,
optimizer=optimizer,
n_epochs=n_epochs,
print_loss=True)
Output:
| 1 | Loss: 1.0285
W:
tensor([[-0.2052, -0.1257, -0.2684, 0.0425, -0.4413],
[ 0.4034, -0.3797, 0.3448, 0.0741, -0.1450],
[ 0.2759, 0.0695, 0.3608, 0.0487, -0.1411],
[ 0.1201, -0.1213, 0.1881, 0.3990, 0.2583],
[-0.1956, 0.2581, 0.0798, 0.2270, -0.2725]])
| 2 | Loss: 1.0279
W:
tensor([[-0.2041, -0.1251, -0.2679, 0.0428, -0.4410],
[ 0.4030, -0.3795, 0.3444, 0.0738, -0.1447],
[ 0.2755, 0.0693, 0.3603, 0.0484, -0.1411],
[ 0.1200, -0.1213, 0.1879, 0.3987, 0.2580],
[-0.1958, 0.2580, 0.0796, 0.2269, -0.2725]])
Ok, it works well. Let's now look at weights:
Code:
print(*weights, sep='\n')
Output:
tensor([[-0.2041, -0.1251, -0.2679, 0.0428, -0.4410],
[ 0.4030, -0.3795, 0.3444, 0.0738, -0.1447],
[ 0.2755, 0.0693, 0.3603, 0.0484, -0.1411],
[ 0.1200, -0.1213, 0.1879, 0.3987, 0.2580],
[-0.1958, 0.2580, 0.0796, 0.2269, -0.2725]])
tensor([[-0.2041, -0.1251, -0.2679, 0.0428, -0.4410],
[ 0.4030, -0.3795, 0.3444, 0.0738, -0.1447],
[ 0.2755, 0.0693, 0.3603, 0.0484, -0.1411],
[ 0.1200, -0.1213, 0.1879, 0.3987, 0.2580],
[-0.1958, 0.2580, 0.0796, 0.2269, -0.2725]])
Ok, it is not what we want, but actually it is expected behavior. If you look once again, you will see that values in the list correspond to weights values from second epoch. That means we were appending not new tensors, but assignments that point to real weights storage, and that's why we just have the same final results.
In other words, you are getting the same values when using regular append, because gradients still propagate to the original weights tensor. And appended "weights tensors" are pointing to the same tensor from model that changes during backprop.
That's why you need to use clone to create a new tensor. However, it is recommended to use tensor.clone().detach(), because clone is recorded in the computational graph, which means that if you backprop through this cloned tensor,
Gradients propagating to the cloned tensor will propagate to the original tensor. clone docs
So, if you want to append your weights safely use this:
weights.append(model.fc2.weight.data.clone().detach())
| https://stackoverflow.com/questions/54852644/ |
Equal output values given for Multiclass Classification | I'm trying to build a CNN for predicting the number of fingers in an image, using PyTorch. The network:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.Layer1 = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2)),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3, 3)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2)),
nn.Conv2d(in_channels=256, out_channels=16, kernel_size=(1, 1)),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3, 3)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2)),
nn.Conv2d(in_channels=256, out_channels=16, kernel_size=(1, 1)),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2)),
nn.Conv2d(in_channels=128, out_channels=16, kernel_size=(1, 1)),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2)),
nn.Conv2d(in_channels=128, out_channels=16, kernel_size=(1, 1)),
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3)),
nn.ReLU(),
nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3, 3)),
nn.ReLU(),
)
self.Layer2 = nn.Sequential(
nn.Linear(1536, 100),
nn.Tanh(),
nn.Linear(100, 6),
nn.Softmax()
)
self.optimizer = optimizers.Adadelta(self.parameters())
def forward(self, X):
X = self.Layer1(X)
print(X.shape)
X = self.Layer2(X.reshape(1, 1536))
X = X.squeeze()
return X
def calc_loss(self, X, num):
out = self.forward(X).unsqueeze(dim=0)
print("Output: "+str(out))
target = torch.tensor([num], dtype=torch.int64).cuda()
criterion = nn.CrossEntropyLoss()
loss = criterion(out, target)
return loss
def train_step(self, X, Y):
loss = self.calc_loss(X, Y)
print(loss)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
However, after the training is complete, all predictions have almost the same values (around 0.15 ~ 0.18).
It appears as though the network averages out the output probabilities to minimize loss, instead of learning the actual values.
I get the same result whether I use Softmax for the last layer with cross-entropy loss, or Sigmoid with binary cross-entropy, MSE, or SmoothL1Loss.
In case of using Adam optimizer, I get similar results, only in the range of 1e-12 ~ 1e-14.
What am I missing?
| If you are using CrossEntropyLoss, you don't need to use Softmax in your forward pass. It is already included in CrossEntropyLoss, so your network should return the "raw" logits. If you need probabilities at inference time, just apply Softmax to the logits then; alternatively, keep a LogSoftmax layer in forward and train with NLLLoss instead.
You can find more info here
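A rough, self-contained sketch of that change (shapes taken from your model, data is random just for illustration):
import torch
import torch.nn as nn
import torch.nn.functional as F

head = nn.Sequential(            # classifier head without the trailing Softmax
    nn.Linear(1536, 100),
    nn.Tanh(),
    nn.Linear(100, 6),
)
logits = head(torch.randn(4, 1536))                       # raw scores go to the loss
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
probs = F.softmax(logits, dim=-1)                         # probabilities only for inference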
| https://stackoverflow.com/questions/54853225/ |
'ToPILImage' object has no attribute 'show' | I'm doing an image processing task and I want to concatenate two sets of pictures. To concatenate them, I first converted the images to tensors, then converted the tensor back to a PIL image to display it, but an error was reported. Could someone please help me?
Here is my code:
import skimage.io as io
import torch
from torchvision import models, transforms
from PIL import Image
import matplotlib.pyplot as plt
from torchvision.transforms import ToPILImage
import numpy as np
from skimage import data_dir,io,color
coll1 = io.ImageCollection('F:\\code1/*.jpg')
coll2 = io.ImageCollection('F:\\code2/*.jpg')
a = torch.tensor(coll1)
print(a)
print(a.shape)
b = torch.tensor(coll2)
print(b)
print(b.shape)
c=torch.cat((a,b),1)
print(c.shape)
print(c)
img= transforms.ToPILImage()
img.show()
and here is the error code:
Traceback (most recent call last):
  File "F:/filelist.py", line 39, in <module>
    img.show()
AttributeError: 'ToPILImage' object has no attribute 'show'
| The ToPILImage transform accepts a tensor or an ndarray as input (source).
The error is raised because .show() is called on the transform object itself (transforms.ToPILImage()) rather than on the PIL image it returns: you need to apply the transform to a single image tensor first and call .show() on the result.
Assuming you want to visualize the image from tensor c:
img = transforms.ToPILImage()(c)
img.show()
| https://stackoverflow.com/questions/54862480/ |
Deal with negative values resulting from my nn model | I have a simple nn model that looks like this
class TestRNN(nn.Module):
def __init__(self, batch_size, n_steps, n_inputs, n_neurons, n_outputs):
super(TestRNN, self).__init__()
...
self.basic_rnn = nn.RNN(self.n_inputs, self.n_neurons)
self.FC = nn.Linear(self.n_neurons, self.n_outputs)
def forward(self, X):
...
lstm_out, self.hidden = self.basic_rnn(X, self.hidden)
out = self.FC(self.hidden)
return out.view(-1, self.n_outputs)
and I am using criterion = nn.CrossEntropyLoss() for calculating my error. The operation order goes something like this:
# get the inputs
x, y = data
# forward + backward + optimize
outputs = model(x)
loss = criterion(outputs, y)
Where my training data x is normalized and looks like this:
tensor([[[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[2.6164e-02, 2.6164e-02, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 1.3108e-05],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[9.5062e-01, 3.1036e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[0.0000e+00, 1.3717e-05, 3.2659e-07, ..., 0.0000e+00,
0.0000e+00, 3.2659e-07]],
[[5.1934e-01, 5.4041e-01, 6.8083e-06, ..., 0.0000e+00,
0.0000e+00, 6.8083e-06],
[5.2340e-01, 6.0007e-01, 2.7062e-06, ..., 0.0000e+00,
0.0000e+00, 2.7062e-06],
[8.1923e-01, 5.7346e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0714e-01, 7.0708e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 7.0407e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
...,
[[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.1852e-01, 2.3411e-02, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0775e-01, 7.0646e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 3.9888e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[5.9611e-01, 5.8796e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0710e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.7538e-01, 2.4842e-01, 1.7787e-06, ..., 0.0000e+00,
0.0000e+00, 1.7787e-06],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]],
[[5.2433e-01, 5.2433e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[1.3155e-01, 1.3155e-01, 0.0000e+00, ..., 8.6691e-02,
9.7871e-01, 0.0000e+00],
[7.4412e-01, 6.6311e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00],
[7.0711e-01, 7.0711e-01, 0.0000e+00, ..., 0.0000e+00,
0.0000e+00, 9.6093e-07]]])
While a typical output and y passed to the criterion function look like this:
tensor([[-0.0513],
[-0.0445],
[-0.0514],
[-0.0579],
[-0.0539],
[-0.0323],
[-0.0521],
[-0.0294],
[-0.0372],
[-0.0518],
[-0.0516],
[-0.0501],
[-0.0312],
[-0.0496],
[-0.0436],
[-0.0514],
[-0.0518],
[-0.0465],
[-0.0530],
[-0.0471],
[-0.0344],
[-0.0502],
[-0.0536],
[-0.0594],
[-0.0356],
[-0.0371],
[-0.0513],
[-0.0528],
[-0.0621],
[-0.0404],
[-0.0403],
[-0.0562],
[-0.0510],
[-0.0580],
[-0.0516],
[-0.0556],
[-0.0063],
[-0.0459],
[-0.0494],
[-0.0460],
[-0.0631],
[-0.0525],
[-0.0454],
[-0.0509],
[-0.0522],
[-0.0426],
[-0.0527],
[-0.0423],
[-0.0572],
[-0.0308],
[-0.0452],
[-0.0555],
[-0.0479],
[-0.0513],
[-0.0514],
[-0.0498],
[-0.0514],
[-0.0471],
[-0.0505],
[-0.0467],
[-0.0485],
[-0.0520],
[-0.0517],
[-0.0442]], device='cuda:0', grad_fn=<ViewBackward>)
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
When the criterion is being applied I get the following error (running with CUDA_LAUNCH_BLOCKING=1):
/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=111 error=59 : device-side assert triggered
The fact that my model outputs negative values is causing the error message above, how can I resolve this issue?
| TL;DR
You have two options:
Make the second dimension of outputs be of size 2 instead of 1.
Use nn.BCEWithLogitsLoss instead of nn.CrossEntropyLoss
I think that the problem is not the negative numbers. It is the shape of outputs.
Looking at your array y, I see that you have 2 different classes (maybe even more, but let's suppose that it's 2). This means that the last dimension of outputs should be 2. The reason is that outputs needs to give a "score" to each one of the 2 different classes (see the documentation). The score can be negative, zero or positive. But the shape of your outputs is [64,1], and not [64,2] as required.
One of the steps of the nn.CrossEntropyLoss() object will be to convert these scores to a probability distribution over the two classes. This is done using a softmax operation. However, when doing binary classification (that is, classification with only 2 classes, as in our current case), there is another option: give a score for only one class, convert it to a probability for that class using a sigmoid function, and then compute "1 - p" to get the probability of the other class. This option means that outputs needs to give a score for only one of the two classes, as in your current case. To choose this option, you will need to replace nn.CrossEntropyLoss with nn.BCEWithLogitsLoss. You can then pass outputs and y to it as you are currently doing (note however that the shape of outputs needs to be precisely the shape of y, so in your example you will need to pass outputs[:,0] instead of outputs; also you will need to convert y to a float: y.float(). Thus the call is criterion(outputs[:,0], y.float())).
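A minimal sketch of option 2 with stand-in tensors (shapes matching your outputs and y):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()        # sigmoid + binary cross-entropy in one op
outputs = torch.randn(64, 1)              # stand-in for your model output
y = torch.randint(0, 2, (64,))            # stand-in for your labels
loss = criterion(outputs[:, 0], y.float())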
| https://stackoverflow.com/questions/54870863/ |
How to install torch audio on Windows 10 conda? | In Anaconda Python 3.6.7 with PyTorch installed, on Windows 10, I do this sequence:
conda install -c conda-forge librosa
conda install -c groakat sox
then in a fresh download from https://github.com/pytorch/audio I do
python setup.py install
and it runs for a while and ends like this:
torchaudio/torch_sox.cpp(3): fatal error C1083: Cannot open include file: 'sox.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.15.26726\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
I am trying to reproduce this OpenNMT-py speech training demo on Windows: http://opennmt.net/OpenNMT-py/speech2text.html
| I managed to compile torchaudio with sox on Windows 10, but it is a bit tricky.
Unfortunately the sox_effects are not usable, this error shows up:
RuntimeError: Error opening output memstream/temporary file
But you can use the other torchaudio functionalities.
The steps I followed for Windows 10 64bit are:
TORCHAUDIO WINDOWS10 64bit
Note: I mix some command lines unix-like syntax, you can use file explorer or whatever
preliminar arrangements
Download sox sources
$ git clone git://git.code.sf.net/p/sox/code sox
Download other sox source to get lpc10
$ git clone https://github.com/chirlu/sox/tree/master/lpc10 sox2
$ cp -R sox2/lpc10 sox
IMPORTANT get VisualStudio2019 and BuildTools installed
lpc10 lib
4.0. Create a VisualStudio CMake project for lpc10 and build it
Start window -> open local folder -> sox/lpc10
(it reads CMakeLists.txt automatically)
Build->build All
4.2. Copy lpc10.lib to sox
$ mkdir -p sox/src/out/build/x64-Debug
$ cp sox/lpc10/out/build/x64-Debug/lpc10.lib sox/src/out/build/x64-Debug
gsm lib
5.0. Create a CMake project for libgsm and compile it as before with lpc10
5.1. Copy gsm.lib to sox
$ mkdir -p sox/src/out/build/x64-Debug
$ cp sox/libgsm/out/build/x64-Debug/gsm.lib sox/src/out/build/x64-Debug
sox lib
6.0. Create a CMake project for sox in VS
6.1. Edit some files:
CMakeLists.txt: (add at the very beginning)
project(sox)
sox_i.h: (add under stdlib.h include line)
#include <wchar.h> /* For off_t not found in stdio.h */
#define UINT16_MAX ((int16_t)-1)
#define INT32_MAX ((int32_t)-1)
sox.c: (add under time.h include line)
`#include <sys/timeb.h>`
6.2. Build sox with VisualStudio
6.3. Copy the libraries where python will find them, I use a conda environment:
$ cp sox/src/out/build/x64-Debug/libsox.lib envs\<envname>\libs\sox.lib
$ cp sox/src/out/build/x64-Debug/gsm.lib envs\<envname>\libs
$ cp sox/src/out/build/x64-Debug/lpc10.lib envs\<envname>\libs
torchaudio
$ activate <envname>
7.0. Download torchaudio from github
$ git clone https://github.com/pytorch/audio thaudio
7.1. Update setup.py, after the "else:" statement of "if IS_WHEEL..."
$ vi thaudio/setup.py
if IS_WHEEL...
else:
audio_path = os.path.dirname(os.path.abspath(__file__))
# Add include path for sox.h, I tried both with the same outcome
include_dirs += [os.path.join(audio_path, '../sox/src')]
#include_dirs += [os.path.join(audio_path, 'torchaudio/sox')]
# Add more libraries
#libraries += ['sox']
libraries += ['sox','gsm','lpc10']
7.2. Edit sox.cpp from torchaudio because dynamic arrays are not allowed:
$ vi thaudio/torchaudio/torch_sox.cpp
//char* sox_args[max_num_eopts];
char* sox_args[20]; //Value of MAX_EFFECT_OPTS
7.3. Build and install
$ cd thaudio
$ python setup.py install
It will print out tons of warnings about type conversion and some library conflict with MSVCRTD but "works".
And that's all.
| https://stackoverflow.com/questions/54872876/ |
How can I use pytorch pre-trained model without installing pytorch? | I only want to use pre-trained model in pytorch without installing the whole package.
Can I just copy the model module from pytorch?
| I'm afraid you cannot do that: in order to run the model, you need not only the trained weights ('.pth.tar' file) but also the "structure" of the net: that is, the layers, how they are connected to each other etc. This network structure is coded in python and requires pytorch to be installed.
| https://stackoverflow.com/questions/54880783/ |
How to drop specific labeled pixels in semantic segmentation | I am new to semantic segmentation. I used an FCN to train my dataset. In the dataset there are some pixels for an unknown class. I would like to exclude this class from my loss, so I defined a weight based on the class distribution of the whole dataset and set the weight for the unknown class to zero, as follows. But I am still getting predictions for this class. Do you have any idea how to properly exclude one specific class?
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=logits, labels=tf.squeeze(annotation, squeeze_dims=[3]),
    name="entropy"))
weighted_losses = (loss * weights)
train_op = optimizer.minimize(weighted_losses,
                              var_list=tf.trainable_variables(),
                              global_step=tf.train.get_global_step())
I do not know pytorch, but I heard that there is something called "ignore_index" in its loss functions for this purpose, so you can ignore a specific class. If this is the right approach to my problem, do you know if there is something equivalent in tensorflow?
| For semantic segmentation you have 2 "special" labels: the one is "background" (usually 0), and the other one is "ignore" (usually 255 or -1).
"Background" is like all other semantic labels meaning "I know this pixel does not belong to any of the semantic categories I am working with". It is important for your model to correctly output "background" whenever applicable.
"Ignore" label is not a label that your model can predict - it is "outside" its range. This label only exists in the training annotation meaning "we were unsure how this pixel should be labeled, so just ignore it".
When there are "ignore" pixels in your target labels, your model cannot (and should not) output "ignore" labels. Nevertheless, your model should output something. The fact that this pixel is labeled "ignore" means that whatever your model outputs for that pixel will be ignored by the loss function (assuming you told the loss to ignore "ignore" pixels). Moreover, if your test/validation sets have "ignore" labels means that whatever your model outputs for these pixels, it would simply be ignored by the scoring mechanism and won't be counted as either a correct or incorrect prediction.
To summarize: even when the ground truth has "ignore" labels, the model cannot and should not output "ignore". It simply outputs whatever valid label it feels like and it is perfectly okay.
For tensorflow, you can check out this thread.
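If it helps, here is a rough tensorflow sketch of masking the "ignore" pixels out of the loss (eager/TF2 style with tiny stand-in tensors; 255 as the ignore value is only an assumption, use whatever marks your unknown class):
import tensorflow as tf

IGNORE_LABEL = 255
logits = tf.random.normal([1, 4, 4, 3])                              # stand-in for your logits
annotation = tf.constant([[[[0], [1], [255], [2]]] * 4], tf.int32)   # stand-in (1, 4, 4, 1) annotation

labels = tf.squeeze(annotation, axis=[3])
valid = tf.not_equal(labels, IGNORE_LABEL)
safe_labels = tf.where(valid, labels, tf.zeros_like(labels))          # any valid class for ignored pixels
per_pixel = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=safe_labels)
per_pixel = per_pixel * tf.cast(valid, per_pixel.dtype)               # drop ignored pixels from the loss
loss = tf.reduce_sum(per_pixel) / tf.maximum(tf.reduce_sum(tf.cast(valid, per_pixel.dtype)), 1.0)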
| https://stackoverflow.com/questions/54887933/ |
TextLMDataBunch Memory issue Language Model Fastai | I have a dataset with 45 million rows of data. I have three 6gb ram gpu. I am trying to train a language model on the data.
For that, I am trying to load the data as the fastai data bunch. But this part always fails because of the memory issue.
data_lm = TextLMDataBunch.from_df('./', train_df=df_trn,
valid_df=df_val, bs=10)
How do I handle this issue?
| When you use this function, your Dataframe is loaded in memory. Since you have a very big dataframe, this causes your memory error. Fastai handles tokenization with a chunksize, so you should still be able to tokenize your text.
Here are two things you should try :
Add a chunksize argument (the default value is 10k) to your TextLMDataBunch.from_df, so that the tokenization process needs less memory.
If this is not enough, I would suggest not to load your whole dataframe into memory. Unfortunately, even if you use TextLMDataBunch.from_folder, it just loads the full DataFrame and pass it to TextLMDataBunch.from_df, you might have to create your own DataBunch constructor. Feel free to comment if you need help on that.
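Concretely, the first suggestion would just mean adding chunksize to your existing call (the value below is only an example):
data_lm = TextLMDataBunch.from_df('./', train_df=df_trn, valid_df=df_val,
                                  bs=10, chunksize=2000)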
| https://stackoverflow.com/questions/54890488/ |
Using PyTorch on AWS Lambda | Has anyone had any luck being able to use PyTorch on AWS Lambda for feature extraction from images or just using the framework at all? I finally got PyTorch, numpy, and pillow zipped in a folder under the uncompressed size limit (which is actually around 262 MB) but I had to build PyTorch from source to do this. The problem I am having now is that Lambda has a very old version of gcc running on it (4.8.3) which is very buggy and missing whole header files altogether. I believe the Pytorch docs state you should be using at least gcc 7 or later but I'm hoping someone may have found a way around this? I built the source using gcc 7.5 but then when I tried to import torch Lambda obviously used it's installed version of 4.8.3 causing an error on import: Floating point exception (core dumped) which stems from the old version of gcc. Is there a possible solution around this? I've been at this for a day and a half now so any help would be great. I think the bottom line is I am facing this similar issue. Better yet does anyone have a Pytorch lambda layer I could use?
| I was able to utilize the below layers for using pytorch on AWS Lambda:
arn:aws:lambda:AWS_REGION:934676248949:layer:pytorchv1-py36:1 PyTorch 1.0.1
arn:aws:lambda:AWS_REGION:934676248949:layer:pytorchv1-py36:2 PyTorch 1.1.0
Found these on Fastai production deployment page, thanks to Matt McClean
| https://stackoverflow.com/questions/54893935/ |
RuntimeError when changing the values of specific parts of a `torch.Tensor` | Say I have a 3 dimentional tensor x initialized with zeros:
x = torch.zeros((2, 2, 2))
and an other 3 dimentional tensor y
y = torch.ones((2, 1, 2))
I am trying to change the values of the first line of x[0] and x[1] like this
x[:, 0, :] = y
but I get this error:
RuntimeError: expand(torch.FloatTensor{[2, 1, 2]}, size=[2, 2]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
It is as if the tensor y was getting squeezed somehow. Is there a way around this?
| I found a straight forward way to do it:
x[:, 0, :] = y[:, 0, :]
| https://stackoverflow.com/questions/54897612/ |
PyTorch Datasets: Converting entire Dataset to NumPy | I'm trying to convert the Torchvision MNIST train and test datasets into NumPy arrays but can't find documentation to actually perform the conversion.
My goal would be to take an entire dataset and convert it into a single NumPy array, preferably without iterating through the entire dataset.
I've looked at How do I turn a Pytorch Dataloader into a numpy array to display image data with matplotlib? but it doesn't address my issue.
So my question is, utilizing torch.utils.data.DataLoader, how would I go about converting the datasets (train/test) into two NumPy arrays such that all of the examples are present?
Note: I've left the batch size as the default of 1 for now; I could set it to 60,000 for train and 10,000 for test, but I'd prefer to not use magic numbers of that sort.
Thank you.
| If I understand you correctly, you want to get the whole train dataset of MNIST images (in total 60000 images, each image of size 1x28x28 array with 1 for color channel) as a numpy array of size (60000, 1, 28, 28)?
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
# Transform to normalized Tensors
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST('./MNIST/', train=True, transform=transform, download=True)
# test_dataset = datasets.MNIST('./MNIST/', train=False, transform=transform, download=True)
train_loader = DataLoader(train_dataset, batch_size=len(train_dataset))
# test_loader = DataLoader(test_dataset, batch_size=len(test_dataset))
train_dataset_array = next(iter(train_loader))[0].numpy()
# test_dataset_array = next(iter(test_loader))[0].numpy()
This is the result:
>>> train_dataset_array
array([[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]],
[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]],
[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]],
...,
[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]],
[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]],
[[[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
...,
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296],
[-0.42421296, -0.42421296, -0.42421296, ..., -0.42421296,
-0.42421296, -0.42421296]]]], dtype=float32)
Edit: You can also get the labels by next(iter(train_loader))[1].numpy(). Alternatively you can use train_dataset.data.numpy() and train_dataset.targets.numpy(), but note that the data will not be transformed by transform as is done when using the dataloader.
| https://stackoverflow.com/questions/54897646/ |
DataLoader Class Errors Pytorch | I am beginner pytorch user, and I am trying to use dataloader.
Actually, I am trying to implement this into my network but it takes a very long time to load. And so, I debugged my network to see if the network itself has the problem, but it turns out it has something to with my dataloader class. Here is the code:
from torch.utils.data import Dataset, DataLoader
import numpy as np
import pandas as pd
class DiabetesDataset(Dataset):
def __init__(self, csv):
self.xy = pd.read_csv(csv)
def __len__(self):
return len(self.xy)
def __getitem__(self, index):
self.x_data = torch.Tensor(xy.iloc[:, 0:-1].values)
self.y_data = torch.Tensor(xy.iloc[:, [-1]].values)
return self.x_data[index], self.y_data[index]
dataset = DiabetesDataset("trial.csv")
train_loader = DataLoader(dataset=dataset,
batch_size=1,
shuffle=True,
num_workers=2)
for a in train_loader:
print(a)
To verify that the dataloader causes all the delay, I created a dummy csv file with 2 columns of 1s and 2s, for a total of 10 samples for each columns. Then, I looped over the train_loader object, it has been more than 1 hr and it is still running, considering that the sample size is small and batch size is set to 1.
I am not sure what the error in my code is that is causing this issue.
Any comments/inputs are greatly appreciated!
| There are some bugs in your code - could you check if this works (it is working on my computer with your toy example):
from torch.utils.data import Dataset, DataLoader
import numpy as np
import pandas as pd
import torch
class DiabetesDataset(Dataset):
def __init__(self, csv):
self.xy = pd.read_csv(csv)
def __len__(self):
return len(self.xy)
def __getitem__(self, index):
x_data = torch.Tensor(self.xy.iloc[:, 0:-1].values)
y_data = torch.Tensor(self.xy.iloc[:, [-1]].values)
return x_data[index], y_data[index]
dataset = DiabetesDataset("trial.csv")
train_loader = DataLoader(
dataset=dataset,
batch_size=1,
shuffle=True,
num_workers=2)
if __name__ == '__main__':
for a in train_loader:
print(a)
Edit: Your code is not working because you are missing a self in the __getitem__ method (self.xy.iloc...) and because you do not have a if __name__ == '__main__ at the end of your script. For the second error, see RuntimeError on windows trying python multiprocessing
| https://stackoverflow.com/questions/54898145/ |
How to load images in the same folder in Pytorch? | I want to load all of the images from the folder /img and /mask respectively. The data structure can be shown as follows:
data
img
0.png
1.png
2.png
3.png
...
mask
label_0.png
label_1.png
label_2.png
...
Hoping for some help.
| If you want to load all the images from the two folders then you can try cv2
import os
import cv2

base = 'data'  # replace with the full path to your data folder

imgs, masks = [], []
for i in range(n):  # n = number of images in the img folder
    imgs.append(cv2.imread(os.path.join(base, 'img', f'{i}.png')))

for i in range(n):  # n = number of label images in the mask folder
    masks.append(cv2.imread(os.path.join(base, 'mask', f'label_{i}.png')))
| https://stackoverflow.com/questions/54898655/ |
how to modify rnn cells in pytorch? | If I want to change the compute rules in a RNN cell (e.g. GRU cell), what should I do?
I do not want to implement it via for or while loop considering the issue of efficiency.
I have viewed the source code of pytorch, but it seems that the major components of the rnn cells are implemented in C code which I cannot find and modify.
You can answer this question through an example: implement a GRU cell without using the existing version.
thank you ~
| Yes, you implement it "via for or while loop".
Since Pytorch 1.0 there is JIT (https://pytorch.org/docs/stable/jit.html) that works pretty well (probably better to use the latest git version of PyTorch because of recent improvements to JIT), and depending on your network and implementation it can be as fast as a native PyTorch C++ implementation (but still slower than CuDNN).
You can see example implementations at https://github.com/pytorch/benchmark/blob/master/rnns/fastrnns/custom_lstms.py
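If it helps, here is a minimal, self-contained sketch of a GRU cell written in plain PyTorch (the place where you would change the compute rules), plus the explicit time loop; it mirrors the standard GRU equations, is only an illustration rather than the library implementation, and can be wrapped with torch.jit.script for speed:
import torch
import torch.nn as nn

class CustomGRUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.weight_ih = nn.Parameter(torch.randn(3 * hidden_size, input_size) * 0.1)
        self.weight_hh = nn.Parameter(torch.randn(3 * hidden_size, hidden_size) * 0.1)
        self.bias_ih = nn.Parameter(torch.zeros(3 * hidden_size))
        self.bias_hh = nn.Parameter(torch.zeros(3 * hidden_size))

    def forward(self, x, h):
        gi = x @ self.weight_ih.t() + self.bias_ih
        gh = h @ self.weight_hh.t() + self.bias_hh
        i_r, i_z, i_n = gi.chunk(3, dim=1)
        h_r, h_z, h_n = gh.chunk(3, dim=1)
        r = torch.sigmoid(i_r + h_r)      # reset gate
        z = torch.sigmoid(i_z + h_z)      # update gate
        n = torch.tanh(i_n + r * h_n)     # candidate state (modify the rules here)
        return (1 - z) * n + z * h

cell = CustomGRUCell(input_size=8, hidden_size=16)
x = torch.randn(5, 3, 8)                  # (seq_len, batch, input_size)
h = torch.zeros(3, 16)
outputs = []
for t in range(x.size(0)):                # the explicit time loop
    h = cell(x[t], h)
    outputs.append(h)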
| https://stackoverflow.com/questions/54903778/ |
Understanding Feature Maps in Convolutional Layers (PyTorch) | I've got this segment of code in a discriminator network for MNIST:
nn.Conv2d(1, 64, 4, 2, 1),
From my understanding, there is 1 input channel (the MNIST image), then we apply a 4x4 kernel to the image in strides of 2 to produce 64 feature maps. Does this mean that we actually have 64 kernels at this layer? Because in order to get 64 different feature maps, we would need 64 separate kernels to convolve over the image?
Then after some ReLu, we have another convolution:
nn.Conv2d(64, 128, 4, 2, 1),
How do we get from 64 to 128? From my understanding of the first example, we have 64 seperate kernels that can produce 64 seperate feature maps. But here we go from 64 feature maps to 128 feature maps? Does that mean that we only have two kernels?
I hope someone can shine some light on whether my understanding is correct!
| Your understanding in the first example is correct, you have 64 different kernels to produce 64 different feature maps.
In the case of the second example, where the number of input channels is not one, you still have as "many" kernels as the number of output feature maps (so 128), each of which is trained on a linear combination of the input feature maps. So in your case each of these kernels would have 4x4x64 trainable weights.
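You can verify this directly by looking at the weight shapes:
import torch.nn as nn
conv1 = nn.Conv2d(1, 64, 4, 2, 1)
conv2 = nn.Conv2d(64, 128, 4, 2, 1)
print(conv1.weight.shape)   # torch.Size([64, 1, 4, 4])   -> 64 kernels of size 1x4x4
print(conv2.weight.shape)   # torch.Size([128, 64, 4, 4]) -> 128 kernels of size 64x4x4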
| https://stackoverflow.com/questions/54904608/ |
Custom weight initialisation causing error - pytorch | %reset -f
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data as data_utils
import torch.nn as nn
import torch.nn.functional as F
num_epochs = 20
x1 = np.array([0,0])
x2 = np.array([0,1])
x3 = np.array([1,0])
x4 = np.array([1,1])
num_epochs = 200
x = torch.tensor([x1,x2,x3,x4]).float()
y = torch.tensor([0,1,1,0]).long()
train = data_utils.TensorDataset(x,y)
train_loader = data_utils.DataLoader(train , batch_size=2 , shuffle=True)
device = 'cpu'
input_size = 2
hidden_size = 100
num_classes = 2
learning_rate = .0001
torch.manual_seed(24)
def weights_init(m):
m.weight.data.normal_(0.0, 1)
class NeuralNet(nn.Module) :
def __init__(self, input_size, hidden_size, num_classes) :
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(input_size , hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size , num_classes)
def forward(self, x) :
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
return out
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
model.apply(weights_init)
criterionCE = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for i in range(0 , 1) :
total_step = len(train_loader)
for epoch in range(num_epochs) :
for i,(images , labels) in enumerate(train_loader) :
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
loss = criterionCE(outputs , labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
outputs = model(x)
print(outputs.data.max(1)[1])
I'm using the following to initialize the weights:
def weights_init(m):
m.weight.data.normal_(0.0, 1)
But following error is thrown :
~/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 return modules[name]
534 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535 type(self).__name__, name))
536
537 def __setattr__(self, name, value):
AttributeError: 'ReLU' object has no attribute 'weight'
Is this the correct method to initialize the weights ?
Also, should the object be of type nn.Module, not ReLU?
| You are trying to set the weights of a weight-free layer (ReLU).
Inside weights_init, you should check the type of layers before initializing weights. For instance:
def weights_init(m):
if type(m) == nn.Linear:
m.weight.data.normal_(0.0, 1)
See How to initialize weights in PyTorch?.
| https://stackoverflow.com/questions/54911328/ |
What is the class definition of nn.Linear in PyTorch? | What is self.hidden in the following code?
import torch.nn as nn
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super().__init__()
self.hidden = nn.Linear(784, 256)
self.output = nn.Linear(256, 10)
def forward(self, x):
x = F.sigmoid(self.hidden(x))
x = F.softmax(self.output(x), dim=1)
return x
self.hidden is nn.Linear and it can take a tensor x as argument.
|
What is the class definition of nn.Linear in pytorch?
From documentation:
CLASS torch.nn.Linear(in_features, out_features, bias=True)
Applies a linear transformation to the incoming data: y = x*W^T + b
Parameters:
in_features – size of each input sample (i.e. size of x)
out_features – size of each output sample (i.e. size of y)
bias – If set to False, the layer will not learn an additive bias. Default: True
Note that the weights W have shape (out_features, in_features) and biases b have shape (out_features). They are initialized randomly and can be changed later (e.g. during the training of a Neural Network they are updated by some optimization algorithm).
In your Neural Network, the self.hidden = nn.Linear(784, 256) defines a hidden (meaning that it is in between of the input and output layers), fully connected linear layer, which takes input x of shape (batch_size, 784), where batch size is the number of inputs (each of size 784) which are passed to the network at once (as a single tensor), and transforms it by the linear equation y = x*W^T + b into a tensor y of shape (batch_size, 256). It is further transformed by the sigmoid function, x = F.sigmoid(self.hidden(x)) (which is not a part of the nn.Linear but an additional step).
Let's see a concrete example:
import torch
import torch.nn as nn
x = torch.tensor([[1.0, -1.0],
[0.0, 1.0],
[0.0, 0.0]])
in_features = x.shape[1] # = 2
out_features = 2
m = nn.Linear(in_features, out_features)
where x contains three inputs (i.e. the batch size is 3), x[0], x[1] and x[3], each of size 2, and the output is going to be of shape (batch size, out_features) = (3, 2).
The values of the parameters (weights and biases) are:
>>> m.weight
tensor([[-0.4500, 0.5856],
[-0.1807, -0.4963]])
>>> m.bias
tensor([ 0.2223, -0.6114])
(because they were initialized randomly, most likely you will get different values from the above)
The output is:
>>> y = m(x)
>>> y
tensor([[-0.8133, -0.2959],
[ 0.8079, -1.1077],
[ 0.2223, -0.6114]])
and (behind the scenes) it is computed as:
y = x.matmul(m.weight.t()) + m.bias # y = x*W^T + b
i.e.
y[i,j] == x[i,0] * m.weight[j,0] + x[i,1] * m.weight[j,1] + m.bias[j]
where i is in interval [0, batch_size) and j in [0, out_features).
| https://stackoverflow.com/questions/54916135/ |
Is it possible to freeze only certain embedding weights in the embedding layer in pytorch? | When using GloVe embedding in NLP tasks, some words from the dataset might not exist in GloVe. Therefore, we instantiate random weights for these unknown words.
Would it be possible to freeze weights gotten from GloVe, and train only the newly instantiated weights?
I am only aware that we can set:
model.embedding.weight.requires_grad = False
But this makes the new words untrainable..
Or are there better ways to extract semantics of words..
| 1. Divide embeddings into two separate objects
One approach would be to use two separate embeddings one for pretrained, another for the one to be trained.
The GloVe one should be frozen, while the one for which there is no pretrained representation would be taken from the trainable layer.
This can be done if you format your data so that tokens with a pretrained representation have smaller indices than tokens without a GloVe representation. Let's say your pretrained indices are in the range [0, 300], while those without representation are [301, 500]. I would go with something along those lines:
import numpy as np
import torch
class YourNetwork(torch.nn.Module):
def __init__(self, glove_embeddings: np.array, how_many_tokens_not_present: int):
self.pretrained_embedding = torch.nn.Embedding.from_pretrained(glove_embeddings)
self.trainable_embedding = torch.nn.Embedding(
how_many_tokens_not_present, glove_embeddings.shape[1]
)
# Rest of your network setup
def forward(self, batch):
# Which tokens in batch do not have representation, should have indices BIGGER
# than the pretrained ones, adjust your data creating function accordingly
mask = batch >= self.pretrained_embedding.num_embeddings
# You may want to optimize it, you could probably get away without copy, though
# I'm not currently sure how
pretrained_batch = batch.clone()
pretrained_batch[mask] = 0
embedded_batch = self.pretrained_embedding(pretrained_batch)
# Every token without representation has to be brought into appropriate range
batch -= self.pretrained_embedding.num_embeddings
# Zero out the ones which already have pretrained embedding
batch[~mask] = 0
non_pretrained_embedded_batch = self.trainable_embedding(batch)
# And finally change appropriate tokens from placeholder embedding created by
# pretrained into trainable embeddings.
embedded_batch[mask] = non_pretrained_embedded_batch[mask]
# Rest of your code
...
2. Zero gradients for specified tokens.
This one is a bit tricky, but I think it's pretty concise and easy to implement. So, if you obtain the indices of tokens which got no GloVe representation, you can explicitly zero their gradient after backprop, so those rows will not get updated.
import torch
embedding = torch.nn.Embedding(10, 3)
X = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
values = embedding(X)
loss = values.mean()
# Use whatever loss you want
loss.backward()
# Let's say those indices in your embedding are pretrained (have GloVe representation)
indices = torch.LongTensor([2, 4, 5])
print("Before zeroing out gradient")
print(embedding.weight.grad)
print("After zeroing out gradient")
embedding.weight.grad[indices] = 0
print(embedding.weight.grad)
And the output of the second approach:
Before zeroing out gradient
tensor([[0.0000, 0.0000, 0.0000],
[0.0417, 0.0417, 0.0417],
[0.0833, 0.0833, 0.0833],
[0.0417, 0.0417, 0.0417],
[0.0833, 0.0833, 0.0833],
[0.0417, 0.0417, 0.0417],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0417, 0.0417, 0.0417]])
After zeroing out gradient
tensor([[0.0000, 0.0000, 0.0000],
[0.0417, 0.0417, 0.0417],
[0.0000, 0.0000, 0.0000],
[0.0417, 0.0417, 0.0417],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000],
[0.0417, 0.0417, 0.0417]])
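In a full training loop the zeroing step goes between loss.backward() and optimizer.step(), so the frozen rows are never updated. A minimal sketch (the batch iterable and the loss are stand-ins for your real data loader and objective, and plain SGD is assumed so that a zeroed gradient really means no update):
import torch

embedding = torch.nn.Embedding(10, 3)
indices = torch.LongTensor([2, 4, 5])              # rows that already have GloVe vectors
optimizer = torch.optim.SGD(embedding.parameters(), lr=0.1)

for batch in [torch.LongTensor([[1, 2, 4, 5]])]:   # stand-in for your real batches
    optimizer.zero_grad()
    loss = embedding(batch).mean()                 # stand-in for your real loss
    loss.backward()
    embedding.weight.grad[indices] = 0             # freeze the pretrained rows
    optimizer.step()                               # only the remaining rows get updated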
| https://stackoverflow.com/questions/54924582/ |
Pytorch CNN error: Expected input batch_size (4) to match target batch_size (64) | I've been teaching myself this since November and any help on this would be really appreciated, thank you for looking, as I seem to be going round in circles. I am trying to use a Pytorch CNN example that was used with the Mnist dataset. Now I am trying to modify the CNN for facial key point recognition. I am using the Kaggle dataset (CSV) of 7048 training images and key points (15 key points per face) and 1783 test images. I split training dataset and converted the images to jpeg, made separate file for the key points (shape 15, 2). I have made dataset and data loader and can iterate through and display images and plot key points. When I run the CNN I am getting this error.
> Net(
(conv1): Conv2d(1, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv2_drop): Dropout2d(p=0.5)
(fc1): Linear(in_features=589824, out_features=100, bias=True)
(fc2): Linear(in_features=100, out_features=30, bias=True)
)
Data and target shape: torch.Size([64, 96, 96]) torch.Size([64, 15, 2])
Data and target shape: torch.Size([64, 1, 96, 96]) torch.Size([64, 15, 2])
Traceback (most recent call last):
File "/home/keith/PycharmProjects/FacialLandMarks/WorkOut.py", line 416, in <module>
main()
File "/home/keith/PycharmProjects/FacialLandMarks/WorkOut.py", line 412, in main
train(args, model, device, train_loader, optimizer, epoch)
File "/home/keith/PycharmProjects/FacialLandMarks/WorkOut.py", line 324, in train
loss = F.nll_loss(output, target)
File "/home/keith/Desktop/PycharmProjects/fkp/FacialLandMarks/lib/python3.6/site-packages/torch/nn/functional.py", line 1788, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (4) to match target batch_size (64).
Process finished with exit code 1
Here are some links I have read, I could not figure out the problem
but may help some one else.
https://github.com/pytorch/pytorch/issues/11762
How do I modify this PyTorch convolutional neural network to accept a 64 x 64 image and properly output predictions?
pytorch-convolutional-neural-network-to-accept-a-64-x-64-im
Pytorch Validating Model Error: Expected input batch_size (3) to match target batch_size (4)
model-error-expected-input-batch-size-3-to-match-target-ba
Here is my code:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=(2, 2))
self.conv2 = nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=(2, 2))
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(64 * 96 * 96, 100)
self.fc2 = nn.Linear(100, 30) # 30 is x and y key points
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 64 * 96 * 96)
# x = x.view(x.size(0), -1)
# x = x.view(x.size()[0], 30, -1)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, batch in enumerate(train_loader):
data = batch['image']
target = batch['key_points']
print('Data and target shape: ', data.shape, ' ', target.shape)
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
data = data.unsqueeze(1).float()
print('Data and target shape: ', data.shape, ' ', target.shape)
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# def test(args, model, device, test_loader):
# model.eval()
# test_loss = 0
# correct = 0
# with torch.no_grad():
# for data, target in test_loader:
# data, target = data.to(device), target.to(device)
# output = model(data)
# test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
# pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
# correct += pred.eq(target.view_as(pred)).sum().item()
#
# test_loss /= len(test_loader.dataset)
# print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
# test_loss, correct, len(test_loader.dataset),
# 100. * correct / len(test_loader.dataset)))
def main():
# Training settings
parser = argparse.ArgumentParser(description='Project')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N', # ======== epoch
help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
args = parser.parse_args()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_data_set = FaceKeyPointDataSet(csv_file='faces/Kep_points_and_id.csv',
root_dir='faces/',
transform=transforms.Compose([
# Rescale(96),
ToTensor()
]))
train_loader = DataLoader(train_data_set, batch_size=args.batch_size,
shuffle=True)
print('Number of samples: ', len(train_data_set))
print('Number of train_loader: ', len(train_loader))
model = Net().to(device)
print(model)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch)
# test(args, model, device, test_loader)
if __name__ == '__main__':
main()
| To understand what went wrong, you can print the shape after every step in forward:
# Input data
torch.Size([64, 1, 96, 96])
x = F.relu(F.max_pool2d(self.conv1(x), 2))
torch.Size([64, 32, 48, 48])
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
torch.Size([64, 64, 24, 24])
x = x.view(-1, 64 * 96 * 96)
torch.Size([4, 589824])
x = F.relu(self.fc1(x))
torch.Size([4, 100])
x = F.dropout(x, training=self.training)
torch.Size([4, 100])
x = self.fc2(x)
torch.Size([4, 30])
return F.log_softmax(x, dim=1)
torch.Size([4, 30])
Your maxpool2d layers reduce the height and width of your feature maps.
The 'view' should be x = x.view(-1, 64 * 24 * 24),
and the first linear layer should be self.fc1 = nn.Linear(64 * 24 * 24, 100).
this will give your output = model(data) final shape of torch.Size([64, 30])
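Concretely, here is a minimal standalone sketch of the corrected shapes (assuming 96x96 single-channel inputs and batch size 64, as in your print-outs):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2)
conv2 = nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2)
fc1 = nn.Linear(64 * 24 * 24, 100)         # 96 -> 48 -> 24 after the two max-pools

x = torch.randn(64, 1, 96, 96)
x = F.relu(F.max_pool2d(conv1(x), 2))
x = F.relu(F.max_pool2d(conv2(x), 2))
x = x.view(x.size(0), -1)                  # keeps the batch dimension at 64
print(x.shape)                             # torch.Size([64, 36864])
print(fc1(x).shape)                        # torch.Size([64, 100])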
But this code will still face a problem in calculating the Negative Log Likelihood Loss :
The input is expected to contain scores for each class. input has to
be a 2D Tensor of size (minibatch, C). This criterion expects a class
index (0 to C-1) as the target for each value of a 1D tensor of size
minibatch
where class indices are just labels: values representing a class, for example 0 for class0, 1 for class1, and so on.
Since your last layer outputs a log-softmax over 30 classes, I'm assuming those are the output classes you want to classify into, so the transformation for the target is:
target = target.view(64, -1) # gives 64x30, i.e. 30 values per sample
loss = F.nll_loss(output, torch.max(target, 1)[1]) # takes the index of the max amongst the 30 values as the class label
This works when the target is a probability distribution over the 30 classes (if it is not, you can apply a softmax to it first). The maximum of the 30 values then represents the class with the highest probability, which is exactly what your output predicts, and the NLL loss is computed between the two.
| https://stackoverflow.com/questions/54928638/ |
cudnn error while running a pyorch code on gpu | I have the following error:
Traceback (most recent call last):
File "odenet_mnist.py", line 343, in <module>
logits = model(x)
File "/home/subhashnerella/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/subhashnerella/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/subhashnerella/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/subhashnerella/.conda/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
This is not my code. I am just trying to run the code of a recent paper "Neural Ordinary Differential Equations"-by Chen et al.
Here is the link to the code.
gpu: nvidia 2080Ti
pytorch version:'1.0.1.post2'
cuda 9.0
python 3.7.2
cudnn:
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 4
#define CUDNN_PATCHLEVEL 2
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
I am new to pytorch. Why am i getting this error and how do i fix it?
| The RTX 2080 Ti needs CUDA 10 to work properly. Install the PyTorch binaries built with CUDA 10.
| https://stackoverflow.com/questions/54930268/ |
python - tensor : access a value | Given below is the output of the VGG16 model. The output of the command VGG16.classifier[6] shows Linear(in_features=25088, out_features=4096, bias=True). I'm not able to understand how this works. Also, how can I print the values of the linear layer?
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace)
(16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(18): ReLU(inplace)
(19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace)
(23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(25): ReLU(inplace)
(26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(27): ReLU(inplace)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace)
(30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
| VGG16 model is divided into two groups of layers named features and classifier. You can access them as VGG16.features and VGG16.classifier:
>>> VGG16 = torchvision.models.vgg16(pretrained=True)
>>> VGG16.classifier
Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace)
(2): Dropout(p=0.5)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace)
(5): Dropout(p=0.5)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
Further, you can access each layer of these groups of layers using indices. For example, to access the first layer of classifier portion of model, you can do:
>>> VGG16.classifier[0] # first layer of classifier portion
Linear(in_features=25088, out_features=4096, bias=True)
# and so on...
>>> VGG16.classifier[3] # fourth layer of classifier portion
Linear(in_features=4096, out_features=4096, bias=True)
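To actually print the values (weights and biases) of one of these linear layers, which is the second part of the question, a short sketch:
import torchvision

VGG16 = torchvision.models.vgg16(pretrained=True)
layer = VGG16.classifier[0]
print(layer.weight.shape)   # torch.Size([4096, 25088])
print(layer.bias.shape)     # torch.Size([4096])
print(layer.weight)         # the actual weight values of this linear layer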
| https://stackoverflow.com/questions/54934668/ |
Scatter homogenous list of values to PyTorch tensor | Consider the following list:
[[3], [1, 2], [4], [0], [2]]
And zeros tensor of size (5, 5)
I want to fill these indices according to their index in the list to the tensor with 1.
So, the expected output should be:
tensor([[0., 0., 0., 1., 0.],
[0., 1., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[1., 0., 0., 0., 0.],
[0., 0., 1., 0., 0.]])
What happened above is this:
at the index [0, 3] put 1 (the case for the first element in my list).
A very similar case is achievable through using Tensor.scatter_. However, since it takes a tensor as the argument (index); you cannot create a tensor from a list if it contains a sub-list with a different size than the other elements, which is the case with [1, 2] in my list (this is actually the problem).
The scatter method could be used if the list is all of same size as the following:
tensor.scatter_(1, torch.tensor(index), 1)
Numpy solutions are acceptable
| You can solve this by modifying your index list to have the same number of indices in each element.
max_length = max([len(l) for l in index])
index = [l + l[-1:] * (max_length - len(l)) for l in index]
This code will repeat the last element of each sub-list until they are all the same size. You can then pass it to the scatter_ function as you wrote in your question.
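Putting it together with the tensor from the question, a minimal end-to-end sketch:
import torch

index = [[3], [1, 2], [4], [0], [2]]
max_length = max(len(l) for l in index)
index = [l + l[-1:] * (max_length - len(l)) for l in index]   # [[3, 3], [1, 2], [4, 4], [0, 0], [2, 2]]

tensor = torch.zeros(5, 5)
tensor.scatter_(1, torch.tensor(index), 1)
print(tensor)   # matches the expected output; a duplicated index just writes the same 1 twice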
| https://stackoverflow.com/questions/54935503/ |
How to access pytorch model parameters by index | If I have network with let's say 10 layers including biases, how can I access its i'th layer parameters just by index?
Currently, what I am doing is something like this
for parameter in myModel.parameters():
parameter.data /= 5
How could I access parameter.data with an index? For example I'd like to access 9th layer without iterating, such as myModel.parameter.data[8] or something similar.
| simply do a :
layers=[x.data for x in myModel.parameters()]
Now it will be a list of weights and biases, in order to access weights of the first layer you can do:
print(layers[0])
in order to access biases of the first layer:
print(layers[1])
and so on.
Remember that if bias is False for any particular layer, it will have no bias entry at all; so, for example, if bias is False for the second layer, then layers[3] will actually give the weights of the third layer.
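If you want a handle that does not shift around when some layers lack a bias, you can also look parameters up by name instead of position. A small sketch with a hypothetical model:
import torch.nn as nn

myModel = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2, bias=False))
params = dict(myModel.named_parameters())
print(list(params.keys()))   # ['0.weight', '0.bias', '2.weight'] - the bias-less layer simply has no bias entry
print(params['2.weight'])    # access a specific parameter by its name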
| https://stackoverflow.com/questions/54942416/ |
Understanding ELMo's number of presentations | I am trying my hand at ELMo by simply using it as part of a larger PyTorch model. A basic example is given here.
This is a torch.nn.Module subclass that computes any number of ELMo
representations and introduces trainable scalar weights for each. For
example, this code snippet computes two layers of representations (as
in the SNLI and SQuAD models from our paper):
from allennlp.modules.elmo import Elmo, batch_to_ids
options_file = "https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_options.json"
weight_file = "https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5"
# Compute two different representation for each token.
# Each representation is a linear weighted combination for the
# 3 layers in ELMo (i.e., charcnn, the outputs of the two BiLSTM))
elmo = Elmo(options_file, weight_file, 2, dropout=0)
# use batch_to_ids to convert sentences to character ids
sentences = [['First', 'sentence', '.'], ['Another', '.']]
character_ids = batch_to_ids(sentences)
embeddings = elmo(character_ids)
# embeddings['elmo_representations'] is length two list of tensors.
# Each element contains one layer of ELMo representations with shape
# (2, 3, 1024).
# 2 - the batch size
# 3 - the sequence length of the batch
# 1024 - the length of each ELMo vector
My question concerns the 'representations'. Can you compare them to normal word2vec output layers? You can choose how many representations ELMo will give back (increasing an n-th dimension), but what is the difference between these generated representations and what is their typical use?
To give you an idea, for the above code, embeddings['elmo_representations'] returns a list of two items (the two representation layers) but they are identical.
In short, how can one define the 'representations' in ELMo?
| See Section 3.2 of the original paper.
ELMo is a task specific combination of the intermediate layer representations in the biLM. For each token, a L-layer biLM computes a set of 2L+ 1representations
Previously in Section 3.1, it is said that:
Recent state-of-the-art neural language models compute a context-independent token representation (via token embeddings or a CNN over characters) then pass it through L layers of forward LSTMs. At each position k, each LSTM layer outputs a context-dependent representation. The top layer LSTM output is used to predict the next token with a Softmax layer.
To answer your question, the representations are these L LSTM-based context-dependent representations.
| https://stackoverflow.com/questions/54947258/ |
Pytorch torchvision MNIST download | I'm new to Pytorch and torchvision. I followed a tutorial that is roughly a year old where he tried to download mnist via python and torchvision.
This is how:
import torch
from torchvision import datasets, transforms
kwargs = {'num_workers': 1, 'pin_memory': True}
train = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size=64, shuffle=True, **kwargs)
test = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False,
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])),
batch_size=64, shuffle=True, **kwargs)
Now my problem is that I get this error:
Traceback (most recent call last):
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data\MNIST\raw\train-images-idx3-ubyte.gz
File "C:/Users/Nico/PycharmProjects/PyTorch/mnist.py", line 13, in
transforms.Normalize((0.1307,), (0.3081,))])),
File "C:\Users\Nico\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\datasets\mnist.py", line 68, in init
self.download()
File "C:\Users\Nico\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\datasets\mnist.py", line 143, in download
download_url(url, root=self.raw_folder, filename=filename, md5=None)
File "C:\Users\Nico\AppData\Local\Programs\Python\Python37\lib\site-packages\torchvision\datasets\utils.py", line 73, in download_url
reporthook=gen_bar_updater(tqdm())
TypeError: init() missing 1 required positional argument: 'total'
Do any of you guys know what I have to change, or how I can download/use them? As I said earlier I'm new to it and I don't have any clue.
I hope you guys can help me, thanks in advance.
Greetings Nico aka. Myridor
| So the problem wasn't the code or the naming or anything.
It was the torchvision version. I had 0.2.2.post2 and it worked with 0.2.1!
| https://stackoverflow.com/questions/54950428/ |
How to sort a dataset in pytorch | I would like to sort my dataset by the numerical values in the labels.
Is there a function from pytorch to handle this efficiently?
my dataset type() is in this form:
<class 'torchvision.datasets.mnist.MNIST'>
| There is no generic way to do this efficiently, as the dataset class only implements a __getitem__ and a __len__ method, and doesn't necessarily have any "stored" information about the labels.
In the case of the MNIST dataset class, however, you can sort the dataset using the label list.
For example, to list the indices that have the label 5:
mnist = torchvision.datasets.mnist.MNIST("/")
labels = mnist.train_labels
fives = (labels == 5).nonzero()
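If you want the whole dataset ordered by label rather than just filtered, one option (a sketch, assuming the same mnist object as above; note that newer torchvision versions call the attribute targets instead of train_labels) is to sort the labels and wrap the dataset in a Subset:
from torch.utils.data import Subset

sorted_indices = labels.sort()[1]                      # indices that put the labels in ascending order
sorted_mnist = Subset(mnist, sorted_indices.tolist())  # iterates samples in label order 0, 0, ..., 9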
| https://stackoverflow.com/questions/54964330/ |
Pytorch Indexing | I have a tensor [[1,2],[4,5],[7,8]] and a tensor with indices [0,1,0].
I want to apply them to second dimension so that it returns: [1,5,8].
How should I do that?
Thanks!
| import torch
arr=torch.tensor([[1,2],[4,5],[7,8]])
indices_arr=torch.tensor([0,1,0])
ret=arr[[0,1,2],indices_arr]
# print(ret)
# tensor([1, 5, 7])
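A version that does not hard-code the row indices, in case the first dimension is not known in advance (same arr and indices_arr as above):
ret = arr[torch.arange(arr.size(0)), indices_arr]
# tensor([1, 5, 7])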
| https://stackoverflow.com/questions/54964521/ |
How does pytorch backprop through argmax? | I'm building Kmeans in pytorch using gradient descent on centroid locations, instead of expectation-maximisation. Loss is the sum of square distances of each point to its nearest centroid. To identify which centroid is nearest to each point, I use argmin, which is not differentiable everywhere. However, pytorch is still able to backprop and update weights (centroid locations), giving similar performance to sklearn kmeans on the data.
Any ideas how this is working, or how I can figure this out within pytorch? Discussion on pytorch github suggests argmax is not differentiable: https://github.com/pytorch/pytorch/issues/1339.
Example code below (on random pts):
import numpy as np
import torch
num_pts, batch_size, n_dims, num_clusters, lr = 1000, 100, 200, 20, 1e-5
# generate random points
vector = torch.from_numpy(np.random.rand(num_pts, n_dims)).float()
# randomly pick starting centroids
idx = np.random.choice(num_pts, size=num_clusters)
kmean_centroids = vector[idx][:,None,:] # [num_clusters,1,n_dims]
kmean_centroids = torch.tensor(kmean_centroids, requires_grad=True)
for t in range(4001):
# get batch
idx = np.random.choice(num_pts, size=batch_size)
vector_batch = vector[idx]
distances = vector_batch - kmean_centroids # [num_clusters, #pts, #dims]
distances = torch.sum(distances**2, dim=2) # [num_clusters, #pts]
# argmin
membership = torch.min(distances, 0)[1] # [#pts]
# cluster distances
cluster_loss = 0
for i in range(num_clusters):
subset = torch.transpose(distances,0,1)[membership==i]
if len(subset)!=0: # to prevent NaN
cluster_loss += torch.sum(subset[:,i])
cluster_loss.backward()
print(cluster_loss.item())
with torch.no_grad():
kmean_centroids -= lr * kmean_centroids.grad
kmean_centroids.grad.zero_()
| As alvas noted in the comments, argmax is not differentiable. However, once you compute it and assign each datapoint to a cluster, the derivative of loss with respect to the location of these clusters is well-defined. This is what your algorithm does.
Why does it work? If you had only one cluster (so that the argmax operation didn't matter), your loss function would be quadratic, with minimum at the mean of the data points. Now with multiple clusters, you can see that your loss function is piecewise (in higher dimensions think volumewise) quadratic - for any set of centroids [C1, C2, C3, ...] each data point is assigned to some centroid CN and the loss is locally quadratic. The extent of this locality is given by all alternative centroids [C1', C2', C3', ...] for which the assignment coming from argmax remains the same; within this region the argmax can be treated as a constant, rather than a function and thus the derivative of loss is well-defined.
Now, in reality, it's unlikely you can treat argmax as constant, but you can still treat the naive "argmax-is-a-constant" derivative as pointing approximately towards a minimum, because the majority of data points are likely to indeed belong to the same cluster between iterations. And once you get close enough to a local minimum such that the points no longer change their assignments, the process can converge to a minimum.
Another, more theoretical way to look at it is that you're doing an approximation of expectation maximization. Normally, you would have the "compute assignments" step, which is mirrored by argmax, and the "minimize" step which boils down to finding the minimizing cluster centers given the current assignments. The minimum is given by d(loss)/d([C1, C2, ...]) == 0, which for a quadratic loss is given analytically by the means of data points within each cluster. In your implementation, you're solving the same equation but with a gradient descent step. In fact, if you used a 2nd order (Newton) update scheme instead of 1st order gradient descent, you would be implicitly reproducing exactly the baseline EM scheme.
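To see this concretely, here is a tiny self-contained sketch (not your k-means code, just an illustration) showing that autograd treats the argmin indices as constants while still giving a well-defined gradient for the centroid locations:
import torch

c = torch.tensor([[0.0], [10.0]], requires_grad=True)    # two 1-D centroid locations
x = torch.tensor([[1.0], [2.0]])                          # two data points

d = ((x[None, :, :] - c[:, None, :]) ** 2).sum(-1)        # squared distances [num_clusters, num_pts]
membership = d.min(0)[1]                                  # argmin: no gradient flows through these indices
loss = d[membership, torch.arange(x.size(0))].sum()       # distance of each point to its assigned centroid
loss.backward()
print(c.grad)                                             # well-defined gradient w.r.t. centroid locations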
| https://stackoverflow.com/questions/54969646/ |
How to add augmented images to original dataset using Pytorch? | From my understanding, RandomHorizontalFlip etc. replace image rather than adding new images to dataset. How do I increase my dataset size by adding augmented images to dataset using PyTorch?
I have gone through the links posted & haven't found a solution. I want to increase the data size by adding flipped/rotated images - but the post addresses the in-place processing of images.
Thanks.
| Why do you want it? Generally speaking, it is enough to increase the number of epochs over the dataset, and your model will see the original and the augmented version of every image at least once (assuming a relatively high number of epochs).
Explanation:
For instance, if your augmentation has a chance of 50% to be applied, after 100 epochs, for every sample you will get ~50 occurrences of the original image and ~50 augmented ones. So, increasing the dataset size is equivalent to adding epochs, but (maybe) less efficient in terms of memory (you would need to keep the extra images in memory for high performance).
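That said, if you really want the dataset object itself to be larger, one common pattern (a sketch, using MNIST only as a stand-in for your own dataset) is to build two instances with different transforms and concatenate them:
from torchvision import datasets, transforms
from torch.utils.data import ConcatDataset

plain = datasets.MNIST('data', train=True, download=True,
                       transform=transforms.ToTensor())
flipped = datasets.MNIST('data', train=True, download=True,
                         transform=transforms.Compose([
                             transforms.RandomHorizontalFlip(p=1.0),
                             transforms.ToTensor()]))
bigger = ConcatDataset([plain, flipped])   # twice the original length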
| https://stackoverflow.com/questions/54969705/ |
RuntimeError: size mismatch, m1: [4 x 3136], m2: [64 x 5] at c:\a\w\1\s\tmp_conda_3.7_1 | I used Python 3, and when I insert a random crop transform of size 224 it gives a size mismatch error.
Here is my code.
What did I do wrong?
| Your code makes variations on resnet: you changed the number of channels, the number of bottlenecks at each "level", and you removed a "level" entirely. As a result, the dimension of the feature map you have at the end of layer3 is not 64: you have a larger spatial dimension than you anticipated by the nn.AvgPool2d(8). The error message you got actually tells you that the output of level3 is of shape 64x56x56 and after avg pooling with kernel and stride 8 you have 64x7x7=3136 dimensional feature vector, instead of only 64 you are expecting.
What can you do?
As opposed to "standard" resnet, you removed stride from conv1 and you do not have max pool after conv1. Moreover, you removed layer4 which also have a stride. Therefore, You can add pooling to your net to reduce the spatial dimensions of layer3.
Alternatively, you can replace nn.AvgPool2d(8) with nn.AdaptiveAvgPool2d([1, 1]), an avg pool that outputs a single value per channel regardless of the spatial dimensions of the input feature map.
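A small standalone sketch of the second option (the 56x56 feature map size is taken from the explanation above):
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((1, 1))
feat = torch.randn(4, 64, 56, 56)   # e.g. the 64 x 56 x 56 output of layer3
print(pool(feat).shape)             # torch.Size([4, 64, 1, 1]) -> 64 features after flattening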
| https://stackoverflow.com/questions/54976741/ |
What is loss_cls and loss_bbox and why are they always zero in training | I'm trying to train a custom dataset on using faster_rcnn using the Pytorch implementation of Detectron here. I have made changes to the dataset and configuration according to the guidelines in the repo.
The training process is carried out successfully, but the loss_cls and loss_bbox values are 0 from the beginning and even though the training is completed, final output cannot be used to make an evaluation or an inference.
I would like to know what these two mean and how to get those values to change during the training. The exact model I'm using here is e2e_faster_rcnn_R-50-FPN_1x
Any help regarding this would be appreciated. I'm using Ubuntu 16.04 with Python 3.6 on Anaconda, CUDA 9, cuDNN 7.
| What are the two losses?
When training a multi-object detector, you usually have (at least) two types of losses:
loss_bbox: a loss that measures how "tight" the predicted bounding boxes are to the ground truth object (usually a regression loss, L1, smoothL1 etc.).
loss_cls: a loss that measures the correctness of the classification of each predicted bounding box: each box may contain an object class, or a "background". This loss is usually called cross entropy loss.
Why are the losses always zero?
When training a detector, the model predicts quite a few (~1K) possible boxes per image. Most of them are empty (i.e. belong to the "background" class). The loss function associates each of the predicted boxes with the ground truth box annotations of the image.
If a predicted box has a significant overlap with a ground truth box then loss_bbox and loss_cls are computed to see how well the model is able to predict the ground truth box.
On the other hand, if a predicted box has no overlap with any ground truth box, then only loss_cls is computed for the "background" class.
However, if there is only a very partial overlap with the ground truth, the predicted box is "discarded" and no loss is computed. I suspect, for some reason, this is the case for your training session.
I suggest you check the parameters that determine the association between predicted boxes and ground truth annotations. Moreover, look at the parameters of your "anchors": these parameters determine the scale and aspect ratios of the predicted boxes.
Why feature extraction of text don't return all possible feature names? | Here is the snippet of code from the book
Natural Language Processing with PyTorch:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import seaborn as sns
corpus = ['Time flies flies like an arrow.', 'Fruit flies like a banana.']
one_hot_vectorizer = CountVectorizer()
one_hot = one_hot_vectorizer.fit_transform(corpus).toarray()
vocab = one_hot_vectorizer.get_feature_names()
The value of vocab :
vocab = ['an', 'arrow', 'banana', 'flies', 'fruit', 'like', 'time']
Why is not there an 'a' among the extracted feature names? If it is excluded as too common word automatically, why "an" is not excluded for the same reasons? How to make .get_feature_names() filter other words as well?
| Very good question! Though this is not a pytorch question but a sklearn one =)
I encourage you to first go through this https://www.kaggle.com/alvations/basic-nlp-with-nltk, esp. the "Vectorization with sklearn" section
TL;DR
If we use the CountVectorizer,
from io import StringIO
from sklearn.feature_extraction.text import CountVectorizer
sent1 = "The quick brown fox jumps over the lazy brown dog."
sent2 = "Mr brown jumps over the lazy fox."
with StringIO('\n'.join([sent1, sent2])) as fin:
# Create the vectorizer
count_vect = CountVectorizer()
count_vect.fit_transform(fin)
# We can check the vocabulary in our vectorizer
# It's a dictionary where the words are the keys and
# The values are the IDs given to each word.
print(count_vect.vocabulary_)
[out]:
{'brown': 0,
'dog': 1,
'fox': 2,
'jumps': 3,
'lazy': 4,
'mr': 5,
'over': 6,
'quick': 7,
'the': 8}
We didn't tell the vectorizer to remove punctuation and tokenize and lowercase, how did they do it?
Also, the is in the vocabulary, it's a stopword, we want it gone...
And jumps isn't stemmed or lemmatized!
If we look at the documentation of the CountVectorizer in sklearn, we see:
CountVectorizer(
input=’content’, encoding=’utf-8’,
decode_error=’strict’, strip_accents=None,
lowercase=True, preprocessor=None,
tokenizer=None, stop_words=None,
token_pattern=’(?u)\b\w\w+\b’, ngram_range=(1, 1),
analyzer=’word’, max_df=1.0, min_df=1,
max_features=None, vocabulary=None,
binary=False, dtype=<class ‘numpy.int64’>)
And more specifically:
analyzer : string, {‘word’, ‘char’, ‘char_wb’} or callable
Whether the feature should be made of word or character n-grams.
Option ‘char_wb’ creates character n-grams only from text inside word
boundaries; n-grams at the edges of words are padded with space. If a
callable is passed it is used to extract the sequence of features out
of the raw, unprocessed input.
preprocessor : callable or None (default)
Override the preprocessing (string transformation) stage while
preserving the tokenizing and n-grams generation steps.
tokenizer : callable or None (default)
Override the string tokenization step while preserving the
preprocessing and n-grams generation steps. Only applies if analyzer
== 'word'.
stop_words : string {‘english’}, list, or None (default)
If ‘english’, a built-in stop word list for English is used. If a
list, that list is assumed to contain stop words, all of which will be
removed from the resulting tokens. Only applies if analyzer == 'word'.
If None, no stop words will be used.
lowercase : boolean, True by default
Convert all characters to lowercase before tokenizing.
But in the case of the example from http://shop.oreilly.com/product/0636920063445.do, it's not exactly the stopwords causing the issue.
If we explicitly use the English stopwords from https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/stop_words.py
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> one_hot_vectorizer = CountVectorizer(stop_words='english')
>>> one_hot_vectorizer.fit(corpus)
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words='english',
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)
>>> one_hot_vectorizer.get_feature_names()
['arrow', 'banana', 'flies', 'fruit', 'like', 'time']
So what exactly is happening in the case where the stop_words argument is left as None?
Lets try an experiment where I add some single character words to the input:
>>> corpus = ['Time flies flies like an arrow 1 2 3.', 'Fruit flies like a banana x y z.']
>>> one_hot_vectorizer = CountVectorizer()
>>> one_hot_vectorizer.fit(corpus)
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)
>>> one_hot_vectorizer.get_feature_names()
['an', 'arrow', 'banana', 'flies', 'fruit', 'like', 'time']
They're all gone again!!!
Now if we dig into the docs, https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L738
token_pattern : string
Regular expression denoting what constitutes a "token", only used
if analyzer == 'word'. The default regexp select tokens of 2
or more alphanumeric characters (punctuation is completely ignored
and always treated as a token separator).
Ah ha, that's why all the single character tokens get deleted!
The default pattern for CountVectorizer is token_pattern=r"(?u)\b\w\w+\b"; to enable it to accept single characters, you can try:
>>> one_hot_vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
>>> one_hot_vectorizer.fit(corpus)
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8', input='content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w+\\b', tokenizer=None,
vocabulary=None)
>>> one_hot_vectorizer.get_feature_names()
['1', '2', '3', 'a', 'an', 'arrow', 'banana', 'flies', 'fruit', 'like', 'time', 'x', 'y', 'z']
| https://stackoverflow.com/questions/54977756/ |