instruction | input | output | source
---|---|---|---|
Understanding accumulated gradients in PyTorch | I am trying to understand the inner workings of gradient accumulation in PyTorch. My question is somewhat related to these two:
Why do we need to call zero_grad() in PyTorch?
Why do we need to explicitly call zero_grad()?
Comments to the accepted answer to the second question suggest that accumulated gradients can be used if a minibatch is too large to perform a gradient update in a single forward pass, and thus has to be split into multiple sub-batches.
Consider the following toy example:
import numpy as np
import torch
class ExampleLinear(torch.nn.Module):
def __init__(self):
super().__init__()
# Initialize the weight at 1
self.weight = torch.nn.Parameter(torch.Tensor([1]).float(),
requires_grad=True)
def forward(self, x):
return self.weight * x
if __name__ == "__main__":
# Example 1
model = ExampleLinear()
# Generate some data
x = torch.from_numpy(np.array([4, 2])).float()
y = 2 * x
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
y_hat = model(x) # forward pass
loss = (y - y_hat) ** 2
loss = loss.mean() # MSE loss
loss.backward() # backward pass
optimizer.step() # weight update
print(model.weight.grad) # tensor([-20.])
print(model.weight) # tensor([1.2000])
Which is exactly the result one would expect. Now assume that we want to process the dataset sample-by-sample utilizing gradient accumulation:
# Example 2: MSE sample-by-sample
model2 = ExampleLinear()
optimizer = torch.optim.SGD(model2.parameters(), lr=0.01)
# Compute loss sample-by-sample, then average it over all samples
loss = []
for k in range(len(y)):
y_hat = model2(x[k])
loss.append((y[k] - y_hat) ** 2)
loss = sum(loss) / len(y)
loss.backward() # backward pass
optimizer.step() # weight update
print(model2.weight.grad) # tensor([-20.])
print(model2.weight) # tensor([1.2000])
Again as expected, the gradient is calculated when the .backward() method is called.
Finally to my question: what exactly happens 'under the hood'?
My understanding is that the computational graph is dynamically updated going from <PowBackward> to <AddBackward> <DivBackward> operations for the loss variable, and that no information about the data used for each forward pass is retained anywhere except for the loss tensor which can be updated until the backward pass.
Are there any caveats to the reasoning in the above paragraph? Lastly, are there any best practices to follow when using gradient accumulation (i.e. can the approach I use in Example 2 backfire somehow)?
| You are not actually accumulating gradients. Just leaving off optimizer.zero_grad() has no effect if you have a single .backward() call, as the gradients are already zero to begin with (technically None but they will be
automatically initialised to zero).
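You can check this for yourself with the ExampleLinear from your snippet (a quick sketch):
model_check = ExampleLinear()
print(model_check.weight.grad)  # None - nothing has been accumulated yet
model_check(torch.tensor([1.0])).sum().backward()
print(model_check.weight.grad)  # tensor([1.]) - d(w*x)/dw = x = 1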
The only difference between your two versions is how you calculate the final loss. The for loop of the second example does the same calculations as PyTorch does in the first example, but you do them individually, and PyTorch cannot optimise (parallelise and vectorise) your for loop, which makes an especially staggering difference on GPUs, provided that the tensors aren't tiny.
Before getting to gradient accumulation, let's start with your question:
Finally to my question: what exactly happens 'under the hood'?
Every operation on tensors is tracked in a computational graph if and only if one of the operands is already part of a computational graph. When you set requires_grad=True of a tensor, it creates a computational graph with a single vertex, the tensor itself, which will remain a leaf in the graph. Any operation with that tensor will create a new vertex, which is the result of the operation, hence there is an edge from the operands to it, tracking the operation that was performed.
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(4.0)
c = a + b # => tensor(6., grad_fn=<AddBackward0>)
a.requires_grad # => True
a.is_leaf # => True
b.requires_grad # => False
b.is_leaf # => True
c.requires_grad # => True
c.is_leaf # => False
Every intermediate tensor automatically requires gradients and has a grad_fn, which is the function to calculate the partial derivatives with respect to its inputs. Thanks to the chain rule, we can traverse the whole graph in reverse order to calculate the derivatives with respect to every single leaf, which are the parameters we want to optimise. That's the idea of backpropagation, also known as reverse mode differentiation. For more details I recommend reading Calculus on Computational Graphs: Backpropagation.
PyTorch uses that exact idea, when you call loss.backward() it traverses the graph in reverse order, starting from loss, and calculates the derivatives for each vertex. Whenever a leaf is reached, the calculated derivative for that tensor is stored in its .grad attribute.
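Continuing the small a/b/c example from above, you can see this in action:
c.backward()
print(a.grad)  # tensor(1.) - dc/da = 1, stored on the leaf a
print(b.grad)  # None - b was created without requires_grad, so nothing is stored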
In your first example, that would lead to:
MeanBackward -> PowBackward -> SubBackward -> MulBackward
The second example is almost identical, except that you calculate the mean manually, and instead of having a single path for the loss, you have multiple paths for each element of the loss calculation. To clarify, the single path also calculates the derivatives of each element, but internally, which again opens up the possibilities for some optimisations.
# Example 1
loss = (y - y_hat) ** 2
# => tensor([16., 4.], grad_fn=<PowBackward0>)
# Example 2
loss = []
for k in range(len(y)):
y_hat = model2(x[k])
loss.append((y[k] - y_hat) ** 2)
loss
# => [tensor([16.], grad_fn=<PowBackward0>), tensor([4.], grad_fn=<PowBackward0>)]
In either case a single graph is created that is backpropagated exactly once, that's the reason it's not considered gradient accumulation.
Gradient Accumulation
Gradient accumulation refers to the situation where multiple backward passes are performed before updating the parameters. The goal is to have the same model parameters for multiple inputs (batches) and then update the model's parameters based on all these batches, instead of performing an update after every single batch.
Let's revisit your example. x has size [2], that's the size of our entire dataset. For some reason, we need to calculate the gradients based on the whole dataset. That is naturally the case when using a batch size of 2, since we would have the whole dataset at once. But what happens if we can only have batches of size 1? We could run them individually and update the model after each batch as usual, but then we don't calculate the gradients over the whole dataset.
What we need to do is run each sample individually with the same model parameters and calculate the gradients without updating the model. Now you might be thinking, isn't that what you did in the second version? Almost, but not quite, and there is a crucial problem in your version, namely that you are using the same amount of memory as in the first version, because you have the same calculations and therefore the same number of values in the computational graph.
How do we free memory? We need to get rid of the tensors of the previous batch and also the computational graph, because that uses a lot of memory to keep track of everything that's necessary for the backpropagation. The computational graph is automatically destroyed when .backward() is called (unless retain_graph=True is specified).
def calculate_loss(x: torch.Tensor) -> torch.Tensor:
y = 2 * x
y_hat = model(x)
loss = (y - y_hat) ** 2
return loss.mean()
# With multiple batches of size 1
batches = [torch.tensor([4.0]), torch.tensor([2.0])]
optimizer.zero_grad()
for i, batch in enumerate(batches):
# The loss needs to be scaled, because the mean should be taken across the whole
# dataset, which requires the loss to be divided by the number of batches.
loss = calculate_loss(batch) / len(batches)
loss.backward()
print(f"Batch size 1 (batch {i}) - grad: {model.weight.grad}")
print(f"Batch size 1 (batch {i}) - weight: {model.weight}")
# Updating the model only after all batches
optimizer.step()
print(f"Batch size 1 (final) - grad: {model.weight.grad}")
print(f"Batch size 1 (final) - weight: {model.weight}")
Output (I removed the Parameter containing messages for readability):
Batch size 1 (batch 0) - grad: tensor([-16.])
Batch size 1 (batch 0) - weight: tensor([1.], requires_grad=True)
Batch size 1 (batch 1) - grad: tensor([-20.])
Batch size 1 (batch 1) - weight: tensor([1.], requires_grad=True)
Batch size 1 (final) - grad: tensor([-20.])
Batch size 1 (final) - weight: tensor([1.2000], requires_grad=True)
As you can see, the model kept the same parameters for all batches while the gradients were accumulated, and there is a single update at the end. Note that the loss needs to be scaled per batch, in order to have the same significance over the whole dataset as if you used a single batch.
While in this example, the whole dataset is used before performing the update, you can easily change that to update the parameters after a certain number of batches, but you have to remember to zero out the gradients after an optimiser step was taken. The general recipe would be:
accumulation_steps = 10
for i, batch in enumerate(batches):
# Scale the loss to the mean of the accumulated batch size
loss = calculate_loss(batch) / accumulation_steps
loss.backward()
if (i + 1) % accumulation_steps == 0:
optimizer.step()
# Reset gradients, for the next accumulated batches
optimizer.zero_grad()
You can find that recipe and more techniques for working with large batch sizes in HuggingFace - Training Neural Nets on Larger Batches: Practical Tips for 1-GPU, Multi-GPU & Distributed setups.
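One caveat I would add (my own note, not from the linked article): if the number of batches is not divisible by accumulation_steps, the gradients of the trailing batches are never applied. A simple guard is to also step on the last batch:
for i, batch in enumerate(batches):
    loss = calculate_loss(batch) / accumulation_steps
    loss.backward()
    if (i + 1) % accumulation_steps == 0 or (i + 1) == len(batches):
        optimizer.step()
        # Reset gradients, for the next accumulated batches
        optimizer.zero_grad()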
| https://stackoverflow.com/questions/62067400/ |
IndexError: The shape of the mask [1, 1682] at index 0 does not match the shape of the indexed tensor [100, 1682] at index 0 | I'm currently doing a Deep Learning course on Udemy.
I am currently designing a Restricted Boltzmann Machine;
my training runs perfectly, but I ended up with this error while testing:
IndexError: The shape of the mask [1, 1682] at index 0 does not match the shape of the indexed tensor [100, 1682] at index 0
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
# Importing dataset
movies = pd.read_csv('ml-1m/movies.dat',sep='::',header = None,
engine='python',encoding='latin-1')
users = pd.read_csv('ml-1m/users.dat',sep='::',header = None,
engine='python',encoding='latin-1')
ratings = pd.read_csv('ml-1m/ratings.dat',sep='::',header = None,
engine='python',encoding='latin-1')
# preparing training and test set
training_set = pd.read_csv('ml-100k/u1.base',delimiter='\t')
training_set = np.array(training_set,dtype='int')
test_set = pd.read_csv('ml-100k/u1.test',delimiter='\t')
test_set = np.array(test_set,dtype='int')
#Getting the no of users and movies
nb_users = int(max(max(training_set[:,0]),max(test_set[:,0]))) #max out of both
nb_movies = int(max(max(training_set[:,1]),max(test_set[:,1])))
#Array with users in lines and movies in columns
def convert(data):
new_data = []
for id_users in range(1,nb_users+1):
id_movies = data[:,1][data[:,0]==id_users]
id_ratings = data[:,2][data[:,0]==id_users]
ratings = np.zeros(nb_movies)
ratings[id_movies-1]= id_ratings
new_data.append(list(ratings))
return new_data
training_set = convert(training_set)
test_set = convert(test_set)
# Converting data to torch tensors
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
# Converting the rating into binary ratings 1 (Liked) or 0 (Not liked)
training_set[training_set == 0] = -1 #taking all zero values in trainingset
training_set[training_set == 1] = 0
training_set[training_set == 2] = 0
training_set[training_set >= 3] = 1
test_set[test_set == 0] = -1 #taking all zero values in trainingset
test_set[test_set == 1] = 0
test_set[test_set == 2] = 0
test_set[test_set >= 3] = 1
# Creating the architecture of the Neural network
class RBM():
def __init__(self, nv, nh):
self.W = torch.randn(nh, nv)
self.a = torch.randn(1, nh)
self.b = torch.randn(1, nv)
def sample_h(self, x):
wx = torch.mm(x, self.W.t())
activation = wx + self.a.expand_as(wx)
p_h_given_v = torch.sigmoid(activation)
return p_h_given_v, torch.bernoulli(p_h_given_v)
def sample_v(self, y):
wy = torch.mm(y, self.W)
activation = wy + self.b.expand_as(wy)
p_v_given_h = torch.sigmoid(activation)
return p_v_given_h, torch.bernoulli(p_v_given_h)
def train(self, v0, vk, ph0, phk):
self.W += (torch.mm(v0.t(), ph0) - torch.mm(vk.t(), phk)).t()
self.b += torch.sum((v0 - vk), 0)
self.a += torch.sum((ph0 - phk), 0)
nv = len(training_set[0])
nh = 100
batch_size = 100
rbm = RBM(nv, nh)
#Training the RBM
nb_epoch = 10
for epoch in range(1, nb_epoch + 1):
train_loss = 0
s = 0.
for id_user in range(0, nb_users - batch_size, batch_size):
vk = training_set[id_user:id_user+batch_size]
v0 = training_set[id_user:id_user+batch_size]
ph0,_ = rbm.sample_h(v0)
for k in range(10):
_,hk = rbm.sample_h(vk)
_,vk = rbm.sample_v(hk)
vk[v0<0] = v0[v0<0]
phk,_ = rbm.sample_h(vk)
rbm.train(v0, vk, ph0, phk)
train_loss += torch.mean(torch.abs(v0[v0>=0] - vk[v0>=0]))
s += 1.
print(f'epoch: {epoch} loss: {train_loss/s}')
# Testing the RBM
test_loss = 0
s = 0.
for id_user in range(nb_users):
v = training_set[id_user:id_user+1]
vt = training_set[id_user:id_user+1]
if len(vt[vt>=0]) > 0:
_,h = rbm.sample_h(v)
_,v = rbm.sample_v(hk)
test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0]))
s += 1.
print(f'test_loss: {test_loss/s}')
| In the test loop, you're re-using a variable of the training phase, which I think (you didn't provide the full stacktrace) is just a typo:
for id_user in range(nb_users):
v = training_set[id_user:id_user+1]
vt = training_set[id_user:id_user+1]
if len(vt[vt>=0]) > 0:
_,h = rbm.sample_h(v)
_,v = rbm.sample_v(hk) # hk should be h, shouldn't it?
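For completeness, a corrected version of the test loop could look like this (the sample_v argument is the actual fix; the vt line is an additional guess on my part, since you probably want to evaluate against the test set):
for id_user in range(nb_users):
    v = training_set[id_user:id_user+1]
    vt = test_set[id_user:id_user+1]   # presumably the target should come from the test set
    if len(vt[vt>=0]) > 0:
        _,h = rbm.sample_h(v)
        _,v = rbm.sample_v(h)          # use h from this iteration, not hk left over from training
        test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0]))
        s += 1.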
| https://stackoverflow.com/questions/62069963/ |
How to do language model training on BERT | I want to train BERT on a target corpus. I am looking at this HuggingFace implementation.
They are using .raw files for the training data. If I have .txt files of my training data, how can I use their implementation?
| The .raw only indicates that they use the raw version of the WikiText; they are regular text files containing the raw text:
We're using the raw WikiText-2 (no tokens were replaced before the tokenization).
The description of the data files options also says that they are text files. From run_language_modeling.py - L86-L88:
train_data_file: Optional[str] = field(
default=None, metadata={"help": "The input training data file (a text file)."}
)
Therefore you can just specify your text files.
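For example, a call along these lines should work (my_corpus.txt and the output directory are placeholders; double-check the flag names against python run_language_modeling.py --help for your transformers version):
python run_language_modeling.py \
    --model_type=bert \
    --model_name_or_path=bert-base-uncased \
    --do_train \
    --train_data_file=my_corpus.txt \
    --mlm \
    --output_dir=./bert-finetuned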
| https://stackoverflow.com/questions/62072536/ |
Does PyTorch have implicit functions for element-wise product and sum? | I am trying to implement RCAN for super resolution (by Yulun Zhang et al.) in TensorFlow; the original code published with the paper is implemented in PyTorch: https://github.com/yulunzhang/RCAN.
I am trying to understand how they have implemented RCAB. Looking at the diagram they published of their network architecture, it seems pretty straightforward how the neural network is built. But the code doesn't seem to match it.
According to the diagram here: https://raw.githubusercontent.com/yulunzhang/RCAN/master/Figs/RCAB.PNG
Each RCAB should have following structure:
Residual Channel Attention Block(RCAB){
--0) Conv2D
--1) Relu
--2) Conv2D
--3) Channel Attention Layer{
----0)Global pooling
----1)Conv2D
----2)Relu
----3)Conv2D
----4)Sigmoid
----5)Element Wise Product (Input of this layer/function would be the output from the Conv2D layer 3)
--}
--4) Element Wise Sum (Input of this layer/function would be the input of layer 1)
}
However, when I print the PyTorch model in the paper's GitHub repo, RCAB looks like this:
(see https://github.com/yulunzhang/RCAN/blob/master/RCAN_TrainCode/experiment/model/Network_RCAN_BIX2_G10R20P48-2018-07-15-20-14-55.txt for the full printed model)
(0)RCAB(
(body): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): CALayer(
(avg_pool): AdaptiveAvgPool2d(output_size=1)
(conv_du): Sequential(
(0): Conv2d(64, 4, kernel_size=(1, 1), stride=(1, 1))
(1): ReLU(inplace)
(2): Conv2d(4, 64, kernel_size=(1, 1), stride=(1, 1))
(3): Sigmoid()
)
)
)
)
There seems to be no mention of element-wise sum and element-wise product in the RCABs of the models that are published along with the paper. The Sigmoid layer is the last layer in each RCAB.
So my question is: does PyTorch have some implicit way of declaring these element-wise sum/product layers? Or is it the case that the publishers of the code/model simply haven't added any such layer and therefore have not followed the model architecture diagram that they published?
| If you look at their actual model file you can find the elementwise sum (implemented as just +): https://github.com/yulunzhang/RCAN/blob/master/RCAN_TrainCode/code/model/rcan.py
I believe the elementwise product is handled the same way. These are not exactly "part of the model" in the PyTorch sense. They are not created in __init__ and are kind of dynamic, only revealing their behavior during the forward pass. Static model analysis could not have revealed them (and thus they do not show up in the txt).
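As a rough sketch of what that looks like in the forward passes (not copied verbatim from the repo, but mirroring the printed structure above, with assumed channel sizes):
import torch
import torch.nn as nn

class CALayer(nn.Module):
    def __init__(self, channel=64, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv_du = nn.Sequential(
            nn.Conv2d(channel, channel // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel, 1),
            nn.Sigmoid())

    def forward(self, x):
        y = self.conv_du(self.avg_pool(x))
        return x * y                      # element-wise product: the channel attention

class RCAB(nn.Module):
    def __init__(self, channel=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channel, channel, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel, channel, 3, padding=1),
            CALayer(channel))

    def forward(self, x):
        res = self.body(x)
        return res + x                    # element-wise sum: the residual connection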
| https://stackoverflow.com/questions/62076291/ |
Setting numpy array to slice without any in-place operations | How can I do this operation efficiently without any inplace operations?
n_id = np.random.choice(np.arange(2708), size=100)
z = np.random.rand(100, 64)
z_sparse = np.zeros((2708,64))
z_sparse[n_id[:100]] = z
Essentially I want the n_id rows of z_sparse to contain z's rows, but I can't do any inplace operations because my end goal is to use this in a pytorch problem.
One thought would be to create zero rows within z precisely so that the rows of z end up in the positions n_id, but I am not sure how this would work efficiently.
Essentially row 1 of z should be placed at row n_id[0] of z_sparse, then row 2 of z should be at row n_id[1] of z_sparse, and so on...
Here's the PyTorch error jic you are curious:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
| If n_id is a fixed index array, you can get z_sparse as a matrix multiplication:
# N, n, m = 2708, 100, 64
row_mat = (n_id[:n] == np.arange(N)[:,None])
# for pytorch tensor
# row_mat = Tensor(n_id[:n] == np.arange(N)[:,None])
z_sparse = row_mat @ z
Since row_mat is a constant array (tensor), your graph should work just fine.
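For a quick end-to-end check of that idea in PyTorch (a sketch, with the sizes from your example; replace=False just avoids duplicate indices):
import numpy as np
import torch

N, n, m = 2708, 100, 64
n_id = np.random.choice(np.arange(N), size=n, replace=False)
z = torch.rand(n, m, requires_grad=True)

row_mat = torch.tensor(n_id == np.arange(N)[:, None], dtype=z.dtype)  # (N, n) one-hot rows
z_sparse = row_mat @ z       # (N, m); rows n_id contain the rows of z, no in-place ops
z_sparse.sum().backward()    # gradients flow back into z without the in-place error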
| https://stackoverflow.com/questions/62076485/ |
How to write a custom loss function in Keras/Tensorflow that uses loops/iterations with reference numpy code | I saw this question: Implementing custom loss function in keras with condition And I need to do the same thing but with code that seems to need loops.
I have a custom numpy function which calculates the mean Euclid distance from the mean vector. I wrote this based on the paper https://arxiv.org/pdf/1801.05365.pdf:
import numpy as np
def mean_euclid_distance_from_mean_vector(n_vectors):
dists = []
for (i, v) in enumerate(n_vectors):
n_vectors_rest = n_vectors[np.arange(len(n_vectors)) != i]
print("rest of vectors: ")
print(n_vectors_rest)
# calculate mean vector
mean_rest = n_vectors_rest.mean(axis=0)
print("mean rest vector")
print(mean_rest)
dist = v - mean_rest
print("dist vector")
print(dist)
dists.append(dist)
# dists is now a matrix of distance vectors (distance from the mean vector)
dists = np.array(dists)
print("distance vector matrix")
print(dists)
# here we matmult each vector
# sum them up
# and divide by the total number of elements
result = np.sum([np.matmul(d, d) for d in dists]) / dists.size
return result
features = np.array([
[1,2,3,4],
[4,3,2,1]
])
c = mean_euclid_distance_from_mean_vector(features)
print(c)
I need this function however to work inside tensorflow with Keras. So a custom lambda https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda
However, I'm not sure how to implement the above in Keras/Tensorflow since it has loops, and the way the paper talked about calculating the m_i seems to require loops like the way I implemented the above.
For reference, the PyTorch version of this code is here: https://github.com/PramuPerera/DeepOneClass
| Given a feature map like:
features = np.array([
[1, 2, 3, 4],
[2, 4, 4, 3],
[3, 2, 1, 4],
], dtype=np.float64)
reflecting a batch_size of
batch_size = features.shape[0]
and
k = features.shape[1]
Implementing the above formulas in TensorFlow could be expressed (prototyped) as:
dim = (batch_size, features.shape[1])
def zero(i):
arr = np.ones(dim)
arr[i] = 0
return arr
mapper = [zero(i) for i in range(batch_size)]
elems = (features, mapper)
m = (1 / (batch_size - 1)) * tf.map_fn(lambda x: tf.math.reduce_sum(x[0] * x[1], axis=0), elems, dtype=tf.float64)
pairs = tf.map_fn(lambda x: tf.concat(x, axis=0) , tf.stack([features, m], 1), dtype=tf.float64)
compactness_loss = (1 / (batch_size * k)) * tf.map_fn(lambda x: tf.math.reduce_euclidean_norm(x), pairs, dtype=tf.float64)
with tf.Session() as sess:
print("loss value output is: ", compactness_loss.eval())
Which yields:
loss value output is: [0.64549722 0.79056942 0.64549722]
However a single measure is required for the batch, therefore it is necessary to reduce it; by the summation of all values.
The wanted Compactness Loss function à la Tensorflow is:
def compactness_loss(actual, features):
features = Flatten()(features)
k = 7 * 7 * 512
dim = (batch_size, k)
def zero(i):
z = tf.zeros((1, dim[1]), dtype=tf.dtypes.float32)
o = tf.ones((1, dim[1]), dtype=tf.dtypes.float32)
arr = []
for k in range(dim[0]):
arr.append(o if k != i else z)
res = tf.concat(arr, axis=0)
return res
masks = [zero(i) for i in range(batch_size)]
m = (1 / (batch_size - 1)) * tf.map_fn(
# row-wise summation
lambda mask: tf.math.reduce_sum(features * mask, axis=0),
masks,
dtype=tf.float32,
)
dists = features - m
sqrd_dists = tf.pow(dists, 2)
red_dists = tf.math.reduce_sum(sqrd_dists, axis=1)
compact_loss = (1 / (batch_size * k)) * tf.math.reduce_sum(red_dists)
return compact_loss
Of course the Flatten() could be moved back into the model for convenience and the k could be derived directly from the feature map; this answers your question. You may just have some trouble finding out what the expected values for the model are - feature maps from VGG16 (or any other architecture) trained against ImageNet, for instance?
The paper says:
In our formulation (shown in Figure 2 (e)), starting from a pre-trained deep model, we freeze initial features (gs) and learn (gl) and (hc). Based on the output of the classification sub-network (hc), two losses compactness loss and descriptiveness loss are evaluated. These two losses, introduced in the subsequent sections, are used to assess the quality of the learned deep feature. We use the provided one-class dataset to calculate the compactness loss. An external multi-class reference dataset is used to evaluate the descriptiveness loss. As shown in Figure 3, weights of gl and hc are learned in the proposed method through back-propagation from the composite loss. Once training is converged, system shown in setup in Figure 2(d) is used to perform classification where the resulting model is used as the pre-trained model.
then looking at the "Framework" backbone here plus:
AlexNet Binary and VGG16 Binary (Baseline). A binary CNN is trained by having ImageNet samples and one-class image samples as the two classes using AlexNet and VGG16 architectures, respectively. Testing is performed using k-nearest neighbor, One-class SVM [43], Isolation Forest [3] and Gaussian Mixture Model [3] classifiers.
Makes me wonder whether it would not be reasonable to add the suggested dense layers to both the Secondary and the Reference Networks with a single-class output (Sigmoid) or even a binary-class output (using Softmax), and to use mean_squared_error as the so-called Compactness Loss and binary_cross_entropy as the Descriptiveness Loss.
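For what it's worth, once the loss has the (y_true, y_pred) signature that Keras expects (as compactness_loss above does), plugging it in is just a matter of compiling with it; a sketch, where model, x_train and batch_size are placeholders and the targets are dummies because this loss ignores them:
model.compile(optimizer='adam', loss=compactness_loss)
dummy_targets = np.zeros((len(x_train),) + model.output_shape[1:])  # ignored by compactness_loss
model.fit(x_train, dummy_targets, batch_size=batch_size)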
| https://stackoverflow.com/questions/62079034/ |
Using SVM with different kernels as a last layer in CNN network | I'm trying to replace the last fully connected layer of a CNN network with an SVM using PyTorch in a multi-classification problem. I've done some research and it says that I should just replace the nn.CrossEntropyLoss with nn.MultiMarginLoss.
How does changing only the criterion actually correspond to the "replacing the fully connected layer with an SVM" task? Another thing is that I'd like to use the SVM with a different kernel, like for example the quadratic one.
| This question can actually be interpreted as the difference between Logistic regression and SVM in classification.
We can naively look at the whole platform of your deep learning as if you have a magician: that magician accepts the input data and gives you a set of engineered features, and you use those features to do the classification.
Depending on which loss you minimize, you can solve this classification issue with different sorts of functions. If you use cross-entropy, it is like you are applying a logistic regression classification. On the other hand, if you minimize the margin loss, it is actually equal to finding the support vectors, which is indeed how SVM works.
You need to read about the role of kernels in the calculation of the loss (for example, here), but the TL;DR is that the loss computation has a component K(xi, xj), which is the kernel function and indicates the similarity of xi and xj.
So you can implement a custom loss, where you have a polynomial kernel (quadratic in your case), and imitate the margin loss calculation there.
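Concretely, on the PyTorch side the drop-in part is just swapping the criterion (model, inputs and targets below are placeholders for your own network and data); the margin (hinge) loss is what makes the last linear layer behave like a linear SVM:
criterion = torch.nn.MultiMarginLoss()   # multi-class hinge/margin loss instead of nn.CrossEntropyLoss()
outputs = model(inputs)                  # the last nn.Linear produces the per-class scores
loss = criterion(outputs, targets)
loss.backward()
For the quadratic kernel you would then, as described above, write a custom loss (or explicitly expand the penultimate features, e.g. with their pairwise products, before the final linear layer); the details depend on your network.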
| https://stackoverflow.com/questions/62081985/ |
Preferred way to decrease learning rate for Adam optimiser in PyTorch | I have been seeing code that uses an Adam optimizer, and the way they decrease the learning rate is as follows:
optimizer = torch.optim.Adam(net.parameters(),lr=0.01)
(training...
optimizer.step()...)
if iteration >= some_threshold:
for param_group in optimizer.param_groups:
param_group['lr'] = 0.001
I thought we have the same learning rate for all parameters. So why then iterate over the param_groups and individually set learning rate for each parameter?
Wouldn't the following be faster and have an identical effect?
optimizer = torch.optim.Adam(net.parameters(),lr=0.01)
scheduler = MultiStepLR(optimizer, milestones=[some_threshold], gamma=0.1)
(training...
optimizer.step()
scheduler.step())
Thank you
| You need to iterate over param_groups because if you don't specify multiple groups of parameters in the optimiser, you automatically have a single group. That doesn't mean you set the learning rate for each parameter, but rather each parameter group.
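For reference, you only get more than one group if you construct the optimiser with several parameter dicts; a sketch (net.features / net.classifier are just example attribute names):
optimizer = torch.optim.Adam([
    {'params': net.features.parameters(), 'lr': 0.001},
    {'params': net.classifier.parameters(), 'lr': 0.01},
])
print(len(optimizer.param_groups))  # 2, whereas Adam(net.parameters(), lr=0.01) gives a single group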
In fact the learning rate schedulers from PyTorch do the same thing. From _LRScheduler (base class of learning rate schedulers):
with _enable_get_lr_call(self):
if epoch is None:
self.last_epoch += 1
values = self.get_lr()
else:
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
self.last_epoch = epoch
if hasattr(self, "_get_closed_form_lr"):
values = self._get_closed_form_lr()
else:
values = self.get_lr()
for param_group, lr in zip(self.optimizer.param_groups, values):
param_group['lr'] = lr
Yes, it has the identical effect in this case, but it wouldn't be faster.
| https://stackoverflow.com/questions/62086065/ |
How to get embedding from bert finetuned model? | I have fine-tuned the 'bert-base-uncased' model using transformers and torch, which gave me pytorch_model.bin, vocab.txt and other files as output.
After loading the model, how do I get the embedding for the complete vocab, like a matrix which maps every word to its embedding vector?
| I recommend using Huggingface; they make it very easy to use and train all Transformer model variants.
To get the embedding matrix from a BERT fine-tuned model, you could use BertModel.get_input_embeddings().
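A minimal sketch (the path is a placeholder for the directory containing your pytorch_model.bin, config and vocab files):
from transformers import BertModel

model = BertModel.from_pretrained('path/to/finetuned_model')
embedding_matrix = model.get_input_embeddings().weight  # shape: (vocab_size, hidden_size)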
| https://stackoverflow.com/questions/62086878/ |
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False | RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I am getting the above error for code:-
def get_model(path, device):
model = models.vgg16(pretrained=False)
for param in model.parameters():
param.requires_grad = False
n_inputs = model.classifier[6].in_features
model.classifier[6] = torch.nn.Sequential(
torch.nn.Linear(n_inputs, 256), torch.nn.ReLU(), torch.nn.Dropout(0.2),
torch.nn.Linear(256, 10), torch.nn.LogSoftmax(dim=1))
model.load_state_dict(torch.load(path), map_location=torch.device('cpu'))
model.to(device)
model.eval()
return model
device = torch.device("cpu")
model = get_model('vgg16.pt', device)
| You are passing the map_location to the wrong function (to model.load_state_dict instead of torch.load).
The corrected line would look like this:
model.load_state_dict(torch.load(path, map_location=torch.device('cpu')))
| https://stackoverflow.com/questions/62087498/ |
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False, Dataloader Error, and setting pin_memory=False | I am a beginner trying to evaluate this video object segmentation network paper.
When following the instructions on https://github.com/seoungwugoh/STM
It says the requirements are as follows:-
python 3.6
pytorch 1.0.1.post2
numpy, opencv, pillow
I couldn't get this pytorch version to install, so I installed the conda-forge pytorch version 1.5.
and I run this command in either Windows 10 or Ubuntu 16.04 using Anaconda
(STMVOS) oneworld@oneworld:~/Documents/VideoObjectSegmentation/STMVOS$ python eval_DAVIS.py -g '1' -s val -y 16 -D ../DAVISSemiSupervisedTrainVal480
after doing pip install matplotlib, and pip install tqdm ...
I get the following error message:-
Space-time Memory Networks: initialized.
STM : Testing on DAVIS
Loading weights: STM_weights.pth
Traceback (most recent call last):
File "eval_DAVIS.py", line 111, in
model.load_state_dict(torch.load(pth_path))
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 729, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 178, in default_restore_location
result = fn(storage, location)
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 154, in _cuda_deserialize
device = validate_cuda_device(location)
File "/home/oneworld/anaconda3/envs/STMVOS/lib/python3.8/site-packages/torch/serialization.py", line 138, in validate_cuda_device raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU
My Graphics Card Driver, and System and Packages are as follows:-
(STMVOS) oneworld@oneworld:~/Documents/VideoObjectSegmentation/STMVOS$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00 Driver Version: 440.64.00 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 26% 34C P8 10W / 151W | 392MiB / 8118MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1247 G /usr/lib/xorg/Xorg 229MiB |
| 0 2239 G compiz 126MiB |
| 0 9385 G /usr/lib/firefox/firefox 2MiB |
| 0 11686 G /proc/self/exe 30MiB |
+-----------------------------------------------------------------------------+
I also tried this
(STMVOS) oneworld@oneworld:~/Documents/VideoObjectSegmentation/STMVOS$ python -c 'import torch; print(torch.rand(2,3).cuda())'
tensor([[0.9178, 0.8239, 0.4761],
[0.9429, 0.8877, 0.0097]], device='cuda:0')
Which shows that cuda is working here
(STMVOS) oneworld@oneworld:~/Documents/VideoObjectSegmentation/STMVOS$ conda info
active environment : STMVOS
active env location : /home/oneworld/anaconda3/envs/STMVOS
shell level : 1
user config file : /home/oneworld/.condarc
populated config files :
conda version : 4.8.2
conda-build version : 3.18.11
python version : 3.7.6.final.0
virtual packages : __cuda=10.2
__glibc=2.23
base environment : /home/oneworld/anaconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/oneworld/anaconda3/pkgs
/home/oneworld/.conda/pkgs
envs directories : /home/oneworld/anaconda3/envs
/home/oneworld/.conda/envs
platform : linux-64
user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Linux/4.4.0-179-generic ubuntu/16.04.6 glibc/2.23
UID:GID : 1000:1000
netrc file : None
offline mode : False
(STMVOS) oneworld@oneworld:~/Documents/VideoObjectSegmentation/STMVOS$ conda list
packages in environment at /home/oneworld/anaconda3/envs/STMVOS:
Name Version Build Channel
_libgcc_mutex 0.1 main
blas 1.0 mkl
bzip2 1.0.8 h516909a_2 conda-forge
ca-certificates 2020.4.5.1 hecc5488_0 conda-forge
cairo 1.16.0 hcf35c78_1003 conda-forge
certifi 2020.4.5.1 py38_0
cudatoolkit 10.2.89 hfd86e86_1
cycler 0.10.0 pypi_0 pypi
dbus 1.13.6 he372182_0 conda-forge
expat 2.2.9 he1b5a44_2 conda-forge
ffmpeg 4.2.3 h167e202_0 conda-forge
fontconfig 2.13.1 h86ecdb6_1001 conda-forge
freetype 2.9.1 h8a8886c_1
gettext 0.19.8.1 hc5be6a0_1002 conda-forge
giflib 5.2.1 h516909a_2 conda-forge
glib 2.64.3 h6f030ca_0 conda-forge
gmp 6.2.0 he1b5a44_2 conda-forge
gnutls 3.6.5 hd3a4fd2_1002 conda-forge
graphite2 1.3.13 he1b5a44_1001 conda-forge
gst-plugins-base 1.14.5 h0935bb2_2 conda-forge
gstreamer 1.14.5 h36ae1b5_2 conda-forge
harfbuzz 2.4.0 h9f30f68_3 conda-forge
hdf5 1.10.6 nompi_h3c11f04_100 conda-forge
icu 64.2 he1b5a44_1 conda-forge
intel-openmp 2020.1 217
jasper 1.900.1 h07fcdf6_1006 conda-forge
jpeg 9c h14c3975_1001 conda-forge
kiwisolver 1.2.0 pypi_0 pypi
lame 3.100 h14c3975_1001 conda-forge
ld_impl_linux-64 2.33.1 h53a641e_7
libblas 3.8.0 15_mkl conda-forge
libcblas 3.8.0 15_mkl conda-forge
libclang 9.0.1 default_hde54327_0 conda-forge
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 he1b5a44_1007 conda-forge
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libiconv 1.15 h516909a_1006 conda-forge
liblapack 3.8.0 15_mkl conda-forge
liblapacke 3.8.0 15_mkl conda-forge
libllvm9 9.0.1 he513fc3_1 conda-forge
libopencv 4.2.0 py38_6 conda-forge
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_0
libuuid 2.32.1 h14c3975_1000 conda-forge
libwebp 1.0.2 h56121f0_5 conda-forge
libxcb 1.13 h14c3975_1002 conda-forge
libxkbcommon 0.10.0 he1b5a44_0 conda-forge
libxml2 2.9.10 hee79883_0 conda-forge
matplotlib 3.2.1 pypi_0 pypi
mkl 2020.1 217
mkl-service 2.3.0 py38he904b0f_0
mkl_fft 1.0.15 py38ha843d7b_0
mkl_random 1.1.1 py38h0573a6f_0
ncurses 6.2 he6710b0_1
nettle 3.4.1 h1bed415_1002 conda-forge
ninja 1.9.0 py38hfd86e86_0
nspr 4.25 he1b5a44_0 conda-forge
nss 3.47 he751ad9_0 conda-forge
numpy 1.18.1 py38h4f9e942_0
numpy-base 1.18.1 py38hde5b4d6_1
olefile 0.46 py_0
opencv 4.2.0 py38_6 conda-forge
openh264 2.1.1 h8b12597_0 conda-forge
openssl 1.1.1g h516909a_0 conda-forge
pcre 8.44 he1b5a44_0 conda-forge
pillow 7.1.2 py38hb39fc2d_0
pip 20.0.2 py38_3
pixman 0.38.0 h516909a_1003 conda-forge
pthread-stubs 0.4 h14c3975_1001 conda-forge
py-opencv 4.2.0 py38h23f93f0_6 conda-forge
pyparsing 2.4.7 pypi_0 pypi
python 3.8.1 h0371630_1
python-dateutil 2.8.1 pypi_0 pypi
python_abi 3.8 1_cp38 conda-forge
pytorch 1.5.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
qt 5.12.5 hd8c4c69_1 conda-forge
readline 7.0 h7b6447c_5
setuptools 46.4.0 py38_0
six 1.14.0 py38_0
sqlite 3.31.1 h62c20be_1
tk 8.6.8 hbc83047_0
torchvision 0.6.0 py38_cu102 pytorch
tqdm 4.46.0 pypi_0 pypi
wheel 0.34.2 py38_0
x264 1!152.20180806 h14c3975_0 conda-forge
xorg-kbproto 1.0.7 h14c3975_1002 conda-forge
xorg-libice 1.0.10 h516909a_0 conda-forge
xorg-libsm 1.2.3 h84519dc_1000 conda-forge
xorg-libx11 1.6.9 h516909a_0 conda-forge
xorg-libxau 1.0.9 h14c3975_0 conda-forge
xorg-libxdmcp 1.1.3 h516909a_0 conda-forge
xorg-libxext 1.3.4 h516909a_0 conda-forge
xorg-libxrender 0.9.10 h516909a_1002 conda-forge
xorg-renderproto 0.11.1 h14c3975_1002 conda-forge
xorg-xextproto 7.3.0 h14c3975_1002 conda-forge
xorg-xproto 7.0.31 h14c3975_1007 conda-forge
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
The code it gets stuck on in eval_DAVIS.py is as follows:-
print('Loading weights:', pth_path)
model.load_state_dict(torch.load(pth_path))
I am using Ubuntu 16.04, however I tried a similar setup in windows 10 and received the same error messages.
Any help much appreciated.
Kind regards
OneWorld
| Because of the Python error message suggesting:
if __name__ == '__main__':
freeze_support()
I added this line
if __name__ == '__main__':
above the line
for seq, V in enumerate(Testloader):
and indented that line and everything else below.
It then worked as far as to the end of [bike packing]
However requested a scipy install before [black swan]
So I did conda install scipy
and reran, and it started to go through the rest [bmx-trees], [breakdance] etc.
The resulting eval_DAVIS.py file looked like this...
from __future__ import division
import torch
from torch.autograd import Variable
from torch.utils import data
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
import torch.utils.model_zoo as model_zoo
from torchvision import models
# general libs
import cv2
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import math
import time
import tqdm
import os
import argparse
import copy
### My libs
from dataset import DAVIS_MO_Test
from model import STM
torch.set_grad_enabled(False) # Volatile
# def get_arguments():
# parser = argparse.ArgumentParser(description="SST")
# parser.add_argument("-g", type=str, help="0; 0,1; 0,3; etc", required=True)
# parser.add_argument("-s", type=str, help="set", required=True)
# parser.add_argument("-y", type=int, help="year", required=True)
# parser.add_argument("-viz", help="Save visualization", action="store_true")
# parser.add_argument("-D", type=str, help="path to data",default='/local/DATA')
# return parser.parse_args()
# args = get_arguments()
# GPU = args.g
# YEAR = args.y
# SET = args.s
# VIZ = args.viz
# DATA_ROOT = args.D
GPU = '0'
YEAR = '17'
SET = 'val'
VIZ = 'store_true'
DATA_ROOT = '..\\DAVIS2017SemiSupervisedTrainVal480'
# Model and version
MODEL = 'STM'
print(MODEL, ': Testing on DAVIS')
os.environ['CUDA_VISIBLE_DEVICES'] = GPU
if torch.cuda.is_available():
print('using Cuda devices, num:', torch.cuda.device_count())
if VIZ:
print('--- Produce mask overaid video outputs. Evaluation will run slow.')
print('--- Require FFMPEG for encoding, Check folder ./viz')
palette = Image.open(DATA_ROOT + '/Annotations/480p/blackswan/00000.png').getpalette()
def Run_video(Fs, Ms, num_frames, num_objects, Mem_every=None, Mem_number=None):
# initialize storage tensors
if Mem_every:
to_memorize = [int(i) for i in np.arange(0, num_frames, step=Mem_every)]
elif Mem_number:
to_memorize = [int(round(i)) for i in np.linspace(0, num_frames, num=Mem_number+2)[:-1]]
else:
raise NotImplementedError
Es = torch.zeros_like(Ms)
Es[:,:,0] = Ms[:,:,0]
for t in tqdm.tqdm(range(1, num_frames)):
# memorize
with torch.no_grad():
prev_key, prev_value = model(Fs[:,:,t-1], Es[:,:,t-1], torch.tensor([num_objects]))
if t-1 == 0: #
this_keys, this_values = prev_key, prev_value # only prev memory
else:
this_keys = torch.cat([keys, prev_key], dim=3)
this_values = torch.cat([values, prev_value], dim=3)
# segment
with torch.no_grad():
logit = model(Fs[:,:,t], this_keys, this_values, torch.tensor([num_objects]))
Es[:,:,t] = F.softmax(logit, dim=1)
# update
if t-1 in to_memorize:
keys, values = this_keys, this_values
pred = np.argmax(Es[0].cpu().numpy(), axis=0).astype(np.uint8)
return pred, Es
Testset = DAVIS_MO_Test(DATA_ROOT, resolution='480p', imset='20{}/{}.txt'.format(YEAR,SET), single_object=(YEAR==16))
Testloader = data.DataLoader(Testset, batch_size=1, shuffle=False, num_workers=2, pin_memory=True)
model = nn.DataParallel(STM())
if torch.cuda.is_available():
model.cuda()
model.eval() # turn-off BN
pth_path = 'STM_weights.pth'
print('Loading weights:', pth_path)
model.load_state_dict(torch.load(pth_path)) # , map_location=torch.device('cpu')
code_name = '{}_DAVIS_{}{}'.format(MODEL,YEAR,SET)
print('Start Testing:', code_name)
if torch.cuda.is_available() == False:
print("********** CUDA is NOT available just before line of error **********")
else:
print("********** CUDA is available, and working fine just before line of error ***********")
if __name__ == '__main__':
for seq, V in enumerate(Testloader):
Fs, Ms, num_objects, info = V
seq_name = info['name'][0]
num_frames = info['num_frames'][0].item()
print('[{}]: num_frames: {}, num_objects: {}'.format(seq_name, num_frames, num_objects[0][0]))
pred, Es = Run_video(Fs, Ms, num_frames, num_objects, Mem_every=5, Mem_number=None)
# Save results for quantitative eval ######################
test_path = os.path.join('./test', code_name, seq_name)
if not os.path.exists(test_path):
os.makedirs(test_path)
for f in range(num_frames):
img_E = Image.fromarray(pred[f])
img_E.putpalette(palette)
img_E.save(os.path.join(test_path, '{:05d}.png'.format(f)))
if VIZ:
from helpers import overlay_davis
# visualize results #######################
viz_path = os.path.join('./viz/', code_name, seq_name)
if not os.path.exists(viz_path):
os.makedirs(viz_path)
for f in range(num_frames):
pF = (Fs[0,:,f].permute(1,2,0).numpy() * 255.).astype(np.uint8)
pE = pred[f]
canvas = overlay_davis(pF, pE, palette)
canvas = Image.fromarray(canvas)
canvas.save(os.path.join(viz_path, 'f{}.jpg'.format(f)))
vid_path = os.path.join('./viz/', code_name, '{}.mp4'.format(seq_name))
frame_path = os.path.join('./viz/', code_name, seq_name, 'f%d.jpg')
os.system('ffmpeg -framerate 10 -i {} {} -vcodec libx264 -crf 10 -pix_fmt yuv420p -nostats -loglevel 0 -y'.format(frame_path, vid_path))
However...
Eventually I got an out of memory error
[car-shadow]: num_frames: 40, num_objects: 1
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:09<00:00, 3.98it/s]
Traceback (most recent call last):
File "eval_DAVIS.py", line 129, in <module>
for seq, V in enumerate(Testloader):
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\dataloader.py", line 856, in _next_data
return self._process_data(data)
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\dataloader.py", line 881, in _process_data
data.reraise()
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\_utils.py", line 395, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in pin memory thread for device 0.
Original Traceback (most recent call last):
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 31, in _pin_memory_loop
data = pin_memory(data)
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in pin_memory
return [pin_memory(sample) for sample in data]
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in <listcomp>
return [pin_memory(sample) for sample in data]
File "C:\Users\OneWorld\anaconda3\envs\STMVOS\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 47, in pin_memory
return data.pin_memory()
RuntimeError: cuda runtime error (2) : out of memory at ..\aten\src\THC\THCCachingHostAllocator.cpp:278
so I set the testloader from pin_memory=True to false at around line 108 in eval_DAVIS.py
Testloader = data.DataLoader(Testset, batch_size=1, shuffle=False, num_workers=2, pin_memory=False)
and reran.
Seemed to work fine.
| https://stackoverflow.com/questions/62088265/ |
PyTorch model saving error: "Can't pickle local object" | When I try to save the PyTorch model with this piece of code:
checkpoint = {'model': Net(), 'state_dict': model.state_dict(),'optimizer' :optimizer.state_dict()}
torch.save(checkpoint, 'Checkpoint.pth')
I get the following error:
E:\PROGRAM FILES\Anaconda\envs\staj_projesi\lib\site-packages\torch\serialization.py:251: UserWarning: Couldn't retrieve source code for container of type Net. It won't be checked for correctness upon loading.
...
"type " + obj.__name__ + ". It won't be checked "
Can't pickle local object 'trainModel.<locals>.Net'
When I try to save the PyTorch model with this piece of code:
checkpoint = {'state_dict': model.state_dict(),'optimizer' :optimizer.state_dict()}
torch.save(checkpoint, 'Checkpoint.pth')
I don't get any errors, but I want to save the ANN class. How can I solve this problem? Also, I could save the model with the first structure in other projects before.
| You can't! torch.save relies on pickle, and pickle cannot serialize a class that is defined locally inside a function (your Net is defined inside trainModel), which is why saving the model object itself fails while saving only the state_dict() works.
When you use the following:
checkpoint = {'model': Net(), 'state_dict': model.state_dict(),'optimizer' :optimizer.state_dict()}
torch.save(checkpoint, 'Checkpoint.pth')
You are trying to save the model itself, but this data is saved in the model.state_dict(), and when loading a model with the state_dict you should first instantiate a model object.
This is exactly the reason why the second method works properly:
checkpoint = {'state_dict': model.state_dict(),'optimizer' :optimizer.state_dict()}
torch.save(checkpoint, 'Checkpoint.pth')
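When loading, you first create the model and optimizer objects and then restore their states from the saved dictionaries (a sketch, assuming your Net class and the original optimizer type are available at load time):
model = Net()
optimizer = torch.optim.Adam(model.parameters())  # whichever optimizer you used originally
checkpoint = torch.load('Checkpoint.pth')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])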
I would suggest reading the pytorch docs of how to properly save\load a model in the following link:
https://pytorch.org/tutorials/beginner/saving_loading_models.html
| https://stackoverflow.com/questions/62090674/ |
Having a network output as another network's parameters | I have y = Y(x; theta) and theta = M(t; omega), where x and t are input variables (given from the dataset), and theta and omega are trainable parameters. I need to have theta as a function of omega. Then, I have a loss function over y and need to backpropagate the gradient through M up to Y. How can I create such a structure in PyTorch?
Currently, my network is built as follows (sizes is a list of integers, defined as sizes = [input_size, hidden1_size, hidden2_size, ..., output_size])
import torch
import torch.nn as nn
import torch.nn.functional as F
class M(nn.Module):
def __init__(self, sizes):
super(M, self).__init__()
self.layers = nn.ModuleList()
for i in range(0, len(sizes) - 1):
self.layers.append(nn.Linear(sizes[i], sizes[i+1]))
def forward(self, x):
for l in self.layers[:-1]:
x = F.relu(l(x))
x = self.layers[-1](x)
return x
| I think it is quite simple or I didn't get your query correctly.
x, t are your input variables.
Now let us define a network M that will take input t and output theta.
M = nn.Sequential(....) # declare network here
Next, we define a network Y. This here might be tricky as you want to use theta as parameters. It might be easier and more intuitive to work with the functional counterparts of the modules declared in nn (see https://pytorch.org/docs/stable/nn.functional.html). I will try to give an example of this assuming theta are the params of a linear module.
class Y(nn.Module):
def __init__(self):
# declare any modules here
def forward(self, theta, x):
return nn.functional.linear(input=x, weight=theta, bias=None)
The overall forward pass would be
def forward(t, x, M, Y):
theta = M(t)
output = Y(theta, x)
return output
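With that structure only omega (the parameters of M, plus anything Y declares itself) is trainable, and the gradient of a loss on the output flows through Y's functional call back into M; a training-step sketch, where t, x and y_target stand for your data tensors:
y_net = Y()
optimizer = torch.optim.Adam(M.parameters(), lr=1e-3)

optimizer.zero_grad()
output = forward(t, x, M, y_net)          # theta = M(t) stays part of the graph
loss = ((output - y_target) ** 2).mean()
loss.backward()                           # d(loss)/d(omega) is computed through theta
optimizer.step()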
| https://stackoverflow.com/questions/62095638/ |
why doesn't Stochastic gradient descent fluctuate? | In batch gradient descent the parameters are updated based on the total/average loss of all the points.
In Stochastic gradient descent or SGD
we update the parameters after every point instead of once per epoch.
So let's say the final point is an outlier: wouldn't that cause the whole fitted line to fluctuate drastically?
How is it reliable,
or how does it converge on a contour like this? SGD contour
|
While it is true that in its most pristine form SGD operates on just 1 sample point, in reality this is not the dominant practice. In practice, we use a mini-batch of say 256, 128 or 64 samples rather than operating on the full batch size containing all the samples in the database, which might be well over 1 million samples. So clearly operating on a mini-batch of say 256 is much faster than operating on 1 million points and at the same time helps curb the variability caused by just using 1 sample point.
A second point is that there is no final point. One simply keeps iterating over the dataset. The learning rate for SGD is generally quite small say 1e-3. So even if a sample point happens to be an outlier, the wrong gradients will be scaled by 1e-3 and hence SGD will not be too much off the correct trajectory. When it iterates over the upcoming sample points, which are not outliers, it will again head towards the correct direction.
So altogether using a medium-sized mini-batch and using a small learning rate helps SGD to not digress a lot from the correct trajectory.
Now the word stochastic in SGD can also imply various other measures. For example, some practitioners also use gradient clipping, i.e. they clamp the calculated gradient to a maximum value if the gradients are well over this decided maximum threshold. You can find more on gradient clipping in this post. This is just one trick amongst dozens of other techniques, and if you are interested you can read the source code of popular implementations of SGD in PyTorch or TensorFlow.
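In PyTorch, for instance, gradient clipping is a one-liner applied between the backward pass and the optimiser step:
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # rescale gradients whose total norm exceeds 1.0
optimizer.step()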
| https://stackoverflow.com/questions/62097229/ |
How to set random seed when it is in distributed training in PyTorch? | Now I am training a model using torch.distributed, but I am not sure how to set the random seeds. For example, this is my current code:
def main():
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
cudnn.enabled = True
cudnn.benchmark = True
cudnn.deterministic = True
mp.spawn(main_worker, nprocs=args.ngpus, args=(args,))
And should I move the
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
cudnn.enabled = True
cudnn.benchmark = True
cudnn.deterministic = True
into the function main_worker() to make sure every process has the correct seed and cudnn settings? By the way, I have tried this, and this behavior will make the training 2 times slower, which really confused me.
Thank you very much for any help!
| The spawned child processes do not inherit the seed you set manually in the parent process, therefore you need to set the seed in the main_worker function.
The same logic applies to cudnn.benchmark and cudnn.deterministic, so if you want to use these, you have to set them in main_worker as well. If you want to verify that, you can just print their values in each process.
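In other words, move those lines into the function that every spawned process runs, e.g. (a sketch, assuming main_worker receives the parsed args):
def main_worker(gpu, args):
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    torch.cuda.manual_seed(args.seed)
    cudnn.enabled = True
    cudnn.benchmark = True      # only worthwhile if your input sizes are fixed (see below)
    cudnn.deterministic = True
    # ... rest of the per-process setup and training loop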
cudnn.benchmark = True tries to find the optimal algorithm for your model, by benchmarking various implementations of certain operations (e.g. available convolution algorithms). This will take time to find the best algorithm, but once that is done, further iterations will potentially be faster. The algorithm that was determined to be the best, only applies to the specific input size that was used. If in the next iteration you have a different input size, the benchmark needs to be run again, in order to determine the best algorithm for that specific input size, which might be a different one than for the first input size.
I'm assuming that your input sizes vary, which would explain the slow down, as the benchmark wasn't used when it was set in the parent process. cudnn.benchmark = True should only be used if your input sizes are fixed.
cudnn.deterministic = True may also have a negative impact on the performance, because certain underlying operations that are non-deterministic need to be replaced with deterministic versions, which tend to be slower (otherwise the deterministic version would be used in the first place), but that performance impact shouldn't be too dramatic.
| https://stackoverflow.com/questions/62097236/ |
Need help understanding the gradient function in pytorch | The following code
w = np.array([[2., 2.],[2., 2.]])
x = np.array([[3., 3.],[3., 3.]])
b = np.array([[4., 4.],[4., 4.]])
w = torch.tensor(w, requires_grad=True)
x = torch.tensor(x, requires_grad=True)
b = torch.tensor(b, requires_grad=True)
y = w*x + b
print(y)
# tensor([[10., 10.],
# [10., 10.]], dtype=torch.float64, grad_fn=<AddBackward0>)
y.backward(torch.FloatTensor([[1, 1],[ 1, 1]]))
print(w.grad)
# tensor([[3., 3.],
# [3., 3.]], dtype=torch.float64)
print(x.grad)
# tensor([[2., 2.],
# [2., 2.]], dtype=torch.float64)
print(b.grad)
# tensor([[1., 1.],
# [1., 1.]], dtype=torch.float64)
As the tensor argument inside gradient function is an all ones tensor in the shape of the input tensor, my understanding says that
w.grad means derivative of y w.r.t w, and produces b,
x.grad means derivative of y w.r.t x, and produces b and
b.grad means derivative of y w.r.t b, and produces all ones.
Out of these, only point 3 answer is matching my expected result. Can someone help me in understanding the first two answers. I think I understand the accumulation part, but don't think that is happening here.
| To find the correct derivatives in this example, we need to take the sum and product rule into consideration.
Sum rule: d(f + g)/dx = df/dx + dg/dx
Product rule: d(f * g)/dx = f * dg/dx + g * df/dx
That means the derivatives of your equation y = w*x + b are calculated as follows.
With respect to x: dy/dx = w
With respect to w: dy/dw = x
With respect to b: dy/db = 1
The gradients reflect exactly that:
torch.equal(w.grad, x) # => True
torch.equal(x.grad, w) # => True
torch.equal(b.grad, torch.tensor([[1, 1], [1, 1]], dtype=torch.float64)) # => True
| https://stackoverflow.com/questions/62099030/ |
RuntimeError: Given groups=3, weight of size 12 64 3 768, expected input[32, 12, 30, 768] to have 192 channels, but got 12 channels instead | I started working with PyTorch recently so my understanding of it isn't quite strong. I previously had a 1-layer CNN but wanted to extend it to 2 layers, but the input and output channels have been throwing errors I can't seem to decipher. Why does it expect 192 channels? Can someone give me a pointer to help me understand this better? I have seen several related problems on here, but I don't understand those solutions either.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel, BertTokenizer
import math
from transformers import AdamW, get_linear_schedule_with_warmup
def pad_sents(sents, pad_token): # Pad list of sentences according to the longest sentence in the batch.
sents_padded = []
max_len = max(len(s) for s in sents)
for s in sents:
padded = [pad_token] * max_len
padded[:len(s)] = s
sents_padded.append(padded)
return sents_padded
def sents_to_tensor(tokenizer, sents, device):
tokens_list = [tokenizer.tokenize(str(sent)) for sent in sents]
sents_lengths = [len(tokens) for tokens in tokens_list]
tokens_list_padded = pad_sents(tokens_list, '[PAD]')
sents_lengths = torch.tensor(sents_lengths, device=device)
masks = []
for tokens in tokens_list_padded:
mask = [0 if token == '[PAD]' else 1 for token in tokens]
masks.append(mask)
masks_tensor = torch.tensor(masks, dtype=torch.long, device=device)
tokens_id_list = [tokenizer.convert_tokens_to_ids(tokens) for tokens in tokens_list_padded]
sents_tensor = torch.tensor(tokens_id_list, dtype=torch.long, device=device)
return sents_tensor, masks_tensor, sents_lengths
class ConvModel(nn.Module):
def __init__(self, device, dropout_rate, n_class, out_channel=16):
super(ConvModel, self).__init__()
self.bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
self.dropout_rate = dropout_rate
self.n_class = n_class
self.out_channel = out_channel
self.bert = BertModel.from_pretrained('bert-base-uncased', config=self.bert_config)
self.out_channels = self.bert.config.num_hidden_layers * self.out_channel
self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', config=self.bert_config)
self.conv = nn.Conv2d(in_channels=self.bert.config.num_hidden_layers,
out_channels=self.out_channels,
kernel_size=(3, self.bert.config.hidden_size),
groups=self.bert.config.num_hidden_layers)
self.conv1 = nn.Conv2d(in_channels=self.out_channels,
out_channels=48,
kernel_size=(3, self.bert.config.hidden_size),
groups=self.bert.config.num_hidden_layers)
self.hidden_to_softmax = nn.Linear(self.out_channels, self.n_class, bias=True)
self.dropout = nn.Dropout(p=self.dropout_rate)
self.device = device
def forward(self, sents):
sents_tensor, masks_tensor, sents_lengths = sents_to_tensor(self.tokenizer, sents, self.device)
encoded_layers = self.bert(input_ids=sents_tensor, attention_mask=masks_tensor)
hidden_encoded_layer = encoded_layers[2]
hidden_encoded_layer = hidden_encoded_layer[0]
hidden_encoded_layer = torch.unsqueeze(hidden_encoded_layer, dim=1)
hidden_encoded_layer = hidden_encoded_layer.repeat(1, 12, 1, 1)
conv_out = self.conv(hidden_encoded_layer) # (batch_size, channel_out, some_length, 1)
conv_out = self.conv1(conv_out)
conv_out = torch.squeeze(conv_out, dim=3) # (batch_size, channel_out, some_length)
conv_out, _ = torch.max(conv_out, dim=2) # (batch_size, channel_out)
pre_softmax = self.hidden_to_softmax(conv_out)
return pre_softmax
def batch_iter(data, batch_size, shuffle=False, bert=None):
batch_num = math.ceil(data.shape[0] / batch_size)
index_array = list(range(data.shape[0]))
if shuffle:
data = data.sample(frac=1)
for i in range(batch_num):
indices = index_array[i * batch_size: (i + 1) * batch_size]
examples = data.iloc[indices]
sents = list(examples.train_BERT_tweet)
targets = list(examples.train_label.values)
yield sents, targets # list[list[str]] if not bert else list[str], list[int]
def train():
label_name = ['Yes', 'Maybe', 'No']
device = torch.device("cpu")
df_train = pd.read_csv('trainn.csv') # , index_col=0)
train_label = dict(df_train.train_label.value_counts())
label_max = float(max(train_label.values()))
train_label_weight = torch.tensor([label_max / train_label[i] for i in range(len(train_label))], device=device)
model = ConvModel(device=device, dropout_rate=0.2, n_class=len(label_name))
optimizer = AdamW(model.parameters(), lr=1e-3, correct_bias=False)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000) # changed the last 2 arguments to old ones
model = model.to(device)
model.train()
cn_loss = torch.nn.CrossEntropyLoss(weight=train_label_weight, reduction='mean')
train_batch_size = 16
for epoch in range(1):
for sents, targets in batch_iter(df_train, batch_size=train_batch_size, shuffle=True): # for each epoch
optimizer.zero_grad()
pre_softmax = model(sents)
loss = cn_loss(pre_softmax, torch.tensor(targets, dtype=torch.long, device=device))
loss.backward()
optimizer.step()
scheduler.step()
TrainingModel = train()
Here's a snippet of data https://github.com/Kosisochi/DataSnippet
| It seems that the original version of the code you had in this question behaved differently. The final version of the code you have here gives me a different error from what you posted, more specifically - this:
RuntimeError: Calculated padded input size per channel: (20 x 1). Kernel size: (3 x 768). Kernel size can't be greater than actual input size
I apologize if I misunderstood the situation, but it seems to me that your understanding of what exactly nn.Conv2d layer does is not 100% clear and that is the main source of your struggle. I interpret the part "detailed explanation on 2 layer CNN in Pytorch" you requested as an ask to explain in detail on how that layer works and I hope that after this is done there will be no problem applying it 1 time, 2 times or more.
You can find all the documentation about the layer here, but let me give you a recap which hopefully will help to understand more the errors you're getting.
First of all nn.Conv2d inputs are 4-d tensors of the shape (BatchSize, ChannelsIn, Height, Width) and outputs are 4-d tensors of the shape (BatchSize, ChannelsOut, HeightOut, WidthOut). The simplest way to think about nn.Conv2d is of something applied to 2d images with pixel grid of size Height x Width and having ChannelsIn different colors or features per pixel. Even if your inputs have nothing to do with actual images the behavior of the layer is still the same. Simplest situation is when the nn.Conv2d is not using padding (as in your code). In that case the kernel_size=(kernel_height, kernel_width) argument specifies the rectangle which you can imagine sweeping through Height x Width rectangle of your inputs and producing one pixel for each valid position. Without padding the coordinate of the rectangle's point can be any pair of indicies (x, y) with x between 0 and Height - kernel_height and y between 0 and Width - kernel_width. Thus the output will look like a 2d image of size (Height - kernel_height + 1) x (Width - kernel_width + 1) and will have as many output channels as specified to nn.Conv2d constructor, so the output tensor will be of shape (BatchSize, ChannelsOut, Height - kernel_height + 1, Width - kernel_width + 1).
The parameter groups is not affecting how shapes are changed by the layer - it is only controlling which input channels are used as inputs for the output channels (groups=1 means that every input channel is used as input for every output channel, otherwise input and output channels are divided into corresponding number of groups and only input channels from group i are used as inputs for the output channels from group i).
Now in your current version of the code you have BatchSize = 16 and the output of pre-trained model is (BatchSize, DynamicSize, 768) with DynamicSize depending on the input, e.g. 22. You then introduce additional dimension as axis 1 with unsqueeze and repeat the values along that dimension transforming the tensor of shape (16, 22, 768) into (16, 12, 22, 768). Effectively you are using the output of the pre-trained model as 12-channel (with each channel having same values as others) 2-d images here of size (22, 768), where 22 is not fixed (depends on the batch). Then you apply a nn.Conv2d with kernel size (3, 768) - which means that there is no "wiggle room" for width and output 2-d images will be of size (20, 1) and since your layer has 192 channels final size of the output of first convolution layer has shape (16, 192, 20, 1). Then you try to apply second layer of convolution on top of that with kernel size (3, 768) again, but since your 2-d "image" is now just (20 x 1) there is no valid position to fit (3, 768) kernel rectangle inside a rectangle (20 x 1) which leads to the error message Kernel size can't be greater than actual input size.
Hope this explanation helps. Now to the choices you have to avoid the issue:
(a) is to add padding in such a way that the size of the output does not change compared to the input (I won't go into details here, because I don't think this is what you need).
(b) is to use a smaller kernel in the first and/or second convolution (e.g. if you don't change the first convolution, the only valid width for the second kernel would be 1).
(c) Looking at what you're trying to do my guess is that you actually don't want to use 2d convolution, you want 1d convolution (on the sequence) with every position described by 768 values. When you're using one convolution layer with 768 width kernel (and same 768 width input) you're effectively doing exactly same thing as 1d convolution with 768 input channels, but then if you try to apply second one you have a problem. You can specify kernel width as 1 for the next layer(s) and that will work for you, but a more correct way would be to transpose pre-trained model's output tensor by switching the last dimensions - getting shape (16, 768, DynamicSize) from (16, DynamicSize, 768) and then apply nn.Conv1d layer with 768 input channels and arbitrary ChannelsOut as output channels and 1d kernel_size=3 (meaning you look at 3 consecutive elements of the sequence for convolution). If you do that than without padding input shape of (16, 768, DynamicSize) will become (16, ChannelsOut, DynamicSize-2), and after you apply second Conv1d with e.g. the same settings as first one you'll get a tensor of shape (16, ChannelsOut, DynamicSize-4), etc. (each time the 1d length will shrink by kernel_size-1). You can always change number of channels/kernel_size for each subsequent convolution layer too.
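As a rough sketch of option (c) (illustrative code, assuming BERT's hidden size of 768; the channel counts here are arbitrary and not taken from the question):
import torch
import torch.nn as nn
bert_out = torch.randn(16, 22, 768)      # (BatchSize, DynamicSize, hidden_size)
x = bert_out.transpose(1, 2)             # -> (16, 768, DynamicSize)
conv1 = nn.Conv1d(in_channels=768, out_channels=192, kernel_size=3)
conv2 = nn.Conv1d(in_channels=192, out_channels=48, kernel_size=3)
x = conv1(x)                             # -> (16, 192, DynamicSize - 2)
x = conv2(x)                             # -> (16, 48, DynamicSize - 4)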
| https://stackoverflow.com/questions/62099558/ |
Pytorch can't convert np.ndarray of type numpy.object | I am trying to create a PyTorch data-loader with variable image size. Here is a snippet of my code
def get_imgs(path_to_imgs):
imgs = []
for path in path_to_imgs:
imgs.append(cv2.imread(path))
imgs = np.asarray(imgs)
return imgs
The function above takes a list of paths and loads the images from the path to the list 'imgs'. BTW the images are not equal-sized. The list looks like imgs = [NumPy array, NumPy array ....]. However, when I convert the list to np.asarray it turns the list into dtype = object.
This is my dataloader class
class Dataset(torch.utils.data.Dataset):
def __init__(self, path_to_imgs, path_to_label):
'Initialization'
self.path_to_imgs = path_to_imgs
self.path_to_label = path_to_label
self.imgs = get_imgs(path_to_imgs)
self.label = get_pts(path_to_label)
self.imgs = torch.Tensor(self.imgs) **Error here
# self.imgs = torch.from_numpy(self.imgs) ** I tried this as well. Same error
self.label = torch.Tensor(self.label)
self.len = len(self.imgs)
def __len__(self):
'Denotes the total number of samples'
return self.len
def __getitem__(self, index):
return self.imgs, self.label
When I try to convert the list of images to tensor** it fails giving the following error
can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
I have looked at similar questions here and here, but they were not helpful.
|
def get_imgs(path_to_imgs):
imgs = []
for path in path_to_imgs:
imgs.append(torch.Tensor(cv2.imread(path)))
return imgs
class Dataset(torch.utils.data.Dataset):
def __init__(self, path_to_imgs, path_to_label):
'Initialization'
self.path_to_imgs = path_to_imgs
self.path_to_label = path_to_label
self.imgs = get_imgs(path_to_imgs)
self.label = get_pts(path_to_label)
# padding ops here (https://pytorch.org/docs/stable/nn.html#padding-layers)
# for img in self.imgs:
# ...
self.label = torch.Tensor(self.label)
self.len = len(self.imgs)
def __len__(self):
'Denotes the total number of samples'
return self.len
def __getitem__(self, index):
return self.imgs, self.label
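For the padding step sketched in the comment above, a hypothetical helper (assuming each image is a (H, W, C) tensor as produced from cv2.imread) could look like this:
import torch
import torch.nn.functional as F
def pad_to_same_size(imgs):
    # pad every (H, W, C) tensor to the largest height/width in the list
    max_h = max(img.shape[0] for img in imgs)
    max_w = max(img.shape[1] for img in imgs)
    padded = []
    for img in imgs:
        h, w = img.shape[0], img.shape[1]
        # F.pad reads the pad tuple from the last dim backwards: (C_left, C_right, W_left, W_right, H_top, H_bottom)
        padded.append(F.pad(img, (0, 0, 0, max_w - w, 0, max_h - h)))
    return torch.stack(padded)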
| https://stackoverflow.com/questions/62100388/ |
how convert string to path in opemnmt-py | I use opennmt-py for MT and in the code any time I want to set a path I have to write all directory and it's not good looking when I have long directory. is there any way to set a string as the main directory and just add the file name to the end.
I use google colab to train the model
The code is like:
!onmt_preprocess \\
-train_src //content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//train.en \\
-train_tgt //content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//train.fa \\
-valid_src //content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//dev.en \\
-valid_tgt //content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//dev.fa \\
-save_data //content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//demo//
and the code I want to be like:
path ='//content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//'
!onmt_preprocess \\
-train_src path+'train.en' \\
-train_tgt path+'train.fa' \\
-valid_src path+'dev.en' \\
-valid_tgt path++'dev.fa' \\
-save_data path+'demo//'
or maybe just can write all path in a variable and use it like:
path_train ='//content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//'
!onmt_preprocess \\
-train_src path_train \\
| You may use a mere concatenation:
path='//content//drive//My\ Drive//Colab\ Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//'
!onmt_preprocess \\
-train_src $path'train.en' \\
-train_tgt $path'train.fa' \\
-valid_src $path'dev.en' \\
-valid_tgt $path'dev.fa' \\
-save_data $path'demo//'
Notes:
The variable path must be followed with =, not a space. There must be no spaces around =. The path = 'text' is wrong, path ='text' is wrong, path= 'text' is also wrong.
When you use a variable, prepend it with $: !echo $path'train.en' will print //content//drive//My Drive//Colab Notebooks//NLP//spring99//CA6//Corpora//En2Fa-Translation//train.en
Concatenation means just glueing string literals to variables no need using +, &, etc.
| https://stackoverflow.com/questions/62103582/ |
LSTM cell implementation in Pytorch design choices | I was looking for an implementation of an LSTM cell in Pytorch that I could extend, and I found an implementation of it in the accepted answer here. I will post it here because I'd like to refer to it. There are quite a few implementation details that I do not understand, and I was wondering if someone could clarify.
import math
import torch as th
import torch.nn as nn
class LSTM(nn.Module):
def __init__(self, input_size, hidden_size, bias=True):
super(LSTM, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
self.i2h = nn.Linear(input_size, 4 * hidden_size, bias=bias)
self.h2h = nn.Linear(hidden_size, 4 * hidden_size, bias=bias)
self.reset_parameters()
def reset_parameters(self):
std = 1.0 / math.sqrt(self.hidden_size)
for w in self.parameters():
w.data.uniform_(-std, std)
def forward(self, x, hidden):
h, c = hidden
h = h.view(h.size(1), -1)
c = c.view(c.size(1), -1)
x = x.view(x.size(1), -1)
# Linear mappings
preact = self.i2h(x) + self.h2h(h)
# activations
gates = preact[:, :3 * self.hidden_size].sigmoid()
g_t = preact[:, 3 * self.hidden_size:].tanh()
i_t = gates[:, :self.hidden_size]
f_t = gates[:, self.hidden_size:2 * self.hidden_size]
o_t = gates[:, -self.hidden_size:]
c_t = th.mul(c, f_t) + th.mul(i_t, g_t)
h_t = th.mul(o_t, c_t.tanh())
h_t = h_t.view(1, h_t.size(0), -1)
c_t = c_t.view(1, c_t.size(0), -1)
return h_t, (h_t, c_t)
1- Why multiply the hidden size by 4 for both self.i2h and self.h2h (in the init method)
2- I don't understand the reset method for the parameters. In particular, why do we reset parameters in this way?
3- Why do we use view for h, c, and x in the forward method?
4- I'm also confused about the column bounds in the activations part of the forward method. As an example, why do we upper bound with 3 * self.hidden_size for gates?
5- Where are all the parameters of the LSTM? I'm talking about the Us and Ws here:
|
1- Why multiply the hidden size by 4 for both self.i2h and self.h2h (in the init method)
In the equations you have included, the input x and the hidden state h are used for four calculations, where each of them is a matrix multiplication with a weight. Whether you do four matrix multiplications or concatenate the weights and do one bigger matrix multiplication and separate the results afterwards, has the same result.
input_size = 5
hidden_size = 10
input = torch.randn((2, input_size))
# Two different weights
w_c = torch.randn((hidden_size, input_size))
w_i = torch.randn((hidden_size, input_size))
# Concatenated weights into one tensor
# with size:[2 * hidden_size, input_size]
w_combined = torch.cat((w_c, w_i), dim=0)
# Output calculated by using separate matrix multiplications
out_c = torch.matmul(w_c, input.transpose(0, 1))
out_i = torch.matmul(w_i, input.transpose(0, 1))
# One bigger matrix multiplication with the combined weights
out_combined = torch.matmul(w_combined, input.transpose(0, 1))
# The first hidden_size number of rows belong to w_c
out_combined_c = out_combined[:hidden_size]
# The second hidden_size number of rows belong to w_i
out_combined_i = out_combined[hidden_size:]
# Using torch.allclose because they are equal besides floating point errors.
torch.allclose(out_c, out_combined_c) # => True
torch.allclose(out_i, out_combined_i) # => True
By setting the output size of the linear layer to 4 * hidden_size there are four weights with size hidden_size, so only one layer is needed instead of four. There is not really an advantage of doing this, except maybe a minor performance improvement, mostly for smaller inputs that don't fully exhaust the parallelisations capabilities if done individually.
4- I'm also confused about the column bounds in the activations part of the forward method. As an example, why do we upper bound with 3 * self.hidden_size for gates?
That's where the outputs are separated to correspond to the output of the four individual calculations. The output is the concatenation of [i_t; f_t; o_t; g_t] (not including tanh and sigmoid respectively).
You can get the same separation by splitting the output into four chunks with torch.chunk:
i_t, f_t, o_t, g_t = torch.chunk(preact, 4, dim=1)
But after the separation you would have to apply torch.sigmoid to i_t, f_t and o_t, and torch.tanh to g_t.
5- Where are all the parameters of the LSTM? I'm talking about the Us and Ws here:
The parameters W are the weights in the linear layer self.i2h and U in the linear layer self.h2h, but concatenated.
W_i, W_f, W_o, W_c = torch.chunk(self.i2h.weight, 4, dim=0)
U_i, U_f, U_o, U_c = torch.chunk(self.h2h.weight, 4, dim=0)
3- Why do we use view for h, c, and x in the forward method?
Based on h_t = h_t.view(1, h_t.size(0), -1) towards the end, the hidden states have the size [1, batch_size, hidden_size]. With h = h.view(h.size(1), -1) that gets rid of the first singular dimension to get size [batch_size, hidden_size]. The same could be achieved with h.squeeze(0).
2- I don't understand the reset method for the parameters. In particular, why do we reset parameters in this way?
Parameter initialisation can have a big impact on the model's learning capability. The general rule for the initialisation is to have values close to zero without being too small. A common initialisation is to draw from a normal distribution with mean 0 and variance of 1 / n, where n is the number of neurons, which in turn means a standard deviation of 1 / sqrt(n).
In this case it uses a uniform distribution instead of a normal distribution, but the general idea is similar. Determining the minimum/maximum value based on the number of neurons but avoiding to make them too small. If the minimum/maximum value would be 1 / n the values would get very small, so using 1 / sqrt(n) is more appropriate, e.g. 256 neurons: 1 / 256 = 0.0039 whereas 1 / sqrt(256) = 0.0625.
Initializing neural networks provides some explanations of different initialisations with interactive visualisations.
| https://stackoverflow.com/questions/62104659/ |
RuntimeError: Error(s) in loading state_dict for Actor - torch.load() | I have created a custom environment in open ai gym and i am facing error while loading the weights Could some one help me to resolve the issue . I am training a TD3 network in a custom environment and i have trained successfully but while inferencing i am facing this issue
class Actor(nn.Module):
def __init__(self, state_dim, action_dim, max_action):
super(Actor, self).__init__()
self.layer_1 = nn.Linear(state_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.max_action = max_action
def forward(self, x):
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
return x
class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super(Critic, self).__init__()
# Defining the first Critic neural network
self.layer_1 = nn.Linear(state_dim + action_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, 1)
# Defining the second Critic neural network
self.layer_4 = nn.Linear(state_dim + action_dim, 400)
self.layer_5 = nn.Linear(400, 300)
self.layer_6 = nn.Linear(300, 1)
def forward(self, x, u):
xu = torch.cat([x, u], 1)
# Forward-Propagation on the first Critic Neural Network
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
# Forward-Propagation on the second Critic Neural Network
x2 = F.relu(self.layer_4(xu))
x2 = F.relu(self.layer_5(x2))
x2 = self.layer_6(x2)
return x1, x2
def Q1(self, x, u):
xu = torch.cat([x, u], 1)
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
return x1
# Selecting the device (CPU or GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Building the whole Training Process into a class
class TD3(object):
def __init__(self, state_dim, action_dim, max_action):
self.actor = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target.load_state_dict(self.actor.state_dict())
self.actor_optimizer = torch.optim.Adam(self.actor.parameters())
self.critic = Critic(state_dim, action_dim).to(device)
self.critic_target = Critic(state_dim, action_dim).to(device)
self.critic_target.load_state_dict(self.critic.state_dict())
self.critic_optimizer = torch.optim.Adam(self.critic.parameters())
self.max_action = max_action
def select_action(self, state):
state = torch.Tensor(state.reshape(1, -1)).to(device)
return self.actor(state).cpu().data.numpy().flatten()
def train(self, replay_buffer, iterations, batch_size=100, discount=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5, policy_freq=2):
for it in range(iterations):
# Step 4: We sample a batch of transitions (s, s’, a, r) from the memory
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = replay_buffer.sample(batch_size)
state = torch.Tensor(batch_states).to(device)
next_state = torch.Tensor(batch_next_states).to(device)
action = torch.Tensor(batch_actions).to(device)
reward = torch.Tensor(batch_rewards).to(device)
done = torch.Tensor(batch_dones).to(device)
# Step 5: From the next state s’, the Actor target plays the next action a’
next_action = self.actor_target(next_state)
# Step 6: We add Gaussian noise to this next action a’ and we clamp it in a range of values supported by the environment
noise = torch.Tensor(batch_actions).data.normal_(0, policy_noise).to(device)
noise = noise.clamp(-noise_clip, noise_clip)
next_action = (next_action + noise).clamp(-self.max_action, self.max_action)
# Step 7: The two Critic targets take each the couple (s’, a’) as input and return two Q-values Qt1(s’,a’) and Qt2(s’,a’) as outputs
target_Q1, target_Q2 = self.critic_target(next_state, next_action)
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2)
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
target_Q = reward + ((1 - done) * discount * target_Q).detach()
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
current_Q1, current_Q2 = self.critic(state, action)
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
if it % policy_freq == 0:
actor_loss = -self.critic.Q1(state, self.actor(state)).mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Actor target by polyak averaging
for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Critic target by polyak averaging
for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Making a save method to save a trained model
def save(self, filename, directory):
torch.save(self.actor.state_dict(), '%s/%s_actor.pth' % (directory, filename))
torch.save(self.critic.state_dict(), '%s/%s_critic.pth' % (directory, filename))
# Making a load method to load a pre-trained model
def load(self, filename, directory):
self.actor.load_state_dict(torch.load('%s/%s_actor.pth' % (directory, filename)))
self.critic.load_state_dict(torch.load('%s/%s_critic.pth' % (directory, filename)))
def evaluate_policy(policy, eval_episodes=10):
avg_reward = 0.
for _ in range(eval_episodes):
obs = env.reset()
done = False
while not done:
action = policy.select_action(np.array(obs))
obs, reward, done, _ = env.step(action)
avg_reward += reward
avg_reward /= eval_episodes
print ("---------------------------------------")
print ("Average Reward over the Evaluation Step: %f" % (avg_reward))
print ("---------------------------------------")
return avg_reward
env_name = "Pygame-v0"
seed = 0
file_name = "%s_%s_%s" % ("TD3", env_name, str(seed))
print ("---------------------------------------")
print ("Settings: %s" % (file_name))
print ("---------------------------------------")
eval_episodes = 10
save_env_vid = True
env = gym.make(env_name)
max_episode_steps = env._max_episode_steps
if save_env_vid:
env = wrappers.Monitor(env, monitor_dir, force = True)
env.reset()
env.seed(seed)
torch.manual_seed(seed)
np.random.seed(seed)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
max_action = float(env.action_space.high[0])
policy = TD3(state_dim, action_dim, max_action)
#policy.load(file_name, './pytorch_models/')
policy.load(file_name,"/content/gdrive/My Drive/reinforce/gym_game/pytorch_models")
_ = evaluate_policy(policy, eval_episodes=eval_episodes)
Traceback:
I am facing a runtime error while loading the state_dict for the actor model. I searched Google but couldn't find similar issues.
RuntimeError: Error(s) in loading state_dict for Actor:
Missing key(s) in state_dict: "layer_1.weight", "layer_1.bias", "layer_2.weight", "layer_2.bias", "layer_3.weight", "layer_3.bias".
Unexpected key(s) in state_dict: "encoder.0.weight", "encoder.0.bias", "encoder.2.weight", "encoder.2.bias", "encoder.2.running_mean", "encoder.2.running_var", "encoder.2.num_batches_tracked", "encoder.3.weight", "encoder.3.bias", "encoder.5.weight", "encoder.5.bias", "encoder.5.running_mean", "encoder.5.running_var", "encoder.5.num_batches_tracked", "encoder.6.weight", "encoder.6.bias", "encoder.8.weight", "encoder.8.bias", "encoder.8.running_mean", "encoder.8.running_var", "encoder.8.num_batches_tracked", "encoder.10.weight", "encoder.10.bias", "encoder.12.weight", "encoder.12.bias", "encoder.12.running_mean", "encoder.12.running_var", "encoder.12.num_batches_tracked", "encoder.13.weight", "encoder.13.bias", "encoder.15.weight", "encoder.15.bias", "encoder.15.running_mean", "encoder.15.running_var", "encoder.15.num_batches_tracked", "encoder.16.weight", "encoder.16.bias", "linear.0.weight", "linear.0.bias", "linear.2.weight", "linear.2.bias".
| it was answered by @MicaelJungo
The weights you saved were not from the model you are using here. Make sure to load the correct checkpoint, which was created when training this particular model.
| https://stackoverflow.com/questions/62108063/ |
Why does nn.CrossEntropyLoss throw "TypeError: iteration over a 0-d tensor" when I verify inputs to be non-0-dimensional? | I am using PyTorch version 1.5.0.
When I pass an input torch tensor of size [8,21,400,400] with a target of size [8,400,400], the program raises a TypeError: iteration over a 0-d tensor. However, the dimensions of the arguments are 4 and 3 respectively.
What could be causing this error?
The traceback points to torch\tensor.py's iter function.
Traceback (most recent call last):
File "train.py", line 108, in <module>
loss, accuracy = lossLayer(pred2, targetBatch)
File "C:\Users\PC\anaconda3\lib\site-packages\torch\tensor.py", line 462, in __iter__
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor
| You get the error because nn.CrossEntropyLoss just returns one torch.Tensor, not a pair (it doesn't return accuracy). And this tensor is 0-dimensional, i.e. one number (unless you override the reduction argument to 'none' to get per-element loss). So when you try to assign its value to two variables loss, accuracy, Python tries to iterate over this tensor variable, hence the error message. Simply use loss = lossLayer(pred2, targetBatch).
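A small illustrative sketch of the difference, using the shapes from the question:
import torch
import torch.nn as nn
pred = torch.randn(8, 21, 400, 400)                              # (batch, classes, H, W)
target = torch.randint(0, 21, (8, 400, 400))                     # (batch, H, W)
loss = nn.CrossEntropyLoss()(pred, target)                       # 0-d tensor, a single number
per_pixel = nn.CrossEntropyLoss(reduction='none')(pred, target)  # shape [8, 400, 400]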
| https://stackoverflow.com/questions/62109287/ |
Keras vs Pytorch NN code small differences, need clarification | I have the Keras and Pytorch code for the same neural network. Some of the lines are switched around between the two.
I am wondering why, for the Pytorch version, max pooling comes before batch normalization and ReLU activation. In Keras it comes after those two lines. And for flattening, I'm also confused about how Pytorch used 64 * 7 * 7 (where do the 7s come from?).
Here's the Keras version of the Shallow net Alex net:
def shallownet(nb_classes):
global img_size
model = Sequential()
model.add(Conv2D(64, (5, 5), input_shape=img_size, data_format='channels_first'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))
model.add(Conv2D(64, (5, 5), padding='same', data_format='channels_first'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))
model.add(Flatten())
model.add(Dense(384))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(192))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
return model
and the Pytorch version:
class AlexNet(nn.Module):
def __init__(self, num_classes=10):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=5, padding=2,
bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=5, padding=2, bias=False),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
)
self.classifier = nn.Sequential(
nn.Linear(64 * 7 * 7, 384, bias=False),
nn.BatchNorm1d(384),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(384, 192, bias=False),
nn.BatchNorm1d(192),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(192, num_classes)
)
self.regime = {
0: {'optimizer': 'SGD', 'lr': 1e-3,
'weight_decay': 5e-4, 'momentum': 0.9},
60: {'lr': 1e-2},
120: {'lr': 1e-3},
180: {'lr': 1e-4}
}
def forward(self, x):
x = self.features(x)
x = x.view(-1, 64 * 7 * 7)
x = self.classifier(x)
return F.log_softmax(x)
def cifar10_shallow(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 10)
return AlexNet(num_classes)
def cifar100_shallow(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 100)
return AlexNet(num_classes)
| Max pooling downsamples the data by picking the maximum of a certain pool of values. Comparisons between values will not be affected by batch normalization and ReLU activation because both are monotonically non-decreasing functions, so they preserve the ordering of values.
relu(x) = max(0, x)
bn(x) = (x - mu) / sigma
Therefore, it doesn't really matter if max pool comes after or before those two layers (it might be more efficient to have it before).
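A quick illustrative check of that claim for ReLU and max pooling:
import torch
import torch.nn.functional as F
x = torch.randn(1, 3, 8, 8)
a = F.relu(F.max_pool2d(x, kernel_size=2))
b = F.max_pool2d(F.relu(x), kernel_size=2)
print(torch.allclose(a, b))  # True: ReLU preserves the ordering, so it commutes with max pooling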
Regarding the flattening, I believe the 7s are the spatial dimensions of the layer before Flatten() i.e. H = W = 7. Thus, the total number of values is equal to the spatial dimensions times the channel size which is 64 * 7 * 7.
| https://stackoverflow.com/questions/62109679/ |
Why does the BERT NSP head linear layer have two outputs? | Here's the code in question.
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L491
class BertOnlyNSPHead(nn.Module):
def __init__(self, config):
super().__init__()
self.seq_relationship = nn.Linear(config.hidden_size, 2)
def forward(self, pooled_output):
seq_relationship_score = self.seq_relationship(pooled_output)
return seq_relationship_score
I think it was just ranking how likely one sentence would follow another? Wouldn't it be one score?
| The two scores are meant to represent unnormalized probabilities (logits) from the model. If we softmax them, we get our predictions, where index 0 indicates next sentence, and index 1 indicates random.
This is just a stylistic choice on the HuggingFace author's behalf, probably to keep the loss function consistent.
Here's the forward method of BertForPretraining, where self.cls is BertOnlyNSPHead:
prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)
outputs = (prediction_scores, seq_relationship_score,) + outputs[
2:
] # add hidden states and attention if they are here
if masked_lm_labels is not None and next_sentence_label is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
total_loss = masked_lm_loss + next_sentence_loss
outputs = (total_loss,) + outputs
it's convenient to use the same CrossEntropyLoss for both MLM and NSP.
As you describe, it would be equivalent to have NSP produce a single output, then feed that number through a sigmoid to get probability of the next sentence. We can then train with BCEWithLogitsLoss. (where BCE is just the special binary case of cross entropy loss).
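To make that equivalence concrete, here is a small sketch (not from the original answer) showing that cross entropy over two logits matches BCE on their difference:
import torch
import torch.nn as nn
logits = torch.randn(4, 2)                  # two NSP scores per example
labels = torch.randint(0, 2, (4,))
ce = nn.CrossEntropyLoss()(logits, labels)
bce = nn.BCEWithLogitsLoss()(logits[:, 1] - logits[:, 0], labels.float())
print(torch.allclose(ce, bce))              # True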
| https://stackoverflow.com/questions/62109957/ |
What is the least total batch size for SyncBatchNorm | For the normal BatchNorm, the least batch size per GPU is 2.
I wonder if I use the SyncBatchNorm, can I use batch_size=1 for every GPU with more than a single GPU?
I.e, the total_batch_size is more than 1 but batch_size_per_gpu is 1.
I would appreciate answers for any deep learning framework, pytorch, tensorflow, mxnet, etc
| For PyTorch, using batch_size_per_gpu=1 and more than one GPU is fine.
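A minimal sketch of enabling it in PyTorch (SyncBatchNorm requires a DistributedDataParallel setup, which also needs torch.distributed.init_process_group and is only hinted at here):
import torch
import torch.nn as nn
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)   # replace BatchNorm2d with SyncBatchNorm
# model = nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[local_rank])
# each process/GPU can then use batch_size_per_gpu = 1; statistics are synchronised across GPUs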
| https://stackoverflow.com/questions/62110937/ |
Trouble opening audio files stored on S3 in SageMaker | I stored like 300 GB of audio data (mp3/wav mostly) on Amazon S3 and am trying to access it in a SageMaker notebook instance to do some data transformations. I'm trying to use either torchaudio or librosa to load a file as a waveform. torchaudio expects the file path as the input, librosa can either use a file path or file-like object. I tried using s3fs to get the url to the file but torchaudio doesn't recognize it as a file. And apparently SageMaker has problems installing librosa so I can't use that. What should I do?
| I ended up not using SageMaker for this, but for anybody else having similar problems, I solved this by opening the file using s3fs and writing it to a tempfile.NamedTemporaryFile. This gave me a file path that I could pass into either torchaudio.load or librosa.core.load. This was also important because I wanted the extra resampling functionality of librosa.core.load, but it doesn't accept file-like objects for loading mp3s.
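A rough sketch of that approach (the bucket/key names here are made up, and it assumes s3fs and librosa are installed):
import tempfile
import s3fs
import librosa
fs = s3fs.S3FileSystem()
with tempfile.NamedTemporaryFile(suffix=".mp3") as tmp:
    with fs.open("my-bucket/audio/clip.mp3", "rb") as remote:
        tmp.write(remote.read())
    tmp.flush()
    # tmp.name is a real file path, so it also works for torchaudio.load
    waveform, sr = librosa.core.load(tmp.name, sr=16000)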
| https://stackoverflow.com/questions/62111580/ |
Load data into GPU directly using PyTorch | In training loop, I load a batch of data into CPU and then transfer it to GPU:
import torch.utils as utils
train_loader = utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True, num_workers=4, pin_memory=True)
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
This way of loading data is very time-consuming. Any way to directly load data into GPU without transfer step ?
| You can load all the data into one tensor and then move it to GPU memory (assuming that you have enough memory). When you need a batch, slice it from the tensor that is already in GPU memory. Hope it helps.
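A minimal sketch of that idea, assuming the whole training set fits in GPU memory (the tensor names are illustrative):
import torch
device = torch.device('cuda')
all_inputs = torch.randn(10000, 3, 32, 32).to(device)   # move everything once, before the loop
all_labels = torch.randint(0, 10, (10000,)).to(device)
batch_size = 128
for i in range(0, all_inputs.size(0), batch_size):
    inputs = all_inputs[i:i + batch_size]                # already on the GPU, no per-batch copy
    labels = all_labels[i:i + batch_size]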
| https://stackoverflow.com/questions/62111599/ |
Can't use with import in Pytorch | When I import torch, there's a problem like this:
C:\Users\ruiha\Desktop\flappy_DQL>python flappy.py
Traceback (most recent call last):
  File "flappy.py", line 2, in <module>
    import torch
  File "C:\Users\ruiha\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\__init__.py", line 81, in <module>
    ctypes.CDLL(dll)
  File "C:\Users\ruiha\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
| Where are you doing your development? I ask because I see that you are on Windows, where it is much more difficult to manage dll files (compared to Linux). Thus, I would avoid setting up your own bare environment (it would require debugging your OS) and use Anaconda, a Python manager with a focus on scientific computing. Install Anaconda, setup an environment (this is a 'virtual' environment managed by Anaconda), and you will be ready. Simply install torch by
conda install PyTorch -c PyTorch
You can then continue writing your ML app on Jupyter, terminal, etc. by using the Conda runtime to execute your file.
| https://stackoverflow.com/questions/62111920/ |
How to efficiently find correspondences between two point sets without nested for loop in Pytorch? | I now have two point sets (tensor) A and B that shape like
A.size() >>(50, 3) , example: [ [0, 0, 0], [0, 1, 2], ..., [1, 1, 1]]
B.size() >>(10, 3)
where the first dimension stands for number of points and the second dim stands for coordinates (x,y,z)
To some extent, the question could also be simplified into "Finding common elements between two tensors". Is there a quick way to do this without a nested loop?
| You can quickly compute all the 50x10 distances using:
d2 = ((A[:, None, :] - B[None, ...])**2).sum(dim=2)
Once you have all the pair-wise distances, you can select "similar" ones if the distance does not exceed a threshold thr:
(d2 < thr).nonzero()
returns pairs of a-idx, b-idx of "similar" points.
If you want to match the points exactly, you can do instead:
((A[:, None, :] == B[None, ...]).all(dim=2)).nonzero()
| https://stackoverflow.com/questions/62113137/ |
When to use padding in Conv2d() and when to do ReflectionPad2d() Pytorch | I have two PyTorch models that are equivalent (I think), the only difference between them is the padding:
import torch
import torch.nn as nn
i = torch.arange(9, dtype=torch.float).reshape(1,1,3,3)
# First model:
model1 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode='reflection')
# tensor([[[[-0.6095, -0.0321, 2.2022],
# [ 0.1018, 1.7650, 5.5392],
# [ 1.7988, 3.9165, 5.6506]]]], grad_fn=<MkldnnConvolutionBackward>)
# Second model:
model2 = nn.Sequential(nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(1, 1, kernel_size=3))
# tensor([[[[1.4751, 1.5513, 2.6566],
# [4.0281, 4.1043, 5.2096],
# [2.6149, 2.6911, 3.7964]]]], grad_fn=<MkldnnConvolutionBackward>)
I was wondering why and when you use both approaches, the output of both is different but as I see it they should be the same, because the padding is of type reflection.
Would appreciate some help in understanding it.
EDIT
After what @Ash said, I wanted to check whether or not the weights had influence, so I pinned all of them to the same value and there is still a difference between the 2 methods:
import torch
import torch.nn as nn
i = torch.arange(9, dtype=torch.float).reshape(1,1,3,3)
# First model:
model1 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode='reflection')
model1.weight.data = torch.full(model1.weight.data.shape, 0.4)
print(model1(i))
print(model1.weight)
# tensor([[[[ 3.4411, 6.2411, 5.0412],
# [ 8.6411, 14.6411, 11.0412],
# [ 8.2411, 13.4411, 9.8412]]]], grad_fn=<MkldnnConvolutionBackward>)
# Parameter containing:
# tensor([[[[0.4000, 0.4000, 0.4000],
# [0.4000, 0.4000, 0.4000],
# [0.4000, 0.4000, 0.4000]]]], requires_grad=True)
# Second model:
model2 = [nn.ReflectionPad2d((1, 1, 1, 1)),
nn.Conv2d(1, 1, kernel_size=3)]
model2[1].weight.data = torch.full(model2[1].weight.data.shape, 0.4)
model2 = nn.Sequential(*model2)
print(model2(i))
print(model2[1].weight)
# tensor([[[[ 9.8926, 11.0926, 12.2926],
# [13.4926, 14.6926, 15.8926],
# [17.0926, 18.2926, 19.4926]]]], grad_fn=<MkldnnConvolutionBackward>)
# Parameter containing:
# tensor([[[[0.4000, 0.4000, 0.4000],
# [0.4000, 0.4000, 0.4000],
# [0.4000, 0.4000, 0.4000]]]], requires_grad=True)
|
the output of both is different but as I see it they should be the same
I don't think that the different outputs that you get are only related to how the reflective padding is implemented. In the code snippet that you provide, the values of the weights and biases of the convolutions from model1 and model2 differ, since they are initialized randomly and you don't seem to fix their values in the code.
EDIT:
Following your new edit, it seems that for versions prior to 1.5, looking at the implementation of the forward pass in <your_torch_install>/nn/modules/conv.py shows that "reflection" is not supported. It won't complain about arbitrary strings instead of "reflection" either, but will silently default to zero-padding.
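On PyTorch 1.5 or later the supported mode name is 'reflect', and with identical weights the two formulations from the question then match; an illustrative check (not part of the original answer, assuming such a version):
import torch
import torch.nn as nn
x = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, padding_mode='reflect')  # note: 'reflect', not 'reflection'
ref = nn.Sequential(nn.ReflectionPad2d(1), nn.Conv2d(1, 1, kernel_size=3))
ref[1].weight.data = conv.weight.data.clone()
ref[1].bias.data = conv.bias.data.clone()
print(torch.allclose(conv(x), ref(x)))  # True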
| https://stackoverflow.com/questions/62113314/ |
Memory Leak in loop pytorch | The following loop is not discarding any of the tensors it makes after each iteration of the loop leading to a memory leak. It is due to the use of grad_loss.backward() in the below code. Is there anything I'm missing or is there an issue with pytorch.
for (images, one_hot_labels) in tqdm(batched_train_data):
# I collect batch size here because the last batch may have a smaller batch_size
images = images.to(device)
one_hot_labels = one_hot_labels.to(device)
batch_size = images.shape[0]
images.requires_grad = True
optimizer.zero_grad()
# as images is not a parameters optimizer.zero_grad() won't reset it's gradient
if images.grad is not None:
images.grad.data.zero_()
probabilities = model.forward(images)
# I want to use .backward() twice rather than autograd because I want to accumulate the gradients
loss = loss_func(probabilities, one_hot_labels)
loss.backward(create_graph=True)
grad_loss = grad_loss_func(images.grad)
grad_loss.backward()
optimizer.step()
labels = one_hot_labels.detach().argmax(dim=1)
predictions = probabilities.detach().argmax(dim=1)
num_correct = int(predictions.eq(labels).sum())
train_data_length += batch_size
train_correct += num_correct
train_loss += float(loss.detach()) * batch_size
writer.add_graph(model, images)
writer.close()
# To stop memory leaks
del images
del one_hot_labels
del probabilities
del loss
del grad_loss
del labels
del predictions
del num_correct
| To fix it you need to replace
images.grad.data.zero_()
with
images.grad = None
I believe this is because doing images.grad.data.zero_() does not remove any computation graph associated with images therefore allowing the graph to grow as you loop through.
On a separate note, I've also been advised that you should avoid operating upon .data whenever possible as it's unsafe to do so.
| https://stackoverflow.com/questions/62115443/ |
Extract features from pretrained resnet50 in pytorch | Hy guys, i want to extract the in_features of Fully connected layer of my pretrained resnet50.
I create before a method that give me the vector of features:
def get_vector(image):
#layer = model._modules.get('fc')
layer = model.fc
my_embedding = torch.zeros(2048) #2048 is the in_features of FC , output of avgpool
def copy_data(m, i, o):
my_embedding.copy_(o.data)
h = layer.register_forward_hook(copy_data)
tmp = model(image)
h.remove()
# return the vector
return my_embedding
after I call this method here:
column = ["FlickrID", "Features"]
path = "./train_dataset/train_imgs/"
pathCSV = "./train_dataset/features/img_info_TRAIN.csv"
f_id=[]
features_extr=[]
df = pd.DataFrame(columns=column)
tr=transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
test = Dataset(path, pathCSV, transform=tr)
test_loader = DataLoader(test, batch_size=1, num_workers=2, shuffle = False)
#Leggiamo le immagini
for batch in test_loader:
nome = batch['FlickrID']
f_id.append(nome)
image = batch['image']
#print(image)
with torch.no_grad():
pred = get_vector(image)
features_extr.append(pred)
df["FlickrID"] = f_id
df["Features"] = features_extr
df.to_hdf("Places.h5", key='df', mode='w')
I have an error like this:
output with shape [2048] doesn't match the broadcast shape [1, 2048, 1, 2048]
How can I get the in_features of the fully connected layer of this resnet50?
The Dataset is a customized Dataset class.
Sorry for my bad english
| The model takes batched inputs, that means the input to the fully connected layer has size [batch_size, 2048]. Because you are using a batch size of 1, that becomes [1, 2048]. Therefore that doesn't fit into a the tensor torch.zeros(2048), so it should be torch.zeros(1, 2048) instead.
You are also trying to use the output (o) of the layer model.fc instead of the input (i).
Besides that, using hooks is overly complicated for this and a much easier way to get features is to modify the model by replacing model.fc with nn.Identity, which just returns the input as the output, and since the features are its input, the output of the entire model will be the features.
model.fc = nn.Identity()
features = model(image)
| https://stackoverflow.com/questions/62117707/ |
Testing an implementation of an LSTM in Pytorch | I'm trying to use the Pytorch implementation of an LSTM here. I'm including it here for reference. It consists of two classes, LSTMCell and LSTM, where LSTMCell is just a single unit and LSTM puts stacks multiple units together to create a full LSTM model
import math
import torch as th
import torch.nn as nn
class LSTMCell(nn.Module):
def __init__(self, input_size, hidden_size, bias=True):
super(LSTM, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.bias = bias
self.i2h = nn.Linear(input_size, 4 * hidden_size, bias=bias)
self.h2h = nn.Linear(hidden_size, 4 * hidden_size, bias=bias)
self.reset_parameters()
def reset_parameters(self):
std = 1.0 / math.sqrt(self.hidden_size)
for w in self.parameters():
w.data.uniform_(-std, std)
def forward(self, x, hidden):
if hidden is None:
hidden = self._init_hidden(x)
h, c = hidden
h = h.view(h.size(1), -1)
c = c.view(c.size(1), -1)
x = x.view(x.size(1), -1)
# Linear mappings
preact = self.i2h(x) + self.h2h(h)
# activations
gates = preact[:, :3 * self.hidden_size].sigmoid()
g_t = preact[:, 3 * self.hidden_size:].tanh()
i_t = gates[:, :self.hidden_size]
f_t = gates[:, self.hidden_size:2 * self.hidden_size]
o_t = gates[:, -self.hidden_size:]
c_t = th.mul(c, f_t) + th.mul(i_t, g_t)
h_t = th.mul(o_t, c_t.tanh())
h_t = h_t.view(1, h_t.size(0), -1)
c_t = c_t.view(1, c_t.size(0), -1)
return h_t, (h_t, c_t)
@staticmethod
def _init_hidden(input_):
h = th.zeros_like(input_.view(1, input_.size(1), -1))
c = th.zeros_like(input_.view(1, input_.size(1), -1))
return h, c
class LSTM(nn.Module):
def __init__(self, input_size, hidden_size, bias=True):
super().__init__()
self.lstm_cell = LSTMCell(input_size, hidden_size, bias)
def forward(self, input_, hidden=None):
# input_ is of dimensionalty (1, time, input_size, ...)
outputs = []
for x in torch.unbind(input_, dim=1):
hidden = self.lstm_cell(x, hidden)
outputs.append(hidden[0].clone())
return torch.stack(outputs, dim=1)
I'm doing the following simple test:
x = torch.randn(1, 3, 2, 4)
model = LSTM(4, 5, False)
model(x)
and I get the following error. What exactly is the problem here?
TypeError Traceback (most recent call last)
<ipython-input-33-09e5544a61fc> in <module>
----> 1 model = LSTM(4, 5, False)
<ipython-input-30-9ad06cd4b768> in __init__(self, input_size, hidden_size, bias)
3 def __init__(self, input_size, hidden_size, bias=True):
4 super().__init__()
----> 5 self.lstm_cell = LSTMCell(input_size, hidden_size, bias)
6
7 def forward(self, input_, hidden=None):
<ipython-input-29-c91ddfb9dfae> in __init__(self, input_size, hidden_size, bias)
6
7 def __init__(self, input_size, hidden_size, bias=True):
----> 8 super(LSTM, self).__init__()
9 self.input_size = input_size
10 self.hidden_size = hidden_size
TypeError: super(type, obj): obj must be an instance or subtype of type
| The first argument to super() should be class itself, not a different class.
class LSTMCell(nn.Module):
def __init__(self, input_size, hidden_size, bias=True):
super(LSTM, self).__init__()
# ^^^^ self is not an instance of LSTM but LSTMCell
It should be:
super(LSTMCell, self).__init__()
Since Python 3 you can omit the arguments to super to get the same result (as you have done in the LSTM class):
super().__init__()
| https://stackoverflow.com/questions/62118012/ |
Implementation Issue: Deep ConvNet for Pattern Recognition | I'm trying to implement a pattern recognition model using a fully convolutional network (fig 1 in https://www.sciencedirect.com/science/article/pii/S0031320318304370, I was able to get the full text without signing in or anything but if it's a problem I can attach a picture too!) but I'm getting a size error when moving from the final Conv2D layer to the first fc_layer.
Here is my error message:
RuntimeError: size mismatch, m1: [4 x 1024], m2: [4 x 1024] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283
Originally, as in the figure, my first linear layer was:
nn.Linear(4*4*512, 1024)
but after getting the size mismatch, I changed it to:
nn.Linear(4,1024)
Now, I have a strange error message as written above.
For reference (if it helps), here is my code:
import torch.nn as nn
import torch.utils.model_zoo as model_zoo
class convnet(nn.Module):
def __init__(self, num_classes=1000):
super(convnet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=3, stride=2, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=3, padding=1),
nn.MaxPool2d(kernel_size=1),
nn.Conv2d(64, 128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(128, 128, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2),# stride=2),
nn.Conv2d(128, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2), #stride=2),
nn.Conv2d(256, 512, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(512, 512, kernel_size=3, padding=1),
nn.ReLU(inplace=True), #nn.Dropout(p=0.5)
)
self.classifier = nn.Sequential(
nn.Linear(4, 1024),
nn.Dropout(p=0.5),
nn.ReLU(inplace=True),
#nn.Dropout(p=0.5),
nn.Linear(1024, 1024),
nn.ReLU(inplace=True),
nn.Linear(1024, num_classes),
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x,1)
x = self.classifier(x)
return x
I suspect it's an issue with the padding and stride.
Thanks!
| The error is from a matrix multiplication, where m1 should be an m x n matrix and m2 an n x p matrix and the result would be an m x p matrix. In your case it's 4 x 1024 and 4 x 1024, but that doesn't work since 1024 != 4.
That means your input to the first linear layer has size [4, 1024] (4 being the batch size), therefore the input features of the first linear layer should be 1024.
self.classifier = nn.Sequential(
nn.Linear(1024, 1024),
nn.Dropout(p=0.5),
nn.ReLU(inplace=True),
#nn.Dropout(p=0.5),
nn.Linear(1024, 1024),
nn.ReLU(inplace=True),
nn.Linear(1024, num_classes),
)
If you are uncertain how many features your input has, you can print out its size just before the layer:
x = self.features(x)
x = torch.flatten(x,1)
print(x.size()) # => torch.Size([4, 1024])
x = self.classifier(x)
| https://stackoverflow.com/questions/62122188/ |
Test Time Augmentation throwing a value error | I am trying to use the Test Time Augmentation on my Classifier:
log_preds,y = learn.TTA(scale=1.1, ds_type=DatasetType.Valid, with_loss=True)
And this is the error that it threw:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-f33d9737819a> in <module>()
----> 1 log_preds,y = learn.TTA(scale=1.1, ds_type=DatasetType.Valid, with_loss=True)
2 probs = np.mean(np.exp(log_preds),0)
3
4 accuracy(probs, y)
ValueError: too many values to unpack (expected 2)
Initially, I tried to find a way to use my custom test set in TTA but couldn’t figure out how to do it and DatasetType.Test was throwing an error so I decided to go with DatasetType.Valid and after running 8 epochs, I got the above error.
| The error message indicates that learn.TTA returns more than 2 values, but you only unpacked log_preds and y from it.
You need to find out what exactly learn.TTA returns.
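For example, a quick illustrative way to inspect it (this snippet is a suggestion, not verified against a specific fastai version):
out = learn.TTA(scale=1.1, ds_type=DatasetType.Valid, with_loss=True)
print(type(out), len(out))   # with_loss=True likely adds the per-item losses as an extra element
log_preds, y = out[0], out[1]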
| https://stackoverflow.com/questions/62124141/ |
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select site:stackoverflow.com | I am using BERT on the SMILE dataset. I have written the following code; can you guide me on where I am going wrong?
I have written training code which runs correctly, but when I try to run the evaluation code for validation it gives an error. I tried to pass the parameters directly to CUDA, but I am still facing the issue.
'''
def evaluate(dataloader_val):
print("in evaluate")
model.eval()
loss_val_total = 0
predictions, true_value = [],[]
for batch in dataloader_val:
print("in for loop of dataloader")
barch = tuple(b.to(device) for b in batch)
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
'labels' : batch[2],
}
with torch.no_grad():
outputs = model(**inputs)
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
print("before logit")
logits = logits.to(device)
print("in the for batch evaluate: ",logits)
label_ids = inputs['labels'].to(device)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val)
predictions = np.concatenate(predictions, axis = 0)
true_vals = np.concatenate(true_vals,axis = 0)
return loss_val_avg, predictions, true_vals
'''
and another function is
'''
for epoch in tqdm(range(1, epochs+1)):
model.train()
loss_train_total = 0
progress_bar = tqdm(dataloader_train,
desc = 'Epoch {:1d}'.format(epoch),
leave = False,
disable = False)
for batch in progress_bar:
model.zero_grad()
batch = tuple(b.to(device) for b in batch)
inputs = {
'input_ids' : batch[0],
'attention_mask' : batch[1],
'labels' : batch[2]
}
outputs = model(**inputs)
loss = outputs[0]
loss_train_total += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
progress_bar.set_postfix({'training_loss' : '{:.3f}'.format(loss.item()/len(batch))})
torch.save(model.state_dict(), f'/content/drive/My Drive/Bert/Coursera/SMILE/Bert_ft_epoch{epoch}.model')
tqdm.write(f'\n Epoch {epoch}')
loss_train_avg = loss_train_total / len(dataloader_train)
tqdm.write(f'Training Loss: {loss_train_avg}')
val_loss, predictions, true_vals = evaluate(dataloader_val)
val_f1 = f1_score_func(predictions, true_vals)
tqdm.write(f'Validation loss : {val_loss}')
tqdm.write(f'F1 score(weighted): {val_f1}')
'''
| You have a typo in your evaluation function:
barch = tuple(b.to(device) for b in batch)
You assign the gpu data to barch instead of batch.
| https://stackoverflow.com/questions/62125405/ |
What is the difference between mm and spmm in Pytorch | What is the difference between mm and spmm in Pytorch? I know that spmm does sparse matrix multiplication, but what exactly does that mean? Why would you ever choose mm as opposed to spmm if spmm has the potential to save space?
| In order to use spmm you need your tensor arguments to actually be of sparse type.
Although the torch.sparse representation does have the potential of saving space, sparse support does not yet cover all tensor operations and functions.
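A small illustrative example of the sparse requirement:
import torch
indices = torch.tensor([[0, 1, 1], [2, 0, 2]])
values = torch.tensor([3., 4., 5.])
sparse = torch.sparse_coo_tensor(indices, values, (2, 3))   # sparse 2x3 matrix in COO format
dense = torch.randn(3, 4)
out = torch.sparse.mm(sparse, dense)                        # sparse x dense -> dense [2, 4]
# torch.mm(sparse.to_dense(), dense) gives the same result but materialises all the zeros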
| https://stackoverflow.com/questions/62125442/ |
RuntimeError: the derivative for 'indices' is not implemented | I am following this online tutorial for coding a DQN, https://github.com/philtabor/Youtube-Code-Repository/blob/master/ReinforcementLearning/DeepQLearning/torch_deep_q_model.py
However, I am running into this RuntimeError that I am unsure how to debug or modify my code to prevent. Thanks!
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-196-00975d66fd2d> in <module>
28 agent.storeTransition(preprocess(obs),action,reward,preprocess(obs_))
29 obs= obs_
---> 30 agent.learn(batch_size)
31 lastAction = action
32 scores.append(score)
<ipython-input-191-f6b163cc3a8a> in learn(self, batch_size)
72 Qtarget = Qpred.clone()
73 print(Qnext[1])
---> 74 Qtarget[:,maxA] = rewards + self.GAMMA*torch.max(Qnext[1])
75 # epsilon decay action
76 if self.steps > 2000:
RuntimeError: the derivative for 'indices' is not implemented
These are my code blocks in my jupyter notebook
class DeepQNetwork(nn.Module):
def __init__(self,Alpha):
super(DeepQNetwork,self).__init__()
self.conv1 = nn.Conv2d(1,32,8,stride=4, padding=1)
self.conv2 = nn.Conv2d(32,64,4,stride=2)
self.conv3 = nn.Conv2d(64,128,3)
self.fc1 = nn.Linear(128* 21* 12,512)
self.fc2 = nn.Linear(512,6)
self.optimizer = optim.RMSprop(self.parameters(), lr = Alpha)
self.loss = nn.MSELoss()
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self,obs):
'''Passing in a sequence of arrays'''
obs = torch.Tensor(obs).to(self.device) # send to the GPU
''' Feed forward the Network Parameters'''
obs = obs.view(-1, 1,200,125)
#print(obs.shape)
obs = F.relu(self.conv1(obs))
#print(obs.shape)
obs = F.relu(self.conv2(obs))
#print(obs.shape)
obs = F.relu(self.conv3(obs))
#print(obs.shape)
obs = obs.view(-1,128* 21* 12)
obs = F.relu(self.fc1(obs))
# 4 Rows and 6 columns
actions = self.fc2(obs)
return actions
This is the Agent Code, and it contains the error causing line of code
class DQNAgent(object):
def __init__(self, gamma, epsilon, alpha, maxMemory,
epsEnd = 0.05, replace =10000, actionSpace = [0,1,2,3,4,5]):
'''
Gamma -> discount factor of valuing current reward over future reward
Epsilon -> for trade off between exploration-exploitation
alpha -> learn rate
maxMemory -> max size of Memory buffer
epsEnd -> smallest value of Exploration
repace -> how often to replace target network
'''
self.GAMMA = gamma
self.EPSILON = epsilon
self.EPS_END = epsEnd
self.actionSpace = actionSpace
self.maxMemory = maxMemory
self.steps = 0
self.learn_step_counter = 0
self.memory = []
self.memCount = 0
self.replace_tgt_count = replace
self.Q_eval = DeepQNetwork(alpha)
self.Q_next = DeepQNetwork(alpha)
def storeTransition(self, state, action, reward, state_):
'''Stores Transition states'''
if self.memCount < self.maxMemory:
self.memory.append([state,action,reward,state_])
else:
self.memory[self.memCount%self.maxMemory] = [state,action,reward,state_]
self.memCount +=1
def chooseAction(self,obs):
'''
Exploration if np.random > epsilon
else take epsilon greedy action
'''
rand = np.random.random()
# Get the value for all actions for the current set of states
# Forward pass the stack of frames to get value of each action given subset of staes in obs
actions = self.Q_eval.forward(obs)
if rand<1-self.EPSILON:
action = torch.argmax(actions[1]).item()
else:
action = np.random.choice(self.actionSpace)
self.steps += 1
return action
def learn(self, batch_size):
self.Q_eval.optimizer.zero_grad()
#0 gradient to do batch optimisation
if self.replace_tgt_count is not None and self.learn_step_counter % self.replace_tgt_count==0:
self.Q_next.load_state_dict(self.Q_eval.state_dict())
# memory subsampling
if self.memCount + batch_size < self.maxMemory:
memStart = int(np.random.choice(range(self.memCount)))
else:
memStart = int(np.random.choice(range(self.maxMemory-batch_size-1)))
miniBatch = self.memory[memStart:memStart+batch_size]
memory = np.array(miniBatch)
#feed forward current state and successor state conv to list as memory is array of numpy objects
Qpred = self.Q_eval.forward(list(memory[:,0][:])).to(self.Q_eval.device)
Qnext = self.Q_next.forward(list(memory[:,3][:])).to(self.Q_eval.device)
maxA = torch.argmax(Qnext,dim = 1).to(self.Q_eval.device)
#calculate rewards
rewards = torch.Tensor(list(memory[:,2])).to(self.Q_eval.device)
# loss for every action except max action to be 0
Qtarget = Qpred.clone()
print(Qnext.shape)
Qtarget[:,maxA] = rewards + self.GAMMA*torch.max(Qnext[1])# PROBLEMATIC LINE
# epsilon decay action
if self.steps > 2000:
if self.EPSILON-1e-4 >self.EPS_END:
self.EPSILON-= 1e-4
else:
self.EPSILON = self.EPS_END
loss = self.Q_eval.loss(Qtarget,Qpred).to(self.Q_eval.device)
loss.backward()
self.Q_eval.optimizer.step()
self.learn_step_counter +=1
env = gym.make("Invader-v0")
agent = DQNAgent(gamma=0.95,epsilon = 1.0,alpha = 0.003, maxMemory = 5000,replace = None)
while agent.memCount < agent.maxMemory:
obs = env.reset()
done = False
lives = 3
while not done:
action = env.action_space.sample()
obs_ , reward, done, info = env.step(action)
if done and info['lives']<lives:
lives = info['lives']
reward -= 200
agent.storeTransition(preprocess(obs),action,reward,preprocess(obs_))
obs= obs_
initialised = True
scores = []
epsHistory = []
numGames = 50
batch_size = 16
for i in range(numGames):
print(f'starting game {i+1}, epsilon = {agent.EPSILON}')
epsHistory.append(agent.EPSILON)
done = False
obs = env.reset()
frames = [np.sum(obs)]
score = 0
lastAction = 0
lives = 3
while not done:
if len(frames) == 4:
action = agent.chooseAction(frames)
frames = []
else:
action = lastAction
obs_, reward, done, info = env.step(action)
score += score-reward
frames.append(preprocess(obs_))
if done and info['lives'] < lives:
reward -=200
agent.storeTransition(preprocess(obs),action,reward,preprocess(obs_))
obs= obs_
agent.learn(batch_size)
lastAction = action
scores.append(score)
print('score: ', score)
x = [i+1 for i in range(numGames)]
| You have to use .detach() for:
Qnext = self.Q_next.forward(list(memory[:,3][:])).detach().to(self.Q_eval.device)
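A brief note on why this helps (my own explanation, not part of the original answer): Q_next plays the role of the target network here, so its output should only supply numbers for the bootstrap target and must not be part of the graph that loss.backward() traverses. Detaching it removes Qnext from the autograd graph entirely, so the subsequent max/indexing on it no longer has to be differentiated. An equivalent alternative (a sketch) is to compute it under torch.no_grad():
with torch.no_grad():
    Qnext = self.Q_next.forward(list(memory[:, 3][:])).to(self.Q_eval.device)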
| https://stackoverflow.com/questions/62126327/ |
High Loss Validation | I use the following model to make predictions for the "intel-mobileodt-cervical-cancer-screening" competition.
The labels are divided into 3 categories (1, 2, 3).
When I run a training pass to make predictions I get the following output:
Model:
resnet50 = pretrainedmodels.__dict__["resnet50"](num_classes=1000, pretrained='imagenet')
resnet50.last_linear=torch.nn.Linear(in_features=2048,out_features=3, bias=True)
#optim
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model=model.to(device)
loss_cross=torch.nn.CrossEntropyLoss().cuda()
optim Adam
DataLoader:
class Kaggle_Cancer(Dataset):
def __init__(self, root_path, transform=None,preprocessing=None,resize=216):
self.path = root_path
self.transform=transform
self.preprocessing=preprocessing
self.resize=resize
def __len__(self):
return len(self.path)
def __getitem__(self, idx):
p=self.path[idx]
image1=cv2.imread(p)
label=p.split("/")[-2].split("_")[-1]
image1=cv2.cvtColor(image1,cv2.COLOR_BGR2RGB)
if self.transform:
image1=self.transform(image=image1)['image']
image1=transforms.ToPILImage()(image1)
image1=transforms.ToTensor()(image1)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225])
image1=normalize(image1)
return image1,int(label)
Train :
def train(epoch_number,model,optim,loss):
model.train()
all_loss=0
correct=0
tqdm_loader=tqdm(training_set)
for index,(img,target) in enumerate(tqdm_loader):
img=img.float().cuda()
target=target.long().cuda()
optim.zero_grad()
out=model(img)
print(out," target ",target)
loss1=loss(out,target)
print(loss1)
loss1.backward()
optim.step()
all_loss+=loss1.item()
avg_loss=all_loss/(index+1)
pred=out.argmax(dim=1,keepdim=True)
correct+=pred.eq(target.view_as(pred)).sum().item()/len(target)
avg_acc=correct/(index+1)
tqdm_loader.set_description("Epoch {} train loss={:4} acc={:4} ".format(epoch_number,round(avg_loss,4),round(avg_acc,4)))
return avg_loss,avg_acc
output:
print(out," target ",target)
,
[ 6.1667e-02, -3.9864e-01, -4.1212e-01],
[-2.3100e-01, -3.7821e-01, -2.8159e-01],
[-2.9442e-01, -5.0409e-01, -3.1046e-01],
[ 1.4866e-01, -2.8496e-01, -1.7643e-01],
[-2.4554e-01, -2.5063e-01, -6.7061e-01],
[-7.1597e-02, -3.5376e-01, -5.7830e-01],
[-2.1527e-01, -4.0284e-01, -4.5993e-01],
[ 1.2050e-02, -5.5684e-01, -1.6044e-01],
[-3.7750e-02, -5.3680e-01, -4.3820e-01],
[-1.1966e-01, -2.5146e-01, -4.9405e-01],
[-2.3308e-01, -6.3452e-01, -3.9821e-01],
[-3.6530e-01, -1.5242e-01, -2.6457e-01],
[-1.8864e-01, -6.0979e-01, -5.5342e-01],
[-2.4755e-01, -4.7011e-01, -2.6204e-01],
[-3.1907e-01, -4.2680e-01, -3.4576e-01],
[-2.1872e-01, -5.3857e-01, -2.9729e-01],
[-7.1475e-02, -4.0458e-01, -3.2042e-01],
[-2.8925e-01, -4.3376e-02, -4.9899e-01],
[-4.8227e-02, -1.8701e-01, -2.2106e-01],
[ 1.7829e-02, -6.5816e-01, -4.0141e-01],
[-2.7450e-01, -3.9498e-01, -2.3189e-01],
[-1.8847e-01, -6.8187e-01, -2.0631e-01],
[-3.5251e-01, -5.3258e-01, -6.3298e-01],
[-6.5548e-02, -2.5093e-01, -5.4346e-01],
[ 2.3848e-01, -3.6152e-01, -1.6380e-01],
[-2.1488e-01, -6.4888e-01, -7.7022e-01],.....
target tensor([2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 3, 2, 3, 2, 2, 2, 2, 3, 2, 1, 3, 3, 2, 2,
3, 2, 3, 2, 3, 1, 3, 3, 1, 2, 3, 2, 1, 1, 3, 1, 1, 2, 3, 2, 2, 2, 2, 2,.....
1, 2, 3, 3, 1, 3, 1, 3, 3, 2, 3, 3, 2, 3, 2, 3], device='cuda:0')
print(loss1)
tensor(1.0870, device='cuda:0', grad_fn=<NllLossBackward>)
Number of epochs = 10/20/30:
same result:
val loss=1.2 acc=0.4 train loss=0.6 acc=0.65
What am I doing wrong?
| When the validation loss is larger than the training loss, it is usually a sign of overfitting. There are a few things you can do:
Add Dropout or Batch Normalisation:
This makes the model more robust.
Make the model deeper:
Add more layers to the model for a better comprehension of the patterns.
Use better optimizers:
Adaptive optimizers such as Adam, Adagrad and RMSprop are usually effective.
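As an example of the first suggestion (a sketch only, assuming the same pretrainedmodels ResNet-50 head as in the question), dropout can be inserted right before the final linear layer:
resnet50.last_linear = torch.nn.Sequential(
    torch.nn.Dropout(p=0.5),
    torch.nn.Linear(in_features=2048, out_features=3, bias=True),
)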
| https://stackoverflow.com/questions/62129665/ |
Duplicate element of tensor while perserving gradient values | So I have a vector that I computed somehow with size k
x = torch.FloatTensor([0.5, 0.3, 0.1, 0.7])
x = x + 2
I want to take its first element x[0] and create a vector of size k-1 filled with the value x[0] so that the gradients that come along with this element are present in the new vector.
I tried using torch.full and filling it up with x[0] but that does not preserve gradients.
Using pytorch 1.4
| You could also use .repeat like this (IMO cleaner and more verbose):
# type deduction is automatic
x = torch.tensor([0.5, 0.3, 0.1, 0.7])
x = x + 2
y = x[0].repeat(50)
Gradient will be preserved (gradient history will be copied).
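To sanity-check that the gradient really flows back (a small sketch, not part of the original answer; note requires_grad=True is set here so there is something to backpropagate into):
x = torch.tensor([0.5, 0.3, 0.1, 0.7], requires_grad=True)
z = x + 2
y = z[0].repeat(3)     # k - 1 copies of the first element
y.sum().backward()
print(x.grad)          # tensor([3., 0., 0., 0.]) -- only x[0] receives gradient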
| https://stackoverflow.com/questions/62129803/ |
Clip or threshold a tensor using condition and zero pad the result in PyTorch | let's say I have a tensor like this
w = [[0.1, 0.7, 0.7, 0.8, 0.3],
[0.3, 0.2, 0.9, 0.1, 0.5],
[0.1, 0.4, 0.8, 0.3, 0.4]]
Now I want to eliminate certain values based on some condition (for example greater than 0.5 or not)
w = [[0.1, 0.3],
[0.3, 0.2, 0.1],
[0.1, 0.4, 0.3, 0.4]]
Then pad it to equal length:
w = [[0.1, 0.3, 0, 0],
[0.3, 0.2, 0.1, 0],
[0.1, 0.4, 0.3, 0.4]]
and this is how I implemented it in pytorch:
w = torch.rand(3, 5)
condition = w <= 0.5
w = [w[i][condition[i]] for i in range(3)]
w = torch.nn.utils.rnn.pad_sequence(w)
But apparently this is going to be extremely slow, mainly because of the list comprehension.
is there any better way to do it?
| Here's one straightforward way using boolean masking, tensor splitting, and then eventually padding the splitted tensors using torch.nn.utils.rnn.pad_sequence(...).
# input tensor to work with
In [213]: w
Out[213]:
tensor([[0.1000, 0.7000, 0.7000, 0.8000, 0.3000],
[0.3000, 0.2000, 0.9000, 0.1000, 0.5000],
[0.1000, 0.4000, 0.8000, 0.3000, 0.4000]])
# values above this should be clipped from the input tensor
In [214]: clip_value = 0.5
# generate a boolean mask that satisfies the condition
In [215]: boolean_mask = (w <= clip_value)
# we need to sum the mask along axis 1 (needed for splitting)
In [216]: summed_mask = boolean_mask.sum(dim=1)
# a sequence of splitted tensors
In [217]: splitted_tensors = torch.split(w[boolean_mask], summed_mask.tolist())
# finally pad them along dimension 1 (or axis 1)
In [219]: torch.nn.utils.rnn.pad_sequence(splitted_tensors, 1)
Out[219]:
tensor([[0.1000, 0.3000, 0.0000, 0.0000],
[0.3000, 0.2000, 0.1000, 0.5000],
[0.1000, 0.4000, 0.3000, 0.4000]])
A short note on efficiency: Using torch.split() is super efficient since it returns the split tensors as views of the original tensor (i.e. no copy is made).
| https://stackoverflow.com/questions/62132312/ |
How does PyTorch's loss.backward() work when "retain_graph=True" is specified? | I'm a newbie with PyTorch and adversarial networks. I've tried to look for an answer on the PyTorch documentation and from previous discussions both in the PyTorch and StackOverflow forums, but I couldn't find anything useful.
I'm trying to train a GAN with a Generator and a Discriminator, but I cannot understand if the whole process is working or not. As far as I'm concerned, I should train the Generator first and, then, updating the Discriminator's weights (similarly as this). My code for updating the weights of both models is:
# computing loss_g and loss_d...
optim_g.zero_grad()
loss_g.backward()
optim_g.step()
optim_d.zero_grad()
loss_d.backward()
optim_d.step()
where loss_g is the generator loss, loss_d is the discriminator loss, optim_g is the optimizer referring to the generator's parameters and optim_d is the discriminator optimizer.
If I run the code like this, I get an error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
So I specify loss_g.backward(retain_graph=True), and here comes my doubt: why should I specify retain_graph=True if there are two networks with two different graphs? Am I getting something wrong?
| Having two different networks doesn't necessarily mean that the computational graph is different. The computational graph only tracks the operations that were performed from the input to the output and it doesn't matter where the operation takes place. In other words, if you use the output of the first model in the second model (e.g. model2(model1(input))), you have the same sequential operations as if they were part of the same model. In fact, that is no different from having different parts of the model, such as multiple convolutions, that you apply one after the other.
The error you get indicates that you are trying to backpropagate from the discriminator through the generator, which would mean that the discriminator's output directly adapts the generator's parameters for the discriminator to be successful. In an adversarial setting that is precisely what you want to avoid, they should be independent from each other. By setting retain_graph=True you incorrectly hide this bug. In nearly all cases retain_graph=True is not the solution and should be avoided.
To resolve that issue, the two models need to be made independent from each other. The crossover between the two models happens when you use the generators output for the discriminator, since it should decide whether that was real or fake. Something along these lines:
fake = generator(noise)
real_prediction = discriminator(real)
# Using the output of the generator, continues the graph.
fake_prediction = discriminator(fake)
Even though fake comes from the generator, as far as the discriminator is concerned, it's merely another input, just like real. Therefore fake should be treated the same as real, where it is not attached to any computational graph. That can easily be done with torch.Tensor.detach, which decouples the tensor from the graph.
fake = generator(noise)
real_prediction = discriminator(real)
# Detach to make it independent of the generator
fake_prediction = discriminator(fake.detach())
That is also done in the code you referenced, from erikqu/EnhanceNet-PyTorch - train.py:
hr_imgs = torch.cat([discriminator(hr), discriminator(generated_hr.detach())], dim=0)
| https://stackoverflow.com/questions/62133737/ |
Convert custom Convolution from PyTorch to Tensorflow (2.2.0) | I am currently attempting to convert a custom convolution from PyTorch to Tensorflow (V. 2.2.0).
The convolution is defined in PyTorch as:
self.quantizer = q = nn.Conv1d(1, 2*nq, kernel_size=1, bias=True)
a = (nq-1) / gap
#1st half = lines passing to (min+x,1) and (min+x+1/a,0) with x = {nq-1..0}*gap/(nq-1)
q.weight.data[:nq] = -a
q.bias.data[:nq] = torch.from_numpy(a*min + np.arange(nq, 0, -1)) # b = 1 + a*(min+x)
#2nd half = lines passing to (min+x,1) and (min+x-1/a,0) with x = {nq-1..0}*gap/(nq-1)
q.weight.data[nq:] = a
q.bias.data[nq:] = torch.from_numpy(np.arange(2-nq, 2, 1) - a*min) # b = 1 - a*(min+x)
# first and last one are special: just horizontal straight line
q.weight.data[0] = q.weight.data[-1] = 0
q.bias.data[0] = q.bias.data[-1] = 1
where nq = 20, min = 0 and max = 1.
My reimplementation looks like this:
my_weight = my_init_weight((1,1,nq*2))
q = tf.nn.convolution(input_q, my_weight)
q = tf.nn.bias_add(q, my_init_bias((40,1), tf.float32))
with these these functions as weight and bias initialization:
def my_init_weight(shape, dtype=None):
weights = np.zeros(shape, dtype=np.float32)
weights[:, :, :nq] = -a
weights[:, :, nq:] = a
weights[:, :, 0] = weights[:, :, -1] = 0
return tf.convert_to_tensor(weights, dtype=tf.float32)
def my_init_bias(shape, dtype=None):
weights = np.zeros(shape[0], dtype=np.float32)
weights[:nq] = a*min + np.arange(nq, 0, -1)
weights[nq:] = np.arange(2-nq, 2, 1) - a*min
weights[0] = weights[-1] = 1
return weights
The input is a matrix with shape 1681, 1, 1600 for PyTorch (as it uses channels first) and 1681, 1600, 1 for Tensorflow (as it uses channels last), and the output is 1681, 40, 1600 or 1681, 1600, 40 respectively. So it should be correct; however, the output of the two convolutions is different.
Input, Output: Tensorflow on a random 100, 100 image:
my_weight = my_init_weight((1,1,nq*2))
my_weight = tf.nn.bias_add(my_weight, my_init_bias((40,1), tf.float32))
q = tf.nn.convolution(test_conv, my_weight)
q_left, q_right = tf.split(q, 2, axis=2)
q = tf.math.minimum(q_left, q_right)
nbs = tf.reduce_sum(q, axis=0)
Input, Output: PyTorch on a random 100, 100 image:
output = q(input_t_t)
output = torch.min(output[:,:nq], output[:,nq:]).clamp(min=0)
nbs = output.sum(dim=-1)
| Okay, I found the solution:
I forgot to add the .clamp(min=0).
Adding q = tf.clip_by_value(q, 0, tf.keras.backend.max(q)) to
my_weight = my_init_weight((1,1,nq*2))
my_weight = tf.nn.bias_add(my_weight, my_init_bias((40,1), tf.float32))
q = tf.nn.convolution(test_conv, my_weight)
q_left, q_right = tf.split(q, 2, axis=2)
q = tf.math.minimum(q_left, q_right)
q = tf.clip_by_value(q, 0, tf.keras.backend.max(q)) <-----------
nbs = tf.reduce_sum(q, axis=0)
fixed the problem.
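A side note (my own addition, not part of the original answer): since clamp(min=0) is simply a ReLU, an equivalent and slightly shorter TensorFlow form would be q = tf.nn.relu(q).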
| https://stackoverflow.com/questions/62134394/ |
How to compile torch 1.5.0 without GPU support? | I want to install pytorch 1.5.0 on AWS lambda. Since the torch library is very large, I need to make it as small as possible to fit within the size limits. My script looks like this so far:
mkdir python
docker run \
--rm \
-v $(pwd):/build \
python:3.8 \
sh -c "
cd /build;
pip3 install torch==1.5.0 -t python/torch --no-cache-dir;
find . -type d -name '__pycache__' | xargs rm -rf;
find . -type d -name 'tests' | xargs rm -rf;
find . -type f -name '*.py[co]' | xargs rm -rf;
";
zip -r9 torch.zip python;
But the resulting zip file is very large (500+ MB). However, one of the largest files in the installation package is libtorch_cuda.so. Removing that file makes the zip file less than half the size. I know that CUDA is a library for GPUs, and since AWS lambda doesn't have a GPU, I don't need this support. But when I remove that file torch will not import correctly.
torch 1.4.0, by comparison, is much smaller because it did not by default include the cuda libraries.
I want torch 1.5.0 without the GPU support.
Is there a way to pip install torch==1.5.0 without gpu support?
| PyTorch also distributes CPU-only versions that you can install with pip. They aren't published to PyPI, though, so you need to get them from PyTorch's own package registry.
You can get the CPU version on PyTorch - Getting Started Locally by selecting CUDA: None.
pip install torch==1.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/62134409/ |
Pytorch / device problem(cpu, gpu) when load state dict for optimizer | Hi, I'm a student who has been studying PyTorch since last summer.
state = torch.load('drive/My Drive/MODEL/4 CBAM classifier55')
model = MyResNet()
model.load_state_dict(state['state_dict'])
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0003,betas=(0.5,0.999))
optimizer.load_state_dict(state['optimizer'])
model.to(device)
I wrote my code like the above.
RuntimeError Traceback (most recent call last)
<ipython-input-26-507493db387a> in <module>()
56 new_loss.backward()
57
---> 58 optimizer.step()
59
60 running_loss += loss.item()
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
/usr/local/lib/python3.6/dist-packages/torch/optim/adam.py in step(self, closure)
97
98 # Decay the first and second moment running average coefficient
---> 99 exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
100 exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
101 if amsgrad:
RuntimeError: expected device cpu but got device cuda:0
And when I run the training code, I get this kind of error. When I comment out 'optimizer.load_state_dict', it works well. How can I solve this problem? Thank you for your answer. :)
| It seems like the state was on CUDA when you saved it and you are now trying to use it on the CPU, or vice versa. To avoid this error, a simple way is to pass the map_location argument to torch.load.
Just pass map_location=<device you want to use> in torch.load and it should work fine. Also, see https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-across-devices
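Applied to the code in the question, that might look roughly like this (a sketch; device is the same torch.device defined in the question):
state = torch.load('drive/My Drive/MODEL/4 CBAM classifier55', map_location=device)
model = MyResNet()
model.load_state_dict(state['state_dict'])
model.to(device)
optimizer = optim.Adam(model.parameters(), lr=0.0003, betas=(0.5, 0.999))
optimizer.load_state_dict(state['optimizer'])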
| https://stackoverflow.com/questions/62136244/ |
Pytorch NN Training issue: Loss of NN does not decrase | I want to classify random Instagram images as "image has a dog" or "image has not a dog".
To train my NN to classify dogs I want to use the Stanford Dogs Dataset, so I have about 20,000 training images of different dogs of different breeds.
But while training my NN the loss does not decrease; I checked that with different learning rates and with or without dropout layers.
Can anyone give tips or does anyone see bugs in the following code?:
import torch
import torchvision
from torchvision import transforms
from PIL import Image
from os import listdir
import os
import random
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
import torch.nn as nn
TRAINDATAPATH = 'C:/Users/.../Desktop/train/'
TESTDATAPATH = 'C:/Users/.../Desktop/#apfel/'
"""normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)"""
normalize = transforms.Normalize(
mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5]
)
transforms = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(256),
transforms.ToTensor(),
normalize])
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
train_data_list = []
target_list = []
train_data = []
batch_size = 1
files = listdir(TRAINDATAPATH)
for i in range(len(listdir(TRAINDATAPATH))):
try:
f = random.choice(files)
files.remove(f)
img = Image.open(TRAINDATAPATH + f)
img_tensor = transforms(img) # (3,256,256)
train_data_list.append(img_tensor)
isObj = 1 if 'obj' in f else 0
isNotObj = 0 if 'obj' in f else 1
target = [isObj, isNotObj]
target_list.append(target)
if len(train_data_list) >= 1:
train_data.append((torch.stack(train_data_list), target_list))
train_data_list = []
target_list = []
print('Loaded batch ', int(len(train_data)/batch_size), 'of ', int(len(listdir(TRAINDATAPATH))/batch_size))
print('Percentage Done: ', 100*int(len(train_data)/batch_size)/int(len(listdir(TRAINDATAPATH))/batch_size), '%')
except Exception:
print("Error occured but ignored")
print(str(Exception))
continue
class Netz(nn.Module):
def __init__(self):
super(Netz, self).__init__()
self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
self.conv2 = nn.Conv2d(6, 12, kernel_size=5)
self.conv3 = nn.Conv2d(12, 18, kernel_size=5)
self.conv4 = nn.Conv2d(18, 24, kernel_size=5)
self.fc1 = nn.Linear(3456, 1000)
self.fc2 = nn.Linear(1000, 2)
def forward(self, x):
x = self.conv1(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = self.conv2(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = self.conv3(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = self.conv4(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = x.view(-1,3456)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return torch.sigmoid(x)
model = Netz()
model.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
optimizer = optim.Adadelta(model.parameters(), lr=10)
def train(epoch):
global model
model.train()
batch_idx = 0
for data, target in train_data:
batch_idx += 1
data = data.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
target = torch.Tensor(target).to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
data = Variable(data)
target = Variable(target)
optimizer.zero_grad()
output = model(data)
criterion = F.binary_cross_entropy
loss = criterion(output, target)
loss.backward()
optimizer.step()
print('Train Epoch: '+ str(epoch) + '\tLoss: ' + str(loss.data.item()) )
def test():
global model
model.eval()
files = listdir(TESTDATAPATH)
f = random.choice(files)
img = Image.open(TESTDATAPATH + f)
img_eval_tensor = transforms(img)
img_eval_tensor.unsqueeze_(0)
data = Variable(img_eval_tensor.to(torch.device("cuda" if torch.cuda.is_available() else "cpu")) )
out = model(data)
string_prediction = str(out.data.max(0, keepdim=True)[1])
print(string_prediction[9:10])
for epoch in range(1,4):
train(epoch)
i = 100
while i > 0:
test()
i -= 1
In the TRAINDATAPATH there are thousands of dog images with filenames like "obj_XXX.jpg" and some other images WITHOUT dogs, whose filenames do not include "obj".
In the TESTDATAPATH are just random images, some with dogs, some without.
The NN classifies them all as "not including dogs" or "0" which is incorrect.
Thanks for every help!
| You are doing a binary classification but you are using two classes:
isObj = 1 if 'obj' in f else 0
isNotObj = 0 if 'obj' in f else 1
target = [isObj, isNotObj]
In the binary case, it should be a single class, where 1 means it is a dog, and 0 means it is not. You are already doing it, but twice. You can remove the isNotObj entirely and only keep the isObj.
You need to adapt the model accordingly, such that it only predicts the isObj, therefore fc2 should only have 1 class as output:
self.fc2 = nn.Linear(1000, 1)
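One related detail (my addition, not from the original answer): with a single output unit, F.binary_cross_entropy expects the target to be a float tensor of the same shape as the prediction, so the target needs to be reshaped accordingly, roughly:
# target holds 0.0 / 1.0 labels for the batch
target = target.float().view(-1, 1)   # shape [batch_size, 1] to match the model output
loss = criterion(output, target)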
In the testing phase you need to make a prediction based on a single class, which can be seen as the probability of being a dog. Then you set a threshold for which you consider the model to be confident enough that it's actually a dog. To make it balanced, the threshold is 0.5, so everything above that is a dog and everything below it is not. This can easily be achieved with torch.round:
# Size: [batch_size, 1]
out = model(data)
predictions = torch.round(out)
# Get rid of the singular dimension
# To get size: [batch_size]
predictions = predictions.squeeze(1)
Besides that, the learning rate of 10 is astronomically high, and a learning rate greater than 1 makes it impossible to converge. A more appropriate learning rate is around 0.01 or 0.001.
And on a side note, since you are new to PyTorch: please don't use Variable; it was deprecated with PyTorch 0.4.0, which was released over 2 years ago, and all of its functionality has been merged into the tensors.
| https://stackoverflow.com/questions/62136980/ |
Is there a function that finds the OR of every row in a boolean matrix in pytorch? | I have a matrix A of size n x m and all the entries are bools. I want all my calculation to be on the GPU and I have matrix A stored as a tensor with each entry being the pytorch's bool datatype. I want an output of a single vector, b, to be a 1 x m tensor that stores the OR of all rows in A.
What I want:
A matrix =
[a1,1, a1,2, a1,3, ... , a1,m]
[a2,1, a2,2, a2,3, ... , a2,m]
...
[an,1, an,2, an,3, ... , an,m]
b = [b1, b2, b3, ..., bm]
s.t. bi = a1,i | a2,i | a3,i | ... | an,i, where | is the OR operator in PyTorch
essentially I want a function that applies a row- or column-wise boolean operation. I know that | / .__OR__ can be used, and PyTorch's OR function takes two boolean tensors as inputs, so I would need to loop over all rows to get my desired b vector.
Since OR is commutative (a|(b|c) = (a|b)|c), I would think PyTorch would have some nice function that speeds it up by doing the | operations in parallel or with some divide and conquer method, instead of doing this with a loop. Any ideas or references for speeding up the process of applying a commutative row/column-wise boolean operation with PyTorch are welcome. Best if all operations are done on the GPU.
| See torch.any and torch.all.
Both take a dim argument and hence you can compute or/and of rows.
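A quick sketch for the OR-of-all-rows case described above:
A = torch.tensor([[True, False, True],
                  [False, False, True]])
b = A.any(dim=0)   # OR down the rows, one result per column
# b is tensor([ True, False,  True]); move A to the GPU first (A.cuda()) to keep everything on the GPU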
| https://stackoverflow.com/questions/62139949/ |
How to use pretrained weights of a model for initializing the weights in next iteration? | I have a model architecture. I have saved the entire model using torch.save() for some n number of iterations. I want to run another iteration of my code by using the pre-trained weights of the model I saved previously.
Edit: I want the weight initialization for the new iteration to be done from the weights of the pretrained model.
Edit 2: Just to add, I don't plan to resume training. I intend to save the model and use it for a separate training run with the same parameters. Think of it like using a saved model with weights etc. for a larger run with more samples (i.e. a completely new training job).
Right now, I do something like:
# default_lr = 5
# default_weight_decay = 0.001
# model_io = the pretrained model
model = torch.load(model_io)
optim = torch.optim.Adam(model.parameters(),lr=default_lr, weight_decay=default_weight_decay)
loss_new = BCELoss()
epochs = default_epoch
.
.
training_loop():
....
outputs = model(input)
....
.
#similarly for test loop
Am I missing something? I have to run very long epochs on a huge number of samples, so I cannot afford to wait to see the results and then figure things out.
Thank you!
| From the code that you have posted, I see that you are only loading the previous model parameters in order to restart your training from where you left it off. This is not sufficient to restart your training correctly. Along with your model parameters (weights), you also need to save and load your optimizer state, especially when your choice of optimizer is Adam which has velocity parameters for all your weights that help in decaying the learning rate.
In order to smoothly restart training, I would do the following:
# For saving your model
state = {
'model': model.state_dict(),
'optimizer': optimizer.state_dict()
}
model_save_path = "Enter/your/model/path/here/model_name.pth"
torch.save(state, model_save_path)
# ------------------------------------------
# For loading your model
state = torch.load(model_save_path)
model = MyNetwork()
model.load_state_dict(state['model'])
optim = torch.optim.Adam(model.parameters(),lr=default_lr, weight_decay=default_weight_decay)
optim.load_state_dict(state['optimizer'])
Besides these, you may also want to save your learning rate if you are using a learning rate decay strategy, your best validation accuracy so far which you may want for checkpointing purposes, and any other changeable parameter which might affect your training. But in most of the cases, saving and loading just the model weights and optimizer state should be sufficient.
EDIT: You may also want to look at this following answer which explains in detail how you should save your model in different scenarios.
| https://stackoverflow.com/questions/62143332/ |
How to convert a tensor of booleans to ints in PyTorch? | Suppose, we have a tensor
t = torch.tensor([True, False, True, False])
How do we convert it to an integer tensor with values [1, 0, 1, 0]?
| The solution is just a single line of code.
To convert a tensor t with values [True, False, True, False] to an integer tensor, just do the following.
t = torch.tensor([True, False, True, False])
t_integer = t.long()
print(t_integer)
[1, 0, 1, 0]
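Equivalent alternatives (my own note, not part of the original answer), in case a different integer width is wanted: t.int() gives int32, and t.to(torch.int64) is the same as .long().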
| https://stackoverflow.com/questions/62150659/ |
pytorch to Onnx(OCR model) | I am trying to convert the PyTorch model in the given repo https://github.com/clovaai/deep-text-recognition-benchmark
to ONNX.
I am facing an issue while doing so.
Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible
Link to the git issue https://github.com/clovaai/deep-text-recognition-benchmark/issues/76
Any suggestion?
Thanks.
| adaptive_avg_pool2d is not supported in my case, and nn.AdaptiveAvgPool2d((None,1)) also has an issue.
| https://stackoverflow.com/questions/62152195/ |
PyTorch non-deterministic dropout | I'm trying to make the output of a BLSTM deterministic. After investigation it appeared that my dropout layer creates non-deterministic dropout masks, so I was researching how to fix the random seed in PyTorch. I found this page and other suggestions; though I put everything in my code, it did not help. Here is my code:
import sys
import random
import datetime as dt
import numpy as np
import torch
torch.manual_seed(42)
torch.cuda.manual_seed(42)
np.random.seed(42)
random.seed(42)
torch.backends.cudnn.deterministic = True
ex = torch.ones(10)
torch.nn.functional.dropout(ex, p=0.5, training=True)
# Out[29]: tensor([0., 0., 2., 0., 0., 0., 0., 0., 2., 2.])
torch.nn.functional.dropout(ex, p=0.5, training=True)
# Out[30]: tensor([0., 2., 0., 2., 2., 0., 0., 2., 2., 2.])
Please help me get deterministic output from dropout for the same input
| Every time you want the same output you need to reset the random seed again, so:
>>> import torch
>>> torch.manual_seed(42)
<torch._C.Generator object at 0x127cd9170>
>>> ex = torch.ones(10)
>>> torch.nn.functional.dropout(ex, p=0.5, training=True)
tensor([0., 0., 2., 2., 2., 2., 2., 0., 2., 0.])
>>> torch.manual_seed(42)
<torch._C.Generator object at 0x127cd9170>
>>> torch.nn.functional.dropout(ex, p=0.5, training=True)
tensor([0., 0., 2., 2., 2., 2., 2., 0., 2., 0.])
You may want to keep resetting all those random seeds in general, though; you can run into a lot of different sources of randomness when building neural nets in Python.
| https://stackoverflow.com/questions/62152674/ |
How to efficiently calculate pairwise intersection of nonzero indices in a scipy.csr sparse matrix? | I have a scipy.sparse.csr matrix X which is n x p. For each row in X I would like to compute the intersection of the non zero element indices with each row in X and store them in a new tensor or maybe even a dictionary. For example, X is:
X = [
[0., 1.5, 4.7],
[4., 0., 0.],
[0., 0., 2.6]
]
I would like the output to be
intersect =
[
[[1,2], [], [2]],
[[], [0], []],
[[2], [], [2]]
]
intersect[i,j] is an ndarray representing the intersection of the indices of nonzero elements of ith and jth rows of X i.e X[i], X[j].
Currently the way I am doing this is by looping and I would like to vectorize this as it would be much faster and the computations are done in parallel.
# current code
n = X.shape[0]
intersection_dict = {}
for i in range(n):
for j in range(n):
indices = np.intersect1d(X[i].indices, X[j].indices)
intersection_dict[(i,j)] = indices
My n is pretty large so looping over n^2 is very poor. I am just having trouble figuring out a way to vectorize this operation. Does anybody have any ideas on how to tackle this?
EDIT:
It was made apparent that I should explain the problem I am trying to solve, so here it is.
I am solving an optimization problem and have an equation
W = X diag(theta) X'. I want to find W in a quick manner as I update the entries of theta till convergence. Further I am updating parameters using pytorch where sparse operations are not as extensive as in scipy.
where:
X : n x p sparse data matrix (n documents, p features)
theta : p x 1 parameter vector I want to learn and will be updating
X' : p x n transpose of sparse data matrix
note p >> n
I had in mind two methods of solving this quickly
Cache sparse outer product of (see More efficient matrix multiplication with diagonal matrix)
W_ij = X_i * theta * X_j (element-wise product of row i of X, theta, and row j of X). Since X_i and X_j are sparse, I was thinking that if I take the intersection of the nonzero indices, then I can do a simple dense element-wise product (sparse element-wise product is not supported in PyTorch) of X_i[intersection indices] * theta[intersection indices] * X_j[intersection indices].
I want to vectorize as much of this computation as possible rather than loop as my n is typically in the thousands and p is 11 million.
I am attempting method 2 over method 1 do to the lack of sparse support in Pytorch. Mainly when updating the entries of theta I would not like to do sparse-dense or sparse-sparse operations. I want to do dense-dense operations.
| The optimization you're looking at requires storing p different n x n matrices. If you do want to try it, I'd probably use all the functionality built into sparse matrices in scipy's C extensions.
import numpy as np
from scipy import sparse
arr = sparse.random(100,10000, format="csr", density=0.01)
xxt = arr @ arr.T
p_comps = [arr[:, i] @ arr.T[i, :] for i in range(arr.shape[1])]
def calc_weights(xxt, thetas, p_comps):
    xxt = xxt.copy()
    xxt.data = np.zeros(xxt.data.shape, dtype=xxt.dtype)
    for i, t in enumerate(thetas):
        xxt += (p_comps[i] * t)
    return xxt
W = calc_weights(xxt, np.ones(10000), p_comps)
>>>(xxt.A == W.A).all()
True
It's really unlikely that this is going to work well implemented in Python. You may have better luck doing this in C, or writing something with nested loops that operates on elements and is amenable to being JIT compiled with Numba.
| https://stackoverflow.com/questions/62155922/ |
How to solve size mismatch error in pytorch? | I am trying to create a logistic model by using CIFAR10 data in PyTorch. After running the model for evaluation I run into an error :
RuntimeError: size mismatch, m1: [750 x 4096], m2: [1024 x 10] at C:\w\1\s\tmp_conda_3.7_100118\conda\conda-bld\pytorch_1579082551706\work\aten\src\TH/generic/THTensorMath.cpp:136
It seems like input_size is creating the problem; I don't know, as I am new to this. Please let me know what changes I should make in order to overcome this error.
These are the hyperparameters:
batch_size = 100
learning_rate = 0.001
# Other constants
input_size = 4*4*64
num_classes = 10
This is the cell that downloads and splits the dataset into train, validation and test.
transform = torchvision.transforms.Compose(
[torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))])
testset = torchvision.datasets.CIFAR10(root='D:\PyTorch\cifar-10-python', train=False,download=False, transform=transform)
trainvalset = torchvision.datasets.CIFAR10(root='D:\PyTorch\cifar-10-python', train=True,download=False, transform=transform)
trainset, valset = torch.utils.data.random_split(trainvalset, [45000, 5000]) # 10% for validation
train_loader = torch.utils.data.DataLoader(trainset, batch_size=50, shuffle=True)
test_loader = torch.utils.data.DataLoader(testset, batch_size=1000, shuffle=False)
val_loader = torch.utils.data.DataLoader(valset, batch_size=1000, shuffle=False)
This is the architecture of my model.
class CifarModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, xb):
xb = xb.view(-1, 64*8*8)
#xb = xb.reshape(-1, 784)
print(xb.shape)
out = self.linear(xb)
return out
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss.detach(), 'val_acc': acc.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc']))
model = CifarModel()
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result)
history.append(result)
return history
evaluate(model, val_loader)
| Here you are specifying that the number of output classes should be 10:
num_classes = 10
Your forward function does not reflect this:
xb = xb.view(-1, 64*8*8) # you get 750x4096
out = self.linear(xb) # here an input of
# input_size to linear layer = 4*4*64 # 1024
# num_classes = 10
Modify it like this:
xb = xb.view(-1, 64*4*4) # you get 750x1024
out = self.linear(xb) # M1 750x1024 M2 1024x10:
# input_size = 4*4*64 # 1024
# num_classes = 10
| https://stackoverflow.com/questions/62156124/ |
How to solve error: no match between expected input batch size and target batch size in PyTorch? | I attempting to create a logistic model on CIFAR10 dataset by PyTorch. However I am getting an error:
ValueError: Expected input batch_size (900) to match target batch_size (300).
What I think is happening is that 3*100 is 300, so maybe the 3-channel axis of the RGB image is causing that, but I can't figure out how to solve it.
These are my hyperparameters.
batch_size = 100
learning_rate = 0.001
# Other constants
input_size = 32*32
num_classes = 10
Here I divide my data into train, validation and test data.
transform_train = transforms.Compose([transforms.Resize((32,32)),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(10),
transforms.RandomAffine(0, shear=10, scale=(0.8,1.2)),
transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
transform = transforms.Compose([transforms.Resize((32,32)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
training_dataset = CIFAR10(root='D:\PyTorch\cifar-10-python', train=True, download=True, transform=transform_train)
train_ds, val_ds = random_split(training_dataset, [40000, 10000])
test_ds = CIFAR10(root='D:\PyTorch\cifar-10-python', train=False, download=True, transform=transform)
train_loader = DataLoader(train_ds, batch_size=100, shuffle=True)
val_loader = DataLoader(val_ds, batch_size = 100, shuffle = False)
test_loader = DataLoader(test_ds, batch_size = 100, shuffle=False)
This is the model.
class CifarModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, xb):
xb = xb.view(-1, 32*32)
#xb = xb.reshape(-1, 784)
print(xb.shape)
out = self.linear(xb)
return out
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss.detach(), 'val_acc': acc.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc']))
model = CifarModel()
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result)
history.append(result)
return history
evaluate(model, val_loader)
Here's the error I encounter when I run evaluate function:
torch.Size([900, 1024])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-3621eab8de1a> in <module>
21 history.append(result)
22 return history
---> 23 evaluate(model, val_loader)
<ipython-input-23-3621eab8de1a> in evaluate(model, val_loader)
3 return torch.tensor(torch.sum(preds == labels).item() / len(preds))
4 def evaluate(model, val_loader):
----> 5 outputs = [model.validation_step(batch) for batch in val_loader]
6 return model.validation_epoch_end(outputs)
7
<ipython-input-23-3621eab8de1a> in <listcomp>(.0)
3 return torch.tensor(torch.sum(preds == labels).item() / len(preds))
4 def evaluate(model, val_loader):
----> 5 outputs = [model.validation_step(batch) for batch in val_loader]
6 return model.validation_epoch_end(outputs)
7
<ipython-input-22-c9e17d21eaff> in validation_step(self, batch)
19 images, labels = batch
20 out = self(images) # Generate predictions
---> 21 loss = F.cross_entropy(out, labels) # Calculate loss
22 acc = accuracy(out, labels) # Calculate accuracy
23 return {'val_loss': loss.detach(), 'val_acc': acc.detach()}
~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2019 if size_average is not None or reduce is not None:
2020 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2022
2023
~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
1834 if input.size(0) != target.size(0):
1835 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 1836 .format(input.size(0), target.size(0)))
1837 if dim == 2:
1838 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (900) to match target batch_size (300).
| One problem that I am seeing is this line:
xb = xb.view(-1, 32*32)
Here you are saying that the input image has only one channel, in other words grayscale. Change it to reflect the number of channels (RGB):
xb = xb.view(-1, 32*32*3)
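A related detail (my addition, not in the original answer): the linear layer has to match the new flattened size as well, otherwise the mismatch just moves into self.linear:
input_size = 32*32*3   # 3072 features per RGB image
self.linear = nn.Linear(input_size, num_classes)
xb = xb.view(-1, 32*32*3)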
| https://stackoverflow.com/questions/62157890/ |
How to extract output of torch model in c++? | I have a trained Keras model that I converted using MMdnn. Then I try to use it in C++ code:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <torch.h>
int main()
{
cv::Mat image;
image= cv::imread("test_img.png", cv::IMREAD_GRAYSCALE); // Read the file
try
{
torch::jit::script::Module module;
module = torch::jit::load("my_model.pth");
torch::IntArrayRef input_dim = std::vector<int64_t>({ 1, 2, 256, 256});
cv::Mat input_img;
image.convertTo(input_img, CV_32FC3, 1 / 255.0);
torch::Tensor x = torch::from_blob(input_img.data, { 1, 2, 256, 256 }, torch::kFloat);
torch::NoGradGuard no_grad;
auto output = module.forward({ x });
float* data = static_cast<float*>(output.toTensor().data_ptr());
cv::Mat output_img = cv::Mat(256, 256, CV_32FC3, data);
cv::imwrite("output_img.png", output_img);
}
catch (std::exception &ex)
{
std::cout << "exception! " << ex.what() << std::endl;
}
return 0;
}
This code throws an exception:
exception! isTensor() INTERNAL ASSERT FAILED at
E:\20B\pytorch\pytorch\aten\src\ATen/core/ivalue_inl.h:112, please
report a bug to PyTorch. Expected Tensor but got Tuple (toTensor at
E:\20B\pytorch\pytorch\aten\src\ATen/core/ivalue_inl.h:112) (no
backtrace available)
This was thrown at the line float* data = static_cast<float*>(output.toTensor().data_ptr()); when the function toTensor() was called. If I use toTuple() instead of toTensor(), then the result doesn't have the function data_ptr(), but I need it for extracting the data (and putting it into an OpenCV image).
How do I extract the image from the model output?
| In this case the output of the model is a tuple of 2 tensors (images). We can extract them in the following way:
torch::Tensor t0 = output.toTuple()->elements()[0].toTensor();
torch::Tensor t1 = output.toTuple()->elements()[1].toTensor();
The variables t0 and t1 contain the output tensors of the model.
| https://stackoverflow.com/questions/62158785/ |
Pytorch: Recover network with customized VGG model that was saved improperly | I am currently doing work with customizing the forward method for models. I was using some tutorial code that ran VGG. I did a few runs with the baseline model and it seemed to work fine. Afterwards, I replaced the forward method for the VGG using:
net.forward = types.MethodType(forward_vgg_new, net)
Unfortunately, the way that the tutorial code saves the models is:
state = {
'net':net,
'acc':acc,
'epoch':epoch,
}
...
torch.save(state, ...)
While This worked for the original tutorial code, loading no longer works for my custom models as I get:
AttributeError: 'VGG' object has no attribute 'forward_vgg_new'
I have since read from the documentation that it is better for me to save the model's state_dict:
state = {
'net':net.state_dict(),
'acc':acc,
'epoch':epoch,
}
...
torch.save(state, ...)
While I will change the code for future runs, I was wondering if it was possible to salvage the models I have already trained. I naively already tried to import the VGG class and add my forward_vgg_new method to it:
setattr(VGG, 'forward_vgg_new', forward_vgg_new)
before calling torch.load, but it doesn't work.
| To solve the problem, I went directly into the VGG library and temporarily added my function so that I could load the saved models and save only their state dicts. I reverted the changes to the VGG library after I recovered the saves. Not the most graceful way of fixing the problem, but it worked.
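A rough sketch of that recovery flow (hypothetical paths, assuming the checkpoint layout from the question):
# with forward_vgg_new temporarily present in the VGG library, loading works again
state = torch.load('checkpoint.pth')
net = state['net']
torch.save({'net': net.state_dict(), 'acc': state['acc'], 'epoch': state['epoch']},
           'checkpoint_state_dict.pth')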
| https://stackoverflow.com/questions/62159437/ |
Pytorch Multiclass Logistic Regression Type Errors | I'm new to ML and even more naive with Pytorch. Here's the problem. (I've skipped certain parts like the random_split() which seem to work just fine)
I have to predict wine quality (red), which in the dataset is the last column, with 6 classes.
That's what my dataset looks like
The link to the dataset (winequality-red.csv)
features = df.drop(['quality'], axis = 1)
targets = df.iloc[:, -1] # theres 6 classes
dataset = TensorDataset(torch.Tensor(np.array(features)).float(), torch.Tensor(targets).float())
# here's where I think the error might be, but I might be wrong
batch_size = 8
# Dataloader
train_loader = DataLoader(train_ds, batch_size, shuffle = True)
val_loader = DataLoader(val_ds, batch_size)
test_ds = DataLoader(test_ds, batch_size)
input_size = len(df.columns) - 1
output_size = 6
threshold = .5
class WineModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, output_size)
def forward(self, xb):
out = self.linear(xb)
return out
model = WineModel()
n_iters = 2000
num_epochs = n_iters / (len(train_ds) / batch_size)
num_epochs = int(num_epochs)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
# the part below returns the error on running
iter = 0
for epoch in range(num_epochs):
for i, (x, y) in enumerate(train_loader):
optimizer.zero_grad()
outputs = model(x)
loss = criterion(outputs, y)
loss.backward()
optimizer.step()
RuntimeError: expected scalar type Long but found Float
Hopefully that is sufficient info
| The targets for nn.CrossEntropyLoss are given as the class indices, which are required to be integers, to be precise they need to be of type torch.long, which is equivalent to torch.int64.
You converted the targets to floats, but you should convert them to longs:
dataset = TensorDataset(torch.Tensor(np.array(features)).float(), torch.Tensor(targets).long())
Since the targets are the indices of the classes, they must be in range [0, num_classes - 1]. As you have 6 classes that would be in range [0, 5]. Having a quick look at your data, the quality uses values in range [3, 8]. Even though you have 6 classes, the values cannot be used directly as the classes. If you list the classes as classes = [3, 4, 5, 6, 7, 8], you can see that the first class is 3, classes[0] == 3, up to the last class being classes[5] == 8.
You need to replace the class values with the indices, just like you would for named classes (e.g. if you had the classes dog and cat, dog would be 0 and cat would be 1), but you can avoid having to look them up, since the values are simply shifted by 3, i.e. index = classes[index] - 3. Therefore you can subtract 3 from the entire target tensor:
torch.Tensor(targets).long() - 3
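Putting both changes together (a sketch, reusing the same construction as above):
dataset = TensorDataset(torch.Tensor(np.array(features)).float(),
                        torch.Tensor(targets).long() - 3)   # classes 3..8 -> indices 0..5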
| https://stackoverflow.com/questions/62159589/ |
Cross Entropy Calculation in PyTorch tutorial | I'm reading the PyTorch tutorial for a multi-class classification problem, and I find the behavior of the loss calculation in PyTorch quite confusing. Can you help me with this?
The model used for classification goes like this:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
The training process goes as follows:
optimizer.zero_grad()
outputs = net(inputs)
loss = nn.CrossEntropyLoss(outputs, labels)
loss.backward()
optimizer.step()
My question is: What's the exact behavior of Loss calculation in Pytorch here? During each iteration, the input of nn.CrossEntropyLoss() has two parts:
The output of the model, which is a 10 by 1 tensor with different values in it. This tensor has not been normalized into probabilities.
The label as a scalar, like 1 or 2 or 3.
As far as I know, the calculation of cross-entropy is usually done between two tensors like:
Target as [0,0,0,1], where 1 is the right class
Output tensor as [0.1,0.2,0.3,0.4], where the sum is 1.
So based on this assumption, nn.CrossEntropyLoss() here needs to achieve:
First, normalize the output tensor into a probability distribution.
Encode the label as a one-hot vector, e.g. class 2 out of 5 classes becomes [0,1,0,0,0]. The length must be the same as the output tensor.
Then calculate the loss.
May I ask, is this what nn.CrossEntropyLoss() does? Or do we need to one-hot encode the true label before we input it into the model?
Thank you a lot for your time in advance!
| nn.CrossEntropyLoss first applies log-softmax (log(Softmax(x))) to get log probabilities and then calculates the negative log likelihood, as mentioned in the documentation:
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
When using one-hot encoded targets, the cross-entropy can be calculated as follows:
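H(y, ŷ) = - Σ_i y_i · log(ŷ_i)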
where y is the one-hot encoded target vector and ŷ is the vector of probabilities for each class. To get the probabilities you would apply softmax to the output of the model. The logarithm of the probabilities is used, and PyTorch just combines the logarithm and the softmax into one operation nn.LogSoftmax(), for numerical stability.
Since all of the values except one in the one-hot vector are zero, only a single term of the sum will be non-zero. Therefore, given the actual class, it can be simplified to:
H(y, ŷ) = -log(ŷ_class)
As long as you know the class index, the loss can be calculated directly, making it more efficient than using a one-hot encoded target, hence nn.CrossEntropyLoss expects the class indices.
The full calculation is given in the documentation of nn.CrossEntropyLoss:
The loss can be described as:
loss(x, class) = -log( exp(x[class]) / ∑_j exp(x[j]) ) = -x[class] + log( ∑_j exp(x[j]) )
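As a quick numerical sanity check (a small sketch of my own, not part of the documentation), the decomposition into nn.LogSoftmax and nn.NLLLoss can be verified directly:
import torch
import torch.nn as nn

logits = torch.randn(2, 10)            # raw, unnormalized scores for 2 samples and 10 classes
labels = torch.tensor([3, 7])          # class indices, no one-hot encoding needed
ce = nn.CrossEntropyLoss()(logits, labels)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), labels)
print(torch.allclose(ce, nll))         # True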
| https://stackoverflow.com/questions/62161194/ |
PyTorch: The number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1) | I'm trying to convert a CPU model to GPU using Pytorch, but I'm running into issues. I'm running this on Colab and I'm sure that Pytorch detects a GPU. This is a deep Q network (RL).
I declare my network as: Q = Q_Network(input_size, hidden_size, output_size).to(device)
I ran into an issue when I tried to pass arguments through the network (It expected type cuda but got type cpu) so I add .to(device):
batch = np.array(shuffled_memory[i:i+batch_size])
b_pobs = np.array(batch[:, 0].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_pact = np.array(batch[:, 1].tolist(), dtype=np.int32)
b_reward = np.array(batch[:, 2].tolist(), dtype=np.int32)
b_obs = np.array(batch[:, 3].tolist(), dtype=np.float32).reshape(batch_size, -1)
b_done = np.array(batch[:, 4].tolist(), dtype=np.bool)
q = Q(torch.from_numpy(b_pobs).to(device))
q_ = Q_ast(torch.from_numpy(b_obs).to(device))
maxq = torch.max(q_.data,axis=1)
target = copy.deepcopy(q.data)
for j in range(batch_size):
print(target[j, b_pact[j]].shape) # torch.Size([])
target[j, b_pact[j]] = b_reward[j]+gamma*maxq[j]*(not b_done[j]) #I run into issues here
Here is the error:
RuntimeError: expand(torch.cuda.FloatTensor{[50]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
| target[j, b_pact[j]] is a single element of the tensor (a scalar, hence size of torch.Size([])). If you want to assign anything to it, the right hand side can only be a scalar. That is not the case, as one of the terms is a tensor with 1 dimension (a vector), namely your maxq[j].
When specifying a dimension dim (axis is treated as a synonym) to torch.max, it will return a named tuple of (values, indices), where values contains the maximum values and indices the location of each of the maximum values (equivalent to argmax).
maxq[j] is not indexing into the maximum values, but rather the tuple of (values, indices). If you only want the values you can use one of the following to get the values out of the tuple (all of them are equivalent, you can use whichever you prefer):
# Destructure/unpack and ignore the indices
maxq, _ = torch.max(q_.data,axis=1)
# Access first element of the tuple
maxq = torch.max(q_.data,axis=1)[0]
# Access `values` of the named tuple
maxq = torch.max(q_.data,axis=1).values
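A tiny illustration of the named tuple (the values here are just an example):
import torch

q_ = torch.tensor([[0.1, 0.9], [0.7, 0.3]])
result = torch.max(q_, dim=1)     # named tuple (values, indices)
print(result.values)              # tensor([0.9000, 0.7000])
print(result.indices)             # tensor([1, 0])
maxq = result.values              # maxq[j] is now a scalar, as intended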
| https://stackoverflow.com/questions/62163194/ |
padding='same' conversion to PyTorch padding=# | I'm trying to convert the following Keras model code to pytorch, but am having problems dealing with padding='same'.
model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=img_size))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
Which produces the following summary:
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 30, 30, 64) 1792
_________________________________________________________________
batch_normalization_1 (Batch (None, 30, 30, 64) 120
_________________________________________________________________
activation_1 (Activation) (None, 30, 30, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 30, 30, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 30, 30, 64) 36928
_________________________________________________________________
batch_normalization_2 (Batch (None, 30, 30, 64) 120
_________________________________________________________________
activation_2 (Activation) (None, 30, 30, 64) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 15, 15, 64) 0
=================================================================
Total params: 38,960
Trainable params: 38,840
Non-trainable params: 120
Right now, I would write:
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3,
bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Dropout(0.3),
nn.Conv2d(64, 64, kernel_size=3, padding = ?
bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, padding = ?),
)
Where padding should have numerical value. I was wondering if there is an easier way to calculate this since we're using padding='same'.
Also, the next line of the Keras model looks like:
model.add(Conv2D(128, (3, 3), padding='same'))
So I really need to brush up on how to calculate padding, especially after stride too. From a rough eye only, is the padding 2?
| W:input volume size
F:kernel size
S:stride
P:amount of padding
size of output volume = (W-F+2P)/S+1
e.g.
input:7x7, kernel:3x3, stride:1, pad:0
output size = (7-3+2*0)/1+1 = 5 =>5x5
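Applying that formula to your layers: for a 3x3 kernel with stride 1, padding='same' corresponds to P = (3-1)/2 = 1. Below is a rough sketch of the converted block (my own adaptation, so treat the pooling line as an approximation that only matches 'same' exactly on even input sizes):
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, bias=False),              # first Keras Conv2D had no padding ('valid')
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Dropout(0.3),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False),  # padding=1 reproduces padding='same'
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),                    # 30x30 -> 15x15, same as the Keras pooling
)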
| https://stackoverflow.com/questions/62166719/ |
pytorch affine_grid: what is the theta input? | When trying to use torch.nn.functional.affine_grid, it requires a theta affine matrix of size (N x 3 x 4) according to the documentation. I thought a general affine matrix is (N x 4 x 4). What is the supposed affine matrix format in pytorch?
An example of 3D rotation affine input would be ideal. Appreciate your help.
| The dimensions you mention are applicable for the case of 3D inputs, that is, when you wish to apply 3D geometric transforms to an input tensor x of shape bxcxdxhxw.
A transformation to points in 3D (represented as 4-vector in homogeneous coordinates as (x, y, z, 1)) should be, in the general case, a 4x4 matrix as you noted.
However, since we restrict ourselves to homogeneous coordinates, i.e., the fourth coordinate must be 1, the 4th row of the matrix must be (0, 0, 0, 1) (see this).
Therefore, there's no need to explicitly code this last row.
To conclude, a 3D transformation composed of a 3x3 rotation R and 3d translation t is simply the 3x4 matrix:
theta = [R t]
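A small sketch of how such a theta could be built and used with F.affine_grid / F.grid_sample (the rotation angle and tensor sizes are arbitrary examples):
import math
import torch
import torch.nn.functional as F

N, C, D, H, W = 1, 1, 8, 8, 8
x = torch.rand(N, C, D, H, W)

a = math.radians(30)                                   # rotate 30 degrees about the z-axis
R = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                  [math.sin(a),  math.cos(a), 0.0],
                  [0.0,          0.0,         1.0]])
t = torch.zeros(3, 1)                                  # no translation
theta = torch.cat([R, t], dim=1).unsqueeze(0)          # shape (N, 3, 4)

grid = F.affine_grid(theta, size=(N, C, D, H, W), align_corners=False)
out = F.grid_sample(x, grid, align_corners=False)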
| https://stackoverflow.com/questions/62167113/ |
How to balance the generator and the discriminator performances in a GAN? | It's the first time I'm working with GANs and I am facing an issue regarding the Discriminator repeatedly outperforming the Generator. I am trying to reproduce the PA model from this article and I'm looking at this slightly different implementation to help me out.
I have read quite a lot of papers on how GANs work and also followed some tutorials to understand them better. Moreover, I've read articles on how to overcome the major instabilities, but I can't find a way to overcome this behavior.
In my environment, I'm using PyTorch and BCELoss(). Following the DCGAN PyTorch tutorial, I'm using the following training loop:
criterion = nn.BCELoss()
train_d = False
# Discriminator true
optim_d.zero_grad()
disc_train_real = target.to(device)
batch_size = disc_train_real.size(0)
label = torch.full((batch_size,), 1, device=device).cuda()
output_d = discriminator(disc_train_real).view(-1)
loss_d_real = criterion(output_d, label).cuda()
if lossT:
loss_d_real *= 2
if loss_d_real.item() > 0.3:
loss_d_real.backward()
train_d = True
D_x = output_d.mean().item()
# Discriminator false
output_g = generator(image)
output_d = discriminator(output_g.detach()).view(-1)
label.fill_(0)
loss_d_fake = criterion(output_d, label).cuda()
D_G_z1 = output_d.mean().item()
if lossT:
loss_d_fake *= 2
loss_d = loss_d_real + loss_d_fake
if loss_d_fake.item() > 0.3:
loss_d_fake.backward()
train_d = True
if train_d:
optim_d.step()
# Generator
label.fill_(1)
output_d = discriminator(output_g).view(-1)
loss_g = criterion(output_d, label).cuda()
D_G_z2 = output_d.mean().item()
if lossT:
loss_g *= 2
loss_g.backward()
optim_g.step()
and, after a period of settlement, everything seems to work fine:
Epoch 1/5 - Step: 1900/9338 Loss G: 3.057388 Loss D: 0.214545 D(x): 0.940985 D(G(z)): 0.114064 / 0.114064
Time for the last step: 51.55 s Epoch ETA: 01:04:13
Epoch 1/5 - Step: 2000/9338 Loss G: 2.984724 Loss D: 0.222931 D(x): 0.879338 D(G(z)): 0.159163 / 0.159163
Time for the last step: 52.68 s Epoch ETA: 01:03:24
Epoch 1/5 - Step: 2100/9338 Loss G: 2.824713 Loss D: 0.241953 D(x): 0.905837 D(G(z)): 0.110231 / 0.110231
Time for the last step: 50.91 s Epoch ETA: 01:02:29
Epoch 1/5 - Step: 2200/9338 Loss G: 2.807455 Loss D: 0.252808 D(x): 0.908131 D(G(z)): 0.218515 / 0.218515
Time for the last step: 51.72 s Epoch ETA: 01:01:37
Epoch 1/5 - Step: 2300/9338 Loss G: 2.470529 Loss D: 0.569696 D(x): 0.620966 D(G(z)): 0.512615 / 0.350175
Time for the last step: 51.96 s Epoch ETA: 01:00:46
Epoch 1/5 - Step: 2400/9338 Loss G: 2.148863 Loss D: 1.071563 D(x): 0.809529 D(G(z)): 0.114487 / 0.114487
Time for the last step: 51.59 s Epoch ETA: 00:59:53
Epoch 1/5 - Step: 2500/9338 Loss G: 2.016863 Loss D: 0.904711 D(x): 0.621433 D(G(z)): 0.440721 / 0.435932
Time for the last step: 52.03 s Epoch ETA: 00:59:02
Epoch 1/5 - Step: 2600/9338 Loss G: 2.495639 Loss D: 0.949308 D(x): 0.671085 D(G(z)): 0.557924 / 0.420826
Time for the last step: 52.66 s Epoch ETA: 00:58:12
Epoch 1/5 - Step: 2700/9338 Loss G: 2.519842 Loss D: 0.798667 D(x): 0.775738 D(G(z)): 0.246357 / 0.265839
Time for the last step: 51.20 s Epoch ETA: 00:57:19
Epoch 1/5 - Step: 2800/9338 Loss G: 2.545630 Loss D: 0.756449 D(x): 0.895455 D(G(z)): 0.403628 / 0.301851
Time for the last step: 51.88 s Epoch ETA: 00:56:27
Epoch 1/5 - Step: 2900/9338 Loss G: 2.458109 Loss D: 0.653513 D(x): 0.820105 D(G(z)): 0.379199 / 0.103250
Time for the last step: 53.50 s Epoch ETA: 00:55:39
Epoch 1/5 - Step: 3000/9338 Loss G: 2.030103 Loss D: 0.948208 D(x): 0.445385 D(G(z)): 0.303225 / 0.263652
Time for the last step: 51.57 s Epoch ETA: 00:54:47
Epoch 1/5 - Step: 3100/9338 Loss G: 1.721604 Loss D: 0.949721 D(x): 0.365646 D(G(z)): 0.090072 / 0.232912
Time for the last step: 52.19 s Epoch ETA: 00:53:55
Epoch 1/5 - Step: 3200/9338 Loss G: 1.438854 Loss D: 1.142182 D(x): 0.768163 D(G(z)): 0.321164 / 0.237878
Time for the last step: 50.79 s Epoch ETA: 00:53:01
Epoch 1/5 - Step: 3300/9338 Loss G: 1.924418 Loss D: 0.923860 D(x): 0.729981 D(G(z)): 0.354812 / 0.318090
Time for the last step: 52.59 s Epoch ETA: 00:52:11
that is, the gradients on the Generator are higher and start to decrease after a while, and in the meanwhile the gradients on the Discriminator rise up. As for the losses, the Generator goes down while the Discriminator goes up. If compared to the tutorial, I guess this can be acceptable.
Here's my first question: I've noticed that on the tutorial (usually) as D_G_z1 rises, D_G_z2 decreases (and viceversa), while in my example this happens a lot less. Is it just a coincidence or am I doing something wrong?
Given that, I've let the training procedure go on, but now I'm noticing this:
Epoch 3/5 - Step: 1100/9338 Loss G: 4.071329 Loss D: 0.031608 D(x): 0.999969 D(G(z)): 0.024329 / 0.024329
Time for the last step: 51.41 s Epoch ETA: 01:11:24
Epoch 3/5 - Step: 1200/9338 Loss G: 3.883331 Loss D: 0.036354 D(x): 0.999993 D(G(z)): 0.043874 / 0.043874
Time for the last step: 51.63 s Epoch ETA: 01:10:29
Epoch 3/5 - Step: 1300/9338 Loss G: 3.468963 Loss D: 0.054542 D(x): 0.999972 D(G(z)): 0.050145 / 0.050145
Time for the last step: 52.47 s Epoch ETA: 01:09:40
Epoch 3/5 - Step: 1400/9338 Loss G: 3.504971 Loss D: 0.053683 D(x): 0.999972 D(G(z)): 0.052180 / 0.052180
Time for the last step: 50.75 s Epoch ETA: 01:08:41
Epoch 3/5 - Step: 1500/9338 Loss G: 3.437765 Loss D: 0.056286 D(x): 0.999941 D(G(z)): 0.058839 / 0.058839
Time for the last step: 52.20 s Epoch ETA: 01:07:50
Epoch 3/5 - Step: 1600/9338 Loss G: 3.369209 Loss D: 0.062133 D(x): 0.955688 D(G(z)): 0.058773 / 0.058773
Time for the last step: 51.05 s Epoch ETA: 01:06:54
Epoch 3/5 - Step: 1700/9338 Loss G: 3.290109 Loss D: 0.065704 D(x): 0.999975 D(G(z)): 0.056583 / 0.056583
Time for the last step: 51.27 s Epoch ETA: 01:06:00
Epoch 3/5 - Step: 1800/9338 Loss G: 3.286248 Loss D: 0.067969 D(x): 0.993238 D(G(z)): 0.063815 / 0.063815
Time for the last step: 52.28 s Epoch ETA: 01:05:09
Epoch 3/5 - Step: 1900/9338 Loss G: 3.263996 Loss D: 0.065335 D(x): 0.980270 D(G(z)): 0.037717 / 0.037717
Time for the last step: 51.59 s Epoch ETA: 01:04:16
Epoch 3/5 - Step: 2000/9338 Loss G: 3.293503 Loss D: 0.065291 D(x): 0.999873 D(G(z)): 0.070188 / 0.070188
Time for the last step: 51.85 s Epoch ETA: 01:03:25
Epoch 3/5 - Step: 2100/9338 Loss G: 3.184164 Loss D: 0.070931 D(x): 0.999971 D(G(z)): 0.059657 / 0.059657
Time for the last step: 52.14 s Epoch ETA: 01:02:34
Epoch 3/5 - Step: 2200/9338 Loss G: 3.116310 Loss D: 0.080597 D(x): 0.999850 D(G(z)): 0.074931 / 0.074931
Time for the last step: 51.85 s Epoch ETA: 01:01:42
Epoch 3/5 - Step: 2300/9338 Loss G: 3.142180 Loss D: 0.073999 D(x): 0.995546 D(G(z)): 0.054752 / 0.054752
Time for the last step: 51.76 s Epoch ETA: 01:00:50
Epoch 3/5 - Step: 2400/9338 Loss G: 3.185711 Loss D: 0.072601 D(x): 0.999992 D(G(z)): 0.076053 / 0.076053
Time for the last step: 50.53 s Epoch ETA: 00:59:54
Epoch 3/5 - Step: 2500/9338 Loss G: 3.027437 Loss D: 0.083906 D(x): 0.997390 D(G(z)): 0.082501 / 0.082501
Time for the last step: 52.06 s Epoch ETA: 00:59:03
Epoch 3/5 - Step: 2600/9338 Loss G: 3.052374 Loss D: 0.085030 D(x): 0.999924 D(G(z)): 0.073295 / 0.073295
Time for the last step: 52.37 s Epoch ETA: 00:58:12
not only D(x) has increased again and it's stuck to almost one, but also both D_G_z1 and D_G_z2 always show the same value. Moreover, looking at the losses it seems pretty clear that the Discriminator has outperformed the Generator. This behavior has gone on and on for the rest of the epoch and for all the next one, until the end of the training.
Hence my second question: is this normal? If not, what am I doing wrong within the procedure? How can I achieve a more stable training?
EDIT: I've tried to train the network using the MSELoss() as suggested and this is the output:
Epoch 1/1 - Step: 100/9338 Loss G: 0.800785 Loss D: 0.404525 D(x): 0.844653 D(G(z)): 0.030439 / 0.016316
Time for the last step: 55.22 s Epoch ETA: 01:25:01
Epoch 1/1 - Step: 200/9338 Loss G: 1.196659 Loss D: 0.014051 D(x): 0.999970 D(G(z)): 0.006543 / 0.006500
Time for the last step: 51.41 s Epoch ETA: 01:21:11
Epoch 1/1 - Step: 300/9338 Loss G: 1.197319 Loss D: 0.000806 D(x): 0.999431 D(G(z)): 0.004821 / 0.004724
Time for the last step: 51.79 s Epoch ETA: 01:19:32
Epoch 1/1 - Step: 400/9338 Loss G: 1.198960 Loss D: 0.000720 D(x): 0.999612 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.47 s Epoch ETA: 01:18:09
Epoch 1/1 - Step: 500/9338 Loss G: 1.212810 Loss D: 0.000021 D(x): 0.999938 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.18 s Epoch ETA: 01:17:11
Epoch 1/1 - Step: 600/9338 Loss G: 1.216168 Loss D: 0.000000 D(x): 0.999945 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.24 s Epoch ETA: 01:16:02
Epoch 1/1 - Step: 700/9338 Loss G: 1.212301 Loss D: 0.000000 D(x): 0.999970 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.61 s Epoch ETA: 01:15:02
Epoch 1/1 - Step: 800/9338 Loss G: 1.214397 Loss D: 0.000005 D(x): 0.999973 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.58 s Epoch ETA: 01:14:04
Epoch 1/1 - Step: 900/9338 Loss G: 1.212016 Loss D: 0.000003 D(x): 0.999932 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.20 s Epoch ETA: 01:13:13
Epoch 1/1 - Step: 1000/9338 Loss G: 1.215162 Loss D: 0.000000 D(x): 0.999988 D(G(z)): 0.000000 / 0.000000
Time for the last step: 52.28 s Epoch ETA: 01:12:23
Epoch 1/1 - Step: 1100/9338 Loss G: 1.216291 Loss D: 0.000000 D(x): 0.999983 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.78 s Epoch ETA: 01:11:28
Epoch 1/1 - Step: 1200/9338 Loss G: 1.215526 Loss D: 0.000000 D(x): 0.999978 D(G(z)): 0.000000 / 0.000000
Time for the last step: 51.88 s Epoch ETA: 01:10:35
As can be seen, the situation gets even worse. Moreover, reading the EnhanceNet paper all over again, Section 4.2.4 (Adversarial Training) states that the adversarial loss function used is a BCELoss(), as I would expect to solve the vanishing gradients problem that I get with MSELoss().
| Interpreting GAN losses is a bit of a black art, because the actual loss values on their own say little about the quality of the generated samples.
Question 1: The frequency of swinging between a discriminator/generator dominance will vary based on a few factors primarily (in my experience): learning rates and batch sizes which will impact the propagated loss. The particular loss metrics used will impact variance in how the D & G networks train. The EnhanceNet paper (for baseline) and the tutorial use a Mean Squared Error loss too - you're using a Binary Cross Entropy loss which will change the rate at which the networks converge. I'm no expert so here's a pretty good link to Rohan Varma's article that explains the difference between loss functions. Would be curious to see if your network behaves differently when you change the loss function - try it and update the question?
Question 2: Over time both the D and G losses should settle to a value, however it's somewhat difficult to tell whether they've converged on strong performance or whether they've converged due to something like mode collapse/diminishing gradients (Jonathan Hui's explanation on problems in training GANs). The best way I've found is to actually inspect a cross section of the generated images and either visually inspect the output or use some kind of perceptual metrics (SSIM, PSNR, PIQ, etc.) across the generated image set.
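As a concrete sketch of the visual-inspection idea (fixed_input is an assumed batch of inputs held constant across epochs, so successive dumps stay comparable):
import torch
import torchvision.utils as vutils

with torch.no_grad():
    fake = generator(fixed_input)
    vutils.save_image(fake, 'samples_epoch_{}.png'.format(epoch), normalize=True)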
Some other useful leads that you might find useful in finding an ans:
This post has a couple of reasonably good pointers on interpreting GAN Losses.
Ian Goodfellow's NIPS2016 tutorial also has some solid ideas on how to balance D & G training.
| https://stackoverflow.com/questions/62174141/ |
How to create new pandas dataframe column containing values of all other columns as a tensor? | I'm trying to create a new feature column which is a tensor containing the values in the existing columns. So if col A's value is '1', col B's value is '0', and col C's value is 0, then the new feature column's value will be [1,0,0].
I tried the following code:
import numpy as np
import pandas as pd
import torch
df = pd.DataFrame({"A":[1,1,0], "B":[0,1,1], "C":[0,0,1]})
df["new_feature"] = [df["A"].values, df["B"].values, df["C"].values]
...but the result is not what I need. The result is getting the values down each column rather than the values across the row (multiple column values). For example, the new_feature column value for the first row should be [1,0,0] but its showing [1,1,0]
My ultimate aim is to get a dataframe column that I can use as a torch tensor to input into a neural net.
| Use torch.from_numpy with apply lambda function.
df["new_feature"] = df.apply(lambda x:torch.from_numpy(x.to_numpy()), axis = 1)
df
A B C new_feature
0 1 0 0 [tensor(1), tensor(0), tensor(0)]
1 1 1 0 [tensor(1), tensor(1), tensor(0)]
2 0 1 1 [tensor(0), tensor(1), tensor(1)]
df["new_feature"][0]
tensor([1, 0, 0])
First convert dataframe values to numpy array using pd.Series.to_numpy and
then convert numpy array to tensor using torch.from_numpy.
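If the end goal is a single input tensor for a network, the per-row tensors can be stacked back together (a small sketch):
X = torch.stack(df["new_feature"].tolist())
X.shape   # torch.Size([3, 3])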
| https://stackoverflow.com/questions/62175302/ |
Recommended way to replace several values in a tensor at once? | Is there a batch way to replace several particular values in a pytorch tensor at once without a for loop?
Example:
old_values = torch.Tensor([1, 2, 3, 4, 5, 5, 2, 3, 3, 2])
old_new_value = [[2,22], [3,33], [6, 66]]
old_new_value = [[2,22], [3,33], [6, 66]], which means 2 should be replaced by 22, and 3 should be replaced by 33 and 6 to 66
Can I have an efficient way to achieve the following end_result?
end_result = torch.Tensor([1, 22, 33, 4, 5, 5, 22, 33, 33, 22])
Note that old_values is not unique. Also, it is possible that old_new_value contains a pair, here (6, 66), whose old value does not exist in old_values. The rows of old_new_value are unique, though.
| If you don't have any duplicate elements in your input tensor, here's one straightforward way using masking and value assignment using basic indexing. (I'll assume that the data type of the input tensor is int. But, you can simply adapt this code in a straightforward manner to other dtypes). Below is a reproducible illustration, with explanations interspersed in inline comments.
# input tensors to work with
In [75]: old_values
Out[75]: tensor([1, 2, 3, 4, 5], dtype=torch.int32)
In [77]: old_new_value
Out[77]:
tensor([[ 2, 22],
[ 3, 33]], dtype=torch.int32)
# generate a boolean mask using the values that need to be replaced (i.e. 2 & 3)
In [78]: boolean_mask = (old_values == old_new_value[:, :1]).sum(dim=0).bool()
In [79]: boolean_mask
Out[79]: tensor([False, True, True, False, False])
# assign the new values by basic indexing
In [80]: old_values[boolean_mask] = old_new_value[:, 1:].squeeze()
# sanity check!
In [81]: old_values
Out[81]: tensor([ 1, 22, 33, 4, 5], dtype=torch.int32)
A small note on efficiency: Throughout the whole process, we never made any copy of the data (i.e. we operate only on new views by massaging the shapes according to our needs). Therefore, the runtime would be blazing fast.
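Since the question's old_values does contain duplicates and a mapping pair, (6, 66), that never occurs, here is a sketch of a more general variant that only loops over the mapping pairs (each assignment itself stays vectorized):
import torch

old_values = torch.tensor([1, 2, 3, 4, 5, 5, 2, 3, 3, 2])
old_new_value = torch.tensor([[2, 22], [3, 33], [6, 66]])

result = old_values.clone()
for old, new in old_new_value:
    result[old_values == old] = new

print(result)   # tensor([ 1, 22, 33,  4,  5,  5, 22, 33, 33, 22])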
| https://stackoverflow.com/questions/62185188/ |
what is the pytorch's view equivalence with tensorflow 2.0? | l_conv7 = self.loc_conv7(conv7_feats) # (N, 24, 19, 19)
l_conv7 = l_conv7.permute(0, 2, 3, 1).contiguous() # (N, 19, 19, 24)
l_conv7 = l_conv7.view(batch_size, -1, 4) # (N, 2166, 4), there are a total 2116 boxes on this feature map
what is the equivalence with torch's view in TensorFlow?
how to change the l_conv7.view in TensorFlow 2.0?
| Use tf.reshape, which is the TensorFlow 2 counterpart of PyTorch's view:
l_conv7 = tf.reshape(l_conv7, (batch_size, -1, 4))
| https://stackoverflow.com/questions/62185282/ |
Weight Normalization in PyTorch | An important weight normalization technique was introduced in this paper and has been included in PyTorch since long as follows:
from torch.nn.utils import weight_norm
weight_norm(nn.Conv2d(in_channles, out_channels))
From the docs I get to know, weight_norm does re-parametrization before each forward() pass. But I am not sure if this re-parameterization is also happening during the inference when everything is running inside with torch.no_grad() and the model is set to eval() mode.
Can someone please help me know if weight_norm is active only during training or during the inference mode as described above?
Thank you
| I have finally figured out the problem.
Batch normalization learns two parameters during training and uses them for inference. Thus it is necessary to change its behaviour using eval() to tell not to modify them any further.
I then scrutinizingly checked the weight normalization paper and found it to be 'inherently deterministic'. It simply decouples the original weight vectors as product of two quantities as shown below.
w = g * v / ||v||
Obviously, whether you use the LHS or the RHS to compute the output does not matter. However, by decoupling w into these two quantities, passing them to the optimizer, and deleting the original w parameter, better training is achieved. For the reasons, refer to the paper, where this is nicely described.
Thus it does not matter if weight normalization is removed or not during testing. To validate this I tried the following small code.
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm as wn
from torch.nn.utils import remove_weight_norm as wnr
# define the model 'm'
m = wn(nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1, bias=True))
ip = torch.rand(1,1,5,5)
target = torch.rand(1,1,5,5)
l1 = torch.nn.L1Loss()
optimizer = torch.optim.Adam(m.parameters())
# begin training
for _ in range(5):
out = m(ip)
loss = l1(out,target)
loss.backward()
optimizer.step()
with torch.no_grad():
m.eval()
print('\no/p after training with wn: {}'.format(m(ip)))
wnr(m)
print('\no/p after training without wn: {}'.format(m(ip)))
# begin testing
m2 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3,padding=1, bias=True)
m2.load_state_dict(m.state_dict())
with torch.no_grad():
m2.eval()
out = m2(ip)
print('\nOutput during testing and without weight_norm: {}'.format(out))
And the output is below,
o/p after training with wn:
tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307],
[0.1846, 0.3931, 0.5713, 0.2909, 0.4026],
[0.1716, 0.5971, 0.4297, 0.0845, 0.6172],
[0.2938, 0.2389, 0.4478, 0.5828, 0.6276],
[0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]])
o/p after training without wn:
tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307],
[0.1846, 0.3931, 0.5713, 0.2909, 0.4026],
[0.1716, 0.5971, 0.4297, 0.0845, 0.6172],
[0.2938, 0.2389, 0.4478, 0.5828, 0.6276],
[0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]])
Output during testing and without weight_norm:
tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307],
[0.1846, 0.3931, 0.5713, 0.2909, 0.4026],
[0.1716, 0.5971, 0.4297, 0.0845, 0.6172],
[0.2938, 0.2389, 0.4478, 0.5828, 0.6276],
[0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]])
Please see that all the values are exactly the same, as only a reparameterization is happening.
Regarding,
Then I tested two models using C++ code with libtorch. But the results are not the same.
See https://github.com/pytorch/pytorch/issues/21275 which reports a bug with TorchScript.
And regarding,
I am wondering what does weight_norm do in inference? Is it useful?
The answer is that it does nothing. Whether you compute x * 2 or x * (1 + 1) does not matter. It is not useful, but not harmful either. So better remove it.
| https://stackoverflow.com/questions/62188472/ |
Expected object of scalar type Float but got scalar type Long for argument #2 'mat1' in call to _th_addmm | The error Expected object of scalar type Float but got scalar type Long for argument #2 'mat1' in call to _th_addmm is being displayed after running the code below.
import numpy as np
import pandas as pd
import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
import re
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader, TensorDataset
... (created a pandas dataframe containing a 'tweet', 'sentiment' and 47 one-hot bag of words cols)
# create train_target data set containing just the target
train_target = torch.tensor(df_train['sentiment'].values, dtype=torch.long)
# create train predictor features data set containing just the predictor features
train = torch.tensor(df_train.drop(['tweet','sentiment'], axis = 1).values, dtype=torch.long)
# convert to torch tensor and define data loader (train)
train_tensor = TensorDataset(train, train_target)
trainset = torch.utils.data.DataLoader(train_tensor, batch_size=2, shuffle=False)
# create test_target data set containing just the target
test_target = torch.tensor(df_test['sentiment'].values, dtype=torch.long)
# create test predictor features data set containing just the predictor features
test = torch.tensor(df_test.drop(['tweet','sentiment'], axis = 1).values, dtype=torch.long)
# convert to torch tensor and define data loader (test)
test_tensor = TensorDataset(test, test_target)
testset = torch.utils.data.DataLoader(test_tensor, batch_size=2, shuffle=False)
input_length = 47
class Net(nn.Module): # create a new class called Net that inherits from nn's "Module" class
# initialise our Net class
def __init__(self):
super().__init__() # run the initialisation function of the nn.Module class (the parent class)
self.fc1 = nn.Linear(input_length, 768) # define the first fully connected layer.
self.fc2 = nn.Linear(768, 768) # define the 2nd layer
self.fc3 = nn.Linear(768, 2) # output layer. 2 classes so output is size 2
# define how data will flow through feed-forward network
def forward(self, x):
x = F.relu(self.fc1(x)) # x becomes the output of running the 1st layer, after relu activation is applied
x = F.relu(self.fc2(x)) # x then becomes the output of the 2nd layer...
x = self.fc3(x)
return F.log_softmax(x, dim=1) # we want a probability distribution across the 2 classes so using softmax
# dim=1 because we want the probabilities across the classes, not
# the batches
# create a Net object called net
net = Net()
print(net)
# Train the model (net)
optimiser = optim.Adam(net.parameters(), # net.parameters is everything thats adjustable in our model
lr=0.001) # learning rate
epochs = 10
for epoch in range(epochs):
for batch_data in trainset:
X, y = batch_data # set X as the input and y as the label
net.zero_grad() # start with gradients of zero
output = net(X.view(-1, input_length)) # run the model (could put batch size in instead of -1)
loss = F.nll_loss(output, y) # how wrong were we? Calculate loss.
loss.backward() # backpropagate the loss (how much did each weight contribute to the loss?)
optimiser.step() # adjust the weights based on the backpropagation
print('Epoch: ', epoch, ' Loss: ', loss)
What do I need to do to fix this? I'm trying to create a neural network to do sentiment classification on some text data from tweets.
| The inputs to the nn.Linear layers, and therefore your model, need to be floats, not longs.
You need to change the features to use dtype=torch.float:
train = torch.tensor(df_train.drop(['tweet','sentiment'], axis = 1).values, dtype=torch.float)
test = torch.tensor(df_test.drop(['tweet','sentiment'], axis = 1).values, dtype=torch.float)
That only applies to the input features, whereas your targets need to remain torch.long for the NLL loss.
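Alternatively (a sketch, not needed if you change the dataset as above), the cast could be done inside the training loop instead:
for batch_data in trainset:
    X, y = batch_data
    X = X.float()                            # features must be float for nn.Linear
    output = net(X.view(-1, input_length))   # y stays torch.long for F.nll_loss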
| https://stackoverflow.com/questions/62191983/ |
How to attach hooks to ReLUs in Inception V3 from torchvision | I am using Inception v3 from torchvision. I tried to find the ReLUs within the model:
def recursively_find_submodules(model, submodule_type):
module_list = []
q = [model]
while q:
child = q.pop()
if isinstance(child, submodule_type):
module_list.append(child)
q.extend(list(child.children()))
return module_list
inception = torch.hub.load('pytorch/vision:v0.6.0', 'inception_v3', pretrained=True)
l = recursively_find_submodules(inception, torch.nn.ReLU) # l is empty!
So the ReLUs are not children of any module within the torch model. Upon closer inspection I found the ReLUs in the source code of torchvision but not as modules. In inception.py I found the following:
class BasicConv2d(nn.Module):
def __init__(self, in_channels, out_channels, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return F.relu(x, inplace=True)
So the BasicConv2d module uses the ReLU function to clamp its output instead of the module (torch.nn.ReLU). I guess there is no way to hook up to ReLU functions and modify their input / output without modifying the whole model to use ReLU modules - or is there a way to do this?
| You can hook to the batch-norm layer preceding the ReLU and attach there, taking into account that you observe the inputs to the ReLU rather than the features after the activation.
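A rough sketch of that idea (the hook and dictionary names are my own placeholders): register a forward hook on every BatchNorm2d and apply the ReLU yourself to recover the post-activation features:
import torch
import torch.nn.functional as F

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # the BatchNorm output is exactly what BasicConv2d passes to F.relu,
        # so applying relu here reproduces the features after the activation
        activations[name] = F.relu(output)
    return hook

inception = torch.hub.load('pytorch/vision:v0.6.0', 'inception_v3', pretrained=True)
for name, module in inception.named_modules():
    if isinstance(module, torch.nn.BatchNorm2d):
        module.register_forward_hook(make_hook(name))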
| https://stackoverflow.com/questions/62192030/ |
Why doesn't pytorch allow inplace operations on leaf variables? | So if I run this code in Pytorch:
x = torch.ones(2,2, requires_grad=True)
x.add_(1)
I will get the error:
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
I understand that Pytorch does not allow inplace operations on leaf variables and I also know that there are ways to get around this restrictions. What I don't understand is the philosophy behind this rule. Why is it wrong to change a leaf variable with inplace operations?
| As I understand it, any time you perform an in-place operation on a leaf tensor that was initialized with requires_grad=True, PyTorch throws an error to make sure it was intentional. For example, you normally would only update a weight tensor using optimizer.step().
For another example, I ran into this issue when trying to update the values in a backprop-able tensor during network initialization.
self.weight_layer = nn.Parameter(data=torch.zeros(seq_length), requires_grad=True)
self.weight_layer[true_ids == 1] = -1.2
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
The problem is that, because requires_grad=True, the network doesn't know that I'm still initializing the values. If this is what you are trying to do, wrapping the update in a torch.no_grad block is one solution:
with torch.no_grad():
    self.weight_layer = nn.Parameter(data=torch.zeros(seq_length), requires_grad=True)
    self.weight_layer[true_ids == 1] = -1.2
Otherwise, you could just set requires_grad=True after you finish initializing the Tensor:
self.weight_layer = nn.Parameter(data=torch.zeros(seq_length))
self.weight_layer[true_ids == 1] = -1.2
self.weight_layer.requires_grad = True
| https://stackoverflow.com/questions/62198351/ |
What is the best file type to save a 4D tensor? | I have some data I need to pre-process for a later step in a 3D Convolutional Network. The data comes in a file formatted like this:
POSITION
x y z (feature 1 x) (feature 1 y) (feature 1 z) (feature 2 x) (feature 2 y ...
1.2 0.54 2.3 0.04 0.2 -0.9 -0.2 0.65 ...
...(more rows of the same format)...
And after some other steps which involve operating on the positional data and the features, I get a pytorch tensor with dimensions [height][width][depth][features], or equivalently a numpy array, where the first three are positional data that I can use to plot the features using colours, and the [features] are vectors containing each of the feature values.
These are pretty large files and I'd like not have to perform the conversion from the first file format shown above to the tensor/array form later during processing. I'm thinking of using torch.save(tensor, 'file.pt').
My question is: what is the best file format to save this data so that it can be easily accessed later without the need for any pre-processing? Having to serialize it with PyTorch seems to be quite a convoluted way to save a type of data I would expect to have a more specific/designated file format.
| I think I've found a more direct way to do it. Numpy supports saving its arrays as a .npy file.
The procedure is pretty straightforward. To save an array array_1 into the file numpy_array_1.npy, all you need to do is:
np.save('numpy_array_1.npy', array_1)
And then to load it into array_2:
array_2 = np.load('numpy_array_1.npy')
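Since the data starts out as a PyTorch tensor, the round trip is straightforward (a sketch; the file name and my_tensor are placeholders, and .numpy() requires the tensor to be on the CPU):
import numpy as np
import torch

np.save('features.npy', my_tensor.cpu().numpy())
my_tensor_back = torch.from_numpy(np.load('features.npy'))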
| https://stackoverflow.com/questions/62203705/ |
`return_sequences = False` equivalent in pytorch LSTM | In tensorflow/keras, we can simply set return_sequences = False for the last LSTM layer before the classification/fully connected/activation (softmax/sigmoid) layer to get rid of the temporal dimension.
In PyTorch, I don't find anything similar. For the classification task, I don't need a sequence to sequence model but many to one architecture like this:
Here's my simple bi-LSTM model.
import torch
from torch import nn
class BiLSTMClassifier(nn.Module):
def __init__(self):
super(BiLSTMClassifier, self).__init__()
self.embedding = torch.nn.Embedding(num_embeddings = 65000, embedding_dim = 64)
self.bilstm = torch.nn.LSTM(input_size = 64, hidden_size = 8, num_layers = 2,
batch_first = True, dropout = 0.2, bidirectional = True)
# as we have 5 classes
self.linear = nn.Linear(8*2*512, 5) # last dimension
def forward(self, x):
x = self.embedding(x)
print(x.shape)
x, _ = self.bilstm(x)
print(x.shape)
x = self.linear(x.reshape(x.shape[0], -1))
print(x.shape)
# create our model
bilstmclassifier = BiLSTMClassifier()
If I observe the shapes after each layer,
xx = torch.tensor(X_encoded[0]).reshape(1,512)
print(xx.shape)
# torch.Size([1, 512])
bilstmclassifier(xx)
#torch.Size([1, 512, 64])
#torch.Size([1, 512, 16])
#torch.Size([1, 5])
What can I do so that the last LSTM returns a tensor with shape (1, 16) instead of (1, 512, 16)?
| The simplest way to do this is by indexing into the tensor:
x = x[:, -1, :]
where x is the RNN output. Of course, if batch_first is False, one would have to use x[-1, :, :] (or just x[-1]) to index into the time axis instead. Turns out this is the same thing Tensorflow/Keras do. The relevant code can be found in K.rnn here:
last_output = tuple(o[-1] for o in outputs)
Note that the code at this point uses time_major data format, so the index is into the first axis. Also, outputs is a tuple because it can be multiple layers, state/cell pairs etc., but it is generally the sequence of outputs for all time steps.
This is then used in the RNN class as follows:
if self.return_sequences:
output = K.maybe_convert_to_ragged(is_ragged_input, outputs, row_lengths)
else:
output = last_output
So in total, we can see that return_sequences=False just uses outputs[-1].
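Applied to the model from the question, a sketch of the adapted forward would look like this (note that self.linear would then shrink to nn.Linear(16, 5)):
def forward(self, x):
    x = self.embedding(x)    # (batch, 512, 64)
    x, _ = self.bilstm(x)    # (batch, 512, 16)
    x = x[:, -1, :]          # keep only the last time step -> (batch, 16)
    return self.linear(x)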
| https://stackoverflow.com/questions/62204109/ |
Pytorch: Index with tensor along multiple axes OR scatter to more than one index at once | I am trying to update very specific indices of a multidimensional tensor in Pytorch, and I am not sure how to access the correct indices. I can do this in a very straightforward way in Numpy:
import numpy as np
#set up the array containing the data
data = 100*np.ones((10,10,2))
data[5:,:,:] = 0
#select the data points that I want to update
idxs = np.nonzero(data.sum(2))
#generate the updates that I am going to do
updates = np.random.randint(5,size=(idxs[0].shape[0],2))
#update the data
data[idxs[0],idxs[1],:] = updates
I need to implement this in Pytorch but I am not sure how to do this. It seems like I need the scatter function but that only works along a single dimension instead of the multiple dimensions that I need. How can I do this?
| These operations work exactly the same in their PyTorch counterparts, except for torch.nonzero, which by default returns a tensor of size [z, n] (where z is the number of non-zero elements and n the number of dimensions) instead of a tuple of n tensors with size [z] (as NumPy does), but that behaviour can be changed by setting as_tuple=True.
Other than that you can directly translate it to PyTorch, but you need to make sure that the types match, because you cannot assign a tensor of type torch.long (default of torch.randint) to a tensor of type torch.float (default of torch.ones). In this case, data is probably meant to have type torch.long:
#set up the array containing the data
data = 100*torch.ones((10,10,2), dtype=torch.long)
data[5:,:,:] = 0
#select the data points that I want to update
idxs = torch.nonzero(data.sum(2), as_tuple=True)
#generate the updates that I am going to do
updates = torch.randint(5,size=(idxs[0].shape[0],2))
#update the data
data[idxs[0],idxs[1],:] = updates
| https://stackoverflow.com/questions/62207512/ |
pytorch custom dataset: DataLoader returns a list of tensors rather than tensor of a list | import torch
class Custom_Dataset(torch.utils.data.dataset.Dataset):
def __init__(self, _dataset):
self.dataset = _dataset
def __getitem__(self, index):
example, target = self.dataset[index]
return example, target
def __len__(self):
return len(self.dataset)
train_data = [([1, 3, 5], 0),
([2, 4, 6], 1)]
train_loader = torch.utils.data.DataLoader(dataset=Custom_Dataset(train_data),
batch_size=1,
shuffle=False)
for inputs, targets in train_loader:
print(inputs)
print(targets)
I'm defining my training data as [([1, 3, 5], 0), ([2, 4, 6], 1)]: the input [1, 3, 5] is paired with the target 0.
But when I fetch data from data loader, it becomes:
[tensor([1]), tensor([3]), tensor([5])]
tensor([0])
How do I get instead:
tensor([[1],
[3],
[5]])
tensor([0])
?
I know torch.stack can do the trick, but can I convert it in my custom dataset class?
| One solution to get the desired input would be using numpy. Below I changed only two lines in your example to make it work.
import torch
import numpy as np
class Custom_Dataset(torch.utils.data.dataset.Dataset):
def __init__(self, _dataset):
self.dataset = _dataset
def __getitem__(self, index):
example, target = self.dataset[index]
return np.array(example), target
def __len__(self):
return len(self.dataset)
train_data = [([1, 3, 5], 0),
([2, 4, 6], 1)]
train_loader = torch.utils.data.DataLoader(dataset=Custom_Dataset(train_data),
batch_size=1,
shuffle=False)
for inputs, targets in train_loader:
print(inputs)
print(targets)
Output of this code would be
tensor([[1, 3, 5]])
tensor([0])
tensor([[2, 4, 6]])
tensor([1])
But of course, I am assuming that having a row vector or a column vector does not make any difference to you. Otherwise, you might want to check this answer about transposing 1D vectors.
Hope this helps.
| https://stackoverflow.com/questions/62208904/ |
Why use Caffe2 or Core-ML instead of LibTorch(.pt file) on iOS? | It seems like there are several ways to run Pytorch models on iOS.
PyTorch(.pt) -> onnx -> caffe2
PyTorch(.pt) -> onnx -> Core-ML (.mlmodel)
PyTorch(.pt) -> LibTorch (.pt)
PyTorch Mobile?
What is the difference between the above methods?
Why people use caffe2 or Core-ml (.mlmodel), which requires model format conversion, instead of LibTorch?
| Core ML can use the Apple Neural Engine (ANE), which is much faster than running the model on the CPU or GPU. If a device has no ANE, Core ML can automatically fall back to the GPU or CPU.
I haven't really looked into PyTorch Mobile in detail, but I think it currently only runs on the CPU, not on the GPU. And it definitely won't run on the ANE because only Core ML can do that.
Converting models can be a hassle, especially from PyTorch which requires going through ONNX first. But you do end up with a much faster way to run those models.
| https://stackoverflow.com/questions/62211409/ |
Get the Cross Entropy Loss in pytorch as in Keras | I am struggling to port a classification model form keras to pytorch. Especially the cross entropy loss seems to return totally different numbers.
import numpy as np
import torch as t
import torch.nn as nn
import tensorflow.keras.backend as K
y_true = np.array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
y_pred = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1, 0.41, 0.31, 0.21, 0.11]])
print("Keras", K.categorical_crossentropy(K.constant(y_true), K.constant(y_pred)))
print("PyTorch", nn.CrossEntropyLoss()(t.tensor(y_pred).argsort(dim=-1).float(), t.tensor(y_true).argmax(dim=-1)))```
prints:
Keras tf.Tensor([2.3369865], shape=(1,), dtype=float32)
PyTorch tensor(1.4587)
Since I have a custom loss function where cross entropy is a part of it, I would need to get similar if not the same numbers.
| The problem is that they have different implementations.
As the PyTorch docs say, nn.CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss() in one single class. However, the TensorFlow docs specify that keras.backend.categorical_crossentropy does not apply softmax by default unless from_logits is True. For this reason, you should not use keras.backend.categorical_crossentropy without having previously applied softmax, unless you use from_logits=True.
If you don't want to apply softmax beforehand you should use:
import numpy as np
import torch as t
import torch.nn as nn
import tensorflow.keras.backend as K
y_true = np.array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
y_pred = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1, 0.41, 0.31, 0.21, 0.11]])
print("Keras", K.categorical_crossentropy(K.constant(y_true), K.constant(y_pred), from_logits=True))
# output: Keras tf.Tensor([2.408051], shape=(1,), dtype=float32)
print("PyTorch", nn.CrossEntropyLoss()(t.tensor(y_pred).float(), t.tensor(y_true).argmax(dim=-1)))
# output: PyTorch tensor(2.4081)
Otherwise, you can apply Softmax manually before computing categorical_crossentropy
import numpy as np
import torch as t
import torch.nn as nn
import tensorflow.keras.backend as K
y_true = np.array([[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
y_pred = np.array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1, 0.41, 0.31, 0.21, 0.11]])
print("Keras", K.categorical_crossentropy(K.constant(y_true), K.softmax(K.constant(y_pred))))
# output: Keras tf.Tensor([2.408051], shape=(1,), dtype=float32)
print("PyTorch", nn.CrossEntropyLoss()(t.tensor(y_pred).float(), t.tensor(y_true).argmax(dim=-1)))
# output: PyTorch tensor(2.4081)
So you should not use keras.backend.categorical_crossentropy with from_logits=False as you were doing in your example.
tf.keras.backend.categorical_crossentropy
target: A tensor of the same shape as output.
output: A tensor resulting from a softmax (unless from_logits is True, in which case output is expected to be the logits).
from_logits: Boolean, whether output is the result of a softmax, or is a tensor of logits.
| https://stackoverflow.com/questions/62213536/ |
Pytorch ValueError: Target and input must have the same number of elements after change Image size | i have a working peace of code, which takes a Batchsize from 32 Image with the shape of 256*256 and i can train my neuronal network.
class Netz(nn.Module):
def __init__(self):
super(Netz,self).__init__()
self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
self.conv2 = nn.Conv2d(6, 12, kernel_size=3)
self.conv3 = nn.Conv2d(12, 18, kernel_size=3)
self.conv4 = nn.Conv2d(18, 24, kernel_size=3)
self.fc1 = nn.Linear(4704, 1000)
self.fc2 = nn.Linear(1000, 350)
self.fc3 = nn.Linear(350,43)
def forward (self,x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = F.relu(F.max_pool2d(self.conv3(x), 2))
x = F.relu(F.max_pool2d(self.conv4(x), 2))
x = x.view(-1,4704)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return torch.sigmoid(x)
# Traningsalgorithmus
optimizer = optim.Adam(model.parameters(), lr=0.001)
def train(epoch):
model.train()
batch_id = 0
for data, target in train_data_set:
data = Variable(data)
target = torch.Tensor(target)
target = Variable(target)
optimizer.zero_grad()
out = model(data)
criterion = F.binary_cross_entropy
loss = criterion(out,target)
loss.backward()
optimizer.step()
print ('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_id * len(data), len(train_data_set)*32,
100. * batch_id / len(train_data_set), loss.item()))
batch_id = batch_id + 1
When I change the size of the image to 50*50 and change the Netz code like this:
class Netz(nn.Module):
def __init__(self):
super(Netz,self).__init__()
self.conv1 = nn.Conv2d(3, 6, kernel_size=5)
self.conv2 = nn.Conv2d(6, 12, kernel_size=3)
self.conv3 = nn.Conv2d(12, 18, kernel_size=3)
self.conv4 = nn.Conv2d(18, 24, kernel_size=3)
self.fc1 = nn.Linear(768, 1000)
self.fc2 = nn.Linear(1000, 350)
self.fc3 = nn.Linear(350,43)
def forward (self,x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = F.relu(F.max_pool2d(self.conv3(x), 2))
x = F.relu(F.max_pool2d(self.conv4(x), 2))
x = x.view(-1,768)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return torch.sigmoid(x)
I get the error:
ValueError: Target and input must have the same number of elements. target nelement (1376) != input nelement (43)
As far as I can see, the problem comes after x = x.view(-1,768): it returns a tensor with torch.Size([1, 768]). When I use the image size 256*256 it returns a tensor with torch.Size([32, 4704]) and I don't get the error.
Does someone know how I can fix my problem?
| You need to be careful when using -1 in views, since it just uses the remaining size and if that doesn't correspond to your intentions, you won't immediately know that it didn't behave as expected. You should particularly avoid -1 for the batch dimension, because you can mistakenly change the batch size, which should not change and their data should be independent from each other.
Given an input of size [32, 3, 256, 256] the output after the convolutions has size [32, 24, 14, 14], which can be flattened to [32, 4704], as you anticipated in the first version. When you change the input to size [32, 3, 50, 50], the output after the convolutions has size [32, 24, 1, 1], which can clearly not be converted to size [32, 768], because flattening it, would result in a size of [32, 24]. Given that 32 * 24 = 768, you incorrectly combine the batches into one, creating a tensor of size [1, 768], and if you had used a different batch size, it wouldn't even work.
The correct input size to the first linear should be 24:
self.fc1 = nn.Linear(24, 1000)
To catch any mistake regarding the dimensions in the model, instead of later at the loss calculation, you can set the actual size without any -1 in the view, or alternatively flatten it after the batch size, either with view or torch.flatten, so an error will occur if there is a size mismatch in the linear layer:
# Reshape to [batch_size, 24]
x = x.view(x.size(0), 24)
# Flatten with view
x = x.view(x.size(0), -1)
# Flatten, starting from dimension 1 (after the batch dimension)
x = x.flatten(1)
| https://stackoverflow.com/questions/62218670/ |
Dimension error in implementation of a convolutional network | I am trying to understand why my classifier has a dimension issue. Here is my code:
class convnet(nn.Module):
def __init__(self, num_classes=1000):
super(convnet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(kernel_size=2, stride = 2),
nn.Conv2d(32, 32, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(32),
nn.MaxPool2d(kernel_size=2, stride = 2), #stride=2),
nn.Conv2d(32, 64, kernel_size=3, stride=1),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.MaxPool2d(kernel_size=2, stride = 2),
)
self.classifier = nn.Sequential(
nn.Linear(576, 128),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Linear(128, 64),
nn.ReLU(inplace=True),
nn.BatchNorm2d(64),
nn.Linear(64,num_classes),
nn.Softmax(),
)
def forward(self, x):
x = self.features(x)
x = torch.flatten(x,1) #x.view(x.size(0), 256 * 6 * 6)
x = self.classifier(x)
return x
def neuralnet(num_classes,**kwargs):
model = convnet(**kwargs)
return model
So here my issue is: expected 4D input (got 2D input)
I'm quite sure that the error arises from the flatten command, however I don't really understand why as the classifier has fully dense connections. If someone knows where I'm going wrong, that would be very helpful!
Thank you
| After flattening, the input to the classifier has 2 dimensions (size: [batch_size, 576]), therefore the output of the first linear layer will also have 2 dimensions (size: [batch_size, 128]). That output is then passed to nn.BatchNorm2d, which requires its input to have 4 dimensions (size: [batch_size, channels, height, width]).
If you want to use batch norm on a 2D input, you need to use nn.BatchNorm1d, which accepts either a 3D input (size: [batch_size, channels, length]) or a 2D input (size: [batch_size, length]).
self.classifier = nn.Sequential(
nn.Linear(576, 128),
nn.BatchNorm1d(128),
nn.ReLU(inplace=True),
nn.Linear(128, 64),
nn.ReLU(inplace=True),
nn.BatchNorm1d(64),
nn.Linear(64,num_classes),
nn.Softmax(),
)
| https://stackoverflow.com/questions/62219488/ |
Why do I get NONE gradient of parameters in a loaded model in Pytorch, even after backword? | I have a pretrained model which was saved by
torch.save(net, 'lenet5_mnist_model')
And now I am loading it back and trying to calculate fisher information matrix like this:
precision_matrices = {}
batch_size = 32
my_model = torch.load('lenet5_mnist_model')
my_model.eval() # I tried to comment this off, but still no luck
for n, p in deepcopy({n: p for n, p in my_model.named_parameters()}).items():
p = torch.tensor(p, requires_grad = True)
p.data.zero_()
precision_matrices[n] = variable(p.data)
for idx in range(int(images.shape[0]/batch_size)):
x = images[idx*batch_size : (idx+1)*batch_size]
my_model.zero_grad()
x = Variable(x.cuda(), requires_grad = True)
output = my_model(x).view(1,-1)
label = output.max(1)[1].view(-1)
loss = F.nll_loss(F.log_softmax(output, dim=1), label)
loss = Variable(loss, requires_grad = True)
loss.backward()
for n, p in my_model.named_parameters():
precision_matrices[n].data += p.grad.data**2
Finally, the above code will crash at the last line, because p.grad is NoneType. So the error is:
AttributeError: 'NoneType' object has no attribute 'data'.
Could someone provide some guidance on what caused the NoneType grad for the parameters? How should I fix this?
| Your loss does not backpropagate the gradients through the model, because you are creating a new loss tensor with the value of the actual loss, which is a leaf of the computational graph, meaning that there is no history to backpropagate through.
loss.backward() needs to be called on the output of loss = F.nll_loss(F.log_softmax(output, dim=1), label).
I'm assuming that you thought you need to create a tensor with requires_grad=True, to be able to calculate the gradients. That is not the case. Tensors created with requires_grad=True are the leaves of the computational graph (they start the graph) and every operation performed on any tensor that is part of the graph is tracked such that the gradients can flow through the intermediate results to the leaves. Only tensors that need to be optimised (i.e. learnable parameters) should set requires_grad=True manually (the model's parameters do that automatically), everything else regarding the gradients is inferred. Neither x nor the loss are learnable parameters.
This confusion presumably arose due to the use of Variable. It was deprecated in PyTorch 0.4.0, which was released over 2 years ago, and all of its functionality has been merged into the tensors. Please do not use Variable.
x = images[idx*batch_size : (idx+1)*batch_size]
my_model.zero_grad()
x = x.cuda()
output = my_model(x).view(1,-1)
label = output.max(1)[1].view(-1)
loss = F.nll_loss(F.log_softmax(output, dim=1), label)
loss.backward()
| https://stackoverflow.com/questions/62224241/ |
How to use list as index with pytorch | For example, a 2d tensor:
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
And a list l = [0, 1]
If I execute t[l], then it ends up printing the 0th and first line of t.
But what if I want to use l as an index? I expect to use l to find the element at 0th row and 1th column. In other words, I expect the same result as t[0, 1] or t[0][1].
And I want to use it in more than 2d dimensions as well. Using l with length n as an index to track elements in n dimensions tensor.
| IIUC You can do this for given scenario - t[tuple(l)]
t
tensor([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
l
[0, 1]
t[tuple(l)] # equivalent to t[(0,1)] or t[0,1]
tensor(2)
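The same pattern generalises to more dimensions, e.g. (a small sketch):
t3 = torch.arange(24).reshape(2, 3, 4)
idx = [1, 2, 3]
t3[tuple(idx)]   # tensor(23), same as t3[1, 2, 3]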
| https://stackoverflow.com/questions/62228373/ |
Training with Pytorch: error due to CUDA memory issue | I am trying to train a model on the Cityscapes dataset, for segmentation. I use torchvision deeplabv3_resnet50 model and it's Cityscapes dataset class and transforms. In case it matters, I am running the code in Jupyter notebook.
The datasets are working, as are the dataloaders. When I attempt to train, I always get this error, at the point when the first batch is trying to be put thru the network (y_ = net(xb) in one_epoch function).
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 6.00 GiB total capacity; 4.20 GiB already allocated; 6.87 MiB free; 4.20 GiB reserved in total by PyTorch)
What is strange, is that no matter what the batch size (bs) is, the the amount of memory free according to the error is a value a little less than the amount of memory that is trying to be allocated, e.g. for bs=16 I get:
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 6.00 GiB total capacity; 2.90 GiB already allocated; 1.70 GiB free; 2.92 GiB reserved in total by PyTorch)
I have a much more complicated model running, that will work with bs=16. This model builds everything from scratch. But I really want to be able to use the simplicity that torchvision seems to have with it's model zoo and datasets.
My code is below, not much more than the bare essentials, enough to show if it is running ok on the GPU.
def one_epoch(net, loss, dl, opt=None, metric=None):
if opt:
net.train() # only affects some layers
else:
net.eval()
rq_stored = []
for p in net.parameters():
rq_stored.append(p.requires_grad)
p.requires_grad = False
L, M = [], []
dl_it = iter(dl)
for xb, yb in tqdm(dl_it, leave=False):
xb, yb = xb.cuda(), yb.cuda()
y_ = net(xb)
l = loss(y_, yb)
if opt:
opt.zero_grad()
l.backward()
opt.step()
L.append(l.detach().cpu().numpy())
if metric: M.append(metric(y_, yb).cpu().numpy())
if not opt:
for p,rq in zip(net.parameters(), rq_stored): p.requires_grad = rq
return L, M
accuracy = lambda y_,yb: (y_.max(dim=1)[1] == yb).float().mean()
def fit(net, tr_dl, val_dl, loss=nn.CrossEntropyLoss(), epochs=3, lr=3e-3, wd=1e-3):
opt = optim.Adam(net.parameters(), lr=lr, weight_decay=wd)
Ltr_hist, Lval_hist = [], []
for epoch in trange(epochs):
Ltr, _ = one_epoch(net, loss, tr_dl, opt)
Lval, Aval = one_epoch(net, loss, val_dl, None, accuracy)
Ltr_hist.append(np.mean(Ltr))
Lval_hist.append(np.mean(Lval))
print(f'epoch: {epoch+1}\ttraining loss: {np.mean(Ltr):0.4f}\tvalidation loss: {np.mean(Lval):0.4f}\tvalidation accuracy: {np.mean(Aval):0.2f}')
return Ltr_hist, Lval_hist
class To3ch(object):
def __call__(self, pic):
if pic.shape[0]==1: pic = pic.repeat(3,1,1)
return pic
bs = 1
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transf = transforms.Compose([
transforms.ToTensor(),
To3ch(),
transforms.Normalize(*imagenet_stats)
])
train_ds = datasets.Cityscapes('C:/cityscapes_ds', split='train', target_type='semantic', transform=transf, target_transform=transf)
val_ds = datasets.Cityscapes('C:/cityscapes_ds', split='val', target_type='semantic', transform=transf, target_transform=transf)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True, num_workers=0)
val_dl = DataLoader(val_ds, batch_size=2*bs, shuffle=False, num_workers=0)
net = models.segmentation.deeplabv3_resnet50(num_classes=20)
fit(net.cuda(), train_dl, val_dl, loss=nn.CrossEntropyLoss(), epochs=1, lr=1e-4, wd=1e-4, plot=True)
| You didn't specify, but if you're using the original Cityscapes, this OOM is completely expected.
The original Cityscapes dataset has large images (something like 1024x2048, IIRC), and it looks like you have a 6GB GPU. FYI, I cannot fit batch_size=2 in a 12GB GPU with inputs of this size.
When training DeepLab models, it is common to apply transformations on the input (e.g., random crops, resize, scaling, etc.), and it looks like you don't apply any.
When you say:
I have a much more complicated model running, that will work with bs=16.
Perhaps you're looking at a different kind of complexity, something that has less impact on memory requirements than you think.
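If you want to stick with this setup, the usual first step is to shrink the inputs. For illustration only, a sketch of downscaling the images before normalisation (the 256x512 working size is just an assumption; note that for segmentation the target masks would also need a matching resize with nearest-neighbour interpolation, which this simple target_transform pipeline does not do for you):
transf = transforms.Compose([
    transforms.Resize((256, 512)),  # Cityscapes images are roughly 1024x2048, so this is a 4x downscale
    transforms.ToTensor(),
    To3ch(),
    transforms.Normalize(*imagenet_stats)
])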
| https://stackoverflow.com/questions/62232285/ |
Training Generator of Generative Adversarial Network (GAN) in PyTorch | I am working on implementing a Generative Adversarial Network (GAN) in PyTorch 1.5.0.
For computing the loss of the generator, I compute both the negative probabilities that the discriminator mis-classifies an all-real minibatch and an all-(generator-generated-)fake minibatch. Then, I back-propagate both parts sequentially and finally apply the step function.
Calculating and back-propagating the part of the loss which is a function of the mis-classifications of the generated fake data seems straightforward, since during back-propagation of that loss term, the backward path leads through the generator which has produced the fake data in the first place.
However, classification of all-real-data minibatches does not involve passing data through the generator. Therefore, I was wondering whether the following code snippet would still calculate gradients for the generator or whether it would not calculate any gradients at all (since the backward path does not lead through the generator and the discriminator is in eval-mode while updating the generator)?
# Update generator #
net.generator.train()
net.discriminator.eval()
net.generator.zero_grad()
# All-real minibatch
x_real = get_all_real_minibatch()
y_true = torch.full((batch_size,), label_fake).long() # Pretend true targets were fake
y_pred = net.discriminator(x_real) # Produces softmax probability distribution over (0=label_fake,1=label_real)
loss_real = NLLLoss(torch.log(y_pred), y_true)
loss_real.backward()
optimizer_generator.step()
If this doesn’t work as intended, how could I make it work? Thanks in advance!
| No gradients are propagated to the generator, as no calculation was performed with any of the generator's parameters. The discriminator being in eval mode would not prevent the gradients from propagating to the generator, albeit they would be slightly different if you are using layers that behave differently in eval mode compared to train mode, such as dropout.
The misclassification of real images is not part of training the generator, because it doesn't gain anything from this information. Conceptually, what should the generator learn from the fact that the discriminator failed to correctly classify a real image? The sole task of the generator is to create a fake image such that the discriminator thinks it's real, therefore the only relevant information for the generator is whether the discriminator was able to identify the fake image. If the discriminator was indeed able to identify the fake image, the generator needs to adjust itself to create a more convincing fake.
Of course it's not a binary case, but the generator always tries to improve the fake image such that the discriminator is even more convinced that it was a real image. The generator's goal is not to make the discriminator be doubtful (probability of 0.5 that it's real or fake), but that the discriminator is fully convinced that it's real, even though it's fake. That's why they are adversarial, not cooperative.
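For contrast, a minimal sketch of the usual generator update, following the style of your snippet (latent_dim and label_real are assumed names, not from your code); here the backward path does run through the generator, because the discriminator's input is the generator's output:
net.generator.zero_grad()
z = torch.randn(batch_size, latent_dim)                   # latent_dim is an assumption
x_fake = net.generator(z)
y_pred = net.discriminator(x_fake)                        # no .detach() here, keep the graph back to the generator
y_target = torch.full((batch_size,), label_real).long()   # the generator wants these classified as real
loss_g = NLLLoss(torch.log(y_pred), y_target)
loss_g.backward()
optimizer_generator.step()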
| https://stackoverflow.com/questions/62234151/ |
huggingface transformers bert model without classification layer | I want to do a joint-embedding from vgg16 and bert for classification.
The thing with huggingface transformers bert is that BertForSequenceClassification has a classification layer whose output dimension is num_labels.
But, I want the output from BertPooler (768 dimensions) which I will use as a text-embedding for an extended model.
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
This gives the following model:
BertForSequenceClassification(
...
...
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=2, bias=True)
)
How can I get rid of the classifier layer?
| from transformers import BertModel
model = BertModel.from_pretrained('bert-base-uncased')
Output
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
Check out the BertModel definition here.
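To actually pull out the 768-dimensional text embedding from that model, a sketch could look like the following (the exact return format depends on your transformers version; in the 2.x releases the forward pass returns a tuple whose second element is the pooled output):
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = tokenizer.encode("a caption to embed", return_tensors='pt')
last_hidden_state, pooler_output = model(input_ids)[:2]
print(pooler_output.shape)   # torch.Size([1, 768])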
| https://stackoverflow.com/questions/62235153/ |
Fixing incorrect dimensions in PyTorch neural network | I am trying to train my neural network, which is written in PyTorch, but I got the following traceback because of incorrect dimensions:
Traceback (most recent call last):
File "plot_parametric_pytorch.py", line 139, in <module>
ops = opfun(X_train[smpl])
File "plot_parametric_pytorch.py", line 92, in <lambda>
opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))
File "/mnt_home/klee/LBSBGenGapSharpnessResearch/deepnet.py", line 77, in forward
x = self.features(x)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/pooling.py", line 141, in forward
self.return_indices)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/_jit_internal.py", line 209, in fn
return if_false(*args, **kwargs)
File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/functional.py", line 539, in _max_pool2d
input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small
This is all when trying to run a forward pass.
I'm pretty sure this is a small bug, but I myself am new to writing PyTorch code so I am not sure if I know where it is. For reference, when I checked the dimensions of the Keras version of this model using model.summary(), the final dimensions before flattening and adding dense layers (which I think should happen in self.classifier in PyTorch, although I am not sure) were 512 x 1 x 1.
This is my model in PyTorch:
class VGG(nn.Module):
def __init__(self, num_classes=10):
super(VGG, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Dropout(0.3),
nn.Conv2d(64, 64, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 128, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(128, 128, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(128, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(256, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(256, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(256, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(512, 512, bias=False),
nn.Dropout(0.5),
nn.BatchNorm1d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(512, num_classes)
)
def forward(self, x):
x = self.features(x)
x = x.view(-1, 512)
x = self.classifier(x)
return F.log_softmax(x)
def cifar10_deep(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 10)
return VGG(num_classes)
def cifar100_deep(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 100)
return VGG(num_classes)
And I am trying to run the following code:
cudnn.benchmark = True
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32')
X_train = np.transpose(X_train, axes=(0, 3, 1, 2))
X_test = X_test.astype('float32')
X_test = np.transpose(X_test, axes=(0, 3, 1, 2))
X_train /= 255
X_test /= 255
device = torch.device('cuda:0')
# This is where you can load any model of your choice.
# I stole PyTorch Vision's VGG network and modified it to work on CIFAR-10.
# You can take this line out and add any other network and the code
# should run just fine.
model = cifar_shallow.cifar10_shallow()
#model.to(device)
# Forward pass
opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))
# Forward pass through the network given the input
predsfun = lambda op: np.argmax(op.data.numpy(), 1)
# Do the forward pass, then compute the accuracy
accfun = lambda op, y: np.mean(np.equal(predsfun(op), y.squeeze()))*100
# Initial point
x0 = deepcopy(model.state_dict())
# Number of epochs to train for
# Choose a large value since LB training needs higher values
# Changed from 150 to 30
nb_epochs = 30
batch_range = [25, 40, 50, 64, 80, 128, 256, 512, 625, 1024, 1250, 1750, 2048, 2500, 3125, 4096, 4500, 5000]
# parametric plot (i.e., don't train the network if set to True)
hotstart = False
if not hotstart:
for batch_size in batch_range:
optimizer = torch.optim.Adam(model.parameters())
model.load_state_dict(x0)
#model.to(device)
average_loss_over_epoch = '-'
print('Optimizing the network with batch size %d' % batch_size)
np.random.seed(1337) #So that both networks see same sequence of batches
for e in range(nb_epochs):
model.eval()
print('Epoch:', e, ' of ', nb_epochs, 'Average loss:', average_loss_over_epoch)
average_loss_over_epoch = 0
# Checkpoint the model every epoch
torch.save(model.state_dict(), "./models/ShallowNetCIFAR10BatchSize" + str(batch_size) + ".pth")
array = np.random.permutation(range(X_train.shape[0]))
slices = X_train.shape[0] // batch_size
beginning = 0
end = 1
# Training loop!
for _ in range(slices):
start_index = batch_size * beginning
end_index = batch_size * end
smpl = array[start_index:end_index]
model.train()
optimizer.zero_grad()
ops = opfun(X_train[smpl]) <<----- error in this line
tgts = Variable(torch.from_numpy(y_train[smpl]).long().squeeze())
loss_fn = F.nll_loss(ops, tgts)
average_loss_over_epoch += loss_fn.data.numpy() / (X_train.shape[0] // batch_size)
loss_fn.backward()
optimizer.step()
beginning += 1
end += 1
I am wondering where in my model I went wrong. I was writing the PyTorch version of the following Keras model. Any help in fixing the small bug would be appreciated!
def deepnet(nb_classes):
global img_size
model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=img_size))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(128, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(256, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu')); model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same'))
model.add(BatchNormalization(axis=1))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))
model.add(Flatten()); model.add(Dropout(0.5))
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu')); model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
return model
Please let me know if there is an issue with the way I converted the neural network model from Keras to PyTorch. From what I understand, padding should always equal 1 in PyTorch because of the padding='same' setting in Keras.
| The first convolution doesn't use padding.
nn.Conv2d(3, 64, kernel_size=3, bias=False)
Therefore the spatial dimensions will be reduced by 2. In the case of CIFAR the input has size [batch_size, 3, 32, 32] and the output would be [batch_size, 64, 30, 30]. For all other convolutions the spatial dimensions are unchanged, but the max pooling will halve them (integer division). Since you have 5 max pooling layers, the height/width change as follows:
30 -> 15 -> 7 -> 3 -> 1 -> 0 (error)
In the Keras version you are using padding in the max pooling layers as well, which is presumably only applied if the input is not strictly divisible by 2. If you wanted to replicate that behaviour in PyTorch you would have to set the padding of the max pooling layers manually for the ones that receive an input with an odd height/width.
I don't think that using padding in max pooling with a kernel size of 2 is beneficial, especially as you are using ReLU before them, meaning that the padded max pooling just preserves the border values (it's a different story for bigger kernel sizes).
The simplest solution is to use padding in the first convolution, such that the spatial dimensions are unchanged:
nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False)
Another option would be to remove the last max pooling layer, since the height/width are already 1, but that also means that the last three convolutions are applied to only one value, since the input sizes would be [batch_size, 512, 1, 1], which kind of defeats the purpose of using a convolution.
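A quick way to sanity-check the shapes (a throwaway snippet, not part of the training code, assuming the VGG class above with the padding change applied) is to push a dummy CIFAR-sized batch through the feature extractor:
model = VGG()
dummy = torch.randn(2, 3, 32, 32)
print(model.features(dummy).shape)   # expected: torch.Size([2, 512, 1, 1])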
| https://stackoverflow.com/questions/62237372/ |
PyTorch: Index high dimensional tensor with two dimensional tensor | Suppose I have the following tensors:
N = 2
k = 3
d = 2
L = torch.arange(N * k * d * d).view(N, k, d, d)
L
tensor([[[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]],
[[ 8, 9],
[10, 11]]],
[[[12, 13],
[14, 15]],
[[16, 17],
[18, 19]],
[[20, 21],
[22, 23]]]])
index = torch.Tensor([0,1,0,0]).view(N,-1)
index
tensor([[0., 1.],
[0., 0.]])
I now would like to use the index tensor to pick out the corresponding matrices on the second dimension, i.e. I would like to get something like:
tensor([[[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]]],
[[[12, 13],
[14, 15]],
[[12, 13],
[14, 15]]]])
Any idea how I could achieve this?
Thank you so much!
| Tensors can be indexed with multiple tensors specified across different dimensions (tuples of tensors), where the i-th element of each tensor are combined to create a tuple of indices, i.e. data[indices_dim0, indices_dim1] results in indexing data[indices_dim0[0], indices_dim1[0]], data[indices_dim0[1], indices_dim1[1]] and so on. They must have the same length len(indices_dim0) == len(indices_dim1).
Let's use the flat version of index (before you applied the view). Each element needs to be matched to the appropriate batch index, which would be [0, 0, 1, 1]. Also index needs to have type torch.long, because floats cannot be used as indices. torch.tensor should be preferred for creating tensors with existing data, since torch.Tensor is an alias for the default tensor type (torch.FloatTensor), whereas torch.tensor automatically uses the data type that represents the given values, but also supports the dtype argument to set the type manually, and is generally more versatile.
# Type torch.long is inferred
index = torch.tensor([0, 1, 0, 0])
# Same, but explicitly setting the type
index = torch.tensor([0, 1, 0, 0], dtype=torch.long)
batch_index = torch.tensor([0, 0, 1, 1])
L[batch_index, index]
# => tensor([[[ 0, 1],
# [ 2, 3]],
#
# [[ 4, 5],
# [ 6, 7]],
#
# [[12, 13],
# [14, 15]],
#
# [[12, 13],
# [14, 15]]])
The indices are not limited to 1D tensors, but they all need to have the same size and each element is used as one index, for example with 2D tensors the indexing happens as data[indices_dim0[i][j], indices_dim1[i][j]]
With 2D tensors it happens to be much simpler to create the batch indices without having to do it manually.
index = torch.tensor([0, 1, 0, 0]).view(N, -1)
# => tensor([[0, 1],
# [0, 0]])
# Every batch gets its index and is repeated across dim=1
batch_index = torch.arange(N).view(N, 1).expand_as(index)
# => tensor([[0, 0],
# [1, 1]])
L[batch_index, index]
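Since index tensors broadcast against each other, the batch index can also be built inline, so a compact variant of the same lookup would be:
L[torch.arange(N).unsqueeze(1), index]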
| https://stackoverflow.com/questions/62241956/ |
How to create a Pytorch network with mixed categorical and continuous matrix input | I'm creating a network that will take a matrix of continuous values along with some categorical input represented as vectors of all the classes.
Now, I'm also looking to extract features from the matrix with convolution. But this would not be possible if I reduce the matrix to dimension 1 and concatenate it with the class vectors.
Is there a way to concat it together as a single input? Or do I have to create two separate input layers and then somehow join them after the convolution? If it's the latter, what function am I looking for?
| The most common approach to create continuous values from categorical data is nn.Embedding. It creates a learnable vector representation of the available classes, such that two similar classes (in a specific context) are closer to each other than two dissimilar classes.
When you have a vector of classes with size [v], the embedding would create a tensor of size [v, embedding_size], where each class is represented by a vector of length embedding_size.
num_classes = 4
embedding_size = 10
embedding = nn.Embedding(num_classes, embedding_size)
class_vector = torch.tensor([1, 0, 3, 3, 2])
embedded_classes = embedding(class_vector)
embedded_classes.size() # => torch.Size([5, 10])
How you combine them with your continuous matrix depends on your particular use case. If you just want a 1D vector you can flatten and concatenate them. On the other hand, if the matrix has meaningful dimensions that you want to keep, you should decide which dimension makes sense to concatenate on and adapt the embedding_size such that they can be concatenated.
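As a sketch of what keeping the matrix's spatial structure could look like (all layer sizes here are made-up placeholders): run the continuous matrix through a small convolutional stack, flatten the result, embed the class vector, and concatenate both before the final layers.
import torch
import torch.nn as nn

class MixedInputNet(nn.Module):
    def __init__(self, num_classes=4, embedding_size=10, num_outputs=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),   # makes the flattened size independent of the matrix size
        )
        self.embedding = nn.Embedding(num_classes, embedding_size)
        self.fc = nn.Linear(8 * 4 * 4 + embedding_size, num_outputs)

    def forward(self, matrix, classes):
        # matrix: [batch, 1, H, W] continuous values, classes: [batch] class indices
        conv_features = self.conv(matrix).flatten(1)       # [batch, 128]
        embedded = self.embedding(classes)                 # [batch, embedding_size]
        combined = torch.cat([conv_features, embedded], dim=1)
        return self.fc(combined)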
| https://stackoverflow.com/questions/62242396/ |
Pytorch transform tensor to one hot | What is the easiest way to transform a tensor of shape (batch_size, height, width) filled with n values into a tensor of shape (batch_size, n, height, width)?
I created the solution below, but it looks like there is an easier and faster way to do this
def batch_tensor_to_onehot(tnsr, classes):
tnsr = tnsr.unsqueeze(1)
res = []
for cls in range(classes):
res.append((tnsr == cls).long())
return torch.cat(res, dim=1)
| You can use torch.nn.functional.one_hot.
For your case:
a = torch.nn.functional.one_hot(tnsr, num_classes=classes)
out = a.permute(0, 3, 1, 2)
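A quick shape check of that approach (note that one_hot expects integer labels of type torch.long):
import torch
tnsr = torch.randint(0, 3, (2, 4, 5))               # (batch_size, height, width) filled with n=3 values
a = torch.nn.functional.one_hot(tnsr, num_classes=3)
out = a.permute(0, 3, 1, 2)
print(out.shape)                                     # torch.Size([2, 3, 4, 5])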
| https://stackoverflow.com/questions/62245173/ |
torch.rfft - fft-based convolution creating different output than spatial convolution | I implemented FFT-based convolution in Pytorch and compared the result with spatial convolution via the conv2d() function. The convolution filter used is an average filter. The conv2d() function produced a smoothed output due to the average filtering, as expected, but the FFT-based convolution returned a more blurry output.
I have attached the code and outputs here -
spatial convolution -
from PIL import Image, ImageOps
import torch
from matplotlib import pyplot as plt
from torchvision.transforms import ToTensor
import torch.nn.functional as F
import numpy as np
im = Image.open("/kaggle/input/tiger.jpg")
im = im.resize((256,256))
gray_im = im.convert('L')
gray_im = ToTensor()(gray_im)
gray_im = gray_im.squeeze()
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
conv_gray_im = gray_im.unsqueeze(0).unsqueeze(0)
conv_fil = fil.unsqueeze(0).unsqueeze(0)
conv_op = F.conv2d(conv_gray_im,conv_fil)
conv_op = conv_op.squeeze()
plt.figure()
plt.imshow(conv_op, cmap='gray')
FFT-based convolution -
def fftshift(image):
sh = image.shape
x = np.arange(0, sh[2], 1)
y = np.arange(0, sh[3], 1)
xm, ym = np.meshgrid(x,y)
shifter = (-1)**(xm + ym)
shifter = torch.from_numpy(shifter)
return image*shifter
shift_im = fftshift(conv_gray_im)
padded_fil = F.pad(conv_fil, (0, gray_im.shape[0]-fil.shape[0], 0, gray_im.shape[1]-fil.shape[1]))
shift_fil = fftshift(padded_fil)
fft_shift_im = torch.rfft(shift_im, 2, onesided=False)
fft_shift_fil = torch.rfft(shift_fil, 2, onesided=False)
shift_prod = fft_shift_im*fft_shift_fil
shift_fft_conv = fftshift(torch.irfft(shift_prod, 2, onesided=False))
fft_op = shift_fft_conv.squeeze()
plt.figure('shifted fft')
plt.imshow(fft_op, cmap='gray')
original image -
spatial convolution output -
fft-based convolution output -
Could someone kindly explain the issue?
| The main problem with your code is that Torch doesn't do complex numbers, the output of its FFT is a 3D array, with the 3rd dimension having two values, one for the real component and one for the imaginary. Consequently, the multiplication does not do a complex multiplication.
There currently is no complex multiplication defined in Torch (see this issue), we'll have to define our own.
A minor issue, but also important if you want to compare the two convolution operations, is the following:
The FFT takes the origin of its input in the first element (top-left pixel for an image). To avoid a shifted output, you need to generate a padded kernel where the origin of the kernel is the top-left pixel. This is quite tricky, actually...
Your current code:
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
conv_fil = fil.unsqueeze(0).unsqueeze(0)
padded_fil = F.pad(conv_fil, (0, gray_im.shape[0]-fil.shape[0], 0, gray_im.shape[1]-fil.shape[1]))
generates a padded kernel where the origin is in pixel (1,1), rather than (0,0). It needs to be shifted by one pixel in each direction. NumPy has a function roll that is useful for this, I don't know the Torch equivalent (I'm not at all familiar with Torch). This should work:
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
padded_fil = fil.numpy()  # keep the kernel 2D so the 2D padding and 2D FFT below line up
padded_fil = np.pad(padded_fil, ((0, gray_im.shape[0]-fil.shape[0]), (0, gray_im.shape[1]-fil.shape[1])))
padded_fil = np.roll(padded_fil, -1, axis=(0, 1))
padded_fil = torch.from_numpy(padded_fil)
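(As a side note, if your PyTorch version has torch.roll, a pure-PyTorch sketch of the same padding-and-shift could be:)
padded_fil = F.pad(fil, (0, gray_im.shape[1]-fil.shape[1], 0, gray_im.shape[0]-fil.shape[0]))
padded_fil = torch.roll(padded_fil, shifts=(-1, -1), dims=(0, 1))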
Finally, your fftshift function, applied to the spatial-domain image, causes the frequency-domain image (the result of the FFT applied to the image) to be shifted such that the origin is in the middle of the image, rather than the top-left. This shift is useful when looking at the output of the FFT, but is pointless when computing the convolution.
Putting these things together, the convolution is now:
def complex_multiplication(t1, t2):
real1, imag1 = t1[:,:,0], t1[:,:,1]
real2, imag2 = t2[:,:,0], t2[:,:,1]
return torch.stack([real1 * real2 - imag1 * imag2, real1 * imag2 + imag1 * real2], dim = -1)
fft_im = torch.rfft(gray_im, 2, onesided=False)
fft_fil = torch.rfft(padded_fil, 2, onesided=False)
fft_conv = torch.irfft(complex_multiplication(fft_im, fft_fil), 2, onesided=False)
Note that you can do one-sided FFTs to save a bit of computation time:
fft_im = torch.rfft(gray_im, 2, onesided=True)
fft_fil = torch.rfft(padded_fil, 2, onesided=True)
fft_conv = torch.irfft(complex_multiplication(fft_im, fft_fil), 2, onesided=True, signal_sizes=gray_im.shape)
Here the frequency domain is about half the size as in the full FFT, but it is only redundant parts that are left out. The result of the convolution is unchanged.
| https://stackoverflow.com/questions/62246089/ |
How to initialize weights in a pytorch model | I've got a fairly straightforward problem here.
I've just finished re-configuring a network by replacing nn.Upsample with the upConv sequential container shown in the code below. I've verified that everything is lined up by running summary(UNetPP, (3, 128, 128)) which runs with no issue.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
class blockUNetPP(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels):
super().__init__()
self.relu = nn.LeakyReLU(0.2, inplace=True)
self.conv1 = nn.Conv2d(in_channels, middle_channels, 3, padding=1)
self.bn1 = nn.BatchNorm2d(middle_channels)
self.conv2 = nn.Conv2d(middle_channels, out_channels, 3, padding=1)
self.bn2 = nn.BatchNorm2d(out_channels)
def forward(self, x):
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
return out
class upConv(nn.Module):
def __init__(self, in_ch, out_ch):
super().__init__()
self.upc = nn.Sequential(
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
nn.Conv2d(in_ch, out_ch*2, 3, stride=1, padding=1),
nn.BatchNorm2d(out_ch*2),
nn.ReLU(inplace=True)
)
def forward(self, x):
out = self.upc(x)
return out
My issue is that when I try to start training the model I get the following error:
Traceback (most recent call last):
File "runTrain.py", line 90, in <module>
netG.apply(weights_init)
File "C:\Users\Anaconda3\envs\CFD\lib\site-packages\torch\nn\modules\module.py", line 289, in apply
module.apply(fn)
File "C:\Users\Anaconda3\envs\CFD\lib\site-packages\torch\nn\modules\module.py", line 290, in apply
fn(self)
File "D:\Thesis Models\Deep_learning_models\UNet\train\NetC.py", line 8, in weights_init
m.weight.data.normal_(0.0, 0.02)
File "C:\Users\Anaconda3\envs\CFD\lib\site-packages\torch\nn\modules\module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'upConv' object has no attribute 'weight'
I've looked up solutions which suggest looping over container modules, but I'm already doing this with weights_init(m). Could someone explain what's wrong with my current setup?
| You are deciding how to initialise the weight by checking that the class name includes Conv with classname.find('Conv'). Your class has the name upConv, which includes Conv, therefore you try to initialise its attribute .weight, but that doesn't exist.
Either rename your class or make the condition more strict, such as classname.find('Conv2d'). The strictest approach would be to check whether it's an instance of nn.Conv2d, instead of looking at the name of the class.
def weights_init(m):
if isinstance(m, nn.Conv2d):
m.weight.data.normal_(0.0, 0.02)
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
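Applying it then stays the same as in your traceback, e.g.:
netG.apply(weights_init)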
| https://stackoverflow.com/questions/62246656/ |
Pytorch Faster R-CNN size mismatch errors in testing | Hi there!
When running test_net.py in pytorch-1.0 Faster R-CNN and demo.py on the coco dataset with faster_rcnn_1_10_9771.pth (the pretrained resnet101 model on the coco dataset provided by jwyang), I encounter the same error below:
Called with args:
Namespace(batch_size=1, cfg_file='cfgs/res101.yml', checkepoch=10, checkpoint=9771, checksession=1, class_agnostic=True, cuda=True, dataset='coco', image_dir='/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/images', load_dir='/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/models', mGPUs=True, net='res101', parallel_type=0, set_cfgs=None, vis=True, webcam_num=-1)
/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/lib/model/utils/config.py:374: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml_cfg = edict(yaml.load(f))
Using config:
{'ANCHOR_RATIOS': [0.5, 1, 2],
'ANCHOR_SCALES': [4, 8, 16, 32],
'CROP_RESIZE_WITH_MAX_POOL': False,
'CUDA': False,
'DATA_DIR': '/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/data',
'DEDUP_BOXES': 0.0625,
'EPS': 1e-14,
'EXP_DIR': 'res101',
'FEAT_STRIDE': [16],
'GPU_ID': 0,
'MATLAB': 'matlab',
'MAX_NUM_GT_BOXES': 20,
'MOBILENET': {'DEPTH_MULTIPLIER': 1.0,
'FIXED_LAYERS': 5,
'REGU_DEPTH': False,
'WEIGHT_DECAY': 4e-05},
'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
'POOLING_MODE': 'align',
'POOLING_SIZE': 7,
'RESNET': {'FIXED_BLOCKS': 1, 'MAX_POOL': False},
'RNG_SEED': 3,
'ROOT_DIR': '/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0',
'TEST': {'BBOX_REG': True,
'HAS_RPN': True,
'MAX_SIZE': 1000,
'MODE': 'nms',
'NMS': 0.3,
'PROPOSAL_METHOD': 'gt',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 300,
'RPN_PRE_NMS_TOP_N': 6000,
'RPN_TOP_N': 5000,
'SCALES': [600],
'SVM': False},
'TRAIN': {'ASPECT_GROUPING': False,
'BATCH_SIZE': 128,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
'BBOX_REG': True,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.0,
'BIAS_DECAY': False,
'BN_TRAIN': False,
'DISPLAY': 20,
'DOUBLE_BIAS': False,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'GAMMA': 0.1,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'LEARNING_RATE': 0.001,
'MAX_SIZE': 1000,
'MOMENTUM': 0.9,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 8,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [600],
'SNAPSHOT_ITERS': 5000,
'SNAPSHOT_KEPT': 3,
'SNAPSHOT_PREFIX': 'res101_faster_rcnn',
'STEPSIZE': [30000],
'SUMMARY_INTERVAL': 180,
'TRIM_HEIGHT': 600,
'TRIM_WIDTH': 600,
'TRUNCATED': False,
'USE_ALL_GT': True,
'USE_FLIPPED': True,
'USE_GT': False,
'WEIGHT_DECAY': 0.0001},
'USE_GPU_NMS': True}
load checkpoint /home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/models/res101/coco/faster_rcnn_1_10_9771.pth
Traceback (most recent call last):
File "/home/ubuntu/users/fasterrcnn/faster-rcnn.pytorch-pytorch-1.0/demo.py", line 205, in <module>
fasterRCNN.load_state_dict(checkpoint['model'])
File "/home/ubuntu/users/anaconda3/envs/fasterrcnn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for resnet:
**size mismatch for RCNN_bbox_pred.weight:** copying a param with shape torch.Size([324, 2048]) from checkpoint, the shape in current model is torch.Size([4, 2048]).
**size mismatch for RCNN_bbox_pred.bias**: copying a param with shape torch.Size([324]) from checkpoint, the shape in current model is torch.Size([4]).
Process finished with exit code 1
And here is my envs:
# Name Version Build Channel
_libgcc_mutex 0.1 main https://mirrors.ustc.edu.cn/anaconda/pkgs/main
blas 1.0 mkl https://mirrors.ustc.edu.cn/anaconda/pkgs/main
bzip2 1.0.8 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
ca-certificates 2020.1.1 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cairo 1.14.12 h8948797_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi 2020.4.5.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
cffi 1.14.0 py36he30daa8_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
cuda100 1.0 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
cycler 0.10.0 py36_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
cython 0.29.17 py36he6710b0_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
dbus 1.13.14 hb2f20db_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
easydict 1.9 py_0 https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
expat 2.2.6 he6710b0_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
faster-rcnn 0.1 dev_0 <develop>
ffmpeg 4.0 hcdf2ecd_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
fontconfig 2.13.0 h9420a91_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
freeglut 3.0.0 hf484d3e_5 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
freetype 2.9.1 h8a8886c_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
glib 2.63.1 h3eb4bd4_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
graphite2 1.3.13 h23475e2_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
gst-plugins-base 1.14.0 hbbd80ab_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
gstreamer 1.14.0 hb31296c_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
harfbuzz 1.8.8 hffaf4a1_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
hdf5 1.10.2 hba1933b_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
icu 58.2 he6710b0_3 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
intel-openmp 2020.1 217 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
jasper 2.0.14 h07fcdf6_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
jpeg 9b h024ee3a_2 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
kiwisolver 1.2.0 py36hfd86e86_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
ld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libedit 3.1.20181209 hc058e9b_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libffi 3.3 he6710b0_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libgcc-ng 9.1.0 hdf63c60_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libgfortran-ng 7.3.0 hdf63c60_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libglu 9.0.0 hf484d3e_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libopencv 3.4.2 hb342d67_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libopus 1.3.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libpng 1.6.37 hbc83047_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libprotobuf 3.11.4 hd408876_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libtiff 4.1.0 h2733197_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libuuid 1.0.3 h1bed415_2 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libvpx 1.7.0 h439df22_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libxcb 1.13 h1bed415_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
libxml2 2.9.9 hea5a465_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
matplotlib 3.1.3 py36_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
matplotlib-base 3.1.3 py36hef1b27d_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
mkl 2020.1 217 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
mkl-service 2.3.0 py36he904b0f_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
mkl_fft 1.0.15 py36ha843d7b_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
mkl_random 1.1.1 py36h0573a6f_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
ncurses 6.2 he6710b0_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
ninja 1.9.0 py36hfd86e86_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy 1.18.1 py36h4f9e942_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
numpy-base 1.18.1 py36hde5b4d6_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
olefile 0.46 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
opencv 3.4.2 py36h6fd60c2_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
openssl 1.1.1g h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pcre 8.43 he6710b0_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
pillow 7.1.2 py36hb39fc2d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pip 20.0.2 py36_3 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
pixman 0.38.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
protobuf 3.11.4 py36he6710b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
py-opencv 3.4.2 py36hb342d67_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pycparser 2.20 py_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
pyparsing 2.4.7 py_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
pyqt 5.9.2 py36h05f1152_2 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
python 3.6.10 h7579374_2 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
python-dateutil 2.8.1 py_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
pytorch 1.0.0 py3.6_cuda10.0.130_cudnn7.4.1_1 [cuda100] https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
pyyaml 5.3.1 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
qt 5.9.7 h5867ecd_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
readline 8.0 h7b6447c_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
scipy 1.2.1 py36h7c811a0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
setuptools 46.4.0 py36_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
sip 4.19.8 py36hf484d3e_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
six 1.14.0 py36_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
sqlite 3.31.1 h62c20be_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
tensorboardx 2.0 py_0 conda-forge
tk 8.6.8 hbc83047_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
torchvision 0.2.0 py36h17b6947_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
tornado 6.0.4 py36h7b6447c_1 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
wheel 0.34.2 py36_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
xz 5.2.5 h7b6447c_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
yaml 0.1.7 had09818_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.11 h7b6447c_3 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
zstd 1.3.7 h0b5b093_0 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
I tried the most useful way described here:
solution1
i.e. setting __C.ANCHOR_SCALES = [4,8,16,32].
Also tried modifying the classes of pascal voc dataset to the classes of coco dataset
coco_classes = np.asarray(["__background__",
"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
"traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep",
"cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
"skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
"tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich",
"orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed",
"dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven",
"toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier",
"toothbrush"])
But it still doesn't work for me.
Anyone help?
| The error says your model's layers don't match the shapes of the pre-trained parameters you are trying to load: the checkpoint's RCNN_bbox_pred layer has 324 outputs (4 box coordinates for each of the 81 COCO classes, i.e. per-class box regression), while your current model only has 4, which is what you get with class_agnostic=True. So the checkpoint was most likely trained without class-agnostic regression.
Check that the model definition and the flags you pass at test time match the ones used to train the .pth file (in particular class_agnostic).
Or post the code of your model and let's see what's going wrong.
| https://stackoverflow.com/questions/62247674/ |
Is this Neural Net example I'm looking at a mistake or am I not understanding backprop? | Is this model using one relu in two places, or are gradients computed by doing a matrix multiplication of layers on both sides of one layer?
In the last layer of this simple neural net (below) during back prop it calculates the gradient for the last layer w2 by doing a matrix multiplication of y prediction - y and h_relu, which I thought was only between layers w1 and w2 not between w2 and y_pred
The line in question is near the bottom. It is grad_w2 = h_relu.t().mm(grad_y_pred).
I am confused because I thought everything was supposed to go in order forward and go in order backwards. Is this relu being used in two places?
Here is an attempt at a visual illustration of the model.
This example is from the Pytorch website. It is the second block of code on the page.
grad_w2 = h_relu.t().mm(grad_y_pred)
import torch
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.mm(w1)
h_relu = h.clamp(min=0)
y_pred = h_relu.mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum().item()
if t % 100 == 99:
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.t().mm(grad_y_pred)
grad_h_relu = grad_y_pred.mm(w2.t())
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0
grad_w1 = x.t().mm(grad_h)
# Update weights using gradient descent
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
I appreciate your patience looking at this and trying to clear this up for me.
If you can try adding another layer of whieghts in the middle with another relu that might help me understand. This is what I was trying to do.
| Consider the following diagram which represents the network in question. The concept of back-propagation is simply a way to quickly and intuitively apply the chain rule on a complex sequence of operations to compute the gradient of an output w.r.t. a tensor. Usually we are interested in computing the gradients of leaf tensors (tensors which are not derived from other tensors) with respect to a loss or objective. All the leaf tensors are represented as circles in the following diagram and the loss is represented by the rectangle with the L label.
Using the backward diagram we can follow the path from L to w1 and w2 in order to determine which partial derivatives we need in order to compute the gradient of L w.r.t. w1 and w2. For simplicity we will assume that all the leaf tensors are scalars so as to avoid getting into the complexities of multiplying vectors and matrices.
Using this approach the gradients of L w.r.t. w1 and w2 are
and
Something to notice is that since w2 is a leaf tensor, we only use dy/dw2 (aka grad_w2) during computation of dL/dw2 since it isn't part of the path from L to w1.
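Since you asked about adding another hidden layer with a second ReLU, here is a sketch in the same manual-backprop style as the tutorial snippet (w3 and the hidden sizes H1, H2 are made up, and w1/w2 would need matching shapes); the pattern of "upstream gradient times local input" simply repeats once more:
# forward: x -> w1 -> relu -> w2 -> relu -> w3 -> y_pred
h1 = x.mm(w1)                          # [N, H1]
h1_relu = h1.clamp(min=0)
h2 = h1_relu.mm(w2)                    # [N, H2]
h2_relu = h2.clamp(min=0)
y_pred = h2_relu.mm(w3)                # [N, D_out]

# backward: walk the same chain in reverse
grad_y_pred = 2.0 * (y_pred - y)
grad_w3 = h2_relu.t().mm(grad_y_pred)  # like grad_w2 in the original: the input to w3 times the upstream grad
grad_h2_relu = grad_y_pred.mm(w3.t())
grad_h2 = grad_h2_relu.clone()
grad_h2[h2 < 0] = 0                    # ReLU only passes gradient where its input was positive
grad_w2 = h1_relu.t().mm(grad_h2)
grad_h1_relu = grad_h2.mm(w2.t())
grad_h1 = grad_h1_relu.clone()
grad_h1[h1 < 0] = 0
grad_w1 = x.t().mm(grad_h1)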
| https://stackoverflow.com/questions/62247728/ |
How to use GPU while training a model? | I am running code to train a resnet model on a kaggle notebook. I have chosen the accelerator as GPU so I haven't made any mistakes there. I am training the model using the following code:
model.cuda()
for epoch in range(10):
model.train(True)
trainloss=0
for x,y in trainloader:
x,y=x.cuda(),y.cuda()
yhat=model(x)
optimizer.zero_grad()
loss=criterion(yhat,y)
loss.backward()
optimizer.step()
trainloss+=loss.item()
print('Epoch {} Loss: {}'.format(epoch,(trainloss/len(trainloader.dataset))))
model.eval()
testcorrect=0
with torch.no_grad():
for test_x,test_y in testloader:
test_x,test_y=test_x.cuda(),test_y.cuda()
yhat=model(test_x)
_,z=yhat.max(1)
testcorrect+=(test_y==z).sum().item()
print('Model Accuracy: ',(testcorrect/len(testloader.dataset)))
Network Code:
model=torchvision.models.resnet18(pretrained=True)
num_ftrs=model.fc.in_features
model.fc=nn.Sequential(nn.Linear(num_ftrs,1000),
nn.ReLU(),
nn.Linear(1000,2)
)
As you can see, I have used the .cuda() function on both my model as well as the tensors (inside the training part as well as the validation part). However, the GPU usage shown for the kaggle notebook is 0% while my CPU usage is up to 99%. Am I missing any code which is required to train the model using the GPU?
| It might be that your model doesn't give the GPU enough work. Try to make your network more GPU-hungry, e.g. introduce a linear layer with a bunch of neurons, etc. to double check that in that case you see increased GPU usage. Also I noticed that the measurement is delayed by a bit, so maybe you give the GPU some work which it can do in a fraction of a second and the GPU usage bar doesn't have a chance to go higher than 0%.
Maybe you could share the actual network you're using?
I can see the GPU usage going to 100% in a Kaggle notebook with a toy example like this (notice the large linear layers here):
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
trainloader = [(torch.Tensor(np.random.randn(1000, 5)), torch.Tensor([1.0] * 1000))] * 1000
model = nn.Sequential(nn.Linear(5, 2500), nn.Linear(2500, 1500), nn.Linear(1500, 1))
model.cuda()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.)
criterion = lambda x,y : ((x-y)**2).mean()
for epoch in range(10):
for x,y in trainloader:
x,y=x.cuda(),y.cuda()
yhat=model(x)
optimizer.zero_grad()
loss=criterion(yhat,y)
loss.backward()
optimizer.step()
print(epoch)
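A quick sanity check that everything actually lives on the GPU (a throwaway snippet, not part of the training loop):
print(torch.cuda.is_available())           # should be True on a GPU notebook
print(next(model.parameters()).is_cuda)    # should be True after model.cuda()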
| https://stackoverflow.com/questions/62248523/ |
Conv3D size doesn’t make sense with NIFTI data? | So I am writing a custom dataset for medical images in .nii (NIFTI1 format), but there is some confusion.
My dataloader returns tensors of shape torch.Size([1, 1, 256, 256, 51]). But NIFTI volumes use anatomical axes (a different coordinate system), so it doesn't seem to make sense to permute the axes, which I normally would do for a volume built from 2D images stored separately on disk with 51 slices (the depth), since Conv3D follows the convention (N, C, D, H, W).
So torch.Size([1, 1, 256, 256, 51]) (ordinarily 51 would be the depth) doesn't follow the convention (N, C, D, H, W); should I still avoid permuting the axes, since the data uses an entirely different coordinate system?
| In PyTorch's 3D convolution layer, the naming of the three dimensions you convolve over is not really important (e.g. the layer doesn't treat depth any differently from height). All the difference comes from the kernel_size argument (and also padding, if you use it). If you permute the dimensions and correspondingly permute the kernel_size parameters, nothing will really change. So you can either permute your input's dimensions using e.g. x.permute(0, 1, 4, 2, 3) or continue using your initial tensor with depth as the last dimension.
Just to clarify - if you wanted to use kernel_size=(2, 10, 10) on your DxHxW image, you can now instead use kernel_size=(10, 10, 2) on your HxWxD image. If you want all your code to explicitly assume that the dimension order is always D, H, W, then you can create a tensor with permuted dimensions using x.permute(0, 1, 4, 2, 3).
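A small sketch of that equivalence (channel counts and kernel sizes are made up):
import torch
import torch.nn as nn

conv_dhw = nn.Conv3d(1, 8, kernel_size=(2, 10, 10))   # expects input laid out as [N, C, D, H, W]
conv_hwd = nn.Conv3d(1, 8, kernel_size=(10, 10, 2))   # same kernel shape, permuted for [N, C, H, W, D]

x = torch.randn(1, 1, 256, 256, 51)                   # H x W x D, as the loader returns it
y_hwd = conv_hwd(x)                                    # works directly on the loader's layout

x_dhw = x.permute(0, 1, 4, 2, 3)                       # -> [1, 1, 51, 256, 256]
y_dhw = conv_dhw(x_dhw)                                # the same kind of convolution on the permuted layout
                                                       # (the two layers here have independent random weights)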
Let me know if I somehow misunderstand the problem you have.
| https://stackoverflow.com/questions/62250184/ |
How to correctly use CTC Loss with GRU in pytorch? | I am trying to create an ASR system and I am still learning, so I am just trying with a simple GRU:
MySpeechRecognition(
(gru): GRU(128, 128, num_layers=5, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.3, inplace=False)
(fc1): Linear(in_features=128, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=28, bias=True)
)
It classifies each output as one of the possible alphabet characters + space + blank.
Then I use CTC Loss Function and Adam optimizer:
lr = 5e-4
criterion = nn.CTCLoss(blank=28, zero_infinity=False)
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
In my training loop (I am only showing the problematic area):
output, h = mynet(specs, h)
print(output.size())
output = F.log_softmax(output, dim=2)
output = output.transpose(0,1)
# calculate the loss and perform backprop
loss = criterion(output, labels, input_lengths, label_lengths)
loss.backward()
I get this error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-133-5e47e7b03a46> in <module>
42 output = output.transpose(0,1)
43 # calculate the loss and perform backprop
---> 44 loss = criterion(output, labels, input_lengths, label_lengths)
45 loss.backward()
46 # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, log_probs, targets, input_lengths, target_lengths)
1309 def forward(self, log_probs, targets, input_lengths, target_lengths):
1310 return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, self.blank, self.reduction,
-> 1311 self.zero_infinity)
1312
1313 # TODO: L1HingeEmbeddingCriterion
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, reduction, zero_infinity)
2050 """
2051 return torch.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, _Reduction.get_enum(reduction),
-> 2052 zero_infinity)
2053
2054
RuntimeError: blank must be in label range
I am not sure why I am getting this error. I tried changing to
labels.float()
Thanks.
| Your model predicts 28 classes, therefore the output of the model has size [batch_size, seq_len, 28] (or [seq_len, batch_size, 28] for the log probabilities that are given to the CTC loss). In the nn.CTCLoss you set blank=28, which means that the blank label is the class with index 28. To get the log probabilities for the blank label you would index it as output[:, :, 28], but that doesn't work, because that index is out of range, as the valid indices are 0 to 27.
The last class in your output is at index 27, hence it should be blank=27:
criterion = nn.CTCLoss(blank=27, zero_infinity=False)
| https://stackoverflow.com/questions/62251289/ |
Detectron2 Panoptic FPN Model Partial Execution - TypeError: 'NoneType' object is not iterable | I am trying to extract the pre-output feature map for the panoptic output of the Detectron2 ResNet50-based FPN model.
Hence, in order to get partial model outputs, I am following the official Detectron2 Modeling Documentation to Partially Execute Models.
Please find the code below:
# Setting the model Configurations
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
"COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.yaml")
)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-PanopticSegmentation/panoptic_fpn_R_50_1x.yaml"
)
# Build the model from config
model = build_model(cfg)
# Loading an image just for testing
im = cv2.imread("./detectron/input.jpg")
im = torch.from_numpy(im).cuda().permute(2, 0, 1).unsqueeze(0).float()
# Extracting Features - This part works fine
features = model.backbone(im)
# Extracting the proposals - Throws Error
proposals = model.proposal_generator(im, features)
Please find the error thrown by the above step shown below:
TypeError Traceback (most recent call last)
<ipython-input-17-dd53471cf2d2> in <module>
----> 1 proposals = model.proposal_generator(im, features)
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/media/Data/Documents/Python-Codes/Freelance/detectron2/detectron2/modeling/proposal_generator/rpn.py in forward(self, images, features, gt_instances)
409
410 if self.training:
--> 411 gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances)
412 losses = self.losses(
413 anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes
~/miniconda3/envs/pytorch/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
/media/Data/Documents/Python-Codes/Freelance/detectron2/detectron2/modeling/proposal_generator/rpn.py in label_and_sample_anchors(self, anchors, gt_instances)
272 anchors = Boxes.cat(anchors)
273
--> 274 gt_boxes = [x.gt_boxes for x in gt_instances]
275 image_sizes = [x.image_size for x in gt_instances]
276 del gt_instances
TypeError: 'NoneType' object is not iterable
Please let me know what I am doing wrong and how to fix it. An explanation of why this error comes up would also be very helpful.
Please let me know if any other information is required.
| With a bit more digging, I solved the issue. There were a couple of problems in the above code:
I did not set the model to eval mode first - model.eval(). The model needs to be set to eval first.
The model.proposal_generator() expects inputs in the form of an ImageList object, details regarding which can be found here.
Performing the above two steps solved the issue.
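As a rough sketch of those two steps (assuming the model, im, and imports from the question; ImageList lives in detectron2.structures, and in a full pipeline the model's preprocess_image step would also normalise the image):
from detectron2.structures import ImageList

model.eval()  # avoid the training-only branch that expects gt_instances
with torch.no_grad():
    # wrap the [C, H, W] tensor(s) in an ImageList, padded to the backbone's size divisibility
    images = ImageList.from_tensors([im.squeeze(0)], model.backbone.size_divisibility)
    features = model.backbone(images.tensor)
    proposals, _ = model.proposal_generator(images, features)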
| https://stackoverflow.com/questions/62253755/ |
Calculating negative ELBO | I am going through the tutorial on deep Markov models, where they are trying to learn the polyphonic dataset. The link to the tutorial is:
https://pyro.ai/examples/dmm.html
This model parameterises the transitions and emissions using neural networks, and for the variational inference part they use an RNN to map the observables 'x' to the latent space. In order to ensure that the model is learning something, they try to maximise the ELBO, or equivalently minimise the negative ELBO. They refer to the negative ELBO as the NLL. So far I understand what they are doing. However, the next step confuses me. Once they have the NLL, they divide it by the sum of the sequence lengths.
times = [time.time()]
for epoch in range(args.num_epochs):
# accumulator for our estimate of the negative log likelihood
# (or rather -elbo) for this epoch
epoch_nll = 0.0
# prepare mini-batch subsampling indices for this epoch
shuffled_indices = np.arange(N_train_data)
np.random.shuffle(shuffled_indices)
# process each mini-batch; this is where we take gradient steps
for which_mini_batch in range(N_mini_batches):
epoch_nll += process_minibatch(epoch, which_mini_batch, shuffled_indices)
# report training diagnostics
times.append(time.time())
epoch_time = times[-1] - times[-2]
log("[training epoch %04d] %.4f \t\t\t\t(dt = %.3f sec)" %
(epoch, epoch_nll / N_train_time_slices, epoch_time))
And I don't quite understand why they are doing that. Can someone explain? Are they averaging here? Insights would be appreciated.
| In the tutorial, the optimisation process drives the loss down, and at the end they want to compare it against reference [1] cited in the tutorial.
"Finally we report some diagnostic info. Note that we normalize the loss by the total number of time slices in the training set (this allows us to compare to reference [1])."
This is from the tutorial that you have provided.
Basically, the loss is accumulated over all the mini-batches, and it is then normalised so that the reported value is the loss per time slice over the entire training set, rather than a number that depends on the total sequence length.
When we run the code, we can see this overall loss after every epoch in the diagnostic report produced by the logging.
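A tiny sketch of the normalisation itself (the sequence lengths and the accumulated NLL below are made-up numbers; the variable names mirror the tutorial):
import numpy as np

training_seq_lengths = np.array([129, 201, 88])              # length of each training sequence (hypothetical)
N_train_time_slices = float(np.sum(training_seq_lengths))    # total number of time slices in the training set
epoch_nll = 25000.0                                          # accumulated -ELBO over all mini-batches (hypothetical)
print(epoch_nll / N_train_time_slices)                       # the per-time-slice value reported in the log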
| https://stackoverflow.com/questions/62255080/ |
How to perform Multi output regression using RoBERTa? | I have a problem statement where I want to predict multiple continuous outputs using a text input. I tried using 'robertaforsequenceclassification' from HuggingFace library. But the documentation states that when the number of outputs in the final layer is more than 1, a cross entropy loss is used automatically as mentioned here: https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification.
But I want to use an RMSE loss in a regression setting with two classes in the final layer. How would one go about modifying it?
| BertForSequenceClassification is a small wrapper that wraps the BERTModel.
It calls the model, takes the pooled output (the second member of the output tuple), and applies a classifier over it. The code is here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1168
The simplest solution is writing your own simple wrapper class (based on the BertForSequenceClassification class) that does the regression with the loss you like.
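A minimal sketch of such a wrapper (the class name, the two-output head, the dropout, and the use of the pooled output are my assumptions, not part of the library):
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import RobertaModel

class RobertaForMultiOutputRegression(nn.Module):
    def __init__(self, num_outputs=2, pretrained_name="roberta-base"):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained(pretrained_name)
        self.dropout = nn.Dropout(0.1)
        self.regressor = nn.Linear(self.roberta.config.hidden_size, num_outputs)

    def forward(self, input_ids, attention_mask=None, targets=None):
        outputs = self.roberta(input_ids, attention_mask=attention_mask)
        pooled = outputs[1]                            # pooled output, as in BertForSequenceClassification
        preds = self.regressor(self.dropout(pooled))
        if targets is None:
            return preds
        loss = torch.sqrt(F.mse_loss(preds, targets))  # RMSE instead of cross entropy
        return loss, preds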
| https://stackoverflow.com/questions/62255856/ |
Why does a single Conv2d with 10x10x3 take up 850mb of gpu | In Pytorch I am optimizing a model. If I run the following code, nvidia-smi shows that I am using 850MiB / 7979MiB of memory on my gpu. Why would this be the case?
with torch.no_grad():
A = nn.Conv2d(10,10,3).cuda()
I imagine there is some overhead or a default allocation size specified somewhere, but I could not find any documentation on it. I do recall that TensorFlow had a setting to limit the amount of memory allocated.
Related Git Issue
| The convolution does not occupy that much memory. You can verify this with torch.cuda.memory_allocated, which shows the memory that is occupied by all tensors in bytes:
torch.cuda.memory_allocated() # => 0
A = nn.Conv2d(10,10,3).cuda()
torch.cuda.memory_allocated() # => 4608
The convolution only uses 4608 bytes.
nvidia-smi shows higher memory usage for two separate reasons.
Caching Memory Allocator
PyTorch uses a caching memory allocator, meaning that it holds onto more memory than necessary to avoid device synchronisations.
From PyTorch CUDA Semantics - Memory Management:
PyTorch uses a caching memory allocator to speed up memory allocations. This allows fast memory deallocation without device synchronizations. However, the unused memory managed by the allocator will still show as if used in nvidia-smi. You can use memory_allocated() and max_memory_allocated() to monitor memory occupied by tensors, and use memory_reserved() and max_memory_reserved() to monitor the total amount of memory managed by the caching allocator.
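As a rough illustration of that difference (exact numbers will vary by GPU and PyTorch version):
import torch
import torch.nn as nn

A = nn.Conv2d(10, 10, 3).cuda()
print(torch.cuda.memory_allocated())   # bytes actually occupied by tensors, e.g. 4608
print(torch.cuda.memory_reserved())    # bytes held by the caching allocator, typically much larger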
CUDA Context
When CUDA is first initialised, it creates a context that manages the control of the device. Most notably, the context contains the code of all the different CUDA kernels, of which PyTorch has a large number. The size of the context also varies across different GPU architectures. Some details are discussed in Issue #20532 - Couple hundred MB are taken just by initializing cuda.
The memory you are observing is almost exclusively attributed to the CUDA context.
| https://stackoverflow.com/questions/62257967/ |
Shape of target and predictions tensors in PyTorch loss functions | I am confused with the input shapes for tensors in nn.CrossEntropyLoss.
I am trying to implement a simple autoencoder for text sequences. The core of my problem can be illustrated by the following code
predictions = torch.rand(2, 3, 4)
target = torch.rand(2, 3)
print(predictions.shape)
print(target.shape)
nn.CrossEntropyLoss(predictions.transpose(1, 2), target)
In my case predictions has the shape (time_step, batch_size, vocabulary_size) while target has the shape (time_step, batch_size). Next I am transposing the predictions as per description which says that the second dimension of predictions should be the number of classes - vocabulary_size in my case. The code returns an error RuntimeError: bool value of Tensor with more than one value is ambiguous. Could someone please enlighten me how to use the damn thing? Thank you in advance!
| You are not calling the loss function, but you are building it. The signature of the nn.CrossEntropyLoss constructor is:
nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
You are setting the predictions as the weight and the target as size_average,
where weight is an optional rescaling of the classes and size_average is deprecated, but expects a boolean. The target is a tensor of size [2, 3], which cannot be converted to a boolean.
You need to create the loss function first; since you don't use any of the optional parameters of the constructor, you don't need to specify any of them.
# Create the loss function
cross_entropy = nn.CrossEntropyLoss()
# Call it to calculate the loss for your data
loss = cross_entropy(predictions.transpose(1, 2), target)
Alternatively, you can directly use the functional version nn.functional.cross_entropy:
import torch.nn.functional as F
loss = F.cross_entropy(predictions.transpose(1, 2), target)
The advantage of the class version, compared to the functional version, is that you only need to specify the extra parameters once (such as the weight) instead of having to supply them manually each time.
Regarding the dimensions of the tensors, the batch size must be the first dimension, because the losses are averaged per element in the batch, so you have a tensor of losses with size [batch_size]. If you used reduction="none", you would get back these losses per element in the batch, but by default (reduction="mean") the mean of these losses is returned. That result would be different if the mean were taken across time steps rather than across the batch.
Lastly, the targets need to be the class indices, which means they need to have type torch.long, not torch.float. For this randomly chosen example, you could create random classes with torch.randint.
predictions = torch.rand(2, 3, 4)
target = torch.randint(4, (2, 3))
# Reorder the dimensions
# From: [time_step, batch_size, vocabulary_size]
# To: [batch_size, vocabulary_size, time_step]
predictions = predictions.permute(1, 2, 0)
# From: [time_step, batch_size]
# To: [batch_size, time_step]
target = target.transpose(0, 1)
F.cross_entropy(predictions, target)
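Building on the snippet above (same predictions and target tensors assumed), this is what reduction="none" gives back:
per_element = F.cross_entropy(predictions, target, reduction="none")
print(per_element.shape)   # torch.Size([3, 2]): one loss per (batch element, time step)
print(per_element.mean())  # identical to the default reduction="mean" result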
| https://stackoverflow.com/questions/62258765/ |
What are saved in optimizer's state_dict? what "state","param_groups" stands for? | When we use Adam optimizer, if we want to continue train a network from a pretrained model, we not only should load "model.state_dict", but also "optimizer.state_dict". And, if we modified our network's structure, we should also modify saved optimizer's state_dict to make our loading successful.
But I don't understand some params in the saved "optimizer.state_dict", like optim_dict["state"] (dict_keys(['step', 'exp_avg', 'exp_avg_sq', 'max_exp_avg_sq'])) and optim_dict['param_groups'][0]['params']. There are many numbers like these:
b['optimizer_state_dict']['state'].keys()
Out[71]: dict_keys([140623218628000, 140623218628072, 140623218628216, 140623218628360, 140623218628720, 140623218628792, 140623218628936, 140623218629080, 140623218629656, 140623218629728, 140623218629872, 140623218630016, 140623218630376, 140623218630448, 140623218716744, 140623218716816, 140623218717392, 140623218717464, 140623218717608, 140623218717752, 140623218718112, 140623218718184, 140623218718328, 140623218718472, 140623218719048, 140623218719120, 140623218719264, 140623218719408, 140623218719768, 140623218719840, 140623218719984, 140623218720128, 140623218720704, 140623209943112, 140623209943256, 140623209943400, 140623209943760, 140623209943832, 140623209943976, 140623209944120, 140623209944696, 140623209944768, 140623209944912, 140623209945056, 140623209945416, 140623209945488, 140623209945632, 140623209945776, 140623209946352, 140623209946424, 140623209946568, 140623209946712, 140623209947072, 140623210041416, 140623210041560, 140623210041704, 140623244033768, 140623244033840, 140623244033696, 140623244033912, 140623244033984, 140623244070984, 140623244071056, 140623244071128, 140623429501576, 140623244071200, 140623244071272, 140623244071344, 140623244071416, 140623244071488, 140623244071560, 140623244071632, 140623244071848, 140623244071920, 140623244072064, 140623244072208, 140623244072424, 140623244072496, 140623244072640, 140623244072784, 140623244073216, 140623244073288, 140623244073432, 140623244073576, 140623244073792, 140623244073864, 140623244074008, 140623244074152, 140623244074584, 140623244074656, 140623244074800, 140623244074944, 140623218540760, 140623218540832, 140623218540976, 140623218541120, 140623218541552, 140623218541624, 140623218541768, 140623218541912, 140623218542128, 140623218542200, 140623218542344, 140623218542488, 140623218542920, 140623218542992, 140623218543136, 140623218543280, 140623218543496, 140623218543568, 140623218543712, 140623218543856, 140623218544288, 140623218544360, 140623218544504, 140623218626632, 140623218626992, 140623218627064, 140623218627208, 140623218627352, 140623218627784, 140623218629440, 140623218717176, 140623218718832, 140623218720488, 140623209944480, 140623209946136, 140623210043000])
In [44]: b['optimizer_state_dict']['state'][140623218628072].keys()
Out[44]: dict_keys(['step', 'exp_avg', 'exp_avg_sq', 'max_exp_avg_sq'])
In [45]: b['optimizer_state_dict']['state'][140623218628072]['exp_avg'].shape
Out[45]: torch.Size([480])
| In contrast to model's state_dict, which saves learnable parameters, the optimizer's state_dict contains information about the optimizer’s state (parameters to be optimized), as well as the hyperparameters used.
All optimizers in PyTorch need to inherit from the base class torch.optim.Optimizer. It requires two entries:
params (iterable) – an iterable of torch.Tensors or dicts. Specifies what Tensors should be optimized.
defaults (dict): a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
In addition to that, optimizers also support specifying per-parameter options.
To do this, instead of passing an iterable of Tensors, pass in an iterable of dicts. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it.
Consider an example,
optim.SGD([
{'params': model.base.parameters()},
{'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
Here, we have provided a) the params, b) the default hyperparameters lr and momentum, and c) a parameter group. In this case, the model.base’s parameters will use the default learning rate of 1e-2, model.classifier’s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.
The step (optimizer.step()) performs a single optimization step (parameter update), which changes the state of the optimizer.
Now, coming to optimizer's state_dict, it returns the state of the optimizer as a dict. It contains two entries:
state - a dict holding current optimization state.
param_groups - a dict containing all parameter groups (as discussed above)
Some of the entries in state are specific to the optimizer used, e.g. for Adam:
exp_avg: exponential moving average of gradient values
exp_avg_sq: exponential moving average of squared gradient values
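A small self-contained sketch of what this looks like (exact keys can differ slightly between PyTorch versions; recent versions also key the state by integer indices rather than the id-like numbers shown in the question):
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)

# take one optimization step so the per-parameter state gets populated
model(torch.randn(8, 4)).sum().backward()
optimizer.step()

sd = optimizer.state_dict()
print(sd.keys())                        # dict_keys(['state', 'param_groups'])
print(sd['param_groups'][0]['lr'])      # 0.001, alongside betas, eps, weight_decay, amsgrad, params
first = sd['param_groups'][0]['params'][0]
print(sd['state'][first].keys())        # step, exp_avg, exp_avg_sq, max_exp_avg_sq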
| https://stackoverflow.com/questions/62260985/ |
What happens when we call cpu().data.numpy() on a PyTorch tensor? | I am working on a project and need to pass the data in loss tensor to a plotting library.
What happens when I perform this call -> loss.cpu().data.numpy()
Is there a risk of detaching the tensor from the computation graph?
| .cpu() copies the tensor to the CPU, but if it is already on the CPU nothing changes.
.numpy() creates a NumPy array from the tensor. The tensor and the array share the underlying memory, therefore if the NumPy array is modified in-place, the changes will be reflected in the original tensor. If you plan to make in-place modifications to the NumPy array, you should generally create a copy of it. In the case where loss was on the GPU, loss.cpu() already creates a copy, hence the in-place modifications would only affect the intermediate CPU tensor, which you aren't using.
Is there a risk of detaching the tensor from the computation graph?
No, the original tensor loss is not affected by this in regards to the computational graph.
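A quick sketch to see this in action (using .detach(), which is the recommended modern replacement for .data):
import torch

weight = torch.randn(3, requires_grad=True)
loss = (weight ** 2).sum()

arr = loss.detach().cpu().numpy()   # same effect as loss.cpu().data.numpy()
print(type(arr), arr.shape)         # <class 'numpy.ndarray'> ()

loss.backward()                     # still works: the computational graph was not affected
print(weight.grad)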
| https://stackoverflow.com/questions/62261793/ |