instruction | input | output | source
---|---|---|---|
Pytorch LSTM grad only on last output | I'm working with sequences of different lengths, but I only want to compute gradients based on the output produced at the end of each sequence.
The samples are ordered so that they are decreasing in length and they are zero-padded. For 5 1D samples it looks like this (omitting width dimension for visibility):
array([[5, 7, 7, 4, 5, 8, 6, 9, 7, 9],
[6, 4, 2, 2, 6, 5, 4, 2, 2, 0],
[4, 6, 2, 4, 5, 1, 3, 1, 0, 0],
[8, 8, 3, 7, 7, 7, 9, 0, 0, 0],
[3, 2, 7, 5, 7, 0, 0, 0, 0, 0]])
For the LSTM I'm using nn.utils.rnn.pack_padded_sequence with the individual sequence lengths:
x = nn.utils.rnn.pack_padded_sequence(x, [10, 9, 8, 7, 5], batch_first=True)
The initialization of LSTM in the Model constructor:
self.lstm = nn.LSTM(width, n_hidden, 2)
Then I call the LSTM and unpack the values:
x, _ = self.lstm(x)
x, _ = nn.utils.rnn.pad_packed_sequence(x, batch_first=True)  # returns (tensor, lengths)
Then I'm applying a fully connected layer and a softmax
x = x.contiguous()
x = x.view(-1, n_hidden)
x = self.linear(x)
x = x.reshape(batch_size, n_labels, 10) # 10 is the sample height
return F.softmax(x, dim=1)
This gives me an output of shape batch x n_labels x height (5x12x10).
For each sample, I would only want to use a single score, the last output, of shape batch x n_labels (5x12). My question is: how can I achieve this?
One idea is to apply tanh on the last hidden layer returned from the model but I'm not quite sure if that would give the same results. Is it possible to efficiently extract the output computed at the end of the sequence eg using the same lengths sequence used for pack_padded_sequence?
| As Neaabfi answered, hidden[-1] is correct. To be more specific to your question, as the docs say:
output, (h_n, c_n) = self.lstm(x_pack) # batch_first = True
# h_n is a tensor of shape (num_layers * num_directions, batch, hidden_size)
In your case, you have a stack of 2 LSTM layers with only forward direction, then:
h_n shape is (num_layers, batch, hidden_size)
You probably want the hidden state h_n of the last layer; in that case, here is what you should do:
output, (h_n, c_n) = self.lstm(x_pack)
h = h_n[-1] # h of shape (batch, hidden_size)
y = self.linear(h)
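If you would rather extract the last valid output from the padded output sequence itself, using the lengths, a hedged sketch continuing from the code above (out_lens is the lengths tensor pad_packed_sequence returns):
out, out_lens = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
idx = (out_lens - 1).to(out.device).view(-1, 1, 1).expand(-1, 1, out.size(2))
last_out = out.gather(1, idx).squeeze(1)  # (batch, hidden_size); equals h_n[-1] here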
You can also wrap any recurrent layer (LSTM, RNN or GRU) into a DynamicRNN helper, which can perform recurrent computations on sequences of varying lengths without any concern for the ordering of the lengths; a sketch follows below.
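A minimal sketch of such a wrapper — an illustration, not the original DynamicRNN code — assuming batch_first=True inputs, a per-sample lengths tensor, and a reasonably recent PyTorch:
import torch
import torch.nn as nn

class DynamicRNN(nn.Module):
    def __init__(self, rnn):
        super().__init__()
        self.rnn = rnn  # any nn.LSTM, nn.GRU or nn.RNN built with batch_first=True

    def forward(self, x, lengths):
        # sort the batch by decreasing length, as pack_padded_sequence requires
        sorted_lengths, sort_idx = lengths.sort(descending=True)
        packed = nn.utils.rnn.pack_padded_sequence(
            x[sort_idx], sorted_lengths.cpu(), batch_first=True)
        packed_out, hidden = self.rnn(packed)
        out, _ = nn.utils.rnn.pad_packed_sequence(packed_out, batch_first=True)
        # restore the original batch order
        _, unsort_idx = sort_idx.sort()
        out = out[unsort_idx]
        if isinstance(hidden, tuple):           # LSTM: (h_n, c_n), batch at dim 1
            hidden = tuple(h[:, unsort_idx] for h in hidden)
        else:                                   # GRU / RNN: h_n
            hidden = hidden[:, unsort_idx]
        return out, hidden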
| https://stackoverflow.com/questions/55907234/ |
Register_hook with respect to a subtensor of a tensor | Assuming that we would like to modify gradients only of a part of the variable values, is it possible to register_hook in pytorch only to a subtensor (of a tensor that is a pytorch variable)?
| We can make a function that modifies part of the gradient. Note that register_hook passes the gradient (not the module) to the hook, and it is cleanest to return a modified copy:
def gradi(grad):
    # zero the gradient of the second element and return the modified copy
    grad = grad.clone()
    grad[1] = 0
    return grad

h = net.param_name.register_hook(gradi)
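A minimal, self-contained demonstration of the idea (a hypothetical toy example):
import torch

w = torch.randn(4, requires_grad=True)
mask = torch.tensor([1., 0., 1., 1.])
h = w.register_hook(lambda g: g * mask)  # zero the gradient flowing to w[1]
loss = (w * w).sum()
loss.backward()
print(w.grad)  # element 1 is zero; the others equal 2 * w
h.remove()     # remove the hook when it is no longer needed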
| https://stackoverflow.com/questions/55911016/ |
The train loss and test loss are the same high in CNN Pytorch(FashionMNIST) | The problem is that the training loss and test loss stay the same (and high), and neither the loss nor the accuracy changes. What's wrong with my CNN structure and training process?
Training results:
Epoch: 1/30.. Training Loss: 2.306.. Test Loss: 2.306.. Test Accuracy: 0.100
Epoch: 2/30.. Training Loss: 2.306.. Test Loss: 2.306.. Test Accuracy: 0.100
Class code:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
self.fc2 = nn.Linear(in_features=120, out_features=60)
self.out = nn.Linear(in_features=60, out_features=10)
#the output will be 0~9 (10)
Below is my CNN and training process:
def forward(self, t):
# implement the forward pass
# (1)input layer
t = t
# (2) hidden conv layer
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# (3) hidden conv layer
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# (4) hidden linear layer
t = t.reshape(-1, 12 * 4 * 4)
t = self.fc1(t)
t = F.relu(t)
# (5) hidden linear layer
t = self.fc2(t)
t = F.relu(t)
# (6) output layer
t = self.out(t)
#t = F.softmax(t, dim=1)
return t
epoch = 30
train_losses, test_losses = [], []
for e in range(epoch):
train_loss = 0
test_loss = 0
accuracy = 0
for images, labels in train_loader:
optimizer.zero_grad()
op = model(images) #output
loss = criterion(op, labels)
train_loss += loss.item()
loss.backward()
optimizer.step()
else:
with torch.no_grad():
model.eval()
for images,labels in testloader:
log_ps = model(images)
prob = torch.exp(log_ps)
top_probs, top_classes = prob.topk(1, dim=1)
equals = labels == top_classes.view(labels.shape)
accuracy += equals.type(torch.FloatTensor).mean()
test_loss += criterion(log_ps, labels)
model.train()
print("Epoch: {}/{}.. ".format(e+1, epoch),
"Training Loss: {:.3f}.. ".format(train_loss/len(train_loader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
train_losses.append(train_loss/len(train_loader))
test_losses.append(test_loss/len(testloader))
|
Be careful not to confuse nn.CrossEntropyLoss with nn.NLLLoss.
I don't think your code has a problem — I ran it exactly as you defined it. Maybe the issue is in initialization code for other parts that you didn't show us.
log_ps is supposed to hold log_softmax values, but your network only produces logits (since, as you said, you use CrossEntropyLoss). These lines can be modified as below:
log_ps = model(images)
prob = torch.exp(log_ps)
top_probs, top_classes = prob.topk(1, dim=1)
# Change into simple code:
logits = model(images)
output = logits.argmax(dim=-1) # should give you the class of predicted label
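For reference, nn.CrossEntropyLoss applies log_softmax internally followed by nn.NLLLoss, so the two formulations are equivalent — a small sketch illustrating this:
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss_ce = F.cross_entropy(logits, labels)
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(loss_ce, loss_nll))  # True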
I just made a very similar version of your code and it works well:
Define your model
import torch
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
self.fc2 = nn.Linear(in_features=120, out_features=60)
self.out = nn.Linear(in_features=60, out_features=10)
#the output will be 0~9 (10)
def forward(self, t):
# implement the forward pass
# (1)input layer
t = t
# (2) hidden conv layer
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# (3) hidden conv layer
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# (4) hidden linear layer
t = t.reshape(-1, 12 * 4 * 4)
t = self.fc1(t)
t = F.relu(t)
# (5) hidden linear layer
t = self.fc2(t)
t = F.relu(t)
# (6) output layer
t = self.out(t)
return t
Prepare your dataset
import torchvision
import torchvision.transforms as T
train_dataset = torchvision.datasets.FashionMNIST('./data', train=True,
transform=T.ToTensor(),
download=True)
test_dataset = torchvision.datasets.FashionMNIST('./data', train=False,
transform=T.ToTensor(),
download=True)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)
Start training
epoch = 5
model = Model();
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
train_losses, test_losses = [], []
for e in range(epoch):
train_loss = 0
test_loss = 0
accuracy = 0
for images, labels in train_loader:
optimizer.zero_grad()
logits = model(images) #output
loss = criterion(logits, labels)
train_loss += loss.item()
loss.backward()
optimizer.step()
else:
with torch.no_grad():
model.eval()
for images,labels in test_loader:
logits = model(images)
output = logits.argmax(dim=-1)
equals = (labels == output)
accuracy += equals.to(torch.float).mean()
test_loss += criterion(logits, labels)
model.train()
print("Epoch: {}/{}.. ".format(e+1, epoch),
"Training Loss: {:.3f}.. ".format(train_loss/len(train_loader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(test_loader)),
"Test Accuracy: {:.3f}".format(accuracy/len(test_loader)))
train_losses.append(train_loss/len(train_loader))
test_losses.append(test_loss/len(test_loader))
And here is the result, it converges at least:
Epoch: 1/5.. Training Loss: 0.721.. Test Loss: 0.525.. Test Accuracy: 0.809
Epoch: 2/5.. Training Loss: 0.473.. Test Loss: 0.464.. Test Accuracy: 0.829
Epoch: 3/5.. Training Loss: 0.408.. Test Loss: 0.391.. Test Accuracy: 0.858
Epoch: 4/5.. Training Loss: 0.370.. Test Loss: 0.396.. Test Accuracy: 0.858
Epoch: 5/5.. Training Loss: 0.348.. Test Loss: 0.376.. Test Accuracy: 0.858
| https://stackoverflow.com/questions/55912817/ |
PyTorch does not converge when approximating square function with linear model | I'm trying to learn some PyTorch and am referencing this discussion here
The author provides a minimum working piece of code that illustrates how you can use PyTorch to solve for an unknown linear function that has been polluted with random noise.
This code runs fine for me.
However, when I change the function so that t = X^2, the parameters do not seem to converge.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
# Let's make some data for a linear regression.
A = 3.1415926
b = 2.7189351
error = 0.1
N = 100 # number of data points
# Data
X = Variable(torch.randn(N, 1))
# (noisy) Target values that we want to learn.
t = X * X + Variable(torch.randn(N, 1) * error)
# Creating a model, making the optimizer, defining loss
model = nn.Linear(1, 1)
optimizer = optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()
# Run training
niter = 50
for _ in range(0, niter):
optimizer.zero_grad()
predictions = model(X)
loss = loss_fn(predictions, t)
loss.backward()
optimizer.step()
print("-" * 50)
print("error = {}".format(loss.data[0]))
print("learned A = {}".format(list(model.parameters())[0].data[0, 0]))
print("learned b = {}".format(list(model.parameters())[1].data[0]))
When I execute this code, the new A and b parameters are seemingly random, so it does not converge. I think this should converge because you can approximate any function with a slope and offset. My theory is that I'm using PyTorch incorrectly.
Can anyone identify a problem with my t = X * X + Variable(torch.randn(N, 1) * error) line of code?
| You cannot fit a 2nd degree polynomial with a linear function. You cannot expect more than random (since you have random samples from the polynomial).
What you can do is try and have two inputs, x and x^2 and fit from them:
model = nn.Linear(2, 1) # you have 2 inputs now
X_input = torch.cat((X, X**2), dim=1) # have 2 inputs per entry
# ...
predictions = model(X_input) # 2 inputs -> 1 output
loss = loss_fn(predictions, t)
# ...
# learning t = c*x^2 + a*x + b
print("learned a = {}".format(list(model.parameters())[0].data[0, 0]))
print("learned c = {}".format(list(model.parameters())[0].data[0, 1]))
print("learned b = {}".format(list(model.parameters())[1].data[0]))
| https://stackoverflow.com/questions/55912952/ |
PyTorch: _thnn_nll_loss_forward is not implemented for type torch.LongTensor | When trying to create a model using PyTorch, when I am trying to implement the loss function nll_loss, it is throwing the following error
RuntimeError: _thnn_nll_loss_forward is not implemented for type torch.LongTensor
The fit function I have created is:
for epoch in tqdm_notebook(range(1, epochs+1)):
for batch_idx, (data, targets) in enumerate(train_loader):
optimizer.zero_grad()
net.float()
output = net(data)
output_x = output.argmax(dim=2) #to convert (64,50,43) -> (64, 50)
loss = F.nll_loss(output_x, targets)
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train epochs: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx*len(data), len(ds.data),
100.*batch_idx / len(ds), loss.item()
))
Where the output and targets' shape is (64, 50) and the dtypes are torch.int64 for both.
| Look at the description of F.nll_loss. It expects as input not the argmax of the prediction (type torch.long), but the full 64x50x43 prediction tensor (of type torch.float). Note that the prediction you provide to F.nll_loss indeed has one dimension more than the ground-truth targets you provide.
In your case, simply remove the argmax. One caveat: F.nll_loss expects log-probabilities with the class dimension second, so with a (64, 50, 43) output you would likely also permute:
loss = F.nll_loss(output.permute(0, 2, 1), targets)
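A quick shape sanity check of this (a hedged sketch with random data):
import torch
import torch.nn.functional as F

output = torch.randn(64, 50, 43)           # (batch, seq_len, n_classes) raw scores
targets = torch.randint(0, 43, (64, 50))   # (batch, seq_len) class indices
log_probs = F.log_softmax(output, dim=2)   # nll_loss expects log-probabilities
loss = F.nll_loss(log_probs.permute(0, 2, 1), targets)  # class dim must be dim 1
print(loss)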
| https://stackoverflow.com/questions/55914172/ |
Code Optimization: Computation in Torch.Tensor | I am currently implementing a function to compute Custom Cross Entropy Loss.
The definition of the function is given in the following image.
My code is as follows:
output = output.permute(0, 2, 3, 1)
target = target.permute(0, 2, 3, 1)
batch, height, width, channel = output.size()
total_loss = 0.
for b in range(batch): # for each batch
o = output[b]
t = target[b]
loss = 0.
for w in range(width):
for h in range(height): # for every pixel([h,w]) in the image
sid_t = t[h][w][0]
sid_o_candi = o[h][w]
part1 = 0. # to store the first sigma
part2 = 0. # to store the second sigma
for k in range(0, sid_t):
p = torch.sum(sid_o_candi[k:]) # to get Pk(w,h)
part1 += torch.log(p + 1e-12).item()
for k in range(sid_t, intervals):
p = torch.sum(sid_o_candi[k:]) # to get Pk(w,h)
part2 += torch.log(1-p + 1e-12).item()
loss += part1 + part2
loss /= width * height * (-1)
total_loss += loss
total_loss /= batch
return torch.tensor(total_loss, dtype=torch.float32)
I am wondering is there any optimization could be done with these code.
| I'm not sure whether sid_t = t[h][w][0] is the same for every pixel or not. If it is, you can get rid of all the for loops, which would really boost the speed of computing the loss.
Don't use .item(), because it returns a plain Python value and so loses the grad_fn tracking; then you can't use loss.backward() to compute the gradients.
If sid_t = t[h][w][0] is not the same everywhere, here is a modification that gets rid of at least the two innermost for-loops:
batch, height, width, channel = output.size()
total_loss = 0.
for b in range(batch): # for each batch
o = output[b]
t = target[b]
loss = 0.
for w in range(width):
for h in range(height): # for every pixel([h,w]) in the image
sid_t = t[h][w][0]
sid_o_candi = o[h][w]
part1 = 0. # to store the first sigma
part2 = 0. # to store the second sigma
sid1_cumsum = sid_o_candi[:sid_t].flip(dims=(0,)).cumsum(dim=0).flip(dims=(0,))
part1 = torch.sum(torch.log(sid1_cumsum + 1e-12))
sid2_cumsum = sid_o_candi[sid_t:intervals].flip(dims=(0,)).cumsum(dim=0).flip(dims=(0,))
part2 = torch.sum(torch.log(1 - sid2_cumsum + 1e-12))
loss += part1 + part2
loss /= width * height * (-1)
total_loss += loss
total_loss /= batch
return torch.tensor(total_loss, dtype=torch.float32)
How it works:
x = torch.arange(10);
print(x)
x_flip = x.flip(dims=(0,));
print(x_flip)
x_inverse_cumsum = x_flip.cumsum(dim=0).flip(dims=(0,))
print(x_inverse_cumsum)
# output
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
tensor([9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
tensor([45, 45, 44, 42, 39, 35, 30, 24, 17, 9])
Hope it helps.
| https://stackoverflow.com/questions/55917472/ |
Convert integer to pytorch tensor of binary bits | Given an number and an encoding length, how can I convert the number to its binary representation as a tensor?
Eg, given the number 6 and width 8, how can I obtain the tensor:
(0, 0, 0, 0, 0, 1, 1, 0)
|
def binary(x, bits):
mask = 2**torch.arange(bits).to(x.device, x.dtype)
return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()
If you want to reverse the order of bits, use torch.arange(bits - 1, -1, -1) instead.
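Example usage (a sketch; note that the least-significant bit comes first with the mask above):
import torch

def binary(x, bits):
    mask = 2 ** torch.arange(bits).to(x.device, x.dtype)
    return x.unsqueeze(-1).bitwise_and(mask).ne(0).byte()

print(binary(torch.tensor(6), 8))
# tensor([0, 1, 1, 0, 0, 0, 0, 0], dtype=torch.uint8)  -- LSB first

mask_rev = 2 ** torch.arange(7, -1, -1)   # reversed bit order
print(torch.tensor(6).unsqueeze(-1).bitwise_and(mask_rev).ne(0).byte())
# tensor([0, 0, 0, 0, 0, 1, 1, 0], dtype=torch.uint8)  -- matches the question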
| https://stackoverflow.com/questions/55918468/ |
CUDA for pytorch: CUDA C++ stream and state | I am trying to follow this tutorial and make a simple c++ extension with CUDA backend.
My CPU implementation seems to work fine.
I am having trouble finding examples and documentation (it seems like things are constantly changing).
Specifically,
I see pytorch cuda functions getting THCState *state argument - where does this argument come from? How can I get a state for my function as well?
For instance, in cuda implementation of tensor.cat:
void THCTensor_(cat)(THCState *state, THCTensor *result, THCTensor *ta, THCTensor *tb, int dimension)
However, when calling tensor.cat() from Python, one does not provide any state argument; PyTorch provides it "behind the scenes". How does PyTorch provide this information, and how can I get it?
state is then converted to cudaStream_t stream = THCState_getCurrentStream(state);
For some reason, THCState_getCurrentStream is no longer defined? How can I get the stream from my state?
I also tried asking on pytorch forum - so far to no avail.
| It's deprecated (without documentation!)
See here:
https://github.com/pytorch/pytorch/pull/14500
In short: use at::cuda::getCurrentCUDAStream()
| https://stackoverflow.com/questions/55919123/ |
How to realize a polynomial regression in Pytorch / Python | I want my neural network to solve a polynomial regression problem like y=(x*x) + 2x -3.
So right now I created a network with 1 input node, 100 hidden nodes and 1 output node, and gave it many epochs to train on a large amount of data. The problem is that the prediction after roughly 20000 epochs is okayish, but much worse than the linear-regression predictions after training.
import torch
from torch import Tensor
from torch.nn import Linear, MSELoss, functional as F
from torch.optim import SGD, Adam, RMSprop
from torch.autograd import Variable
import numpy as np
# define our data generation function
def data_generator(data_size=1000):
# f(x) = y = x^2 + 4x - 3
inputs = []
labels = []
# loop data_size times to generate the data
for ix in range(data_size):
# generate a random number between 0 and 1000
x = np.random.randint(1000) / 1000
# calculate the y value using the function x^2 + 4x - 3
y = (x * x) + (4 * x) - 3
# append the values to our input and labels lists
inputs.append([x])
labels.append([y])
return inputs, labels
# define the model
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = Linear(1, 100)
self.fc2 = Linear(100, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
model = Net()
# define the loss function
critereon = MSELoss()
# define the optimizer
optimizer = SGD(model.parameters(), lr=0.01)
# define the number of epochs and the data set size
nb_epochs = 20000
data_size = 1000
# create our training loop
for epoch in range(nb_epochs):
X, y = data_generator(data_size)
X = Variable(Tensor(X))
y = Variable(Tensor(y))
epoch_loss = 0;
y_pred = model(X)
loss = critereon(y_pred, y)
epoch_loss = loss.data
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: {} Loss: {}".format(epoch, epoch_loss))
# test the model
model.eval()
test_data = data_generator(1)
prediction = model(Variable(Tensor(test_data[0][0])))
print("Prediction: {}".format(prediction.data[0]))
print("Expected: {}".format(test_data[1][0]))
Is there a way to get much better results? I wondered if I should try to get 3 outputs, call them a, b and c, such that y = a(x*x) + b(x) + c. But I have no idea how to implement that and train my neural network.
| For this problem, it might be much easier if you consider the Net() with 1 linear layer as linear regression with input features [x^2, x].
Generate your data
import torch
from torch import Tensor
from torch.nn import Linear, MSELoss, functional as F
from torch.optim import SGD, Adam, RMSprop
from torch.autograd import Variable
import numpy as np
# define our data generation function
def data_generator(data_size=1000):
# f(x) = y = x^2 + 4x - 3
inputs = []
labels = []
# loop data_size times to generate the data
for ix in range(data_size):
# generate a random number between 0 and 1000
x = np.random.randint(2000) / 1000 # I edited here for you
# calculate the y value using the function x^2 + 4x - 3
y = (x * x) + (4 * x) - 3
# append the values to our input and labels lists
inputs.append([x*x, x])
labels.append([y])
return inputs, labels
Define your model
# define the model
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = Linear(2, 1)
def forward(self, x):
return self.fc1(x)
model = Net()
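Train it with essentially the same loop as in the question (a sketch reusing data_generator, Tensor and the imports above; MSELoss and SGD are assumed, as before):
criterion = MSELoss()
optimizer = SGD(model.parameters(), lr=0.01)

for epoch in range(5000):
    X, y = data_generator(1000)
    X, y = Tensor(X), Tensor(y)
    y_pred = model(X)
    loss = criterion(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 1000 == 0:
        print("Epoch: {} Loss: {}".format(epoch, loss.item()))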
Training it, we get:
Epoch: 0 Loss: 33.75775909423828
Epoch: 1000 Loss: 0.00046704441774636507
Epoch: 2000 Loss: 9.437128483114066e-07
Epoch: 3000 Loss: 2.0870876138445738e-09
Epoch: 4000 Loss: 1.126847400112485e-11
Prediction: 5.355223655700684
Expected: [5.355224999999999]
The coefficients
The coefficients a, b, c you are looking for are actually the weight and bias of the self.fc1:
print('a & b:', model.fc1.weight)
print('c:', model.fc1.bias)
# Output
a & b: Parameter containing:
tensor([[1.0000, 4.0000]], requires_grad=True)
c: Parameter containing:
tensor([-3.0000], requires_grad=True)
In only 5000 epochs, all converges: a -> 1, b -> 4, and c -> -3.
The model is very light-weight, with only 3 parameters instead of:
(1 * 100 + 100) + (100 * 1 + 1) = 301 parameters in the old model
Hope this helps you!
| https://stackoverflow.com/questions/55920015/ |
Spacy similarity warning : "Evaluating Doc.similarity based on empty vectors." | I'm trying to do data augmentation with an FAQ dataset. I replace words, specifically nouns, with their most similar words from WordNet, checking the similarity with spaCy. I use multiple for-loops to go through my dataset.
import spacy
import nltk
from nltk.corpus import wordnet as wn
import pandas as pd
nlp = spacy.load('en_core_web_md')
nltk.download('wordnet')
questions = pd.read_csv("FAQ.csv")
list_questions = []
for question in questions.values:
list_questions.append(nlp(question[0]))
for question in list_questions:
for token in question:
treshold = 0.5
if token.pos_ == 'NOUN':
wordnet_syn = wn.synsets(str(token), pos=wn.NOUN)
for syn in wordnet_syn:
for lemma in syn.lemmas():
similar_word = nlp(lemma.name())
if similar_word.similarity(token) != 1. and similar_word.similarity(token) > treshold:
good_word = similar_word
treshold = token.similarity(similar_word)
However, the following warning is printed several times and I don't understand why:
UserWarning: [W008] Evaluating Doc.similarity based on empty vectors.
It is my similar_word.similarity(token) which creates the problem but I don't understand why.
The form of my list_questions is :
list_questions = [Do you have a paper or other written explanation to introduce your model's details?, Where is the BERT code come from?, How large is a sentence vector?]
I need to check token but also the similar_word in the loop, for example, I still get the error here :
tokens = nlp(u'dog cat unknownword')
similar_word = nlp(u'rabbit')
if(similar_word):
for token in tokens:
if (token):
print(token.text, similar_word.similarity(token))
| You get that error message when similar_word is not a valid spacy document. E.g. this is a minimal reproducible example:
import spacy
nlp = spacy.load('en_core_web_md') # make sure to use larger model!
tokens = nlp(u'dog cat')
#similar_word = nlp(u'rabbit')
similar_word = nlp(u'')
for token in tokens:
print(token.text, similar_word.similarity(token))
If you change the '' to be 'rabbit' it works fine. (Cats are apparently just a fraction more similar to rabbits than dogs are!)
(UPDATE: As you point out, unknown words also trigger the warning; they will be valid spacy objects, but not have any word vector.)
So, one fix would be to check similar_word is valid, including having a valid word vector, before calling similarity():
import spacy
nlp = spacy.load('en_core_web_md') # make sure to use larger model!
tokens = nlp(u'dog cat')
similar_word = nlp(u'')
if(similar_word and similar_word.vector_norm):
for token in tokens:
if(token and token.vector_norm):
print(token.text, similar_word.similarity(token))
Alternative Approach:
You could suppress the particular warning. It is W008. I believe setting an environmental variable SPACY_WARNING_IGNORE=W008 before running your script would do it. (Not tested.)
(See source code)
By the way, similarity() might cause some CPU load, so it is worth storing the result in a variable instead of calculating it three times as you currently do. (Some people might argue that this is premature optimization, but I think it might also make the code more readable.)
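The suggested refactor would look something like this (a sketch of the inner loop):
sim = similar_word.similarity(token)
if sim != 1. and sim > treshold:
    good_word = similar_word
    treshold = sim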
| https://stackoverflow.com/questions/55921104/ |
Pytorch's model can't feed forward a DataLoader dataset, NotImplementedError | So I am experimenting with the PyTorch library to train a CNN. There is nothing wrong with the model (I can feed data forward with no error) and I prepare a custom dataset with the DataLoader class.
This is my code for data prep (I've omitted some irrelevant variable declaration, etc.):
# Initiliaze model
class neural_net_model(nn.Module):
# omitted
...
# Prep the dataset
train_data = torchvision.datasets.ImageFolder(root = TRAIN_DATA_PATH, transform = TRANSFORM_IMG)
train_data_loader = data_utils.DataLoader(train_data, batch_size = BATCH_SIZE, shuffle = True)
test_data = torchvision.datasets.ImageFolder(root = TEST_DATA_PATH, transform = TRANSFORM_IMG)
test_data_loader = data_utils.DataLoader(test_data, batch_size = BATCH_SIZE, shuffle = True)
But, in the training code (which I follow based on various online references), there is an error when I feed forward the model with this instruction:
...
for step, (data, label) in enumerate(train_data_loader):
outputs = neural_net_model(data)
...
Which raise an error:
NotImplementedError Traceback (most recent call last)
<ipython-input-12-690cfa6916ec> in <module>
6
7 # Forward pass
----> 8 outputs = neural_net_model(images)
9 loss = criterion(outputs, labels)
10
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in forward(self, *input)
83 registered hooks while the latter silently ignores them.
84 """
---> 85 raise NotImplementedError
86
87 def register_buffer(self, name, tensor):
NotImplementedError:
I can't find similar problems on the internet, and it seems strange because I've followed the references quite closely and the error is not really well documented (NotImplementedError:).
Do you guys know the cause and solution to this problem?
This is the code for the network
from torch import nn, from_numpy
import torch
import torch.nn.functional as F
class DeXpression(nn.Module):
def __init__(self, ):
super(DeXpression, self).__init__()
# Layer 1
self.convolution1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size = 7, stride = 2, padding = 3)
self.pooling1 = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Layer FeatEx1
self.convolution2a = nn.Conv2d(in_channels = 64, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
self.convolution2b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)
self.pooling2a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
self.convolution2c = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)
self.pooling2b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Layer FeatEx2
self.convolution3a = nn.Conv2d(in_channels = 272, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
self.convolution3b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)
self.pooling3a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
self.convolution3c = nn.Conv2d(in_channels = 272, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)
self.pooling3b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Fully-connected Layer
self.fc1 = nn.Linear(45968, 1024)
self.fc2 = nn.Linear(1024, 64)
self.fc3 = nn.Linear(64, 8)
def net_forward(self, x):
# Layer 1
x = F.relu(self.convolution1(x))
x = F.local_response_norm(self.pooling1(x), size = 2)
y1 = x
y2 = x
# Layer FeatEx1
y1 = F.relu(self.convolution2a(y1))
y1 = F.relu(self.convolution2b(y1))
y2 = self.pooling2a(y2)
y2 = F.relu(self.convolution2c(y2))
x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
x[:, 0:y1.shape[1], :, :] = y1
x[:, y1.shape[1]:, :, :] = y2
x = self.pooling2b(x)
y1 = x
y2 = x
# Layer FeatEx2
y1 = F.relu(self.convolution3a(y1))
y1 = F.relu(self.convolution3b(y1))
y2 = self.pooling3a(y2)
y2 = F.relu(self.convolution3c(y2))
x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
x[:, 0:y1.shape[1], :, :] = y1
x[:, y1.shape[1]:, :, :] = y2
x = self.pooling3b(x)
# Fully-connected layer
x = x.view(-1, x.shape[0] * x.shape[1] * x.shape[2] * x.shape[3])
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.log_softmax(self.fc3(x), dim = None)
return x
| Your network class implements a net_forward method. However, nn.Module expects its derived classes to implement a forward method (without the net_ prefix).
Simply rename net_forward to just forward and your code should be fine.
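Alternatively, if you prefer to keep the original method name, a thin delegating forward also works (a sketch):
class DeXpression(nn.Module):
    # ... __init__ and net_forward exactly as in the question ...
    def forward(self, x):
        return self.net_forward(x)  # nn.Module's __call__ will now find forward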
You can learn more about inheritance and overloaded methods here.
Old Answer:
The code you are running, and the code you post are not the same.
You posted a code:
for step, (data, label) in enumerate(train_data_loader):
neural_net_model(data)
While the code you run (as it appears in the error message posted) is:
# Forward pass
outputs = model(images)
The error you get indicates that the model to which you feed images is treated as a plain nn.Module rather than an actual implementation derived from nn.Module. Therefore, the model you are trying to use has no explicit implementation of the forward method. Make sure you are using the actual model you implemented.
| https://stackoverflow.com/questions/55932767/ |
Difference between logloss in sklearn and BCEloss in Pytorch? | Looking at the documentation for logloss in Sklearn and BCEloss in Pytorch, these should be the same, i.e. just the normal log loss with weights applied. However, they behave differently - both with and without weights applied. Can anyone explain it to me? I could not find the source code for BCEloss (which refers to binary_cross_entropy internally).
input = torch.randn((3, 1), requires_grad=True)
target = torch.ones((3, 1), requires_grad=False)
w = torch.randn((3, 1), requires_grad=False)
# ----- With weights
w = F.sigmoid(w)
criterion_test = nn.BCELoss(weight=w)
print(criterion_test(input=F.sigmoid(input), target=F.sigmoid(target)))
print(log_loss(y_true=target.detach().numpy(),
y_pred=F.sigmoid(input).detach().numpy(), sample_weight=w.detach().numpy().reshape(-1), labels=np.array([0.,1.])))
print("")
print("")
# ----- Without weights
criterion_test = nn.BCELoss()
print(criterion_test(input=F.sigmoid(input),target=F.sigmoid(target)))
print(log_loss(y_true=target.detach().numpy(),
y_pred=F.sigmoid(input).detach().numpy(), labels=np.array([0.,1.])))
| Actually, I found out. It turns out that BCELoss and log_loss behave differently when the weights sum to something other than the number of elements: sklearn's log_loss normalizes by the sum of the sample weights, while BCELoss with the default reduction='mean' divides the weighted sum by the number of elements. Interesting.
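A small numerical check of this explanation (a sketch, computing the per-element weighted loss by hand):
import torch

p = torch.tensor([0.9, 0.2, 0.7])   # predicted probabilities
t = torch.tensor([1., 0., 1.])      # targets
w = torch.tensor([0.5, 1.5, 2.0])   # sample weights
per_elem = -w * (t * p.log() + (1 - t) * (1 - p).log())
print(per_elem.mean())             # what nn.BCELoss(weight=w) returns
print(per_elem.sum() / w.sum())    # what sklearn's log_loss returns with sample_weight=w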
| https://stackoverflow.com/questions/55933305/ |
How to normalize convolutional weights in pytorch? | I have a CNN in pytorch and I need to normalize the convolution weights (filters) with L2 norm in each iteration. What is the most efficient way to do this?
Basically, in my particular experiment I need to replace the filters with their normalized value in the model (during both training and test).
| I am not sure if I have understood your question correctly. But if I were asked to normalize the weights of an NN layer at each iteration, I would do something like the following.
for ite in range(100): # training iteration
# write your code to train your model
# update the parameters using optimizer.step() and then normalize
with torch.no_grad():
model.conv.weight.div_(torch.norm(model.conv.weight, dim=2, keepdim=True))
Here, the model.conv refers to the Convolution layer of the model. Please make sure, you give the dim parameter in torch.norm() function appropriately. I just set it to 2 to give you an example.
For example, if you are using Conv1d, the weight parameter has shape (out_channels, in_channels, kW), so dim=2 normalizes along the kernel dimension.
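For a Conv2d layer, whose weight has shape (out_channels, in_channels, kH, kW), normalizing each filter to unit L2 norm could look like this (a sketch):
with torch.no_grad():
    w = model.conv.weight
    norms = w.view(w.size(0), -1).norm(dim=1, keepdim=True).clamp_min(1e-12)
    w.div_(norms.view(-1, 1, 1, 1))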
| https://stackoverflow.com/questions/55941503/ |
Pytorch: backpropagating from sum of matrix elements to leaf variable | I'm trying to understand backpropagation in pytorch a bit better. I have a code snippet that successfully does backpropagation from the output d to the leaf variable a, but then if I add in a reshape step, the backpropagation no longer gives the input a gradient.
I know reshape is out-of-place, but I'm still not sure how to contextualize this.
Any thoughts?
Thanks.
#Works
a = torch.tensor([1.])
a.requires_grad = True
b = torch.tensor([1.])
c = torch.cat([a,b])
d = torch.sum(c)
d.backward()
print('a gradient is')
print(a.grad) #=> Tensor([1.])
#Doesn't work
a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shape)
b = torch.tensor([1.])
c = torch.cat([a,b])
d = torch.sum(c)
d.backward()
print('a gradient is')
print(a.grad) #=> None
| Edit:
Here is a detailed explanation of what's going on ("this isn't a bug per se, but it is definitely a source of confusion"): https://github.com/pytorch/pytorch/issues/19778
So one solution is to specifically ask to retain grad for now non-leaf a:
a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shape)
a.retain_grad()
b = torch.tensor([1.])
c = torch.cat([a,b])
d = torch.sum(c)
d.backward()
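With retain_grad() in place, the gradient is populated as expected:
print(a.grad)  # tensor([1.])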
Old answer:
If you move a.requires_grad = True after the reshape, it works:
a = torch.tensor([1.])
a = a.reshape(a.shape)
a.requires_grad = True
b = torch.tensor([1.])
c = torch.cat([a,b])
d = torch.sum(c)
d.backward()
Seems like a bug in PyTorch, because after this a.requires_grad is still true.
a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shape)
This seems to be related to the fact that a is no longer a leaf in your "Doesn't work" example, but is still a leaf in the other cases (print a.is_leaf to check).
| https://stackoverflow.com/questions/55942423/ |
PyTorch transformations change dimensions | I do not understand why these PyTorch transformations turn my 100x100x3 picture into a 3x100x3 one.
print("Original shape ", x.shape)
x = transforms.Compose([
transforms.ToPILImage(),
transforms.ToTensor()
])(x)
print("After transformation shape ", x.shape)
outputs
Original shape torch.Size([100, 100, 3])
After transformation shape torch.Size([3, 100, 3])
What is happening?
| According to the docs (https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.ToPILImage), if the input is a torch tensor, the expected shape is C x H x W. So your leading 100 is interpreted as the number of channels; because there is no mode corresponding to 100 channels, it is interpreted as RGB (3 channels).
So you need an input of shape torch.Size([3, 100, 100]) to make it work as you want.
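For example (a sketch), permuting the HWC tensor to CHW before the transform gives the expected result:
x = x.permute(2, 0, 1)  # (100, 100, 3) -> (3, 100, 100)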
| https://stackoverflow.com/questions/55958330/ |
In PyTorch, how do I make certain module `Parameters` static during training? | Context:
In pytorch, any Parameter is a special kind of Tensor. A Parameter is automatically registered with a module's parameters() method when it is assigned as an attribute.
During training, I will pass m.parameters() to the Optimizer instance so they can be updated.
Question: For a built-in PyTorch module, how do I prevent certain parameters from being modified by the optimizer?
s = Sequential(
nn.Linear(2,2),
nn.Linear(2,3), # I want this one's .weight and .bias to be constant
nn.Linear(3,1)
)
Can I make it so they don't appear in s.parameters()?
Can I make the parameters read-only so any attempted changes are ignored?
| Parameters can be made static by setting their attribute requires_grad=False.
In my example case:
params = list(s.parameters()) # .parameters() returns a generator
# Each linear layer has 2 parameters (.weight and .bias),
# Skipping first layer's parameters (indices 0, 1):
params[2].requires_grad = False
params[3].requires_grad = False
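An equivalent, arguably clearer formulation (a sketch): freeze the middle layer directly and pass only trainable parameters to the optimizer:
for p in s[1].parameters():   # the middle nn.Linear(2, 3)
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in s.parameters() if p.requires_grad), lr=0.1)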
When a mix of requires_grad=True and requires_grad=False tensors are used to make a calculation, the result inherits requires_grad=True.
According to the PyTorch autograd mechanics documentation:
If there’s a single input to an operation that requires gradient, its output will also require gradient. Conversely, only if all inputs don’t require gradient, the output also won’t require it. Backward computation is never performed in the subgraphs, where all Tensors didn’t require gradients.
My concern was that if I disabled gradient tracking for the middle layer, the first layer wouldn't receive backpropagated gradients. This was faulty understanding.
Edge case: if I disable gradients for all parameters in a module and try to train, an exception is raised, because there is not a single tensor left to run the backward() pass from.
This edge case is why I was getting errors. I was trying to test requires_grad=False on parameters for module with a single nn.Linear layer. That meant I disabled tracking for all parameters, which caused the optimizer to complain.
| https://stackoverflow.com/questions/55959918/ |
Pytorch: How plot the prediction output from segmentation task when batch size is larger than 1? | I have a question about plotting the output of different batches in a segmentation task.
The snippet below plot the probability of each class and the prediction output.
I am sure the prob plots show one sample of the batch, but I'm not sure about the prediction when I take torch.argmax(outputs, 1). Am I plotting the argmax of one sample or of the whole batch, given that the output of the network has size [10, 4, 256, 256]?
Also, I am wondering how I can plot the predictions for all samples in the batch, given that my batch size is 10.
outputs = model(t_image)
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(nrows=1, ncols=5, sharex=True, sharey=True, figsize=(6,6))
img1 = ax1.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu(), cmap = 'jet')
ax1.set_title("prob class 0")
img2 = ax2.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu(), cmap = 'jet')
ax2.set_title("prob class 1")
img3 = ax3.imshow(torch.exp(outputs[0,2,:,:]).detach().cpu(), cmap = 'jet')
ax3.set_title("prob class 2")
img4 = ax4.imshow(torch.exp(outputs[0,3,:,:]).detach().cpu(), cmap = 'jet')
ax4.set_title("prob class 3")
img5 = ax5.imshow(torch.argmax(outputs, 1).detach().cpu().squeeze(), cmap = 'jet')
ax5.set_title("predicted")
| Not sure about what you are asking. Assuming you are using the NCHW data layout, your output is 10 samples per batch, 4 channels (each channel for a different class), and 256x256 resolution, then the first 4 graphs are plotting the class scores of the four classes.
For the 5th plot, your torch.argmax(outputs, 1).detach().cpu().squeeze() would give you a 10x256x256 image, which is the class prediction results for all 10 images in the batch, and matplotlib cannot properly plot it directly. So you would want to do torch.argmax(outputs[0,:,:,:], 0).detach().cpu().squeeze() which would get you a 256x256 map, which you can plot.
Since the result would range from 0 to 3 which represents the 4 classes, (and may be displayed as a very dim image), usually people would use a palette to color the plots. An example is provided here and looks like the cityscapes_map[p] line in the example.
For plotting all 10, why not write a for loop:
for i in range(outputs.size(0)):
# do whatever you do with outputs[i, ...]
# ...
plt.show()
and go over each result in the batch one by one. There is also the option to have 10 rows in your subplot, if your screen is big enough.
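A concrete sketch of that loop, plotting the predicted class map for every sample in the batch:
preds = torch.argmax(outputs, dim=1).detach().cpu()  # (10, 256, 256)
fig, axes = plt.subplots(2, 5, figsize=(15, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(preds[i], cmap='jet', vmin=0, vmax=3)  # 4 classes -> values 0..3
    ax.set_title("sample %d" % i)
plt.show()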
| https://stackoverflow.com/questions/55967347/ |
translate from pytorch to tensorflow | What do these lines of code in PyTorch do?
normA = A.mul(A).sum(dim=1).sum(dim=1).sqrt()
Y = A.div(normA.view(batchSize, 1, 1).expand_as(A))
Normally there should be a second term, as in:
torch.div(input, value, out=None) → Tensor
| Your question is a little unclear because you didn't mention what the shape of tensor A is or what normA is. But I will guess the following:
A is a tensor of shape (batchSize, X, Y)
normA is a tensor of norms of all batch elements of A and its' shape is (batchSize).
So, you normalize the tensor A with the following statement.
A.div(normA.view(batchSize, 1, 1).expand_as(A))
Here, normA.view(batchSize, 1, 1).expand_as(A) first turns normA into a tensor of shape (batchSize, X, Y), and then A is divided elementwise by the result.
An example (created from my guess):
batchSize = 8
A = torch.randn(batchSize, 5, 5)
normA = A.norm(dim=-1).norm(dim=-1)
print(normA.size()) # torch.Size([8])
normA = normA.view(batchSize, 1, 1).expand_as(A)
print(normA.size()) # torch.Size([8, 5, 5])
A = A.div(normA)
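Since the question asks for a translation, a hedged TensorFlow 2.x equivalent under the same shape assumptions (A of shape (batchSize, X, Y)) might be:
import tensorflow as tf

normA = tf.sqrt(tf.reduce_sum(A * A, axis=[1, 2]))   # (batchSize,)
Y = A / tf.reshape(normA, (-1, 1, 1))                # broadcasting replaces expand_as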
| https://stackoverflow.com/questions/55972397/ |
PyTorch - RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' | I load my previously trained model and want to classify a single (test) image from the disk through this model. All the operations in my model are carried out on my GPU. Hence, I move the numpy array of the test image to GPU by calling cuda() function. When I call the forward() function of my model with the numpy array of the test image, I get the RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'.
Here is the code I use to load the image from disk and call the forward() function:
test_img = imageio.imread('C:\\Users\\talha\\Desktop\\dog.png')
test_img = imresize(test_img, (28, 28))
test_tensor = torch.from_numpy(test_img)
test_tensor = test_tensor.cuda()
test_tensor = test_tensor.type(torch.FloatTensor)
log_results = model.forward(test_tensor)
Software Environment:
torch: 1.0.1
GPU: Nvidia GeForce GTX 1070
OS: Windows 10 64-bit
Python: 3.7.1
| Convert to FloatTensor before sending the tensor to the GPU: torch.FloatTensor is a CPU type, so calling .type(torch.FloatTensor) after .cuda() silently moves the tensor back to the CPU, which is why the model (whose weights live on the GPU) complains.
So, the order of operations will be:
test_tensor = torch.from_numpy(test_img)
# Convert to FloatTensor first
test_tensor = test_tensor.type(torch.FloatTensor)
# Then call cuda() on test_tensor
test_tensor = test_tensor.cuda()
log_results = model.forward(test_tensor)
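Equivalently, you can do the conversion and the device move in one chained call (a sketch):
test_tensor = torch.from_numpy(test_img).float().cuda()
log_results = model(test_tensor)  # calling the model is preferred over model.forward()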
| https://stackoverflow.com/questions/55983122/ |
I am having trouble calculating the accuracy, recall, precision and f1-score for my model | I have got my confusion matrix working correctly, I'm just having some trouble producing the scores. A little help would go a long way. I am currently getting the error "Tensor object is not callable".
def get_confused(model_ft):
nb_classes = 120
from sklearn.metrics import precision_recall_fscore_support as score
confusion_matrix = torch.zeros(nb_classes, nb_classes)
with torch.no_grad():
for i, (inputs, classes) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
classes = classes.to(device)
outputs = model_ft(inputs)
_, preds = torch.max(outputs, 1)
for t, p in zip(classes.view(-1), preds.view(-1)):
confusion_matrix[t.long(), p.long()] += 1
cm = confusion_matrix(classes, preds)
recall = np.diag(cm) / np.sum(cm, axis = 1)
precision = np.diag(cm) / np.sum(cm, axis = 0)
print(confusion_matrix)
print(confusion_matrix.diag()/confusion_matrix.sum(1))
| The problem is with this line:
cm = confusion_matrix(classes, preds)
confusion_matrix is a tensor, and you can't call it like a function — hence "Tensor object is not callable". I am also not sure why you need this line; instead, write cm = confusion_matrix.cpu().numpy() to turn it into a NumPy array, since the rest of your code treats cm as one.
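To answer the full question, here is a hedged sketch computing accuracy, recall, precision and F1 from that confusion matrix (rows are true classes and columns are predictions, matching how you fill it):
cm = confusion_matrix.cpu().numpy()
tp = np.diag(cm)
recall    = tp / np.maximum(cm.sum(axis=1), 1e-12)
precision = tp / np.maximum(cm.sum(axis=0), 1e-12)
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
accuracy = tp.sum() / cm.sum()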
| https://stackoverflow.com/questions/55984768/ |
Pytorch sum tensors doing an operation within each set of numbers | I have the following Pytorch tensor:
V1 = torch.tensor([[2, 4], [6, 4], [5, 3]])
I want to compute the sum of the absolute differences of each pair of numbers, something like the code below:
result.sum(abs(2-4), abs(6-4), abs(5-3))
I can do this using a for statement:
total = 0
for i in range(0, V1.size(0)):
    total = total + torch.abs(V1[i][1] - V1[i][0])
But I want to do it without using a for.
Is there a way to do that?
| You can do
torch.abs(V1[:, 1]- V1[:, 0])
and to sum over it
torch.sum(torch.abs(V1[:, 1]- V1[:, 0]))
| https://stackoverflow.com/questions/55985336/ |
Sample each image from dataset N times in single batch | I'm currently working on task of learning representation (deep embeddings). The dataset I use have only one example image per object. I also use augmentation.
During training, each batch must contain N different augmented versions of a single image from the dataset (dataset[index] always returns a new random transformation).
Is there some standart solution or library with DataLoader for this purpose, that will work with torch.utils.data.distributed.DistributedSampler?
If not, will any DataLoader inherited from torch.utils.data.DataLoader (and calling super().__init__(...)) work in distributed training?
| As far as I know, this is not a standard way of doing things — even if you have only one sample per object, one would still sample images from different objects per batch, and in different epochs the sampled images would be transformed differently.
That said, if you truly want to do what you are doing, why not simply write a wrapper of you dataset?
class Wrapper(Dataset):
    N = 16
    def __getitem__(self, index):
        # note: zero-argument super() does not work inside a list comprehension,
        # and the repetition count must be wrapped in range()
        sample = [super(Wrapper, self).__getitem__(index) for _ in range(self.N)]
        sample = torch.stack(sample, dim=0)
        return sample
Then each of your batches would be B x N x C x H x W, where B is the batch size and N is your repetition count. You can reshape the batch after you get it from the dataloader.
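An alternative sketch using composition instead of inheritance, which works with any existing dataset instance (assuming __getitem__ returns a tensor; adapt if it returns an (image, label) pair):
class RepeatDataset(torch.utils.data.Dataset):
    def __init__(self, dataset, n=16):
        self.dataset = dataset  # a dataset whose __getitem__ applies random transforms
        self.n = n
    def __len__(self):
        return len(self.dataset)
    def __getitem__(self, index):
        # each access re-applies the random augmentation
        return torch.stack([self.dataset[index] for _ in range(self.n)], dim=0)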
| https://stackoverflow.com/questions/55986607/ |
TypeError: argument 0 is not a Variable | I am trying to learn CNN with my own data. The shape of the data is (1224, 15, 23): 1224 is the number of samples, and each sample is (15, 23). The CNN is built with PyTorch.
I think there is no logical error, because conv2d needs a 4-D tensor and I feed it (batch, channel, x, y).
When I build an instance of the Net class, I get this error:
TypeError: argument 0 is not a Variable
I have been using PyTorch for half a year, but this error is new to me and I am still confused.
Here is my code.
class Net(nn.Module):
def __init__(self, n):
super(Net,self).__init__()
self.conv = nn.Sequential(nn.Conv2d(1, 32, kernel_size=3, stride=1),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=3, stride=1),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1), # 64 x 9 x 17
nn.ReLU()
)
conv_out_size = self._get_conv_out(input_shape)
self.fc = nn.Sequential(nn.Linear(64 * 9 * 17, 128),
nn.ReLU(),
nn.Linear(128, n)
)
def _get_conv_out(self, shape):
o = self.conv(torch.zeros(1, *shape))
return int(np.prod(o.size()))
def forward(self, x):
conv_out = self.conv(x).view(x.size()[0], -1)
return self.fc(conv_out)
if __name__=='__main__':
num_epochs = 1
num_classes = 2
input_shape = train_img[0].shape # 1, 15, 23
net = Net(num_classes)
iteration = 51
BATCH_SIZE = 24
LEARNING_RATE = 0.0001
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
loss_list= []
batch_index = 0
# train
for epoch in range(num_epochs):
for i in range(iteration):
input_img = torch.FloatTensor(train_img[batch_index: batch_index + BATCH_SIZE])
print(input_img.size()) # 24, 1, 15, 23
outputs = net(input_img)
loss = criterion(outputs, labels)
loss_list.append(loss.item())
# Backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
And the error message:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-179-0f6bc7588c29> in <module>
4 input_shape = train_img[0].shape # 1, 15, 23
5
----> 6 net = Net(num_classes)
7 iteration = 51
8 BATCH_SIZE = 24
<ipython-input-178-8a68d4a0dc4a> in __init__(self, n)
11 )
12
---> 13 conv_out_size = self._get_conv_out(input_shape)
14 self.fc = nn.Sequential(nn.Linear(64 * 9 * 17, 128),
15 nn.ReLU(),
<ipython-input-178-8a68d4a0dc4a> in _get_conv_out(self, shape)
18
19 def _get_conv_out(self, shape):
---> 20 o = self.conv(torch.zeros(1, *shape))
21 return int(np.prod(o.size()))
22
C:\DTools\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
355 result = self._slow_forward(*input, **kwargs)
356 else:
--> 357 result = self.forward(*input, **kwargs)
358 for hook in self._forward_hooks.values():
359 hook_result = hook(self, input, result)
C:\DTools\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
65 def forward(self, input):
66 for module in self._modules.values():
---> 67 input = module(input)
68 return input
69
C:\DTools\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
355 result = self._slow_forward(*input, **kwargs)
356 else:
--> 357 result = self.forward(*input, **kwargs)
358 for hook in self._forward_hooks.values():
359 hook_result = hook(self, input, result)
C:\DTools\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
280 def forward(self, input):
281 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 282 self.padding, self.dilation, self.groups)
283
284
C:\DTools\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups)
88 _pair(0), groups, torch.backends.cudnn.benchmark,
89 torch.backends.cudnn.deterministic, torch.backends.cudnn.enabled)
---> 90 return f(input, weight, bias)
91
92
TypeError: argument 0 is not a Variable
| Your code actually works for PyTorch >= 0.4.1. I guess your PyTorch version is < 0.4, and in that case you need to pass a Variable in the following line:
o = self.conv(torch.autograd.Variable(torch.zeros(1, *shape)))
In PyTorch >= 0.4, Variable has been merged into Tensor, so a torch.FloatTensor can be passed directly to NN layers.
| https://stackoverflow.com/questions/55990271/ |
pytorch model loading and prediction, AttributeError: 'dict' object has no attribute 'predict' | model = torch.load('/home/ofsdms/san_mrc/checkpoint/best_v1_checkpoint.pt', map_location='cpu')
results, labels = predict_function(model, dev_data, version)
> /home/ofsdms/san_mrc/my_utils/data_utils.py(34)predict_squad()
-> phrase, spans, scores = model.predict(batch)
(Pdb) n
AttributeError: 'dict' object has no attribute 'predict'
How do I load a saved checkpoint of a PyTorch model and use it for prediction? I have the model saved with a .pt extension.
| The checkpoint you save is usually a state_dict: a dictionary containing the values of the trained weights — but not the actual architecture of the net. The actual computational graph/architecture of the net is described by a Python class (derived from nn.Module).
To use a trained model you need:
Instantiate a model from the class implementing the computational graph.
Load the saved state_dict to that instance:
model.load_state_dict(torch.load('/home/ofsdms/san_mrc/checkpoint/best_v1_checkpoint.pt', map_location='cpu'))
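Putting it together (a sketch; SANModel and model_args are hypothetical stand-ins for whatever class and arguments define your architecture, and predict is the project's own method):
model = SANModel(**model_args)   # 1. instantiate the architecture
state = torch.load('/home/ofsdms/san_mrc/checkpoint/best_v1_checkpoint.pt',
                   map_location='cpu')
# if the checkpoint wraps the weights, e.g. {'state_dict': ...}, unwrap it first
model.load_state_dict(state)     # 2. load the trained weights
model.eval()                     # disable dropout / batch-norm updates for inference
results, labels = predict_function(model, dev_data, version)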
| https://stackoverflow.com/questions/56002682/ |
PyTorch: inconsistent pretrained VGG output | When loading a pretrained VGG network with the torchvision.models module and using it to classify an arbitrary RGB image, the network's output differs noticeably from invocation to invocation. Why does this happen? From my understanding no part of the VGG forward pass should be non-deterministic.
Here's an MCVE:
import torch
from torchvision.models import vgg16
vgg = vgg16(pretrained=True)
img = torch.randn(1, 3, 256, 256)
torch.all(torch.eq(vgg(img), vgg(img))) # result is 0, but why?
| vgg16 has a nn.Dropout layer that, during training, randomly drops 50% of its inputs. During test time you should "switch off" this behavior by setting the mode of the net to "eval" mode:
vgg.eval()
torch.all(torch.eq(vgg(img), vgg(img)))
Out[73]: tensor(1, dtype=torch.uint8)
Note that there are other layers with random behavior and different behavior for training and evaluation (e.g., BatchNorm). Therefore, it is important to switch to eval() mode before evaluating a trained model.
| https://stackoverflow.com/questions/56003198/ |
How can I calculate the network gradients w.r.t weights for all inputs in PyTorch? | I'm trying to figure out how I can calculate the gradient of the network for each input. And I'm a bit lost. Essentially, what I would want, is to calculate d self.output/d weight1 and d self.output/d weight2 for all values of input x. So, I would have a matrix of size (1000, 5) for example. Where the 1000 is for the size of the input x, and 5 is the number of weights in the layer.
The example I've included below returns weights as size (1,5). What exactly is being calculated here? Is this d self.output/ d weight1 for 1 input of x, or an average of all inputs?
Secondly, would a matmul of features.grad and weight1.grad be the same as what I'm asking? A matrix of all the gradients of weight1 for all values of x.
class Network(torch.nn.Module):
def __init__(self, iNode, hNode, oNode):
super(Network, self).__init__()
print("Building Model...")
iNode = int(iNode) ; self.iNode = iNode
hNode = int(hNode) ; self.hNode = hNode
oNode = int(oNode) ; self.oNode = oNode
self.fc1 = nn.Linear(iNode, hNode, bias=False)
self.fc2 = nn.Linear(hNode, oNode, bias=False)
def forward(self, x):
self.hidden_probs = self.fc1(x)
self.hidden = self.actFunc1(self.hidden_probs)
self.output_probs = self.fc2(self.hidden)
self.output = self.actFunc2(self.output_probs)
return self.output
def actFunc1(self, x):
return 1.0/(1.0+torch.exp(-x))
def actFunc2(self, x):
return x
def trainData(self, features, labels, epochs, alpha, optimisation, verbose=False):
for epoch in range(0,epochs):
net_pred = self.forward(features)
net_pred.backward(gradient=torch.ones(features.size())) #calc. dout/dw for all w
print(features.grad.size()) #returns (1000,1)
with torch.no_grad():
for name, param in self.named_parameters():
if(param.requires_grad):
param -= alpha*param.grad
for name, param in self.named_parameters():
if(param.requires_grad):
param.grad.zero_()
sys.stdout.write("Epoch: %06i\r" % (epoch))
sys.stdout.flush()
sys.stdout.write("\n")
| I am not sure exactly what you are trying to achieve, because normally you only work with the sum of gradients (d output)/(d parameter) and not with any other gradients in between, as autograd takes care of those — but let me try to answer.
Question 1
The example I've included below returns weights as size (1,5). What exactly is being calculated here? Is this d self.output/ d weight1 for 1 input of x, or an average of all inputs?
You get size (1,5) because training is done in mini batches, meaning the gradients for each data point with respect to the (5) weights are calculated and summed over the mini batch.
According to the docs:
This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self. The attribute will then contain the gradients computed and future calls to backward() will accumulate (add) gradients into it.
If you explicitly want the gradient for each data point, then make your mini-batch size one. Normally we train in mini-batches because updating after each data point can be unstable — imagine jumping in a different direction each time — whereas with a batch this averages out.
On the other extreme, many data sets are simply too large to calculate the gradient in one go.
Question 2
An example might give more insight:
import torch
x = torch.tensor([1.5], requires_grad=True)
a = torch.nn.Parameter(torch.tensor([2.]))
b = torch.nn.Parameter(torch.tensor([10.]))
y = x*a
z = y+0.5*b
z.backward()
print('gradients of a: %0.2f and b: %0.2f' % (a.grad.item(), b.grad.item()))
I start with two parameters, a and b, and calculate z=a*x+0.5*b.
No gradients are calculated yet, pytorch only keeps track of the history of operations, so all .grad attributes are empty.
When z.backward() is called, the gradients of the output with respect to the parameters are calculated, which you can view by calling grad on the parameters.
Updating the parameters can then be done as you are already doing: a -= alpha * a.grad.
| https://stackoverflow.com/questions/56004624/ |
pytorch embedding index out of range | I'm following this tutorial here https://cs230-stanford.github.io/pytorch-nlp.html. In there a neural model is created, using nn.Module, with an embedding layer, which is initialized here
self.embedding = nn.Embedding(params['vocab_size'], params['embedding_dim'])
vocab_size is the total number of training samples, which is 4000. embedding_dim is 50. The relevant piece of the forward method is below
def forward(self, s):
# apply the embedding layer that maps each token to its embedding
s = self.embedding(s) # dim: batch_size x batch_max_len x embedding_dim
I get this exception when passing a batch to the model like so
model(train_batch)
train_batch is a numpy array of dimension batch_size x batch_max_len. Each sample is a sentence, and each sentence is padded so that it has the length of the longest sentence in the batch.
File
"/Users/liam_adams/Documents/cs512/research_project/custom/model.py",
line 34, in forward
s = self.embedding(s) # dim: batch_size x batch_max_len x embedding_dim File
"/Users/liam_adams/Documents/cs512/venv_research/lib/python3.7/site-packages/torch/nn/modules/module.py",
line 493, in call
result = self.forward(*input, **kwargs) File "/Users/liam_adams/Documents/cs512/venv_research/lib/python3.7/site-packages/torch/nn/modules/sparse.py",
line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse) File "/Users/liam_adams/Documents/cs512/venv_research/lib/python3.7/site-packages/torch/nn/functional.py",
line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at
../aten/src/TH/generic/THTensorEvenMoreMath.cpp:193
Is the problem here that the embedding is initialized with different dimensions than those of my batch array? My batch_size will be constant but batch_max_len will change with every batch. This is how it's done in the tutorial.
| Found the answer here https://discuss.pytorch.org/t/embeddings-index-out-of-range-error/12582
I'm converting words to indexes, but I had the indexes based off the total number of words, not vocab_size which is a smaller set of the most frequent words.
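In code, a minimal sketch of the fix (names are illustrative): remap any id that falls outside the vocabulary to a reserved UNK index before the embedding lookup:
import torch
vocab_size = 4000  # number of words actually in the vocab, not the number of samples
unk_idx = 0        # assumed reserved index for out-of-vocabulary words
ids = torch.tensor([[3, 17, 5123], [42, 9999, 8]])  # some ids exceed vocab_size
ids = torch.where(ids < vocab_size, ids, torch.full_like(ids, unk_idx))
emb = torch.nn.Embedding(vocab_size, 50)
out = emb(ids)  # no more "index out of range" error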
| https://stackoverflow.com/questions/56010551/ |
Numpy/Pytorch dtype conversion / compatibility | I'm trying to find some documentation understand how dtypes are combined. For example:
x : np.int32 = ...
y : np.float64 = ...
What is going to be the type of x + y ?
does it depend on the operator (here +) ?
does it depend where it is stored (z = x + y vs z[...] = x + y) ?
I'm looking for the part of the documentation that describes this kind of scenario, but so far I'm empty-handed.
| If data types don't match, then NumPy will upcast the data to the higher-precision data type if possible. And it doesn't depend on the type of (arithmetic) operation that we do, or on the variable we assign to, unless that variable already has some other dtype. Here is a small illustration:
In [14]: x = np.arange(3, dtype=np.int32)
In [15]: y = np.arange(3, dtype=np.float64)
# `+` is equivalent to `numpy.add()`
In [16]: summed = x + y
In [17]: summed.dtype
Out[17]: dtype('float64')
In [18]: np.add(x, y).dtype
Out[18]: dtype('float64')
If you don't explicitly assign a datatype, then the result will be upcast to the higher of the input data types. For example, numpy.add() accepts a dtype kwarg where you can specify the datatype of the resultant array.
And, one can check whether two different datatypes can be safely casted according to casting rules by using numpy.can_cast()
For the sake of completeness, I add the following numpy.can_cast() matrix:
>>> def print_casting_matrix(ntypes):
... ntypes_ex = ["X"] + ntypes.split()
... print("".join(ntypes_ex))
... for row in ntypes:
... print(row, sep='\t', end=''),
... for col in ntypes:
... print(int(np.can_cast(row, col)), sep='\t', end='')
... print()
>>> print_casting_matrix(np.typecodes['All'])
And the output would be the following matrix which shows what dtypes can be safely casted (indicated by 1) and what dtypes cannot be casted (indicated by 0), following the order of from casting (along axis-0) to to casting (axis-1) :
# to casting -----> ----->
X?bhilqpBHILQPefdgFDGSUVOMm
?11111111111111111111111101
b01111110000001111111111101
h00111110000000111111111101
i00011110000000011011111101
l00001110000000011011111101
q00001110000000011011111101
p00001110000000011011111101
B00111111111111111111111101
H00011110111110111111111101
I00001110011110011011111101
L00000000001110011011111101
Q00000000001110011011111101
P00000000001110011011111101
e00000000000001111111111100
f00000000000000111111111100
d00000000000000011011111100
g00000000000000001001111100
F00000000000000000111111100
D00000000000000000011111100
G00000000000000000001111100
S00000000000000000000111100
U00000000000000000000011100
V00000000000000000000001100
O00000000000000000000001100
M00000000000000000000001110
m00000000000000000000001101
Since the characters are cryptic, we can use the following for better understanding of the above casting matrix:
In [74]: for char in np.typecodes['All']:
...: print(char, " --> ", np.typeDict[char])
And the output would be:
? --> <class 'numpy.bool_'>
b --> <class 'numpy.int8'>
h --> <class 'numpy.int16'>
i --> <class 'numpy.int32'>
l --> <class 'numpy.int64'>
q --> <class 'numpy.int64'>
p --> <class 'numpy.int64'>
B --> <class 'numpy.uint8'>
H --> <class 'numpy.uint16'>
I --> <class 'numpy.uint32'>
L --> <class 'numpy.uint64'>
Q --> <class 'numpy.uint64'>
P --> <class 'numpy.uint64'>
e --> <class 'numpy.float16'>
f --> <class 'numpy.float32'>
d --> <class 'numpy.float64'>
g --> <class 'numpy.float128'>
F --> <class 'numpy.complex64'>
D --> <class 'numpy.complex128'>
G --> <class 'numpy.complex256'>
S --> <class 'numpy.bytes_'>
U --> <class 'numpy.str_'>
V --> <class 'numpy.void'>
O --> <class 'numpy.object_'>
M --> <class 'numpy.datetime64'>
m --> <class 'numpy.timedelta64'>
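Since the question mentions PyTorch as well: recent PyTorch versions follow a similar type-promotion scheme and expose analogous helpers (this assumes a version that has torch.result_type, introduced around 1.5):
import torch
x = torch.arange(3, dtype=torch.int32)
y = torch.arange(3, dtype=torch.float64)
print((x + y).dtype)                                    # torch.float64
print(torch.result_type(x, y))                          # torch.float64
print(torch.promote_types(torch.int32, torch.float64))  # torch.float64
print(torch.can_cast(torch.float64, torch.int32))       # False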
| https://stackoverflow.com/questions/56022497/ |
Setting device for pytorch version 0.3.1.post2 | In newer versions of pytorch, you can set the device by using torch.device. I need to use torch version 0.3.1.post2 for some legacy code. How do I set the device for this version?
| As far as I know, you can use the set_device function. But this is not encouraged. Please see the reference.
The suggested method is to just set the CUDA_VISIBLE_DEVICES environment variable. You can run your script as follows.
CUDA_VISIBLE_DEVICES=GPU_ID python script_name.py
In your program, you can just simply use .cuda() to use GPUs. (e.g., model=model.cuda())
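If you would rather select the GPU from inside the script, a sketch (the key point being that the variable must be set before CUDA is first initialized):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must happen before any CUDA call
import torch
model = torch.nn.Linear(10, 2)
if torch.cuda.is_available():
    model = model.cuda()  # lands on the GPU selected above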
| https://stackoverflow.com/questions/56026555/ |
How to define specific number of convolutional kernels/filters in pytorch? | On pytorch website they have the following model in their tutorial
class BasicCNN(nn.Module):
def __init__(self):
super(BasicCNN, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = x.permute(0, 3, 1, 2)
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
How many kernels/filters does this model have? Is it two, e.g. conv1 and conv2? How do I easily create many filters by specifying the number of them? For example 100 filters.
Thanks!
| Your question is a little ambiguous but let me try to answer it.
Usually, in a convolutional layer, we set the number of filters as the number of out_channels. But this is not straight-forward. Let's discuss based on the example that you provided.
What are the convolutional layer parameters?
model = BasicCNN()
for name, params in model.named_parameters():
if 'conv' in name:
print(name, params.size())
Output:
conv1.weight torch.Size([6, 3, 5, 5])
conv1.bias torch.Size([6])
conv2.weight torch.Size([16, 6, 5, 5])
conv2.bias torch.Size([16])
Explanation
Let's consider the conv1 layer in the above model. We can say there are 6 filters of shape 5 x 5 because we have chosen 2d convolution. Since the number of input channels is 3, there are in total 6 x 3 = 18 kernels.
Here, the inputs of this model are 3d, like images: each image has shape W x H and there are 3 channels (RGB). So we can feed 3d tensors representing images to this model.
Now coming back to your question, "How do I easily create many filters by specifying the number of them? For example 100 filters.": if you simply want 100 filters, just set 100 in conv1 instead of 6 (each filter still spans all input channels). This is typically what people do in computer vision!
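For instance, a sketch of the same layer with 100 filters:
import torch.nn as nn
conv1 = nn.Conv2d(in_channels=3, out_channels=100, kernel_size=5)
print(conv1.weight.size())  # torch.Size([100, 3, 5, 5]): 100 filters, 100 x 3 = 300 kernels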
But you can definitely modify the architecture as per your need and identify the best setting.
| https://stackoverflow.com/questions/56030884/ |
PyTorch and Chainer implementations of the Linear layer - are they equivalent? | I want to use a Linear, Fully-Connected Layer as one of the input layers in my network. The input has shape (batch_size, in_channels, num_samples). It is based on the Tacotron paper: https://arxiv.org/pdf/1703.10135.pdf, the Encoder prenet part.
It feels to me as if Chainer and PyTorch have different implementations of the Linear layer - are they really performing the same operations or am I misunderstanding something?
In PyTorch, behavior of the Linear layer follows the documentations: https://pytorch.org/docs/0.3.1/nn.html#torch.nn.Linear
according to which, the shape of the input and output data are as follows:
Input: (N,∗,in_features) where * means any number of additional dimensions
Output: (N,∗,out_features) where all but the last dimension are the same shape as the input.
Now, let's try creating a linear layer in pytorch and performing the operation. I want an output with 8 channels, and the input data will have 3 channels.
import numpy as np
import torch
from torch import nn
linear_layer_pytorch = nn.Linear(3, 8)
Let's create some dummy input data of shape (1, 4, 3) - (batch_size, num_samples, in_channels:
data = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4], dtype=np.float32).reshape(1, 4, 3)
data_pytorch = torch.from_numpy(data)
and finally, perform the operation:
results_pytorch = linear_layer_pytorch(data_pytorch)
results_pytorch.shape
the shape of the output is as follows: Out[27]: torch.Size([1, 4, 8])
Taking a look at the source of the PyTorch implementation:
def linear(input, weight, bias=None):
# type: (Tensor, Tensor, Optional[Tensor]) -> Tensor
r"""
Applies a linear transformation to the incoming data: :math:`y = xA^T + b`.
Shape:
- Input: :math:`(N, *, in\_features)` where `*` means any number of
additional dimensions
- Weight: :math:`(out\_features, in\_features)`
- Bias: :math:`(out\_features)`
- Output: :math:`(N, *, out\_features)`
"""
if input.dim() == 2 and bias is not None:
# fused op is marginally faster
ret = torch.addmm(bias, input, weight.t())
else:
output = input.matmul(weight.t())
if bias is not None:
output += bias
ret = output
return ret
It transposes the weight matrix that is passed to it, broadcasts it along the batch_size axis and performs a matrix multiplication. Keeping in mind how a linear layer works, I imagine it as 8 nodes, each connected through a weighted synapse to every channel of an input sample, thus in my case 3*8 weights. And that is exactly the shape I see in the debugger: (8, 3).
Now, let's jump to Chainer. Chainer's linear layer documentation is available here: https://docs.chainer.org/en/stable/reference/generated/chainer.links.Linear.html#chainer.links.Linear. According to this documentation, the Linear layer wraps the function linear, which, according to the docs, flattens the input along the non-batch dimensions, and the shape of its weight matrix is (output_size, flattened_input_size).
import chainer
linear_layer_chainer = chainer.links.Linear(8)
results_chainer = linear_layer_chainer(data)
results_chainer.shape
Out[21]: (1, 8)
Creating the layer as linear_layer_chainer = chainer.links.Linear(3, 8) and calling it causes a size mismatch. So in the case of Chainer, I have gotten totally different results, because this time around I have a weight matrix of shape (8, 12) and my results have a shape of (1, 8). So now, here is my question: since the results are clearly different, and both the weight matrices and the outputs have different shapes, how can I make them equivalent, and what should be the desired output? In the PyTorch implementation of Tacotron it seems that the PyTorch approach is used as is (https://github.com/mozilla/TTS/blob/master/layers/tacotron.py) - Prenet. If that is the case, how can I make Chainer produce the same results (I have to implement this in Chainer)? I will be grateful for any insight; sorry that the post has gotten this long.
| Chainer's Linear layer (a bit frustratingly) does not apply the transformation to the last axis; Chainer flattens the rest of the axes. Instead you need to tell it how many batch axes there are (see the documentation), which is 2 in your case:
# data.shape == (1, 4, 3)
results_chainer = linear_layer_chainer(data, n_batch_axes=2)
# 2 batch axes (1,4) means you apply linear to (..., 3)
# results_chainer.shape == (1, 4, 8)
You can also use l(data, n_batch_axes=len(data.shape)-1) to always apply to the last dimension which is the default behaviour in PyTorch, Keras etc.
| https://stackoverflow.com/questions/56033418/ |
Slicing uneven columns from tensor array | I have an array like so:
([[[ 0, 1, 2],
[ 3, 4, 5]],
[[ 6, 7, 8],
[ 9, 10, 11]],
[[12, 13, 14],
[15, 16, 17]]])
If I want to slice the numbers 12 to 17 I would use:
arr[2, 0:2, 0:3]
but how would I go about slicing the array to get 12 to 16?
| You'll need to "flatten" the last two dimensions first. Only then will you be able to extract the elements you want:
xf = x.view(x.size(0), -1) # flatten the last dimensions
xf[2, 0:5]
Out[87]: tensor([12, 13, 14, 15, 16])
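Equivalently, assuming a reasonably recent PyTorch, torch.flatten can merge just the trailing dimensions in one call:
import torch
x = torch.arange(18).reshape(3, 2, 3)  # the array from the question
print(torch.flatten(x, start_dim=1)[2, :5])  # tensor([12, 13, 14, 15, 16])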
| https://stackoverflow.com/questions/56034978/ |
Considerations of model definitions when moving from Tensorflow to PyTorch | I've just recently switched to PyTorch after getting frustrated in debugging tf and understand that it is equivalent to coding in numpy almost completely. My question is what are the permitted python aspects we can use in a PyTorch model (to be put completely on GPU) eg. if-else has to be implemented as follows in tensorflow
a = tf.Variable([1,2,3,4,5], dtype=tf.float32)
b = tf.Variable([6,7,8,9,10], dtype=tf.float32)
p = tf.placeholder(dtype=tf.float32)
ps = tf.placeholder(dtype=tf.bool)
li = [None]*5
li_switch = [True, False, False, True, True]
for i in range(5):
li[i] = tf.Variable(tf.random.normal([5]))
sess = tf.Session()
sess.run(tf.global_variables_initializer())
def func_0():
return tf.add(a, p)
def func_1():
return tf.subtract(b, p)
with tf.device('GPU:0'):
my_op = tf.cond(ps, func_1, func_0)
for i in range(5):
print(sess.run(my_op, feed_dict={p:li[i], ps:li_switch[i]}))
How would the structure change in pytorch for the above code? How to place the variables and ops above on GPU and parallelize the list inputs to our graph in pytorch?
| In PyTorch, the code can be written the way normal Python code is written.
CPU
import torch
a = torch.FloatTensor([1,2,3,4,5])
b = torch.FloatTensor([6,7,8,9,10])
cond = torch.randn(5)
for ci in cond:
if ci > 0:
print(torch.add(a, 1))
else:
print(torch.sub(b, 1))
GPU
Move the tensors to GPU like this:
a = torch.FloatTensor([1,2,3,4,5]).to('cuda')
b = torch.FloatTensor([6,7,8,9,10]).to('cuda')
cond = torch.randn(5).to('cuda')
import torch.nn as nn
class Cond(nn.Module):
def __init__(self):
super(Cond, self).__init__()
def forward(self, cond, a, b):
result = torch.empty(cond.shape[0], a.shape[0]).cuda()
for i, ci in enumerate(cond):
if ci > 0:
result[i] = torch.add(a, 1)
else:
result[i] = torch.sub(b, 1)
return result
cond_model = Cond().to('cuda')
output = cond_model(cond, a, b)
https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors
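As a side note, if the Python-level loop over cond ever becomes a bottleneck, a vectorized sketch with torch.where computes the same result in one shot (shapes chosen so broadcasting applies the condition row-wise):
import torch
a = torch.FloatTensor([1, 2, 3, 4, 5])
b = torch.FloatTensor([6, 7, 8, 9, 10])
cond = torch.randn(5)
# row i is a+1 where cond[i] > 0, otherwise b-1, matching the loops above
result = torch.where((cond > 0).unsqueeze(1), (a + 1).expand(5, 5), (b - 1).expand(5, 5))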
| https://stackoverflow.com/questions/56063686/ |
How to add a 2d tensor to every 2d tensor from a 3d tensor | I'm trying to add a 2d tensor to every 2d tensor from a 3d tensor.
Let's say i have a tensor a with (2,3,2) shape and a tensor b with (2,2) shape.
a = [[[1,2],
[1,2],
[1,2]],
[[3,4],
[3,4],
[3,4]]]
b = [[1,2], [3,4]]
#the result i want to get
a[:, 0, :] + b
a[:, 1, :] + b
a[:, 2, :] + b
I want to know if there is a method in pytorch that can do this.
| The most efficient way of doing this would be to add an extra second dimension to b and use broadcasting to add:
a = torch.Tensor([[[1,2],[1,2],[1,2]],[[3,4],[3,4],[3,4]]])
b = torch.Tensor([[1,2],[3,4]])
a += b.unsqueeze(1)
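A quick sanity check of the broadcast (with the same example tensors):
print(b.unsqueeze(1).shape)  # torch.Size([2, 1, 2]), which broadcasts against a's (2, 3, 2)
print(a[0])  # tensor([[2., 4.], [2., 4.], [2., 4.]]) after the in-place add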
| https://stackoverflow.com/questions/56066305/ |
Pytorch: How can I find indices of first nonzero element in each row of a 2D tensor? | I have a 2D tensor with some nonzero element in each row like this:
import torch
tmp = torch.tensor([[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0, 0]], dtype=torch.float)
I want a tensor containing the index of the first nonzero element in each row:
indices = tensor([2],
[3])
How can I calculate it in Pytorch?
| I found a tricky answer to my question:
tmp = torch.tensor([[0, 0, 1, 0, 1, 0, 0],
[0, 0, 0, 1, 1, 0, 0]], dtype=torch.float)
idx = reversed(torch.Tensor(range(1,8)))
print(idx)
tmp2= torch.einsum("ab,b->ab", (tmp, idx))
print(tmp2)
indices = torch.argmax(tmp2, 1, keepdim=True)
print(indices)
The result is:
tensor([7., 6., 5., 4., 3., 2., 1.])
tensor([[0., 0., 5., 0., 3., 0., 0.],
[0., 0., 0., 4., 3., 0., 0.]])
tensor([[2],
[3]])
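The same trick also works without einsum, and is a bit more robust since it keys off a 0/1 mask rather than the raw values (a sketch):
mask = (tmp != 0).float()
weights = torch.arange(tmp.size(1), 0, -1).float()  # tensor([7., 6., 5., 4., 3., 2., 1.])
indices = torch.argmax(mask * weights, dim=1, keepdim=True)
print(indices)  # tensor([[2], [3]])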
| https://stackoverflow.com/questions/56088189/ |
Get Euclidean and infinite distance in Pytorch | I'm trying to get the Euclidean distance in Pytorch, using torch.dist, as shown below:
torch.dist(vector1, vector2, 1)
If I use "1" as the third parameter, I'm getting the Manhattan distance, and the result is correct, but I'm trying to get the Euclidean and infinite distances and the result is not right. I tried a lot of different numbers for the third parameter, but I'm not able to get the desired distances.
How can I get the Euclidean and infinite distances using Pytorch?
| You should use .norm() instead of .dist().
vector1 = torch.FloatTensor([3, 4, 5])
vector2 = torch.FloatTensor([1, 1, 1])
dist = torch.norm(vector1 - vector2, 1)
print(dist) # tensor(9.)
dist = torch.norm(vector1 - vector2, 2)
print(dist) # tensor(5.3852)
dist = torch.norm(vector1 - vector2, float("inf"))
print(dist) # tensor(4.)
dist = torch.dist(vector1, vector2, 1)
print(dist) # tensor(9.)
dist = torch.dist(vector1, vector2, 2)
print(dist) # tensor(5.3852)
dist = torch.dist(vector1, vector2, float("inf"))
print(dist) # tensor(1.)
As we can see for the infinity distance, .norm() returns the correct answer.
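For completeness, the same distances can be computed row-wise on batches of vectors, assuming a PyTorch version where norm accepts a dim argument:
batch1 = torch.FloatTensor([[3, 4, 5], [0, 0, 0]])
batch2 = torch.FloatTensor([[1, 1, 1], [1, 1, 1]])
print(torch.norm(batch1 - batch2, 2, dim=1))  # tensor([5.3852, 1.7321])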
| https://stackoverflow.com/questions/56093749/ |
IndexError - Implementing the test of CBOW with pytorch | I am not very used to Python & machine learning code. I am testing the PyTorch CBOW example, but it raises an IndexError. Can anyone help?
# model class
class CBOW(nn.Module):
...
def get_word_embedding(self, word):
word = torch.cuda.LongTensor([word_to_ix[word]])
return self.embeddings(word).view(1, -1)
# test method
def test_cbow(model, train_words, word_to_ix):
# test word similarity
word_1 = train_words[2] #randomly chosen word
word_2 = train_words[3] #randomly chosen word
word_1_vec = model.get_word_embedding(word_1)[0].cpu()
word_2_vec = model.get_word_embedding(word_2)[0].cpu()
print(word_1_vec)
print(word_2_vec)
word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * torch.norm(word_2_vec))).data.numpy()[0]
print("Similarity between '{}' & '{}' : {:0.4f}".format(word_1, word_2, word_similarity))
# executing the test
test_cbow(model, train_words, word_to_ix)
BELOW IS THE RESULT:
tensor([ 0.8978, 1.0713, -1.6856, -1.0967, -0.0114, 0.4107, -0.4293, -0.7351,
0.4410, -1.5937, -1.3773, 0.7744, 0.0739, -0.3263, 1.0342, 1.0420,
-1.1333, 0.4158, 1.1316, -0.0141, -0.8383, 0.2544, -2.2409, -1.1858,
0.2652, -0.3232, 0.1287, -1.5274, 0.3199, -2.1822, 0.9464, -0.6619,
1.1549, 0.5276, 0.0849, -0.1594, -1.7922, 1.3567, -0.4376, -0.9093,
1.0701, 1.5373, -1.3277, -1.1833, 1.8070, -0.0551, -0.8439, 1.5236,
-0.3890, -0.2306, -0.7392, -1.6435, 0.4485, 0.8988, -0.5958, -0.6989,
1.6123, -1.6668, 0.0583, 0.6698, -0.6998, 1.1942, 0.6355, 0.7437,
-1.0006, -0.5398, 1.3197, 1.3696, -0.3221, 0.9004, 0.6268, 0.0221,
0.0269, -1.7966, -1.6153, -0.1695, -0.0339, -0.5145, 1.5744, -0.3388,
-0.9617, 0.6750, -1.1334, 0.0377, 1.1123, 1.1002, -0.3605, 0.2105,
-1.6570, 1.3818, 0.9183, 0.0274, 0.9072, 0.8414, 0.3424, 0.2199,
1.6546, -0.1357, 1.1291, -0.5309], grad_fn=<CopyBackwards>)
tensor([-0.6263, -0.5639, 2.1590, -0.3659, 0.2862, -0.4542, -0.4825, -0.1776,
-0.4242, 0.9525, 0.7138, -0.3107, 1.8733, -0.3406, 0.0277, 1.6775,
2.1893, 2.0332, 0.7185, 0.0050, -0.1627, -0.1113, 1.0444, 1.4057,
0.2183, 0.3405, 0.0930, 1.2428, -0.0740, 0.3991, -0.2722, 1.4980,
0.9207, 0.5008, -1.9297, 0.5600, 1.6416, 1.1550, 0.1440, 0.0739,
-0.7465, -0.2458, 0.9217, 0.7156, -1.2558, -0.9891, -0.7313, 0.8501,
-1.2851, -0.3068, -0.0796, 0.9361, 0.0927, -1.2988, 0.7422, 0.1388,
1.3895, -0.7935, 0.4008, -0.1338, 1.5563, 0.5864, 0.6606, -0.2341,
0.1218, -0.7313, 0.5073, -0.2941, 0.0316, -2.5356, -0.0885, 2.5765,
0.2090, 0.2819, -0.0386, 0.7986, 2.1165, -0.0271, -0.2987, 0.2905,
0.0149, 0.2403, 0.0752, -1.5535, 0.3794, 2.0638, 1.0603, 0.0703,
-0.3643, -1.5671, -0.4736, -1.3035, 0.6583, 0.2531, 0.9829, -0.6025,
-0.8148, -0.3457, -0.7339, 0.6758], grad_fn=<CopyBackwards>)
What I am also confused about is that I have to convert the cuda datatype to numpy, since I used cuda in the get_word_embedding method. Is adding .cpu() the correct way to convert the datatype?
IndexError Traceback (most recent call last)
<ipython-input-68-39d73aa6e0de> in <module>()
17 print("Similarity between '{}' & '{}' : {:0.4f}".format(word_1, word_2, word_similarity))
18
---> 19 test_cbow(model, train_words, word_to_ix)
<ipython-input-68-39d73aa6e0de> in test_cbow(model, train_words, word_to_ix)
14 print(type(word_1_vec))
15
---> 16 word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * torch.norm(word_2_vec))).data.numpy()[0]
17 print("Similarity between '{}' & '{}' : {:0.4f}".format(word_1, word_2, word_similarity))
18
IndexError: too many indices for array
| In your code, word_similarity is not an array, so you can't access its 0th element. Just modify your code as:
word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * torch.norm(word_2_vec))).data.numpy()
You can also use:
word_similarity = (word_1_vec.dot(word_2_vec) / (torch.norm(word_1_vec) * torch.norm(word_2_vec))).item()
Here, the .item() in PyTorch would directly give you a float value since the similarity value is a scalar.
When to use .cpu()?
To convert a cuda tensor to a cpu tensor, you need to use .cpu(). You cannot directly convert a cuda tensor to numpy. You have to convert to a cpu tensor first and then call .numpy().
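For example, a minimal sketch (the .detach() is needed because the embeddings in the question carry a grad_fn):
import torch
t = torch.ones(3, requires_grad=True) * 2  # a tensor attached to the autograd graph
if torch.cuda.is_available():
    t = t.cuda()
arr = t.detach().cpu().numpy()  # detach from the graph, move to host memory, then convert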
| https://stackoverflow.com/questions/56094478/ |
How to create embeddedings for a column that is a list of categorical values | I am having some trouble deciding how to create embeddings for a categorical feature for my DNN model. The feature consists of a non fixed set of tags.
The feature is like:
column = [['Adventure','Animation','Comedy'],
['Adventure','Comedy'],
['Adventure','Children','Comedy']
I would like to do this with tensorflow so I know the tf.feature_column module should work, I just don't know which version to use.
Thanks!
| First you need to pad your features to the same length.
import itertools
import numpy as np
column = np.array(list(itertools.zip_longest(*column, fillvalue='UNK'))).T
print(column)
[['Adventure' 'Animation' 'Comedy']
['Adventure' 'Comedy' 'UNK']
['Adventure' 'Children' 'Comedy']]
Then you can use tf.feature_column.embedding_column to create embeddings for a categorical feature. The inputs of embedding_column must be a CategoricalColumn created by any of the categorical_column_* function.
# if you have big vocabulary list in files, you can use tf.feature_column.categorical_column_with_vocabulary_file
cat_fc = tf.feature_column.categorical_column_with_vocabulary_list(
'cat_data', # identifying the input feature
['Adventure', 'Animation', 'Comedy', 'Children'], # vocabulary list
dtype=tf.string,
default_value=-1)
cat_column = tf.feature_column.embedding_column(
categorical_column =cat_fc,
dimension = 5,
combiner='mean')
categorical_column_with_vocabulary_list will ignore the 'UNK' since there is no 'UNK' in the vocabulary list. dimension specifies the dimension of the embedding, and combiner specifies how to reduce if there are multiple entries in a single row, with 'mean' being the default in embedding_column.
The result:
tensor = tf.feature_column.input_layer({'cat_data':column}, [cat_column])
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
print(session.run(tensor))
[[-0.694761 -0.0711766 0.05720187 0.01770079 -0.09884425]
[-0.8362482 0.11640486 -0.01767573 -0.00548441 -0.05738768]
[-0.71162754 -0.03012567 0.15568805 0.00752804 -0.1422816 ]]
| https://stackoverflow.com/questions/56099266/ |
How to convert binary classifier output/loss tensor in LSTM to multiclass | I am trying to train an LSTM model to predict what year a song was written given its lyrics using word-level association in Pytorch. There are 51 potential classes/labels (1965-2015) - however I was working off of a template that used a binary classifier for a different problem. I have been trying to figure out how to change the model to predict multiple classes (1965, 1966, etc).
I understand that you are supposed to provide a tensor of size C=num_classes as the output. I did that by setting output_size=51, but I am still getting an error, which makes me think I am defining or using the criterion incorrectly.
Here is the model:
class LyricLSTM(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
super().__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
#self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x, hidden):
batch_size = x.size(0)
# embeddings and lstm_out
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# sigmoid function
sig_out = self.sig(out)
#sig_out = self.softmax(out)
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get last batch of labels
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
And the training loop:
n_epochs = 10
batch_size = 16 #100 # 11 batches of size 337 so iters = 11 (11 * 337 = 3707)
# Split into training, validation, testing - train= 80% | valid = 10% | test = 10%
split_frac = 0.8
train_x = encoded_lyrics[0:int(split_frac * len(encoded_lyrics))] # 3707 training samples
train_y = encoded_years[0:int(split_frac * len(encoded_lyrics))] # 3707 training samples
# Dataloaders and batching
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
# make sure to SHUFFLE your data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size, drop_last=True)
output_size = 51
embedding_dim = 400
hidden_dim = 128 #256
n_layers = 2
lstmc = lstm.LyricLSTM(vocab_len, output_size, embedding_dim, hidden_dim, n_layers)
# Loss function + accuracy reporting
current_loss = 0
losses = np.zeros(n_epochs) # For plotting
accuracy = np.zeros(n_epochs)
lr = 0.001
criterion = nn.CrossEntropyLoss() #nn.BCELoss()
optimizer = torch.optim.Adam(lstmc.parameters(), lr=lr)
counter = 0
print_every = 1
clip = 5 # gradient clipping
# Main training loop
start = time.time()
lstmc.train()
for epoch in range(0, n_epochs):
# initialize hidden state
h = lstmc.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
lstmc.zero_grad()
# get the output from the model
inputs = inputs.type(torch.LongTensor)
output, h = lstmc(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
nn.utils.clip_grad_norm_(lstmc.parameters(), clip)
optimizer.step()
I am getting this error when I run the code
File "main.py", line 182, in main
loss = criterion(output.squeeze(), labels.float())
/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 904, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
/venv/lib/python3.7/site-packages/torch/nn/functional.py", line 1970, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
/venv/lib/python3.7/site-packages/torch/nn/functional.py", line 1295, in log_softmax
ret = input.log_softmax(dim)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
This is the output I am getting and the labels (for batch size 16):
Output: tensor([0.4962, 0.5025, 0.4963, 0.4936, 0.5058, 0.4872, 0.4995, 0.4852, 0.4840,
0.4791, 0.4984, 0.5034, 0.4796, 0.4826, 0.4811, 0.4859],
grad_fn=<SqueezeBackward0>)
Labels: tensor([1994., 1965., 1981., 1986., 1973., 1981., 1975., 1968., 1981., 1968.,
1989., 1981., 1988., 1991., 1983., 1982.])
I was expecting the output to be a tensor of length 51 where each element contained the likelihood that that year was the correct answer (ex: output[0] = first year / 1965, output[1] = 1966, etc).
| You must provide input as (N, C) and target as (N) to CrossEntropyLoss. I suspect the following code segment in your model's forward() method is wrong.
sig_out = self.sig(out) # shape: batch_size*seq_len x output_size
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1) # shape: batch_size x seq_len*output_size
sig_out = sig_out[:, -1] # shape: batch_size
What are you trying to do with your last statement? Also, what do you want to do with the seq_len dimension of the LSTM output?
Try to think about what you are doing here.
As it stands, I think the shape of the output tensor is wrong; make sure output is a 2d tensor of shape (N, C) and labels is a 1d tensor of shape (N).
Also, I see a few problems in your code.
Usually, it is good practice to apply zero_grad to the optimizer, not to the model. Don't do the following.
# zero accumulated gradients
lstmc.zero_grad()
Instead, do: optimizer.zero_grad()
You shouldn't use Sigmoid with 51 classes. Rather, use a fully connected layer followed by a softmax layer. And you don't need view() operations before the fc and softmax layers. One caveat: nn.CrossEntropyLoss applies log-softmax internally, so if you keep it as the criterion, feed it the raw fc outputs; pair nn.LogSoftmax with nn.NLLLoss instead.
self.fc = nn.Linear(hidden_dim, output_size)
self.softmax = nn.LogSoftmax(dim=-1) # use -1 to apply in the last axis
...
out = self.dropout(lstm_out)
out = self.softmax(self.fc(out))
So, don't use the following code segment.
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim) # DON'T DO THIS
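Putting the pieces together, a sketch of a corrected forward tail, assuming you want one prediction per sequence taken from the last time step (with nn.CrossEntropyLoss you would return the raw logits, and labels must be class indices in [0, 50], e.g. year - 1965, not raw years):
lstm_out, hidden = self.lstm(embeds, hidden)  # (batch, seq_len, hidden_dim) since batch_first=True
last_step = lstm_out[:, -1, :]                # (batch, hidden_dim), last time step of each sequence
logits = self.fc(self.dropout(last_step))     # (batch, 51) raw class scores
return logits, hidden                         # feed logits straight into CrossEntropyLoss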
| https://stackoverflow.com/questions/56101779/ |
Gradient calculation not disabled in no_grad() PyTorch | Why is the gradient calculation of y not disabled in the following piece of code?
x = torch.randn(3, requires_grad=True)
print(x.requires_grad)
print((x ** 2).requires_grad)
y = x**2
print(y.requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
print(y.requires_grad)
Which gives the following output:
True
True
True
False
True
| Going through the official documentation: operations performed inside a no_grad() block produce results with requires_grad=False even when the inputs have requires_grad=True. The flag of an already-existing tensor is not changed retroactively, which is why y, created before entering the block, still reports requires_grad=True; only the fresh x ** 2 computed inside the block has requires_grad=False.
Disabling gradient calculation is useful for inference, when you are sure
that you will not call :meth:Tensor.backward(). It will reduce memory
consumption for computations that would otherwise have requires_grad=True.
In this mode, the result of every computation will have
requires_grad=False, even when the inputs have requires_grad=True.
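A quick demonstration of the retroactive point (a sketch):
import torch
x = torch.randn(3, requires_grad=True)
y = x ** 2            # created outside no_grad, so it is tracked
with torch.no_grad():
    z = x ** 2        # created inside, so it is not tracked
print(y.requires_grad, z.requires_grad)  # True False
print(y.detach().requires_grad)          # False: detach() gives an untracked view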
| https://stackoverflow.com/questions/56111964/ |
KeyError: 'predictions' when using SimpleSeq2SeqPredictor to predict a string |
System (please complete the following information):
- OS: Ubunti 18.04
- Python version: 3.6.7
- AllenNLP version: v0.8.3
- PyTorch version: 1.1.0
Question
When I try to predict a string using SimpleSeq2SeqPredictor, it always shows:
Traceback (most recent call last):
File "predict.py", line 96, in <module>
p = predictor.predict(i)
File "venv/lib/python3.6/site-packages/allennlp/predictors/seq2seq.py", line 17, in predict
return self.predict_json({"source" : source})
File "/venv/lib/python3.6/site-packages/allennlp/predictors/predictor.py", line 56, in predict_json
return self.predict_instance(instance)
File "/venv/lib/python3.6/site-packages/allennlp/predictors/predictor.py", line 93, in predict_instance
outputs = self._model.forward_on_instance(instance)
File "/venv/lib/python3.6/site-packages/allennlp/models/model.py", line 124, in forward_on_instance
return self.forward_on_instances([instance])[0]
File "/venv/lib/python3.6/site-packages/allennlp/models/model.py", line 153, in forward_on_instances
outputs = self.decode(self(**model_input))
File "/venv/lib/python3.6/site-packages/allennlp/models/encoder_decoders/simple_seq2seq.py", line 247, in decode
predicted_indices = output_dict["predictions"]
KeyError: 'predictions'
I am trying to build a translation system, but I am a newbie; most of the code comes from
https://github.com/mhagiwara/realworldnlp/blob/master/examples/mt/mt.py
http://www.realworldnlpbook.com/blog/building-seq2seq-machine-translation-models-using-allennlp.html
this is my training code
EN_EMBEDDING_DIM = 256
ZH_EMBEDDING_DIM = 256
HIDDEN_DIM = 256
CUDA_DEVICE = 0
prefix = 'small'
reader = Seq2SeqDatasetReader(
source_tokenizer=WordTokenizer(),
target_tokenizer=CharacterTokenizer(),
source_token_indexers={'tokens': SingleIdTokenIndexer()},
target_token_indexers={'tokens': SingleIdTokenIndexer(namespace='target_tokens')},
lazy = True)
train_dataset = reader.read(f'./{prefix}-data/train.tsv')
validation_dataset = reader.read(f'./{prefix}-data/val.tsv')
vocab = Vocabulary.from_instances(train_dataset,
min_count={'tokens': 3, 'target_tokens': 3})
en_embedding = Embedding(num_embeddings=vocab.get_vocab_size('tokens'),
embedding_dim=EN_EMBEDDING_DIM)
# encoder = PytorchSeq2SeqWrapper(
# torch.nn.LSTM(EN_EMBEDDING_DIM, HIDDEN_DIM, batch_first=True))
encoder = StackedSelfAttentionEncoder(input_dim=EN_EMBEDDING_DIM, hidden_dim=HIDDEN_DIM, projection_dim=128, feedforward_hidden_dim=128, num_layers=1, num_attention_heads=8)
source_embedder = BasicTextFieldEmbedder({"tokens": en_embedding})
# attention = LinearAttention(HIDDEN_DIM, HIDDEN_DIM, activation=Activation.by_name('tanh')())
# attention = BilinearAttention(HIDDEN_DIM, HIDDEN_DIM)
attention = DotProductAttention()
max_decoding_steps = 20 # TODO: make this variable
model = SimpleSeq2Seq(vocab, source_embedder, encoder, max_decoding_steps,
target_embedding_dim=ZH_EMBEDDING_DIM,
target_namespace='target_tokens',
attention=attention,
beam_size=8,
use_bleu=True)
optimizer = optim.Adam(model.parameters())
iterator = BucketIterator(batch_size=32, sorting_keys=[("source_tokens", "num_tokens")])
iterator.index_with(vocab)
if torch.cuda.is_available():
cuda_device = 0
model = model.cuda(cuda_device)
else:
cuda_device = -1
trainer = Trainer(model=model,
optimizer=optimizer,
iterator=iterator,
train_dataset=train_dataset,
validation_dataset=validation_dataset,
num_epochs=50,
serialization_dir=f'ck/{prefix}/',
cuda_device=cuda_device)
# for i in range(50):
# print('Epoch: {}'.format(i))
trainer.train()
predictor = SimpleSeq2SeqPredictor(model, reader)
for instance in itertools.islice(validation_dataset, 10):
print('SOURCE:', instance.fields['source_tokens'].tokens)
print('GOLD:', instance.fields['target_tokens'].tokens)
print('PRED:', predictor.predict_instance(instance)['predicted_tokens'])
# Here's how to save the model.
with open(f"ck/{prefix}/manually_save_model.th", 'wb') as f:
torch.save(model.state_dict(), f)
vocab.save_to_files(f"ck/{prefix}/vocabulary")
and this is my predict code
EN_EMBEDDING_DIM = 256
ZH_EMBEDDING_DIM = 256
HIDDEN_DIM = 256
CUDA_DEVICE = 0
prefix = 'big'
reader = Seq2SeqDatasetReader(
source_tokenizer=WordTokenizer(),
target_tokenizer=CharacterTokenizer(),
source_token_indexers={'tokens': SingleIdTokenIndexer()},
target_token_indexers={'tokens': SingleIdTokenIndexer(namespace='target_tokens')},
lazy = True)
# train_dataset = reader.read(f'./{prefix}-data/train.tsv')
# validation_dataset = reader.read(f'./{prefix}-data/val.tsv')
# vocab = Vocabulary.from_instances(train_dataset,
# min_count={'tokens': 3, 'target_tokens': 3})
vocab = Vocabulary.from_files("ck/small/vocabulary")
en_embedding = Embedding(num_embeddings=vocab.get_vocab_size('tokens'),
embedding_dim=EN_EMBEDDING_DIM)
# encoder = PytorchSeq2SeqWrapper(
# torch.nn.LSTM(EN_EMBEDDING_DIM, HIDDEN_DIM, batch_first=True))
encoder = StackedSelfAttentionEncoder(input_dim=EN_EMBEDDING_DIM, hidden_dim=HIDDEN_DIM, projection_dim=128, feedforward_hidden_dim=128, num_layers=1, num_attention_heads=8)
source_embedder = BasicTextFieldEmbedder({"tokens": en_embedding})
# attention = LinearAttention(HIDDEN_DIM, HIDDEN_DIM, activation=Activation.by_name('tanh')())
# attention = BilinearAttention(HIDDEN_DIM, HIDDEN_DIM)
attention = DotProductAttention()
max_decoding_steps = 20 # TODO: make this variable
model = SimpleSeq2Seq(vocab, source_embedder, encoder, max_decoding_steps,
target_embedding_dim=ZH_EMBEDDING_DIM,
target_namespace='target_tokens',
attention=attention,
beam_size=8,
use_bleu=True)
# And here's how to reload the model.
with open("./ck/small/best.th", 'rb') as f:
model.load_state_dict(torch.load(f))
predictor = Seq2SeqPredictor(model, dataset_reader=reader)
# print(predictor.predict("The dog ate the apple"))
test = [
'Surely ,he has no power over those who believe and put their trust in their Lord ;',
'And assuredly We have destroyed the generations before you when they did wrong ,while their apostles came unto them with the evidences ,and they were not such as to believe . In this wise We requite the sinning people .',
'And warn your tribe ( O Muhammad SAW ) of near kindred .',
'And to the Noble Messengers whom We have mentioned to you before ,and to the Noble Messengers We have not mentioned to you ; and Allah really did speak to Moosa .',
'It is He who gave you hearing ,sight ,and hearts ,but only few of you give thanks .',
'spreading in them much corruption ?',
'That will envelop the people . This will be a painful punishment .',
'When you received it with your tongues and spoke with your mouths what you had no knowledge of ,and you deemed it an easy matter while with Allah it was grievous .',
'of which you are disregardful .',
'Whoever disbelieves ,then the calamity of his disbelief is only on him ; and those who do good deeds ,are preparing for themselves .'
]
for i in test:
p = predictor.predict(i) # <------------------- ERROR !!!!!!!
print(p)
Am I doing something wrong?
| Solved.
I forgot to add model.eval() after loading the model.
Sorry.
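In other words, a sketch of the fix in the prediction script (SimpleSeq2Seq only runs its beam-search decoding, which produces the "predictions" key, when the model is not in training mode):
with open("./ck/small/best.th", 'rb') as f:
    model.load_state_dict(torch.load(f))
model.eval()  # the missing line
predictor = Seq2SeqPredictor(model, dataset_reader=reader)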
| https://stackoverflow.com/questions/56113646/ |
How to use Adam optim considering its adaptive learning rate? | In the Adam optimization algorithm, the learning speed is adjusted according to the number of iterations. I don't quite understand Adam's design, especially when using batch training. With batch training, if there are 19,200 pictures and 64 pictures are trained each time, that is equivalent to 300 iterations per epoch. If we run 200 epochs, there are a total of 60,000 iterations. I don't know if that many iterations will reduce the learning speed to a very small value. So when we are training, shall we initialize the optimizer after each epoch, or do nothing throughout the process?
I am using PyTorch. I have tried initializing the optimizer after each epoch when I use batch training, and doing nothing when the amount of data is small.
For example, I don't know whether the two pieces of code are correct:
optimizer = optim.Adam(model.parameters(), lr=0.1)
for epoch in range(100):
###Some code
optimizer.step()
Another piece of code:
for epoch in range(100):
optimizer = optim.Adam(model.parameters(), lr=0.1)
###Some code
optimizer.step()
| You can read the official paper here https://arxiv.org/pdf/1412.6980.pdf
Your update looks somewhat like this (for brevity's sake I have omitted the warm-up phase):
new_theta = old_theta-learning_rate*momentum/(velocity+eps)
The intuition here is that if momentum>velocity, then the optimizer is in a plateau, so the learning_rate is increased because momentum/velocity > 1. On the other hand, if momentum<velocity, then the optimizer is in a steep slope or noisy region, so the learning_rate is decreased.
The learning_rate isn't necessarily decreased throughout the training, as you have mentioned in your question.
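As for the two snippets in the question: the first pattern is the conventional one. Create the optimizer once so that Adam's running moment estimates (and step count) persist across epochs; re-creating it every epoch throws that state away. A sketch, with an optional scheduler if you do want the base learning rate to decay (loader and compute_loss are hypothetical placeholders):
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)  # create once, before the epoch loop
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)  # optional decay
for epoch in range(100):
    for batch in loader:
        optimizer.zero_grad()
        loss = compute_loss(model, batch)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the base lr once per epoch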
| https://stackoverflow.com/questions/56114706/ |
What are the differences between type of layer and its activation function in PyTorch? | I am trying to write a simple neural network using pytorch. I am new to this library. I came across two ways of implementing the same idea: a layer with some fixed activation function (e.g. tanh).
The first way to implement it:
l1 = nn.Tanh(n_in, n_out)
The second way:
l2 = nn.Linear(n_in, n_out) # linear layer, that do nothing with its input except summation
but in forward propagation use:
import torch.nn.functional as F
x = F.tanh(l2(x)) # x - value that propagates from layer to layer
What are the differences between those mechanisms? Which one is better for which purposes?
| An activation function is just a non-linear function and it doesn't have any parameters. So, your first approach doesn't make any sense!
However, you can use a sequential wrapper to combine a linear layer with tanh activation.
model = nn.Sequential(
nn.Linear(n_in, n_out),
nn.Tanh()
)
output = model(input)
| https://stackoverflow.com/questions/56117566/ |
Deep learning : How to build character level embedding? | I am trying to use character level embedding in my model but I have few doubts regarding character level embedding.
So for word level embedding :
Sentence = 'this is a example sentence'
create the vocab :
vocab = {'this' : 0 , 'is' :1 , 'a': 2 'example' : 3, 'sentence' : 4 }
encode the sentence :
encoded_sentence = [ 0, 1 , 2 , 3 , 4 ]
now send it to any pre-trained embedding like word2vec or glove :
each id will be replaced with 300 or embedding dim :
embedding_sentence = [ [ 0.331,0.11 , ----300th dim ] , [ 0.331,0.11 , ----300th dim ] , [ 0.331,0.11 , ----300th dim ] , [ 0.331,0.11 , ----300th dim ] , [ 0.331,0.11 , ----300th dim ] ]
and if we are dealing with batches then we pad the sentences
So the shape goes like this :
[ batch_size , max_sentence_length , embedding_dim ]
Now for character level embedding I have few doubts :
so for char level embedding :
Sentence = 'this is a example sentence'
create the char_vocab :
char_vocab = [' ', 'a', 'c', 'e', 'h', 'i', 'l', 'm', 'n', 'p', 's', 't', 'x']
int_to_vocab = {n:m for m,n in enumerate(char_vocab)}
encoded the sentence by char level :
Now here is my confusion: in word embedding we first tokenise the sentence and then encode each token with a vocab id (word_id),
but for char embedding, if I am tokenizing the sentence and then encoding at the character level, then the shape will be 4 dim and I can't feed this to an LSTM.
But if i am not tokenising and directly encoding raw text then it's 3 dim and I can feed it to LSTM
for example :
with tokenization :
token_sentence = ['this','is','a','example','sentence']
encoded_char_level = []
for words in token_sentence:
char_level = [int_to_vocab[char] for char in words]
encoded_char_level.append(char_level)
it looks like this:
[[0, 1, 2, 3],
[2, 3],
[5],
[6, 7, 5, 8, 9, 10, 6],
[3, 6, 11, 0, 6, 11, 12, 6]]
Now we have to pad this at two levels: one is char-level padding and the second is sentence-level padding:
char_level_padding:
[[0, 1, 2, 3, 0, 0, 0,0],
[2, 3, 0, 0, 0, 0, 0, 0],
[5, 0, 0, 0, 0, 0, 0, 0],
[6, 7, 5, 8, 9, 10, 6, 0],
[3, 6, 11, 0, 6, 11, 12, 6]]
Now if we have 4 sentences then we have to pad each sentence with max sentence len so shape will be :
[batch_size , max_sentence_length , max_char_length ]
Now if we pass this to embedding layer then:
[ batch_size , max_sentence_length, max_char_length , embedding_dim ]
Which is 4 dim.
How to encode sentences with character level and use it with tensorflow LSTM layer?
Because lstm takes 3 dim input [ batch_size , max_sequence_length , embedding_dim ]
Can I use it something like :
[ Batch_size , ( max_sentence_length x max_char_length ) , dim ]
so for example :
[ 12 , [ 3 x 4 ] , 300 ]
| You can concatenate the character level features with a fixed length.
For example:
[[0, 1, 2, 3, 0, 0, 0, 0],
[2, 3, 0, 0, 0, 0, 0, 0],
[5, 0, 0, 0, 0, 0, 0, 0],
[6, 7, 5, 8, 9, 10, 6, 0],
[3, 6, 11, 0, 6, 11, 12, 6]]
can be changed to:
[[0, 1, 2, 3, 0, 0, 0,0,2, 3, 0, 0, 0, 0, 0, 0,5, 0, 0, 0, 0, 0, 0, 0,6, 7, 5, 8, 9, 10, 6, 0,3, 6, 11, 0, 6, 11, 12, 6]]
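A sketch of the same idea with actual tensors, assuming TF 2.x-style Keras layers (sizes are the hypothetical ones from the question):
import tensorflow as tf
# batch of 12 sentences, 3 words each, 4 chars per word
char_ids = tf.zeros([12, 3, 4], dtype=tf.int32)
flat_ids = tf.reshape(char_ids, [12, 3 * 4])        # (batch, max_sentence_length * max_char_length)
emb = tf.keras.layers.Embedding(13, 300)(flat_ids)  # (12, 12, 300): 3-dim, so LSTM-ready
out = tf.keras.layers.LSTM(64)(emb)                 # (12, 64)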
| https://stackoverflow.com/questions/56131509/ |
Pytorch how to multiply tensors of variable size except the first dimention | I have one tensor which is A = 40x1.
i need to multiply this one with 3 other tensors: B = 40x100x384, C = 40x10, D=40x10.
for example in tensor B, we got 40 100x384 matrixes and i need each one of these matrixes to be multiplied with its corresponding element from A
what is the best way to do this in pytorch? Suppose that we could have more matrixes like B,C,D they will always be in the style 40xKxL or 40xJ
| If I understand correctly, you want to multiply every i-th matrix K x L by the corresponding i-th scalar in A.
One possible way is:
(A * B.view(len(A), -1)).view(B.shape)
Or you can use the power of broadcasting:
A = A.reshape(len(A), 1, 1)
# now A is (40, 1, 1) and you can do
A*B
A*C
A*D
essentially each trailing dimension equal to 1 in A is stretched and copied to match the other matrix.
| https://stackoverflow.com/questions/56133218/ |
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1' | I'm failing to run my GAN on GPU. I call to(device) for all both models and all tensors but still keep getting the following error:
Traceback (most recent call last):
File "code/a3_gan_template.py", line 185, in <module>
main(args)
File "code/a3_gan_template.py", line 162, in main
train(dataloader, discriminator, generator, optimizer_G, optimizer_D, device)
File "code/a3_gan_template.py", line 100, in train
d_x = discriminator.forward(imgs)
File "code/a3_gan_template.py", line 80, in forward
out = self.model(img)
File "/home/lgpu0365/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/lgpu0365/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/lgpu0365/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/lgpu0365/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "/home/lgpu0365/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1352, in linear
ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1'
Source code:
import argparse
import os
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torchvision import datasets
from torch.nn.functional import binary_cross_entropy
class Generator(nn.Module):
def __init__(self, latent_dim):
super(Generator, self).__init__()
# Construct generator. You are free to experiment with your model,
# but the following is a good start:
# Linear args.latent_dim -> 128
# LeakyReLU(0.2)
# Linear 128 -> 256
# Bnorm
# LeakyReLU(0.2)
# Linear 256 -> 512
# Bnorm
# LeakyReLU(0.2)
# Linear 512 -> 1024
# Bnorm
# LeakyReLU(0.2)
# Linear 1024 -> 784
# Output non-linearity
self.latent_dim = latent_dim
self.model = nn.Sequential(
nn.Linear(latent_dim, 128),
nn.LeakyReLU(0.2),
nn.Linear(128, 256),
nn.BatchNorm1d(256),
nn.LeakyReLU(0.2),
nn.Linear(256, 512),
nn.BatchNorm1d(512),
nn.LeakyReLU(0.2),
nn.Linear(512, 1024),
nn.BatchNorm1d(1024),
nn.LeakyReLU(0.2),
nn.Linear(1024, 784),
nn.Sigmoid()
)
def forward(self, z):
# Generate images from z
out = self.model(z)
return out
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
# Construct distriminator. You are free to experiment with your model,
# but the following is a good start:
# Linear 784 -> 512
# LeakyReLU(0.2)
# Linear 512 -> 256
# LeakyReLU(0.2)
# Linear 256 -> 1
# Output non-linearity
self.model = nn.Sequential(
nn.Linear(784, 512),
nn.LeakyReLU(0.2),
nn.Linear(512, 256),
nn.LeakyReLU(0.2),
nn.Linear(256, 1),
nn.Sigmoid()
)
def forward(self, img):
# return discriminator score for img
out = self.model(img)
return out
def train(dataloader, discriminator, generator, optimizer_G, optimizer_D, device):
for epoch in range(args.n_epochs):
for i, (imgs, _) in enumerate(dataloader):
batch_count = epoch * len(dataloader) + i
imgs.to(device)
batch_size = imgs.shape[0]
imgs = imgs.reshape(batch_size, -1)
z = torch.rand(batch_size, generator.latent_dim, device=device)
gen_imgs = generator(z)
discriminator.to(device)
d_x = discriminator(imgs)
d_g_z = discriminator(gen_imgs)
ones = torch.ones(d_g_z.shape, device=device)
# Train Generator
# ---------------
loss_G = binary_cross_entropy(d_g_z, ones)
optimizer_G.zero_grad()
loss_G.backward(retain_graph=True)
optimizer_G.step()
# Train Discriminator
# -------------------
if batch_count % args.d_train_interval == 0:
loss_D = binary_cross_entropy(d_x, 0.9 * ones) + binary_cross_entropy(d_g_z, 0. * ones)
optimizer_D.zero_grad()
loss_D.backward()
optimizer_D.step()
# Save Images
# -----------
if batch_count % args.save_interval == 0:
print(f'epoch: {epoch} batches: {batch_count} L_G: {loss_G.item():0.3f} L_D: {loss_D.item():0.3f}')
# You can use the function save_image(Tensor (shape Bx1x28x28),
# filename, number of rows, normalize) to save the generated
# images, e.g.:
save_image(gen_imgs[:25],
f'images/{batch_count}.png',
nrow=5, normalize=True)
def main(args):
# Create output image directory
os.makedirs('images', exist_ok=True)
# Set device
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
# load data
dataloader = torch.utils.data.DataLoader(
datasets.MNIST('./data/mnist', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize( (0.5,), (0.5,) )
])),
batch_size=args.batch_size, shuffle=True)
# Initialize models and optimizers
generator = Generator(args.latent_dim)
generator.to(device)
discriminator = Discriminator()
discriminator.to(device)
optimizer_G = torch.optim.Adam(generator.parameters(), lr=args.lr)
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=args.lr)
# Start training
train(dataloader, discriminator, generator, optimizer_G, optimizer_D, device)
# You can save your generator here to re-use it to generate images for your
# report, e.g.:
torch.save(generator.state_dict(), "mnist_generator.pt")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--n_epochs', type=int, default=200,
help='number of epochs')
parser.add_argument('--batch_size', type=int, default=64,
help='batch size')
parser.add_argument('--lr', type=float, default=0.0002,
help='learning rate')
parser.add_argument('--latent_dim', type=int, default=100,
help='dimensionality of the latent space')
parser.add_argument('--save_interval', type=int, default=500,
help='save every SAVE_INTERVAL iterations')
parser.add_argument('--d_train_interval', type=int, default=25,
help='train discriminator (only) every D_TRAIN_INTERVAL iterations')
args = parser.parse_args()
main(args)
Any ideas on how to figure out what is missing? Thanks!
| Found the solution. Turns out .to(device) does not work in place for tensors.
# wrong
imgs.to(device)
# correct
imgs = imgs.to(device)
| https://stackoverflow.com/questions/56133521/ |
groupby aggregate mean in pytorch | I have a 2D tensor:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
and a label for each sample corresponding to a class:
labels = torch.LongTensor([1, 2, 2, 0])
so len(samples) == len(labels). Now I want to calculate the mean for each class / label. Because there are 3 classes (0, 1 and 2), the final vector should have dimension [n_classes, samples.shape[1]], so the expected solution should be:
result == torch.Tensor([
[0.1, 0.1],
[0.3, 0.3], # -> mean of [0.2, 0.2] and [0.4, 0.4]
[0.0, 0.0]
])
Question: How can this be done in pure pytorch (i.e. no numpy so that I can autograd) and ideally without for loops?
| All you need to do is form an mxn matrix (m=num classes, n=num samples) which will select the appropriate weights, and scale the mean appropriately. Then you can perform a matrix multiplication between your newly formed matrix and the samples matrix.
Given your labels, your matrix should be (each row is a class number, each class a sample number and its weight):
[[0.0000, 0.0000, 0.0000, 1.0000],
[1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.5000, 0.5000, 0.0000]]
Which you can form as follows:
M = torch.zeros(labels.max()+1, len(samples))
M[labels, torch.arange(len(samples))] = 1
M = torch.nn.functional.normalize(M, p=1, dim=1)
torch.mm(M, samples)
Output:
tensor([[0.0000, 0.0000],
[0.1000, 0.1000],
[0.3000, 0.3000]])
Note that the output means are correctly sorted in class order.
Why does M[labels, torch.arange(len(samples))] = 1 work?
This is performing a broadcast operation between the labels and the number of samples. Essentially, we are generating a 2D index for every element in labels: the first specifies which of the m classes it belongs to, and the second simply specifies its index position (from 0 to N-1). Another way would be to explicitly generate all the 2D indices:
twoD_indices = []
for count, label in enumerate(labels):
twoD_indices.append((label, count))
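An alternative sketch using index_add_ and bincount, which avoids building the dense m x n matrix (it assumes every class id occurs at least once, as in this example):
sums = torch.zeros(labels.max() + 1, samples.size(1)).index_add_(0, labels, samples)
counts = torch.bincount(labels).float().unsqueeze(1)
print(sums / counts)  # same per-class means, rows sorted by class id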
| https://stackoverflow.com/questions/56154604/ |
Why do I get RuntimeError: CUDA error: invalid argument in pytorch? | Recently I've frequently been getting RuntimeError: CUDA error: invalid argument when calling functions like torch.cholesky e.g.:
import torch
a = torch.randn(3, 3, device="cuda:0")
a = torch.mm(a, a.t()) # make symmetric positive-definite
torch.cholesky(a)
This works fine if I use device="cpu" instead. This error isn't very descriptive, so I'm not sure what's wrong here.
| I discovered that this error was because the machine I'm running things on has CUDA 10 installed now, but I just installed pytorch as pip install torch. From their website, the proper way to install with pip and CUDA 10 is pip install https://download.pytorch.org/whl/cu100/torch-1.1.0-cp37-cp37m-linux_x86_64.whl.
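A quick way to check for this kind of mismatch (a sketch):
import torch
print(torch.__version__)         # e.g. 1.1.0
print(torch.version.cuda)        # the CUDA version the wheel was built against
print(torch.cuda.is_available())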
| https://stackoverflow.com/questions/56156032/ |
Pytorch: How to assign a default value for look-up table using torch tensor | Say that I have two tensors as follows:
a = torch.tensor([[1, 2, 3], [1, 2, 3]])
b = torch.tensor([0, 2, 3, 4])
where b is the lookup value for a such as:
b[a]
will return the value of:
tensor([[2, 3, 4], [2, 3, 4]])
My problem is, what if I only have a look-up table of:
c = torch.tensor([0, 2, 3])
In which, for every out-of-index, I would like it to be assigned to index 0, such as c[a] will return
tensor([[2, 3, 0], [2, 3, 0]])
If I run c[a], of course, I will get this result:
RuntimeError: index 3 is out of bounds for dim with size 3
Thanks for your help.
|
Code
# replace values greater than a certain number
def custom_replace(tensor, value, on_value):
# we create a copy of the original tensor,
# because of the way we are replacing them.
res = tensor.clone()
res[tensor>=value] = on_value
return res
a = torch.tensor([[1, 2, 3], [1, 2, 3]])
c = torch.tensor([0, 2, 3])
a_ = custom_replace(a, c.size(0), 0)
print(c[a_])
Output
tensor([[2, 3, 0],
[2, 3, 0]])
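The same remapping can also be written as a one-liner with torch.where:
safe = torch.where(a < c.size(0), a, torch.zeros_like(a))
print(c[safe])  # tensor([[2, 3, 0], [2, 3, 0]])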
| https://stackoverflow.com/questions/56164949/ |
Transform List to Tensor more accurately | I want to return a list in the Dataloader.
But to return it, its need to be a tensor right?
So I transform it but in this process information is lost, is there another way to do this?
pt_tensor_from_list = torch.tensor(pose_transform)
pt_tensor_from_list = torch.FloatTensor(pose_transform)
I expect the output to be:
([[-0.0003000000142492354, -0.0008999999845400453,
0.00039999998989515007, 0], [0.0010000000474974513, -0.00019999999494757503, 0.0003000000142492354, 0], [0.00019999999494757503, -0.0005000000237487257,
-0.0008999999845400453, 0], [5.484399795532227, -24.28619956970215, 117.5000991821289, 1])
But it is:
([[ -0.0003, -0.0009, 0.0004, 0.0000],
[ 0.0010, -0.0002, 0.0003, 0.0000],
[ 0.0002, -0.0005, -0.0009, 0.0000],
[ 5.4844, -24.2862, 117.5001, 1.0000]])
|
You are not losing any information during such a conversion. The reason why it looks like that, more compact, is that when you print a tensor, it invokes the __str__() or __repr__() method, which makes your tensor look prettier. As you can find here, torch.Tensor uses a kind of internal tensor formatter called _tensor_str. If you look inside the code link you will find that by default the parameter precision is set to 4:
precision: Number of digits of precision for floating point output (default = 4).
That's why you have only 4 digits for tensor values when printing the tensor. But actually, the values stored in the tensor are the same as in the original list.
Here is small example to get an idea:
Code:
import torch
test_list = ([[-0.0003000000142492354, -0.0008999999845400453, 0.00039999998989515007, 0],
[0.0010000000474974513, -0.00019999999494757503, 0.0003000000142492354, 0],
[0.00019999999494757503, -0.0005000000237487257, -0.0008999999845400453, 0],
[5.484399795532227, -24.28619956970215, 117.5000991821289, 1]])
print('Original values:')
for i in test_list:
for j in i:
print(j)
pt_tensor_from_list = torch.FloatTensor(test_list)
print('When printing FloatTensor:')
print(pt_tensor_from_list.dtype, pt_tensor_from_list, sep='\n')
print('When printing each value separately:')
for i in pt_tensor_from_list:
for j in i:
print(j.item())
Output:
Original values:
-0.0003000000142492354
-0.0008999999845400453
0.00039999998989515007
0
0.0010000000474974513
-0.00019999999494757503
0.0003000000142492354
0
0.00019999999494757503
-0.0005000000237487257
-0.0008999999845400453
0
5.484399795532227
-24.28619956970215
117.5000991821289
1
When printing FloatTensor:
torch.float32
tensor([[-3.0000e-04, -9.0000e-04, 4.0000e-04, 0.0000e+00],
[ 1.0000e-03, -2.0000e-04, 3.0000e-04, 0.0000e+00],
[ 2.0000e-04, -5.0000e-04, -9.0000e-04, 0.0000e+00],
[ 5.4844e+00, -2.4286e+01, 1.1750e+02, 1.0000e+00]])
When printing each value separately:
-0.0003000000142492354
-0.0008999999845400453
0.00039999998989515007
0.0
0.0010000000474974513
-0.00019999999494757503
0.0003000000142492354
0.0
0.00019999999494757503
-0.0005000000237487257
-0.0008999999845400453
0.0
5.484399795532227
-24.28619956970215
117.5000991821289
1.0
As you can see, we are getting the same values when printing each value separately.
BUT you can lose some info if you choose the wrong tensor types, for example HalfTensor instead of FloatTensor. Here is an example:
Code:
pt_tensor_from_list = torch.HalfTensor(test_list)
print('When printing HalfTensor:')
print(pt_tensor_from_list.dtype, pt_tensor_from_list, sep='\n')
print('When printing each value separately:')
for i in pt_tensor_from_list:
for j in i:
print(j.item())
Output:
When printing HalfTensor:
torch.float16
tensor([[-2.9993e-04, -8.9979e-04, 4.0007e-04, 0.0000e+00],
[ 1.0004e-03, -2.0003e-04, 2.9993e-04, 0.0000e+00],
[ 2.0003e-04, -5.0020e-04, -8.9979e-04, 0.0000e+00],
[ 5.4844e+00, -2.4281e+01, 1.1750e+02, 1.0000e+00]],
dtype=torch.float16)
When printing each value separately:
-0.0002999305725097656
-0.0008997917175292969
0.0004000663757324219
0.0
0.0010004043579101562
-0.00020003318786621094
0.0002999305725097656
0.0
0.00020003318786621094
-0.0005002021789550781
-0.0008997917175292969
0.0
5.484375
-24.28125
117.5
1.0
As you can notice, the values are now slightly different. Visit the pytorch tensor docs to learn more about the different types of torch.tensor.
| https://stackoverflow.com/questions/56166364/ |
Pytorch argsort ordered, with duplicate elements in the tensor | I have a vector A = [0,1,2,3,0,0,1,1,2,2,3,3]. I need to sort it in an increasing manner such that equal elements keep their original order (a stable sort), and from that extract the argsort. To better explain this, I need to sort A such that it returns B = [0,4,5,1,6,7,2,8,9,3,10,11]. However, when I use pytorch's torch.argsort(A) it returns B = [4,5,0,1,6,7,2,8,9,3,10,11].
I'm assuming the algorithm that does so cannot be controlled on my end. Is there any way to approach this without introducing for loops? Such operations are part of my NN model and will cause performance issues if not done efficiently. Thanks!
Here is a pure PyTorch based solution leveraging broadcasting, torch.unique(), and torch.nonzero(). This gives a great boost, particularly for a GPU-based run, which is not possible if we have to switch back to NumPy, argsort there, and then transfer back to PyTorch (as suggested in other approaches).
# our input tensor
In [50]: A = torch.tensor([0,1,2,3,0,0,1,1,2,2,3,3])
# construct an intermediate boolean tensor
In [51]: boolean = A[:, None] == torch.unique(A)
In [52]: boolean
Out[52]:
tensor([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 0, 0, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 0, 0, 1]], dtype=torch.uint8)
Once we have this boolean tensor, we can find the desired indices by checking for positions where there is a 1 after transposing the boolean tensor.
That would give us both sorted input and the indices. Since we want only the indices, we can just grab those by indexing for the last column (1 or -1)
In [53]: torch.nonzero(boolean.t())[:, -1]
Out[53]: tensor([ 0, 4, 5, 1, 6, 7, 2, 8, 9, 3, 10, 11])
Here's the result for one more example provided by OP in the comments:
In [55]: A_large = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9])
In [56]: boolean_large = A_large[:, None] == torch.unique(A_large)
In [57]: torch.nonzero(boolean_large.t())[:, -1]
Out[57]:
tensor([ 0, 10, 11, 1, 12, 13, 2, 14, 15, 3, 16, 17, 4, 18, 19, 5, 20, 21,
6, 22, 23, 7, 24, 25, 8, 26, 27, 9, 28, 29])
Note: Unlike the NumPy-based solution proposed in other answers, here we don't have to worry about which sorting algorithm to use, because we are not using any sorting at all.
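Side note: on recent PyTorch releases (1.9+), torch.sort accepts a stable flag, which keeps equal elements in their original order and solves this directly:
_, indices = torch.sort(A, stable=True)
# indices: tensor([ 0,  4,  5,  1,  6,  7,  2,  8,  9,  3, 10, 11])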
| https://stackoverflow.com/questions/56176439/ |
How to reproduce RNN results on several runs? | I call the same model on the same input twice in a row and I don't get the same result. This model has nn.GRU layers, so I suspect that it has some internal state that should be released before the second run?
How can I reset the RNN hidden state to make it the same as if the model were initially loaded?
UPDATE:
Some context:
I'm trying to run model from here:
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L93
I'm calling generate:
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L148
Here it's actually have some code using random generator in pytorch:
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L200
https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py#L110
https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py#L129
I have placed (I'm running code on CPU):
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)
in
https://github.com/erogol/WaveRNN/blob/master/utils/distribution.py
after all imports.
I have checked GRU weights between runs and they are the same:
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L153
Also, I have checked the logits and samples between runs: the logits are the same but the samples are not, so @Andrew Naguib seems to have been right about random seeding, but I'm not sure where the code that fixes the random seed should be placed?
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L200
UPDATE 2:
I have placed seed init inside generate and now results are consistent:
https://github.com/erogol/WaveRNN/blob/master/models/wavernn.py#L148
| I believe this may be highly related to Random Seeding. To ensure reproducible results (as stated by them) you have to seed torch as in this:
import torch
torch.manual_seed(0)
And also, the CuDNN module.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
If you're using numpy, you could also do:
import numpy as np
np.random.seed(0)
However, they warn you:
Deterministic mode can have a performance impact, depending on your model.
A suggested script I regularly use which has been working very good to reproduce results is:
# imports
import numpy as np
import random
import torch
# ...
""" Set Random Seed """
if args.random_seed is not None:
"""Following seeding lines of code are to ensure reproducible results
Seeding the two pseudorandom number generators involved in PyTorch"""
random.seed(args.random_seed)
np.random.seed(args.random_seed)
torch.manual_seed(args.random_seed)
# https://pytorch.org/docs/master/notes/randomness.html#cudnn
if not args.cpu_only:
torch.cuda.manual_seed(args.random_seed)
cudnn.deterministic = True
cudnn.benchmark = False
| https://stackoverflow.com/questions/56190274/ |
How to access the predictions of pytorch classification model? (BERT) | I'm running this file:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py
This is the prediction code for one input batch:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(input_ids, segment_ids, input_mask, labels=None)
loss_fct = CrossEntropyLoss()
tmp_eval_loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if len(preds) == 0:
preds.append(logits.detach().cpu().numpy())
else:
preds[0] = np.append(preds[0], logits.detach().cpu().numpy(), axis=0)
The task is a binary classification.
I want to access the binary output.
I've tried this:
curr_pred = logits.detach().cpu()
if len(preds) == 0:
preds.append(curr_pred.numpy())
else:
preds[0] = np.append(preds[0], curr_pred.numpy(), axis=0)
probablities = curr_pred.softmax(1).numpy()[:, 1]
But the results seem weird. So I'm not sure if it's the correct way.
My hypothesis - I'm receiving the output of the last layer, therefore after softmax, it's the true probabilities (a vector of dim 2: the probability of the 1st class and the probability of the 2nd class).
| After looking at this part of the run_classifier.py code:
# copied from the run_classifier.py code
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
if output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif output_mode == "regression":
preds = np.squeeze(preds)
result = compute_metrics(task_name, preds, all_label_ids.numpy())
You are just missing:
preds = preds[0]
preds = np.argmax(preds, axis=1)
Then they just use preds to compute the accuracy as:
def simple_accuracy(preds, labels):
return (preds == labels).mean()
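If you want probabilities rather than hard labels, a sketch (applied to the stacked logits in preds, before the argmax) is to run them through softmax:
import torch
import torch.nn.functional as F
probs = F.softmax(torch.from_numpy(preds), dim=-1).numpy()
positive_probs = probs[:, 1]  # probability of the second class for each example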
| https://stackoverflow.com/questions/56201147/ |
Will installing Nvidia CUDA ver10 break CUDA ver9 on the same machine? | I am currently using pytorch and tensorflow with cuda ver9.0. I am pondering whether to install the latest cuda version 10. Will installing cuda v10 break cuda v9? Can both co-exist on the same desktop PC? Is it advisable to uninstall cuda v9 after installing cuda v10 or is it better to leave both versions installed?
I am using Windows 10.
| I will answer my own question. Installing CUDA v10 will not break CUDA v9 on the same machine. Both can co-exist.
I installed CUDA v10 successfully. Pytorch has been tested to work successfully with CUDA v10.
| https://stackoverflow.com/questions/56204127/ |
Error while converting pytorch model to core-ml | C = torch.cat((A,B),1)
shape of tensors:
A is (1, 128, 128, 256)
B is (1, 1, 128, 256)
Expected C value is (1, 129, 128, 256)
This code works in pytorch, but while converting to core-ml it gives me the below error:
"Error while converting op of type: {}. Error message: {}\n".format(node.op_type, err_message, )
TypeError: Error while converting op of type: Concat. Error message: unable to translate constant array shape to CoreML shape"
| It was coremltools version related issue. Tried with latest beta coremltools 3.0b2.
Following works without any error with latest beta.
import torch
class cat_model(torch.nn.Module):
def __init__(self):
super(cat_model, self).__init__()
def forward(self, a, b):
c = torch.cat((a, b), 1)
# print(c.shape)
return c
a = torch.randn((1, 128, 128, 256))
b = torch.randn((1, 1, 128, 256))
model = cat_model()
torch.onnx.export(model, (a, b), 'cat_model.onnx')
import onnx
model = onnx.load('cat_model.onnx')
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))
from onnx_coreml import convert
mlmodel = convert(model)
| https://stackoverflow.com/questions/56217454/ |
How to properly convert pytorch LSTM to keras CuDNNLSTM? | I am trying to hand-convert a Pytorch model to Tensorflow for deployment. ONNX doesn't seem to natively go from Pytorch LSTMs to Tensorflow CuDNNLSTMs so that's why I'm writing it by hand.
I've tried the code below:
This is running in an Anaconda environment running Python 2.7, Pytorch 1.0, tensorflow 1.12, cuda9. I'm running this with no bias in the Pytorch layer as it follows a batchnorm, but since Keras does not provide that option I'm simply assigning a 0 bias.
import torch
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import CuDNNLSTM, Bidirectional
from tensorflow.keras.models import Sequential, Model
input_size = 80
hidden_size = 512
with torch.no_grad():
rnn1 = torch.nn.LSTM(input_size=input_size, hidden_size=hidden_size, bidirectional=True, bias=False, batch_first=True).cuda()
model = Sequential()
model.add(Bidirectional(CuDNNLSTM(hidden_size, return_sequences=True), input_shape=(None, input_size), name='rnn'))
bias_size = rnn1.weight_hh_l0.detach().cpu().numpy().T.shape[1] * 2
keras_format_weights = [
rnn1.weight_ih_l0.detach().cpu().numpy().T,
rnn1.weight_hh_l0.detach().cpu().numpy().T,
np.zeros(bias_size,),
rnn1.weight_ih_l0_reverse.detach().cpu().numpy().T,
rnn1.weight_hh_l0_reverse.detach().cpu().numpy().T,
np.zeros(bias_size,),
]
model.layers[0].set_weights(keras_format_weights)
random_test = np.random.rand(1, 1, 80)
res1, _ = rnn1.forward(torch.FloatTensor(random_test).cuda())
res1 = res1.detach().cpu().numpy()
res2 = model.predict(random_test)
print(np.allclose(res1, res2, atol=1e-2))
print(res1)
print(res2)
False
[[[ 0.01265562 0.07478553 0.0470101 ... -0.02260824 0.0243004
-0.0261014 ]]]
[[[-0.05316251 -0.00230848 0.03070898 ... 0.01497027 0.00976444
-0.01095549]]]
Now, this does work with the generic Keras LSTM:
model = Sequential()
model.add(Bidirectional(LSTM(hidden_size, recurrent_activation='sigmoid', return_sequences=True), input_shape=(None, input_size), name='rnn'))
bias_size = rnn1.weight_hh_l0.detach().cpu().numpy().T.shape[1]
keras_format_weights = [
rnn1.weight_ih_l0.detach().cpu().numpy().T,
rnn1.weight_hh_l0.detach().cpu().numpy().T,
np.zeros((bias_size,)),
rnn1.weight_ih_l0_reverse.detach().cpu().numpy().T,
rnn1.weight_hh_l0_reverse.detach().cpu().numpy().T,
np.zeros((bias_size,))
]
But I need the speed advantages of the CuDNNLSTM, and Pytorch is using the same backend anyway.
| Update: the solution was to convert the torch model into a keras base LSTM model, then call
base_lstm_model.save_weights('1.h5')
cudnn_lstm_model.load_weights('1.h5')
| https://stackoverflow.com/questions/56230511/ |
how to increase number of images with data augmentation | I'm trying to apply data augmentation with pytorch. In particular, I have a dataset of 150 images and I want to apply 5 transformations (horizontal flip, 3 rotations and vertical flip) to every single image to have 750 images, but with my code I always have 150 images.
'train': transforms.Compose([
transforms.Resize(224),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(degrees = (90,90)),
transforms.RandomRotation(degrees = (180,180)),
transforms.RandomRotation(degrees = (270,270)),
transforms.RandomVerticalFlip(p=1),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
| You're misunderstanding the API. When you add some transform to your dataset, it is essentially a function which is being applied to every sample from that dataset and then returned. transforms.Compose applies sub-transforms sequentially, rather than returning multiple results (with each translation either being applied or not). So
transforms.Compose([
transforms.RandomRotation(degrees = (90, 90)),
transforms.RandomRotation(degrees = (180, 180)),
])
will just rotate your image once at a random angle between 90 and 90 degrees (in other words, by exactly 90 degrees) and then again by 180. This is equivalent to a single RandomRotation(degrees=(270, 270)) (it is actually worse because it leads to more data corruption in the process).
So, most transforms are as above - "linear" - one input, one output. There are some "forking" transforms which produce more outputs than inputs. An example is FiveCrop. Please pay attention to its note on how to deal with that. Even with "forking" transforms, you will still get the same number of items in your dataset, it's just that your batches will be bigger.
If you specifically want to have a dataset which contains 4 differently rotated copies of each item and yields them randomly (ie. possibly each rotated variant comes in a different batch), you will have to write some custom data loading logic. For that, you may want to base your work on source of DatasetFolder.
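For illustration, here is a minimal sketch of such custom logic (the class name is mine, not torchvision's); it wraps a base dataset yielding (PIL image, label) pairs and exposes 4 rotated copies of every item, so any remaining transforms (Resize, ToTensor, Normalize) can still be applied afterwards:
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class RotatedCopies(Dataset):
    def __init__(self, base_dataset):
        self.base = base_dataset
    def __len__(self):
        return 4 * len(self.base)
    def __getitem__(self, idx):
        img, label = self.base[idx // 4]
        angle = 90 * (idx % 4)  # 0, 90, 180 or 270 degrees
        return TF.rotate(img, angle), label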
Why is the API made the way it is? In practice, most people are fine with the transforms as they are currently - in your place, they would just write a transform which randomly flips by 0, 90, 180 or 270 degrees and then train their network for 4 times more epochs than you would, on average getting one sample of each.
| https://stackoverflow.com/questions/56235136/ |
Is there a `tensor` operation or function in Pytorch that works like cv2.dilate in OpenCV? | I built several masks through a network. These masks are stored in a torch.tensor variable. I would like to do a cv2.dilate like operation on every channel of the tensor.
I know there is a way to convert the tensor to a numpy.ndarray and then apply cv2.dilate to every channel using a for loop. But since there are about 32 channels, this method might slow down the forward operation in the network.
| I think dilate is essentially a conv2d operation in torch. See the code below
import cv2
import numpy as np
import torch
im = np.array([ [0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 0] ], dtype=np.float32)
kernel = np.array([ [1, 1, 1],
[1, 1, 1],
[1, 1, 1] ], dtype=np.float32)
print(cv2.dilate(im, kernel))
# [[1. 1. 1. 0. 0.]
# [1. 1. 1. 1. 0.]
# [1. 1. 1. 1. 1.]
# [1. 1. 1. 1. 1.]
# [0. 0. 1. 1. 1.]]
im_tensor = torch.Tensor(np.expand_dims(np.expand_dims(im, 0), 0)) # size:(1, 1, 5, 5)
kernel_tensor = torch.Tensor(np.expand_dims(np.expand_dims(kernel, 0), 0)) # size: (1, 1, 3, 3)
torch_result = torch.clamp(torch.nn.functional.conv2d(im_tensor, kernel_tensor, padding=(1, 1)), 0, 1)
print(torch_result)
# tensor([[[[1., 1., 1., 0., 0.],
# [1., 1., 1., 1., 0.],
# [1., 1., 1., 1., 1.],
# [1., 1., 1., 1., 1.],
# [0., 0., 1., 1., 1.]]]])
| https://stackoverflow.com/questions/56235733/ |
How to compute gradient of the error with respect to the model input? | Given a simple 2 layer neural network, the traditional idea is to compute the gradient w.r.t. the weights/model parameters. For an experiment, I want to compute the gradient of the error w.r.t the input. Are there existing Pytorch methods that can allow me to do this?
More concretely, consider the following neural network:
import torch.nn as nn
import torch.nn.functional as F
class NeuralNet(nn.Module):
def __init__(self, n_features, n_hidden, n_classes, dropout):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(n_features, n_hidden)
self.sigmoid = nn.Sigmoid()
self.fc2 = nn.Linear(n_hidden, n_classes)
self.dropout = dropout
def forward(self, x):
x = self.sigmoid(self.fc1(x))
x = F.dropout(x, self.dropout, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
I instantiate the model and an optimizer for the weights as follows:
import torch.optim as optim
model = NeuralNet(n_features=args.n_features,
n_hidden=args.n_hidden,
n_classes=args.n_classes,
dropout=args.dropout)
optimizer_w = optim.SGD(model.parameters(), lr=0.001)
While training, I update the weights as usual. Now, given that I have values for the weights, I should be able to use them to compute the gradient w.r.t. the input. I am unable to figure out how.
def train(epoch):
t = time.time()
model.train()
optimizer.zero_grad()
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
acc_train = accuracy(output[idx_train], labels[idx_train])
loss_train.backward()
optimizer_w.step()
# grad_features = loss_train.backward() w.r.t to features
# features -= 0.001 * grad_features
for epoch in range(args.epochs):
train(epoch)
| It is possible, just set input.requires_grad = True for each input batch you're feeding in, and then after loss.backward() you should see that input.grad holds the expected gradient. In other words, if your input to the model (which you call features in your code) is some M x N x ... tensor, features.grad will be a tensor of the same shape, where each element of grad holds the gradient with respect to the corresponding element of features. In my comments below, I use i as a generalized index - if your parameters have, for instance, 3 dimensions, replace it with features.grad[i, j, k], etc.
Regarding the error you're getting: PyTorch operations build a tree representing the mathematical operation they are describing, which is then used for differentiation. For instance c = a + b will create a tree where a and b are leaf nodes and c is not a leaf (since it results from other expressions). Your model is the expression, and its inputs as well as parameters are the leaves, whereas all intermediate and final outputs are not leaves. You can think of leaves as "constants" or "parameters" and of all other variables as of functions of those. This message tells you that you can only set requires_grad of leaf variables.
Your problem is that at the first iteration, features is random (or however else you initialize) and is therefore a valid leaf. After your first iteration, features is no longer a leaf, since it becomes an expression calculated based on the previous ones. In pseudocode, you have
f_1 = initial_value # valid leaf
f_2 = f_1 + your_grad_stuff # not a leaf: f_2 is a function of f_1
to deal with that you need to use detach, which breaks the links in the tree, and makes the autograd treat a tensor as if it was constant, no matter how it was created. In particular, no gradient calculations will be backpropagated through detach. So you need something like
features = features.detach() - 0.01 * features.grad
Note: perhaps you need to sprinkle a couple more detaches here and there, which is hard to say without seeing your whole code and knowing the exact purpose.
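Putting it together with the names from your train function, a minimal sketch of one iteration that updates both the weights and the input could look like this (the 0.001 step size is just an example):
features = features.detach().requires_grad_(True)  # a leaf tensor that tracks gradients
optimizer_w.zero_grad()
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
loss_train.backward()  # fills both the parameter grads and features.grad
optimizer_w.step()
with torch.no_grad():
    features = features - 0.001 * features.grad  # gradient step on the input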
| https://stackoverflow.com/questions/56238599/ |
Loading custom dataset in pytorch | Normally, when we are loading data in pytorch, we do the following:
for x, y in dataloaders:
# Do something
However, in this dataset called MusicNet, they declare their own dataset and dataloader like this
train_set = musicnet.MusicNet(root=root, train=True, download=True, window=window)#, pitch_shift=5, jitter=.1)
test_set = musicnet.MusicNet(root=root, train=False, window=window, epoch_size=50000)
train_loader = torch.utils.data.DataLoader(dataset=train_set,batch_size=batch_size,**kwargs)
test_loader = torch.utils.data.DataLoader(dataset=test_set,batch_size=batch_size,**kwargs)
Then they load the data like this
with train_set, test_set:
for i, (x, y) in enumerate(train_loader):
# Do something
Question 1
I don't understand why the code doesn't work without the line with train_set, test_set.
Question 2
Also, how do I access the data?
I tried
train_set.access(2560,0)
and
with train_set, test_set:
x, y = train_set.access(2560,0)
They either give me an error message like
KeyError Traceback (most recent call last) in
----> 1 train_set.access(2560,0)
/workspace/raven_data/AMT/MusicNet/pytorch_musicnet/musicnet.py in
access(self, rec_id, s, shift, jitter) 106 107 if self.mmap:
--> 108 x = np.frombuffer(self.records[rec_id][0][ssz_float:int(s+scaleself.window)*sz_float],
dtype=np.float32).copy() 109 else: 110 fid,_ = self.records[rec_id]
KeyError: 2560
or give me an empty x and y.
|
Question 1
I don't understand why the code doesn't work without the line with train_set, test_set.
For you to be able to use the torch.utils.data.DataLoader with a custom dataset design, you must create a class of your dataset which subclasses torch.utils.data.Dataset (and implementing specific functions) and pass it to the dataloader, even they say so:
All other datasets should subclass it. All subclasses should override __len__, that provides the size of the dataset, and __getitem__, supporting integer indexing in range from 0 to len(self) exclusive.
This is what happens in:
train_set = musicnet.MusicNet(root=root, train=True, download=True, window=window)#, pitch_shift=5, jitter=.1)
test_set = musicnet.MusicNet(root=root, train=False, window=window, epoch_size=50000)
train_loader = torch.utils.data.DataLoader(dataset=train_set,batch_size=batch_size,**kwargs)
test_loader = torch.utils.data.DataLoader(dataset=test_set,batch_size=batch_size,**kwargs)
If you check their musicnet.MusicNet, you will find that they do so.
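For reference, a minimal custom dataset only has to implement those two methods; a bare-bones sketch:
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, data, targets):
        self.data = data
        self.targets = targets
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return self.data[idx], self.targets[idx]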
Question 2
Also, how do I access the data?
There are a couple of possible ways:
To get only a batch from the dataset, you can do:
batch = next(iter(train_loader))
To access the whole dataset (especially, in your example):
dataset = train_loader.dataset.records
(The .records part is what may vary from one dataset to another; I said .records because this is what I found in here)
| https://stackoverflow.com/questions/56238732/ |
BucketIterator throws 'Field' object has no attribute 'vocab' | It's not a new question; I found references (first and second) but none of their solutions worked for me.
I'm a newbie to PyTorch, facing AttributeError: 'Field' object has no attribute 'vocab' while creating batches of text data in PyTorch using torchtext.
Following the book Deep Learning with PyTorch, I wrote the same example as explained in the book.
Here's the snippet:
from torchtext import data
from torchtext import datasets
from torchtext.vocab import GloVe
TEXT = data.Field(lower=True, batch_first=True, fix_length=20)
LABEL = data.Field(sequential=False)
train, test = datasets.IMDB.splits(TEXT, LABEL)
print("train.fields:", train.fields)
print()
print(vars(train[0])) # prints the object
TEXT.build_vocab(train, vectors=GloVe(name="6B", dim=300),
max_size=10000, min_freq=10)
# VOCABULARY
# print(TEXT.vocab.freqs) # freq
# print(TEXT.vocab.vectors) # vectors
# print(TEXT.vocab.stoi) # Index
train_iter, test_iter = data.BucketIterator.splits(
(train, test), batch_size=128, device=-1, shuffle=True, repeat=False) # -1 for cpu, None for gpu
# Not working (FROM BOOK)
# batch = next(iter(train_iter))
# print(batch.text)
# print()
# print(batch.label)
# This also not working (FROM Second solution)
for i in train_iter:
print (i.text)
print (i.label)
Here's the stacktrace:
AttributeError Traceback (most recent call last)
<ipython-input-33-433ec3a2ca3c> in <module>()
7
8
----> 9 for i in train_iter:
10 print (i.text)
11 print (i.label)
/anaconda3/lib/python3.6/site-packages/torchtext/data/iterator.py in __iter__(self)
155 else:
156 minibatch.sort(key=self.sort_key, reverse=True)
--> 157 yield Batch(minibatch, self.dataset, self.device)
158 if not self.repeat:
159 return
/anaconda3/lib/python3.6/site-packages/torchtext/data/batch.py in __init__(self, data, dataset, device)
32 if field is not None:
33 batch = [getattr(x, name) for x in data]
---> 34 setattr(self, name, field.process(batch, device=device))
35
36 @classmethod
/anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in process(self, batch, device)
199 """
200 padded = self.pad(batch)
--> 201 tensor = self.numericalize(padded, device=device)
202 return tensor
203
/anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in numericalize(self, arr, device)
300 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
301 else:
--> 302 arr = [self.vocab.stoi[x] for x in arr]
303
304 if self.postprocessing is not None:
/anaconda3/lib/python3.6/site-packages/torchtext/data/field.py in <listcomp>(.0)
300 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
301 else:
--> 302 arr = [self.vocab.stoi[x] for x in arr]
303
304 if self.postprocessing is not None:
AttributeError: 'Field' object has no attribute 'vocab'
If not using BucketIterator, what else can I use to get a similar output?
| You haven't built the vocab for the LABEL field.
After TEXT.build_vocab(train, ...), run LABEL.build_vocab(train), and the rest will run.
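Concretely, the vocab-building part of your snippet becomes:
TEXT.build_vocab(train, vectors=GloVe(name="6B", dim=300),
                 max_size=10000, min_freq=10)
LABEL.build_vocab(train)  # this line was missing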
| https://stackoverflow.com/questions/56251267/ |
Pytorch - Inferring linear layer in_features | I am building a toy model to take in some images and give me a classification. My model looks like:
conv2d -> pool -> conv2d -> linear -> linear.
My issue is that when we create the model, we have to calculate the size of the first linear layer in_features based on the size of the input image. If we get new images of different sizes, we have to recalculate in_features for our linear layer. Why do we have to do this? Can't it just be inferred?
| As of 1.8, PyTorch now has LazyLinear which infers the input dimension:
A torch.nn.Linear module where in_features is inferred.
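A short usage sketch (layer sizes are placeholders) showing how it slots into a conv -> linear model:
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(10),  # in_features is inferred on the first forward pass
)
out = model(torch.randn(1, 1, 28, 28))  # materializes the lazy layer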
| https://stackoverflow.com/questions/56262712/ |
Is it possible to use Keras module in Pytorch script? | I'm creating a network similar to lpcnet introduced by mozilla, but in PyTorch. At one point I need the module src/mdense.py in my script, but it is written with keras. Is it possible to import a keras module into a torch framework? What would be the easiest way? I don't want to re-write the module with PyTorch.
There exist some converters, but as far as I can see, these are in early stages. Is there any I can depend on?
| No, not really, you can import it (as in python import), but Keras code won't work with PyTorch as they use different differentiation methods and they are completely different libraries. The only way is to rewrite the module using PyTorch's API.
| https://stackoverflow.com/questions/56270834/ |
Pytorch RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0 | I'm building a simple content based recommendations system. In order to compute the cosine similarity in a GPU accelerated way, I'm using Pytorch.
At the time of creating the tfidf vocabulary tensor from a csr_matrix, it prompts the following RuntimeError:
RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0
I'm doing it in this way:
coo = tfidf_matrix.tocoo()
values = coo.data
indices = np.vstack( (coo.row, coo.col ))
i = torch.LongTensor(indices)
v = torch.FloatTensor(values)
tfidf_matrix_tensor = torch.sparse.FloatTensor(i, v, torch.Size(coo.shape)).to_dense()
# Prompts the error
I tried with a small test (tfidf matrix size = 10,296) dataset and it works.
The tfidf matrix size from the real dataset is (27639, 226957)
| I tried the same piece of code that was throwing this error with the older version of PyTorch. It said that I need to have more RAM. Therefore, it's not a PyTorch bug. The only solution is to reduce matrix size somehow.
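One way to reduce it, sketched below and untested against your exact data: skip the .to_dense() call entirely and keep the TF-IDF matrix sparse. torch.sparse.mm multiplies a sparse matrix by a dense one, which is enough for a cosine-similarity computation (dense_query here is a hypothetical dense matrix of query vectors):
tfidf_sparse = torch.sparse.FloatTensor(i, v, torch.Size(coo.shape))
# sparse @ dense: avoids materializing the full 27639 x 226957 dense matrix
sims = torch.sparse.mm(tfidf_sparse, dense_query.t())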
| https://stackoverflow.com/questions/56272981/ |
How can I get predictions from these pretrained models? | I've been trying to generate human pose estimations. I came across many pretrained models (ex. Pose2Seg, deep-high-resolution-net), however these models only include scripts for training and testing; this seems to be the norm in code written to implement models from research papers. In deep-high-resolution-net I have tried to write a script to load the pretrained model and feed it my images, but the output I got was a bunch of tensors and I have no idea how to convert them to the .json annotations that I need.
total newbie here, sorry for my poor English in advance, ANY tips are appreciated.
I would include my script but it's over 100 lines.
PS: is it polite to contact the authors and ask them if they can help?
because it seems a little distasteful.
| Im not doing skeleton detection research, but your problem seems to be general.
(1) I dont think other people should teaching you from begining on how to load data and run their code from begining.
(2) For running other peoples code, just modify their test script which is provided e.g
https://github.com/leoxiaobin/deep-high-resolution-net.pytorch/blob/master/tools/test.py
They already helps you loaded the model
model = eval('models.'+cfg.MODEL.NAME+'.get_pose_net')(
cfg, is_train=False
)
if cfg.TEST.MODEL_FILE:
logger.info('=> loading model from {}'.format(cfg.TEST.MODEL_FILE))
model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
else:
model_state_file = os.path.join(
final_output_dir, 'final_state.pth'
)
logger.info('=> loading model from {}'.format(model_state_file))
model.load_state_dict(torch.load(model_state_file))
model = torch.nn.DataParallel(model, device_ids=cfg.GPUS).cuda()
Just call
# evaluate on Variable x with testing data
y = model(x)
# access Variable's tensor, copy back to CPU, convert to numpy
arr = y.data.cpu().numpy()
# write CSV
np.savetxt('output.csv', arr)
You should be able to open it in excel
(3) "convert them to the .json annotations that I need".
That's the problem nobody can help. We don't know what format you want. For their format, it can be obtained either by their paper. Or looking at their training data by
X, y = torch.load('some_training_set_with_labels.pt')
By correlating the x and y. Then you should have a pretty good idea.
| https://stackoverflow.com/questions/56284107/ |
Unexpected data types when trying to train a pytorch model | I'm putting together a basic neural network to learn pytorch. Attempting to train it always fails with the message "Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'". I suspect I'm doing something wrong with putting the data together, but I don't know what.
The data in question is a couple of one-dimensional lists of numbers that I've generated, which should be linearly separable.
I've pasted my code below.
class MyDataset(Dataset):
def __init__(self, xs, ys):
assert len(xs) == len(ys), "Input and output tensors must be the same length"
self.xs = np.array(xs, dtype=np.double)
self.ys = np.array(ys, dtype=np.double)
def __getitem__(self, idx):
return (self.xs[idx], self.ys[idx])
def __len__(self):
return len(self.xs)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer1 = nn.Linear(1, 1)
def forward(self, x):
x = F.relu(self.layer1(x))
return x
def train(data, validation, net, epochs=100):
learning_rate = 0.01
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criterion = nn.MSELoss()
for epoch in range(0, epochs):
print('Beginning epoch ', epoch+1)
training_losses = []
validation_losses = []
for x_batch, y_batch in data:
optimizer.zero_grad()
yhat = net(x_batch)
loss = criterion(y_batch, yhat)
loss.backward()
optimizer.step()
optimizer.zero_grad()
training_losses.append(loss)
with torch.no_grad():
for x_batch, y_batch in validation:
net.eval()
yhat = net(x_batch)
loss = criterion(y_batch, yhat)
validation_losses.append(loss)
print('Ending epoch ', epoch+1, 'Training loss: ', np.mean(training_losses), 'Validation loss: ', np.mean(validation_losses))
And this is how I'm generating the data and attempting to train it:
num_samples = 10000
foos = [100 + np.random.normal(scale=20) for x in range(0, num_samples)]
bars = [200 + np.random.normal(scale=20) for x in range(0, num_samples)]
xs = foos + bars
xs = torch.tensor([[x] for x in xs])
ys = np.concatenate([np.zeros(num_samples), np.ones(num_samples)])
ys = torch.tensor([[y] for y in ys])
dataset = MyDataset(xs, ys)
train_dataset, val_dataset = random_split(dataset, [16000, 4000])
train_loader = DataLoader(dataset=train_dataset, batch_size=16)
val_loader = DataLoader(dataset=val_dataset, batch_size=20)
net = Net()
train(train_loader, val_loader, net)
Finally, here's the stack trace:
<ipython-input-114-ab674ae015a5> in train(data, validation, net, epochs)
13 print('x_batch: ', type(x_batch[0].item()))
14 print('y_batch: ', type(y_batch[0].item()))
---> 15 yhat = net(x_batch)
16 loss = criterion(y_batch, yhat)
17 loss.backward()
/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
<ipython-input-58-ec2e6d981760> in forward(self, x)
5
6 def forward(self, x):
----> 7 x = F.relu(self.layer1(x))
8 return x
/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
65 @weak_script_method
66 def forward(self, input):
---> 67 return F.linear(input, self.weight, self.bias)
68
69 def extra_repr(self):
/usr/local/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1350 if input.dim() == 2 and bias is not None:
1351 # fused op is marginally faster
-> 1352 ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
1353 else:
1354 output = input.matmul(weight.t())
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #4 'mat1'
I've attempted to debug by logging the types of x_batch and y_batch from within the train method, but they're both showing as float, so I'm stumped as to where the Double is coming from.
Any suggestions?
| PyTorch uses single-precision floats by default.
In the lines:
self.xs = np.array(xs, dtype=np.double)
self.ys = np.array(ys, dtype=np.double)
Replace np.double with np.float32.
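Alternatively (a sketch), you can keep the data as float64 and cast either the batches or the model instead:
yhat = net(x_batch.float())  # cast each batch to float32 on the fly
# ...or move the whole model to double precision once:
net = Net().double()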
| https://stackoverflow.com/questions/56297496/ |
How can I convert numpy.ndarray having type object to torch.tensor? | I'm trying to work on lstm in pytorch. It takes only tensors as the input. The data that I have is in the form of a numpy.object_ and if I convert this to a numpy.float, then it can be converted to tensor.
I checked the data type using print(type(array)); it gives class 'numpy.ndarray' as output, and print(arr.dtype.type) gives class 'numpy.object_' as output.
Or is there any way to convert tuple directly to torch.tensor?
| The pytorch LSTM returns a tuple. So you get this error because your second LSTM layer self.seq2 cannot handle this tuple. So,
change
prefix1=self.seq1(input1)
suffix1=self.seq1(input2)
to something like this:
prefix1_out, prefix1_states = self.seq1(input1)
suffix1_out, suffix1_states = self.seq1(input2)
and then pass prefix1_out and suffix1_out tensors to the next LSTM layers as
prefix2_out, prefix2_states = self.seq2(prefix1_out)
suffix2_out, suffix2_states = self.seq2(suffix1_out)
And, concat the prefix2_out and suffix2_out tensors like this:
result = torch.cat([prefix2_out, suffix2_out], 1)
Also, change
r1=F.sigmoid(self.fc1(result))
r2=self.fc2(r1)
to something like this:
out_ll = self.fc1(result)
r1 = nn.Sigmoid()
r2 = self.fc2(r1(out_ll))
| https://stackoverflow.com/questions/56325104/ |
Pyinstaller executable fails importing torchvision | This is my main.py:
import torchvision
input("Press key")
It runs correctly in the command line: python main.py
I need an executable for windows. So I did : pyinstaller main.py
But when I launched the main.exe, inside /dist/main I got this error:
Traceback (most recent call last):
File "main.py", line 1, in <module>
... (omitted)
File "site-packages\torchvision\ops\misc.py", line 135, in <module>
File "site-packages\torchvision\ops\misc.py", line 148, in FrozenBatchNorm2d
File "site-packages\torch\jit\__init__.py", line 850, in script_method
File "site-packages\torch\jit\frontend.py", line 152, in get_jit_def
File "inspect.py", line 973, in getsource
File "inspect.py", line 955, in getsourcelines
File "inspect.py", line 786, in findsource
OSError: could not get source code
[2836] Failed to execute script main
It seems that some source code is not correctly imported by pyinstaller. I am not sure if the problem is the torch module or torchvision.
Additional info:
I recently installed Visual Studio 2019
System info:
Window 10
Python 3.7
torch-1.1.0
torchvision-0.3.0
[EDIT]
I found that the problem is in the definition of the class FrozenBatchNorm2d inside torchvision. The following script produce the same error as the one before posted:
main.py
import torch
class FrozenBatchNorm2d(torch.jit.ScriptModule):
def __init__(self, n):
super(FrozenBatchNorm2d, self).__init__()
@torch.jit.script_method
def forward(self):
pass
I copied all the torch source files. But I still got the error...
| Downgrade torchvision to the previous version fix the error.
pip uninstall torchvision
pip install torchvision==0.2.2.post3
| https://stackoverflow.com/questions/56325181/ |
How does python map work with torch.tensor? | I am new to python, so I am trying to understand this line from a pytorch tutorial.
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
I understand how map works on a single element
def sqr(a):
return a * a
a = [1, 2, 3, 4]
a = map(sqr, a)
print(list(a))
And here I need to use list(a) to convert the map object back to a list.
But what I don't understand, is how does it work on multiple variables?
If I try to do this
def sqr(a):
return a * a
a = [1, 2, 3, 4]
b = [1, 3, 5, 7]
a, b = map(sqr, (a, b))
print(list(a))
print(list(b))
I get an error: TypeError: can't multiply sequence by non-int of type 'list'
Please clarify this for me
Thank you
| map works on a single the same way it works on list/tuple of lists, it fetches an element of the given input regardless what is it.
The reason why torch.tensor works, is that it accepts a list as input.
If you unfold the following line you provided:
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
it's the same as doing:
x_train, y_train, x_valid, y_valid = [torch.tensor(x_train), torch.tensor(y_train), torch.tensor(x_valid), torch.tensor(y_valid)]
On the other hand, your sqr function does not accept lists. It expects a scalar type to square, which is not the case for your a and b: they are lists.
However, if you change sqr to:
def sqr(a):
return [s * s for s in a]
a = [1, 2, 3, 4]
b = [1, 3, 5, 7]
a, b = map(sqr, (a, b))
or, as suggested by @Jean, keep the scalar sqr and use a, b = (list(map(sqr, x)) for x in (a, b))
It will work.
| https://stackoverflow.com/questions/56327213/ |
pytorch masked_fill: why can't I mask all zeros? | I want to mask all the zeros in the score matrix with -np.inf, but I can only get part of the zeros masked. It looks like
you can see in the upper right corner there are still zeros that didn't get masked with -np.inf.
Here's my codes:
q = torch.Tensor([np.random.random(10),np.random.random(10),np.random.random(10), np.random.random(10), np.zeros((10,1)), np.zeros((10,1))])
k = torch.Tensor([np.random.random(10),np.random.random(10),np.random.random(10), np.random.random(10), np.zeros((10,1)), np.zeros((10,1))])
scores = torch.matmul(q, k.transpose(0,1)) / math.sqrt(10)
mask = torch.Tensor([1,1,1,1,0,0])
mask = mask.unsqueeze(1)
scores = scores.masked_fill(mask==0, -np.inf)
Maybe the mask is wrong?
| Your mask is wrong. Try
scores = scores.masked_fill(scores == 0, -np.inf)
scores now looks like
tensor([[1.4796, 1.2361, 1.2137, 0.9487, -inf, -inf],
[0.6889, 0.4428, 0.6302, 0.4388, -inf, -inf],
[0.8842, 0.7614, 0.8311, 0.6431, -inf, -inf],
[0.9884, 0.8430, 0.7982, 0.7323, -inf, -inf],
[ -inf, -inf, -inf, -inf, -inf, -inf],
[ -inf, -inf, -inf, -inf, -inf, -inf]])
| https://stackoverflow.com/questions/56328630/ |
Why does PyTorch gather function require index argument to be of type LongTensor? | I'm writing some code in PyTorch and I came across the gather function. Checking the documentation I saw that the index argument takes in a LongTensor, why is that? Why does it need to take in a LongTensor instead of another type such as IntTensor? What are the benefits?
| By default all indices in pytorch are represented as long tensors - allowing for indexing very large tensors beyond just 4GB elements (maximal value of "regular" int).
| https://stackoverflow.com/questions/56335215/ |
How to properly display Pytorch's math notation inside docstring? | When looking at the docstring of Pytorch functions, the math notations are not properly displayed, e.g.:
https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html
.. math::
\text{loss}(x, class) = weight[class] \left(-x[class] + \log\left(\sum_j \exp(x[j])\right)\right)
Only when I use my IDE to display the documentation does it render the latex notation properly.
Is there any switch for displaying math on the website?
| The docs in source code won't render unless you try some script injection via Greasemonkey or so.
But first have a look at the standard docs where you can find the rendered formula.
| https://stackoverflow.com/questions/56337956/ |
Matrix multiplication (element-wise) from numpy to Pytorch | I got two numpy arrays (image and and environment map),
MatA
MatB
Both with shapes (256, 512, 3)
When I did the multiplication (element-wise) with numpy:
prod = np.multiply(MatA,MatB)
I got the wanted result (visualize via Pillow when turning back to Image)
But when I did it using pytorch, I got a really strange result(not even close to the aforementioned).
I did it with the following code:
MatATensor = transforms.ToTensor()(MatA)
MatBTensor = transforms.ToTensor()(MatB)
prodTensor = MatATensor * MatBTensor
For some reasons, the shape for both MatATensor and MatBtensor is
torch.Size([3, 256, 512])
Same for the prodTensor too.
When I tried to reshape to (256,512,3), I got an error.
Is there a way to get the same result?
I am new to pytorch, so any help would be appreciated.
| If you read the documentation of transforms.ToTensor() you'll see this transformation does not only convert a numpy array to torch.FloatTensor, but also transpose its dimensions from HxWx3 to 3xHxW.
To "undo" this you'll need to
prodasNp = (prodTensor.permute(1, 2, 0) * 255).to(torch.uint8).numpy()
See permute for more information.
| https://stackoverflow.com/questions/56342193/ |
free(): invalid pointer Aborted (core dumped) | I-m trying to run my python program it seems that it should run smoothly however I encounter an error that I haven't seen before it says:
free(): invalid pointer
Aborted (core dumped)
However, I'm not sure how to fix the error since it doesn't give me much information about the problem itself.
At first I thought it was a problem with the sizes of the tensors in my network, however they are completely fine. I've googled the problem a little and found that it is a problem with allocating memory where I shouldn't, but I don't know how to fix it.
My code is divided into two files, and I use two libraries: one for the Sinkhorn loss function and one to randomly sample a mesh.
import argparse
import point_cloud_utils as pcu
import time
import numpy as np
import torch
import torch.nn as nn
from fml.nn import SinkhornLoss
import common
def main():
# x is a tensor of shape [n, 3] containing the positions of the vertices that
x = torch._C.from_numpy(common.loadpointcloud("sphere.txt"))
# t is a tensor of shape [n, 3] containing a set of nicely distributed samples in the unit cube
v, f = common.unit_cube()
t = torch._C.sample_mesh_lloyd(pcu.lloyd(v,f,x.shape[0]).astype(np.float32)) # sample randomly a point cloud (cube for now?)
# The model is a simple fully connected network mapping a 3D parameter point to 3D
phi = common.MLP(in_dim=3, out_dim=3)
# Eps is 1/lambda and max_iters is the maximum number of Sinkhorn iterations to do
emd_loss_fun = SinkhornLoss(eps=1e-3, max_iters=20,
stop_thresh=1e-3, return_transport_matrix=True)
mse_loss_fun = torch.nn.MSELoss()
# Adam optimizer at first
optimizer = torch.optim.Adam(phi.parameters(), lr= 10e-3)
fit_start_time = time.time()
for epoch in range(100):
optimizer.zero_grad()
# Do the forward pass of the neural net, evaluating the function at the parametric points
y = phi(t)
# Compute the Sinkhorn divergence between the reconstruction (using the francis library) and the target
# NOTE: The Sinkhorn function expects a batch of b point sets (i.e. tensors of shape [b, n, 3])
# since we only have 1, we unsqueeze so x and y have dimension [1, n, 3]
with torch.no_grad():
_, P = emd_loss_fun(phi(t).unsqueeze(0), x.unsqueeze(0))
# Project the transport matrix onto the space of permutation matrices and compute the L-2 loss
# between the permuted points
loss = mse_loss_fun(y[P.squeeze().max(0)[1], :], x)
# loss = mse_loss_fun(P.squeeze() @ y, x) # Use the transport matrix directly
# Take an optimizer step
loss.backward()
optimizer.step()
print("Epoch %d, loss = %f" % (epoch, loss.item()))
fit_end_time = time.time()
print("Total time = %f" % (fit_end_time - fit_start_time))
# Plot the ground truth, reconstructed points, and a mesh representing the fitted function, phi
common.visualitation(x,t,phi)
if __name__ == "__main__":
main()
The error message is:
free(): invalid pointer
Aborted (core dumped)
That again doesn't help me much. I'd appreciate it a lot if someone has any idea what is happening or knows more about this error.
| Edit: the cause has since been identified. The recommended solution is to build both packages from source.
There is a known issue with importing both open3d and PyTorch, whose root cause was initially unknown: https://github.com/pytorch/pytorch/issues/19739
A few possible workarounds exist:
(1) Some people have found that changing the order in which you import the two packages can resolve the issue, though in my personal testing both ways crash.
(2) Other people have found compiling both packages from source to help.
(3) Still others have found that moving open3d and PyTorch to be called from separate scripts resolves the issue.
| https://stackoverflow.com/questions/56346569/ |
What is the difference between model.training = False and model.param.requires_grad = False | What is the difference between these two:
model.training = False
and
for param in model.parameters():
param.requires_grad = False
| model.training = False sets the module in evaluation mode, i.e.,
if model.training == True:
# Train mode
if model.training == False:
# Evaluation mode
So, effectively layers like dropout, batchnorm etc. which behave different on the train and test procedures know what is going on and hence can behave accordingly.
while
for param in model.parameters():
param.requires_grad = False
freeze the layers so that these layers are not trainable.
The basic idea is that all models have a function model.children() which returns its layers. Within each layer, there are parameters (or weights), which can be obtained using .parameters() on any child (i.e. layer). Now, every parameter has an attribute called requires_grad which is by default True. True means it will be backpropagated through, and hence to freeze a layer you need to set requires_grad to False for all parameters of that layer.
| https://stackoverflow.com/questions/56353381/ |
Reproducibility and performance in PyTorch | The documentation states:
Deterministic mode can have a performance impact, depending on your model.
My question is, what is meant by performance here. Processing speed or model quality (i.e. minimal loss)? In other words, when setting manual seeds and making the model perform in a deterministic way, does that cause longer training time until minimal loss is found, or is that minimal loss worse than when the model is non-deterministic?
For completeness' sake, I manually make the model deterministic by setting all of these properties:
def set_seed(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
| Performance refers to the run time; CuDNN has several implementations for its operations, and when cudnn.deterministic is set to true, you're telling CuDNN that you only need the deterministic implementations (or what we believe they are). In a nutshell, when you are doing this, you should expect the same results on the CPU or the GPU on the same system when feeding the same inputs. Why would it affect the performance? CuDNN uses heuristics for the choice of the implementation. So, it actually depends on your model how CuDNN will behave; choosing it to be deterministic may affect the runtime because there could have been, say, a faster way of computing the same operation that is no longer allowed at that point of the run.
Concerning your snippet, I do the exact same seeding; it has been working well (in terms of reproducibility) for 100+ DL experiments.
| https://stackoverflow.com/questions/56354461/ |
PyTorch: Sigmoid of weights? | I'm new to neural networks/PyTorch. I'm trying to make a net that takes in a vector x, first layer is h_j = w_j^T * x + b_j, output is max_j{h_j}. The only thing is that I want the w_j to be restricted between 0 and 1, by having w_j = S(k*a_j), where S is the sigmoid function, k is some constant, and a_j are the actual weight variables (w_j is just a function of a_j). How do I do this in PyTorch? I can't just use a torch.nn.Linear layer, there has to be something else/additional to add in the sigmoid function on the weights?
Side question, for that last output layer, can I just use torch.max to get the max of the previous layer's outputs? Does that behave nicely, or is there some torch.nn.Max or some pooling stuff that I don't understand that needs to happen?
Thanks!
| I am really not sure why would you do that but you can declare a custom layer as below to apply sigmoid to weights.
class NewLayer(nn.Module):
def __init__ (self, input_size, output_size):
super().__init__()
self.W = nn.Parameter(torch.zeros(input_size, output_size))
# kaiming initialization (use any other if you like)
self.W = nn.init.kaiming_normal_(self.W)
self.b = nn.Parameter(torch.ones(output_size))
def forward(self, x):
# applying sigmoid to weights and getting results
ret = torch.addmm(self.b, x, torch.sigmoid(self.W))
return ret
Once you do this, you can use it as you would use a linear layer in your code.
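A quick usage sketch; the effective weights applied to the input always lie in (0, 1):
layer = NewLayer(4, 2)
out = layer(torch.randn(3, 4))        # (batch, input_size) -> (batch, output_size)
effective_w = torch.sigmoid(layer.W)  # the constrained weights actually used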
| https://stackoverflow.com/questions/56364712/ |
Pytorch nn Module generalization | Let us take a look at the simple class:
class Temp1(nn.Module):
def __init__(self, stateSize, actionSize, layers=[10, 5], activations=[F.tanh, F.tanh] ):
super(Temp1, self).__init__()
self.layer1 = nn.Linear(stateSize, layers[0])
self.layer2 = nn.Linear(layers[0], layers[1])
self.fcFinal = nn.Linear( layers[1], actionSize )
return
This is a fairly straightforward pytorch module. It creates a simple sequential dense network. If we check its hidden parameters, we see the following:
t1 = Temp1(2, 2)
list(t1.parameters())
This is the expected result ...
[Parameter containing:
tensor([[-0.0311, -0.5513],
[-0.0634, -0.3783],
[-0.2514, 0.6139],
[ 0.4711, -0.0241],
[-0.1739, 0.2208],
[-0.1533, 0.3838],
[-0.6490, -0.5784],
[ 0.5312, 0.6703],
[ 0.3506, 0.3652],
[ 0.1768, -0.4158]], requires_grad=True), Parameter containing:
tensor([-0.3199, -0.4154, -0.5530, -0.6738, -0.4411, 0.2641, -0.3576, 0.0447,
0.0254, 0.0965], requires_grad=True), Parameter containing:
tensor([[-2.8257e-01, 6.7583e-02, 9.0356e-02, 1.0868e-01, 4.0876e-02,
4.0616e-02, 4.4419e-02, -8.1544e-02, 2.5244e-01, 3.8777e-03],
[-8.0950e-03, -1.4175e-01, -2.9492e-01, 3.1439e-01, -2.3065e-01,
-6.6631e-02, 3.0047e-01, 2.8353e-01, 2.3457e-01, -3.1399e-03],
[-5.2522e-02, -2.2183e-01, -1.5485e-01, 2.6317e-01, 2.8273e-01,
-7.4823e-02, -5.3704e-02, 9.3526e-02, -1.7916e-01, -3.1132e-04],
[ 8.9063e-02, 2.9263e-01, -1.0052e-01, 8.7005e-02, -1.1246e-01,
-2.7968e-01, 4.1411e-02, -1.6776e-01, 1.2363e-01, -2.2808e-01],
[ 2.9244e-02, 5.8296e-02, -2.9729e-01, -3.1437e-01, -9.3182e-02,
-7.5236e-03, 5.6159e-02, -2.2075e-02, 1.0337e-01, 8.1123e-02]],
requires_grad=True), Parameter containing:
tensor([ 0.2240, 0.0997, -0.0047, -0.1784, -0.0369], requires_grad=True), Parameter containing:
tensor([[ 0.3546, -0.2180, 0.1723, -0.0463, 0.2572],
[-0.1669, -0.1364, -0.0398, 0.2233, -0.1805]], requires_grad=True), Parameter containing:
tensor([ 0.0871, -0.1698], requires_grad=True)]
Now, let us try to generalize this a bit:
class Temp(nn.Module):
def __init__(self, stateSize, actionSize, layers=[10, 5], activations=[F.tanh, F.tanh] ):
super(Temp, self).__init__()
# Generate the fullly connected layer functions
self.fcLayers = []
oldN = stateSize
for i, layer in enumerate(layers):
self.fcLayers.append( nn.Linear(oldN, layer) )
oldN = layer
self.fcFinal = nn.Linear( oldN, actionSize )
return
It turns out that the number of parameters within this module is no longer the same ...
t = Temp(2, 3)
list(t.parameters())
[Parameter containing:
tensor([[-0.3342, 0.4111, 0.0418, 0.4457, 0.0648],
[ 0.4364, -0.0360, -0.2239, 0.4025, 0.1661],
[ 0.1932, -0.0896, 0.3269, -0.2179, 0.1035]], requires_grad=True),
Parameter containing:
tensor([-0.2867, -0.1354, -0.0026], requires_grad=True)]
I believe I understand why this is happening. The bigger question is, how do we overcome this problem? The second, generalized method, for example, will not be sent to the GPU properly, and will not be trained by an optimizer.
| The problem is that most of the nn.Linear layers in the "generalized" version are stored in a regular pythonic list (self.fcLayers). pytorch does not know to look for nn.Parameters inside regular pythonic members of nn.Module.
Solution:
If you wish to store nn.Modules in a way that pytorch can manage them, you need to use specialized pytorch containers.
For instance, if you use nn.ModuleList instead of a regular pythonic list:
self.fcLayers = nn.ModuleList([])
your example should work fine.
BTW,
you need pytorch to know that members of your nn.Module are modules themselves not only to get their parameters, but also for other functions, such as moving them to gpu/cpu, setting their mode to eval/training etc.
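For completeness, a minimal sketch of the generalized class using nn.ModuleList (same logic as the question's Temp; the forward method is an assumption, since the original didn't include one):

class Temp(nn.Module):
    def __init__(self, stateSize, actionSize, layers=[10, 5]):
        super(Temp, self).__init__()
        # registered container instead of a plain pythonic list
        self.fcLayers = nn.ModuleList([])
        oldN = stateSize
        for layer in layers:
            self.fcLayers.append(nn.Linear(oldN, layer))
            oldN = layer
        self.fcFinal = nn.Linear(oldN, actionSize)

    def forward(self, x):
        for fc in self.fcLayers:
            x = F.tanh(fc(x))
        return self.fcFinal(x)

With this change, list(Temp(2, 3).parameters()) lists the weights and biases of every layer, and calls like .to(device) or .eval() reach them as well.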
| https://stackoverflow.com/questions/56370283/ |
Can torchtext's BucketIterator pad all batches to the same length? | I recently started using torchtext to replace my glue code and I'm running into an issue where I'd like to use an attention layer in my architecture. In order to do this, I need to know the maximum sequence length of my training data.
The problem is that torchtext.data.BucketIterator does padding on a per-batch basis:
# All 4 examples in the batch will be padded to maxlen in the batch
train_iter = torchtext.data.BucketIterator(dataset=train, batch_size=4)
Is there some way to ensure that all training examples are padded to the same length; i.e., the maxlen in training?
| When instantiating a torchtext.data.Field, there's an optional keyword argument called fix_length which, when set, defines the length to which all samples will be padded; by default it is not set which implies flexible padding.
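A minimal sketch (the field name is a placeholder):

TEXT = torchtext.data.Field(fix_length=100)  # every example padded (longer ones truncated) to length 100
train_iter = torchtext.data.BucketIterator(dataset=train, batch_size=4)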
| https://stackoverflow.com/questions/56370964/ |
Pytorch : Results of vector multiplications are different for same input | I did not understand why these multiplication outputs are different.
print(features*weights)
print('------------')
print(features*weights.view(5,1))
print('------------')
print(torch.mm(features,weights.view(5,1)))
outputs:
tensor([[ 0.1314, -0.2796, 1.1668, -0.1540, -2.8442]])
------------
tensor([[ 0.1314, -0.7035, -0.8472, 0.9971, -1.5130],
[ 0.0522, -0.2796, -0.3367, 0.3963, -0.6013],
[-0.1809, 0.9688, 1.1668, -1.3733, 2.0837],
[-0.0203, 0.1086, 0.1308, -0.1540, 0.2336],
[ 0.2469, -1.3224, -1.5927, 1.8745, -2.8442]])
------------
tensor([[-1.9796]])
| If I'm not wrong, what you are trying to understand is:
features = torch.rand(1, 5)
weights = torch.Tensor([1, 2, 3, 4, 5])
print(features)
print(weights)
# Element-wise multiplication of shape (1 x 5)
# out = [f1*w1, f2*w2, f3*w3, f4*w4, f5*w5]
print(features*weights)
# weights has been reshaped to (5, 1)
# Element-wise multiplication of shape (5 x 5)
# out = [f1*w1, f2*w1, f3*w1, f4*w1, f5*w1]
# [f1*w2, f2*w2, f3*w2, f4*w2, f5*w2]
# [f1*w3, f2*w3, f3*w3, f4*w3, f5*w3]
# [f1*w4, f2*w4, f3*w4, f4*w4, f5*w4]
# [f1*w5, f2*w5, f3*w5, f4*w5, f5*w5]
print(features*weights.view(5, 1))
# Matrix-multiplication
# (1, 5) * (5, 1) -> (1, 1)
# out = [f1*w1 + f2*w2 + f3*w3 + f4*w4 + f5*w5]
print(torch.mm(features, weights.view(5, 1)))
Output:
tensor([[0.1467, 0.6925, 0.0987, 0.5244, 0.6491]]) # features
tensor([1., 2., 3., 4., 5.]) # weights
tensor([[0.1467, 1.3851, 0.2961, 2.0976, 3.2455]]) # features*weights
tensor([[0.1467, 0.6925, 0.0987, 0.5244, 0.6491],
[0.2934, 1.3851, 0.1974, 1.0488, 1.2982],
[0.4400, 2.0776, 0.2961, 1.5732, 1.9473],
[0.5867, 2.7701, 0.3947, 2.0976, 2.5964],
[0.7334, 3.4627, 0.4934, 2.6220, 3.2455]]) # features*weights.view(5,1)
tensor([[7.1709]]) # torch.mm(features, weights.view(5, 1))
| https://stackoverflow.com/questions/56388586/ |
Conv2d not accepting tensor as input, saying its not tensor | I want to pass a tensor through a convolutional 2 layer. I am not able to execute it as I am getting a type error even though I have converted my numpy array to a tensor.
I tried using tf.convert_to_tensor() to solve this problem. Didn't work
import numpy as np
import tensorflow as tf
class Generator():
def __init__(self):
self.conv1 = nn.Conv2d(1, 28, kernel_size=3, stride=1, padding=1)
self.pool1 = nn.MaxPool2d(kernel_size=3, stride=0, padding=1)
self.fc1 = nn.Linear(100, 10)
self.fc2 = nn.Linear(10, 5)
def forward_pass(self, x): #Why do we pass the object itself in every method?
x = self.conv1(x)
print(x)
x = self.pool1(x)
print(x)
x = self.fc1(x)
print(x)
x = self.fc2(x)
print(x)
return x
arr = tf.convert_to_tensor(np.random.random((3,28,28)))
gen = Generator()
gen.forward_pass(arr)
Error message -
TypeError Traceback (most recent call last)
<ipython-input-31-9fa8e764dcdb> in <module>()
1 gen = Generator()
----> 2 gen.forward_pass(arr)
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
336 _pair(0), self.dilation, self.groups)
337 return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338 self.padding, self.dilation, self.groups)
339
340
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not Tensor
| You are trying to pass a TensorFlow tensor to a PyTorch function. TensorFlow and PyTorch are separate projects with different data structures which, in general, cannot be used interchangeably in this way.
To convert a NumPy array to a PyTorch tensor, you can use:
import torch
arr = torch.from_numpy(np.random.random((3,28,28)))
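One more detail worth flagging (an assumption based on the snippet): np.random.random yields float64, while Conv2d weights default to float32, so you will most likely also need a .float() cast:

arr = torch.from_numpy(np.random.random((3, 28, 28))).float()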
| https://stackoverflow.com/questions/56390348/ |
Pytorch expected type Long but got type int | I received an error
Expected object of scalar type Long but got scalar type Int for argument #3 'index'
This is from this line.
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
I am not sure what to do as I tried to convert this to a long using several places. I tried putting a
.long
at the end as well as setting the dtype to be torch.long which still didn't work.
Very similar to this but he didn't do anything to get the answer
"Expected Long but got Int" while running PyTorch script
I have change a lot of the code and here is my last rendition, but is now giving me the same issue.
def forward(self, inputs, targets):
"""
Args:
inputs: prediction matrix (before softmax) with shape (batch_size, num_classes)
targets: ground truth labels with shape (num_classes)
"""
log_probs = self.logsoftmax(inputs)
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
if self.use_gpu: targets = targets.to(torch.device('cuda'))
targets = (1 - self.epsilon) * targets + self.epsilon / self.num_classes
loss = (- targets * log_probs).mean(0).sum()
return loss
| The dtype of your index argument (i.e., targets.unsqueeze(1).data.cpu()) needs to be torch.int64.
(The error message is a bit confusing: torch.long is just an alias for torch.int64, which is what "Long" means in PyTorch internals.)
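Applied to the line from the question, the fix would look something like this (a sketch; .long() casts to int64):

index = targets.unsqueeze(1).long().data.cpu()
targets = torch.zeros(log_probs.size()).scatter_(1, index, 1)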
| https://stackoverflow.com/questions/56391202/ |
Max pooling layers have weights. interesting work | In the PyTorch tutorial step of "Deep Learning with PyTorch: A 60 Minute Blitz > Neural Networks",
I have a question: what does params[1] mean in the network?
The reason I am asking is that max pooling does not have any weight values.
For example, if you write some code like this:
def __init__(self):
    self.conv1 = nn.Conv2d(1, 6, 5)
this means the input has 1 channel, 6 output channels, and a 5x5 convolution kernel.
So I understood that params[0] holds 6 channels of 5-by-5 matrices with random values at initialization.
For the same reason,
params[2] has the same form, but with 16 channels. I understood this too.
But params[1], what does it mean?
Maybe it just represents the max pooling layer.
But at the end of this tutorial, in the "update the weights" step,
it is updated by the code below:
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
this is code for construct a network
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
params = list(net.parameters())
print(params[1])
Parameter containing:
tensor([-0.0614, -0.0778, 0.0968, -0.0420, 0.1779, -0.0843],
requires_grad=True)
please visit this pytorch tutorial site.
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
Summary
I have one question:
why do the max pooling layers have four weights which can be updated?
I think they shouldn't have any weights, right?
Am I wrong?
Please help me. I'm Korean.
| You are wrong about that. It has nothing to do with max_pooling.
As you can read in the linked tutorial, an nn.Parameter tensor is automatically registered as a parameter when it gets assigned to a Module.
In your case, this basically means that every layer assigned inside __init__ is a module whose parameters get registered.
As for what the values inside the parameters mean: they are the values your model needs to compute its output. To picture it:
params[0] -> self.conv1 -> weight
params[1] -> self.conv1 -> bias
params[2] -> self.conv2 -> weight
params[3] -> self.conv2 -> bias
params[4] -> self.fc1 -> weight
params[5] -> self.fc1 -> bias
and so on until you reach params[9], which is the end of your whole parameter list.
EDIT: forgot about the weights
These values are an indicator of what your net has learned.
Therefore you have the ability to alter these values in order to fine-tune your net to fit your needs.
And if you ask why there are 2 entries for each layer?
Well, each learnable layer (Conv2d, Linear) has both a weight tensor and a bias tensor, and both are updated during backpropagation.
Max pooling has no learnable parameters at all, so it never shows up in net.parameters(); the params[1] you printed is simply the bias of conv1, one value per output channel, hence 6 values.
hope things are a little clearer now.
| https://stackoverflow.com/questions/56391692/ |
Pytorch neural network (probably) does not learn | my homework is to train a network on a given data set of 3000 frogs, cats and dogs. The network I built doesn't seem to improve at all. Why is that?
The training data x_train is a numpy ndarray of shape (3000,32,32,3).
class Netz(nn.Module):
def __init__(self):
super(Netz, self).__init__()
self.conv1 = nn.Conv2d(3,28,5)
self.conv2 = nn.Conv2d(28,100,5)
self.fc1 = nn.Linear(2500,120)
self.fc2 = nn.Linear(120,3)
def forward(self, x):
x = self.conv1(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = self.conv2(x)
x = F.max_pool2d(x,2)
x = F.relu(x)
x = x.view(-1,2500)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x)
model = Netz()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.8)
def train(epoch):
model.train()
avg_loss = 0
correct = 0
criterion = F.nll_loss
for i in range(len(x_train)):
optimizer.zero_grad()
x = torch.tensor(x_train[i])
x = x.permute(2, 0, 1)
x = Variable(x)
x = x.unsqueeze(0)
target = Variable(torch.Tensor([y_train[i]]).type(torch.LongTensor))
out = model(x)
loss = criterion(out, target)
avg_loss += loss
pred = out.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
loss.backward()
optimizer.step()
if i%64==0:
print("epoch ", epoch, " [", i, "/", len(x_train), "] average loss: ", avg_loss.item() / 64, " correct: ", correct, "/64")
avg_loss = 0
correct = 0
I expect the mean error to decrease over time, but it seems to keep fluctuating around the same number...
| Your loss fluctuating means that your network is not powerful enough to extract meaningful embeddings. I can recommend trying a few of these things:
Add more layers.
Use a smaller learning rate (see the sketch after this list).
Use a larger dataset or use a pre-trained model if you only have a small dataset.
Normalize your dataset.
Shuffle training set.
Play with hyperparameters.
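For the learning-rate point, a minimal sketch (the normalization line assumes the images are raw 0-255 values; skip it if they are already scaled):

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.8)
x = torch.tensor(x_train[i]).float() / 255.0  # normalize inputs to [0, 1]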
| https://stackoverflow.com/questions/56398768/ |
How to properly Forward the dropout layer | I created the following deep network with dropout layers like below:
class QNet_dropout(nn.Module):
"""
A MLP with 2 hidden layer and dropout
observation_dim (int): number of observation features
action_dim (int): Dimension of each action
seed (int): Random seed
"""
def __init__(self, observation_dim, action_dim, seed):
super(QNet_dropout, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(observation_dim, 128)
self.fc2 = nn.Dropout(0.5)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Dropout(0.5)
self.fc5 = nn.Linear(64, action_dim)
def forward(self, observations):
"""
Forward propagation of neural network
"""
x = F.relu(self.fc1(observations))
x = F.linear(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.linear(self.fc4(x))
x = self.fc5(x)
return x
However, when I tried to run the code, I got the following errors:
/home/workspace/QNetworks.py in forward(self, observations)
90
91 x = F.relu(self.fc1(observations))
---> 92 x = F.linear(self.fc2(x))
93 x = F.relu(self.fc3(x))
94 x = F.linear(self.fc4(x))
TypeError: linear() missing 1 required positional argument: 'weight'
It seems like I didn't properly use/forward the dropout layer. What should be the correct way to do the Forward for the dropout layer? Thanks!
| The F.linear() function is used incorrectly. You should call the linear layers you declared instead of torch.nn.functional.linear. The dropout layers should come after the ReLU, and you can call the ReLU function from torch.nn.functional.
import torch
import torch.nn as nn
import torch.nn.functional as F
class QNet_dropout(nn.Module):
"""
A MLP with 2 hidden layer and dropout
observation_dim (int): number of observation features
action_dim (int): Dimension of each action
seed (int): Random seed
"""
def __init__(self, observation_dim, action_dim, seed):
super(QNet_dropout, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(observation_dim, 128)
self.fc2 = nn.Dropout(0.5)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Dropout(0.5)
self.fc5 = nn.Linear(64, action_dim)
def forward(self, observations):
"""
Forward propagation of neural network
"""
x = self.fc2(F.relu(self.fc1(observations)))
x = self.fc4(F.relu(self.fc3(x)))
x = self.fc5(x)
return x
observation_dim = 512
model = QNet_dropout(observation_dim, 10, 512)
batch_size = 8
inpt = torch.rand(batch_size, observation_dim)
output = model(inpt)
print ("output shape: ", output.shape)
| https://stackoverflow.com/questions/56401266/ |
Autoencoder model either oscillates or doesn't converge on MNIST dataset | Already ran the code 3 months ago with intended results. Changed nothing. Tried troubleshooting by using codes from (several) earlier versions, including among the earliest (which definitely worked). The problem persists.
# 4 - Constructing the undercomplete architecture
class autoenc(nn.Module):
def __init__(self, nodes = 100):
super(autoenc, self).__init__() # inheritence
self.full_connection0 = nn.Linear(784, nodes) # encoding weights
self.full_connection1 = nn.Linear(nodes, 784) # decoding weights
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.activation(self.full_connection0(x)) # input encoding
x = self.full_connection1(x) # output decoding
return x
# 5 - Initializing autoencoder, squared L2 norm, and optimization algorithm
model = autoenc().cuda()
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(),
lr = 1e-3, weight_decay = 1/2)
# 6 - Training the undercomplete autoencoder model
num_epochs = 500
batch_size = 32
length = int(len(trn_data) / batch_size)
loss_epoch1 = []
for epoch in range(num_epochs):
train_loss = 0
score = 0.
for num_data in range(length - 2):
batch_ind = (batch_size * num_data)
input = Variable(trn_data[batch_ind : batch_ind + batch_size]).cuda()
# === forward propagation ===
output = model(input)
loss = criterion(output, trn_data[batch_ind : batch_ind + batch_size])
# === backward propagation ===
loss.backward()
# === calculating epoch loss ===
train_loss += np.sqrt(loss.item())
score += 1. #<- add for average loss error instead of total
optimizer.step()
loss_calculated = train_loss/score
print('epoch: ' + str(epoch + 1) + ' loss: ' + str(loss_calculated))
loss_epoch1.append(loss_calculated)
When plotting the loss now, it oscillates wildly (at lr = 1e-3). Whereas 3 months ago, it was steadily converging (at lr = 1e-3).
Can't upload pictures yet due to recently created account.
How it looks like now.
Though this is when I reduce the learning rate to 1e-5. When it's at 1e-3, it's just all over the places.
How it should look like, and used to look like at lr = 1e-3.
| You should do optimizer.zero_grad() before you do loss.backward() since the gradients accumulate. This is most likely causing the issue.
The general order to be followed during training phase :
optimizer.zero_grad()
output = model(input)
loss = criterion(output, label)
loss.backward()
optimizer.step()
Also, the value of weight decay used (1 / 2) was causing an issue; it is far too large (typical values are on the order of 1e-5 to 1e-4).
| https://stackoverflow.com/questions/56402753/ |
Why do we pass nn.Module as an argument to class definition for neural nets? | I want to understand why we pass torch.nn.Module as a argument when we define the class for a neural network like GAN's
import torch
import torch.nn as nn
class Generator(nn.Module):
def __init__(self, input_size, hidden_size, output_size, f):
super(Generator, self).__init__()
self.map1 = nn.Linear(input_size, hidden_size)
self.map2 = nn.Linear(hidden_size, hidden_size)
self.map3 = nn.Linear(hidden_size, output_size)
self.f = f
| This line
class Generator(nn.Module):
simply means that the Generator class will inherit from the nn.Module class; it is not an argument.
However, the dunder init method:
def __init__(self, input_size, hidden_size, output_size, f):
Has self, which is why you may consider it an argument.
Well, this is the Python class instance self. There have been debates over whether it should stay or go, but Guido explained in his blog why it has to stay.
| https://stackoverflow.com/questions/56405652/ |
Implementing FFT with Pytorch | I am trying to implement FFT by using the conv1d function provided in Pytorch.
Generating artifical signal
import numpy as np
import torch
from torch.autograd import Variable
from torch.nn.functional import conv1d
from scipy import fft, fftpack
import matplotlib.pyplot as plt
%matplotlib inline
# Creating filters
d = 4096 # size of windows
def create_filters(d):
x = np.arange(0, d, 1)
wsin = np.empty((d,1,d), dtype=np.float32)
wcos = np.empty((d,1,d), dtype=np.float32)
window_mask = 1.0-1.0*np.cos(x)
for ind in range(d):
wsin[ind,0,:] = np.sin(2*np.pi*((ind+1)/d)*x)
wcos[ind,0,:] = np.cos(2*np.pi*((ind+1)/d)*x)
return wsin,wcos
wsin, wcos = create_filters(d)
wsin_var = Variable(torch.from_numpy(wsin), requires_grad=False)
wcos_var = Variable(torch.from_numpy(wcos),requires_grad=False)
# Creating signal
t = np.linspace(0,1,4096)
x = np.sin(2*np.pi*100*t)+np.sin(2*np.pi*200*t)+np.random.normal(scale=5,size=(4096))
plt.plot(x)
FFT with Pytorch
signal_input = torch.from_numpy(x.reshape(1,-1),)[:,None,:4096]
signal_input = signal_input.float()
zx = conv1d(signal_input, wsin_var, stride=1).pow(2)+conv1d(signal_input, wcos_var, stride=1).pow(2)
FFT with Scipy
fig = plt.figure(figsize=(20,5))
plt.plot(np.abs(fft(x).reshape(-1))[:500])
My Question
As you can see, the two outputs are quite similar in terms of their peak characteristics. That means my implementation is not totally wrong.
However, there are also some subtleties, such as the scale of the spectrum, and the signal to noise ratio. I am unable to figure out what's missing here to get the exact same result.
| You calculated the power rather than the amplitude.
You simply need to add the line zx = zx.pow(0.5) to take the square root to get the amplitude.
| https://stackoverflow.com/questions/56408603/ |
In PyTorch/Numpy, how to multiply rows of a matrix with "matrices" in a 3-D tensor? | For example, a = torch.Tensor([[1,2],[3,4]]) (for numpy it is just a = np.array([[1,2],[3,4]])), and b = torch.ones((2,2,2)),
I would like to multiply every row of a with the two 2x2 matrices, and get a new matrix [[3,3],[7,7]] (i.e. [1,2]*[[1,1],[1,1]]=[3,3], [3,4]*[[1,1],[1,1]]=[7,7]). Is it possible to achieve this? Thanks!
| I consider this an ugly solution, but perhaps this is what you want to achieve:
a = torch.Tensor([[1,2],[3,4]])
b = torch.ones((2,2,2))
A = torch.mm(a[0].view(-1, 2), b[0])
B = torch.mm(a[1].view(-1, 2), b[1])
res = torch.cat([A, B], dim=0)
print(res)
output:
tensor([[3., 3.],
[7., 7.]])
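A more general alternative (assuming the goal is one matrix product per batch element) is torch.bmm, which avoids the manual slicing and works for any batch size:

res = torch.bmm(a.unsqueeze(1), b).squeeze(1)  # (2,1,2) bmm (2,2,2) -> (2,1,2) -> (2,2)
print(res)  # tensor([[3., 3.], [7., 7.]])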
| https://stackoverflow.com/questions/56411257/ |
Pytorch's packed_sequence/pad_sequence pads tensors vertically for list of tensors | I am trying to pad sequence of tensors for LSTM mini-batching, where each timestep in the sequence contains a sub-list of tensors (representing multiple features in a single timestep).
For example, sequence 1 would have 3 timesteps and within each timestep there are 2 features. An example below would be:
Sequence 1 = [[1,2],[2,2],[3,3],[3,2],[3,2]]
Sequence 2 = [[4,2],[5,1],[4,4]]
Sequence 3 = [[6,9]]
I run pytorch's pad_sequence function (this goes for pack_sequence too) like below:
import torch
import torch.nn.utils.rnn as rnn_utils
a = torch.tensor([[1,2],[2,2],[3,3],[3,2],[3,2]])
b = torch.tensor([[4,2],[5,1],[4,4]])
c = torch.tensor([[6,9]])
result = rnn_utils.pad_sequence([a, b, c])
My expected output is as follows:
Sequence 1 = [[1,2],[2,2],[3,3],[3,2],[3,2]]
Sequence 2 = [[4,2],[5,1],[4,4],[0,0],[0,0]]
Sequence 3 = [[6,9],[0,0],[0,0],[0,0],[0,0]]
However, the output I got is as follows:
tensor([[[1, 2],
[4, 2],
[6, 9]],
[[2, 2],
[5, 1],
[0, 0]],
[[3, 3],
[4, 4],
[0, 0]],
[[3, 2],
[0, 0],
[0, 0]],
[[3, 2],
[0, 0],
[0, 0]]])
The padding seems to go vertically rather than what I expect. How do I go about getting the correct padding that I need?
| Simply change
result = rnn_utils.pad_sequence([a, b, c])
to
result = rnn_utils.pad_sequence([a, b, c], batch_first=True)
seq1 = result[0]
seq2 = result[1]
seq3 = result[2]
By default, batch_first is False. Output will be in B x T x * if True, or in T x B x * otherwise, where
B is batch size. It is equal to the number of elements in sequences,
T is length of the longest sequence, and
* is any number of trailing dimensions, including none.
output:
tensor([[1, 2],
[2, 2],
[3, 3],
[3, 2],
[3, 2]]) # sequence 1
tensor([[4, 2],
[5, 1],
[4, 4],
[0, 0],
[0, 0]]) # sequence 2
tensor([[6, 9],
[0, 0],
[0, 0],
[0, 0],
[0, 0]]) # sequence 3
| https://stackoverflow.com/questions/56412810/ |
RuntimeError: input must have 2 dimensions, got 3 | I am trying to run a sequence of bigrams into an LSTM. This involves:
2. Padding the sequences using pad_sequence
3. Inputting the padded sequences into the embedding layer
4. Packing the output of the embedding layer
5. Inserting the pack into the LSTM.
class LSTMClassifier(nn.Module):
# LSTM initialization
def __init__(self, embedding_dim=32, hidden_dim=50, vocab_size=7138, label_size=2, static_size, batch_size=32):
super(LSTMClassifier, self).__init__()
# Initializing batch size
self.batch_size = batch_size
# Setting the hidden layer dimension of the LSTM
self.hidden_dim = hidden_dim
# Initializing the embedding layer
self.embeddings = nn.Embedding(vocab_size, embedding_dim-2)
# Initializing the LSTM layer with one hidden layer
self.lstm = nn.LSTM(((embedding_dim*vocab_size)+static_size), hidden_dim, num_layers=1, batch_first=True)
# Initializing linear linear that takes the hidden layer output
self.hidden2label = nn.Linear(hidden_dim, label_size)
# Initializing the hidden layer
self.hidden = self.init_hidden()
# Defining the hidden state of the LSTM
def init_hidden(self):
# the first is the hidden h
# the second is the cell c
return (autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim).cuda()),
autograd.Variable(torch.zeros(1, self.batch_size, self.hidden_dim).cuda()))
# Defining the feed forward logic of the LSTM. It contains:
# 1. The embedding layer
# 2. The LSTM layer with one hidden layer
# 3. The softmax layer
def forward(self, seq, freq, time, static):
# reset the LSTM hidden state. Must be done before you run a new batch. Otherwise the LSTM will treat
# a new batch as a continuation of a sequence
self.hidden = self.init_hidden()
# Get sequence lengths
seq_lengths = torch.LongTensor(list(map(len, seq))) # length of 59
# Pad the sequences
seq = rnn_utils.pad_sequence(seq, batch_first = True)
freq = rnn_utils.pad_sequence(freq, batch_first = True)
time = rnn_utils.pad_sequence(time, batch_first = True)
static = rnn_utils.pad_sequence(static, batch_first = True)
seq = autograd.Variable(seq)
freq = autograd.Variable(freq)
time = autograd.Variable(time)
static = autograd.Variable(static)
# This is the pass to the embedding layer.
# The sequence is of dimension N and the output is N x Demb
embeds = self.embeddings(seq)
embeds = torch.cat((embeds,freq), dim=3)
embeds = torch.cat((embeds,time), dim=3)
print(embeds.size()) #torch.Size([32, 59, 7138, 32])
x = embeds.view(self.batch_size, seq.size()[1], 1,-1)
print(x.size()) #torch.Size([32, 59, 1, 228416])
static = static.view(self.batch_size, -1,1,3)
x = torch.cat([x, static], dim=3)
print(x.size()) #torch.Size([32, 59, 1, 228419])
# pack the padded sequence so that paddings are ignored
x = torch.nn.utils.rnn.pack_padded_sequence(x, seq_lengths, batch_first=True)
lstm_out, self.hidden = self.lstm(x, self.hidden)
# unpack the packed padded sequence so that it is ready for prediction
lstm_out = torch.nn.utils.rnn.pad_packed_sequence(lstm_out, batch_first=True)
y = self.hidden2label(lstm_out[-1])
log_probs = F.log_softmax(y)
return log_probs
However, I get an error that is the following:
---> 66 lstm_out, self.hidden = self.lstm(x, self.hidden)
67
68 # unpack the packed padded sequence so that it is ready for prediction
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
173 hx = (hx, hx)
174
--> 175 self.check_forward_args(input, hx, batch_sizes)
176 _impl = _rnn_impls[self.mode]
177 if batch_sizes is None:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
129 raise RuntimeError(
130 'input must have {} dimensions, got {}'.format(
--> 131 expected_input_dim, input.dim()))
132 if self.input_size != input.size(-1):
133 raise RuntimeError(
RuntimeError: input must have 2 dimensions, got 3
x = rnn_utils.pad_sequence(x, batch_first=True)
import torch
I thought the LSTM model required 3-dimensional input? I am very confused as to why it requires 2 dimensions. How should I fix it?
| Finally found solution after 5 hours of browsing...
def forward(self, input):
input = input.unsqueeze(0).unsqueeze(0)
# Initializing hidden state for first input with zeros
h0 = torch.zeros(1, input.size(0), 128)
# Initializing cell state for first input with zeros
c0 = torch.zeros(1, input.size(0), 128)
hidden = (h0.detach(), c0.detach())
out1, _ = self.lstm(input, hidden)
out2 = F.leaky_relu(self.fc1(out1))
qvalue = self.fc2(out2)
return qvalue
| https://stackoverflow.com/questions/56418580/ |
what is the right usage of _extra_files arg in torch.jit.save | one option I tried is pickling vocab and saving with extrafiles arg
import torch
import pickle
class Vocab(object):
pass
vocab = Vocab()
pickle.dump(vocab, open('path/to/vocab.pkl', 'wb'))
m = torch.jit.ScriptModule()
## I am not sure about the usage of this arg, the docs didn't help me
extra_files = torch._C.ExtraFilesMap()
extra_files['vocab.pkl'] = 'path/to/vocab.pkl'
# I also tried pickle.dumps(vocab), and directly vocab
torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)
## Load with extra files.
files = {'vocab.pkl': ''}
torch.jit.load('scriptmodule.pt', _extra_files = files)
this gives
TypeError: import_ir_module(): incompatible function arguments. The following argument types are supported:
1. (arg0: Callable[[List[str]], torch._C.ScriptModule], arg1: str, arg2: object, arg3: torch._C.ExtraFilesMap) -> None
other option is obviously to load the pickle separately, but I was looking for single file option.
it would be nice if one could just add vocab to the torchscript ... it would also be nice to know if there is some reason for not doing this that I am obviously not aware of.
| I believe that the documentation for torch.jit.load is incorrect. You need to create an ExtraFilesMap() object to load the saved files.
The following is an example of how I got things to work:
Step 1: Save model
extra_files = torch._C.ExtraFilesMap()
extra_files['foo.txt'] = 'bar'
traced_script_module.save(serialized_model_path, _extra_files=extra_files)
Step 2: Load model
files = torch._C.ExtraFilesMap()
files['foo.txt'] = ''
loaded_model = torch.jit.load(serialized_model_path, _extra_files=files)
print(files)
| https://stackoverflow.com/questions/56418938/ |
When using Conv2D in PyTorch, does padding or dilation happen first? | Consider the following bit of code:
torch.nn.Conv2d(1, 1, 2, padding = 1, dilation = 2)
Which of the following two cases is a correct interpretation?
| If you look at the bottom of the nn.Conv2d documentation you'll see the formula used to compute the output size of the conv layer:
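For reference, the formula there is (transcribed from the docs; H_in/H_out are the input/output height, and the width formula is analogous):

H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)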
Notice how padding is not affected by the value of dilation. I suppose this indicates "pad first" approach.
| https://stackoverflow.com/questions/56420160/ |
Import from Github : How to fix ImportError | I want to use the open source person re-identification library in Python
on Ubuntu 19.04
with Anaconda
no CUDA
in the terminal PyCharm (or not)
Python version 3.7.3
PyTorch version 1.1.0
For that I have to follow instruction like on their deposite git :
git clone https://github.com/Cysu/open-reid.git
cd open-reid
python setup.py install
python examples/softmax_loss.py -d viper -b 64 -j 2 -a resnet50 --logs-dir logs/softmax-loss/viper-resnet50
I receive the following error:
from sklearn.utils.extmath import pinvh
ImportError: cannot import name 'pinvh'
I have tried to create virtual environments with previous versions of PyTorch (0.4.1, 0.4.0 and 1.0.1) but I always got:
File "examples/softmax_loss.py", line 12, in <module>
from reid import datasets
ModuleNotFoundError: No module named 'reid'
I do not know how to fix it.
EDIT :
Hi thanks for the answer, the problem is that the import are like :
from reid import datasets
from reid import models
from reid.dist_metric import DistanceMetric
from reid.trainers import Trainer
from reid.evaluators import Evaluator
from reid.utils.data import transforms as T
from reid.utils.data.preprocessor import Preprocessor
from reid.utils.logging import Logger
from reid.utils.serialization import load_checkpoint, save_checkpoint
I tried :
from ../reid import datasets
But I got a
File "examples/softmax_loss.py", line 12
from ../reid import datasets
^
SyntaxError: invalid syntax
EDIT 2 :
After re-installing Python 3.7.3 and pytorch 1.1.0 the problem persist with pinvh... I still got this message :
ImportError: cannot import name 'pinvh' from 'sklearn.utils.extmath'
If you can tell me how to fix it or try to tell me if it works please
| Since the directory structure is as below:
/(root)-->|
|
|-->reid |--> (contents inside reid)
|
|
|-->examples | -->softmax_loss.py
|
|-->(Other contents in root directory)
It can be observed that reid is not in the same directory as softmax_loss.py, but instead in the parent directory.
However, note that ../reid is not valid Python import syntax; that is exactly the SyntaxError you hit in your first edit. To make reid importable from examples/softmax_loss.py, either run python setup.py install from the repo root (as the README suggests) so the package is installed into your environment, or add the repo root to sys.path at the top of the script.
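A minimal sketch of the sys.path approach (hypothetical lines to put at the top of examples/softmax_loss.py, before the reid imports):

import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from reid import datasets  # now resolves against the repo root

As for the pinvh error from your second edit: pinvh was removed from sklearn.utils.extmath in newer scikit-learn releases, but the same function exists as scipy.linalg.pinvh, so either patch the offending import to from scipy.linalg import pinvh or pin an older scikit-learn (e.g. pip install "scikit-learn<0.21").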
| https://stackoverflow.com/questions/56429621/ |
Assigning values to torch tensors | I'm trying to assign some values to a torch tensor. In the sample code below, I initialized a tensor U and try to assign a tensor b to its last 2 dimensions. In reality, this is a loop over i and j that solves some relation for a number of training data (here 10) and assigns it to its corresponding location.
import torch
U = torch.zeros([10, 1, 4, 4])
b = torch.rand([10, 1, 1, 1])
i = 2
j = 2
U[:, :, i, j] = b
I was expecting vector b to be assigned for dimensions i and j of corresponding training data (shape of training data being (10,1)) but it gives me an error. The error that I get is the following
RuntimeError: expand(torch.FloatTensor{[10, 1, 1, 1]}, size=[10, 1]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (4)
Any suggestions on how to fix it would be appreciated.
As an example, you can think of this as if '[10, 1]' is the shape of my data. Imagine it is 10 images, each of which has one channel. Then imagine each image is of shape '[4, 4]'. In each iteration of the loop, pixel '[i, j]' for all images and channels is being calculated.
| Your b tensor has too much dimensions.
U[:, :, i, j] has a [10, 1] shape (try U[:, :, i, j].shape)
Use b = torch.rand([10, 1]) instead.
| https://stackoverflow.com/questions/56435110/ |
How to access the network weights while using PyTorch 'nn.Sequential'? | I'm building a neural network and I don't know how to access the model weights for each layer.
I've tried
model.input_size.weight
Code:
input_size = 784
hidden_sizes = [128, 64]
output_size = 10
# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
nn.ReLU(),
nn.Linear(hidden_sizes[0], hidden_sizes[1]),
nn.ReLU(),
nn.Linear(hidden_sizes[1], output_size),
nn.Softmax(dim=1))
I expected to get the weights but I got
'Sequential' object has no attribute 'input_size'
| I've tried many ways, and it seems that the only way is by naming each layer by passing OrderedDict
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('output', nn.Linear(hidden_sizes[1], output_size)),
('softmax', nn.Softmax(dim=1))]))
So to access the weights of each layer, we need to call it by its own unique layer name.
For example to access weights of layer 1 model.fc1.weight
Parameter containing:
tensor([[-7.3584e-03, -2.3753e-02, -2.2565e-02, ..., 2.1965e-02,
1.0699e-02, -2.8968e-02],
[ 2.2930e-02, -2.4317e-02, 2.9939e-02, ..., 1.1536e-02,
1.9830e-02, -1.4294e-02],
[ 3.0891e-02, 2.5781e-02, -2.5248e-02, ..., -1.5813e-02,
6.1708e-03, -1.8673e-02],
...,
[-1.2596e-03, -1.2320e-05, 1.9106e-02, ..., 2.1987e-02,
-3.3817e-02, -9.4880e-03],
[ 1.4234e-02, 2.1246e-02, -1.0369e-02, ..., -1.2366e-02,
-4.7024e-04, -2.5259e-02],
[ 7.5356e-03, 3.4400e-02, -1.0673e-02, ..., 2.8880e-02,
-1.0365e-02, -1.2916e-02]], requires_grad=True)
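Note: a plain (unnamed) nn.Sequential can also be indexed directly, so the following works without the OrderedDict:

print(model[0].weight)  # weight of the first nn.Linear

Naming the layers just makes the access more readable.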
| https://stackoverflow.com/questions/56435961/ |
Getting alignment/attention during translation in OpenNMT-py | Does anyone know how to get the alignments weights when translating in Opennmt-py? Usually the only output are the resulting sentences and I have tried to find a debugging flag or similar for the attention weights. So far, I have been unsuccessful.
| You can get the attention matrices. Note that it is not the same as alignment which is a term from statistical (not neural) machine translation.
There is a thread on github discussing it. Here is a snippet from the discussion. When you get the translations from the mode, the attentions are in the attn field.
import onmt
import onmt.io
import onmt.translate
import onmt.ModelConstructor
from collections import namedtuple
# Load the model.
Opt = namedtuple('Opt', ['model', 'data_type', 'reuse_copy_attn', "gpu"])
opt = Opt("PATH_TO_SAVED_MODEL", "text", False, 0)
fields, model, model_opt = onmt.ModelConstructor.load_test_model(
opt, {"reuse_copy_attn" : False})
# Test data
data = onmt.io.build_dataset(
fields, "text", "PATH_TO_DATA", None, use_filter_pred=False)
data_iter = onmt.io.OrderedIterator(
dataset=data, device=0,
batch_size=1, train=False, sort=False,
sort_within_batch=True, shuffle=False)
# Translator
translator = onmt.translate.Translator(
model, fields, beam_size=5, n_best=1,
global_scorer=None, cuda=True)
builder = onmt.translate.TranslationBuilder(
data, translator.fields, 1, False, None)
batch = next(data_iter)
batch_data = translator.translate_batch(batch, data)
translations = builder.from_batch(batch_data)
translations[0].attn # <--- here are the attentions
| https://stackoverflow.com/questions/56440732/ |
Getting the gradients of a model trained in OpenNMT-py | When training a model using OpenNMT-py, we get a dict as output, containing the weights and biases of the network. However, these tensors have requires_grad = False, and so, do not have a gradient. For example. with one layer, we might have the following tensors, denoting embeddings as well as weights and biases in the encoder and decoder. None of them have a gradient attribute.
encoder.embeddings.emb_luts.0.weight
decoder.embeddings.emb_luts.0.weight
encoder.rnn.weight_ih_l0
encoder.rnn.weight_hh_l0
encoder.rnn.bias_ih_l0
encoder.rnn.bias_hh_l0
decoder.rnn.layers.0.weight_ih
decoder.rnn.layers.0.weight_hh
decoder.rnn.layers.0.bias_ih
decoder.rnn.layers.0.bias_hh
Can OpenNMT-py be made to set requires_grad = True with some option I have not found, or is there some other way to obtain the gradient of these tensors?
| The gradients are accessible only inside the training loop, where optim.step() is called. If you want to log the gradients (or norm of gradients or whatever) to TensorBoard, you can probably best get them before the optimizer step is called. It happens in the _gradient_accumulation method of the Trainer object.
Be aware that there are two places where optim.step() is called. Which one is used depends on whether you do the update after every batch or whether you accumulate gradient from multiple batches and do the update afterward.
| https://stackoverflow.com/questions/56447123/ |
How to calculate output sizes after a convolution layer in a configuration file? | I'm new to convolutional neural networks and wanted to know how to calculate or figure out the output sizes between layers of a model given a configuration file for pytorch similar to those following instructions in this link.
Most of the stuff I've already looked at hasn't been very clear and concise. How am I supposed to calculate the sizes through each layer?
Below is a snippet of a configuration file that would be parsed.
# (3, 640, 640)
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
# (16, 320, 320)
| In short, there is a common formula for output dims calculation:
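(The formula was displayed as an image in the original answer; presumably it is the standard one.) For a convolution or pooling layer:

n_out = floor((n_in + 2p - k) / s) + 1

where n_in is the input size, k the kernel/pool size, p the padding and s the stride (dilation is ignored since the config doesn't use it). Applied to the snippet above: the conv layer gives (640 + 2*1 - 3)/1 + 1 = 640, and the maxpool gives (640 + 0 - 2)/2 + 1 = 320, matching the # (16, 320, 320) comment.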
You can find explanation in A guide to receptive field arithmetic for Convolutional Neural Networks.
In addition, I'd like to recommend amazing article A guide to convolution arithmetic for deep learning.
And this repo conv_arithmetic with convolution animations.
| https://stackoverflow.com/questions/56450969/ |
TypeError: can't assign a str to a torch.LongTensor pytoch | I'm trying to convert a list of strings into tensors but I get this error
lengths = [len(cap) for cap in captions]
targets = torch.zeros(len(captions), max(lengths)).long()
for i, cap in enumerate(captions):
end = lengths[i]
targets[i, :end] = cap[:end]
| You can use python's ord to convert characters to their unicode:
targets[i, :end] = torch.from_numpy(np.array(list(map(ord, cap[:end])))).to(torch.long)
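Equivalently, and a bit more readable (a small sketch):

targets[i, :end] = torch.tensor([ord(c) for c in cap[:end]], dtype=torch.long)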
| https://stackoverflow.com/questions/56452900/ |