id | text
---|---
st103000 | Additionally, there is also a helper macro THFloatTensor_fastGet2d(...), which is roughly equivalent to the function posted by @Frank_Lee but has no boundary checks, so it can be optimized out during compilation |
st103001 | I am surprised that no one has mentioned this post by Adam Paszke: https://apaszke.github.io/torch-internals.html
It gives a quite clear introduction to the CPU C API. The GPU API is similar if you call your function from Python, since the CUDA memory allocation and streams are then already initialized. |
st103002 | I have another question, trying to understand the architecture of PyTorch… Where is torch.mm implemented?
What I understand is that torch.addmm is implemented in THTensorMath.c, using gemm from THBlas. I therefore expected to find torch.mm as another function in THTensorMath.c, also using BLAS but with a zero factor for the addition, but I can’t find it. Is there any intermediate step between the TH functions from the lib and the “torch.x” functions? |
st103003 | See pytorch/torch/csrc/generic/methods/TensorMath.cwrap 89.
torch.mm uses addmm, but with a 0 scalar that multiplies the output tensor, so it only contains the result of matrix multiplication. |
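A small sketch of that relationship (this is just an illustration using the beta argument of today's torch.addmm, not the actual cwrap code):
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
out = torch.randn(2, 4)                                   # placeholder; beta=0 makes its values irrelevant

mm_result = torch.mm(a, b)
addmm_result = torch.addmm(out, a, b, beta=0, alpha=1)    # 0 * out + 1 * (a @ b)
print(torch.allclose(mm_result, addmm_result))            # True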
st103004 | Ah ok! Thanks!
Now I’m starting to understand better how you create a Python library with C code. Also, this link is really helpful. I don’t know if you provide the list of all the available “torch.x” functions somewhere else. |
st103005 | I think docs for the C API would be nice; there’s so much super cool functionality that gets a bit obscured by the macro concatenation in the C source, by the source being scattered across files, and by navigation tools getting confused by the macros (I really like the design of the API though and how the macros are used; it was clearly designed with maximum simplicity in mind, which is awesome). Also, incremental updates in the release notes on what’s new in the C API would help keep track of changes. I could also see how the added overhead of something like this might be an opportunity cost. Just an idea. |
st103006 | Hi,
I found that in the latest PyTorch 0.5.0a, direct access to struct members is a little bit limited. I couldn’t use an approach like int *sizes_ptr = sizes->storage->data + sizes->storageOffset; since the compiler complains about ‘forward declaration of THTensor {aka struct THTensor}’ or something similar. How can I handle these errors? |
st103007 | In the DCGAN example, we first train the D network with the real and fake labels. After that, we train the G network without updating the D network. Why doesn’t the DCGAN code set requires_grad = False when training the G network? This is what I would expect:
for param in D.parameters():
    param.requires_grad = True
# train with real
...
# train with fake
...
# ------- Update G network ------------
# Do not update D network
for param in D.parameters():
    param.requires_grad = False
netG.zero_grad()
output = netD(fake)
errG = criterion(output, label)
errG.backward()
optimizerG.step() |
st103008 | Solved by chenglu in post #4
Yes, of course.
If the data fed into D is no longer a Variable but a plain tensor, then when you call backward() on the output of D, the computation in network D will not influence the gradients of G. So there is no need to set requires_grad = False; the detach already does this. |
st103009 | When training D, the input has been detached; it is plain data, not a Variable anymore.
github.com
pytorch/examples/blob/master/dcgan/main.py#L215 2
output = netD(real_cpu)
errD_real = criterion(output, label)
errD_real.backward()
D_x = output.mean().item()
# train with fake
noise = torch.randn(batch_size, nz, 1, 1, device=device)
fake = netG(noise)
label.fill_(fake_label)
output = netD(fake.detach())
errD_fake = criterion(output, label)
errD_fake.backward()
D_G_z1 = output.mean().item()
errD = errD_real + errD_fake
optimizerD.step()
############################
# (2) Update G network: maximize log(D(G(z)))
###########################
netG.zero_grad() |
st103010 | chenglu:
it is plain data, not a Variable anymore.
But my question is about training G. |
st103011 | Yes, of course.
If the data fed into D is no longer a Variable but a plain tensor, then when you call backward() on the output of D, the computation in network D will not influence the gradients of G. So there is no need to set requires_grad = False; the detach already does this. |
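A minimal, self-contained sketch of this point (tiny stand-in networks, not the actual DCGAN models):
import torch
import torch.nn as nn

netG = nn.Linear(4, 4)
netD = nn.Linear(4, 1)
criterion = nn.BCEWithLogitsLoss()
noise = torch.randn(8, 4)
label = torch.ones(8, 1)

# D step: detach cuts the graph at `fake`, so backward stops there
fake = netG(noise)
errD_fake = criterion(netD(fake.detach()), label)
errD_fake.backward()
print(netG.weight.grad)                  # None: G received no gradient

# G step: no detach, so the gradient flows through D back into G,
# but only optimizerG.step() would be called, so D's weights stay fixed
netG.zero_grad(); netD.zero_grad()
errG = criterion(netD(fake), label)
errG.backward()
print(netG.weight.grad is not None)      # True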
st103012 | Great. So during training G, if I want to update the G network only and just feed data forward through the D network, I still need to set param.requires_grad = False. Am I right?
Note that this is different from the above case: here, during training G, I also want to get a prediction from the D network. The code looks like:
for param in D.parameters():
    param.requires_grad = False
netG.zero_grad()
output = netD(fake)
output2 = netG(fake)  # one more here
errG = criterion(output, label)
errD = criterion(output2, label)
errG.backward()
optimizerG.step() |
st103013 | No, you don’t have to do that. In this example you are using two different optimizers (one per network). Each one only gets the parameters of a certain network to optimize. If you are optimizing G, there is no need to set param.requires_grad = False as long as you don’t call optimizer_D.step(). |
st103014 | I’m confused about your code: why do you feed the fake image to both G and D? The fake is the output of netG(noise). And if you are training G and also want to get a prediction from D, why not just print it out? The prediction of D is needed anyway when training the G net. Here is the code from the example for the G training step:
netG.zero_grad()
label.fill_(real_label)
output = netD(fake)
print(output)  # here, if you want to print it, just print it
# the whole code just trains G because it only
# calls optimizerG.step()
errG = criterion(output, label)
errG.backward()
D_G_z2 = output.mean().item()
optimizerG.step() |
st103015 | justusschock:
you
I fed data to G to check whether the G network is being trained or not. I know the normal DCGAN just trains G while keeping D fixed. But I just want to check whether, during the training of G, the networks D and G are trained or not. |
st103016 | Hi. I’m new to PyTorch and I’m working on the bAbI data set. I have some code that works when I train, but not when I evaluate. During training I get improving accuracy over time, but during eval I don’t: I get a maximum below 48% accuracy, while during training I can actually achieve 99 or 100% accuracy. The dataset comes in two parts: a set of 1000 questions and another test set of 1000 more. I have split the test set in half for validation and testing, so I have train/validation/test proportions of 50/25/25. My code is messy and I want to include as little of it as possible. The PyTorch modules are composed of some ‘gru’ and some ‘linear’ and some other components (like embeddings). My basic question is: how do I make sure that dropout computations (and similar things) are removed when I call the ‘eval()’ method on a model? What is the proper way of calling eval()? I will include some module code. This isn’t everything.
class EpisodicAttn(nn.Module):
    def __init__(self, hidden_size, a_list_size=5):
        super(EpisodicAttn, self).__init__()
        self.hidden_size = hidden_size
        self.a_list_size = a_list_size
        self.W_1 = nn.Linear(self.a_list_size * hidden_size, 1)
        self.W_2 = nn.Linear(1, hidden_size)
        self.next_mem = nn.Linear(3 * hidden_size, hidden_size)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.hidden_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, stdv)

    def forward(self, concat_list):
        assert len(concat_list) == self.a_list_size
        ''' attention list '''
        self.c_list_z = torch.cat(concat_list, dim=1)
        self.c_list_z = self.c_list_z.view(1, -1)
        self.l_1 = self.W_1(self.c_list_z)
        self.l_1 = torch.tanh(self.l_1)
        self.l_2 = self.W_2(self.l_1)
        self.G = F.sigmoid(self.l_2)[0]
        return self.G

class CustomGRU(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(CustomGRU, self).__init__()
        self.hidden_size = hidden_size
        self.Wr = nn.Linear(input_size, hidden_size)
        self.Ur = nn.Linear(hidden_size, hidden_size)
        self.W = nn.Linear(input_size, hidden_size)
        self.U = nn.Linear(hidden_size, hidden_size)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.hidden_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, stdv)

    def forward(self, fact, C):
        r = F.sigmoid(self.Wr(fact) + self.Ur(C))
        h_tilda = F.tanh(self.W(fact) + r * self.U(C))
        return h_tilda

class MemRNN(nn.Module):
    def __init__(self, hidden_size):
        super(MemRNN, self).__init__()
        self.hidden_size = hidden_size
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers=1, batch_first=False, bidirectional=False)
        #self.gru = CustomGRU(hidden_size, hidden_size)

    def forward(self, input, hidden=None):
        #_, hidden = self.gru(input, hidden)
        output, hidden = self.gru(input, hidden)
        #output = 0
        return output, hidden

class Encoder(nn.Module):
    def __init__(self, source_vocab_size, embed_dim, hidden_dim,
                 n_layers, dropout, bidirectional=False, embedding=None):
        super(Encoder, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.bidirectional = bidirectional
        self.embed = nn.Embedding(source_vocab_size, embed_dim, padding_idx=1)
        self.gru = nn.GRU(embed_dim, hidden_dim, n_layers, dropout=dropout, bidirectional=bidirectional)
        if embedding is not None:
            self.embed.weight.data.copy_(torch.from_numpy(embedding))
            print('embedding encoder')
        #self.gru = MGRU(self.hidden_dim)

    def forward(self, source, hidden=None):
        embedded = self.embed(source)  # (batch_size, seq_len, embed_dim)
        encoder_out, encoder_hidden = self.gru(embedded, hidden)  # (seq_len, batch, hidden_dim*2)
        #encoder_hidden = self.gru(embedded, hidden)  # (seq_len, batch, hidden_dim*2)
        # sum bidirectional outputs, the other option is to retain concat features
        if self.bidirectional:
            encoder_out = (encoder_out[:, :, :self.hidden_dim] +
                           encoder_out[:, :, self.hidden_dim:])
        #encoder_out = 0
        return encoder_out, encoder_hidden |
st103017 | Before calling the evaluation code you should set your model into evaluation mode by calling model.eval() on it.
This will make sure to set all layers to eval mode, such as Dropout and BatchNorm layers.
You don’t need to remove these layers or do something else with it.
Also during evaluation you should use a context manager to disable the gradient calculation:
model.eval()
with torch.no_grad():
# your evaluation code
Remember to set the model to train before training with model.train().
Regarding the overfitting:
Are you observing the training and validation accuracy? Does the validation accuracy drop after a while?
Have you considered early stopping?
It’s good to see that your resubstitution error is low, i.e. that you can overfit on your training data.
Now you could add some more regularization techniques like weight decay, more dropout etc. |
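A minimal sketch of the two regularizers mentioned above (the layer sizes and values here are only placeholder assumptions, not taken from the thread):
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # active in model.train(), disabled in model.eval()
    nn.Linear(64, 10),
)
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # weight decay = L2 penalty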
st103018 | Thanks for responding so fast. I have tried the code you suggested and I am stuck without any improvement. My testing and evaluation data show the same accuracy. The accuracy doesn’t drop after a while; if it does, it’s only by a few points and then it comes back to previous levels (46-48%). I am able to overfit on the training data, as you said. I have also tried lowering the learning rate after a while by stopping the program and starting it again with a new learning rate. This helps the training but still not the eval or test.
Are you suggesting that I should add more dropout now? Is the goal to bring the training down or the eval up? I would of course prefer the latter. |
st103019 | You might lose some of your training accuracy, but will most likely gain more validation accuracy, which is your goal. Unfortunately I’m not really familiar with text data, so I don’t know if any data augmentations are available or what kind of regularization is state of the art now. |
st103020 | Is there a way to print out the eval() or train() status while doing these things, so that I can be sure I’m properly applying the two? |
st103021 | Sure, you can just use print(model.training) to check if it’s currently in train mode or not. |
st103022 | I’m still stuck with a validation accuracy around 50%. I have tried approaching people who have done this problem before (using the babi dataset) but none of them have responded. I would like to keep this thread open and try to get some more feedback from the pytorch community. Does anyone have any ideas why my model would evaluate at 50% during validation and 99-100% during training? I have made changes to the pytorch models. I have increased my dropout to 0.5 in many of my submodules. I have tried weight decay. A link to my most recent code is here: https://github.com/radiodee1/awesome-chatbot/blob/master/model/babi_iv.py 2 . Thanks.
EDIT: I found this paper and it helps a lot: https://arxiv.org/abs/1603.01417 |
st103023 | Hi,
I use an RNN model to process video frame by frame such that the forward looks something like this:
for frame in range(x.shape[1]):
    out1, hidden = self.rnn_layer1(x[:, frame, :, :], hidden)
return out1
But that code raises the error “one of the variables needed for gradient computation has been modified by an inplace operation”.
After some trial and error I realized that the problem is that the sliced frame still holds gradient information of the whole video. I use an ugly hack that works just fine, but I’m looking for a cleaner way to do this… here is the hack:
for frame in range(x.shape[1]):
    out1, hidden = self.rnn_layer1(torch.cat([x[:, frame, :, :]], dim=1).unsqueeze_(1), hidden)
return out1
Essentially concatenating the tensor to nothing… |
st103024 | Can you make sure your code is correct? It looks like your torch.cat call includes hidden as the second argument. |
st103025 | Despite the fact that your actual code does not seem to work (as @aplassard mentioned), this should work:
x_perm = x.transpose(0, 1)
for frame in x_perm.split(1):
    out1, hidden = self.rnn_layer1(frame.squeeze(0), hidden)
Edit: what your hack actually does is allocate new storage and copy the tensor. Given that, a simple x[:, frame, :, :].clone() or x[:, frame, :, :].contiguous() might do the trick as well |
st103026 | import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import torch.nn.functional as F

batch_size = 50

# MNIST dataset
train_dataset = dsets.MNIST(root='../../data_sets/mnist',     # root directory of the data
                            train=True,                       # select the training set
                            transform=transforms.ToTensor(),  # convert to tensors
                            download=False)                   # do not download the images
test_dataset = dsets.MNIST(root='../../data_sets/mnist',      # root directory of the data
                           train=False,                       # select the test set
                           transform=transforms.ToTensor(),   # convert to tensors
                           download=False)                    # do not download the images

# load the data
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)  # shuffle the data
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=True)

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 16, 5, 1, 2),   # 10*32*32, out = 16*28*28
            nn.ReLU(),
            nn.MaxPool2d(2)              # 16*14*14
        )  # output 10*
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),  # 10 x 18 x 18 -> 14*14*32
            nn.ReLU(),
            nn.MaxPool2d(2)              # 32*7*7
        )
        self.out = nn.Linear(32*7*7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        x = self.out(x)
        return x

# define the neural net
net = Net()
if torch.cuda.is_available():
    net.cuda()

optimizer = torch.optim.SGD(net.parameters(), lr=0.0510)  # 0.052
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
loss_func = torch.nn.CrossEntropyLoss()

for epoch in range(10):
    for i, (images, labels) in enumerate(train_loader):  # use enumerate to iterate over the loader
        #images = images.reshape(-1, 28 * 28)
        labels = labels
        if torch.cuda.is_available():
            images = images.cuda()
            labels = labels.cuda()
        out = net(images)
        loss = loss_func(out, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if i % 100 == 0:
            print('current loss = %.5f' % loss.item())

with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        #images = images.reshape(-1, 28 * 28)
        labels = labels
        if torch.cuda.is_available():
            images = images.cuda()
            labels = labels.cuda()
        output_t = net(images)
        _, predicted = torch.max(output_t.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
In this code, if I add reshape(-1, 28, 28) to images, it raises an error. I found that the Conv layer input shape is [batch_size, input_channel, size_x, size_y]; how do I reshape to match that? I corrected it to reshape(50, 1, 28, 28) and it can run. Is it right? |
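A side note on the reshape in the question above (an assumption based on the standard MNIST loader, not something stated in the thread): the loader already yields images of shape [batch_size, 1, 28, 28], so no reshape is needed for this conv net, and if you do reshape, using -1 for the batch dimension is safer than hard-coding 50, since in general the last batch can be smaller:
import torch

full_batch = torch.randn(50, 1, 28, 28)
small_batch = torch.randn(37, 1, 28, 28)   # e.g. a final, smaller batch

print(full_batch.reshape(-1, 1, 28, 28).shape)   # torch.Size([50, 1, 28, 28])
print(small_batch.reshape(-1, 1, 28, 28).shape)  # torch.Size([37, 1, 28, 28])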
st103027 | I am not familiar with how autograd works in the backend. Currently I have to modify the output of my network before passing it to a criterion. My network outputs two things. The code is as follows:
x_dot_wT, phi_theta = output
# must clone else modify in place
f_y = x_dot_wT.clone()
batch_size = target.size(0)
idxs = torch.arange(0, batch_size, dtype=torch.long)
f_y[idxs, target] = ((_lambda * f_y[idxs, target]) + phi_theta[idxs, target]) / (1 + _lambda)
loss = criterion(f_y, target)
I conveniently clone x_dot_wT to avoid the “variable is modified in place” error. I wonder if this will affect autograd negatively and cause my gradient to be computed wrongly. Does it? |
st103028 | Hi,
As long as you don’t use the now deprecated .data, the autograd engine will detect anything that would lead to wrong gradient being computed. In your case for example, this inplace modification. Cloning here is the right fix and will give you the correct gradient. |
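A small sketch of the clone fix described above (torch.exp is used here only because it saves its output for backward, mimicking the situation in the original code; the variable names are just examples):
import torch

x = torch.randn(5, requires_grad=True)
y = x.exp()          # exp() saves its output for the backward pass
# y[0] = 0.          # an in-place write here would raise
#                    # "one of the variables needed for gradient computation
#                    #  has been modified by an inplace operation" at backward()

f_y = y.clone()      # clone first, as in the code above ...
f_y[0] = 0.          # ... then the in-place write is safe
f_y.sum().backward()
print(x.grad)        # correct gradients for x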
st103029 | Will the gradient computed in this case be the same as when I move the modification (code posted) into the network? It seems to me that there will be a difference. I have not done enough experiments to conclude though. |
st103030 | Cloning does not change any gradient, wherever you put it, so if the only change you make is putting a .clone() at a different position, this will give you the same results.
If you’re doing something else, I’m sorry, I don’t understand what you mean. |
st103031 | When I move the block of code into the forward function of my network, I do not have to clone. I am not sure why though (PyTorch doesn’t complain). That’s why I am asking whether cloning will cause the gradient to be computed differently. Within the network, I do not have to clone. Outside of it, I have to. |
st103032 | Or must I be doing something wrong, such that the in-place modification should be complained about wherever the code is placed? |
st103033 | It is possible that when you unpack your network outputs, you create multiple tensors that share the same underlying storage, and so doing in-place operations becomes forbidden.
For example when doing x_dot_wT, phi_theta = output. |
st103034 | It means it depends on the rest of your code: which piece you move around, what is between the other place where you put it and where it is now, etc. |
st103035 | Correct me if I am wrong. PyTorch does not allow in-place modification because it needs to know all the operations carried out so that it can compute gradients later. So right now, since I clone the variable, do something to it, and pass this new variable f_y into the criterion to calculate the loss, autograd wouldn’t know about these operations done to x_dot_wT and would assume that the loss is calculated directly using x_dot_wT, and so it would be different from the case where autograd knows about these modifications when I do not clone?
So in my code when I do loss.backward(), does PyTorch think that the loss is computed using x_dot_wT and not f_y? Is that the case? |
st103036 | I’m not sure I understand your question here.
Keep in mind that not all inplace operations are forbidden, it depends if the tensor is actually needed. For example, you can change inplace the output of a batchnorm layer, but you cannot change inplace the output of a conv layer.
Note as well that indexing a tensor, concatenating it or splitting it counts as autograd ops, and so may prevent you from changing the tensor inplace (depending on which operation you use).
Whatever happens, the autograd engine will give you the correct gradients for the parameters of your net if it does not raise an error. |
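A small illustration of this point, using two stand-in operations (exp() saves its output for backward, while multiplication by a Python scalar does not; these are just example cases, not the conv/batchnorm layers mentioned above):
import torch

# Allowed: multiplication by a scalar does not need its output for backward.
x = torch.randn(5, requires_grad=True)
y = x * 2
y.add_(1)
y.sum().backward()          # works fine

# Forbidden: exp() needs its output to compute the gradient.
x2 = torch.randn(5, requires_grad=True)
z = x2.exp()
z.add_(1)
try:
    z.sum().backward()
except RuntimeError as e:
    print(e)                # "... modified by an inplace operation"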
st103037 | Just curious: why can I sometimes modify in place and sometimes not? Is there anywhere I can read a detailed explanation? |
st103038 | I have the following case where I want to add some loss on top of two DataParallel modules and train m1 only:
m1 = nn.DataParallel(m1)
m2 = nn.DataParallel(m2)
m1_loss, m1_out = m1(input_data)
m2_out = m2(input_data)
then
added_loss = some_operation(m1_out, m2_out)
loss = m1_loss+added_loss
backprop()
some_operation involves a conv layer, so how can I do this? If I add a conv layer in some_operation without DataParallel, would this cause a problem during backprop? |
st103039 | When using torch.matmul or mm, the system returns a segmentation fault error. Why? |
st103040 | Try to re-install pytorch in a proper way. Looks like some extension is not compatible.
[screenshot: image.png] |
st103041 | I tried to re-install PyTorch. Nothing changed. The version that I installed a few months ago works fine. Maybe you can try to re-install on your server. |
st103042 | I tried the upgrade command on my old machine (version 0.4.0, installed on 2018/06/05) and I get this message:
[screenshot: QQ图片20180719164954.png]
On my new server, I tried to uninstall torch and re-install it, and still get the same message: |
st103043 | You may have to dig into the core dump file to find some clues. I guess some of the libraries are not compatible. |
st103044 | Is it possible to build a CNN like the following easily in PyTorch?
X_train[0:99] -> Conv1 -> Conv2 -> MaxPool / + X[99:105] -> linear1 -> linear2 -> Output
As in, we are adding more information to the fully connected layers than simply what the conv layers tell us. For example, imagine doing NLP on movie reviews, but you also know the type of movie, which actors were in it, etc. Would you be able to add that information to the fully connected layers while having the conv layers analyze the actual sentences of the review?
Is this possible? Any examples I could look at? Is it worth trying out this technique? |
st103045 | Solved by ptrblck in post #2
How would you like to split your input data?
Since it seems you would like to use a one-dimensional conv layer, your input should be of shape [batch_size, channels, length].
Are you splitting X_train based on the length? Also, do you want to concatenate the split input?
If so, this could be a sta… |
st103046 | How would you like to split your input data?
Since it seems you would like to use a one-dimensional conv layer, your input should be of shape [batch_size, channels, length].
Are you splitting X_train based on the length? Also, do you want to concatenate the split input?
If so, this could be a starter code:
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv1d(5, 10, 3, 1, 1)
        self.conv2 = nn.Conv1d(10, 10, 3, 1, 1)
        self.pool2 = nn.MaxPool1d(2)
        self.fc1 = nn.Linear(10 * 50 + 5 * 5, 100)
        self.fc2 = nn.Linear(100, 2)

    def forward(self, x):
        x1 = x[:, :, :100]
        x2 = x[:, :, 100:].contiguous().view(x.size(0), -1)
        x1 = F.relu(self.conv1(x1))
        x1 = F.relu(self.conv2(x1))
        x1 = self.pool2(x1)
        x1 = x1.view(x1.size(0), -1)
        x = torch.cat((x1, x2), 1)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

batch_size = 1
channels = 5
length = 105
x = torch.randn(batch_size, channels, length)
model = MyModel()
output = model(x)
Note that you might want to check the ranges before the concatenation, as they might be quite different, which might lead to training difficulties. |
st103047 | @ptrblck works perfectly! By any chance do you know anyone who does similar things, so that I can look at what hyperparameters and architecture they are using? I read all of cs231n, it was super interesting; thank you for that advice last week. |
st103048 | We had quite a long discussion about a similar topic in this thread 8.
Maybe you can get some ideas for your approach. |
st103049 | Sorry to bother you. It looks like I have the opposite problem of the thread: my network won’t overfit. Below is the loss curve. When it jumps downwards it is because I have decreased the learning rate (learning rate annealing). I am using Adam as my optimizer. The y axis is loss and the x axis is the number of epochs.
[plot: download.png]
Last 100 epochs:
[plot: download.png]
Any idea on how I can force it to overfit? |
st103050 | I would scale down the problem to just a single input and try to overfit your model.
If that’s not possible, your architecture, the hyperparameters or the training routine might have a bug or are not suitable for the problem. |
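A minimal sketch of that sanity check, with toy stand-ins (replace the model, criterion, optimizer, and the fixed batch with your own):
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

data, target = torch.randn(4, 10), torch.randn(4, 1)   # one fixed mini-batch

for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(step, loss.item())   # should approach ~0 if the training pipeline is sound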
st103051 | I am using the OpenAI Gym CartPole-v0 environment. Here is the code that doesn’t work (optimizer.zero_grad() and optimizer.step() are performed outside the function):
def make_step(model, optimizer, criterion, observation, action, reward, next_observation):
    inp = torch.from_numpy(observation)
    target = model(torch.from_numpy(observation)).detach().numpy()
    next_target = model(torch.from_numpy(next_observation)).detach().numpy()
    new_reward = np.max(next_target)
    target[action] = reward
    target[action] += new_reward
    obv_reward = model(inp.double())
    target_reward = torch.from_numpy(target)
    loss = criterion(obv_reward, target_reward)
    loss.backward()
On running the code, the agent learns nothing and achieves no more than 10 reward.
Now if I flip the gamma term to the left and remove the network’s foresight, it does slightly better, achieving around 30-120 reward.
def make_step(model, optimizer, criterion, observation, action, reward, next_observation):
    inp = torch.from_numpy(observation)
    target = model(torch.from_numpy(observation)).detach().numpy()
    #next_target = model(torch.from_numpy(next_observation)).detach().numpy()
    #new_reward = np.max(next_target)
    target[action] = reward
    #target[action] += new_reward
    obv_reward = model(inp.double()) - model(torch.from_numpy(next_observation))
    target_reward = torch.from_numpy(target)
    loss = criterion(obv_reward, target_reward)
    loss.backward()
Why is the first one not working and how do I fix it? |
st103052 | For example, I have a matrix M of size (m, n), and I want to sum elementwise powers of this matrix M, e.g.
M^a + M^b + M^c, where [a, b, c] will be given as a tensor. Note that it is elementwise, not matrix multiplication.
Using a for loop is too slow. |
st103053 | Is this what you are after?
m, n = 2, 3
M = torch.randint(10,(m, n))
a, b, c = 2.0, 3.0, 4.0
power = torch.tensor([a,b,c])
torch.pow(M.repeat(power.shape[0],1).reshape(power.shape[0],-1), power.repeat(m*n,1).t()).sum(0).view(m,n) |
st103054 | Thanks for the help. I have solved the problem. What I need is something like this: if my matrix M is of size [2, 10572] and n is of size [3], I can do M[:, None, :] ** n[:, None] and get an output of size [2, 3, 10572]. Then I just need to sum along axis 1.
I think this is very efficient. Not sure about the other solutions provided here though; I have not tested them. |
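A small runnable version of the broadcasting solution described above (the exponent values here are just examples):
import torch

m, k = 2, 10572
M = torch.rand(m, k)
n = torch.tensor([2.0, 3.0, 4.0])        # the exponents a, b, c

powers = M[:, None, :] ** n[:, None]     # shape [2, 3, 10572]
result = powers.sum(dim=1)               # shape [2, 10572]: M**a + M**b + M**c

# sanity check against the naive loop
expected = sum(M ** e for e in n)
print(torch.allclose(result, expected))  # True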
st103055 | When using the torch.mm function I get an underflow error such that I get an output matrix of NaNs, which is passed to subsequent layers of my model. How can I avoid this underflow error? Is it possible to calculate a matrix multiplication in log scale, for example? |
st103056 | The piece of code below gives me back a really weird augmented image. There are no other transformations besides the Resize. I am using PyTorch 0.4.0.
transformed_dataset = data_sets(csv_path=csv_path,
                                image_path=image_path,
                                transform=transforms.Compose([
                                    transforms.Resize(256),
                                    transforms.ToTensor(),
                                ]))
dataloader = DataLoader(transformed_dataset, batch_size=2,
                        shuffle=True, num_workers=0)
for idx, _ in enumerate(dataloader):
    #print(_['input'].size())
    for _ in _['input']:
        img = _.view(256, 256, 3).numpy()
        plt.imshow(img)
I am really not familiar with matplotlib so maybe this is the default behavior? |
st103057 | Solved by ptrblck in post #5
Could you try to use img = img.permute(1, 2, 0) instead of the view call? |
st103058 | What is the photo supposed to look like? One possibility is that the data is floating point (expected range 0-1) but imshow is interpreting it as integer (expected range 0-255), or vice versa. |
st103059 | It’s supposed to be a single-color image of black-and-white line art. I didn’t want to convert it to grayscale because I want to train a classifier to distinguish between color and black/white line art. |
st103060 | ptrblck:
img = img.permute(1, 2, 0)
Thanks! That fixed it. Do you know why view didn’t work? Edited: A quick loop shows permute works for some images but not on all of them?
Edited: Nvm, there was leftover old code. Thanks for the solution! |
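For reference, a tiny illustration of why permute works while view does not: view only reinterprets the existing memory layout, whereas permute actually reorders the axes, so viewing a CHW tensor as HWC scrambles the pixel values (the numbers below are just a toy example):
import torch

c, h, w = 3, 2, 2
img = torch.arange(c * h * w).reshape(c, h, w)   # pretend CHW image

print(img.permute(1, 2, 0)[0, 0])   # tensor([0, 4, 8]): the 3 channel values of pixel (0, 0)
print(img.view(h, w, c)[0, 0])      # tensor([0, 1, 2]): just the first 3 numbers in memory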
st103061 | Hello! I want to implement the Deep Feature Selection method from this paper: https://pdfs.semanticscholar.org/ac2a/c075773cc936a206c3ebb2339376d586bcbd.pdf
But I can’t understand how I can make the weight layer after the input layer. Is it just nn.Linear or something else? Thanks! |
st103062 | I am stuck, my custom classification network fails at the very last batch because of its size.
When I get my batch from the data loader with five-crop applied, I get the following input shapes:
torch.Size([6, 5, 3, 256, 256]) torch.Size([6])
torch.Size([6, 5, 3, 256, 256]) torch.Size([6])
torch.Size([6, 5, 3, 256, 256]) torch.Size([6])
...
torch.Size([1, 5, 3, 256, 256]) torch.Size([1])
According to the documentation you should collapse the batch_size and ncrops before feeding to the network
so I did input.view(-1, 3, 256, 256). The final output after the convolutions and fully connected layers has shape [30, 64, 1, 1]; I then flattened the singleton dimensions so I get [30, 64] as the result. The next step was to average the crops according to the documentation, e.g. result.view(bs, ncrops, -1).mean(1), so I did results.view(6, 5, -1).mean(1), which gave me back a tensor of shape [6, 1]. You can probably tell what the issue is now if you have been following along.
Remeber the last size is:
torch.Size([1, 5, 3, 256, 256]) torch.Size([1])
This will fail when I try to use result.view(bs, ncrops, -1).mean(1).
How do I fix this? I am self-taught, please go easy on me.
EDIT: I see it now; that’s why we do bs, ncrops, c, h, w = input.size(). I thought those were just placeholders. |
st103063 | Hi,
The official release of Caffe2 at the old repository (https://github.com/caffe2/caffe2/releases) is 1 year old.
The last PyTorch release was before the migration.
Any guidance on which Caffe2 branch/checkpoint/changeset one can use, or is the recommendation to use whatever is on master?
Regards & thanks
Kapil |
st103064 | Hello everyone,
I have a pre-trained model called SoundNet, and the weights are available in TensorFlow. I created the model and I loaded the weights. The implementation is available here.
The preprocessing and many parameters were taken from the TensorFlow repository.
The output of the first convolutional layer is exactly the same as the one in TensorFlow; however, the following BatchNorm2d layer yields different numbers.
There is an ambiguity in the way we should use the mean and the variance of the BatchNorm layer in PyTorch. I have doubts about the way I load the weights of BatchNorm in PyTorch and the way I use it:
def put_weights(batchnorm, conv, params_w, batch_norm=True):
    if batch_norm:
        bn_bs = params_w['beta']
        batchnorm.bias = torch.nn.Parameter(torch.from_numpy(bn_bs))
        bn_ws = params_w['gamma']
        batchnorm.weight = torch.nn.Parameter(torch.from_numpy(bn_ws))
        bn_mean = params_w['mean']
        batchnorm.mean = torch.nn.Parameter(torch.from_numpy(bn_mean))
        bn_var = params_w['var']
        batchnorm.variance = torch.nn.Parameter(torch.from_numpy(bn_var))
    conv_bs = params_w['biases']
    conv.bias = torch.nn.Parameter(torch.from_numpy(conv_bs))
    conv_ws = params_w['weights']
    conv_ws = torch.from_numpy(conv_ws).permute(3, 2, 0, 1)
    conv.weight = torch.nn.Parameter(conv_ws)
    return batchnorm, conv
When I use the model to extract the features, I put the model in eval() mode
model.eval() # this to ensure that pre-calculated means and variances are used during feature extraction
Does anybody have an idea why I got this kind of output? Am I using the model weights properly? |
st103065 | I calculated the BatchNorm2d output manually with NumPy broadcasting, as defined in the PyTorch reference. I got values similar to the ones in the TensorFlow implementation, which confirms my doubt about the way I use BatchNorm2d in my implementation.
Any idea how to use these parameters from a pre-trained model for feature extraction? |
st103066 | .mean and .variance do not exist in nn.BatchNorm.
Try to use .running_mean and .running_var instead.
EDIT: Also, I think you should register them as tensors, not nn.Parameters. |
st103067 | After spending a few hours on this, as a beginner in PyTorch, I will post the answer; perhaps it will be useful for some people out there:
For the mean and variance we should use running_mean.data and running_var.data when assigning the pre-trained weights. This is strange, since the other variables accept the weights directly without an explicit .data.
It is safer to use variable.data to assign weights in general. The function to load weights would look like this:
def put_weights(batchnorm, conv, params_w, batch_norm=True):
    if batch_norm:
        bn_bs = params_w['beta']
        batchnorm.bias.data = torch.from_numpy(bn_bs)
        bn_ws = params_w['gamma']
        batchnorm.weight.data = torch.from_numpy(bn_ws)
        bn_mean = params_w['mean']
        batchnorm.running_mean.data = torch.from_numpy(bn_mean)
        bn_var = params_w['var']
        batchnorm.running_var.data = torch.from_numpy(bn_var)
    conv_bs = params_w['biases']
    conv.bias.data = torch.from_numpy(conv_bs)
    conv_ws = params_w['weights']
    conv.weight.data = torch.from_numpy(conv_ws).permute(3, 2, 0, 1)
    return batchnorm, conv |
st103068 | Thank you very much.
That is true; I should have assigned them as tensors via tensor.data instead of registering them as Parameters. |
st103069 | Hi Everyone,
Suppose I have a pre-trained classic CNN model (e.g., a ResNet trained on a dataset) in PyTorch. Now I want to convert this model to C++ (for some reasons, like better performance). Could you please help me do that?
Is the PyTorch ATen (A TENsor library) a solution for this? |
st103070 | There is no direct C++ support for PyTorch models. Maybe you can export to an ONNX model and deploy it using Caffe2. |
st103071 | Dear @Teaonly,
Thank you for your answer. But in my opinion the syntax of the Caffe2 framework is quite hard and its tutorials & docs are really bad. Does another solution exist for the above-mentioned task?
What is your opinion about the PyTorch C++ Library? |
st103072 | Have you tried https://github.com/warmspringwinds/pytorch-cpp?
When I compiled pytorch-cpp, I got errors:
pytorch-cpp/src/pytorch.cpp:341:58: error: ‘Threshold_updateOutput’ was not declared in this scope
…
Any advice? Thanks. |
st103073 | @Hyer_Chen,
No, I haven’t used the above-mentioned repository. You can report your error in the Issues section of that repository. Maybe the repository below is also interesting for you:
GitHub
ebetica/autogradpp 95
autogradpp - Direct C++ Interface to PyTorch
However, I haven’t used that one either!
Note that in the near future the PyTorch team may release a PyTorch-like C++ interface. You can visit the links below:
1- https://github.com/pytorch/pytorch/issues/3335 82
2- https://github.com/caffe2/caffe2/issues/2439 28
3- https://github.com/pytorch/pytorch/issues/6032 39 |
st103074 | Thanks. I think there may be some header file missing. And I am really looking forward to the PyTorch team releasing a PyTorch-like C++ interface. |
st103075 | Caffe2 is now merged into PyTorch, so C++ inference should theoretically be available in the future |
st103076 | You can always try using my implementation of inference for an eval model in C++ code.
It implements only a few modules (ReLU, Sigmoid, Linear, Conv2D, BatchNorm2D), but that should be easy to extend, or you can simply use it to implement your own inference from scratch. |
st103077 | So I was following the five-crop documentation and everything was going great until I tried to wrap my head around how my labels are going to be synced with the input images.
I have a csv file which contains one-hot encodings, think dogs vs. cats. In my dataloader I have a batch size of 8, so after five-crop augmentation I have a batch of shape [8, 5, 3, 256, 256]. I then did .view(-1, 3, 256, 256), which gives me 40 images in a batch. Now the question is: how do I make sure the labels are paired correctly? A quick look at the images shows that they are in order, so I am guessing I need to expand my labels from [8] to [8, 5] and collapse them into [40]; is this correct? Also, how do I duplicate each element in a tensor 5 times so I can have 40 labels? |
st103078 | I think the usual approach would be to average the predictions from the crops and just keep the batch size.
If you really want to extend your batch size, you could try the following:
target = torch.empty(8, dtype=torch.long).random_(10)
target.view(-1, 1).expand(-1, 5).contiguous().view(40) |
st103079 | @ptrblck Okay, so that’s why these 2 lines are in the five/ten-crop documentation:
>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops |
st103080 | The documentation of torch.squeeze 22 (PyTorch 0.4) states that “As an exception to the above, a 1-dimensional tensor of size 1 will not have its dimensions changed.”
However, when I torch.squeeze() a 1-dimensional tensor, it becomes zero-dimensional: i.e. it does get squeezed!
>>> import torch
>>> torch.__version__
'0.4.0'
>>> a = torch.tensor([512])
>>> a.shape
torch.Size([1])
>>> a = a.squeeze()
>>> a.shape
torch.Size([])
Is this an implementation bug, a documentation bug, or am I the bug? |
st103081 | Solved by albanD in post #4
Ho I guess this bit of the doc was forgotten when the 0-dimensional tensors were introduced.
I will fix that, thanks for the report. |
st103082 | Hi,
This is expected. Now the notion of a 0-dimensional tensor exists; it represents a tensor containing a single element. You can see it as a number.
For example, if you call .sum() on a tensor, you will see that you get a 0-dimensionnal tensor as output. |
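For example:
import torch

t = torch.tensor([1.0, 2.0, 3.0])
s = t.sum()
print(s.shape)    # torch.Size([])  : a 0-dimensional tensor
print(s.item())   # 6.0             : extract it as a plain Python number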
st103083 | Thanks, but I was referring to the documentation being misaligned with the behavior.
The docs say that torch.squeeze() “Returns a tensor with all the dimensions of input of size 1 removed… As an exception to the above, a 1-dimensional tensor of size 1 will not have its dimensions changed," but I show that I create a 1-dim tensor and squeeze() does change the tensor’s shape.
What am I missing?
Note that I intentionally created a 1-dim tensor with 1 element in it:
a = torch.tensor([512])
And not a 0-dim tensor:
a = torch.tensor(512) |
st103084 | Ho I guess this bit of the doc was forgotten when the 0-dimensional tensors were introduced.
I will fix that, thanks for the report. |
st103085 | I tried training a multilabel classifier, but the loss.backward() call cannot pass through. It throws the following error:
RuntimeError: grad can be implicitly created only for scalar outputs
Some of my code is as follows:
loss = criterion(pred, label)
where pred and label have shape [batchsize, class_number], and
criterion = nn.BCEWithLogitsLoss(weight=None, reduce=False)
I read in the documentation that the loss must be a scalar; should I use only batch size 1?
Could anybody help me? |
st103086 | I assume you are not summing up / reducing the loss-value in any other (manual) way? You should be able to do
loss = criterion(pred, label)
loss.backward(torch.ones_like(loss))
By default a scalar one is propagated back to calculate the gradients, but for an arbitrary loss tensor size you need to manually specify the tensor which should be propagated back. |
st103087 | Thank you. I have changed the criterion definition to:
criterion = nn.BCEWithLogitsLoss(weight=None, reduce=True)
It can train now. |
st103088 | Why does my multi-label model not converge? Each image has 60 labels and I use 1000 images to train. |
st103089 | I have already built my network and now I would like to pass input to it. My forward() method looks like:
def forward(self, input, index=None):
    if index is not None:
        # Do something
    else:
        # Do something else
And the way I pass input to the network is:
output = net(input, 1)
And this leads to a problem, as index is still ‘None’ even though I pass the value ‘1’ to the network.
After searching online, it seems that the forward() method can only accept inputs of Variable type.
Therefore it seems alright after I modified the code as follows:
output = net(input, Variable(torch.IntTensor([1])))
However, is there any simpler way of doing this? Thank you for your time. |
st103090 | Hi @henrych4.
I have the same problem, but I think it’s due to the fact that internally PyTorch calls __call__, which in turn calls forward with only the input parameter, so index takes the default value None.
The simplest solution I found was to add a set_index(self, index) function to my network and propagate it through the modules.
If you find a better solution, please share |
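A minimal sketch of that set_index workaround (the class, layer, and attribute names here are hypothetical):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(4, 2)
        self.index = None              # default: behave as if index were None

    def set_index(self, index):
        self.index = index

    def forward(self, input):
        if self.index is not None:
            pass                       # do something with self.index
        return self.fc(input)

net = Net()
net.set_index(1)                       # configure before the forward call
output = net(torch.randn(3, 4))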
st103091 | Hello,
I want to load pretrained parameters into my model.
The parameters are saved as a txt file.
Is there any way to load and use these parameters? |
st103092 | Could you post a small example of your saved parameters?
How did you save them? The format would be interesting to see. |
st103093 | I apologize if this question is redundant. Other similar questions talk about downloading Conv2dLocal (Add Conv2dLocal module). I was not able to find this module in the PyTorch documentation, so I assume it is not present in the current version of PyTorch? How can I get this module? And does Conv1dLocal exist? |
st103094 | I am a PyTorch beginner and my English is not good, but I will try to express my question. I used my own image data to train my own model. How do I load the model, feed data into the trained model, and extract the output features of the final convolutional layer? |
st103095 | I suggest PyTorch beginners go through the official tutorials once. Your question is very common, so if you do the tutorial, it will be solved. https://pytorch.org/tutorials/ |
st103096 | If I have a single network that produces two sets of outputs, one for a classification task and another for a regression task, how would I go about computing the loss and backpropagating? I understand I could use two different loss functions, add them, and then call backward; however, regression and classification losses have different units, so would this skew my results in an unpredictable way? How can I go about training a single network to output these two types of outputs? |
st103097 | One way is to take a weighted sum of the regression loss and classification loss. |
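A minimal sketch of such a weighted sum (the weighting factor, shapes, and criteria below are only example assumptions):
import torch
import torch.nn as nn

cls_criterion = nn.CrossEntropyLoss()
reg_criterion = nn.MSELoss()

cls_out = torch.randn(8, 5, requires_grad=True)   # classification logits
reg_out = torch.randn(8, 1, requires_grad=True)   # regression output
cls_target = torch.randint(5, (8,))
reg_target = torch.randn(8, 1)

alpha = 0.5   # weighting factor, typically tuned so neither term dominates
loss = cls_criterion(cls_out, cls_target) + alpha * reg_criterion(reg_out, reg_target)
loss.backward()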
st103098 | Hi,
I’m trying to incorporate graphical structure into my model. Specifically, I’m trying to convert an output tensor to a graph in the networkx package (there are graph algorithms that aren’t expressible as operations on tensors) and then go back to tensors. In my understanding, autograd won’t work here and I’m unsure of a suitable way to proceed. This is non-trivial to me and I hope you all can help me with this.
Thanks in advance! |
st103099 | I have built exactly the same model in both TF and PyTorch, and I trained it in TF. For some reason, I have to transfer the pretrained weights to PyTorch.
The network is like:
[diagram: image.png]
In TF, the Conv2d filter shape is [filter_height, filter_width, in_channels, out_channels], while in PyTorch it is (out_channels, in_channels, kernel_size[0], kernel_size[1]).
So I have done the following in TF:
and I transfer it to PyTorch like this:
It turns out that the DQN in PyTorch is not working as well as in TF! |
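For reference, a small sketch of the layout conversion described above (the filter shape and layer sizes here are hypothetical examples, not the actual DQN weights):
import numpy as np
import torch

# hypothetical TF conv filter: [filter_height, filter_width, in_channels, out_channels]
tf_weight = np.random.randn(8, 8, 4, 32).astype(np.float32)

conv = torch.nn.Conv2d(4, 32, kernel_size=8)
# PyTorch expects (out_channels, in_channels, kH, kW), hence permute(3, 2, 0, 1)
conv.weight.data = torch.from_numpy(tf_weight).permute(3, 2, 0, 1).contiguous()
print(conv.weight.shape)   # torch.Size([32, 4, 8, 8])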