st80268 | Thanks! But if we were trying to do simple classification of the entire image, then we would need some more logic here. In particular, if we were trying to do NLL against, say, 10 classes, then perhaps one approach would be to average the values in each of the 10 feature maps of the last layer. I suppose this would be the “obvious” spatial-dimension invariant simple classifier.
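A minimal sketch of that idea (global average pooling over each of the 10 maps, then NLL), assuming a toy feature tensor:
import torch
import torch.nn as nn

# toy feature maps: batch of 4, 10 channels (one per class), arbitrary spatial size
features = torch.randn(4, 10, 7, 9)

pool = nn.AdaptiveAvgPool2d(1)         # average each feature map down to a single value
logits = pool(features).flatten(1)     # shape (4, 10), independent of the spatial size
log_probs = nn.functional.log_softmax(logits, dim=1)
loss = nn.NLLLoss()(log_probs, torch.randint(0, 10, (4,))) |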
st80269 | Hi smth! What should I do if the spatial size of the input for the convolutional layers is the same but the depth size of the input is variable? Does AdaptiveAvgPooling still work? Thanks! |
st80270 | Hi,
I have a 3d tensor sized (batch_size, S, 5).
I also have a list containing a list for each batch, that contains indices.
For example, if the batch size is 2, the list looks like this:
[[idx0_0, idx0_1, …], [idx1_0, idx1_1, …]] (where each idx is between 0 to S).
I want to get a tensor that contains, for each batch, the 3rd dimension’s stuff from the original tensor, only from the indices that appear in the list of the same batch, while keeping the gradient of the original tensor.
For example, if the original tensor is
[[[A],
[B],
[C]]]
(batch_size = 1, S = 3, A, B, C are sized 5)
and the list is [[0, 2]]
The result will be:
[[[A],
[C]]]
and that it’ll be possible to backpropagate through this tensor.
I heard about the gather function but I’m not sure if it fits here and how to use it in this case.
Thank you! |
st80271 | Hi, can you specify what you mean by “keeping the gradient of the original tensor”?
If it means that you just want to propagate the gradient incoming to:
[[[A],
[C]]]
we can note:
[[[grad(A)],
[grad(C)]]]
then for the input of the operation, in the most straightforward way you would obtain:
[[[grad(A)],
[None],
[grad(C)]]]
Which is what torch.gather would do.
In case you want something else, please specify but I assume you will have to extend torch.autograd as in:
https://pytorch.org/docs/stable/notes/extending.html
Best,
Samuel |
st80272 | Hi and thank you for the reply.
I meant that I want to know the operation needed to get the tensor with the appropriate values from the original tensor & the list (as I explained), in such a way that it’ll be possible to backpropagate through the result. (This computation is done while computing the loss, so it has to be differentiable.)
Thank you! |
st80273 | Ok in that case, torch.gather will do the job.
I think you could also do:
results = [[[A],
[B],
[C]]]
l = [0, 2]
updated_results = results[:, l, :]
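If each batch element selects the same number of indices, a minimal torch.gather sketch (with made-up shapes) could also look like the following; with variable-length index lists per element, looping and indexing as above is the straightforward alternative. gather is differentiable, so gradients flow back to the original tensor:
import torch

x = torch.randn(2, 4, 5, requires_grad=True)   # (batch_size, S, 5)
idx = torch.tensor([[0, 2], [1, 3]])            # one index list per batch element

# expand the index to the last dimension so gather picks whole rows
selected = x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))  # (2, 2, 5)
selected.sum().backward()                       # gradients flow back into x |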
st80274 | I am learning attention mechanism.
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
In the tutorial, the alignment score is calculated based on the decoder’s input and hidden state.
However, I read several papers about attention. They use the current target hidden state ht with each source hidden state hs to compute the alignment score, such as the one entitled Effective Approaches to Attention-based Neural Machine Translation. I do not know why.
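For illustration, a rough sketch of the score described in that paper (the “general” form, score(ht, hs) = ht^T W hs), with made-up dimensions:
import torch
import torch.nn as nn

hidden = 128
W_a = nn.Linear(hidden, hidden, bias=False)

h_t = torch.randn(1, hidden)               # current decoder (target) hidden state
h_s = torch.randn(10, hidden)              # all encoder (source) hidden states

scores = W_a(h_s) @ h_t.t()                # (10, 1) alignment scores
attn_weights = torch.softmax(scores, dim=0)
context = (attn_weights * h_s).sum(dim=0)  # weighted sum of the source states |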
st80275 | Hello,
I’m trying to understand the calculations and manually calculate this simple network.
All calculations are good except for the weight.
What am I doing wrong?
import torch
import torch.nn as nn
import torch.optim as optim
weight_0 = 0.25
bias_0 = 0.68
l_rate = 0.01
input_data = torch.Tensor([[2.2], [4.0]])
target_data = torch.Tensor([[4.1], [5.1]])
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(1, 1)
        self.fc1.weight.data = torch.tensor([[weight_0]])
        self.fc1.bias.data = torch.tensor([bias_0])

    def forward(self, x):
        x = self.fc1(x)
        return x
net = Net()
loss_f = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr = l_rate)
#net.train() #one epoch
net_out = net(input_data)
loss = loss_f(net_out, target_data)
optimizer.zero_grad()
loss.backward()
optimizer.step()
net.eval()
print("loss: ", loss.data) # 9.9666
print("weight:", net.fc1.weight.data, net.fc1.weight.grad.data) # 0.4499, -19.9940
print("bias: ", net.fc1.bias.data, net.fc1.bias.grad.data) # 0.7429, -6.2900
print("calculations:")
in_mean = (input_data[0] + input_data[1]) / 2
out_1 = (input_data[0] * weight_0 + bias_0)
out_2 = (input_data[1] * weight_0 + bias_0)
loss_1 = (out_1 - target_data[0]) ** 2
loss_2 = (out_2 - target_data[1]) ** 2
loss_out = (loss_1 + loss_2) / 2
loss_d_1 = (out_1 - target_data[0]) * 2
loss_d_2 = (out_2 - target_data[1]) * 2
loss_d_out = (loss_d_1 + loss_d_2) / 2
weight = weight_0 - loss_d_out * l_rate * in_mean
bias = bias_0 - loss_d_out * l_rate
print("loss:", loss_out) # 9.9666
print("loss_d:", loss_d_out) # -6.2900
print("weight:", weight) # 0.4450 (should be 0.4499)
print("bias:", bias) # 0.7429
print(net.fc1.weight.data, "-", weight, "=", net.fc1.weight.data-weight.data) # 0.4499 - 0.4450 = 0.0050
Thanks! |
st80276 | Solved by phan_phan in post #2
It’s because you should have separated the computations for each element of your batch!
Here, instead of:
loss_d_out = (loss_d_1 + loss_d_2) / 2
weight = weight_0 - l_rate * loss_d_out * in_mean
bias = bias_0 - l_rate * loss_d_out
it should rather be:
weight = weight_0 - l_rate * (loss_d_1 * in_1… |
st80277 | It’s because you should have separated the computations for each element of your batch!
Here, instead of:
loss_d_out = (loss_d_1 + loss_d_2) / 2
weight = weight_0 - l_rate * loss_d_out * in_mean
bias = bias_0 - l_rate * loss_d_out
it should rather be:
weight = weight_0 - l_rate * (loss_d_1 * in_1 + loss_d_2 * in_2) / 2
bias = bias_0 - l_rate * (loss_d_1 + loss_d_2) / 2
(Though it doesn’t change anything for the bias.)
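Plugged into the variables from the original snippet, the corrected update looks roughly like this (the weight gradient is the mean of the per-sample gradient times the per-sample input, not the mean gradient times the mean input):
in_1, in_2 = input_data[0], input_data[1]
weight = weight_0 - l_rate * (loss_d_1 * in_1 + loss_d_2 * in_2) / 2
bias = bias_0 - l_rate * (loss_d_1 + loss_d_2) / 2
print(weight)  # now matches net.fc1.weight after the SGD step (~0.4499) |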
st80278 | Hello everyone, now, I have a 2-D tensor, A = [1, 2 ; 3, 4], and a 1-D vector, B = [5, 6]. I want to achieve C which is C = [15, 25; 36, 46]. Can pytorch0.4.1 calculate this efficiently? Sure, I know that this can be achieved by A*diag(B), but diag(B) need too much memory. Looking forward to your reply. |
st80279 | Hi,
You can use expand and element-wise multiplication:
C = A * B.unsqueeze(1).expand_as(A)
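For reference, a minimal sketch of both broadcast variants; which one you need depends on which side diag(B) multiplies A (the expand_as is optional, since broadcasting handles it):
import torch

A = torch.tensor([[1., 2.], [3., 4.]])
B = torch.tensor([5., 6.])

# A @ diag(B): scales the columns of A
C_cols = A * B.unsqueeze(0)   # [[ 5., 12.], [15., 24.]]

# diag(B) @ A: scales the rows of A
C_rows = A * B.unsqueeze(1)   # [[ 5., 10.], [18., 24.]] |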
st80280 | So, I’ve been trying to modify http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html to work with DenseNet instead of ResNet, but I can’t seem to figure out what to change the fc layer to.
This is what I currently have:
model_ft = models.densenet169(pretrained=True)
num_features = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_features, len(CLASS_NAMES))
but running the code gives me this error:
Traceback (most recent call last):
File "pytorch_densenet.py", line 154, in <module>
num_features = model_ft.fc.in_features
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 398, in __getattr__
type(self).__name__, name))
AttributeError: 'DenseNet' object has no attribute 'fc'
So, uh… How do I change the last layer of DenseNet to work with Transfer Learning?
Thanks a lot. |
st80281 | Looking at the model https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py it looks like they assigned all of the conv layers to self.features and all of the classification layers to self.classifier.
In your case, model.features should give you the feature extractor that you want, but not all of the torchvision models follow the same API. In the future you will just have to look at the model in that repo and figure out where the last conv layer is. |
st80282 | Did you figure out how to use densenet instead of resnet? I cannot find an example that does so.
I get the following error:
/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
Downloading: "https://download.pytorch.org/models/densenet161-8d451a50.pth" to /home/grad3/jalal/.torch/models/densenet161-8d451a50.pth
100%|██████████| 115730790/115730790 [00:04<00:00, 24886091.87it/s]
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-290-2e17f45b78dc> in <module>()
12
13
---> 14 num_ftrs = model_ft.fc.in_features
15 model_ft.fc = nn.Linear(num_ftrs, 9)
16
/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
516 return modules[name]
517 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 518 type(self).__name__, name))
519
520 def __setattr__(self, name, value):
AttributeError: 'DenseNet' object has no attribute 'fc'
for the following code:
######################################################################
# Finetuning the convnet
# ----------------------
#
# Load a pretrained model and reset final fully connected layer.
#
class_weights = torch.FloatTensor(weight).cuda()
#model_ft = models.resnet18(pretrained=True)
###model_ft = models.resnet50(pretrained=True)
model_ft = models.densenet161(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 9)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss(weight=class_weights)
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
###optim.Adam(amsgrad=True)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) |
st80283 | Actually, I replaced these two lines and it seems the training is happening, but I am not sure if replacing resnet50 with densenet161 only needs these two line replacements:
###num_ftrs = model_ft.fc.in_features
num_ftrs = model_ft.classifier.in_features
###model_ft.fc = nn.Linear(num_ftrs, 9)
model_ft.classifier = nn.Linear(num_ftrs, 9) |
st80284 | Resnet uses the name fc for its last layer while Densenet uses the name classifier for its last layer. You can see this naming and indexing by printing out the model:
model = models.densenet161(pretrained=True)
print(model) |
st80285 | This is well documented in the pytorch tutorials. Here is an extract with your solution (agreeing with what @rusty said above):
To reshape the network, we reinitialize the classifier’s linear layer as model.classifier = nn.Linear(1024, num_classes)
Source: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html |
st80286 | To just add some extra information, if you want to use each block of DenseNet in your network, you can use the following:
pretrained_model = densenet121()
self.features = pretrained_model._modules['features']
self.block = self.features._modules['denseblock1'] |
st80287 | You can add a customized classifier as follows:
Check the architecture of your model; in this case it is a Densenet-161. Printing it yields the following (only the last layers are shown here):
…
)
(denselayer24): _DenseLayer(
(norm1): BatchNorm2d(2160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu1): ReLU(inplace=True)
(conv1): Conv2d(2160, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(norm2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu2): ReLU(inplace=True)
(conv2): Conv2d(192, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
)
)
(norm5): BatchNorm2d(2208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(classifier): Linear(in_features=2208, out_features=1000, bias=True)
)
So you can see that “norm5” has an output size of 2208, which is the output size of the Densenet-161 without the classifier.
You can verify the number of features (which equals the output size) with
num_ftrs = model_transfer.classifier.in_features
num_ftrs
This prints again 2208.
Now you can add your classifier to the network:
model_transfer.classifier = nn.Sequential(
nn.Linear(num_ftrs, 256),
nn.ReLU(),
nn.Dropout(0.4),
nn.Linear(256, n_classes),
nn.Softmax(dim=1))
Verify by printing model with
model_transfer
yields:
…
(norm5): BatchNorm2d(2208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(classifier): Sequential(
(0): Linear(in_features=2208, out_features=256, bias=True)
(1): ReLU()
(2): Dropout(p=0.4, inplace=False)
(3): Linear(in_features=256, out_features=133, bias=True)
(4): LogSoftmax()
)
)
Now you are ready to use your own Densenet-161! |
st80288 | I started using PyTorch and I find it very nice. I tried to implement a basic linear regression model using a simple feed-forward neural network; I trained the model and observed the loss decreasing over time, but I didn’t figure out how to get the accuracy of such a model with PyTorch. In Keras that’s easy using metrics = ["accuracy"] inside the compile function, and in scikit-learn it is also very easy using r2_score or other score functions. Why don’t such things exist in PyTorch? And if I have to implement an accuracy function myself for a linear regression model, how can I do that?
PS: I know how to implement such a function for a classification problem because it has discrete values. Linear regression is different because we have continuous values, which is why I couldn’t find a way to implement an accuracy function for it. I could just look at the value of the loss and say whether my model is good or not, but it would be better to implement an accuracy function like in Keras or sklearn and print the accuracy of the model for every epoch. That would be great.
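There is no built-in “accuracy” for regression, but metrics such as R² or mean absolute error are easy to compute by hand; a minimal sketch (assuming preds and targets are 1-D tensors):
import torch

def r2_score(preds: torch.Tensor, targets: torch.Tensor) -> float:
    ss_res = torch.sum((targets - preds) ** 2)
    ss_tot = torch.sum((targets - targets.mean()) ** 2)
    return (1 - ss_res / ss_tot).item()

def mae(preds: torch.Tensor, targets: torch.Tensor) -> float:
    return torch.mean(torch.abs(targets - preds)).item() |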
st80289 | Hello
I have some code that is subclassing nn.module:
class Connection(torch.nn.module):
super().__init__()
def reset_(self) -> None:
"""
Contains resetting logic for the connection.
"""
super().reset_()
I don’t know what exactly reset_() does,
and I did not find any reset_() function in the nn.Module source code.
So how can I figure out what reset_() exactly does? |
st80290 | Solved by ptrblck in post #5
Thanks for code.
I tried to create a small reproducible code snippet and it just seems the base class’ reset_ is called, which does nothing, since it’s implemented as a pass:
class AbstractConnection(ABC, nn.Module):
def __init__(self):
super(AbstractConnection, self).__init__()
… |
st80291 | As you said, the nn.Module does not have the reset_ method implemented.
Where did you find the code?
Also, there are a few issues regardless of the unknown reset_ method:
nn.module does not exist and should probably be nn.Module
the super().__init__() call is outside of the class’ __init__ method
the indentation of super().reset_() doesn’t match the def reset indent. |
st80292 | Hey… thanks for your response.
This is almost the full code:
class AbstractConnection(ABC, Module):
def __init__(
self,
source: Nodes,
target: Nodes,
nu: Optional[Union[float, Sequence[float]]] = None,
reduction: Optional[callable] = None,
weight_decay: float = 0.0,
**kwargs
) -> None:
super().__init__()
class Connection(AbstractConnection):
def __init__(
self,
source: Nodes,
target: Nodes,
nu: Optional[Union[float, Sequence[float]]] = None,
reduction: Optional[callable] = None,
weight_decay: float = 0.0,
**kwargs
) -> None:
def reset_(self) -> None:
# language=rst
"""
Contains resetting logic for the connection.
"""
super().reset_() |
st80293 | Thanks for the code.
I tried to create a small reproducible code snippet and it just seems the base class’ reset_ is called, which does nothing, since it’s implemented as a pass:
class AbstractConnection(ABC, nn.Module):
    def __init__(self):
        super(AbstractConnection, self).__init__()

    @abstractmethod
    def reset_(self):
        print("base class called")

class Connection(AbstractConnection):
    def reset_(self):
        super().reset_()

con = Connection()
con.reset_()
> base class called
Is this method implemented elsewhere using something other than the super().reset_() call? |
st80294 | Thank you @ptrblck very much… Just why do we have to write that when we are not using this reset_() function?
I just saw in another module that reset_() has been defined there, but it is not called by this module… that’s why I don’t know what this function does… |
st80295 | I’m not familiar with the complete code, but I would assume this function is somewhere implemented and used.
Did another class derive from the class containing the implemented reset_ method? |
st80296 | Yeah, there is another class that subclasses the class in which the reset_ method is defined… but that object is not called in the class whose code I sent… Is it reasonable to use a method that is defined in another object when I’m not calling that module/object? |
st80297 | I’m really not sure, what the complete code is doing, so either the author had something in mind and didn’t implement it or some other code parts are using the implemented logic and you are missing them.
Could you post a link to the original implementation, so that we could have a look? |
st80298 | @ptrblck your questions helped me find my answer… I found this reset_() function in another module that was imported into this module… Thank you very much for all the time you gave me, I appreciate your help :))) |
st80299 | I’m trying to create a Siamese network for the Omniglot dataset, for which I need to implement a custom sampler that will yield batches of pairs of images on which the neural net will be run. I need a little help with building this custom sampler. A little guidance in the right direction should be enough I believe. |
st80300 | The usual approach would be to write a custom Dataset and load and return the image pairs in its __getitem__ method.
How would you like to create these pairs, i.e. do you have any specific logic in mind?
Also, have a look at the Data Loading tutorial 71 for more information on how to create a Dataset. |
st80301 | Hey!
Yeah, so basically I have to generate random pairs of images; if they are the same I label the output as 1, otherwise I label it as 0. My only concern is that I want each batch to have the same number of different pairs as same pairs of images, which is why I was hoping to create a Sampler that can generate these batches for me, which I can then use to initialize the DataLoader class.
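One possible sketch of that idea, using a pair Dataset instead of a custom Sampler (images, labels and the even/odd scheme below are placeholders): even indices return a same-class pair (target 1) and odd indices a different-class pair (target 0), so every batch stays roughly balanced:
import random
import torch
from torch.utils.data import Dataset

class PairDataset(Dataset):
    def __init__(self, images, labels):
        self.images = images                  # tensor of images
        self.labels = labels                  # list/tensor of class ids
        self.by_class = {}
        for i, l in enumerate(labels):
            self.by_class.setdefault(int(l), []).append(i)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        anchor_label = int(self.labels[idx])
        if idx % 2 == 0:                      # same-class pair -> target 1
            j = random.choice(self.by_class[anchor_label])
            target = 1.0
        else:                                 # different-class pair -> target 0
            other = random.choice([c for c in self.by_class if c != anchor_label])
            j = random.choice(self.by_class[other])
            target = 0.0
        return self.images[idx], self.images[j], torch.tensor(target) |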
st80302 | I am proficient enough to understand how to read PyTorch code and reimplement it to fit my own needs, but being self-taught there are still a lot of things I do not understand. Truth be told, I didn’t do a lot of OOP at all before learning PyTorch; I mainly just made many functions and chained them together to make my network work. Since I started looking at other people’s code to learn PyTorch I have noticed that there are 2 major ways of coding networks. One way is to stuff all the layers in nn.Sequential() and just assign that to a model, OR define a class and then assign that to a model. My question is: what is the major difference? I have tried both ways and IMO nn.Sequential is easier; I have also seen nn.Sequential defined within the model class as well. |
st80303 | You can use whatever fits your use case.
While some people like to use the nn.Sequential approach a lot, I usually just use it for small sub-modules and like to define my model in a functional way, i.e. derive from nn.Module and write the forward method myself.
This approach gives you more flexibility in my opinion, since you are able to easily create skip connections etc.
On the other hand, if you just want to stack layers together, nn.Sequential is totally fine.
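For example, a residual/skip connection is awkward to express with a plain nn.Sequential but trivial in a custom forward; a minimal sketch:
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # the skip connection is not expressible as a pure layer stack |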
st80304 | There shouldn’t be any differences regarding the performance, but let us know, if you’ve encountered something. |
st80305 | I have to create a model which has 3 parallel CNNs network. An image is fed into all the 3 networks and finally the outputs of the three networks are concatenated.
Can I model this if I define all the CNN networks in different classes?
nn.sequential can work for this if I define 3 different layers in the same class and concatenate them in forward method.
But I want to know if I can model such networks in three different classes and finally concatenate them when I train them? Will there be a problem in backpropogation?
Thanks. ! |
st80306 | Yes, this should be possible.
You could create a “parent model” and pass or initialize the submodels in its __init__ method.
In the forward just call each submodel with the corresponding data and concatenate their outputs afterwards.
This won’t create any issues regarding backpropagation.
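A rough sketch of that “parent model” idea (the three branch submodels here are just placeholders for your CNN classes):
import torch
import torch.nn as nn

class ParallelModel(nn.Module):
    def __init__(self, branch_a, branch_b, branch_c):
        super().__init__()
        self.branch_a = branch_a
        self.branch_b = branch_b
        self.branch_c = branch_c

    def forward(self, x):
        out_a = self.branch_a(x)
        out_b = self.branch_b(x)
        out_c = self.branch_c(x)
        return torch.cat((out_a, out_b, out_c), dim=1)  # concatenate along the channel dim |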
st80307 | Hi !
I have a matrix n*m of n different vectors of dimensions m.
I would like to get n matrices of size m*m with each matrix being a diagonal of a vector.
I guess I could do:
d = []
for vec in mat:
d.append(torch.diag(vec))
torch.stack(d)
But isn’t there any better ‘pytorchic’ way ? |
st80308 | Solved by SimonW in post #2
Yes! It’s kind of hidden, but here you go:
>>> mat = Variable(torch.randn(3, 4))
>>> res = Variable(torch.zeros(3, 4, 4))
>>> res.as_strided(mat.size(), [res.stride(0), res.size(2) + 1]).copy_(mat)
Variable containing:
0.0422 0.0896 1.4919 -0.7167
-2.4854 0.3412 -1.4421 -0.5081
-0.6238 0.2446 … |
st80309 | Yes! It’s kind of hidden, but here you go:
>>> mat = Variable(torch.randn(3, 4))
>>> res = Variable(torch.zeros(3, 4, 4))
>>> res.as_strided(mat.size(), [res.stride(0), res.size(2) + 1]).copy_(mat)
Variable containing:
0.0422 0.0896 1.4919 -0.7167
-2.4854 0.3412 -1.4421 -0.5081
-0.6238 0.2446 -0.2848 -0.4184
[torch.FloatTensor of size (3,4)]
>>> res
(0 ,.,.) =
0.0422 0.0000 0.0000 0.0000
0.0000 0.0896 0.0000 0.0000
0.0000 0.0000 1.4919 0.0000
0.0000 0.0000 0.0000 -0.7167
(1 ,.,.) =
-2.4854 0.0000 0.0000 0.0000
0.0000 0.3412 0.0000 0.0000
0.0000 0.0000 -1.4421 0.0000
0.0000 0.0000 0.0000 -0.5081
(2 ,.,.) =
-0.6238 0.0000 0.0000 0.0000
0.0000 0.2446 0.0000 0.0000
0.0000 0.0000 -0.2848 0.0000
0.0000 0.0000 0.0000 -0.4184
[torch.FloatTensor of size 3x4x4]
I’ve created an issue for this at https://github.com/pytorch/pytorch/issues/5198 |
st80310 | For the sake of completeness and to save future readers trouble, the function you are looking for is torch.diag_embed.
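A quick usage example (torch.diag_embed is available in recent releases):
import torch

mat = torch.randn(3, 4)        # n vectors of dimension m
res = torch.diag_embed(mat)    # shape (3, 4, 4): one diagonal matrix per row |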
st80311 | A simple example is given below:
class my_model(nn.Module):
    def __init__(self):
        super(my_model, self).__init__()
        self.all_layers = nn.ModuleList()
        for i in range(10):
            layers = []
            layers.append(nn.Linear(10, 10))
            layers.append(nn.BatchNorm1d(10))
            self.main = nn.Sequential(*layers)
            self.all_layers.append(self.main)

    def forward(self, zn, x):
        output = []
        for i in range(10):
            mask = x == i
            temp = zn[mask]
            print(temp.size())
            output.append(self.all_layers[i](temp))
        return output
model = my_model()
print(model)
target = torch.randn(60,10)
loss =nn.MSELoss()
input = torch.randn(60,10)
labels = torch.randint(0,10,(60,))
zn = torch.randn(60,10)
predict = model(zn, labels)
print(len(predict))
print(predict)
criteria = loss(predict,target)
TypeError: expected Tensor as element 0 in argument 0, but got list |
st80312 | I’m not exactly sure, how the code is supposed to work.
Currently you have some issues in your code:
self.main is never used
you recreate layers = [] inside the loop, and overwrite self.all_layers with the last two layers
the returned output list will throw the mentioned error, since the criterion expects tensors not lists
even if you try to torch.stack the output (which would be the usual workflow), you will most likely encounter errors, since you are using a mask to index zn, which will create variable sized outputs |
st80313 | Thank you for your kind reply.
The idea is simple.
Let’s say a, b, c are the same model architectures (e.g., a simple nn.Linear layer). These three models are trained on class-specific samples from a single batch. The final output is d = a+b+c (all the individual model outputs are mutually exclusive), i.e. a is trained on the first 15 samples, b is trained on the next 20 samples and c is trained on the last 15 samples. The final loss (the sum of the individual losses) is backpropagated to update the respective model parameters. In Keras, we can create a single model with 3 different dictionaries. Each dictionary model can be updated by specific training samples of the current batch, i.e. all the individual dictionary models are trained separately based on specific training samples.
For the PyTorch solution, I have created a main model in which the 3 submodels (a, b, c) are assigned via nn.ModuleList(). That means all the individual models share the same structure but are mutually exclusive in nature. Now, the mask is defined for selecting the models from the main model. The final output is passed through the MSE loss. If I run the optimizer steps, the models will be updated based on the selected samples. Am I doing something wrong? Please tell me. If I am wrong, then how do I solve the problem? |
st80314 | Thanks for the clarification!
In that case you would have to deal with some edge cases:
if the current batch does not contain samples from a specific class, you should skip it. Otherwise you will get an all-zero mask and the code will raise an exception. Maybe you could use torch.unique to get all current class labels and loop over these indices instead of looping over all classes
if the current batch only contains a single sample of a class, your nn.BatchNorm1d layer(s) will throw an exception, since they cannot calculate the batch stats for a single sample (and 1 sequence length).
However, if you somehow make sure that your batches are balanced, your code works fine:
model = my_model()
print(model)
target = torch.randn(60,10)
loss =nn.MSELoss()
input = torch.randn(60,10)
labels = torch.arange(10).view(10, 1).repeat(1, 6).view(-1)
zn = torch.randn(60,10)
predict = model(zn, labels)
output = torch.cat(predict)
criteria = loss(output,target) |
st80315 | Thank you for your kind reply.
torch.unique is a good idea to use instead of class labels. I never thought about that.
" if the current batch only contains a single sample of a class, your nn.BatchNorm1d layer(s) will throw an exception, since they cannot calculate the batch stats for a single sample (and 1 sequence length)." exactly. but the model will be trained on the balanced dataset.
I have a fundamental question regarding the loss backward. The final loss is the summations of all losses. I think scalar loss value will update the parameters of individual models that are connected by the closed graph of specific input labels. Is the assumption correct? |
st80316 | TanmDL:
I have a fundamental question regarding the loss backward. The final loss is the summations of all losses. I think scalar loss value will update the parameters of individual models that are connected by the closed graph of specific input labels. Is the assumption correct?
I think your explanation is correct, although I’m not sure, what “closed” graph means exactly.
However, each operation in the forward pass will create a computation graph, which will be used to calculate the gradients for the involved parameters in the backward call. |
st80317 | Thank you for your reply.
“closed graph” means that model “a” parameters will update based on 15 samples, model ‘b’ parameters update on the next 20 samples and model ‘c’ parameters update on the last 25 samples respectively. Although, the final loss is summations of all 3 models losses and loss gradient backward to update the individual model parameters. that means model ‘b’ would not depend on the first 15 samples and the last 25 samples respectively (because model b parameters don’t have closed connection between two sample spaces). This is my understanding. Maybe I am wrong. |
st80318 | Hi guys!
I’m a college student who is studying ML.
I made a binary classifier for classifying whether a picture shows a horse or a human.
It worked with a one-layer NN, but it didn’t work with multiple layers like two or three.
This is my code for the training part.
If someone knows the problem in my code, let me know what is wrong.
Thanks!!
#This is code
for epoch in range(NUM_EPOCH+1):
    #forward propagation(train)
    trZ1=np.dot(u,trainX)+a#Layer 1
    trZ2=np.dot(v,trA1)+b#Layer 2
    trA2=sigmoid(trZ2)
    trZ3=np.dot(w,trA2)+c#Layer 3
    trA3=sigmoid(trZ3)
    #get train loss
    trloss=-(np.multiply(trainY,np.log(trA3))+np.multiply((1-trainY),np.log(1-trA3)))
    trloss=1/trDataNum*np.sum(trloss)
    trLossArray[epoch]=trloss
    #forward propagation(test)
    tZ1=np.dot(u,testX)+a#Layer 1
    tA1=sigmoid(tZ1)
    tZ2=np.dot(v,tA1)+b#Layer 2
    tA2=sigmoid(tZ2)
    tZ3=np.dot(w,tA2)+c#Layer 3
    tA3=sigmoid(tZ3)
    #get test loss
    tloss=-(np.multiply(testY,np.log(tA3))+np.multiply((1-testY),np.log(1-tA3)))
    tloss=1/tDataNum*np.sum(tloss)
    tLossArray[epoch]=tloss
    #get Accuracy
    trainPY=np.where(trA3>=0.5,1.,0.)
    trAccuracy=((trainPY == trainY).sum())/trDataNum
    trAcArray[epoch]=trAccuracy
    testPY=np.where(tA3>=0.5,1.,0.)
    tAccuracy=((testPY == testY).sum())/tDataNum
    tAcArray[epoch]=tAccuracy
    #backward propagation
    dz3=trA3-trainY
    dw=1/trDataNum*np.dot(dz3,trA2.T)
    dc=1/tDataNum*np.sum(dz3,axis=1,keepdims=True)
    dz2=np.multiply(np.dot(w.T,dz3),sigmoid(trA2,True))
    dv=1/trDataNum*np.dot(dz2,trA1.T)
    db=1/tDataNum*np.sum(dz2,axis=1,keepdims=True)
    dz1=np.multiply(np.dot(v.T,dz2),sigmoid(trA1,True))
    du=1/trDataNum*np.dot(dz1,trainX.T)
    da=1/tDataNum*np.sum(dz1,axis=1,keepdims=True)
    #update weight and bias
    u=u-lr*du
    a=a-lr*da
    v=v-lr*dv
    b=b-lr*db
    w=w-lr*dw
    c=c-lr*dc
    #check the data per epoch 50
    if epoch%50==0:
        print("epoch :" + str(epoch+1))
        print("train loss : " +np.array2string(trloss))
        print("test loss : " +np.array2string(tloss))
        print("train accuracy : " +np.array2string(trAccuracy))
        print("test accuracy : " +np.array2string(tAccuracy)) |
st80319 | Hi friends in Pytorch community,
I want to find out the inference time of each (conv, bn, relu) layer in a pytorch model (like resnet101 implemented in the torchvision package). Is there any way to do this? I guess maybe register_buffer is suitable for this task, but I don’t know how to figure it out. Does anyone have any idea? Thanks in advance |
st80320 | Solved by albanD in post #2
Hi,
register_buffer is used to save buffers that you then use during the forward pass.
I would recommend using the autograd profiler to measure the runtime. |
st80321 | Hi,
register_buffer is used to save buffers that you then use during the forward pass.
I would recommend using the autograd profiler to measure the runtime.
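A minimal sketch of how the profiler can be used (it reports timings per operator type rather than per named layer):
import torch
import torchvision

model = torchvision.models.resnet101().cuda().eval()
x = torch.randn(1, 3, 224, 224, device='cuda')

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    with torch.no_grad():
        model(x)

print(prof.key_averages().table(sort_by='cuda_time_total')) |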
st80322 | torch.cuda.is_available() returns False with cuda10.0, but True with cuda8.0. So I am confused with this problem.
I have installed CUDA in different versions (8.0 as well as 10.0) and the corresponding cudnn, but the returned values are opposite. Is this related to the NVIDIA GPU driver version (which is 375.26)? Does this make CUDA 10.0 not work? |
st80323 | I strongly believe that you are correct. The issue has to do with the CUDA and driver incompatibility.
Reference: https://docs.nvidia.com/deploy/cuda-compatibility/index.html
So you may have to upgrade drivers for CUDA 10 support |
st80324 | torch.cuda.is_available() also returned False for me.
But after installing the most recent NVIDIA driver, version 436.48, True is displayed. I had previously updated PyTorch to 1.2.0.
I have Windows 10 and Anaconda. |
st80325 | Hi, all, when using pytorch for training, I met some problems:
(screenshots of the error messages were attached)
pytorch version: 1.2.0.dev20190715
cuda version: 9.0
gpu: GTX 1080ti.
I’ve spent some time to debug this problem, but failed unfortunately. Hope to get some help here! |
st80326 | Do you see the same error, if you run the code on CPU?
This might yield a clearer error message than the current CUDA one.
If it’s working fine on the CPU, could you rerun the code using
CUDA_LAUNCH_BLOCKING=1 python script.py args
and post the stack trace again?
PS: You can post code directly by wrapping it in three backticks ``` |
st80327 | See ptrblck’s post for ways to debug the issue. But any reason you’re apparently both using an old nightly of 1.2.0 when 1.2.0 has now been released and also running against CUDA 9.0 when PyTorch is built against 9.2? One of those may well be the cause of the issue. |
st80328 | I’ve tried to rerun the code with CUDA_LAUNCH_BLOCKING=1 before python. Unfortunately, the code couldn’t get into the training process with this flag.
I’ve added “torch.autograd.set_detect_anomaly(True)” into the code, then the error became:
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 539, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1365, in linear
ret = torch.addmm(bias, input, weight.t())
Traceback (most recent call last):
File "/workspace/pyface/engine/trainer.py", line 287, in <module>
main(args)
File "/workspace/pyface/engine/trainer.py", line 69, in main
main_worker(args.gpu, ngpus_per_node, args)
File "/workspace/pyface/engine/trainer.py", line 137, in main_worker
do_train(train_loader, model, criterion, optimizer, epoch, args, architect, valid_loader)
File "/workspace/pyface/engine/trainer.py", line 270, in do_train
loss.backward()
File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function 'AddmmBackward' returned nan values in its 2th output.
It seems some nan in the grad? |
st80329 | Thanks for your advice, I’ve already changed the pytorch version to 1.2(stable), cuda10.0 cudnn7.6.
The error remains the same. |
st80330 | I found when I run the training with batchsize 1024 on 8 1080 ti gpu cards, it will report the error. When I run the code with batchsize 512 on 8 gpu cards, the error disappeared. when I run the code with smaller batchsize on a single gpu card, the error can’t be reproduced. So it might be hard to reproduce the error on cpu. |
st80331 | Yes, looks like a bug in the PyTorch AddmmBackward causing it to output a nan (or something in the autograd). Unless someone else can see something I’m missing you should probably submit an issue. It all seems to be PyTorch code.
In order to get a reliable reproduction you could try adding code in your training loop to store a reference to the current inputs (making sure to overwrite each batch and not keep old ones around). Then when the error happens you should have the inputs that caused it. You will also want to save the model weights as it will depend on these. Then hopefully you can get a set of inputs and weights that will reliably trigger it. |
st80332 | The real issue is that in topk an invariant condition failed. It would be very useful if you can:
- run with CUDA_LAUNCH_BLOCKING=1
- save every input to topk to a file (you can overwrite it in each iteration)
- upload the input that triggers this assert.
Thanks! |
st80333 | Thanks a lot for the reply. I tried to reproduce the error on another GPU machine using the same docker container setting, and found that no error occurs. The error may be highly related to a broken 1080 Ti GPU card, although the regular checks don’t show the error. |
st80334 | Hi,
I am creating a simple network:
class RandomNet(nn.Module):
    def __init__(self, vocab_size):
        super(RandomNet, self).__init__()
        self.vocab_size = vocab_size
        self.linear = nn.Linear(1, 2)

    def save_checkpoint(self, path):
        th.save(self.state_dict(), path)

    def load_checkpoint(self, path, cpu=False):
        if cpu:
            self.load_state_dict(th.load(path,
                map_location=lambda storage, loc: storage))
        else:
            self.load_state_dict(th.load(path))

    def forward(self, y):
        h = self.linear(y)
        return [h[0], h[1]]
When I wrapped this model inside a wrapper model, and just print the parameters:
self.net = RandomNet(src_vocab_size)
print(model.parameters())
AttributeError: 'RandomModel' object has no attribute 'parameters'
But I do have the linear parameter. How could I fix the error?
Am I making a mistake in using the linear layer? |
st80335 | Solved by dejanbatanjac in post #2
I am not sure where you used self.net but I create a working example for you:
import torch
import torch.nn as nn
class RandomModel(nn.Module):
def __init__(self):
super().__init__();
self.net = None
self.linear = None
def forward(self, x):
retu… |
st80336 | I am not sure where you used self.net, but I created a working example for you:
import torch
import torch.nn as nn

class RandomModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = None
        self.linear = None

    def forward(self, x):
        return x

class RandomNet(nn.Module):
    def __init__(self, vocab_size):
        super(RandomNet, self).__init__()
        self.vocab_size = vocab_size
        self.linear = nn.Linear(1, 2)

    def save_checkpoint(self, path):
        th.save(self.state_dict(), path)

    def load_checkpoint(self, path, cpu=False):
        if cpu:
            self.load_state_dict(th.load(path,
                map_location=lambda storage, loc: storage))
        else:
            self.load_state_dict(th.load(path))

    def forward(self, y):
        h = self.linear(y)
        return [h[0], h[1]]

src_vocab_size = 20
net = RandomNet(src_vocab_size)
print(list(net.parameters()))

model = RandomModel()
model.net = RandomNet(src_vocab_size)
print(list(model.parameters()))
The output for the RandomModel will be:
[Parameter containing:
tensor([[0.1979],
[0.7473]], requires_grad=True), Parameter containing:
tensor([-0.4504, 0.0435], requires_grad=True)] |
st80337 | Thanks! Yes, so the model.net params have to be passed to the optimizer; silly mistake, since my model inherited from object. |
st80338 | I am trying to use PyTorch to inspect the values of the gradients at each layer of a simple model. I am doing this using a backward hook at each layer. My function currently prints out the same value for input and output gradients, so clearly I am misunderstanding something. In my hook, why are the values for both input and output the same? My hook function is as follows:
def grad_hook(mod, inp, out):
    print("")
    print(mod)
    print("-" * 10 + ' Incoming Gradients ' + '-' * 10)
    print("")
    print('Incoming Grad value: {}'.format(inp[0].data))
    print("")
    print('Upstream Grad value: {}'.format(out[0].data))
And an example output for a linear layer:
Linear(in_features=3, out_features=5, bias=True)
---------- Gradient Values ----------
Incoming Grad value: tensor([-1.3997, -2.1604, 0.8113, -1.0236, 0.3797])
Upstream Grad value: tensor([[-1.3997, -2.1604, 0.8113, -1.0236, 0.3797]])
-------------------------------------- |
st80339 | Hi,
As specified in the doc, the backward hooks on nn.Modules are not working as expected at the moment.
You can use Tensor hooks to get the values you want, by adding register_hook() to the Tensors you want to inspect, either in the forward pass code or via an nn.Module forward_hook that gives you access to the input/output values of a given nn.Module.
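A small sketch combining the two: a forward hook grabs each module's output tensor and registers a tensor hook on it, which prints the gradient flowing back through that output:
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 2))

def forward_hook(module, inp, out):
    out.register_hook(lambda grad: print(f'{module}: grad norm {grad.norm():.4f}'))

for layer in model:
    layer.register_forward_hook(forward_hook)

out = model(torch.randn(4, 3))
out.sum().backward() |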
st80340 | Thank you for your reply. I saw the bug report on this GitHub issue (https://github.com/pytorch/pytorch/issues/16276). I’ll use register_hook on individual tensors for now. |
st80341 | Hi,
I am getting an error while importing torch, things used to work fine.
PyTorch was installed using conda:
Python 3.6.7 | packaged by conda-forge | (default, Feb 28 2019, 09:07:38)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dd/anaconda2/envs/dd/lib/python3.6/site-packages/torch/__init__.py", line 181, in <module>
from .serialization import save, load
File "/home/dd/anaconda2/envs/dd/lib/python3.6/site-packages/torch/serialization.py", line 11, in <module>
import tempfile
File "/home/dd/anaconda2/envs/dd/lib/python3.6/tempfile.py", line 45, in <module>
from random import Random as _Random
File "/home/dd/text_sum/AlexNet/video_/examples/random.py", line 5, in <module>
from torch.utils import data
File "/home/dd/anaconda2/envs/dd/lib/python3.6/site-packages/torch/utils/data/__init__.py", line 3, in <module>
from .dataset import Dataset, IterableDataset, TensorDataset, ConcatDataset, ChainDataset, Subset, random_split # noqa: F401
File "/home/dd/anaconda2/envs/dd/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 5, in <module>
from torch import randperm
ImportError: cannot import name 'randperm'
How to fix this issue? |
st80342 | Solved by albanD in post #6
Actually, looking at the stack trace I think you have a problem with local files:
the python tempfile library, when loading picks up your local random.py file as python’s original random library.
Can you import torch from another folder? If so, you will have to rename your random.py file I’m afrai… |
st80343 | Hi,
You might have multiple pytorch installs coexisting and causing issues.
I would make sure to uninstall all pytorch installs (via conda / pip etc) in your environment and then reinstall a brand new one. |
st80344 | I installed a brand new one on both mac & linux, in a new conda environment, still get the same error |
st80345 | Actually, looking at the stack trace I think you have a problem with local files:
the python tempfile library, when loading, picks up your local random.py file as python’s original random library.
Can you import torch from another folder? If so, you will have to rename your random.py file I’m afraid. |
st80346 | I am trying to make a function for calculating flops and want to discuss it. In many papers I can see flop numbers, but it is hard to see the details of how they are computed.
I have some questions:
Is it normal to include the flops of ReLU, batch normalization, …?
It seems common to consider the spatial dimension. For example, when calculating a Conv2d layer, I need to know the image size. What is the common width and height of the original image?
I am writing code as follows, but I wonder whether there is a more elegant way to compute it. (In my code, it is hard to track the dynamic spatial dimensions. For example, in ReLU, we don’t know the previous state.)
import torchvision
import re

def get_num_gen(gen):
    return sum(1 for x in gen)

def flops_layer(layer):
    """
    Calculate the number of flops for a given string description of a layer.
    We extract only reasonable numbers and use them.

    Args:
        layer (str) : example
            Linear (512 -> 1000)
            Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
            BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    """
    #print(layer)
    idx_type_end = layer.find('(')
    type_name = layer[:idx_type_end]
    params = re.findall('[^a-z](\d+)', layer)

    flops = 1
    if layer.find('Linear') >= 0:
        C1 = int(params[0])
        C2 = int(params[1])
        flops = C1*C2
    elif layer.find('Conv2d') >= 0:
        C1 = int(params[0])
        C2 = int(params[1])
        K1 = int(params[2])
        K2 = int(params[3])
        # image size
        H = 32
        W = 32
        flops = C1*C2*K1*K2*H*W

    # print(type_name, flops)
    return flops

def calculate_flops(gen):
    """
    Calculate the flops given a generator of a pytorch model.
    It only computes the flops of the forward pass.

    Example:
        >>> net = torchvision.models.resnet18()
        >>> calculate_flops(net.children())
    """
    flops = 0
    for child in gen:
        num_children = get_num_gen(child.children())

        # leaf node
        if num_children == 0:
            flops += flops_layer(str(child))
        else:
            flops += calculate_flops(child.children())
    return flops

net = torchvision.models.resnet18()
flops = calculate_flops(net.children())
print(flops / 10**9, 'G')
# 11.435429919 G |
st80347 | For resnets, the spatial dimension is 224 height and 224 width. For inceptionv3 it is 299x299.
Generally, since the majority of flops are in conv and linear layers, reporting nflops ~= X shows that you are approximating it, and that is probably sufficient for almost all purposes. |
st80348 | You are missing a critical issue in your implementation: the dimensions of the conv layers’ inputs change through the model depth.
So I guess the right way to compute the number of flops is by using a forward hook. |
st80349 | If someone still needs this, we wrote a small script to do that:
https://github.com/warmspringwinds/pytorch-segmentation-detection/blob/master/pytorch_segmentation_detection/utils/flops_benchmark.py
import torch

# ---- Public functions

def add_flops_counting_methods(net_main_module):
    """Adds flops counting functions to an existing model. After that
    the flops count should be activated and the model should be run on an input
    image.

    Example:

    fcn = add_flops_counting_methods(fcn)
    fcn = fcn.cuda().train()
    fcn.start_flops_count()

    _ = fcn(batch)

    fcn.compute_average_flops_cost() / 1e9 / 2 # Result in GFLOPs per image in batch
    """
(excerpt; see the link above for the full file)
Example of usage:
https://github.com/warmspringwinds/pytorch-segmentation-detection/blob/d5df5e066fe9c6078d38b26527d93436bf869b1c/pytorch_segmentation_detection/recipes/pascal_voc/segmentation/flops_counter.ipynb |
st80350 | I just wrote a simple script to calculate the flops:
https://zhuanlan.zhihu.com/p/33992733
You can search for the function named “print_model_parm_flops” |
st80351 | model.apply(fn) combined with module.register_forward_hook(hook) allows for easy tracking of layers, but only works for nn.Module layers (conv, batchnorm, etc). This is effective for the majority of cases, but does not allow for tracking of functional calls, e.g. F.interpolate(...). Is there any way to detect functional calls in a forward pass? |
st80352 | Coming to this rather late, but in case people are interested:
It is possible to directly measure the floating point operation count of models using CPU performance monitoring units, as an alternative to the approaches which track the FLOPS of each operation. Using the python-papi module this is quite easy to do and the results match the operation counting as implemented by the thop module: see http://www.bnikolic.co.uk/blog/python/flops/2019/10/01/pytorch-count-flops.html for a comparison |
st80353 | I want to plot a statistical graph that shows the number of occurrences of each value in a torch tensor.
Thank you
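In case it helps, a minimal sketch using torch.unique and matplotlib (assuming a tensor of discrete values):
import torch
import matplotlib.pyplot as plt

t = torch.randint(0, 10, (1000,))
values, counts = torch.unique(t, return_counts=True)

plt.bar(values.numpy(), counts.numpy())
plt.xlabel('value')
plt.ylabel('occurrences')
plt.show() |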
st80354 | Hi everyone. I’m trying to implement a video classification scheme, everything seems fine so far except one thing: exploding gradients in validation loop. I know it sounds strange because there’s not supposed to be gradients in the validation process, but that’s also what I don’t get. I’ve made sure to turn on eval() mode, and use torch.no_grad(), and somehow exploding gradients (with NaN outputs) still happens ONLY when there is a validation loop. I’ve tried commented out the validation code and the training code ran smoothly, so I figured something must be wrong with the validation code but I can’t put my hands on it. I’d really appreciate some help to point me in the right direction.
My code:
for epoch in range(params.getint('num_epochs')):
    print('Starting epoch %i:' % (epoch + 1))
    print('*********Training*********')
    training_loss = 0
    training_losses = []
    training_progress = tqdm(enumerate(train_loader))
    artnet.train()
    for batch_index, (frames, label) in training_progress:
        training_progress.set_description('Batch no. %i: ' % batch_index)
        frames = frames.to(device)
        label = label.to(device)
        optimizer.zero_grad()
        output = artnet.forward(frames)
        loss = criterion(output, label)
        training_loss += loss.item()
        loss.backward()
        optimizer.step()
    else:
        avg_loss = training_loss / len(train_loader)
        training_losses.append(avg_loss)
        print(f'Training loss: {avg_loss}')

    print('*********Validating*********')
    validating_loss = 0
    validating_losses = []
    validating_progress = tqdm(enumerate(validation_loader))
    artnet.eval()
    with torch.no_grad():
        for batch_index, (frames, label) in validating_progress:
            validating_progress.set_description('Batch no. %i: ' % batch_index)
            frames = frames.to(device)
            label = label.to(device)
            output = artnet.forward(frames)
            loss = criterion(output, label)
            validating_loss += loss.item()
        else:
            avg_loss = validating_loss / len(validation_loader)
            validating_losses.append(avg_loss)
            print(f'Validating loss: {avg_loss}')

    print('=============================================')
    print('Epoch %i complete' % (epoch + 1))
    if (epoch + 1) % params.getint('ckpt') == 0:
        print('Saving checkpoint...')
        torch.save(artnet.state_dict(), os.path.join(params['ckpt_path'], 'arnet_%i' % (epoch + 1)))
    # Update LR
    scheduler.step()

print('Training complete, saving final model....')
torch.save(artnet.state_dict(), os.path.join(params['ckpt_path'], 'arnet_final'))
return training_losses, validating_losses |
st80355 | Did you measure the individual gradient magnitudes for the loss w.r.t. the network parameters? I don’t see in the code how you determined that the gradients are exploding in the validation phase. From your description it seems the training is fine, i.e. the loss is computed properly for the training set and the gradients backpropagate properly, i.e. the network is learning with the loss reducing. But with validation your network generates nan output. By output, do you mean the loss that you are printing? Or the actual output of your network, which for example can be a softmax output? |
st80356 | Apparently, I was mistaken, the problem is something else entirely, although I’m not sure what. There is a square operation in the network that causes the output to go to infinity. The code looks like this:
def forward(self, x):
    x_rel = self.conv1(x)
    x_rel = self.bn1(x_rel)
    x_rel = x_rel ** 2
    x_rel = self.cc(x_rel)
    x_rel = self.bn2(x_rel)
    x_rel = F.relu(x_rel)

    x_app = self.conv2(x)
    x_app = self.bn3(x_app)
    x_app = F.relu(x_app)

    out = torch.cat((x_rel, x_app), dim=1)
    out = self.reduction(out)
    return out
If I comment the square operation out, no problem would occur, but since I’m trying to implement the network proposed in a paper, that would make my implementation unfaithful. If I leave the square operation as is, and comment the validation loop out, nothing will happen either. But if I leave both the square operation and the validation loop, the output of the validation loop will go to infinity, and every training loop after that will also produce infinity output. |
st80357 | Validation loop should not be effecting the model at all ideally.
Because you only do feed forward in that case. Make sure you are using model.eval() or torch.no_grad() contextmanager when evaluating. |
st80358 | You’re right, I made a mistake, it doesn’t affect the model’s weight as I thought. However, it does affect the training loop for reasons I’ve yet to identify. I do use model.eval() and torch.no_grad(), I do know the line of code that’s causing the problem, but I don’t know the reason why. Why exactly does a square op cause the output of the model to go to infinity after a few epochs, and why does that only happen if there exists a validation loop? |
st80359 | Batch norm uses some internal buffers to keep track of running mean and variance per channel. I am not sure why it could be happening. One thing might be to instead of using batch norm layer before and after just use only conv layer. Just a guess |
st80360 | I am a beginner in Deep Learning and try sample project in keras and pytorch. I am noticed a little difference in both framework when split data.
Keras : train and test
pytorch : train, test, and valid
could anybody explain why there is a valid data in pytorch meanwhile there is only test data in keras?
Thank you |
st80361 | Solved by JamesTrick in post #2
Hi! When using Keras and calling .fit() Keras there’s variable called validation_split which splits your training data to have a validation set. The default is 0.0, meaning the data won’t be split and you won’t have a validation set (unless you specify validation_data) - not a good idea.
Within PyT… |
st80362 | Hi! When using Keras and calling .fit(), there’s a variable called validation_split which splits your training data to create a validation set. The default is 0.0, meaning the data won’t be split and you won’t have a validation set (unless you specify validation_data) - not a good idea.
Within PyTorch, there’s no fit-style function, so you need to write your own. I sense this is what is giving you the impression that there is no val in Keras, as the train/test/val split is a bit more explicit in PyTorch’s case.
Does that help? It’s there in Keras, just a bit more hidden.
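For completeness, a common way to carve a validation set out of a training set in PyTorch is torch.utils.data.random_split; a minimal sketch with a dummy dataset:
import torch
from torch.utils.data import random_split, DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
n_val = int(0.2 * len(dataset))
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32) |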
st80363 | I am building models for chronic disease prediction (binary classification) using a sequence of diagnoses, procedure and medication codes collected over a two-year period, 8 years before the confirmation of the disease.
Ex: training: 50k observations, 1800 codes/features
sequence length percentiles: 25th = 12 codes, 50th = 30 codes, 75th = 71 codes
Model 1: FFN
computed matrix where columns are features and values are number of occurrence of a code. Built vanilla FFN.
ran extensive grid search. best arch has 4 hidden layers with 127 neurons in each, ReLu act
and dropout of .6 between all of them. #param: 279,785, f1: 27.6% train time/epoch 2.67s on V100/32GB
Model 2: RNN
sorted obs (and labels) based on sequence lengths from smallest to largest
(pack_sequence) => embed => pack_padded_sequence (bat_f=T) => LSTM => FFN
grid search. best arch: embed dim: 50, lstm_hdim: 255, fc layer widths: 63, 31, dropout .6, #param: 422,187, f1: 26.2%, train time/epoch 55s on same GPU.
Questions
the FFN has ~half the number of variables to be trained. Yet it is >20x faster. I realize that may mean nothing in light of algorithmic complexities. But, does that make sense? In both cases the entire instance is on the GPU (CPU is not used for anything). For the FFN I am using the TensorDataset class. For LSTM, it is DataSet with collate_fn for DataLoader suitably modified. Any ideas for speeding up the training?
there wasn’t an expectation that the sequence of codes 8 years out would be important as much as the code itself. But I did think the results would be closer. I tried with 2 LSTMs, bidirectional-LSTM, etc. etc. Is there any other arch change worth looking at? Attention? |
st80364 | Hi PyTorch community! I was using BCEWithLogitsLoss to train a multi-label network and got a negative loss as shown below. As mentioned in the class documentation, this loss function combines a sigmoid and BCELoss, but apparently not, as the example shows. I also checked its definition in pytorch/torch/nn/functional.py and I was not able to find the sigmoid operation in this loss. Maybe I used it wrong?
I’m using version 1.0.1
criterion = nn.BCEWithLogitsLoss()
a = torch.tensor([[1., 1., 1., 0., 0.]])
b = torch.tensor([[0., 0.0011122, 8.9638, 0., 0.]])
c = nn.Sigmoid()(b)
print(criterion(a,b))
print(criterion(a,c))
"tensor(-0.7278)"
"tensor(0.6652)"
Should line 598 of pytorch/torch/nn/modules/loss.py be changed as below?
return F.binary_cross_entropy_with_logits(nn.Sigmoid()(input), target, |
st80365 | Solved by KFrank in post #3
Hello Rui An!
It appears that you have switched the order of your inputs to
BCEWithLogitsLoss.
BCEWithLogitsLoss (like
binary_cross_entropy_with_logits()) expects to be
called with predictions that are logits (-infinity to infinity) and
targets that are probabilities (0 to 1), in that order.
… |
st80366 | Hello Rui An!
ruian1:
I was using BCEWithLogitsLoss to train a multi-label network and getting negative loss as shown below.
…
criterion = nn.BCEWithLogitsLoss()
a = torch.tensor([[1., 1., 1., 0., 0.]])
b = torch.tensor([[0., 0.0011122, 8.9638, 0., 0.]])
c = nn.Sigmoid()(b)
print(criterion(a,b))
print(criterion(a,c))
"tensor(-0.7278)"
"tensor(0.6652)"
It appears that you have switched the order of your inputs to
BCEWithLogitsLoss.
BCEWithLogitsLoss (like
binary_cross_entropy_with_logits()) expects to be
called with predictions that are logits (-infinity to infinity) and
targets that are probabilities (0 to 1), in that order.
Your a are legitimate probabilities, so they are your targets, and
your b are legitimate logits, so they are your predictions. Your
call should therefore be:
print(criterion(b,a))
(In your version of the call your second argument, b, is out of
range (not 0 to 1), so the call returns the invalid negative result.
c, however, being the result of Sigmoid, is in (0, 1), so the
result of the call is indeed positive.)
Should line 598 of pytorch/torch/nn/moduels/loss.py be changed as below ?
return F.binary_cross_entropy_with_logits(nn.Sigmoid()(input), target,
No. binary_cross_entropy_with_logits() has built
into it (implicitly, in effect) the sigmoid function (just like
BCEWithLogitsLoss), so you want to pass logits as the first
argument (input), not probabilities (nn.Sigmoid()(input)).
You shouldn’t expect to see an explicit call to Sigmoid in the
source code for BCEWithLogitsLoss. In effect Sigmoid is
applied to input, but it’s implicit (using the “log-sum-exp trick”).
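(For the curious, a minimal sketch of that trick, not the literal PyTorch source: the loss can be computed as max(x, 0) - x*z + log(1 + exp(-|x|)), which never exponentiates a large positive number:)
import torch

def bce_with_logits(x, z):
    # numerically stable: max(x, 0) - x*z + log(1 + exp(-|x|))
    return (x.clamp(min=0) - x * z + torch.log1p(torch.exp(-x.abs()))).mean()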
Good luck.
K. Frank |
st80367 | Hello KFrank, thanks a lot for the very detailed explanation! Could you also point me to the block where the “log-sum-exp” trick is applied? I traced it back to functional.py but still can’t figure out how it is done… |