st30068
|
My apologies. I am also using the libtorch libraries and cmake was looking for them in the wrong directory.
I am sorry for the inconvenience. Thank you!
|
st30069
|
For the record, this specific v? issue with cuDNN 8 was fixed by Initial support for building on Ampere GPU, CUDA 11, cuDNN 8 by zasdfgbnm · Pull Request #39277 · pytorch/pytorch · GitHub
|
st30070
|
I am working on multiple instance learning, and the first two steps, before attempting to cluster images from a group, are training a network (in this case a pre-trained DenseNet, of which I unfreeze the last few layers) on the images and then saving the encoded representation of these images. The issue is that, despite firing at the right place, the forward hook I register does not actually return the value the way I want it to.
Code:
def encoded_return(self, d_in, d_out):
    print(d_out.size())
    return d_out

def make_encoded_representations(studydir, network, verbose=False):
    # find all series in studydir
    DL = df.getDirectoryList(studydir)
    # Take all paths, and check if they contain series (they should)
    sis = 0
    for seriesdir in DL:
        # If it is an image series, files should be readable as a series
        sitkreader = sitk.ImageSeriesReader()
        # Check if there is a DICOM series in the dicom_directory
        series_IDs = sitkreader.GetGDCMSeriesIDs(seriesdir)
        if verbose:
            print("Loading dicom folder " + seriesdir)
            print("Detected " + str(len(series_IDs)) + " distinct series. Loading files ...")
        for idx, ID in enumerate(series_IDs):
            try:
                # Get all file names
                series_file_names = sitkreader.GetGDCMSeriesFileNames(seriesdir, series_IDs[idx])
                if verbose:
                    print(str(len(series_file_names)) + " files in series. Attempting cleanup if necessary ...")
                file_sizes = []
                # Try cleaning out garbage from series
                for file in series_file_names:
                    filereader = sitk.ImageFileReader()
                    filereader.SetFileName(file)
                    tmp = filereader.Execute()
                    size = tmp.GetSize()
                    origin = tmp.GetOrigin()
                    spacing = tmp.GetSpacing()
                    file_sizes.append((size[0], size[1]))
                size_hist = Counter(file_sizes)
                wanted_size = max(size_hist, key=size_hist.get)
                series_file_names = [name for idx, name in enumerate(series_file_names) if file_sizes[idx] == wanted_size]
                # make representation
                tensor_representation = representation(series_file_names)
                # load network
                net = torch.load(network)
                # set to eval mode
                net.eval()
                # register forward hook to return an encoded representation and not the final classification result
                net.module.features.denseblock4.denselayer24.register_forward_hook(encoded_return)
                # let the network evaluate and grab encoded image
                encoded_image = net(tensor_representation)
                # save encoded image and class locally
                torch.save((encoded_image, int(-1)), seriesdir+'/'+str(int(sis)).zfill(3)+'_ER.pth')
                sis += 1
            except:
                if verbose:
                    print("Cannot make encoded representation for series "+str(ID)+" in dir "+str(seriesdir)+".")
                raise
    return None
Now, the print statement in the hook tells me there is a tensor of size (1, 48, 7, 7) to be found, which is the one I want, yet the eventually saved tensor is of size (1, 12), which comes from the final classification layer I added as part of the transfer learning in step 1. I thought that the return statement in the hook would terminate the entire forward pass of the network and yield the desired tensor. How would I grab that tensor correctly?
|
st30071
|
Solved by ptrblck in post #2
No, the forward hook won’t terminate the execution and you could store the intermediate output in e.g. a list or dict as seen here.
|
st30072
|
No, the forward hook won’t terminate the execution and you could store the intermediate output in e.g. a list or dict as seen here.
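A minimal sketch of that pattern (torchvision's densenet121 stands in for the fine-tuned model from the question; the layer name is just an example):

import torch
import torchvision

activations = {}

def save_activation(name):
    # returns a hook that stashes the module's output under the given key
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

model = torchvision.models.densenet121().eval()
model.features.denseblock4.denselayer16.register_forward_hook(save_activation('encoded'))

x = torch.randn(1, 3, 224, 224)
_ = model(x)                      # the full forward pass still runs to completion
encoded = activations['encoded']  # the intermediate activation captured by the hook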
|
st30073
|
Ah, cheers! That does the trick.
(And I can finally delete the hacky garbage I put into place as a workaround hehe)
|
st30074
|
Hi,
We want to profile the execution of scripted/frozen models.
I know that the profiler record_function op can be scripted, but I want to avoid having 2 versions of the model (one for profiling, and one for deployment).
Is there a suggested way to do this?
|
st30075
|
In general, I need a shared library file (.so) generated which I can load using the torch.ops.load_library() function from a Python script.
Working on Windows 10, a 64-bit operating system.
I tried 2 approaches - neither generates this file.
1> CMakeLists.txt - The build is successful and I get a .dll and a .lib file, which I tried loading using Python ctypes - cdll.LoadLibrary() - but no functions are present (cross-checked using dumpbin via VS Code).
cmake_minimum_required(VERSION 3.8)
find_package(Torch REQUIRED)
project(reductionResNet VERSION 1.0 DESCRIPTION "Deep_Learning")

# add_executable(customOperator "customOperator.cpp" "customOperator.h")
# Define our library target
add_library(customOperator SHARED customOperator.cpp)
# Enable C++14
target_compile_features(customOperator PRIVATE cxx_std_14)
# Link against LibTorch
target_link_libraries(customOperator "${TORCH_LIBRARIES}")

if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET customOperator
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:customOperator>)
endif (MSVC)
2> Python setuptools (setup.py) - Throws the error → LINK : error LNK2001: unresolved external symbol PyInit_reduction.
from setuptools import setup, Extension
from torch.utils import cpp_extension

setup(name='customOperator',
      ext_modules=[cpp_extension.CppExtension(
          'customOperator', ['customOperator.cpp'],
          include_dirs=['C:/Libtorch/libtorch-win-shared-with-deps-1.8.1+cpu/libtorch'])],
      cmdclass={'build_ext': cpp_extension.BuildExtension.with_options(no_python_abi_suffix=True)})
These are the tutorials I have been following: pytorchDocs & gitTutorial.
This is the tested .cpp file that holds the two functions - reduction and repeatInterleave:
#include "customOperator.h"
#include <torch/torch.h>

using namespace std;

torch::Tensor repeatInterleave(
    torch::Tensor input,
    int val
) {
    auto output_ = torch::repeat_interleave(input, val);
    return output_;
}

torch::Tensor reduction(
    torch::Tensor layerOne,
    torch::Tensor layerTwo,
    torch::Tensor layerThree,
    torch::Tensor layerFour) {
    auto layerOne_ = repeatInterleave(layerOne, 8);
    auto layerTwo_ = repeatInterleave(layerTwo, 4);
    auto layerThree_ = repeatInterleave(layerThree, 2);
    int len = layerFour.sizes()[0];
    // average the four expanded layers element by element
    std::vector<torch::Tensor> arr;
    for (int i = 0; i < len; i += 1) {
        arr.push_back((layerOne_[i] + layerTwo_[i] + layerThree_[i] + layerFour[i]) / 4);
    }
    auto output = torch::stack(arr);
    return output;
}

int main()
{
    cout << "Hello CMake." << endl;
    return 0;
}
|
st30076
|
There are several methods to achieve this, but I think you need to decide on one and then add the required interface declarations. (I think it’s a DLL on Windows, not a .so, btw.)
If you want to do C++ and Python, the custom operator approach you mention is a nice way.
For this, too, you would want to add the registration part; this can look like the one in the custom op tutorial:
TORCH_LIBRARY(my_ops, m) {
m.def("warp_perspective", warp_perspective);
}
This can be loaded with torch.ops.load_library.
If you want to call from Python: write a torch extension module.
In this case you need a declaration like this one from the tutorial:
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("forward", &lltm_forward, "LLTM forward");
m.def("backward", &lltm_backward, "LLTM backward");
}
This seems to be missing in method 2, and the error message is a cryptic way of telling you “this isn’t a complete Python module”.
As an aside:
If you want to call from C++ only: you can skip the PYBIND11_MODULE but likely need to declare your symbols as exported. This is particularly important on Windows (__declspec(dllexport) in the exporting library, __declspec(dllimport) in the programs using it). This seems to be the problem of no symbols being found in method 1. You don’t need a main for libraries (I think). I’m labelling this “C++ only” because I don’t know how well passing tensors via ctypes would work. You could try to go through DLPack if you wanted. I do use DLPack to get tensors into third-party C code (e.g. Apache TVM-generated kernels) in my courses, but even there I do the conversion in a custom PyTorch C++ extension module.
Best regards
Thomas
|
st30077
|
When giving the hidden tensor to the forward method of the LSTM, we give an initial vector for each batch, i.e. a tensor of size (num_layers, num_batches, len_hidden). Is there a way for the initial hidden state of a batch to be the last hidden state of the former batch (within one forward run)?
I wrote a more detailed explanation in the comments
|
st30078
|
The output of the LSTM is o, (h, c), where o is the output of size (sequence, batch, hidden), and in the tuple (h, c), h is the last hidden state. You can take this h and pass it as the initial hidden state to the next batch.
|
st30079
|
Hi Prerna, thanks for answering!
But that wasn’t what I was asking… What I don’t get is what happens in a single run.
The way I understand it is this:
input is a tensor of size (num_cells, num_batches, len_inputs), i.e. an input vector for every cell of every batch.
hidden is a tensor of size (num_layers, num_batches, len_hidden), i.e. every batch in the run gets an initial hidden for every layer.
Is that correct?
I don’t want to give it the initial hidden for every batch; I would like it to use the hidden from the previous batch.
I’d like it to do this without the loop:
len_inputs = ...
num_layers = ...
len_hidden = ...
h = torch.zeros(num_layers, 1, len_hidden)
c = copy.deepcopy(h)
for i in range(len(subsequences)):
    optimizer.zero_grad()
    out, h = model(subsequences[i], h)
    loss = nn.WhateverLoss(out, subsequences[i])
    loss.backward()
    optimizer.step()
I don’t understand if that’s what happens under the hood when I tell it:
h = (torch.zeros(num_layers, num_subsequences, len_hidden))
c = copy.deepcopy(h)
out, h = model(subsequences)
And if not, how do I make it so?
Hope I made it clear; please tell me if not. I’ve been having trouble with this for a while…
Thanks!
|
st30080
|
Hi Danny,
input is a tensor of size (num_cells, num_batches, len_inputs) i.e. an input vector for every cell of every batch.
What do you mean by ‘cell’ here? The first dimension of input is sequence length, not sure what you mean by number of cells.
|
st30081
|
Yes, I mean the length of the sequence (subsequence, to be more accurate), i.e. the number of time steps :) The way I visualize it is with cells, where every cell gets the data point as input plus the former cell’s hidden output.
I just want to be able to break a big sequence into little ones so that I can train it in pieces, but still have the memory (hidden state) carried over from the whole sequence.
|
st30082
|
Hi Danny,
So say you have an input x of size (120, 1, 20) - (sequence, batch, input). Now you break it up into x1 with shape (60,1,20) and x2 with shape (60,1,20). Then to retain the hidden state from the end of x1 to the beginning of x2, you would do something like this-
model = nn.LSTM(input_size = 20, hidden_size = h_size)
out1, (h1,c1) = model(x1)
out2, (h2,c2) = model(x2, (h1,c1))
Does this answer your question?
|
st30083
|
Nope:)
I would like to train it in one line:
I want the input to be of shape (60, 2, 20), which is a tensor of the connected x1 and x2 each being a batch.
And I want the run to be
out = model(x)
This will not produce an error, but I need to give an initial (h, c) for each of the 2 batches. I want to give it only one (h, c) for the first batch, and have it use the output of the first batch as the initial hidden state for the second batch. I think it will save computation time.
Am I clearer this time?
Thanks!
|
st30084
|
Hi,
did you manage to solve this problem then?
I am also trying to have a stateful LSTM between different sequences while passing the training data in one go.
|
st30085
|
How do I implement a skip connection for this code?
class SkipEdge(Edge):
    def __init__(self):
        super().__init__()
        self.f =
|
st30086
|
I am reproducing the paper "Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics". The loss function (for two tasks) is defined as
L(W, sigma1, sigma2) = 1/(2*sigma1^2) * L1(W) + 1/(2*sigma2^2) * L2(W) + log(sigma1*sigma2)
This means that W and σ are the learned parameters of the network: W are the weights of the network, while the σ are used to weight each task loss and also to regularize these task loss weights.
It is easy to implement for L1 and L2 (assume they are both L1 losses):
loss1 = nn.L1Loss()
loss2 = nn.L1Loss()
input1 = torch.randn(3, 5, requires_grad=True)
input2 = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss_total = 1/(2*sigma1**2) * loss1(input1, target) + 1/(2*sigma2**2) * loss2(input2, target) + torch.log(sigma1*sigma2)
loss_total.backward()
However, the weights σ are also learned. How can I make σ learnable in the combined loss?
|
st30087
|
Solved by Tony-Y in post #43
Please see my experiment using a linear model below.
[summary fig]
MultiTaskLoss.ipynb · GitHub
In this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug of my original code as the following:
total_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta
total_l…
|
st30088
|
The following code can learn the loss weights sigma. nn.Parameter is used for adding a tensor to the parameter list of the module.
import torch
import torch.nn as nn
import torch.optim as optim

class MultiTaskLoss(nn.Module):
    def __init__(self, tasks):
        super(MultiTaskLoss, self).__init__()
        self.tasks = nn.ModuleList(tasks)
        self.sigma = nn.Parameter(torch.ones(len(tasks)))
        self.mse = nn.MSELoss()

    def forward(self, x, targets):
        l = [self.mse(f(x), y) for y, f in zip(targets, self.tasks)]
        l = 0.5 * torch.Tensor(l) / self.sigma**2
        l = l.sum() + torch.log(self.sigma.prod())
        return l

f1 = nn.Linear(5, 1, bias=False)
f2 = nn.Linear(5, 1, bias=False)

x = torch.randn(3, 5)
y1 = torch.randn(3)
y2 = torch.randn(3)

mtl = MultiTaskLoss([f1, f2])
print(list(mtl.parameters()))

optimizer = optim.SGD(mtl.parameters(), lr=0.1)
optimizer.zero_grad()
mtl(x, [y1, y2]).backward()
optimizer.step()
output:
[Parameter containing:
tensor([1., 1.], requires_grad=True), Parameter containing:
tensor([[-0.4190, 0.1006, 0.1092, -0.1402, 0.1945]], requires_grad=True), Parameter containing:
tensor([[ 0.3770, -0.4309, -0.1455, 0.1247, 0.0380]], requires_grad=True)]
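Note from later in this thread (see the solution quoted in post #43): torch.Tensor(l) builds a new leaf tensor and therefore detaches the task losses from the autograd graph, so the task networks receive no gradient from this line. The reported fix is to keep the losses attached with torch.stack:
l = 0.5 * torch.stack(l) / self.sigma**2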
|
st30089
|
Hi Tony,
Thank you for the multi-task loss code for 2 tasks. If I want it to work for 3 or more tasks, is the formula:
1/(3*sigma1^2)*loss1(input1, target) + 1/(3*sigma2^2)*loss2(input2, target) + 1/(3*sigma3^2)*loss3(input3, target) + log(sigma1*sigma2*sigma3)
Is that right?
|
st30090
|
No.
arXiv.org: Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and...
Please see equation (5) in the paper. The loss is the negated sum of three different terms of the form given in eq. (5).
|
st30091
|
Thanks for the reply.
I tried using your code with the multi-task loss for 2 tasks, but my model only changes the sigma parameters while the loss keeps its value. I don’t know what is wrong with my code.
I don’t write a separate loss class; I write the loss function like the code below.
class ModelTTestMultitask(nn.Module):
    def __init__(self, cf):
        self.sigma = nn.Parameter(torch.ones(2))

    def forward(self):
        ""
        do_something()

    def loss(self, batch):
        loss_1 = self.output_layer_task_1.loss(batch.label_task_1)
        loss_2 = self.output_layer_task_2.loss(batch.label_task_2)
        loss_combine = 0.5 * torch.Tensor([loss_1, loss_2]) / self.sigma ** 2
        loss_combine = loss_combine.sum() + torch.log(self.sigma.prod())
        return loss_combine
If I don’t use the uncertainty-weighted loss and instead use the loss function below, everything works normally, but the losses of the tasks are on different scales, and the final model is not as good as I hoped.
def loss(self, batch):
    loss_1 = self.output_layer_task_1.loss(batch.label_task_1)
    loss_2 = self.output_layer_task_2.loss(batch.label_task_2)
    loss_combine = 0.5 * (loss_1 + loss_2)
    return loss_combine
|
st30092
|
My task is not the same as in the paper; I am trying multi-task learning for an NLP project (NER and POS). But the loss of each task does not decrease; only the value of sigma changes. Sigma grows very large, e.g. [121, -12] (I don’t remember exactly), and keeps changing while the loss doesn’t change at all from iteration to iteration.
|
st30093
|
I haven’t tried increasing the initial sigma; I only tried initializing with sigma = [1, 1] and sigma = [0.5, 0.5]. I will try and report back later, thank you.
Do you have any experience with this problem, or any explanation for my error?
|
st30094
|
I have no experience with it, but I think the uncertainty is very large at the initial stage.
|
st30095
|
In order to avoid numerical instability, we should use a change of variables:
eta = log(sigma)
The new variable eta can be defined on (-∞, +∞).
[image: multitask_loss.png - the reparameterized loss]
Sample code:
import torch
import torch.nn as nn
import torch.optim as optim

class MultiTaskLoss(nn.Module):
    def __init__(self, model, loss_fn, eta):
        super(MultiTaskLoss, self).__init__()
        self.model = model
        self.loss_fn = loss_fn
        self.eta = nn.Parameter(torch.Tensor(eta))

    def forward(self, input, targets):
        outputs = self.model(input)
        loss = [l(o, y).sum() for l, o, y in zip(self.loss_fn, outputs, targets)]
        total_loss = torch.Tensor(loss) * torch.exp(-self.eta) + self.eta
        return loss, total_loss.sum()  # omit 1/2

class MultiTaskModel(nn.Module):
    def __init__(self):
        super(MultiTaskModel, self).__init__()
        self.f1 = nn.Linear(5, 1, bias=False)
        self.f2 = nn.Linear(5, 1, bias=False)

    def forward(self, input):
        outputs = [self.f1(input).squeeze(), self.f2(input).squeeze()]
        return outputs

mtl = MultiTaskLoss(model=MultiTaskModel(),
                    loss_fn=[nn.MSELoss(), nn.MSELoss()],
                    eta=[2.0, 1.0])
print(list(mtl.parameters()))

x = torch.randn(3, 5)
y1 = torch.randn(3)
y2 = torch.randn(3)

optimizer = optim.SGD(mtl.parameters(), lr=0.1)
optimizer.zero_grad()
loss, total_loss = mtl(x, [y1, y2])
print(loss, total_loss)
total_loss.backward()
optimizer.step()
Output:
[Parameter containing:
tensor([2., 1.], requires_grad=True), Parameter containing:
tensor([[-0.0387, 0.3287, 0.2549, 0.3336, 0.0195]], requires_grad=True), Parameter containing:
tensor([[0.2908, 0.2801, 0.1108, 0.4235, 0.0308]], requires_grad=True)]
[tensor(3.3697, grad_fn=<SumBackward0>), tensor(2.1123, grad_fn=<SumBackward0>)] tensor(4.2331, grad_fn=<SumBackward0>)
|
st30096
|
I already found a Keras example for this paper with code similar to yours, but I don’t know why my eta just keeps increasing.
Here are some values per epoch:
epoch 1: list loss: [211.6204, 283.3055, 276.5063] and eta: [5.0511, 5.0714, 5.0698]
epoch 2: list loss: [210.646, 281.631, 275.2699] and eta: [5.2132, 5.2701, 5.2673]
epoch 3: list loss: [211.3304, 282.8942, 276.3101] and eta: [5.3005, 5.4210, 5.4148]
epoch 4: list loss: [211.3207, 282.6045, 276.2361] and eta: [5.3320, 5.5211, 5.5101]
If I’m not mistaken, loss_1 = torch.Tensor(loss) * torch.exp(-self.eta) = [3.3475, 3.4172, 3.4132] and loss_2 = self.eta = [5.3320, 5.5211, 5.5101]. So loss_2 > loss_1, and if eta keeps increasing, loss_2 remains greater than loss_1. So why does my eta still increase?
My code for computing the loss:
self.eta = nn.Parameter(torch.Tensor(cf['eta']))
loss_combine = torch.cuda.FloatTensor([loss_1.sum(), loss_2.sum(), loss_3.sum()]) * torch.exp(-self.eta) + self.eta
# print("loss combine: ", loss_combine)
loss_combine = loss_combine.sum()
return loss_combine
And one question about the solution "Approx. optimal weights" mentioned in Table 5 of the paper: does it use a weighted sum of losses, i.e. do I use grid search to choose the weight for each loss and then sum, like 1/2 * loss_1 + 1/3 * loss_2 + 1/5 * loss_3? Is that right?
And one question about the cause: why do losses on different scales, summed uniformly into the total loss, make one task converge while the other two do not? In my case, with a simple uniform sum of losses, after 200 epochs loss_1 is approximately 0.5, loss_2 approximately 1.2, and loss_3 greater than 7. I tried searching for papers with various keywords but found nothing.
Thank you
|
st30097
|
I cannot answer soon. But I think optimal weights are not used in a recent paper on natural language understanding:
arXiv.org: Improving Multi-Task Deep Neural Networks via Knowledge Distillation for...
Please see Algorithm 1 in this paper.
|
st30098
|
I have understood that the total loss was decreasing from the following calculation:
>>> def total_loss(loss, eta):
... loss = torch.Tensor(loss)
... eta = torch.Tensor(eta)
... return (loss * torch.exp(-eta) + eta).sum()
...
>>> total_loss([ 211.6204, 283.3055, 276.5063], [5.0511, 5.0714, 5.0698])
tensor(20.0620)
>>> total_loss([210.646, 281.631, 275.2699], [5.2132, 5.2701, 5.2673])
tensor(19.7656)
>>> total_loss([ 211.3304, 282.8942, 276.3101], [5.3005, 5.4210, 5.4148])
tensor(19.6715)
>>> total_loss([ 211.3207, 282.6045, 276.2361], [5.3320, 5.5211, 5.5101])
tensor(19.6332)
I think that the uncertainties increase in the beginning but begin to decrease after some epochs, as shown in Figure 7 of the paper. You might need to optimize the learning rate.
Because sigma^2 must be near the loss, eta can be estimated using the initial losses as
>>> torch.log(torch.Tensor([ 211.6204, 283.3055, 276.5063]))
tensor([5.3548, 5.6465, 5.6222])
I think the maximum of eta is somewhat greater than the estimated value.
Figure 2 of the paper shows that the performance depends on the weights. The total loss is given by Equation (1), where the sum of the weights is 1.
Because I am not an expert in multi-task learning, you should open a new topic about multi-task learning on this site.
|
st30099
|
Tr_ng_Trang:
loss_combine = torch.cuda.FloatTensor([loss_1.sum(), loss_2.sum(), loss_3.sum()]) * torch.exp(-self.eta)
You might have to use mean(), i.e. not sum().
|
st30100
|
The paper proposes a total loss composed of MSE and cross-entropy losses; other losses are outside the scope of its assumptions. Here is an implementation for Equation (10), where y1 is a continuous output and y2 is a discrete output:
import torch
import torch.nn as nn
import torch.optim as optim

class MultiTaskLoss(nn.Module):
    def __init__(self, model, loss_fn, eta):
        super(MultiTaskLoss, self).__init__()
        self.model = model
        self.loss_fn = loss_fn
        self.eta = nn.Parameter(torch.Tensor(eta))

    def forward(self, input, targets):
        outputs = self.model(input)
        loss = [l(o, y) for l, o, y in zip(self.loss_fn, outputs, targets)]
        total_loss = torch.Tensor(loss) * torch.exp(-self.eta) + self.eta
        return loss, total_loss.sum()  # omit 1/2

class MultiTaskModel(nn.Module):
    def __init__(self):
        super(MultiTaskModel, self).__init__()
        self.e = nn.Linear(5, 5, bias=False)
        self.f1 = nn.Linear(5, 2, bias=False)
        self.f2 = nn.Linear(5, 3, bias=False)

    def forward(self, input):
        x = self.e(input)
        outputs = [self.f1(x), self.f2(x)]
        return outputs

## For the normal distribution,
loss_fn1 = nn.MSELoss()
## For the Laplace distribution,
# loss_fn1 = nn.L1Loss()
##
## Note the original work uses the L1 loss for Instance Segmentation
## and Depth Regression, as described at page 6.
## https://arxiv.org/abs/1705.07115
##
cel = nn.CrossEntropyLoss()
def loss_fn2(x, cls):
    return 2 * cel(x, cls)

mtl = MultiTaskLoss(model=MultiTaskModel(),
                    loss_fn=[loss_fn1, loss_fn2],
                    eta=[2.0, 1.0])
print(list(mtl.parameters()))

x = torch.randn(3, 5)
y1 = torch.randn(3, 2)
y2 = torch.LongTensor([0, 2, 1])

optimizer = optim.SGD(mtl.parameters(), lr=0.1)
optimizer.zero_grad()
loss, total_loss = mtl(x, [y1, y2])
print(loss, total_loss)
total_loss.backward()
optimizer.step()
|
st30101
|
All of my losses from the 3 tasks are of the same type, CrossEntropyLoss. So I think
loss_combine_tensor = torch.cuda.FloatTensor([loss_1.sum(), loss_2.sum(), loss_3.sum()])
and
loss_combine_tensor = torch.cuda.FloatTensor([loss_1.mean(), loss_2.mean(), loss_3.mean()])
have the same value, just with a different grad_fn (sum or mean backward). Can you tell me what the difference between the two functions is when the optimizer runs?
|
st30102
|
You need neither sum() nor mean() if you use CrossEntropyLoss() with default parameters. The CrossEntropyLoss must be multiplied by 2 according to Equation (10) in the paper. Sample code:
cel = nn.CrossEntropyLoss()
def loss_fn2(x, cls):
    return 2 * cel(x, cls)
sum() or mean() on loss_1, loss_2, and loss_3 doesn’t influence the optimization.
|
st30103
|
Hello all.
I have some questions regarding my approach to generating images from a neural network in real time. It works quite well (for a portable MacBook), but 2 or 3 seconds after starting, the fan gets much faster and I can get frame drops. The process is:
Step 1) I have a tensor with pixel values in the "result" variable, with shape torch.Size([1, 1, 310, 310]) (310x310 being the size) and with grad_fn=<PermuteBackward>
Step 2) then I create x = torchvision.utils.make_grid(result)
Step 3) then I create an image with img = F.to_pil_image(x)
Step 4) then I convert everything to bytes with data = img.tobytes("raw", "RGBX", 0, -1) to be displayed
And I do this in real time to display it in a rendering system (for the moment a GUI). I confess this seems like a bit of an intensive approach to generate frame by frame and still get a decent framerate without the computer starting to complain.
Any help?
Thanks
|
st30104
|
The following error appears after I added .apply to the class call, since the old style of calling was deprecated when using torch.autograd.Function.
Here is the code:
import torch
import math
import numpy

class Identity(torch.nn.Module):
    def forward(self, input):
        return input

class SegmentConsensus(torch.autograd.Function):
    def __init__(self, consensus_type, dim=1):
        self.consensus_type = consensus_type
        self.dim = dim
        self.shape = None

    def forward(self, input_tensor):
        print(input_tensor)
        self.shape = numpy.array(input_tensor)
        #self.shape = len(input_tensor)
        print(self.shape)
        if self.consensus_type.apply == 'avg':
            output = input_tensor.mean(dim=self.dim, keepdim=True)
        elif self.consensus_type == 'identity':
            output = input_tensor
        else:
            output = None
        return output

    def backward(self, grad_output):
        if self.consensus_type == 'avg':
            grad_in = grad_output.expand(self.shape) / float(self.shape[self.dim])
        elif self.consensus_type == 'identity':
            grad_in = grad_output
        else:
            grad_in = None
        return grad_in

class ConsensusModule(torch.nn.Module):
    def __init__(self, consensus_type, dim=1):
        super(ConsensusModule, self).__init__()
        self.consensus_type = consensus_type if consensus_type != 'rnn' else 'identity'
        self.dim = dim

    def forward(self, input):
        return SegmentConsensus((self.consensus_type, self.dim)).apply(input)
and this is the error:
if self.consensus_type.apply == 'avg':
AttributeError: 'SegmentConsensusBackward' object has no attribute 'consensus_type'
|
st30105
|
I guess the error might be raised since you are using the legacy definition of custom autograd.Functions and would have to adapt the code to the new approach as explained here.
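As a hedged sketch, adapting the SegmentConsensus above to the new-style API would look roughly like this (forward/backward become static methods, and per-call state moves from self onto ctx):

import torch

class SegmentConsensusFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input_tensor, consensus_type, dim):
        ctx.consensus_type = consensus_type
        ctx.dim = dim
        ctx.shape = input_tensor.size()
        if consensus_type == 'avg':
            return input_tensor.mean(dim=dim, keepdim=True)
        return input_tensor

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.consensus_type == 'avg':
            grad_in = grad_output.expand(ctx.shape) / float(ctx.shape[ctx.dim])
        else:
            grad_in = grad_output
        # one gradient per forward input; the non-tensor arguments get None
        return grad_in, None, None

# usage: out = SegmentConsensusFn.apply(x, 'avg', 1)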
|
st30106
|
I am attempting to use my fine-tuned DistilBERT model to extract the embedding of the ‘[CLS]’ token. For every row in my dataset I want to extract this feature and return the result into an array.
However, my code seems to be suffering from a memory leak. I have roughly ~14K rows in my dataset and by the time the code has finished executing, Google Colab has either crashed or reports that I have used almost all 25GB of RAM!
Each embedding is a tensor with 768 elements. So for 14K elements, the returned array should be on the order of 20-30 MBs.
Here is my function that is failing from the memory leak:
def getPooledOutputs(model, encoded_dataset, batch_size=128):
    pooled_outputs = []
    print("total number of iters ", len(encoded_dataset['input_ids'])//batch_size + 1)
    for i in range(len(encoded_dataset['input_ids'])//batch_size + 1):
        up_to = i*batch_size + batch_size
        if len(encoded_dataset['input_ids']) < up_to:
            up_to = len(encoded_dataset['input_ids'])
        input_ids = th.LongTensor(encoded_dataset['input_ids'][i*batch_size:up_to])
        attention_mask = th.LongTensor(encoded_dataset['attention_mask'][i*batch_size:up_to])
        with torch.no_grad():
            embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)['hidden_states'][-1][:, 0]  # pooled [CLS] output
            pooled_outputs.extend(embeddings)
    return pooled_outputs
|
st30107
|
Can you tell if memory usage is growing with each iteration? (e.g., if you forcibly reduce the number of iterations, does the memory usage go down?)
|
st30108
|
I’ve tried using a fourth of my dataset and I did not see a concerning amount of RAM usage. It is only towards the very end of the loop over the entire dataset that my RAM usage skyrockets. This is solely from monitoring what the Google Colab environment displays to me.
[image: Colab RAM usage indicator]
|
st30109
|
I have been unable to figure out whether TorchServe supports dynamic batching, and if yes, how.
I have a model whose throughput could be optimized if we always ran batches of more than one instance through the model at once.
So it would be great if TorchServe could collect requests received within a certain window of time and group them into batches for processing.
This should be configurable through the TorchServe config file or through command-line options when starting, not through the TorchServe API itself, as it should be set up when the server is started with the initial model.
But I cannot find any documentation about how to do this.
Is this supported, and if yes, how?
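For concreteness, this is the kind of configuration I am imagining (hypothetical parameter names, since I could not find them documented):
curl -X POST "localhost:8081/models?url=my_model.mar&batch_size=8&max_batch_delay=50"
i.e. a per-model batch size plus a maximum delay (in milliseconds) to wait for a batch to fill before running a partial batch.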
|
st30110
|
Hi everyone,
I have data of size N that is separated into M chunks (N >> M). The data is too big to fit into RAM entirely. As we don’t have random access to the data, I was looking for an implementation of a chunk Dataset that inherits IterableDataset and supports multiple workers. I didn’t find anything, so I tried to implement it myself:
class ChunkDatasetIterator:
    def __init__(self, file_paths):
        self.path_permutation = np.random.permutation(file_paths)
        self.current_df_index = -1

    def __iter__(self):
        return self

    def __next__(self):
        if self.current_df_index == -1:
            if self.current_df_index == len(self.path_permutation) - 1:
                raise StopIteration
            self.current_df_index += 1
            self.current_iterator = pd.read_parquet(self.path_permutation[self.current_df_index]).sample(frac=1).iterrows()
        try:
            result = next(self.current_iterator)[1]
        except StopIteration:
            if self.current_df_index == len(self.path_permutation) - 1:
                raise StopIteration
            else:
                self.current_df_index += 1
                self.current_iterator = pd.read_parquet(self.path_permutation[self.current_df_index]).sample(frac=1).iterrows()
                result = next(self.current_iterator)[1]
        return result

class ChunkDataset(torch.utils.data.IterableDataset):
    def __init__(self, file_paths):
        super().__init__()
        self.file_paths = file_paths
        self.dataset_size = 0
        for file_path in file_paths:
            self.dataset_size += len(pd.read_parquet(file_path))

    def __len__(self):
        return self.dataset_size

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            return ChunkDatasetIterator(self.file_paths)
        else:
            return ChunkDatasetIterator(
                [elem for ind, elem in enumerate(self.file_paths) if (ind % worker_info.num_workers) == worker_info.id])
This ChunkDataset creates a ChunkDatasetIterator for each worker and splits chunks of data across workers. Then each worker tries to shuffle the order of chunks and shuffle each chunk (two levels of shuffling, the best shuffling I came up with when I don’t have random access to whole data).
This code works very well for my use case. Is this a general good way of handling chunk data for multiple workers? Is there a better way? If this ChunkDataset is a good idea, should I try to make a pull request to the PyTorch project for it?
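For reference, the intended usage is simply (assuming the parquet rows are numeric so the default collate_fn can batch them; parquet_paths is hypothetical here):

dataset = ChunkDataset(parquet_paths)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, num_workers=4)
for batch in loader:
    ...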
|
st30111
|
Solved by ptrblck in post #2
Thanks for sharing this implementation!
I think you could start with a feature request on GitHub and explain your use case as well as your implementation. Currently you are using 3rd party modules such as pandas, which I assume could be removed to allow for a more general use case.
Once the featur…
|
st30112
|
Thanks for sharing this implementation!
I think you could start with a feature request on GitHub and explain your use case as well as your implementation. Currently you are using 3rd party modules such as pandas, which I assume could be removed to allow for a more general use case.
Once the feature request is done the code owners will take a look at it
|
st30113
|
I am setting up a basic Unet model to segment three classes. The individual mask data is set up with shape (H, W, 1) where each pixel has value between 0 and 3. I am wondering whether I can use masks as they are or do they need to be transformed to one-hot-encoded format?
I.e. can the target simply be (H, W, batch_size)?
|
st30114
|
Assuming you are using nn.CrossEntropyLoss for the multi-class segmentation, the model output should contain logits in the shape [batch_size, nb_classes, height, width] and the target should contain values in the range [0, nb_classes-1] and have the shape [batch_size, height, width].
Given that you are using class indices in [0, 3] you would be dealing with 4 classes (maybe you implicitly explained it as 3 classes + background).
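A minimal shape check for this setup (4 classes, assuming nn.CrossEntropyLoss):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(2, 4, 8, 8)          # [batch_size, nb_classes, height, width]
target = torch.randint(0, 4, (2, 8, 8))   # [batch_size, height, width], class indices in [0, 3]
loss = criterion(logits, target)          # works without one-hot encoding the masks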
|
st30115
|
How exactly can I find which nodes are present in a PyTorch model graph, and what their inputs are?
I tried to fetch a torch._C.Graph object using
scripted=torch.jit.script(MyModel().eval())
frozen_module = torch.jit.freeze(scripted)
print(frozen_module.inlined_graph)
which gave the following output
graph(%self : __torch__.___torch_mangle_2.MyModel,
%x1.1 : Tensor,
%x2.1 : Tensor,
%x3.1 : Tensor):
%4 : Float(52229:1, 4:52229, requires_grad=0, device=cpu) = prim::Constant[value=<Tensor>]()
%5 : Float(10:1, 5:10, requires_grad=0, device=cpu) = prim::Constant[value=<Tensor>]()
%6 : int[] = prim::Constant[value=[0, 0]]()
%7 : int[] = prim::Constant[value=[2, 2]]()
%8 : int[] = prim::Constant[value=[1, 1]]()
%9 : int = prim::Constant[value=2]()
%10 : bool = prim::Constant[value=0]()
%11 : int = prim::Constant[value=1]() # test.py:39:34
%12 : int = prim::Constant[value=0]() # test.py:39:29
%13 : int = prim::Constant[value=-1]() # test.py:39:33
%self.classifier.bias : Float(4:1, requires_grad=0, device=cpu) = prim::Constant[value=0.001 * 2.8424 1.0601 -1.3229 4.2920 [ CPUFloatType{4} ]]()
%self.features3.0.bias : Float(5:1, requires_grad=0, device=cpu) = prim::Constant[value= 0.0111 -0.0702 0.1396 0.1691 0.1335 [ CPUFloatType{5} ]]()
%self.features2.0.bias : Float(3:1, requires_grad=0, device=cpu) = prim::Constant[value= 0.3314 0.0165 0.2588 [ CPUFloatType{3} ]]()
%self.features2.0.weight : Float(3:9, 1:9, 3:3, 3:1, requires_grad=0, device=cpu) = prim::Constant[value=<Tensor>]()
%self.features1.0.bias : Float(3:1, requires_grad=0, device=cpu) = prim::Constant[value=0.01 * 2.5380 -31.8947 -15.3462 [ CPUFloatType{3} ]]()
%self.features1.0.weight : Float(3:9, 1:9, 3:3, 3:1, requires_grad=0, device=cpu) = prim::Constant[value=<Tensor>]()
%input.4 : Tensor = aten::conv2d(%x1.1, %self.features1.0.weight, %self.features1.0.bias, %8, %8, %8, %11)
%input.6 : Tensor = aten::max_pool2d(%input.4, %7, %7, %6, %8, %10)
%x1.3 : Tensor = aten::relu(%input.6)
%input.7 : Tensor = aten::conv2d(%x2.1, %self.features2.0.weight, %self.features2.0.bias, %8, %8, %8, %11)
%input.8 : Tensor = aten::max_pool2d(%input.7, %7, %7, %6, %8, %10)
%x2.3 : Tensor = aten::relu(%input.8)
%26 : int = aten::dim(%x3.1)
%27 : bool = aten::eq(%26, %9)
%input.3 : Tensor = prim::If(%27)
block0():
%ret.2 : Tensor = aten::addmm(%self.features3.0.bias, %x3.1, %5, %11, %11)
-> (%ret.2)
block1():
%output.2 : Tensor = aten::matmul(%x3.1, %5)
%output.4 : Tensor = aten::add_(%output.2, %self.features3.0.bias, %11)
-> (%output.4)
%x3.3 : Tensor = aten::relu(%input.3)
%33 : int = aten::size(%x1.3, %12)
%34 : int[] = prim::ListConstruct(%33, %13)
%x1.6 : Tensor = aten::view(%x1.3, %34)
%36 : int = aten::size(%x2.3, %12)
%37 : int[] = prim::ListConstruct(%36, %13)
%x2.6 : Tensor = aten::view(%x2.3, %37)
%39 : int = aten::size(%x3.3, %12)
%40 : int[] = prim::ListConstruct(%39, %13)
%x3.6 : Tensor = aten::view(%x3.3, %40)
%42 : Tensor[] = prim::ListConstruct(%x1.6, %x2.6, %x3.6)
%x.1 : Tensor = aten::cat(%42, %11)
%44 : int = aten::dim(%x.1)
%45 : bool = aten::eq(%44, %9)
%x.3 : Tensor = prim::If(%45)
block0():
%ret.1 : Tensor = aten::addmm(%self.classifier.bias, %x.1, %4, %11, %11)
-> (%ret.1)
block1():
%output.1 : Tensor = aten::matmul(%x.1, %4)
%output.3 : Tensor = aten::add_(%output.1, %self.classifier.bias, %11)
-> (%output.3)
return (%x.3)
But I am not able to iterate over it or find out exactly which nodes are present or what inputs they have. Please suggest if there is another way to perform this operation.
|
st30116
|
A for loop over gr.nodes() should work on a graph gr:
import torch
import torchvision

scripted = torch.jit.script(torchvision.models.resnet18().eval())
frozen_module = torch.jit.freeze(scripted)
gr = frozen_module.inlined_graph
for n in gr.nodes():
    print(n)
Note that it is easy to get into trouble
- if you change the graph while it is being iterated over,
- if the graph goes out of scope and disappears (e.g. that’s why I assign it to a variable).
I tried to make disappearing nodes not crash, but I don’t exactly recall the state of the iterator implementation here.
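To get at the inputs specifically, each node exposes its kind and its input/output values; a small sketch (the accessors come from torch._C and may vary across versions):

for n in gr.nodes():
    print(n.kind())                                # e.g. 'aten::conv2d'
    print([i.debugName() for i in n.inputs()])     # names of the values feeding this node
    print([o.debugName() for o in n.outputs()])    # names of the values it produces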
Best regards
Thomas
|
st30117
|
I tried to load the images and make them usable as input to the model.
cracks_guess = []

def check(arr_img):
    print('length->', len(arr_img))
    with torch.no_grad():
        for idx, img in enumerate(arr_img):
            img = torch.tensor(img).type(torch.FloatTensor).cuda()
            img = img.permute(2, 0, 1)
            img = img.unsqueeze(0)
            #print('shape->', img.shape)
            output = net(img)
            output = torch.sigmoid(output)
            pt = torch.round(output)
            #if output > 0.3:
            if pt == 1:
                cracks_guess.append(img)
                print(idx, 'output->', output)
Is this the right way to reshape the image so it can be fed to the model?
|
st30118
|
Hey, what exactly is the error you are facing? It looks like you have the input images as an array; I don’t see any problem there. If your data is too big to load into one array, you could loop over the paths and load one image at a time.
Your code works, and you are predicting one image at a time.
You could also just pass the whole array of dim [n_samples, n_channels, height, width] if the data fits into memory, or iterate over a few samples at once.
Or just write a dataset and dataloader, and change the batch size to load and compute many samples at once efficiently.
I hope this answers your question.
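A minimal sketch of that last option, reusing arr_img and net from your post (HWC images in memory; treat the details as assumptions):

import torch
from torch.utils.data import Dataset, DataLoader

class ImageArrayDataset(Dataset):
    def __init__(self, arr_img):
        self.arr_img = arr_img

    def __len__(self):
        return len(self.arr_img)

    def __getitem__(self, idx):
        img = torch.tensor(self.arr_img[idx]).float()
        return img.permute(2, 0, 1)  # HWC -> CHW

loader = DataLoader(ImageArrayDataset(arr_img), batch_size=16)
with torch.no_grad():
    for batch in loader:
        out = torch.sigmoid(net(batch.cuda()))  # one forward pass per batch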
|
st30119
|
I’m not facing an error; I just want to make sure this is the right way to feed the input image.
After I collect my data I try to visualize the image like this:
imgA = cracks_guess[0]
imgA = imgA.detach().cpu()
imgA = imgA.squeeze(0)
print(imgA.shape)
imgA = imgA.permute(1, 2, 0)
plt.imshow(imgA)
#plt.show()
but I cannot see the image.
|
st30120
|
Why did you pass the images individually to the network? You can pass all the images in together and get the result. If you cannot see any image, please check whether your cracks_guess array is empty.
I personally would pass in all my images together, get the output of the binary classifier network, and then loop through the outputs to see if they are 1 or not (appending the respective image with a positive prediction to cracks_guess).
|
st30121
|
Hi all!
I found the code GitHub - lsrock1/maskrcnn_benchmark.cpp: Implementation maskrcnn-benchmark, pytorch c++ frontend, which implements Mask R-CNN in C++, but this code is for Ubuntu 16.04 and is old.
I want to upgrade it to work in VS2019 to do inference on Windows.
Would some of you agree to help me with this task?
We can exchange Skype details and work together, as volunteers, because this is for volunteer work.
Thank you
Best regards
PS:
In case there is no one, I will try by myself; for example, could I have help with this error:
Severity: Error
Code: C2893
Description: Failed to specialize function template 'enable_if<std::is_same<X,T>::value,unknown-type>::type torch::autograd::Function::apply(Args &&...)'
Project: layers
File: C:\Users\Sylvain ARD\Dropbox\Documents Sylvain\travail avec Boubacar\maskrcnn C++\include\rcnn\layers\conv2d.h
Line: 20
thank you
best regards
|
st30122
|
Hi there, is there a way for PyTorch to calculate the intersection of two 2D tensors? The post here, Intersection between two vectors/tensors, only provides a method for 1D tensors. Many thanks!
|
st30123
|
Solved by bsridatta in post #6
Check (a==b) * a. This should give the elements where the tensors intersect and zeros where they don’t. Use result.nonzero() if you don’t want the zeros.
|
st30124
|
I think the link answers the 2D case as well. Does this answer your question?
a = torch.FloatTensor([[1,2,3],[4,5,6]])
b = torch.FloatTensor([[1,2,9],[8,7,6]])
(a==b).nonzero()
→ tensor([[0, 0], [0, 1], [1, 2]])
|
st30125
|
Thanks a lot. But it seems to only return the element positions. Can I get the tensor that contains the intersection of the 2D tensors?
|
st30126
|
(1) (a==b) gives
tensor([[ True, True, False],
        [False, False, True]])
(2) The tensor I want is c = torch.FloatTensor([[1,2,0],[0,0,6]])
Can I get the FloatTensor intersection (c in case (2)) rather than the bool values in case (1)?
Thanks!
|
st30127
|
Check (a==b) * a. This should give the elements where the tensors intersect and zeros where they don’t. Use result.nonzero() if you don’t want the zeros.
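Using a and b from above:
>>> (a==b) * a
tensor([[1., 2., 0.],
        [0., 0., 6.]])
which is exactly the c described in case (2).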
|
st30128
|
Funny issue I ran into. If you make a dtype=torch.float16 tensor on the GPU that takes up almost all of the available space, you will actually run out of memory when trying to print it. This isn’t really an issue per se, but I was curious whether anyone knows what is going on in the __str__ method that causes this, and maybe whether it can be fixed.
|
st30129
|
Could you post an executable code snippet to reproduce this issue as well as the output of python -m torch.utils.collect_env, please?
|
st30130
|
Here is the output of the command:
Collecting environment information...
PyTorch version: 1.9.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.19.3
Libc version: N/A
Python version: 3.9 (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.2.67
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 461.92
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] torch==1.9.0+cu102
[pip3] torchaudio==0.9.0
[pip3] torchvision==0.10.0+cu102
[conda] Could not collect
And here is a snippet which could do the job. If it does not, you might have to do some fiddling to find the magic value for val - of course it depends on how much memory you have available on your GPU. You will have found the issue when the stack trace contains the actual print statement.
import torch

device = torch.device("cuda")
val = 17
while 1:
    x = torch.ones((int(val), ), dtype=torch.float16, device=device)
    print(x)
    val *= 1.1
|
st30131
|
I think it’s expected that you would run out of memory, if you are using an infinite loop and allocate larger tensors in each iteration.
val is increased by a factor of 1.1, so that the tensor size for the first iterations would be:
torch.Size([17])
torch.Size([18])
torch.Size([20])
torch.Size([22])
torch.Size([24])
torch.Size([27])
torch.Size([30])
torch.Size([33])
torch.Size([36])
torch.Size([40])
Since the loop is never terminating you are eventually running out of memory.
|
st30132
|
It looks like torch.half/torch.float16 is a special case (distinct from float32), because the data is promoted (!) to float32 for printing (as printing is CPU-side logic that happens in float32). This naturally causes PyTorch to try to grab twice the current tensor’s worth of GPU memory, which is why you get the OOM on the call to print. It might be interesting to consider the speed (casting on GPU is probably faster) vs. memory footprint tradeoff.
|
st30133
|
For the sake of completeness: as discussed offline it seems to come from this line of code, and it might be worth checking whether a printoptions flag could be added to lower the memory footprint and pay with a slower print operation.
|
st30134
|
Maybe I am not seeing the full picture, but printing a tensor usually reports very few of its values. A solution could be to simply gather the values we intend to print and only cast those, using very little memory while still casting on the GPU.
|
st30135
|
Just saw this, sorry. This code is specifically meant to induce an out-of-memory error on print. You can replicate it with a single allocation and print, but I don’t know a priori how much memory you have on your GPU.
|
st30136
|
Hi,
I’ve been scratching my head for a while now.
Env: Python 2.7; pytorch 1.8 + cuda
class Model(nn.Module):
    def __init__(self, feature_extractor, dropout=0, pretrained=True, feat_dim=2048):
        super().__init__()
        self.dropout = dropout
        self.feature_extractor = feature_extractor
        self.feature_extractor.avgpool = nn.AdaptiveAvgPool2d(1)
        fe_out_planes = self.feature_extractor.fc.in_features
        self.feature_extractor.fc = nn.Linear(fe_out_planes, feat_dim)
        self.fc_t = nn.Linear(feat_dim, 3)
        self.fc_q = nn.Linear(feat_dim, 3)
        # initialize the model
        if pretrained:
            init_modules = [self.feature_extractor.fc, self.fc_t, self.fc_q]
        else:
            init_modules = self.modules()
        for m in init_modules:
            if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
                nn.init.constant_(m.weight.data, 0.01)  # constant weights
                if m.bias is not None:
                    nn.init.constant_(m.bias.data, 0)

    def forward(self, x):
        s = x.size()
        x = x.view(-1, *s[2:])
        x = self.feature_extractor(x)
        x = F.relu(x)
        """if self.dropout > 0:
            x = F.dropout(x, p=self.dropout)"""
        t = self.fc_t(x)
        q = self.fc_q(x)
        out = torch.cat((t, q), 1)
        out = out.view(s[0], s[1], -1)
        return out

torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed(seed)

feature_extractor = models.resnet34(pretrained=True)
model = Model(feature_extractor, dropout=0, feat_dim=2048)
Now, the interesting part:
When I run,
model.cuda()
print('Feed a random batch to test the model: ')
input = torch.ones(1, 64, 3, 7, 7)*0.3
input = input.cuda()
model.eval()
output = model(input)
print(output)
tensor([[106.7721, 106.7721, 106.7721, 106.7721, 106.7721, 106.7721], … ]], device=‘cuda:0’)
Compared to when I run without model.cuda(), i.e.:
print('Feed a random batch to test the model: ')
input = torch.ones(1, 64, 3, 7, 7)*0.3
model.eval()
output = model(input)
print(output)
tensor([[53.3860, 53.3860, 53.3860, 53.3860, 53.3860, 53.3860],…])
Almost half the values, and this is consistent across different inputs.
I actually discovered this when I was porting the original repo to Python 3 (v3.8 with the same version of PyTorch): I compared the outputs given the same input data, the same constant weight init, no shuffle, no dropout, with model.eval(), and I noticed different outputs.
Then, after many print statements, I noticed this. Interestingly, this is not the case in the Python 3 version, i.e. running the script with the model and data on the GPU via model.cuda() and input = input.cuda() gives the same output as when .cuda() is not used.
I am really confused as to what the issue is and I couldn’t find any relevant documentation regarding this.
Please help.
Thank you.
|
st30137
|
AntiLibrary5:
Interestingly, this is not the case in the Python 3 version i.e., running the script by putting the model and data into gpu by using model.cuda() and input = input.cuda() gives the same output as compared to when .cuda() is not used.
That’s an interesting finding, but you should also note that PyTorch dropped Python 2.x support when it reached end of life (January 2020), so I would recommend updating to Python 3.x.
|
st30138
|
I am attempting to build a convolutional neural network from scratch for object recognition. My input image size is (3, 256, 256)
Here’s my architecture for CNN.
class NutSnackClassication(MultiLabelImageClassificationBase):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Flatten(),
            nn.Linear(7*7*256, 512),
            nn.ReLU(),
            nn.Linear(512, 258),
            nn.LogSoftmax(dim=1),
        )

    def forward(self, xb):
        return self.network(xb)
Instead of using a pretrained model directly, I want to initialize my network with pretrained weights from ResNet34 and then use it for predictions. A lot of examples show how to download the model and then use it, which doesn’t answer my question. So how do I approach this?
|
st30139
|
You could use the state_dict of the pretrained resnet, manipulate its keys to match the layer names of your new model (assuming all parameters have the same shape), and load it to your model.
Something like this should work:
model = NutSnackClassication()
reference = models.resnet34(pretrained=True)
sd = reference.state_dict()
# change the keys of sd here
model.load_state_dict(sd)
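One hedged way to do the "change the keys" step is a shape-guarded partial load: rename the pretrained keys to your layer names first, then copy only what lines up; everything else keeps its fresh initialization.

model_sd = model.state_dict()
# after renaming sd's keys to match your model, keep only entries that
# exist in the new model with an identical shape
compatible = {k: v for k, v in sd.items()
              if k in model_sd and v.shape == model_sd[k].shape}
model_sd.update(compatible)
model.load_state_dict(model_sd)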
|
st30140
|
I am trying to iterate through different permutations of a dataset with zip() like this…
for ((x1, y1), (x2, y2), ..., (xn, yn)) in zip(*[dataloaders[i] for i in range(n)]):
    ...
I am getting the following error which doesn’t crash, but seems to cause everything to run very slow and hang for a long time…
Traceback (most recent call last):
File "/home/jeff/.venv/env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1324, in __del__
self._shutdown_workers()
File "/home/jeff/.venv/env/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1316, in _shutdown_workers
if w.is_alive():
File "/home/jeff/.pyenv/versions/3.8.6/lib/python3.8/multiprocessing/process.py", line 160, in is_alive
assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7fcbdcacce50>
Is using zip a valid way to accomplish this? Is there a better way to accomplish this without having to load multiple instantiations of the same dataloader?
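For what it’s worth, a sketch of the loop I’m after with the star-unpacking spelled out (separate loader objects, kept alive for the whole loop):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
n = 3
loaders = [DataLoader(dataset, batch_size=8, shuffle=True) for _ in range(n)]

# zip(*loaders) pairs up one batch from each loader per step;
# zip([...]) without the star would iterate over the list of loaders itself
for batches in zip(*loaders):
    (x1, y1), (x2, y2), (x3, y3) = batches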
|
st30141
|
I’m using PyTorch and want to perform data augmentation on my images with Albumentations. My dataset object has two different targets: 'blurry' and 'sharp'. Each instance of both targets needs to have identical changes applied. When I try to perform the data augmentation with a Dataset object like this:
class ApplyTransform(Dataset):
    def __init__(self, dataset, transformation):
        self.dataset = dataset
        self.aug = transformation

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        sample, target = self.dataset[idx]['blurry'], self.dataset[idx]['sharp']
        transformedImgs = self.aug(image=sample, target_image=target)
        sample_aug, target_aug = transformedImgs["image"], transformedImgs["target_image"]
        return {'blurry': sample_aug, 'sharp': target_aug}
Unfortunately, I receive two images with two different augmentations:
[image: two differently augmented outputs]
When I try the same without a Dataset object, I receive two images with identical augmentations applied. Does anybody know how to make it work with a Dataset object?
Here is my augmentation pipeline:
augmentation_transform = A.Compose(
    [
        A.Resize(1024, 1024, p=1),
        A.HorizontalFlip(p=0.25),
        A.Rotate(limit=(-45, 65)),
        A.VerticalFlip(p=0.24),
        A.RandomContrast(limit=0.3, p=0.15),
        A.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        A.pytorch.transforms.ToTensorV2(always_apply=True, p=1.0)
    ],
    additional_targets={"target_image": "image"}
)
|
st30142
|
I think a simple approach here, albeit more tedious, is to write your own transformation function directly via the functional versions of the transforms, applying the same random decision to both images, e.g.,
def buildmytransform(hflip_prob, vflip_prob, ...):
    def _transform(blurry, sharp):
        blur_tmp, sharp_tmp = blurry, sharp
        if torch.rand(1) < hflip_prob:
            blur_tmp = albumentations.augmentations.functional.hflip(blur_tmp)
            sharp_tmp = albumentations.augmentations.functional.hflip(sharp_tmp)
        blur_tmp = albumentations.augmentations.functional.rotate(blur_tmp, ...)
        sharp_tmp = albumentations.augmentations.functional.rotate(sharp_tmp, ...)
        if torch.rand(1) < vflip_prob:
            ...
        return blur_tmp, sharp_tmp
    return _transform
|
st30143
|
Lets say, I have a tensor
a = torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
the size of it is torch.Size([3, 2])
I want to resize it to torch.Size([3, 3]) by adding a 0 to each row. Like this,
torch.tensor([[0.1, 1.2, 0], [2.2, 3.1, 0], [4.9, 5.2, 0]])
How can I resize and add a 0?
|
st30144
|
Shrabani_Ghosh:
a = torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
Does simply allocating a new tensor work?
>>> a = torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
>>> b = torch.zeros([3, 3])
>>> b[:3, :2] = a
>>> b
tensor([[0.1000, 1.2000, 0.0000],
[2.2000, 3.1000, 0.0000],
[4.9000, 5.2000, 0.0000]])
My suspicion is that even if a native “resize” function were available the implementation would essentially do the same thing here.
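As an aside, torch.nn.functional.pad does the same thing in one call (padding the last dimension by 0 on the left and 1 on the right, with zeros):
>>> import torch.nn.functional as F
>>> F.pad(a, (0, 1))
tensor([[0.1000, 1.2000, 0.0000],
        [2.2000, 3.1000, 0.0000],
        [4.9000, 5.2000, 0.0000]])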
|
st30145
|
Is there an elegant way to build a torch.Tensor like this from a given set of values?
Here is a 3x3 example, but in my application I would have a matrix of any odd size. A function call gen_matrix([a, b, c, d, e, f]) should generate a symmetric matrix built from those values.
[image: the desired symmetric 3x3 matrix]
EDIT: As of now, I have implemented the following solution. A more elegant way, without a for loop and using plain torch operations, is desirable.
def weights_to_symmetric(weights, N):
    assert(weights.ndim == 3)
    tensor = torch.zeros((*weights.shape[:2], N, N))
    idx = 0
    for diag in range(N):
        size = N - diag
        w = weights[:, :, idx:idx+size]
        tensor += torch.diag_embed(w, offset=diag)
        if diag > 0:
            tensor += torch.diag_embed(w, offset=-diag)
        idx += size
    return tensor
|
st30146
|
Solved by eduardo4jesus in post #4
Here is a solution by swag2198 from stackoverflow.
>>> N = 5
>>> vals = torch.arange(N*(N+1)/2) + 1
>>> A = torch.zeros(N, N)
>>> i, j = torch.triu_indices(N, N)
>>> A[i, j] = vals
>>> A.T[i, j] = vals
>>> vals
tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14.,
…
|
st30147
|
I’m not sure about PyTorch, but gpytorch has something called toeplitz. Maybe that’s what you are looking for? (Look into sym_toeplitz.)
|
st30148
|
Thank you for the suggestion. sym_toeplitz resembles what I want, but it is not quite the same thing.
from gpytorch import utils
c = torch.tensor([1, 6, 4, 5], dtype=torch.float)
res = utils.toeplitz.sym_toeplitz(c)
res
# tensor([[1., 6., 4., 5.],
# [6., 1., 6., 4.],
# [4., 6., 1., 6.],
# [5., 4., 6., 1.]])
|
st30149
|
Here is a solution by swag2198 from stackoverflow.
>>> N = 5
>>> vals = torch.arange(N*(N+1)/2) + 1
>>> A = torch.zeros(N, N)
>>> i, j = torch.triu_indices(N, N)
>>> A[i, j] = vals
>>> A.T[i, j] = vals
>>> vals
tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14.,
15.])
>>> A
tensor([[ 1., 2., 3., 4., 5.],
[ 2., 6., 7., 8., 9.],
[ 3., 7., 10., 11., 12.],
[ 4., 8., 11., 13., 14.],
[ 5., 9., 12., 14., 15.]])
|
st30150
|
Hi there,
Suppose we are using a forward hook to analyze a mid layer:
class Hook():
    def __init__(self, m):
        self.hook = m.register_forward_hook(self.hook_func)

    def hook_func(self, m, i, o):
        self.stored = o.detach().clone()

    def remove(self):
        self.hook.remove()
The Hook class is instantiated for multiple cases. If I do not remove the hook after use, what negative effects will occur?
Thanks!
|
st30151
|
Looks fine to me. It’s good that you detached and cloned; otherwise you would be keeping the graph or the storage of the output alive.
In general, having the hook registered on the module won’t by itself keep anything alive, if you were worried about that. It’s what you do as a side effect in your hook, i.e. what you store, that could be a problem, but that seems to be okay here.
|
st30152
|
def Conv1(in_planes, places, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_planes, out_channels=places, kernel_size=7, stride=stride, padding=3, bias=False),
        nn.BatchNorm2d(places),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )

class Bottleneck(nn.Module):
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=3):
        super(Bottleneck, self).__init__()
        self.expansion = expansion
        # The expansion widens the channel count of the 1*1 convolution to match
        # the number of input channels of the next layer, e.g. 4*64=256.
        self.downsampling = downsampling
        # Whether the skip connection needs down-sampling; this mainly depends on
        # the size of the bottleneck output vs. the size of the original input.
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=10, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(10),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(10),
            nn.ReLU(inplace=True),
            #nn.Conv2d(in_channels=places, out_channels=places*self.expansion, kernel_size=1, stride=1, bias=False),
            #nn.BatchNorm2d(places*self.expansion),
        )
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places*self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places*self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out

class ResNet(nn.Module):
    def __init__(self, blocks, num_classes=2, expansion=2):
        super(ResNet, self).__init__()
        self.expansion = expansion
        self.conv1 = Conv1(in_planes=1, places=10)
        self.layer1 = self.make_layer(in_places=10, places=10, block=blocks[0], stride=1)
        self.layer2 = self.make_layer(in_places=10, places=10, block=blocks[1], stride=2)
        #self.layer3 = self.make_layer(in_places=512, places=256, block=blocks[2], stride=2)
        #self.layer4 = self.make_layer(in_places=1024, places=512, block=blocks[3], stride=2)
        self.avgpool = nn.AvgPool2d(2, stride=1)
        self.fc = nn.Linear(10, 2)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def make_layer(self, in_places, places, block, stride):
        layers = []
        layers.append(Bottleneck(in_places, places, stride, downsampling=True))
        for i in range(1, block):
            layers.append(Bottleneck(places*self.expansion, places))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.layer1(x)
        x = self.layer2(x)
        #x = self.layer3(x)
        #x = self.layer4(x)
        #x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
RuntimeError: The size of tensor a (10) must match the size of tensor b (30) at non-singleton dimension 1
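A likely reading of this error: with downsampling=True the shortcut outputs places * expansion = 10 * 3 = 30 channels, while the main branch outputs only 10 because the expanding 1x1 convolution is commented out, so out += residual fails at dimension 1. A minimal sketch of one way to make the branches match again — shrinking the shortcut to the hard-coded 10 channels of the main branch (restoring the commented-out expansion convolution would also work):
if self.downsampling:
    self.downsample = nn.Sequential(
        # match the 10 output channels of the main branch instead of places*expansion
        nn.Conv2d(in_channels=in_places, out_channels=10, kernel_size=1, stride=stride, bias=False),
        nn.BatchNorm2d(10)
    )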
|
st30153
|
Hello,
I am using a pretrained resnet50 to classify some images. My problem is that when I had, in the same training function, both model.train and model.eval, the accuracies were fine (about 65% train and validation accuracy), but when I tried to separate them into different functions (one for model.train and one for model.eval), the validation accuracy dropped to 20% and stays constant across epochs. Does someone have an idea of what’s happening?
I’m quite new to all this and I don’t know why it behaves like that.
|
st30154
|
Solved by eqy in post #38
Taking a second look, I still don’t understand why the (current code) has a best_model_wts = copy.deepcopy(model.state_dict()) at the beginning of the training function with model.load_state_dict(best_model_wts) at the end when the best_model_wts is never updated. This means that the training will h…
|
st30155
|
There can be many different causes of this (e.g., inadvertently using different transformations for the validation data vs. the training data). Can you post a code snippet of the evaluation functions?
|
st30156
|
Yes, sure.
The transformations I used are these:
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor()]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor()])
}
The functions:
def train_model(model, dataloaders, criterion, optimizer, scheduler, batch_size=5, num_epochs=10):
    since = time.time()
    val_acc_history = []
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    #pdb.set_trace()
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train']:  #, 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            running_loss = 0.0
            running_corrects = 0
            average_precis_train = 0.001
            average_precis_train_per_class = 0.001
            loss_values = []
            gr_truth_array = np.array([])  # convert to int dtype
            preds_array = np.array([])
            gr_truth_array = gr_truth_array.astype(int)
            preds_array = preds_array.astype(int)
            average_precision_array = np.array([]).astype(float)
            print('Iterating over data:')
            for batch_idx, (inputs, labels) in enumerate(dataloaders[phase]):
                inputs = inputs.to(device)
                labels = labels.to(device).float()
                gt_data = labels
                gt_data = gt_data.to(device)
                gt_data = gt_data.cpu().data.numpy()
                #average_precision_array = []
                # zero the parameter gradients
                optimizer.zero_grad()
                # forward
                # track history only if in train
                #pdb.set_trace()
                if phase == 'train':
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        outputs = outputs.cpu()  #.data.numpy()
                        preds = outputs.cpu().data.numpy()
                        preds = np.round(preds)  # set a condition for binary
                        preds_int = preds.astype(int)
                        gt_data_np = np.round(gt_data)
                        gt_data_int = gt_data_np.astype(int)
                        gt_data = torch.from_numpy(gt_data_np)
                        loss = criterion(outputs, gt_data)
                        gr_truth_array = np.append(gr_truth_array, gt_data_int)
                        preds_array = np.append(preds_array, preds_int)
                # backward + optimize only if in training phase
                if phase == 'train':
                    loss.backward()
                    optimizer.step()
                # statistics
                gr_truth_array = np.reshape(gr_truth_array, (-1, 40))
                preds_array = np.reshape(preds_array, (-1, 40))
                running_loss += loss.item() * inputs.size(0)
                running_corrects += f1_score(gt_data, preds, average="samples")
            if phase == 'train':
                scheduler.step()
            average_precis_train += average_precision_score(gr_truth_array, preds_array, average="macro")
            average_precis_train_per_class += average_precision_score(gr_truth_array, preds_array, average=None)
            average_precision_array = np.append(average_precision_array, average_precis_train_per_class)
            #pdb.set_trace()
            av_precis_array = [j for i in zip(average_precision_array, attributes) for j in i]
            av_precis_array = np.array(av_precis_array)
            print("Average precision Training:", average_precis_train)
            print("Average precision per Class Training:", av_precis_array)
            #pdb.set_trace()
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects / len(dataloaders[phase].dataset)  # running_corrects.float()
            epoch_acc = np.round(epoch_acc, decimals=4)
            print('{} Loss: {:.4f}'.format(phase, epoch_loss))
            print("Acc:", epoch_acc)
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    model.load_state_dict(best_model_wts)
    return model, val_acc_history
The evaluation function is almost the same, but the model is set to model.eval() and I use with torch.no_grad(): instead of torch.set_grad_enabled.
|
st30157
|
I see the condition for the model.train() statement in the code, but it looks like model.eval() doesn’t have a corresponding branch?
|
st30158
|
I’m not sure this is the issue yet, but I don’t see model.eval() anywhere in the code you posted, just model.train().
|
st30159
|
Have you inspected the outputs of the model to see if they behave strangely during validation? For example, are they stuck at the same output (or the same class) for every example? Does the validation accuracy change at all between epochs?
|
st30160
|
What happens when you remove the model.load_state_dict(best_model_wts)? It looks like the best model is never updated so this may just return the same model every iteration.
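For reference, a sketch of the usual pattern (as in the torchvision transfer-learning tutorial, not the poster’s code) that would keep best_model_wts up to date inside a validation branch:
# inside the epoch loop, after computing the validation metrics
if phase == 'val' and epoch_acc > best_acc:
    best_acc = epoch_acc
    best_model_wts = copy.deepcopy(model.state_dict())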
|
st30161
|
Ok, then can you verify the data is changing along with the model predictions during validation? Or are the predictions the same regardless of the input?
|
st30162
|
It seems that the outputs change with every iteration, so I guess there is no issue there
|
st30163
|
You might want to also add a sanity check that the model parameters are changing between validation epochs.
|
st30164
|
This code gives an example of how to count the number of parameters in the model.
How do I check the number of parameters of a model? - PyTorch Forums
If you want to check that the parameters are changing, you can try printing the sum of the parameters rather than the count and see if this is changing between training epochs.
|
st30165
|
Hello again,
So in this line of code
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
since it returns a sum over the parameters, I should just take out the numel() in order to get the sum of the values, right?
|
st30166
|
Something like that. You might need to do a second sum if you end up with just a list of summed parameters for each layer (or you can just compare them directly if the ordering is the same).
|
st30167
|
Ok, because I got this error here
----> return sum(p for p in model.parameters() if p.requires_grad)
RuntimeError: The size of tensor a (7) must match the size of tensor b (64) at non-singleton dimension 3
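That error is expected here: without .numel() the generator yields the parameter tensors themselves, and sum() then tries to add tensors of different shapes elementwise. A sketch of a shape-safe variant, reducing each tensor to a scalar first:
def parameter_checksum(model):
    # A scalar that changes whenever any trainable weight changes.
    return sum(p.sum().item() for p in model.parameters() if p.requires_grad)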
|