st82568 | I would like to add a module graph to tensorboard using SummaryWriter.add_graph. My module’s forward method requires some *args and some **kwargs.
Walking down the stack, it looks like add_graph calls torch.jit.get_trace_graph(model, args), i.e. not passing any kwargs, even though torch.jit.get_trace_graph would support it.
Is there a way to pass kwargs? The only solution I see right now is inspecting the forward parameters and rearranging the arguments so that they are all converted to args, which I would really like to avoid. |
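A minimal sketch of one possible workaround, not part of SummaryWriter's API: wrap the module so the kwargs are bound up front and forward only takes positional args (the wrapper name and the commented add_graph call are illustrative):
import torch.nn as nn

class ArgsOnlyWrapper(nn.Module):
    # Binds the keyword arguments at construction time so that
    # add_graph only has to trace a purely positional forward.
    def __init__(self, model, **fixed_kwargs):
        super().__init__()
        self.model = model
        self.fixed_kwargs = fixed_kwargs

    def forward(self, *args):
        return self.model(*args, **self.fixed_kwargs)

# writer.add_graph(ArgsOnlyWrapper(model, some_flag=True), (input_tensor,))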
st82569 | I wrote a simple self-defined module:
struct BasicConvOptions
{
BasicConvOptions(int ch_in, int ch_out, int ksize) :
ch_in_(ch_in), ch_out_(ch_out), ksize_(ksize) {}
TORCH_ARG(int, ch_in);
TORCH_ARG(int, ch_out);
TORCH_ARG(int, ksize);
TORCH_ARG(int, stride) = 1;
TORCH_ARG(int, padding) = 0;
};
class BasicConvImpl : public torch::nn::Cloneable<BasicConvImpl>
{
public:
explicit BasicConvImpl(BasicConvOptions options);
torch::Tensor forward(const torch::Tensor& input);
void reset() override;
/// Pretty prints the `BatchNorm` module into the given `stream`.
void pretty_print(std::ostream& stream) const override;
protected:
//torch::nn::Sequential seq = nullptr;
torch::nn::Conv2d conv = nullptr;
torch::nn::BatchNorm bn = nullptr;
};
TORCH_MODULE(BasicConv);
It doesn’t save the network structure when using torch::save;
actually, I want to load the module with torch::jit::load in the future.
Can anyone help me? |
st82570 | Hi, I am using the floor function and found this strange behavior:
a=torch.tensor(1-5.9605e-8)
a.floor()
=>tensor(0.)
(a+1).floor()
=>tensor(2.)
How is floor computed and why does not (a+1).floor() output 1.0? |
st82571 | Solved by KFrank in post #2 |
st82572 | Hello An!
anxu:
Hi, I am using the floor function and found this strange behavior:
a=torch.tensor(1-5.9605e-8)
a.floor()
=>tensor(0.)
(a+1).floor()
=>tensor(2.)
How is floor computed and why does not (a+1).floor() output 1.0?
I haven’t checked the arithmetic precisely, but I believe the following
is going on:
A torch.tensor by default uses single-precision (32-bit) floating
point numbers. These have approximately 7 decimal places of
precision. Your small number, 5.9e-8, is (relative to 1) just on
the edge of this precision.
So it turns out that 1 - 5.9e-8, when represented by a 32-bit
floating-point number, is, indeed, not equal to (and a little bit less
than) 1, so floor() takes it down to 0. But (1 - 5.9e-8) + 1,
when represented by a 32-bit floating-point number, is equal
to 2, so floor() leaves it at 2.
Redo your experiment with
a = torch.tensor (1 - 5.9605e-8, dtype = torch.float64)
and you should see your expected results. (That is, when
represented by a 64-bit floating-point number, (1 - 5.9e-8) + 1
will, indeed, not be equal to (and will be a little bit less than) 2,
so floor() will take it down to 1, rather than leaving it at 2.)
Again, I haven’t done the arithmetic exactly, but if you do the
float64 experiment and try various values of your small number around 1.e-16, you should be able to find a value where your
“strange behavior” shows up again – just pushed down to smaller
numbers by the increased precision of float64 relative to float32.
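For concreteness, a minimal sketch of that float32 / float64 comparison (the printed values follow from the rounding described above):
import torch

eps = 5.9605e-8
a32 = torch.tensor(1 - eps)                        # float32 by default
a64 = torch.tensor(1 - eps, dtype=torch.float64)

print(a32.floor(), (a32 + 1).floor())   # 0. and 2. -- (1 - eps) + 1 rounds up to exactly 2 in float32
print(a64.floor(), (a64 + 1).floor())   # 0. and 1. -- float64 keeps (1 - eps) + 1 just below 2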
Best.
K. Frank |
st82573 | Hi, I want to implement the following network: each layer receives the activation of the lower layer along with a speaker code, and back-propagation is used to update the speaker code. But I don’t know how to implement it. |
st82574 | In your desired architecture the inputs of the layers are a combination of the speaker code and the previous layer’s outputs (or the feature vector input). I guess you want to condition on the speaker code.
You could pass the speaker code as additional input, e.g. by concatenating it to the regular layer input. Another very simple method is to use addition to mix in the speaker code. From your diagram I cannot see what kind of network you want, e.g. whether the red layers are fully-connected layers or conv layers. The first thing you could do is to write down all the tensor dimensionalities and then think about bringing them into a ‘compatible form’, e.g. if the dimensionality does not match you could try using a linear layer to adapt it. |
st82575 | I tried to implement this structure, but I’m not sure it’s right. The following code is just an example.
class TestDNN(nn.Module):
    def __init__(self):
        super(TestDNN, self).__init__()
        self.speaker_code = nn.Parameter(torch.randn(1))
        self.fc1 = nn.Linear(2, 2, bias=False)
        self.fc2 = nn.Linear(3, 1, bias=False)

    def forward(self, input):
        x = torch.cat((self.speaker_code.repeat(input.shape[0], 1), input), 1)
        x = self.fc1(x)
        x = torch.cat((self.speaker_code.repeat(input.shape[0], 1), x), 1)
        return self.fc2(x) |
st82576 | Currently I am generating normal random numbers with the following:
zn = torch.FloatTensor(dim1, dim2)
zn.normal_()
I am wondering how I could set a seed for each tensor?
Thanks |
st82577 | Hi Nick!
Nick_8229:
Currently I am generated normal random number by the following,
‘’’
zn = torch.FloatTensor(dim1, dim2)
zn.normal_()
‘’’
I am wondering how could i set seed for each tensor?
manual_seed() should work, e.g.:
zn1 = torch.FloatTensor(dim1, dim2)
zn2 = torch.FloatTensor(dim1, dim2)
zn3 = torch.FloatTensor(dim1, dim2)
torch.manual_seed (seed1)
zn1.normal_()
torch.manual_seed (seed2)
zn2.normal_()
torch.manual_seed (seed3)
zn3.normal_()
Is this what you were asking?
Bear in mind, though, that you don’t generally want to do this.
You are probably better off letting pytorch’s (pseudo) random
number generator do its thing, rather than potentially making
it less random (at the margins – probably won’t matter in
practice) by injecting new seeds all the time.
My typical use of setting the random-number seed is to make
some overall run repeatable from run to run, so that, for example,
my random network weights start out as the same random weights
for each run. But I wouldn’t use different seeds, for example, for
the weights in different layers.
Good luck.
K. Frank |
st82578 | Thanks Frank.
What you gave fixes a seed for each RNG.
What I want is to fix a seed for each tensor, kind of like the following:
zn = torch.FloatTensor(dim)
for i:
zn[i].manual_seed(get_seed(i))
zn[i].normal_()
Thanks again |
st82579 | Hello Nick!
Nick_8229:
What you gave fixes a seed for each RNG.
Not really. Pytorch has (roughly speaking) a single global RNG.
This global RNG is used (drawing random number from its
current state) by things like torch.FloatTensor.normal_().
I simply reseeded the global RNG three times.
What I want is to fix a seed for each tensor, kind of like the following:
I’m not aware of any way to attach a per-tensor RNG to a specific
tensor. (But you shouldn’t want to do this.)
zn = torch.FloatTensor(dim)
for i:
zn[i].manual_seed(get_seed(i))
zn[i].normal_()
This code suggests that you want to be able to attach a separate
(and separately seeded) RNG to each “row” of your tensor. Is
that what you mean?
But, in any event, other than the (nominally irrelevant) actual values
of the randomly set elements of the tensor, how are the results of
your pseudocode (or what you are trying to achieve) intended to
differ from the results of the code I posted? In both cases I see
the elements of your tensors being set to (pseudo) random values drawn from a normal distribution.
What – in terms of results – is it that you want that differs from
what my code snippet does?
Best.
K. Frank |
st82580 | Not sure where I should report this, but as the topic suggests, the Start Locally | PyTorch 2 page gives the wrong install instruction if used without user interaction (default).
As of now, without user interaction (default), the install instruction is:
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch -c defaults -c numba/label/dev
for CUDA 10.0. However, when the user clicks on CUDA 10.0, it then proceeds to give the correct install instruction, so this is just some initialization problem. |
st82581 | Thanks for letting us know!
The install command indeed points to the CUDA 9.2 install while CUDA 10.0 is selected.
As you explained, the right command is shown after reselecting CUDA10.0. |
st82582 | Thanks for reporting. I fixed it via https://github.com/pytorch/pytorch.github.io/commit/0c87f939997aa964ff42248d7f15f939f4c54ef2 3 and the fix should be live in 5 minutes. |
st82583 | This is a clipping from an article: “Here instead of using the embedding, I simply used a linear transformation to transform the 11-dimensional data into an n-dimensional space. This is similar to the embedding with words.”
I can’t understand how I can do such a linear transformation. Can someone explain to me with a simple example? |
st82584 | For the vector that we represent in n-dimensional space (similar to an embedding), should its n-dimensional value be constant, or is it a trained parameter? |
st82585 | The n-dimensional outputs of the linear layer will not be constant (for reasonable weight values). They will be different for each input vector and the projection of a given input vector changes over the course of the training as the weights of the linear layer are adjusted. |
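A minimal sketch of such a learned projection (the dimensions here are just for illustration):
import torch
import torch.nn as nn

n = 8                     # target dimensionality of the embedding-like space (illustrative)
proj = nn.Linear(11, n)   # learnable linear transformation, trained with the rest of the model

x = torch.randn(32, 11)   # batch of 32 eleven-dimensional input vectors
z = proj(x)               # shape (32, 8); changes as the weights of proj are updated
print(z.shape)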
st82586 | How can I pad my input sequence before feeding it to an RNN?
I have a particular vector that I want to be used to pad the input sequence. |
st82587 | Hi, I want to concatenate two tensors with different dimensions, like x = [[1,2], [3, 4], [5, 6]] , y =[ [1, 1], [2,2]], and the result of the concatenation should be z = [[1, 1, 1, 2], [1, 1, 3, 4], [2, 2, 5, 6]]. |
st82588 | On a multi-core system, where we are using MPI to run parallel threads of our PyTorch program,
if we run:
model.to(device=torch.device(F"cpu:{device_id}"))
Does the “device_id” correspond to the “rank” of the MPI thread? Or should we send everything to cpu:0? |
st82589 | What is the benefit of using register_buffer compared to declaring a normal tensor in __init__?
class MyModule(nn.Module):
    def __init__(self, child):
        self.child = torch.as_tensor(child).int()
        # vs
        self.register_buffer('child', torch.from_numpy(np.array(child, dtype=np.int32)))
The buffer is serialized along with the module, but if we initialize it in __init__ then that would not matter.
Also buffer will be pushed to cuda if the model is pushed to cuda, but we could ourselves check the device and push accordingly
So what is the special case that only buffer can do?
It would be nice to have a .register_buffer that does not serialize the tensor, just push to cuda with model.cuda |
st82590 | Solved by ptrblck in post #2 |
st82591 | Since buffers are serialized, they can also be restored using model.load_state_dict. Otherwise you would end up with a newly initialized buffer.
That’s possible, but not convenient, e.g. if you are using nn.DataParallel, as each model will be replicated on the specified devices. Hard-coding a device inside your model won’t work, so you would end up using some utility functions or pushing the tensor to the right device inside forward. |
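A minimal sketch of the difference (module and attribute names are illustrative):
import torch
import torch.nn as nn

class BufferDemo(nn.Module):
    def __init__(self):
        super().__init__()
        self.plain = torch.zeros(3)                   # plain attribute: not in the state_dict, not moved by .cuda()
        self.register_buffer('buf', torch.zeros(3))   # buffer: serialized and moved together with the module

m = BufferDemo()
print('buf' in m.state_dict(), 'plain' in m.state_dict())   # True False
# m.cuda() would move m.buf to the GPU but leave m.plain on the CPU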
st82592 | It would be nice as a feature to have a register_buffer option that avoids the serialization part. I have some intermediate tensors that are constant, so there is no point in serializing them, for example, but they would still benefit from being pushed automatically to the correct device with .cuda(). |
st82593 | Hey; I construct a very simple classification model to classify mixture of gaussian.
In this case, bivariate Gaussian. The data close to mode one has label 0 and data close to mode two has label 1.
Here is how I generate train samples
from torch.distributions.multivariate_normal import MultivariateNormal
m1 = MultivariateNormal(torch.zeros(2) + 300,torch.eye(2) * .01)
m2 = MultivariateNormal(torch.zeros(2) + 200.,torch.eye(2) * .01)
x1 = m1.sample((1000,)) # mode 1
x2 = m2.sample((1000,)) # mode 2
c1 = torch.zeros(1000) # labels for mode 1
c2 = torch.ones(1000) # labels for mode 2
x = torch.cat([x1,x2],dim=0)
c = torch.cat([c1,c2],dim=0).view(-1,1)
The training samples look like this
Now I construct a simple classifier
class Classifier(nn.Module):
    def __init__(self, num_in_dim=2, num_hidden=100):
        super(Classifier, self).__init__()
        self.fc1 = nn.Sequential(
            nn.Linear(num_in_dim, num_hidden),
            nn.ReLU(inplace=True),
            nn.Linear(num_hidden, 1),
            nn.Sigmoid())

    def forward(self, x):
        return self.fc1(x)
Now I set up my training as
net = Classifier()
optimizer = optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.BCELoss()
for i in range(100):
    optimizer.zero_grad()
    a = net(x)
    loss = criterion(a, c)
    loss.backward()
    optimizer.step()
    if i % 100 == 0:
        print(loss.item())
The weird behaviour I observe is the following. Here are the training losses. I train the same network on the same set of batch samples, with only the initial parameters differing. Each line is one initialization of the network. We see that sometimes the network doesn’t learn at all, but sometimes it works really well. There are many theories in optimization about local minima etc., but I think an example like this is too trivial, and the same thing happens even if I use a linear neural network. |
st82594 | Solved by ptrblck in post #2 |
st82595 | I tried a few approaches using different weight init methods etc., but the main issue seems to be the loss in numerical precision using sigmoid + nn.BCELoss.
If you remove the sigmoid in your model and use nn.BCEWithLogitsLoss as your criterion (which uses the LogSumExp trick for stability), your model seems to converge in all runs. |
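A minimal sketch of that change applied to the classifier above:
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, num_in_dim=2, num_hidden=100):
        super().__init__()
        self.fc1 = nn.Sequential(
            nn.Linear(num_in_dim, num_hidden),
            nn.ReLU(inplace=True),
            nn.Linear(num_hidden, 1))   # no Sigmoid: the model returns raw logits

    def forward(self, x):
        return self.fc1(x)

net = Classifier()
criterion = nn.BCEWithLogitsLoss()      # applies the sigmoid internally in a numerically stable way
# at test time: probabilities = torch.sigmoid(net(x))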
st82596 | Thanks for the reply; so I guess that this is also true that using nn.CrossEntropyLoss is more numerically stable than using nn.LogSoftmax + nn.NLLLoss ? |
st82597 | No, internally F.nll_loss(F.log_softmax) will be used as seen in this line of code 4.
However, F.log_softmax is more numerically stable than F.log(F.softmax). |
st82598 | I see. Thanks. So in a neural network for binary classification, I use nn.BCEWithLogitsLoss as my loss function and my neural network has no activation in the last layer. At test time, I just manually apply the sigmoid function to the output of the neural network. |
st82599 | Yes, you could do that to e.g. apply a probability threshold to get the predicted class. |
st82600 | Hi,
I am working on someone’s code which is written in torch 0.3. I realized torch 0.3 doesn’t support torch.no_grad() or model.eval() for this purpose, so I wonder how I can make sure that no back-propagation is being done while doing validation on the dev or test set?
Thanks |
st82601 | Solved by SimonW in post #2
Variable(..., volatile=True) for inputs |
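For the legacy 0.3 API, a minimal sketch of what that looks like (the model and input below are stand-ins):
import torch
from torch.autograd import Variable

model = torch.nn.Linear(4, 2)   # stand-in for the real model
x = torch.randn(8, 4)

# torch 0.3 style evaluation: a volatile input marks the whole graph as
# inference-only, so no gradients can be back-propagated through it
inp = Variable(x, volatile=True)
out = model(inp)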
st82602 | Following https://pytorch.org/cppdocs/installing.html 6.
~/example-app/build$ ~/Downloads/cmake-3.15.2-Linux-x86_64/bin/cmake -DCMAKE_PREFIX_PATH=/home/user/anaconda3/envs/th/lib/python3.6/site-packages/torch/share/cmake/Torch -D_GLIBCXX_USE_CXX11_ABI=0 ..
-- Caffe2: CUDA detected: 9.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 9.0
-- Found cuDNN: v7.1.2 (include: /home/user/anaconda3/envs/th/include, library: /home/user/anaconda3/envs/th/lib/libcudnn.so)
-- Autodetected CUDA architecture(s): 6.1
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Configuring done
CMake Warning at CMakeLists.txt:6 (add_executable):
Cannot generate a safe runtime search path for target example-app because
there is a cycle in the constraint graph:
dir 0 is [/home/user/anaconda3/envs/th/lib/python3.6/site-packages/torch/lib]
dir 1 is [/usr/local/cuda-9.0/lib64/stubs]
dir 2 is [/home/user/anaconda3/envs/th/lib]
dir 3 must precede it due to runtime library [libcudart.so.9.0]
dir 3 is [/usr/local/cuda/lib64]
dir 2 must precede it due to runtime library [libnvrtc.so.9.0]
Some of these libraries may not be found correctly.
-- Generating done
-- Build files have been written to: /home/user/example-app/build
CMakeFiles/example-app.dir/example-app.cpp.o: In function `torch::jit::Graph::insertNode(torch::jit::Node*)':
example-app.cpp:(.text._ZN5torch3jit5Graph10insertNodeEPNS0_4NodeE[_ZN5torch3jit5Graph10insertNodeEPNS0_4NodeE]+0xf7): undefined reference to `torch::jit::Node::insertBefore(torch::jit::Node*)'
CMakeFiles/example-app.dir/example-app.cpp.o: In function `torch::jit::tracer::isTracing()':
example-app.cpp:(.text._ZN5torch3jit6tracer9isTracingEv[_ZN5torch3jit6tracer9isTracingEv]+0x5): undefined reference to `torch::jit::tracer::getTracingState()'
CMakeFiles/example-app.dir/example-app.cpp.o: In function `torch::rand(c10::ArrayRef<long>, c10::TensorOptions const&)':
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x66): undefined reference to `torch::jit::tracer::getTracingState()'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0xf4): undefined reference to `torch::jit::Graph::create(c10::Symbol, unsigned long)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x104): undefined reference to `torch::jit::tracer::recordSourceLocation(torch::jit::Node*)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x126): undefined reference to `torch::jit::tracer::addInputs(torch::jit::Node*, char const*, c10::ArrayRef<long>)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x13e): undefined reference to `torch::jit::tracer::addInputs(torch::jit::Node*, char const*, c10::TensorOptions const&)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x185): undefined reference to `torch::jit::tracer::setTracingState(std::shared_ptr<torch::jit::tracer::TracingState>)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x2a6): undefined reference to `torch::jit::tracer::setTracingState(std::shared_ptr<torch::jit::tracer::TracingState>)'
example-app.cpp:(.text._ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE[_ZN5torch4randEN3c108ArrayRefIlEERKNS0_13TensorOptionsE]+0x2c8): undefined reference to `torch::jit::tracer::addOutput(torch::jit::Node*, at::Tensor const&)'
CMakeFiles/example-app.dir/example-app.cpp.o: In function `torch::jit::script::SimpleValue::SimpleValue(torch::jit::Value*)':
example-app.cpp:(.text._ZN5torch3jit6script11SimpleValueC2EPNS0_5ValueE[_ZN5torch3jit6script11SimpleValueC5EPNS0_5ValueE]+0x1d): undefined reference to `vtable for torch::jit::script::SimpleValue'
CMakeFiles/example-app.dir/example-app.cpp.o: In function `c10::intrusive_ptr<torch::autograd::Variable::Impl, c10::detail::intrusive_target_default_null_type<torch::autograd::Variable::Impl> > c10::intrusive_ptr<torch::autograd::Variable::Impl, c10::detail::intrusive_target_default_null_type<torch::autograd::Variable::Impl> >::make<at::Tensor, std::unique_ptr<torch::autograd::Variable::AutogradMeta, std::default_delete<torch::autograd::Variable::AutogradMeta> >, bool&>(at::Tensor&&, std::unique_ptr<torch::autograd::Variable::AutogradMeta, std::default_delete<torch::autograd::Variable::AutogradMeta> >&&, bool&)':
example-app.cpp:(.text._ZN3c1013intrusive_ptrIN5torch8autograd8Variable4ImplENS_6detail34intrusive_target_default_null_typeIS4_EEE4makeIJN2at6TensorESt10unique_ptrINS3_12AutogradMetaESt14default_deleteISD_EERbEEES8_DpOT_[_ZN3c1013intrusive_ptrIN5torch8autograd8Variable4ImplENS_6detail34intrusive_target_default_null_typeIS4_EEE4makeIJN2at6TensorESt10unique_ptrINS3_12AutogradMetaESt14default_deleteISD_EERbEEES8_DpOT_]+0xc0): undefined reference to `torch::autograd::Variable::Impl::Impl(at::Tensor, std::unique_ptr<torch::autograd::Variable::AutogradMeta, std::default_delete<torch::autograd::Variable::AutogradMeta> >, bool, torch::autograd::Edge)'
collect2: error: ld returned 1 exit status
CMakeFiles/example-app.dir/build.make:100: recipe for target 'example-app' failed
make[2]: *** [example-app] Error 1
CMakeFiles/Makefile2:75: recipe for target 'CMakeFiles/example-app.dir/all' failed
make[1]: *** [CMakeFiles/example-app.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
The files are the same as in the tutorial.
Using Ubuntu 16.04. |
st82603 | The error was in my LD_LIBRARY_PATH. It was missing /usr/lib for some reason. Also, I had an issue where I had the flag -Wl,--whole-archive. |
st82604 | I am using PyTorch 1.2 and exporting hugging face GPT-2 model into ONNX.
But, exported model is of ONNX v4 file format and is based on ONNX op version 9.
I am expecting and would like to export to v5 file format and op version 10 (i.e. ONNX 1.5.0)
How do we control exported ONNX version?
As per PyTorch 1.2 release notes (https://github.com/pytorch/pytorch/releases 13) this support exists. |
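If I remember correctly, torch.onnx.export accepts an opset_version argument; a minimal sketch (the model and input below are small stand-ins for GPT-2):
import torch

model = torch.nn.Linear(4, 2)     # stand-in for the real model
dummy_input = torch.randn(1, 4)

# request ONNX opset 10 explicitly at export time
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=10)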
st82605 | I am working on exporting a trained model to another format. Now I can get the graph and nodes from the ONNX export function, but I cannot read the parameters of some operations (or layers like convolution) of the network. These values seem to be stored in instances of ‘torch._C.Value’. Does anyone know how to read the parameters?
Thanks. |
st82606 | https://pytorch.org/docs/stable/nn.html#lstmcell 1
Where can we set the “depth” of the LSTMCell found in stacked LSTMs?
Is the batch dimension here like sentence length? How do we set up a variable-length ‘batch’ then?
The data is (N, nL, nE) where N is the number of sentences, nL is a sentence length (variable) and nE is the dimension of each word.
How do I set up the LSTM cell for this? |
st82607 | Hello,
I modified my LSTM-based network so that its input is packed padded sequences, thinking that batch processing might be faster or parallelized better (I am a noob in optimization), and also modified the training loop accordingly, but now my model is 3 times slower than before … any idea as to why that is?
the RNN before :
class myLSTM(nn.Module):
def __init__(self,pitch_size,pos_size,util_size,chord_size,hidden_size):
super().__init__()
self.input_size = pitch_size + pos_size + util_size + chord_size
self.hidden_size = hidden_size
self.lstm = nn.LSTM(self.input_size, hidden_size, batch_first = True)
self.notes_layer = nn.Linear(hidden_size,pitch_size)
self.pos_layer = nn.Linear(hidden_size,pos_size)
self.utils_layer = nn.Linear(hidden_size,util_size - 1)
self.tanh = nn.Tanh()
self.tmp_pos = pitch_size + pos_size
self.softmax = nn.LogSoftmax(dim = 2)
self.sigmoid = nn.Sigmoid()
self.drop_layer = nn.Dropout(p = 0.5)
def forward(self, input, hidden = None):
if hidden == None:
out, hidden = self.lstm(input,hidden)
out = self.drop_layer(self.sigmoid(out))
out_notes = self.softmax(self.notes_layer(out))
out_pos = self.sigmoid(self.pos_layer(out))
out_utils = self.softmax(self.utils_layer(out))
out = torch.cat((out_notes,out_pos,out_utils),2)
elif hidden != None:
out, hidden = self.lstm(input,hidden)
out = self.drop_layer(self.sigmoid(out))
out_notes = self.softmax(self.notes_layer(out))
out_pos = self.sigmoid(self.pos_layer(out))
out_utils = self.softmax(self.utils_layer(out))
out = torch.cat((out_notes,out_pos,out_utils),2)
return out, hidden
and after :
class myLSTM(nn.Module):
def __init__(self,pitch_size,pos_size,util_size,chord_size,hidden_size):
super().__init__()
self.input_size = pitch_size + pos_size + util_size + chord_size
self.hidden_size = hidden_size
self.lstm = nn.LSTM(self.input_size, hidden_size, batch_first = True)
self.notes_layer = nn.Linear(hidden_size,pitch_size)
self.pos_layer = nn.Linear(hidden_size,pos_size)
self.tempo_layer = nn.Linear(hidden_size,1)
self.utils_layer = nn.Linear(hidden_size,util_size - 1)
self.tanh = nn.Tanh()
self.tmp_pos = pitch_size + pos_size
self.softmax = nn.LogSoftmax(dim = 2)
self.sigmoid = nn.Sigmoid()
self.drop_layer = nn.Dropout(p = 0.5)
def forward(self, input, lengths, hidden = None):
input = nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first = True, enforce_sorted = False)
if hidden == None:
out, hidden = self.lstm(input,hidden)
out = nn.utils.rnn.pad_packed_sequence(out, batch_first = True, padding_value= -1)[0]
out = self.drop_layer(self.sigmoid(out))
out_notes = self.softmax(self.notes_layer(out))
out_pos = self.sigmoid(self.pos_layer(out))
out_utils = self.softmax(self.utils_layer(out))
out = torch.cat((out_notes,out_pos,out_utils),2)
elif hidden != None:
out, hidden = self.lstm(input,hidden)
out = nn.utils.rnn.pad_packed_sequence(out, batch_first = True, padding_value= -1)[0]
out = self.drop_layer(self.sigmoid(out))
out_notes = self.softmax(self.notes_layer(out))
out_pos = self.sigmoid(self.pos_layer(out))
out_utils = self.softmax(self.utils_layer(out))
out = torch.cat((out_notes,out_pos,out_utils),2)
return out, hidden
the training loop now looks something like that :
for iter in range(1,n_iters+1):
batch = np.random.randint(0,10,10)
lengths = torch.as_tensor([dataSet[b]["inputTensor"].size(0) for b in batch], dtype=torch.int64, device='cpu')
inputTensor = nn.utils.rnn.pad_sequence([dataSet[b]["inputTensor"] for b in batch], batch_first = True, padding_value= -1)
target = [dataSet[b]["target"] for b in batch]
optimizer.zero_grad()
loss = 0
output, hidden = model(inputTensor, lengths)
for b in batch:
pads = max(lengths) - lengths[b]
dim = output[b,:,:].size(0) - pads
masked_out = output[b,:,:].view(-1)[:-pads*output[b,:,:].size(1)].reshape(dim,-1)
if pads == 0:
masked_out = output[b,:,:]
ln = criterion(masked_out[:,0:n_pitch], utilities.targetTensor(target[b][:,:n_pitch]))
lp = pos_criterion(masked_out[:, n_pitch:n_pitch+n_pos], target[b][:,n_pitch:n_pitch + n_pos])
lu = criterion(masked_out[:, n_pitch + n_pos:n_pitch + n_pos + n_util - 1], utilities.targetTensor(target[b][:,n_pitch+n_pos:]))
loss += 5*lp + ln + lu
if iter == n_iters - 1:
print("ln : %.3f,lp : %.3f, lu : %.3f" %(ln,lp,lu))
loss.backward()
optimizer.step()
Both versions of the code learn properly, but I was hoping for a speedup of what is already pretty slow, not the other way around. |
st82608 | How can I use NCCL for communication in multi-node GPU distributed training, and make use of the InfiniBand card on the machine? |
st82609 | I made a simple toy experiment. I want to get feedback about this… so let’s get started
In the last part, I ask a simple question about this situation.
Hypothesis
I made some simple training code like this.
I wanted to know whether the previous input value contaminates the model’s gradient values.
If the previous input value contaminated the model, the gradient values would change based on the previous value.
So I made an experiment like this:
If the mean of single_input_history is almost the same as that of multi_input_history,
it means that previous input values are treated as independent of the current input.
import torch
import torch.nn as nn

class TestNet(nn.Module):
    def __init__(self):
        super(TestNet, self).__init__()
        self.layer = nn.Linear(1, 1)

    def forward(self, x):
        out = self.layer(x)
        return out

model = TestNet()
l1 = nn.L1Loss()
single_input_history = []
multi_input_history = []

for _ in range(10000):
    model = TestNet()
    x = torch.tensor([1.0], requires_grad = True)
    y = torch.tensor([2.0], requires_grad = True)
    z = torch.tensor([4.0], requires_grad = True)  # target; needed for the loss below
    temp = model(x)
    loss = l1(temp, z)
    loss.backward()
    single_input_history.append(x.grad.data.item())

for _ in range(10000):
    model = TestNet()
    x = torch.tensor([1.0], requires_grad = True)
    y = torch.tensor([2.0], requires_grad = True)
    z = torch.tensor([4.0], requires_grad = True)
    temp = model(y)
    temp = model(x)
    loss = l1(temp, z)
    loss.backward()
    multi_input_history.append(x.grad.data.item())

print(sum(single_input_history)/len(single_input_history))
print(sum(multi_input_history)/len(multi_input_history))
Is my experimental method right? Any suggestions or evaluation of this outcome would be very helpful to me. |
st82610 | Is there any way to split a single GPU and use it as multiple GPUs?
For example, we have 2 different ResNet18 models and we want to forward-pass these two models in parallel on just one GPU (with enough memory, e.g., 12 GB). I mean that the forward passes of these two models run in parallel and concurrently on just one GPU. |
st82611 | if your code is Torch Distributed compatible, you can spawn two processes on the same device. |
st82612 | I am not sure if I understood your question properly.
Can’t you just do
python model1.py
python model2.py
in two different shells??
(Although this slows down both individual processes, I haven’t seen any recommended method that maintains the speed!) |
st82613 | I’m not sure if I get what you want, but possibly my last topic (Couple of models in production 343) will guide you.
Cheers,
Anton |
st82614 | You could try using data parallel (as for multiple GPUs) and pass the same GPU ID several times |
st82615 | Thanks all for above replies.
I have read the topic (Couple of models in production 75) and according to it I have implemented these codes:
First Scenario (Sequential Forward Pass):
import torch
import time
from torchvision import models
from torch.autograd import Variable
# Check use GPU or not
use_gpu = torch.cuda.is_available() # use GPU
torch.manual_seed(123)
if use_gpu:
torch.cuda.manual_seed(456)
# Define CNN Models:
model1 = models.resnet18(pretrained=True)
model2 = models.resnet50(pretrained=True)
# Eval Mode:
model1.eval()
model2.eval()
# Put on GPU:
if use_gpu:
model1 = model1.cuda()
model2 = model2.cuda()
# Create tmp Variable:
x = Variable(torch.randn(10, 3, 224, 224))
if use_gpu:
x = x.cuda()
# Forward Pass:
tic1 = time.time()
out1 = model1(x)
out2 = model2(x)
tic2 = time.time()
sequential_forward_pass = tic2 - tic1
print('Time = ', sequential_forward_pass) # example output --> Time = 0.6485
Now I want to perform the forward passes in parallel in just one single GPU.
Second Scenario (Parallel Forward Pass):
import time
import torch
from torchvision import models
import torch.multiprocessing as mp
from torch.autograd import Variable
# Check use GPU or not
use_gpu = torch.cuda.is_available() # use GPU
torch.manual_seed(123)
if use_gpu:
torch.cuda.manual_seed(456)
# Define Forward Pass Method:
def forward_pass_method(model, tmp_variable):
output = model(tmp_variable)
return output
# Define CNN Models:
model1 = models.resnet18(pretrained=True)
model2 = models.resnet50(pretrained=True)
# Eval Mode:
model1.eval()
model2.eval()
# Put on GPU:
if use_gpu:
model1 = model1.cuda()
model2 = model2.cuda()
# Create tmp Variable:
x = Variable(torch.randn(10, 3, 224, 224))
if use_gpu:
x = x.cuda()
# Parallelized the Forward Passes:
tic1 = time.time()
model1.share_memory()
model2.share_memory()
processes = []
num_processes = 2
for i in range(num_processes):
if i == 0:
p = mp.Process(target=forward_pass_method, args=(model1, x))
else:
p = mp.Process(target=forward_pass_method, args=(model2, x))
p.start()
processes.append(p)
for p in processes:
p.join()
tic2 = time.time()
parallel_forward_pass = tic2 - tic1
print('Time = ', parallel_forward_pass)
However the second method has the below error:
...RuntimeError: CUDA error (3): initialization error
Would you please kindly help me to address the error?
However, it is worth noting that I am in doubt about whether parallelizing on just one single GPU is a feasible task or not. |
st82616 | Dear @ptrblck,
Do you have any idea about my last post?
I have followed your opinion 39. |
st82617 | The error seems to be related to some issues with multiprocessing and CUDA.
Have a look at the doc on Sharing CUDA tensors 86.
You have to use the “spawn” or “forkserver” start method.
Also, in your first script your time measurement is a bit wrong, because you have to call torch.cuda.synchronize() before getting the end time.
CUDA calls are asynchronous, so that the end time might be stored before the CUDA operation is done.
Here is a small script for your second use case, which might be a starter (I’m not sure, if you need modelX.share_memory()):
import torch
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as _mp
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
# Globals
mp = _mp.get_context('spawn')
use_cuda = True
class Flatten(nn.Module):
def __init__(self):
super(Flatten, self).__init__()
def forward(self, x):
x = x.view(x.size(0), -1)
return x
def get_model():
model = nn.Sequential(
nn.Conv2d(3, 6, 3 ,1, 1),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(6, 16, 3, 1, 1),
nn.ReLU(),
nn.MaxPool2d(2),
nn.Conv2d(16, 1, 3, 1, 1),
nn.MaxPool2d(2),
Flatten(),
nn.Linear(28*28, 10),
nn.LogSoftmax(dim=1)
)
return model
def train(model, data_loader, optimizer, criterion):
for data, labels in data_loader:
labels = labels.long()
if use_cuda:
data, labels = data.to('cuda'), labels.to('cuda')
optimizer.zero_grad()
output = model(data)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
if __name__=='__main__':
num_processes = 2
model1 = get_model()
model2 = get_model()
if use_cuda:
model1 = model1.to('cuda')
model2 = model2.to('cuda')
dataset = datasets.FakeData(transform=transforms.ToTensor())
data_loader = DataLoader(dataset, batch_size=2,
num_workers=0,
pin_memory=False)
criterion = nn.NLLLoss()
optimizer1 = optim.SGD(model1.parameters(), lr=1e-3)
optimizer2 = optim.SGD(model2.parameters(), lr=1e-3)
#model1.share_memory()
#model2.share_memory()
processes = []
p1 = mp.Process(target=train, args=(model1, data_loader, optimizer1, criterion))
p1.start()
processes.append(p1)
p2 = mp.Process(target=train, args=(model2, data_loader, optimizer2, criterion))
p2.start()
processes.append(p2)
for p in processes:
p.join()
print('Done')
However, I’m still not sure, if you’ll see any performance advantage.
It would be nice, if you could time your script and report the results using the sequential and multiprocessing way. |
st82618 | Dear @ptrblck,
Thank you for your time & response. I have used the spawn start method, and the errors have been addressed. Now, my modified code is as below:
import time
import torch
from torchvision import models
import torch.multiprocessing as mp
from torch.autograd import Variable
# Check use GPU or not
use_gpu = torch.cuda.is_available() # use GPU
torch.manual_seed(123)
if use_gpu:
torch.cuda.manual_seed(456)
# spawn start method:
mp = mp.get_context('spawn')
# Define Forward Pass Method:
def forward_pass_method(model, tmp_variable):
output = model(tmp_variable)
return output
# Define CNN Models:
model1 = models.resnet18(pretrained=True)
model2 = models.resnet50(pretrained=True)
# Eval Mode:
model1.eval()
model2.eval()
# Put on GPU:
if use_gpu:
model1 = model1.cuda()
model2 = model2.cuda()
# Create tmp Variable:
x = Variable(torch.randn(10, 3, 224, 224))
if use_gpu:
x = x.cuda()
# model1.share_memory()
# model2.share_memory()
if __name__ == '__main__':
# Parallelized the Forward Passes:
tic1 = time.time()
processes = []
num_processes = 2
for i in range(num_processes):
if i == 0:
p = mp.Process(target=forward_pass_method, args=(model1, x))
else:
p = mp.Process(target=forward_pass_method, args=(model2, x))
p.start()
processes.append(p)
for p in processes:
p.join()
tic2 = time.time()
parallel_forward_pass = tic2 - tic1
print('Time = ', parallel_forward_pass)
However, the computational time increased a lot (both qualitatively and quantitatively) in comparison to the sequential way. As a result, I came to the conclusion that using multiprocessing on a single GPU doesn’t have any performance advantage. |
st82619 | Have you tried it with a larger workload or just a single forward pass?
I think the startup might take much longer in the multi-processing case, so it’s maybe still faster in the long run.
But as I said, it’s a lot of speculation, since I haven’t used this approach yet. |
st82620 | @ahkarami @ptrblck I failed to use this code for inference with multiple models on a single GPU on Windows.
RuntimeError: cuda runtime error (71) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
Could you please help me solve this problem? |
st82621 | Your error seems to be this one 5.
RuntimeError: cuda runtime error (71) : operation not supported at
https://pytorch.org/docs/stable/notes/windows.html#multiprocessing-error-without-if-clause-protection |
st82622 | Actually, you can’t share models across processes on Windows, but you can share tensors instead. |
st82623 | Thanks ptrblck.
I have used if-clause protection, but still run into this issue. |
st82624 | Thanks peterjc123,
Could you share more details?
I’m using three instance segmentation models which were trained with maskrcnn-benchmark, and I would like to run inference using multiprocessing.
How to share tensors across processes on Windows? |
st82625 | That is also possible, use multiprocessing and pass model paths and two queues (input/output) as arguments and then push tensors into the input queue and retrieve the answers from the output queue. |
st82626 | Thanks!
What’s the difference between this solution and multiprocessing.Pool?
Do you mean that I should load a single model in each process and use the producer-consumer pattern like queues to input/output? |
st82627 | How can I know whether I’m dealing with a number or a tensor, other than checking the dim() method?
Is there a method for this, or should I just use dim()?
Thank you all in advance |
st82628 | Thanks, I have something like:
if one_hot_tensor.size(0) > 1:
    return torch.argmax(one_hot_tensor)
else:
    return one_hot_tensor[0].int()
and this always returns a tensor and none of these proposed methods work! |
st82629 | it means that one_hot_tensor.size(0) > 1 has always been true, and torch.argmax() will return a tensor |
st82630 | I have code to obtain a 2D meshgrid in PyTorch such as
import torch
x = torch.randn(1,2,3,4)
B, C, H, W = x.size()
# mesh grid
xx = torch.arange(0, W).view(1,-1).repeat(H,1)
yy = torch.arange(0, H).view(-1,1).repeat(1,W)
xx = xx.view(1,1,H,W).repeat(B,1,1,1)
yy = yy.view(1,1,H,W).repeat(B,1,1,1)
grid = torch.cat((xx,yy),1).float()
For 3D, we have x of size BxCxDxHxW. What should I add to obtain a 3D meshgrid? Thanks
This is what I tried
import torch
x = torch.randn(1,2,3,4,5)
B, C, D, H, W = x.size()
# mesh grid
xx = torch.arange(0, W).view(1,-1).repeat(D,H,1)
yy = torch.arange(0, H).view(-1,1).repeat(D,1,W)
zz = torch.arange(0, D).view(1,-1).repeat(1,H,W)
print (xx.shape,yy.shape,zz.shape)
xx = xx.view(1,1,D,H,W).repeat(B,1,1,1,1)
yy = yy.view(1,1,D,H,W).repeat(B,1,1,1,1)
zz = zz.view(1,1,D,H,W).repeat(B,1,1,1,1)
grid = torch.cat((xx,yy,zz),1).float()
print (xx.shape,yy.shape,zz.shape)
print (grid)
The z direction looks wrong
torch.Size([3, 4, 5]) torch.Size([3, 4, 5]) torch.Size([1, 4, 15])
torch.Size([1, 1, 3, 4, 5]) torch.Size([1, 1, 3, 4, 5]) torch.Size([1, 1, 3, 4, 5])
tensor([[[[[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.]],
[[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.]],
[[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.],
[0., 1., 2., 3., 4.]]],
[[[0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3.]],
[[0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3.]],
[[0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1.],
[2., 2., 2., 2., 2.],
[3., 3., 3., 3., 3.]]],
[[[0., 1., 2., 0., 1.],
[2., 0., 1., 2., 0.],
[1., 2., 0., 1., 2.],
[0., 1., 2., 0., 1.]],
[[2., 0., 1., 2., 0.],
[1., 2., 0., 1., 2.],
[0., 1., 2., 0., 1.],
[2., 0., 1., 2., 0.]],
[[1., 2., 0., 1., 2.],
[0., 1., 2., 0., 1.],
[2., 0., 1., 2., 0.],
[1., 2., 0., 1., 2.]]]]]) |
st82631 | How about using torch.meshgrid(torch.arange(4), torch.arange(5), torch.arange(6))? It will return three tensors from which you can produce 3D coordinates. |
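A minimal sketch along those lines, matching the B x 3 x D x H x W layout from the question above:
import torch

B, C, D, H, W = 1, 2, 3, 4, 5
zz, yy, xx = torch.meshgrid(torch.arange(D), torch.arange(H), torch.arange(W))
# each of zz, yy, xx has shape (D, H, W); stack into channels and repeat over the batch
grid = torch.stack((xx, yy, zz), dim=0).unsqueeze(0).repeat(B, 1, 1, 1, 1).float()
print(grid.shape)   # torch.Size([1, 3, 3, 4, 5])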
st82632 | Hello !
I am trying to speed up my code and unfortunately don’t have a GPU that supports CUDA, so I wanted to know if I could parallelise my code (things like batch processing of input by an LSTM) to let it run over several cores. Is this automatically taken care of in PyTorch, or does one have to write extensive, complicated modifications to achieve this?
Best,
J. |
st82633 | I am trying to use eager execution with my own nodes that have parameters.
For training, I wrote a loop with multiple training iterations, so I used the option
loss.backward(retain_graph = True)
Then I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-80ae8e772693> in <module>()
19 y = model()
20 loss = criterion(y, t)
---> 21 loss.backward(retain_graph = True)
22 optimizer.step()
1 frames
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
105 products. Defaults to ``False``.
106 """
--> 107 torch.autograd.backward(self, gradient, retain_graph, create_graph)
108
109 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [28, 128]] is at version 25088; expected version 21504 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Does this mean that the path for back-propagation is disconnected (i.e., it cannot be done)? |
st82634 | Training code is as follows;
model.train()
for epoch in range(EPOCH):
    for x, t in dataloader_train:
        t = t.to(device)
        for time in range(TIME_STEPS):
            x_ = x[0][0][time]
            x_ = x_.to(device)
            for index_a in range(NUM_INPUT):
                for index_b in range(NUM_HIDDEN - 1, 0, -1):
                    model.fw_x[index_a][index_b] = model.fw_x[index_a][index_b - 1]
                    model.fw_h[index_a][index_b] = model.fw_h[index_a][index_b - 1]
                model.fw_x[index_a][0:NUM_INPUT] = x_
                model.fw_h[index_a][0] = 0.0
            model.zero_grad()
            y = model()
            loss = criterion(y, t)
            loss.backward(retain_graph = True)
            optimizer.step()
Is such FIFO coding not allowed? Then how can I write the same function using PyTorch semantics? |
st82635 | Hi,
Writing to Tensors in place like this can be problematic.
If you can use lists instead, it will solve the problem.
Otherwise, you need to avoid problematic inplace operations either by creating new Tensors every time or adding a clone() of fw_x/fw_h after all your inplace ops. |
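A minimal sketch of the list-based FIFO idea (sizes and names are illustrative):
import torch

NUM_HIDDEN = 4
fifo = [torch.zeros(3) for _ in range(NUM_HIDDEN)]   # a python list instead of writing into one tensor

x_ = torch.randn(3)
fifo = [x_] + fifo[:-1]                # shift: newest entry in front, oldest dropped -- no inplace writes
stacked = torch.stack(fifo)            # build a fresh tensor for the forward pass when needed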
st82636 | Hi,
I tried this style:
model.fw_x = torch.stack((model.fw_x[1:], x_))
where “x_” and “fw_x” are one- and two-dimensional, respectively.
torch.cat() needs the same shape between them, so instead I used stack with “[1:]” in order to implement the FIFO, but I get the following error:
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 3 and 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:702
I still do not understand. Any suggestion? |
st82637 | I wrote a Seq2Seq model for conversation generation but the speed is extremely slow.
I saw this post, and test my model as @apaszke said
torch.cuda.synchronize()
start = # get start time
output = model(input)
torch.cuda.synchronize()
end = # get end time
When I run 10 batches on the Seq2Seq with dynamic attention, the result is like:
Model time one batch: 6.50730013847
Model time one batch: 5.17414689064
Model time one batch: 4.81271314621
Model time one batch: 4.43320679665
Model time one batch: 4.23180413246
Model time one batch: 4.5174510479
Model time one batch: 8.60860896111
Model time one batch: 4.29604315758
Model time one batch: 4.30749702454
Model time one batch: 4.46091389656
Model time one batch: 4.33084321022
Dynamic attention means I need to compute the context vector at each decoding time step. As you can see, I use two for loops to compute attn_weights at each time step.
class Attn(nn.Module):
def __init__(self, input_size, attn_size, cuda=True):
super(Attn, self).__init__()
self.input_size = input_size
self.attn_size = attn_size
self.cuda = cuda
self.attn = nn.Linear(self.input_size * 2, attn_size)
self.mlp = nn.Sequential(
nn.Linear(attn_size, attn_size),
nn.Tanh()
)
self.v = nn.Parameter(torch.FloatTensor(1, attn_size))
init.xavier_uniform(self.v)
def forward(self, state, encoder_outputs, encoder_input_lengths):
"""
:Parameters:
:state: decoder last time step state, shape=[num_layers, B, hidden_size]
:encoder_outputs: encoder outputs of all time steps, shape=[B, T, hidden_size]
:encoder_input_lengths: List, [B_enc]
:Return:
"""
this_batch_size = state.size(1)
max_len = encoder_outputs.size(1)
attn_energies = Variable(torch.zeros(this_batch_size, max_len))
if self.cuda:
attn_energies = attn_energies.cuda()
for i in range(this_batch_size):
for j in range(encoder_input_lengths[i]):
attn_energies[i, j] = self.score(state[-1][i], encoder_outputs[i, j])
attn_mask = attn_energies.ne(0).float()
attn_exp = torch.exp(attn_energies) * attn_mask
attn_weights = attn_exp / torch.cat([attn_exp.sum(1)]*max_len, 1)
return attn_weights
def score(self, si, hj):
"""
:Parameters:
:si: time=i-1,decoder state
:hj: time=j, encoder output state
"""
# v*tanh(W*concat(si, hj))
inp = torch.cat((si, hj)).unsqueeze(0)
energy = self.attn(inp)
energy = self.mlp(energy) #F.tanh(energy)
energy = self.v.dot(energy)
return energy
Static attention means the context vector is unchanged at each decoding time step, so there is no need to compute it multiple times.
Note: I think the reported time is just for forward(); with backward(), it may be much slower than that. : (
And 10 batches on the Seq2Seq with static attention:
Model time one batch: 1.55489993095
Model time one batch: 0.443991184235
Model time one batch: 0.185837030411
Model time one batch: 0.196111917496
Model time one batch: 0.193861961365
Model time one batch: 0.194068908691
Model time one batch: 0.190461874008
Model time one batch: 0.18402504921
Model time one batch: 0.186547040939
Model time one batch: 0.183899879456
Model time one batch: 0.191169023514
And 10 batches on the raw Seq2Seq (without any attention context):
Model time one batch: 1.13855099678
Model time one batch: 0.356266021729
Model time one batch: 0.185835123062
Model time one batch: 0.170114040375
Model time one batch: 0.170575141907
Model time one batch: 0.171154975891
Model time one batch: 0.17102599144
Model time one batch: 0.185311079025
Model time one batch: 0.166770935059
Model time one batch: 0.163444042206
Model time one batch: 0.169273138046
The raw Seq2Seq needs 0.17 on average per batch; it is also slower than TensorFlow. So I wonder how I can detect the bottleneck of my model?
According to the comparison, it seems like dynamic attention is time-consuming. How can I improve this part of the code to make it run faster?
Thanks! |
st82638 | Did you manage to speed this up? I’m facing the exact same problem.
Edit 1:
Speeded it up by vectorizing the operations using torch.baddbmm instead of the double for loops.
Used this fact a lot:
For a batch-first scenario,
x = torch.randn(10, 4, 8) # encoder inputs (batch * word * ?)
y = torch.randn(10, 8, 1) # hidden state
# this for loop version
aa = torch.zeros(10, 4)
for batch in range(x.size()[0]):
for word in range(x.size()[1]):
aa[batch,word] = x[batch,word].dot(y[batch,:].squeeze(1))
# is equivalent to this vector version
bb = torch.baddbmm(torch.zeros(4,1), x, y).squeeze(2)
# verify if they are equal
aa == bb
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1
# 1 1 1 1 |
st82639 | Hi ixaxaar,
I also have exactly the same problem, so I’m trying to make it faster with baddbmm, though I have a problem.
In my case, encoder_outputs has shape (length x batch_size x hidden_vector_size) and hidden has shape (1 x batch_size x hidden_vector_size) in the forward function. I mean, they are not batch-first.
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, hidden_size)
self.v = nn.Parameter(torch.FloatTensor(1, hidden_size))
def forward(self, hidden, encoder_outputs):
max_len = encoder_outputs.size(0)
this_batch_size = encoder_outputs.size(1)
# Create variable to store attention energies
attn_energies = Variable(torch.zeros(this_batch_size, max_len)) # B x S
if USE_CUDA:
attn_energies = attn_energies.cuda()
# For each batch of encoder outputs
for b in range(this_batch_size):
# Calculate energy for each encoder output
for i in range(max_len):
attn_energies[b, i] = self.score(hidden[:, b], encoder_outputs[i, b].unsqueeze(0))
#try to replace for loop with baddbmm
bb = torch.baddbmm(Variable(torch.zeros(max_len, 1).cuda()), encoder_outputs.transpose(0,1), hidden.transpose(0,1)).squeeze(2)
# Normalize energies to weights in range 0 to 1, resize to 1 x B x S
return F.softmax(attn_energies).unsqueeze(1)
def score(self, hidden, encoder_output):
if self.method == 'dot':
energy = hidden.dot(encoder_output)
return energy
elif self.method == 'general':
energy = self.attn(encoder_output)
energy = hidden.dot(energy)
return energy
elif self.method == 'concat':
energy = self.attn(torch.cat((hidden, encoder_output), 1))
energy = self.v.dot(energy)
return energy
Instead of for loop, I’m trying to use baddbmm like this.
for b in range(this_batch_size):
# Calculate energy for each encoder output
for i in range(max_len):
attn_energies[b, i] = self.score(hidden[:, b], encoder_outputs[i, b].unsqueeze(0))
bb = torch.baddbmm(Variable(torch.zeros(max_len, 1).cuda()), encoder_outputs.transpose(0,1), hidden.transpose(0,1)).squeeze(2)
But I got following error and it’s not working.
RuntimeError: expected 3D tensor at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorMathBlas.cu:437
If you could figure out how to apply baddbmm to my case, could you help me? |
st82640 | The only way to speed up ops on torch is to use vectorized ops.
As @cakeeatingpolarbear mentioned, you could use torch.bmm as well as others to do so.
Perhaps you’d also like to check out tensor comprehensions 20 if you have very specialized ops not natively available on pytorch. |
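A minimal sketch of a vectorized ‘dot’ score for the seq-first layout above (sequence length T, batch B, hidden size H; shapes are illustrative):
import torch

T, B, H = 7, 4, 16
encoder_outputs = torch.randn(T, B, H)   # (T, B, H), as in the seq-first code above
hidden = torch.randn(1, B, H)            # last decoder state

# move the batch to the front and do one batched matrix product instead of two python loops
energies = torch.bmm(encoder_outputs.transpose(0, 1),        # (B, T, H)
                     hidden.transpose(0, 1).transpose(1, 2)   # (B, H, 1)
                     ).squeeze(2)                             # (B, T)
attn_weights = torch.softmax(energies, dim=1).unsqueeze(1)    # (B, 1, T)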
st82641 | class rnn(nn.Module):
def __init__(self, input_size, hidden_size, num_layers):
super(rnn, self).__init__()
self.num_layers = num_layers
self.hidden_size = hidden_size
self.rnn = nn.LSTM(input_size, hidden_size=hidden_size, num_layers=num_layers)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x, hidden):
out, hidden = self.rnn(x, hidden)
output = self.fc(out)
return output, hidden
def init_hidden(self, batch_size):
weight = next(self.parameters()).data
if (use_gpu):
hidden = (weight.new(self.num_layers, batch_size, self.hidden_size).zero_().cuda(),
weight.new(self.num_layers, batch_size, self.hidden_size).zero_().cuda())
else:
hidden = (weight.new(self.num_layers, batch_size, self.hidden_size).zero_(),
weight.new(self.num_layers, batch_size, self.hidden_size).zero_())
return hidden
model = rnn(input_size, hidden_size, num_layers)
print(model)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
for epoch in range(1):
model.cuda()
loss = 0
inputs, targets = otu_handler.get_N_samples_and_targets(batch_size, seq_len)
hidden = model.init_hidden(batch_size)
# convert inputs and targets to tensors
tensor1 = torch.FloatTensor(inputs)
tensor2 = torch.FloatTensor(targets).unsqueeze(0)
# send the tensors to cuda
input_tensor = Variable(tensor1.cuda(), requires_grad = True)
input_tensor = input_tensor.transpose(1,2).transpose(0,1) # reshape input tensor to feed to forward
targets = Variable(tensor2.cuda(), requires_grad = False) # send targets tensor to cuda
out, hidden = model(input_tensor, hidden)
loss += criterion(out, targets)
print("epoch: %d, loss: %0.2f" % (epoch +1, loss.item()))
optimizer.zero_grad()
loss.backward()
# nn.utils.clip_grad_norm_(model.parameters(),2)
optimizer.step()
The output is something like this:
epoch: 1, loss: 17322482.00
epoch: 2, loss: 13263158.00
epoch: 3, loss: 17829800.00
epoch: 4, loss: 2325517312.00
epoch: 5, loss: 25319476.00
epoch: 6, loss: 18578794.00
epoch: 7, loss: 12342423.00
epoch: 8, loss: 168415920.00
epoch: 9, loss: 17259250.00
epoch: 10, loss: 175974816.00
epoch: 11, loss: 31784454.00
epoch: 12, loss: 17423422.00
epoch: 13, loss: 20637374.00
epoch: 14, loss: 36012860.00
epoch: 15, loss: 27672722.00
epoch: 16, loss: 19312454.00
epoch: 17, loss: 91513688.00
epoch: 18, loss: 1654813952.00
epoch: 19, loss: 19895126.00
epoch: 20, loss: 60809964.00
epoch: 21, loss: 20559496.00
epoch: 22, loss: 18604082.00
epoch: 23, loss: 18246324.00
epoch: 24, loss: 36698088.00
epoch: 25, loss: 21916944.00
epoch: 26, loss: 26092220.00
epoch: 27, loss: 17202180.00
epoch: 28, loss: 20631326.00
epoch: 29, loss: 22352708.00
epoch: 30, loss: 17544972.00
epoch: 31, loss: 19844386.00
epoch: 32, loss: 3089386496.00
epoch: 33, loss: 21927742.00
epoch: 34, loss: 19233062.00
epoch: 35, loss: 24233808.00
epoch: 36, loss: 14247420.00
epoch: 37, loss: 19866096.00
epoch: 38, loss: 19247676.00
epoch: 39, loss: 40788848.00
epoch: 40, loss: 178087904.00
epoch: 41, loss: 32632774.00
epoch: 42, loss: 49278888.00
epoch: 43, loss: 13424591.00
epoch: 44, loss: 13337734.00
epoch: 45, loss: 17201510.00
epoch: 46, loss: 44591204.00
epoch: 47, loss: 25328970.00
epoch: 48, loss: 14413733.00
epoch: 49, loss: 22293836.00
epoch: 50, loss: 23427574.00
epoch: 51, loss: 189332624.00
epoch: 52, loss: 26622992.00
epoch: 53, loss: 47797516.00
epoch: 54, loss: 45296728.00
epoch: 55, loss: 41071708.00
epoch: 56, loss: 25053186.00
epoch: 57, loss: 27240572.00
epoch: 58, loss: 33122594.00
epoch: 59, loss: 14874048.00
epoch: 60, loss: 20430304.00
epoch: 61, loss: 21469500.00
epoch: 62, loss: 15457670.00
epoch: 63, loss: 17139502.00
epoch: 64, loss: 17082172.00
epoch: 65, loss: 26391324.00
epoch: 66, loss: 40719556.00
epoch: 67, loss: 18023896.00
epoch: 68, loss: 16934692.00
epoch: 69, loss: 26133756.00
epoch: 70, loss: 14400602.00
epoch: 71, loss: 17984878.00
epoch: 72, loss: 926914624.00
epoch: 73, loss: 21649504.00
epoch: 74, loss: 16226421.00
epoch: 75, loss: 15451624.00
epoch: 76, loss: 22588744.00
epoch: 77, loss: 42169820.00
I know there is some logical error in the code that I am not able to pinpoint.
Any help is appreciated.
Thank you |
st82642 | Hi,
You should check that your datas are not too wild: they should be roughly centered with not too high standard deviation.
You can also make sure that the learning rate is not too high. |
st82643 | albanD:
You can also make sure that the learning rate is not too high.
Hi alban…
Yes my data is normalized. I can check with learning rate.
But the code looks fine?
Regards,
Anurag |
st82644 | The code looks fine yes.
You can also check your code by using a small part of your dataset (or a dataset that you generate by hand). |
st82645 | I was reading some material and I think i have an exploding gradient problem. So I changed my Loss function from MSELoss to SmoothL!Loss and the loss are under the range. But some how the loss is not decreasing. I think this is a new problem now…hahaha…
Thank you for your time |
st82646 | hi all,
the displayed image doesn’t match the label name ??
thanks in advance |
st82647 | Looks like your X_train and y_train are not aligned properly… Could you share more code? |
st82648 | That looks alright to me… Could you display images as in the original post but directly with x and y rather than X_train and y_train ? The data could be misaligned before the sklearn split. |
st82649 | Hello. I’m working on a conditional model that generates music. I added a label to the input as a one_hot_vector that could be an artist, a genre or an instrument. My worry is that the label is a single one_hot_vector (so max 1) while the input is very long (let’s say len = 100 for example) and values range from 1 to 256. Therefore I’m afraid that priming with the label has no impact over all.
Is there a way to know if the label has an impact ? And if it doesn’t have an impact, how to “artificially” improve the influence of the label on training / generation ?
Thank you ! |
st82650 | Hi,
First of all, it is usually recommended to normalize the input, so maybe you could try to change the range of the input from (1-256) to (-1, 1). I think this might help.
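A tiny sketch of that rescaling, assuming the input tensor x holds values in [1, 256]:

x = x.float()
x = (x - 1.0) / 255.0 * 2.0 - 1.0          # maps [1, 256] to [-1, 1]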
If you want to know how much an input affects the output, maybe you could try monitoring the gradient vector’s magnitude. I mean the gradient with respect to the input label, not with respect to the weights. If the gradient is very small, it might mean that the input makes little difference to the output.
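A rough sketch of that check (sequence, label_onehot and model are stand-ins for your own tensors and network, and the concatenation may differ from how your model actually combines them):

label_onehot = label_onehot.detach().clone().requires_grad_(True)   # track gradients on the label part
inp = torch.cat([sequence, label_onehot], dim=-1)
out = model(inp)
out.sum().backward()
print(label_onehot.grad.norm())            # a tiny norm suggests the label barely influences the output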
Ideally, the model should be able to learn the importance of the label in the input, but if you want to “artificially” force that, maybe you could try repeating the label a bunch of times in the input?
Hope this helps |
st82651 | Thanks for the help Ben,
Never mind, I just checked and the input is indeed normalized. Still, I feel like the label should have more importance for some reason. Intuitively, it must have a very big influence, and it’s hard to check its influence with regard to generation (the results are still a bit messy anyway).
st82652 | Hi guys, I’m getting this kind of error in two places:"RuntimeError: cuda runtime error (59) : device-side assert triggered ".
when calculating accuracy:
_, pred = outputs.topk(1, 1, True)
pred = pred.t()
correct = pred.eq(targets.view(1, -1))
n_correct_elems = correct.float().sum().data[0]
Tried to do this:
n_correct_elems = correct.float().sum().item()
still didn’t help; the exact error is:
/opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion t >= 0 && t < n_classes failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/generated/…/THCReduceAll.cuh line=317 error=59 : device-side assert triggered
Traceback (most recent call last):
File “main.py”, line 141, in
train_logger, train_batch_logger)
File “/path/train.py”, line 37, in train_epoch
acc = calculate_accuracy(outputs, targets)
File “/path/utils.py”, line 58, in calculate_accuracy
n_correct_elems = correct.float().sum().item()
RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/generated/…/THCReduceAll.cuh:317
I wanted to move forward, so I skipped this function (just returned a constant number), and then it failed in csv.writer (I checked the path for the file):
the exact error log is:
/opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/generic/THCTensorCopy.cpp line=70 error=59 : device-side assert triggered
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f80c09ac748>>
Traceback (most recent call last):
File "path/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
self._shutdown_workers()
File "path/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
self.worker_result_queue.get()
File "path/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "path/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "path/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "path/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "path/anaconda3/lib/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "path/anaconda3/lib/python3.6/multiprocessing/connection.py", line 737, in answer_challenge
response = connection.recv_bytes(256) # reject large message
File "path/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "path/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "path/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Traceback (most recent call last):
File "main.py", line 141, in <module>
train_logger, train_batch_logger)
File "path/train.py", line 55, in train_epoch
'lr': optimizer.param_groups[0]['lr']
File "path/utils.py", line 41, in log
self.logger.writerow(write_values)
File "path/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 57, in __repr__
return torch._tensor_str._str(self)
File "path/anaconda3/lib/python3.6/site-packages/torch/_tensor_str.py", line 256, in _str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "path/anaconda3/lib/python3.6/site-packages/torch/_tensor_str.py", line 82, in __init__
copy = torch.empty(tensor.size(), dtype=torch.float64).copy_(tensor).view(tensor.nelement())
RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/generic/THCTensorCopy.cpp:70
I must say that I debugged on Windows and the program ran; now I’m running it on a remote machine, and every step brings a different error. I’m not far from breaking down, please help.
Thanks. |
st82653 | barakb:
Assertion t >= 0 && t < n_classes failed.
This is the error message. The target tensor you passed to nll_loss has some out-of-bound class.
You can’t recover from a CUDA device assert failure; this is a CUDA limitation. All you can do is fix the bug and restart your process.
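Because CUDA kernels run asynchronously, the reported stack trace often points at an unrelated line. Two general debugging steps (not specific to this code) are to force synchronous kernel launches or to run the failing batch on the CPU, which raises a readable Python error instead of a device assert:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"   # must be set before CUDA is initialized
import torch
# ...or move everything to the CPU for one run:
# model = model.cpu(); data, targets = data.cpu(), targets.cpu()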
st82654 | can you elaborate just a bit, what does it means “out-of-bounds class” and what is the nll_loss (last one for curiosity), the sizes of each tensor were checked carefully, I’ll be happy for your help, because it’s seems that every step I’m getting this error , also in:
targets = targets.to('cuda')
Edit: I checked again and it works on CPU with batch = 1; when batch > 1, I get a similar error:
“cur_target >= 0 && cur_target < n_classes”
One more thing: it crashes only after a few steps… is this a hint about the problem?
Thanks. |
st82655 | The out of bounds error is thrown if you pass class indices, which are negative (t < 0) or greater or equal to the number of classes (t >= n_classes).
E.g. if you have 5 classes, the class indices shoule be in [0, 4].
Could you check your target tensor for these out of bounds values?
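A minimal sketch of that check (n_classes, outputs, targets and criterion stand in for your own objects), placed right before the loss computation:

assert targets.min() >= 0 and targets.max() < n_classes, \
    f"targets out of range: min={targets.min()}, max={targets.max()}, n_classes={n_classes}"
loss = criterion(outputs, targets)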
nn.NLLLoss is the negative log likelihood loss which is used in the usual classification use case.
nn.CrossEntropyLoss uses internally a nn.LogSoftmax layer and nn.NLLLoss to calculate the loss. |
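For the curiosity part, a small sketch showing that nn.CrossEntropyLoss matches nn.LogSoftmax followed by nn.NLLLoss:

import torch
import torch.nn as nn

logits = torch.randn(4, 5)                 # batch of 4 samples, 5 classes
targets = torch.tensor([0, 2, 4, 1])       # valid class indices in [0, 4]

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))             # True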
st82656 | Thanks for your answer, But I still have some issues with that.
I’m debugging on my windows (cpu only) , and I don’t have this problem, but when I take exactly the same files to my machine on (unix with 1 GPU) I do get this error, do you have any Idea , why is that happening?
maybe the nn.CrossEntropyLoss.cuda() is acting differently?
Thanks. |
st82657 | That’s a bit strange indeed!
Could you load all your target values and check for the value range?
torch.unique might be helpful or just a comparison.
If you don’t find any out of bounds values, would it be possible to provide the target tensor, i.e. upload it somewhere? I would like to have a look at it, as I would like to exclude a possible silent bug on the CPU side. |
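A sketch of that check over the whole training set, assuming train_loader is the DataLoader already used in the training loop:

all_targets = torch.cat([targets for _, targets in train_loader])
print(torch.unique(all_targets))           # every value should lie in [0, n_classes - 1]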
st82658 | After printing the target I see I have out of bound values.
I’ll go figure out why |
st82659 | Continuing the discussion from RuntimeError: cuda runtime error (59) : device-side assert triggered , 2 different places please help me understand:
Sorry, I have the same problem as you. Can I ask why “it crashes only after a few steps” happens, and how you finally solved your problem? I am not far from breaking down, please give me some advice. Thanks very much.
st82660 | I too face the same issue sporadically. I am using GPU. during first few runs, there are no issues but all of sudden receiving the error even there is no change in the code.
Can someone highlight how this can be fixed? |
st82661 | I found the total number of output class of multi-class claasifier is less than number of labels. That’s why I got this error.
As for me:
number of label: 0-31, 32 classes.
number of output: 0-30, 31 classes. |
st82662 | I met the same issue with you.
Did you solve it?
Can you tell me how to solve it?
Best wishes! |
st82663 | Hi!
From my understanding, using the BCEWithLogitsLoss should yield the same results as BCELoss composed with sigmoid units. And the only difference between the two is that the former (BCEWithLogitsLoss) is numerically more stable.
However, when I test their behavior, I get significantly different results as soon as I deal with logits with values on the order of 1e2.
Minimal example:
import torch
from torch import nn
preds = torch.rand(10)
preds[0] = 1e2
labels = torch.zeros(10)
criterion = nn.BCELoss()
print(criterion(nn.Sigmoid()(preds), labels)) #outputs tensor(3.5969)
criterion = nn.BCEWithLogitsLoss()
print(criterion(preds, labels)) #outputs tensor(10.8338)
I am using pytorch 1.2.0.
Could someone please tell me whether I am doing anything wrong or whether this behavior is to be expected?
Thanks in advance! |
st82664 | Solved by ptrblck in post #2
st82665 | You are saturating the sigmoid with such a high number.
Have a look at this small test:
import torch
import torch.nn.functional as F

labels = torch.zeros([1])
for x in torch.linspace(10, 100, steps=100):
    # once sigmoid(x) rounds to exactly 1.0, BCELoss can no longer tell the logits apart
    print((torch.sigmoid(x) - 1.) == 0, ' for ', x)
    x = x.view(1)
    ce = F.binary_cross_entropy(torch.sigmoid(x), labels)
    ce_logit = F.binary_cross_entropy_with_logits(x, labels)
    err = torch.abs(ce - ce_logit)
    print('error ', err)
As you can see, the first print statement will return True for logits of ~17, which means the limited floating point precision returns a value of 1. for torch.sigmoid(x).
Likewise the error will jump up by a large margin. |
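For reference, the extra stability of the logits version comes from never evaluating sigmoid(x) explicitly. A sketch of the standard numerically stable rewrite (this is the usual max(x, 0) - x*z + log(1 + exp(-|x|)) form, not code taken from PyTorch internals):

import torch

def stable_bce_with_logits(x, z):
    # avoids computing sigmoid(x), so large logits don't saturate to exactly 1.0
    return x.clamp(min=0) - x * z + torch.log1p(torch.exp(-x.abs()))

x = torch.tensor([100.0])
z = torch.tensor([0.0])
print(stable_bce_with_logits(x, z))        # ~100, matching binary_cross_entropy_with_logits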
st82666 | The Model is based on Alexnet. I was training it with python 3.7 pytorch 0.4.1.
But in the forward process it showed the following errors:
Traceback (most recent call last):
File “interpretable_CNN-Zhang-learned_template.py”, line 683, in
model=train_model(model,dataloaders,loss_func,optimizer,scheduler,device,100)
File “interpretable_CNN-Zhang-learned_template.py”, line 494, in train_model
outputs=model(inputs)
File “/home/TUE/20183494/miniconda3/envs/env_new/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 477, in call
result = self.forward(*input, **kwargs)
File “interpretable_CNN-Zhang-learned_template.py”, line 402, in forward
x=self.dropout(x)
File “/home/TUE/20183494/miniconda3/envs/env_new/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 477, in call
result = self.forward(*input, **kwargs)
File “/home/TUE/20183494/miniconda3/envs/env_new/lib/python3.7/site-packages/torch/nn/modules/dropout.py”, line 53, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File “/home/TUE/20183494/miniconda3/envs/env_new/lib/python3.7/site-packages/torch/nn/functional.py”, line 595, in dropout
return functions.dropout.Dropout.apply(input, p, training, inplace)
File "/home/TUE/20183494/miniconda3/envs/env_new/lib/python3.7/site-packages/torch/nn/functions/dropout.py", line 40, in forward
ctx.noise.bernoulli(1 - ctx.p).div(1 - ctx.p)
RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1535493744281/work/aten/src/THC/THCTensorRandom.cu:34
Has anybody encountered this problem? I don’t understand what caused it; can anybody help me?
Many thanks |
st82667 | I am sorry if this is naive, but I am learning PyTorch and Machine Learning as I go, and I am running into some trouble with my CNN. The model is meant to classify images based on a parameter called m/E, and I have looked at the images and confirmed that they are considerably different. The training and testing sets have both been groomed so that there are exactly 50% of all events in each of the two categories. Training on about 2000 images, testing on about 400.
No matter how many epochs I run, it will always predict the exact same outputs. Further, the model tends to predict all or nearly all images in the same category. Right now I believe it is a training issue so I will just show that part of the code for now.
Here is my CNN:
class CNN(nn.Module):
def __init__(self, input_size, n_feature, output_size, pp=False):
super(CNN, self).__init__()
self.pp = pp
self.n_feature = n_feature
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=n_feature, kernel_size=5)  # use the constructor argument, not a global n_features
self.conv2 = nn.Conv2d(n_feature, n_feature, kernel_size=5)
self.fc1 = nn.Linear(n_feature*4*4, 50)
self.fc2 = nn.Linear(50, output_size)
def forward(self, x, verbose=False):
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, kernel_size=2)
x = x.view(-1, self.n_feature*4*4)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.softmax(x, dim=1)
return x
And here is my Training Loop:
def train(epoch, model, perm=torch.arange(0, isize*isize).long()):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
data = data.float()
target = target.float()
data = data.view(-1, isize*isize)
data = data[:, perm]
data = data.view(-1, 1, isize, isize)
optimizer.zero_grad()
output = model(data)
loss = F.binary_cross_entropy(output, target, reduction='sum')
loss.backward()
optimizer.step()
if batch_idx % 16 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tloss: {:.6f}'.format(epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
And finally, how everything gets implemented:
model_cnn_train = CNN(input_size, n_features, output_size, pp=False)
model_cnn_train.to(device)
model_cnn_test = CNN(input_size, n_features, output_size, pp=False)
model_cnn_test.to(device)
optimizer = optim.SGD(model_cnn_train.parameters(), lr=0.01, momentum=0.)
print("Convolutional Neural Network")
print('Number of parameters: {}'.format(get_n_params(model_cnn_train)))
for epoch in range(0, nEpochs):
truearray = []
targarray = []
train(epoch, model_cnn_train)
print("\n\nTESTING TIME\n\n")
test(model_cnn_test)
tcount = 0
for pp in targarray:
if pp > 0.5:
tcount += 1
print("I guessed that there would be {} objects with m/E > {}\n".format(tcount,MoEthreshold))
A sample of some output for 3 training epochs may look like this:
Classifying based on MoE
Convolutional Neural Network
Number of parameters: 19784
Train Epoch: 0 [0/2012 (0%)] loss: 41.824711
Train Epoch: 0 [512/2012 (25%)] loss: 884.192932
Train Epoch: 0 [1024/2012 (51%)] loss: 1049.979126
Train Epoch: 0 [1536/2012 (76%)] loss: 828.930847
TESTING TIME
Test set: Average loss: 1.4427, Accuracy: 196/413 (47%)
I guessed that there would be 13 objects with m/E > 0.005
Train Epoch: 1 [0/2012 (0%)] loss: 884.192932
Train Epoch: 1 [512/2012 (25%)] loss: 773.668762
Train Epoch: 1 [1024/2012 (51%)] loss: 663.144592
Train Epoch: 1 [1536/2012 (76%)] loss: 828.930847
TESTING TIME
Test set: Average loss: 1.4427, Accuracy: 196/413 (47%)
I guessed that there would be 13 objects with m/E > 0.005
Train Epoch: 2 [0/2012 (0%)] loss: 718.406677
Train Epoch: 2 [512/2012 (25%)] loss: 552.620422
Train Epoch: 2 [1024/2012 (51%)] loss: 663.144592
Train Epoch: 2 [1536/2012 (76%)] loss: 773.668762
TESTING TIME
Test set: Average loss: 1.4427, Accuracy: 196/413 (47%)
I guessed that there would be 13 objects with m/E > 0.005
I am not sure why it is performing so poorly, or why the predictions don’t change after each epoch. I really appreciate any help, so thank you in advance, and let me know if you need to see more of the code or know more about my environment.