st83768 | Dear apaszke,
I am trying to implement inter-GPU communication by using pytorch+mpi+gpu.
Following is the tested code, which is designed to make sure that process 0 runs on GPU0 and process 1 runs on GPU1. However, the code cannot run successfully. Do you know why?
import os
import socket
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import platform

def run(rank, size):
    if rank == 0:
        tensor = torch.zeros(1).cuda(0)
        # Send the tensor to process 1
        tensor += 1
        dist.send(tensor=tensor, dst=1)
    else:
        tensor = torch.zeros(1).cuda(1)
        # Receive tensor from process 0
        dist.recv(tensor=tensor, src=0)
    print('Rank ', rank, ' has data ', tensor[0])

def init_processes(fn):
    """ Initialize the distributed environment. """
    dist.init_process_group('mpi')
    rank = dist.get_rank()
    size = dist.get_world_size()
    print('I am rank ', rank, ' on ', platform.node())
    fn(rank, size)

if __name__ == "__main__":
    init_processes(run)
Following is the error message.
[osherlab:21377] *** Process received signal ***
[osherlab:21377] Signal: Segmentation fault (11)
[osherlab:21377] Signal code: Invalid permissions (2)
[osherlab:21377] Failing at address: 0x10030800000
[osherlab:21377] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f9cd4e4b390]
[osherlab:21377] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x14e04b)[0x7f9cd4bbe04b]
[osherlab:21377] [ 2] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/libopen-pal.so.40(opal_convertor_unpack+0x11b)[0x7f9c79c0363b]
[osherlab:21377] [ 3] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_recv_frag_callback_match+0x4de)[0x7f9c51f9c1fe]
[osherlab:21377] [ 4] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/openmpi/mca_btl_smcuda.so(mca_btl_smcuda_component_progress+0x3b9)[0x7f9c51d6be99]
[osherlab:21377] [ 5] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/libopen-pal.so.40(opal_progress+0x2c)[0x7f9c79bf1dac]
[osherlab:21377] [ 6] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/libopen-pal.so.40(ompi_sync_wait_mt+0xb5)[0x7f9c79bf86a5]
[osherlab:21377] [ 7] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/libmpi.so.40(ompi_request_default_wait+0x20f)[0x7f9ca9f11c9f]
[osherlab:21377] [ 8] /home/osherlab/guanlei/software/openmpi-4.0.0/openmpi/lib/libmpi.so.40(PMPI_Wait+0x4e)[0x7f9ca9f56c2e]
[osherlab:21377] [ 9] /home/osherlab/guanlei/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.so(_ZN4c10d15ProcessGroupMPI9AsyncWork4waitEv+0x6d)[0x7f9cc375191d]
[osherlab:21377] [10] /home/osherlab/guanlei/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x5fd98e)[0x7f9cc369298e]
[osherlab:21377] [11] /home/osherlab/guanlei/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.so(+0x112f5d)[0x7f9cc31a7f5d]
[osherlab:21377] [12] python(_PyCFunction_FastCallDict+0x154)[0x563fdb4a0b94]
[osherlab:21377] [13] python(+0x19e7ce)[0x563fdb5307ce]
[osherlab:21377] [14] python(_PyEval_EvalFrameDefault+0x2fa)[0x563fdb552cba]
[osherlab:21377] [15] python(+0x197a94)[0x563fdb529a94]
[osherlab:21377] [16] python(+0x198941)[0x563fdb52a941]
[osherlab:21377] [17] python(+0x19e755)[0x563fdb530755]
[osherlab:21377] [18] python(_PyEval_EvalFrameDefault+0x10ba)[0x563fdb553a7a]
[osherlab:21377] [19] python(+0x19870b)[0x563fdb52a70b]
[osherlab:21377] [20] python(+0x19e755)[0x563fdb530755]
[osherlab:21377] [21] python(_PyEval_EvalFrameDefault+0x2fa)[0x563fdb552cba]
[osherlab:21377] [22] python(+0x197a94)[0x563fdb529a94]
[osherlab:21377] [23] python(+0x198941)[0x563fdb52a941]
[osherlab:21377] [24] python(+0x19e755)[0x563fdb530755]
[osherlab:21377] [25] python(_PyEval_EvalFrameDefault+0x10ba)[0x563fdb553a7a]
[osherlab:21377] [26] python(PyEval_EvalCodeEx+0x329)[0x563fdb52b459]
[osherlab:21377] [27] python(PyEval_EvalCode+0x1c)[0x563fdb52c1ec]
[osherlab:21377] [28] python(+0x2149a4)[0x563fdb5a69a4]
[osherlab:21377] [29] python(PyRun_FileExFlags+0xa1)[0x563fdb5a6da1]
[osherlab:21377] *** End of error message ***
Rank 0 has data tensor(1., device='cuda:0')
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpirun noticed that process rank 1 with PID 0 on node osherlab exited on signal 11 (Segmentation fault).
Your help will be appreciated. Thank you. |
st83769 | Would this also work for one GPU with two sequential steps somehow? If my model is too large to fit on one GPU, can I somehow do the forward/backward pass sequentially, where I only have one part in GPU memory and somehow cache the other part for the backward pass later?
Somehow like this:
x = submodule1(x)
#somehow unload intermediate results of submodule1 from gpu here and cache for later backward pass
#(and then load on gpu again when needed in backward pass of submodule1)
x = submodule2(x)
I could imagine how this works, but then I don't know how I would pass the gradients that come from submodule2 back to submodule1 and initiate the backward pass on submodule1. |
st83770 | Same problem here. Inter-GPU communication gets a similar error with pytorch+mpi+gpu.
Have you found a way out?
Thx |
st83771 | You may want to checkout the concept of Gradient checkpointing.
Here is an example repo csrhddlam/pytorch-checkpoint 125 |
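For reference, torch.utils.checkpoint implements this idea directly: intermediate activations of the wrapped segment are not stored during the forward pass and are recomputed during the backward pass. A minimal sketch (the submodule1/submodule2 names are taken from the question above; the Linear layers are just stand-ins for the real submodules):

import torch
from torch.utils.checkpoint import checkpoint

submodule1 = torch.nn.Linear(128, 128)   # stand-in for the real first half of the model
submodule2 = torch.nn.Linear(128, 10)    # stand-in for the real second half

x = torch.randn(4, 128, requires_grad=True)
# Activations inside submodule1 are not kept; they are recomputed during backward.
h = checkpoint(submodule1, x)
out = submodule2(h)
out.sum().backward()   # gradients flow back through submodule2 into submodule1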
st83772 | I am using a custom loss function to calculate the loss in each layer. I need to update the weights for that layer only. Is this possible? |
st83773 | Could you explain your approach a bit more?
Generally, if you only pass the parameters of your special layer to the optimizer, only these parameters will be updated.
Would that work for you? |
st83774 | Consider I have 3 layers and I am calculating the loss caused by each layer (3 losses) separately with a custom loss function, and I need to update the weights. |
st83775 | Are you able to call .backward() on your custom loss?
If so, my approach should work. |
st83776 | Thank you first of all. Can you explain how to optimize only a single layer? We usually pass model.parameters() to the optimizer, so what should be passed for a single layer? And yes, .backward() is working. |
st83777 | For a single layer, you could pass only these parameters to the optimizer:
optimizer = torch.optim.SGD(model.layer1.parameters(), lr=1e-3)
If you need to pass some combination of different parameter sets, you can just pass them as a list:
params = list(model.layer1.parameters()) + list(model.layer17.parameters())
torch.optim.SGD(params, lr=1e-3) |
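Putting the pieces of this thread together, here is a minimal sketch of one reading of the question: three layers, one loss and one optimizer per layer, and each layer updated only by its own loss. The model, the per-layer losses, and the detach-between-layers choice are illustrative assumptions, not part of the original answer:

import torch
import torch.nn as nn

# Hypothetical three-layer model; the real layers and losses would replace these.
model = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10), nn.Linear(10, 1))

# One optimizer per layer, each holding only that layer's parameters.
optimizers = [torch.optim.SGD(layer.parameters(), lr=1e-3) for layer in model]

x = torch.randn(8, 10)

h = x
for layer, opt in zip(model, optimizers):
    h = layer(h.detach())        # detach so this loss reaches only this layer
    loss = h.pow(2).mean()       # stand-in for the custom per-layer loss
    opt.zero_grad()
    loss.backward()
    opt.step()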
st83778 | According to the documentation, nn.Parameter will:
they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator
and nn.Module.register_parameter will
Adds a parameter to the module.
I wonder, since nn.Parameter will add the tensor into the parameters automatically, why do we need the register_parameter function? |
st83779 | I guess it is because there are some tasks where you need to add parameters to a given module. Here is some example code. (The code below is just an example, but I did this kind of thing when I implemented a spectral normalization layer.)
import torch
import torch.nn as nn

class Test(nn.Module):
    def __init__(self, module):
        super(Test, self).__init__()
        self.module = module
        self.register_param()

    def register_param(self):
        exist_w = hasattr(self.module, 'w')
        if not exist_w:
            w = nn.Parameter(torch.ones(1))
            self.module.register_parameter('w', w)  # register 'w' to the wrapped module

    def forward(self, x):
        return x

conv = nn.Conv2d(3, 3, kernel_size=3)
conv_w = Test(conv) |
st83780 | nn.Module.register_parameter takes a name plus the tensor (or None), and first checks whether the name is already in the module's dictionary, while nn.Parameter doesn't have such a check. |
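To make the difference concrete, here is a minimal sketch (the class and parameter names are just for illustration): assigning an nn.Parameter as an attribute registers it automatically, while register_parameter lets you register under a name chosen at runtime (and also accepts None as a placeholder).

import torch
import torch.nn as nn

class Example(nn.Module):
    def __init__(self):
        super(Example, self).__init__()
        # Plain attribute assignment: the nn.Parameter is registered automatically.
        self.w = nn.Parameter(torch.ones(1))
        # register_parameter takes an explicit name, so the name can be built at runtime.
        name = 'w_' + str(0)
        self.register_parameter(name, nn.Parameter(torch.zeros(1)))

m = Example()
print([n for n, _ in m.named_parameters()])  # ['w', 'w_0']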
st83781 | I'm using torch.save(model.state_dict(), 'model') to save my model.
My Jupyter notebook shows the error:
model is not UTF-8 encoded, saving disabled.
Does someone know how to solve this? |
st83782 | Hi, thank you for your reply. I think it's not about the model; I just defined a file named "model". |
st83783 | What happens if you do
torch.save(model.state_dict(),'model.pth')
If that still gives an error, can you report what the model.state_dict() contains, and we might be able to spot something weird in there |
st83784 | That happens due to the format of the saved file, which is Zip and cannot be handled by Jupyter. You can unzip it and open it in another app (VSCode) and that would work. |
st83785 | Does anyone have a comprehensive evaluation of the communication performance of these frameworks, especially a comparison between gloo and mpi?
thanks |
st83786 | So I have an input sequence with shape (17, 240, 512), where the dimensions represent sequence_index, sequence_length and num_features respectively. I don't want to treat these 17 sequences as one whole sequence and squeeze it into an LSTM by merging the first and the third dimensions, so I have to write some very repulsive code:
self.LSTMs = nn.ModuleList([nn.LSTM(512, 512) for _ in range(17)])

out_tensor = torch.zeros(x.shape)
for i in range(x.shape[0]):
    out, _ = self.LSTMs[i](x[i].unsqueeze(1))  # run sequence i (240, 1, 512) through its own LSTM
    out_tensor[i] = out.squeeze(1)
However, since the LSTMs are running sequentially, I personally think that the different LSTMs in this case could be run in parallel for a quite significant speedup, so is there some kind of layer (a bagging layer or something similar) or custom code that can help me achieve this? |
st83787 | I was going through the chatbot tutorial and noticed the comment and name of the functions didn’t quite match:
# Unpack padding
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
# Sum bidirectional GRU outputs
is that a typo/bug?
code:
class EncoderRNN(nn.Module):
    def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.embedding = embedding
        # Initialize GRU; the input_size and hidden_size params are both set to 'hidden_size'
        # because our input size is a word embedding with number of features == hidden_size
        self.gru = nn.GRU(hidden_size, hidden_size, n_layers,
                          dropout=(0 if n_layers == 1 else dropout), bidirectional=True)

    def forward(self, input_seq, input_lengths, hidden=None):
        # Convert word indexes to embeddings
        embedded = self.embedding(input_seq)
        # Pack padded batch of sequences for RNN module
        packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
        # Forward pass through GRU
        outputs, hidden = self.gru(packed, hidden)
        # Unpack padding
        outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
        # Sum bidirectional GRU outputs
        outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
        # Return output and final hidden state
        return outputs, hidden |
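Regarding the comment itself: nn.utils.rnn.pad_packed_sequence is the inverse of pack_padded_sequence, i.e. it converts the packed output back into a zero-padded tensor, which is what "Unpack padding" refers to. A minimal sketch (the tensor shapes are made up for illustration):

import torch
import torch.nn as nn

seqs = torch.randn(4, 2, 8)                 # (max_len, batch, features), already zero-padded
lengths = torch.tensor([4, 2])              # true lengths, sorted in decreasing order
packed = nn.utils.rnn.pack_padded_sequence(seqs, lengths)
padded, out_lengths = nn.utils.rnn.pad_packed_sequence(packed)
print(padded.shape, out_lengths)            # torch.Size([4, 2, 8]) tensor([4, 2])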
st83788 | I am trying to export my LSTM Anomaly-Detection Pytorch model to ONNX, but I'm experiencing errors. Please take a look at my code below.
Note: My data is shaped as [2685, 5, 6].
Here is where I define my model:
class Model(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim):
        super(Model, self).__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
        self.fc1 = torch.nn.Linear(hidden_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
        out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
        out = self.fc1(out)
        out = self.fc2(out)
        return out

input_dim = 6
hidden_dim = 3
layer_dim = 2
model = Model(input_dim, hidden_dim, layer_dim)
I can train it and test with it fine. But the problem comes when exporting:
model.eval()

import torch.onnx

torch_out = torch.onnx.export(model,
                              torch.randn(2685, 5, 6),
                              "onnx_model.onnx",
                              export_params=True)
But I have the following error:
LSTM(6, 3, num_layers=2, batch_first=True)
Linear(in_features=3, out_features=3, bias=True)
Linear(in_features=3, out_features=6, bias=True)
['input_1', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear', 'l_lstm_LSTM', 'l_fc1_Linear', 'l_fc2_Linear']
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/symbolic.py:173: UserWarning: ONNX export failed on RNN/GRU/LSTM because batch_first not supported
warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported")
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-264-28c6c55537ab> in <module>()
10 torch.randn(2685, 5, 6),
11 "onnx_model.onnx",
---> 12 export_params = True
13 )
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/__init__.py in export(*args, **kwargs)
23 def export(*args, **kwargs):
24 from torch.onnx import utils
---> 25 return utils.export(*args, **kwargs)
26
27
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
129 operator_export_type=operator_export_type, opset_version=opset_version,
130 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
--> 131 strip_doc_string=strip_doc_string)
132
133
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate, opset_version, _retain_param_name, do_constant_folding, strip_doc_string)
367 if export_params:
368 proto, export_map = graph._export_onnx(params_dict, opset_version, defer_weight_export, operator_export_type,
--> 369 strip_doc_string)
370 else:
371 proto, export_map = graph._export_onnx({}, opset_version, False, operator_export_type, strip_doc_string)
RuntimeError: ONNX export failed: Couldn't export operator aten::lstm
Defined at:
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(522): forward_impl
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(539): forward_tensor
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/rnn.py(559): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(481): _slow_forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(491): __call__
<ipython-input-255-468cef410a2c>(14): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(481): _slow_forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(491): __call__
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(294): forward
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(493): __call__
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(231): get_trace_graph
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(225): _trace_and_get_graph_from_model
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(266): _model_to_graph
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(363): _export
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/utils.py(131): export
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/onnx/__init__.py(25): export
<ipython-input-264-28c6c55537ab>(12): <module>
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2903): run_ast_nodes
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/events.py(145): _run
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/base_events.py(1432): _run_once
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/asyncio/base_events.py(422): run_forever
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/ipykernel/__main__.py(3): <module>
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/runpy.py(85): _run_code
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/runpy.py(193): _run_module_as_main
Graph we tried to export:
graph(%input.1 : Float(2685, 5, 6),
%lstm.weight_ih_l0 : Float(12, 6),
%lstm.weight_hh_l0 : Float(12, 3),
%lstm.bias_ih_l0 : Float(12),
%lstm.bias_hh_l0 : Float(12),
%lstm.weight_ih_l1 : Float(12, 3),
%lstm.weight_hh_l1 : Float(12, 3),
%lstm.bias_ih_l1 : Float(12),
%lstm.bias_hh_l1 : Float(12),
%fc1.weight : Float(3, 3),
%fc1.bias : Float(3),
%fc2.weight : Float(6, 3),
%fc2.bias : Float(6)):
%13 : Long() = onnx::Constant[value={0}](), scope: Model
%14 : Tensor = onnx::Shape(%input.1), scope: Model
%15 : Long() = onnx::Gather[axis=0](%14, %13), scope: Model
%16 : Long() = onnx::Constant[value={2}](), scope: Model
%17 : Long() = onnx::Constant[value={3}](), scope: Model
%18 : Tensor = onnx::Unsqueeze[axes=[0]](%16)
%19 : Tensor = onnx::Unsqueeze[axes=[0]](%15)
%20 : Tensor = onnx::Unsqueeze[axes=[0]](%17)
%21 : Tensor = onnx::Concat[axis=0](%18, %19, %20)
%22 : Float(2, 2685, 3) = onnx::ConstantOfShape[value={0}](%21), scope: Model
%23 : Long() = onnx::Constant[value={0}](), scope: Model
%24 : Tensor = onnx::Shape(%input.1), scope: Model
%25 : Long() = onnx::Gather[axis=0](%24, %23), scope: Model
%26 : Long() = onnx::Constant[value={2}](), scope: Model
%27 : Long() = onnx::Constant[value={3}](), scope: Model
%28 : Tensor = onnx::Unsqueeze[axes=[0]](%26)
%29 : Tensor = onnx::Unsqueeze[axes=[0]](%25)
%30 : Tensor = onnx::Unsqueeze[axes=[0]](%27)
%31 : Tensor = onnx::Concat[axis=0](%28, %29, %30)
%32 : Float(2, 2685, 3) = onnx::ConstantOfShape[value={0}](%31), scope: Model
%33 : Long() = onnx::Constant[value={1}](), scope: Model/LSTM[lstm]
%34 : Long() = onnx::Constant[value={2}](), scope: Model/LSTM[lstm]
%35 : Double() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%36 : Long() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%37 : Long() = onnx::Constant[value={0}](), scope: Model/LSTM[lstm]
%38 : Long() = onnx::Constant[value={1}](), scope: Model/LSTM[lstm]
%input.2 : Float(2685!, 5!, 3), %40 : Float(2, 2685, 3), %41 : Float(2, 2685, 3) = aten::lstm(%input.1, %22, %32, %lstm.weight_ih_l0, %lstm.weight_hh_l0, %lstm.bias_ih_l0, %lstm.bias_hh_l0, %lstm.weight_ih_l1, %lstm.weight_hh_l1, %lstm.bias_ih_l1, %lstm.bias_hh_l1, %33, %34, %35, %36, %37, %38), scope: Model/LSTM[lstm]
%42 : Float(3!, 3!) = onnx::Transpose[perm=[1, 0]](%fc1.weight), scope: Model/Linear[fc1]
%43 : Float(2685, 5, 3) = onnx::MatMul(%input.2, %42), scope: Model/Linear[fc1]
%44 : Float(2685, 5, 3) = onnx::Add(%43, %fc1.bias), scope: Model/Linear[fc1]
%45 : Float(3!, 6!) = onnx::Transpose[perm=[1, 0]](%fc2.weight), scope: Model/Linear[fc2]
%46 : Float(2685, 5, 6) = onnx::MatMul(%44, %45), scope: Model/Linear[fc2]
%47 : Float(2685, 5, 6) = onnx::Add(%46, %fc2.bias), scope: Model/Linear[fc2]
return (%47)
What does this mean? What should I do to export properly? |
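Not a definitive fix, but the UserWarning in the log ("ONNX export failed on RNN/GRU/LSTM because batch_first not supported") suggests one thing to try: build the exported module without batch_first=True and permute the input instead. A sketch of that idea, under the assumption that batch_first is indeed what trips the exporter here (ExportModel is a hypothetical variant of the model above, not the original code):

import torch
import torch.nn as nn

class ExportModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim):
        super(ExportModel, self).__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        # batch_first=False (the default), which is what the warning says the exporter expects
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim)
        self.fc1 = nn.Linear(hidden_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        x = x.permute(1, 0, 2)                     # (batch, seq, feat) -> (seq, batch, feat)
        h0 = torch.zeros(self.layer_dim, x.size(1), self.hidden_dim)
        c0 = torch.zeros(self.layer_dim, x.size(1), self.hidden_dim)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc2(self.fc1(out))
        return out.permute(1, 0, 2)                # back to (batch, seq, feat)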
st83789 | I have two issues that I believe are related.
First One:
My dataloader is specified to work with numerous workers, but once I created a custom random sampler class, the loading to the GPU became single-threaded, and when I watch nvidia-smi -l no additional workers are ever spawned to load to the GPU. When loading tensors sequentially by idx, the multiple workers are spawned and perform well. I am using a connection pool to provide database connections to the multiple workers.
How do I use a custom random sampler with numerous workers in my dataloader?
Second One:
My dataloader works for a single batch, but when I place the loader inside an epoch loop I get a CUDA error: initialization error from my call of label_stack = torch.stack(label_list).to('cuda') in the __getitem__ method.
Since __getitem__ plays a role in both of these issues, I figured it is an issue with my __getitem__ method, which I posted below.
Is the issue because I return an entire batch of tensors from my __getitem__ method?
How do I avoid this runtime error to allow for multiple epochs of training?
def __getitem__(self, idx):
    query = """SELECT ls.taxonomic_id, it.tensor
               FROM genomics.tensors2 AS it
               INNER JOIN genomics.labeled_sequences AS ls
               ON ls.accession_number = it.accession_number
               WHERE (%s) <= it.index
               AND CARDINALITY(tensor) = 89
               LIMIT (%s) OFFSET (%s)"""

    shuffle_query = """
               SELECT ls.taxonomic_id, it.tensor
               FROM genomics.tensors2 AS it
               INNER JOIN genomics.labeled_sequences AS ls
               ON ls.accession_number = it.accession_number
               WHERE (%s) <= it.index
               AND CARDINALITY(tensor) = 89
               LIMIT (%s)
               """

    batch_size = 500

    query_data = (idx, batch_size, batch_size)
    shuffle_query_data = (idx, batch_size)

    result = None
    results = None

    conn = self.conn_pool.getconn()
    try:
        conn.set_session(readonly=True, autocommit=True)
        cursor = conn.cursor()
        cursor.execute(shuffle_query, shuffle_query_data)
        results = cursor.fetchall()
        self.conn_pool.putconn(conn)
        print(idx)
    except Error as conn_pool_error:
        print('Multithreaded __getitem__ query error')
        print(conn_pool_error)

    label_list = []
    sequence_list = []

    for (i, result) in enumerate(results):
        if result is not None:
            result = self.create_batch_stack_element(result)
        if result is not None:
            label_list.append(result[0])
            sequence_list.append(result[1])

    label_stack = torch.stack(label_list).to('cuda')
    sequence_stack = torch.stack(sequence_list).to('cuda')

    #print('label_stack.size')
    #print(label_stack.size())
    #print('sequence_stack.size')
    #print(sequence_stack.size())

    return (label_stack, sequence_stack)
E: I think I may have realized the error of my ways and am going to attempt to write a custom collate_fn to create the batches instead of performing this in the __getitem__ method.
E2: The problem for the second issue was the .to('cuda') statement. I moved it out of the function and into the training loop on the labels and sequences, and I got a single thread loading to the GPU with 2 epochs and correct batch sizes.
E3: The first issue was solved by using torch.multiprocessing to call multiple training processes, each of which uses a single dataloader, instead of performing the multiprocessing at the dataloader level. |
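Following the E2 note above, a minimal sketch of that pattern: the dataset returns CPU tensors and the transfer to the GPU happens in the training loop. The dataset and loop here are hypothetical stand-ins, not the poster's actual code:

import torch
from torch.utils.data import Dataset, DataLoader

class CPUTensorDataset(Dataset):
    """Hypothetical dataset that keeps everything on the CPU inside __getitem__."""
    def __init__(self, n=1000):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        label = torch.tensor(idx % 2)
        sequence = torch.randn(89)
        return label, sequence            # CPU tensors only; no .to('cuda') here

loader = DataLoader(CPUTensorDataset(), batch_size=500, num_workers=4)

for epoch in range(2):
    for labels, sequences in loader:
        labels = labels.to('cuda')        # move to the GPU in the training loop instead
        sequences = sequences.to('cuda')
        # ... forward / backward / step ...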
st83790 | In my setup.py, I have both torch and torchvision specified for install_requires:
install_requires=['torch', 'torchvision'],
I set up a test package on test.pypi.org, and when I try to download my project I get this error:
ERROR: Could not find a version that satisfies the requirement torch (from project-name) (from versions: none)
ERROR: No matching distribution found for torch (from project-name)
If I install torch ahead of time using sudo pip3 install torch or sudo pip install torch, I don’t get the error message and torchvision is installed automatically. |
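One hedged guess, since the package was uploaded to test.pypi.org: by default pip only searches the index you point it at, and torch is most likely not hosted on the test index (the "from versions: none" message is consistent with that), so the dependency cannot be resolved there. The standard pip flag for letting it fall back to the real PyPI for dependencies is --extra-index-url, for example (project-name is the placeholder from the error message above):

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ project-name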
st83791 | I am reading this snippet of code and I don't quite get the backward static function. Specifically, what is grad_output and why do we copy grad_output? I can guess that grad_input is the value that's stored in the .grad field of variables that are set with requires_grad=True, but how is it related to grad_output?
class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input |
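For intuition (this is general autograd behaviour, not something specific to this snippet): grad_output is the gradient of the final loss with respect to this function's output, handed in by whatever operations come after it, and the returned grad_input is the gradient with respect to its input. The clone is there so the incoming gradient is not modified in place. A small usage sketch of the MyReLU class above:

import torch

x = torch.randn(5, requires_grad=True)
y = MyReLU.apply(x)      # forward: clamp(min=0), input saved for backward
loss = y.sum()
loss.backward()          # backward receives grad_output = d(loss)/dy = ones_like(y)
print(x.grad)            # 1 where x >= 0, 0 where x < 0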
st83792 | Hi,
My sine approximation works fine during training, while the prediction (which is fed by previous predictions, with a head start of 20 original values) does not approximate.
I guess there is a bug/misunderstanding in my prediction code. A hint would be great.
gist.github.com
https://gist.github.com/saschalippert/36b72313afff86e00f3e10254fb4ff25
simplesine.py
Thanks |
st83793 | Further testing showed that feeding the network with the predicted values (which have a slight deviation from the target values) causes the future values to spiral out of control. Using a more complex model and training for more episodes seems to solve the issue. |
st83794 | Hi,
I have an old GPU (GTX 660 with compute capability 3.0) and am trying to build pytorch v1.0.1 from source, since v1.1 no longer supports cards with compute capability lower than 3.5.
I installed the nvidia-driver-418, cuda 10.1 and cudnn 7.6.2. After the installation, the samples of the cuda installation as well as the samples of the cudnn installation worked flawlessly.
I followed the instructions for building pytorch from source, checked out v1.0.1 and installed conda and all its dependencies. To compile pytorch I ran the following command:
TORCH_CUDA_ARCH_LIST="3.0" python setup.py install
Then the building process starts and apparently cuda and cudnn are recognized. However, after some time I get the following error:
error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
I attached the full output of the build process at the bottom. I already googled this error but I didn’t find any solution.
Has anyone had this issue and can help me?
Output of the build process:
Building wheel torch-1.0.0a0+8322165
running install
setup.py::run()
running build_deps
setup.py::build_deps::run()
+ SYNC_COMMAND=cp
++ command -v rsync
+ '[' -x /usr/bin/rsync ']'
+ SYNC_COMMAND='rsync -lptgoD'
+ CMAKE_COMMAND=cmake
++ command -v cmake3
+ [[ -x '' ]]
+ USE_CUDA=0
+ USE_FBGEMM=0
+ USE_ROCM=0
+ USE_NNPACK=0
+ USE_MKLDNN=0
+ USE_QNNPACK=0
+ USE_GLOO_IBVERBS=0
+ CAFFE2_STATIC_LINK_CUDA=0
+ RERUN_CMAKE=1
+ [[ 5 -gt 0 ]]
+ case "$1" in
+ USE_CUDA=1
+ shift
+ [[ 4 -gt 0 ]]
+ case "$1" in
+ USE_NNPACK=1
+ shift
+ [[ 3 -gt 0 ]]
+ case "$1" in
+ USE_MKLDNN=1
+ shift
+ [[ 2 -gt 0 ]]
+ case "$1" in
+ USE_QNNPACK=1
+ shift
+ [[ 1 -gt 0 ]]
+ case "$1" in
+ break
+ CMAKE_INSTALL='make install'
+ BUILD_SHARED_LIBS=ON
+ USER_CFLAGS=
+ USER_LDFLAGS=
+ [[ -n '' ]]
+ [[ -n '' ]]
+ [[ -n '' ]]
++ uname
+ '[' Linux == Darwin ']'
+++ dirname ../tools/build_pytorch_libs.sh
++ cd ../tools/..
+++ pwd
++ printf '%q\n' /home/dominik/Desktop/pytorch
+ BASE_DIR=/home/dominik/Desktop/pytorch
+ TORCH_LIB_DIR=/home/dominik/Desktop/pytorch/torch/lib
+ INSTALL_DIR=/home/dominik/Desktop/pytorch/torch/lib/tmp_install
+ THIRD_PARTY_DIR=/home/dominik/Desktop/pytorch/third_party
+ C_FLAGS=
+ C_FLAGS=' -DOMPI_SKIP_MPICXX=1'
+ LDFLAGS=
+ LD_POSTFIX=.so
++ uname
+ [[ Linux == \D\a\r\w\i\n ]]
+ [[ 0 -eq 1 ]]
+ LDFLAGS=' -Wl,-rpath,$ORIGIN'
+ CPP_FLAGS=' -std=c++11 '
+ THD_FLAGS=
+ [[ 0 -eq 1 ]]
+ CUDA_NVCC_FLAGS=' -DOMPI_SKIP_MPICXX=1'
+ [[ -z '' ]]
+ CUDA_DEVICE_DEBUG=0
+ '[' -z '' ']'
++ getconf _NPROCESSORS_ONLN
+ MAX_JOBS=4
+ BUILD_TYPE=Release
+ [[ -n '' ]]
+ [[ -n '' ]]
+ echo 'Building in Release mode'
Building in Release mode
+ mkdir -p /home/dominik/Desktop/pytorch/torch/lib/tmp_install
+ for arg in "$@"
+ [[ caffe2 == \c\a\f\f\e\2 ]]
+ build_caffe2
+ [[ -z '' ]]
+ EXTRA_CAFFE2_CMAKE_FLAGS=()
+ [[ -n '' ]]
+ [[ -n /home/dominik/anaconda3;/home/dominik/anaconda3/lib/python3.7/site-packages ]]
+ EXTRA_CAFFE2_CMAKE_FLAGS+=("-DCMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH")
+ [[ 1 -eq 1 ]]
+ cmake /home/dominik/Desktop/pytorch -DPYTHON_EXECUTABLE=/home/dominik/anaconda3/bin/python -DPYTHON_LIBRARY=/home/dominik/anaconda3/lib/libpython3.7m.so.1.0 -DPYTHON_INCLUDE_DIR=/home/dominik/anaconda3/include/python3.7m -DBUILDING_WITH_TORCH_LIBS=ON -DTORCH_BUILD_VERSION=1.0.0a0+8322165 -DCMAKE_BUILD_TYPE=Release -DBUILD_TORCH=ON -DBUILD_PYTHON=ON -DBUILD_SHARED_LIBS=ON -DBUILD_BINARY=OFF -DBUILD_TEST=ON -DINSTALL_TEST=ON -DBUILD_CAFFE2_OPS=ON -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=1 -DUSE_DISTRIBUTED=ON -DUSE_FBGEMM=0 -DUSE_NUMPY= -DNUMPY_INCLUDE_DIR=/home/dominik/anaconda3/lib/python3.7/site-packages/numpy/core/include -DUSE_SYSTEM_NCCL=OFF -DNCCL_INCLUDE_DIR= -DNCCL_ROOT_DIR= -DNCCL_SYSTEM_LIB= -DCAFFE2_STATIC_LINK_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 -DUSE_LEVELDB=OFF -DUSE_LMDB=OFF -DUSE_OPENCV=OFF -DUSE_QNNPACK=1 -DUSE_FFMPEG=OFF -DUSE_GLOG=OFF -DUSE_GFLAGS=OFF -DUSE_SYSTEM_EIGEN_INSTALL=OFF -DCUDNN_INCLUDE_DIR=/usr/include/ -DCUDNN_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCUDNN_LIBRARY=/usr/lib/x86_64-linux-gnu/libcudnn.so.7 -DUSE_MKLDNN=1 -DNCCL_EXTERNAL=1 -DCMAKE_INSTALL_PREFIX=/home/dominik/Desktop/pytorch/torch/lib/tmp_install -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= '-DCMAKE_EXE_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' '-DCMAKE_SHARED_LINKER_FLAGS= -Wl,-rpath,$ORIGIN ' -DTHD_SO_VERSION=1 '-DCMAKE_PREFIX_PATH=/home/dominik/anaconda3;/home/dominik/anaconda3/lib/python3.7/site-packages'
-- std::exception_ptr is supported.
-- NUMA is disabled
-- Turning off deprecation warning due to glog.
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/dominik/Desktop/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- The BLAS backend of choice:MKL
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: /home/dominik/anaconda3/lib/libmkl_intel_lp64.so
-- Library mkl_gnu_thread: /home/dominik/anaconda3/lib/libmkl_gnu_thread.so
-- Library mkl_core: /home/dominik/anaconda3/lib/libmkl_core.so
-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- Library gomp: -fopenmp
-- Library pthread: /usr/lib/x86_64-linux-gnu/libpthread.so
-- Library m: /usr/lib/x86_64-linux-gnu/libm.so
-- Library dl: /usr/lib/x86_64-linux-gnu/libdl.so
-- Brace yourself, we are building NNPACK
-- Found PythonInterp: /home/dominik/anaconda3/bin/python (found version "3.7.3")
-- Failed to find LLVM FileCheck
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Using third party subdirectory Eigen.
Python 3.7.3
-- Found PythonInterp: /home/dominik/anaconda3/bin/python (found suitable version "3.7.3", minimum required is "2.7")
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- Found CUDA: /usr/local/cuda (found suitable version "10.1", minimum required is "7.0")
-- Caffe2: CUDA detected: 10.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.1
-- Found cuDNN: v7.6.2 (include: /usr/include/, library: /usr/lib/x86_64-linux-gnu/libcudnn.so.7)
CMake Warning at cmake/public/utils.cmake:169 (message):
In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
to cmake instead of implicitly setting it as an env variable. This will
become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
cmake/public/cuda.cmake:349 (torch_cuda_get_nvcc_gencode_flag)
cmake/Dependencies.cmake:656 (include)
CMakeLists.txt:201 (include)
-- Added CUDA NVCC flags for: -gencode;arch=compute_30,code=sm_30
CMake Warning at cmake/public/utils.cmake:169 (message):
In the future we will require one to explicitly pass TORCH_CUDA_ARCH_LIST
to cmake instead of implicitly setting it as an env variable. This will
become a FATAL_ERROR in future version of pytorch.
Call Stack (most recent call first):
cmake/External/nccl.cmake:24 (torch_cuda_get_nvcc_gencode_flag)
cmake/Dependencies.cmake:783 (include)
CMakeLists.txt:201 (include)
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
-- CUDA detected: 10.1
--
-- ******** Summary ********
-- CMake version : 3.14.0
-- CMake command : /home/dominik/anaconda3/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.4.0
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL
-- CMAKE_PREFIX_PATH : /home/dominik/anaconda3;/home/dominik/anaconda3/lib/python3.7/site-packages
-- CMAKE_INSTALL_PREFIX : /home/dominik/Desktop/pytorch/torch/lib/tmp_install
-- CMAKE_MODULE_PATH : /home/dominik/Desktop/pytorch/cmake/Modules;/home/dominik/Desktop/pytorch/cmake/public/../Modules_CUDA_fix
--
-- ONNX version : 1.3.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Removing -DNDEBUG from compile flags
-- Compiling with OpenMP support
-- Compiling with MAGMA support
-- MAGMA INCLUDE DIRECTORIES: /home/dominik/anaconda3/include
-- MAGMA LIBRARIES: /home/dominik/anaconda3/lib/libmagma.a
-- MAGMA V2 check: 1
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- AVX compiler support found
-- AVX2 compiler support found
-- Atomics: using C11 intrinsics
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API. (mkl)
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- OpenMP lib: provided by compiler
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- VTune profiling environment is unset
-- Found MKL-DNN: TRUE
-- GCC 7.4.0: Adding gcc and gcc_s libs to link line
-- Using python found in /home/dominik/anaconda3/bin/python
-- Configuring build for SLEEF-v3.2
Target system: Linux-4.18.0-25-generic
Target processor: x86_64
Host system: Linux-4.18.0-25-generic
Host processor: x86_64
Detected C compiler: GNU @ /usr/bin/cc
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : /home/dominik/anaconda3/lib/libmpfr.so
-- MPFR header file in /home/dominik/anaconda3/include
-- GMP : /home/dominik/anaconda3/lib/libgmp.so
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP : 1
-- Using python found in /home/dominik/anaconda3/bin/python
-- /usr/bin/c++ /home/dominik/Desktop/pytorch/torch/abi-check.cpp -o /home/dominik/Desktop/pytorch/build/abi-check
-- Determined _GLIBCXX_USE_CXX11_ABI=1
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
-- Found CUDA: /usr/local/cuda (found suitable version "10.1", minimum required is "7.5")
-- Building the gloo backend with TCP support only
-- Found CUDA: /usr/local/cuda (found version "10.1")
-- Building C10D with CUDA support
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
-- Not able to find MPI, will compile c10d without MPI support
-- Include NCCL operators
-- Including IDEEP operators
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
-- Using lib/python3.7/site-packages as python relative installation path
-- Automatically generating missing __init__.py files.
-- A previous caffe2 cmake run already created the __init__.py files.
CMake Warning at CMakeLists.txt:389 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.14.0
-- CMake command : /home/dominik/anaconda3/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.4.0
-- BLAS : MKL
-- CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -Wno-stringop-overflow
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;MAGMA_V2;USE_C11_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /home/dominik/anaconda3;/home/dominik/anaconda3/lib/python3.7/site-packages
-- CMAKE_INSTALL_PREFIX : /home/dominik/Desktop/pytorch/torch/lib/tmp_install
--
-- TORCH_VERSION : 1.0.0
-- CAFFE2_VERSION : 1.0.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 3.7.3
-- Python executable : /home/dominik/anaconda3/bin/python
-- Pythonlibs version : 3.7.3
-- Python library : /home/dominik/anaconda3/lib/libpython3.7m.so.1.0
-- Python includes : /home/dominik/anaconda3/include/python3.7m
-- Python site-packages: lib/python3.7/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link : 0
-- USE_CUDNN : ON
-- CUDA version : 10.1
-- cuDNN version : 7.6.2
-- CUDA root directory : /usr/local/cuda
-- CUDA library : /usr/lib/x86_64-linux-gnu/libcuda.so
-- cudart library : /usr/local/cuda/lib64/libcudart.so
-- cublas library : /usr/lib/x86_64-linux-gnu/libcublas.so
-- cufft library : /usr/local/cuda/lib64/libcufft.so
-- curand library : /usr/local/cuda/lib64/libcurand.so
-- cuDNN library : /usr/lib/x86_64-linux-gnu/libcudnn.so.7
-- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda/include
-- NVCC executable : /usr/local/cuda/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : 0
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : 0
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL : ON
-- USE_MKLDNN : ON
-- USE_MOBILE_OPENGL : OFF
-- USE_NCCL : ON
-- USE_SYSTEM_NCCL : OFF
-- USE_NNPACK : 1
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : 1
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn
-- Private Dependencies : qnnpack;nnpack;cpuinfo;fp16;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dominik/Desktop/pytorch/build
+ make install -j4
[ 1%] Built target js_embed
[ 1%] Built target nccl_external
[ 2%] Built target libprotobuf-lite
[ 2%] Built target clog
[ 2%] Built target pthreadpool
[ 2%] Building CXX object third_party/googletest/googletest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
[ 3%] Built target benchmark
[ 4%] Built target gloo
[ 8%] Built target libprotobuf
[ 8%] Built target onnxifi_loader
[ 8%] Built target onnxifi_dummy
[ 10%] Built target c10
[ 10%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_rnn.cpp.o
[ 10%] Built target ATEN_CPU_FILES_GEN_TARGET
[ 10%] Built target common
[ 10%] Built target mkrename
[ 10%] Built target mkalias
[ 10%] Built target renameAVX512F.h_generated
[ 11%] Built target mkdisp
[ 11%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/simple_concat.cpp.o
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::pack_weights(int, int, int, int, int, int, int, float**, int, int*, const float*, float*, bool) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)64]’:
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1183:17: required from here
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::free_packed_weights(int, int, int, float**) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)64]’:
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1183:17: required from here
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::pack_weights(int, int, int, int, int, int, int, float**, int, int*, const float*, float*, bool) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)128]’:
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1184:17: required from here
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:891:53: error: ‘float* cblas_sgemm_alloc(CBLAS_IDENTIFIER, int, int, int)’ is deprecated [-Werror=deprecated-declarations]
weights(i, d, p) = cblas_sgemm_alloc(CblasAMatrix, m_p, n, k_p);
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:801:25: note: declared here
MKL_DEPRECATED_C float* cblas_sgemm_alloc(const CBLAS_IDENTIFIER identifier,
^~~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp: In instantiation of ‘void mkldnn::impl::cpu::_ref_rnn_common_t<aprop>::free_packed_weights(int, int, int, float**) [with mkldnn_prop_kind_t aprop = (mkldnn_prop_kind_t)128]’:
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:1184:17: required from here
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
/home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:979:33: error: ‘void cblas_sgemm_free(float*)’ is deprecated [-Werror=deprecated-declarations]
cblas_sgemm_free(weights(i, j, k));
~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
In file included from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/gemm/os_blas.hpp:39:0,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.hpp:30,
from /home/dominik/Desktop/pytorch/third_party/ideep/mkl-dnn/src/cpu/ref_rnn.cpp:39:
/home/dominik/anaconda3/include/mkl_cblas.h:814:23: note: declared here
MKL_DEPRECATED_C void cblas_sgemm_free(float *dest);
^~~~~~~~~~~~~~~~
[ 11%] Built target python_copy_files
[ 12%] Built target headers
[ 12%] Built target renamedsp128.h_generated
[ 12%] Built target renamedsp256.h_generated
[ 13%] Built target dispavx.c_generated
[ 13%] Built target renameSSE2.h_generated
[ 13%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/simple_sum.cpp.o
[ 13%] Built target renameFMA4.h_generated
[ 14%] Built target renameAVX2.h_generated
[ 14%] Built target renameSSE4.h_generated
[ 14%] Built target ATEN_CUDA_FILES_GEN_TARGET
[ 14%] Built target mkrename_gnuabi
[ 14%] Built target mkmasked_gnuabi
[ 14%] Built target arraymap
[ 14%] Built target torch_shm_manager
[ 14%] Built target c10_utils_hip
[ 14%] Built target c10_utils_cpu
[ 14%] Built target c10_utils_gpu
[ 14%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/js/js_generator.cc.o
cc1plus: all warnings being treated as errors
third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/build.make:1232: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_rnn.cpp.o' failed
make[2]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_rnn.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 14%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_field.cc.o
CMakeFiles/Makefile2:1619: recipe for target 'third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/all' failed
make[1]: *** [third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn --use-qnnpack caffe2' |
st83795 | Try this: conda install numpy pyyaml mkl=2019.3 mkl-include setuptools cmake cffi typing and rebuild. |
st83796 | Hi,
Thanks for your answer.
I fixed it by using mkl-dnn version v0.18.1. Now I can compile it from source. |
st83797 | When I run my code with multiple GPUs, it crashes occasionally with the following error:
File "main.py", line 132, in train
model.train(train_loader, val_loader)
File "/mnt/DATA/code/bitbucket/drn_seg/segment/seg_model.py", line 54, in train
self.train_epoch(epoch, train_loader, val_loader)
File "/mnt/DATA/code/bitbucket/drn_seg/segment/seg_model.py", line 93, in train_epoch
output = self.model_seg(image)[0]
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 113, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 118, in replicate
return replicate(module, device_ids)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast.apply(devices, *params)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/_functions.py", line 17, in forward
outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/usr/local/lib/python3.5/dist-packages/torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: NCCL Error 2: unhandled system error |
st83798 | Sorry for the basic question:
If the model is already on the CPU, does model.to('cpu') return a copy of the model or a reference to the existing model?
Same question if the model is already on the GPU and model.to('cuda')? |
st83799 | Solved by SimonW in post #2 |
st83800 | It copies the parameters and buffer only if the requested device/dtype is different. It never copies the module object. |
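A quick way to see the second part of that answer (that the module object itself is never copied) is that nn.Module.to returns the same object. A tiny sketch:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)          # parameters start on the CPU
same = model.to('cpu')           # device unchanged, so the parameters are not copied
print(same is model)             # True: .to modifies the module in place and returns it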
st83801 | I did:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
Stepping: 2
CPU MHz: 2596.995
BogoMIPS: 5193.99
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-5
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt md_clear
Is it because the # of CPUs is 6 that I should set num_workers=6? Is num_workers the number of CPUs? |
st83802 | It usually refers to the amount of threads. That's usually equivalent to 2 * n_cpus.
Anyway you can do the following
import multiprocessing as mp
max_cpus = mp.cpu_count() |
st83803 | JuanFMontesinos:
import multiprocessing as mp max_cpus = mp.cpu_count()
so you set number of workers to 2 times the cpus usually? i.e.
num_workers = max_cpus * 2 |
st83804 | cpu_count returns the amount of threads, which is usually equal to 2 times the amount of CPUs. Anyway, if you set a number bigger than the real one, the effect is the same as setting the max. |
st83805 | What do you mean by bigger than the real one? Are you saying that the code we should always have is:
num_workers = max_cpus
or less? |
st83806 | Hi,
I suggest you follow this thread; it has been active for months.
Guidelines for assigning num_workers to DataLoader
Yeah I’ve since changed tune about convergence. Agree. That’s what I got when batch size was the same, so parallelisation was NOT working properly. Overheads of parallelisation don’t pay off without increasing batch size. Right? |
st83807 | oh wow. Thanks.
Though it seems there isn’t an agreement…and it also seems it depends on the # of gpus! What a nightmare. Is there any heuristic/rough number that always makes things better but doesn’t overload?
I think at this point I don’t care about being optimal, just making it run faster than in the main thread without running the risk of overdoing it. |
st83808 | Yes, this kind of situation never has an exact answer. Actually, when you follow the thread, you can see that everyone has got a different result using the same configuration, and some of them even got errors!
So the best approach is to make sure your model is OK with the default value, which is 0; then, if you have the resources or time, you can play with different configurations to achieve your best. I have the same problem too.
Good luck |
st83809 | Cool! Thanks!
My hunch is that at the very least one can put num_workers = 2 and always get a benefit. As long as there is at least a single GPU and 2 real CPUs.
That is the sort of minimal advice I was hoping to hear. I know this is not for sure but I trust its probably right. |
st83810 | Hello,
I am making a VQA model with adaptive image features (10 to 100) per each image. My batch size is 512 and each image feature has a dimension of 2048. Every image with less than 100 features is padded so that the shape of each minibatch is always (512, 100, 2048).
While calculating these adaptive features, I follow this process:
batch_size = n_objs.size(0)
weight_mold = torch.zeros(batch_size, self.max_objs, 1).to(self.device)
total_objs = int(n_objs.sum().item())
q_mold = torch.zeros(total_objs, self.q_emb).to(self.device)
obj_p = 0
for n, i in enumerate(n_objs):
    n_i = int(i.item())
    q_i = q2[n]
    q_i = q_i.repeat(n_i, 1)
    q_mold[obj_p:n_i + obj_p, :] = q_i
    obj_p += n_i
mask = generate_mask(v2, n_objs, device=self.device)
flattened_objs = torch.masked_select(v2, mask)
total_objs = self.v_proj(flattened_objs.view((-1, self.v_emb)))
q_proj = self.q_proj(q_mold)
#fusion = q_mold * total_objs
fusion = - (v_proj - q_proj)**2 + relu(v_proj + q_proj)
fusion = self.dropout(fusion)
fusion = self.linear(fusion)
obj_p = 0
for n, i in enumerate(n_objs):
    n_i = int(i.item())
    objs_n = fusion[obj_p:n_i + obj_p, :]
    objs_n = softmax(objs_n, 0)
    if n == 0:
        print(objs_n)
    weight_mold[n, :n_i, :] = objs_n
    obj_p += n_i
return weight_mold
To walk through the code real quick, I:
Get the batch size, get the total number of feature objects (total_objs) in the batch
Create an empty matrix for the attention weights size: (512, 100, 1) which will hold the attention scores for each object in each image in the batch
Create an empty matrix to hold the question embedding (dim = 5000) which is size: (total_objs, 5000).
I then use a for loop to repeat the ith question n time for the n objects in the ith image
I then use a mask and a flattening procedure to reshape the image features (512, 100, 2048) to (total_objs, 2048), essentially selecting only the unpadded image features.
Then I project each image feature (total_objs, 2048) and each question (total_objs, 5000) both to the same size (total_objs, 1024) so that I can perform element wise multiplication to fuse them.
I then perform another projection so to get the logits for each fusion item (total_objs, 1)
Then in another for loop, I softmax the n logits for the ith image to compute their final scores and place them in the weight_mold (512, 100, 1).
However, when I train, my accuracy converges to about ~44%, so I know this process isn't working. I have switched my network to use fixed features, which achieves ~63%. This allows me to know my logic above is exactly the problem.
Does anyone have any ideas on how to compute the attention scores for these adaptive features or can correct my logic above? I have been mind boggled by this for too long lol.
Thank you in advance! |
st83811 | Solved by eltoto1219 in post #2
I actually can confirm the code above works. My model just took a bit longer to converge for some reason! |
st83812 | I actually can confirm the code above works. My model just took a bit longer to converge for some reason! |
st83813 | Hi,
I am re-implementing my Tensorflow models with pytorch. The problem comes when I want to test it by loading weights of previously trained Tensorflow models, since I got very different performance. Obviously something goes wrong.
For debugging I start with a single conv layer where I initialise the conv kernel with the same weights and apply it to the same input. Surprisingly, the PyTorch implementation and the Tensorflow implementation give different results.
Here’s the code:
# Tensorflow Python2.7
import numpy as np
import tensorflow as tf
# Different weights for testing
np.save('weg.npy', 0.008*np.random.randint(1,1000,(3,3,3,6))) # Option1
#np.save('weg.npy', np.random.randint(1,1000,(3,3,3,6))) # Option2
#np.save('weg.npy', 10.12*np.random.randint(1,1000,(3,3,3,6))) # Option3
inputs = tf.Variable(1.5*np.ones((1, 10, 10,3), dtype=np.float32))
net = tf.contrib.slim.conv2d(inputs, 6, [3, 3], stride=1,
weights_initializer=tf.constant_initializer(np.load('weg.npy')),
biases_initializer=tf.constant_initializer(0),
activation_fn=None)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
x = sess.run(net)
np.save('tf.npy', x)
print x.shape
Since I am using a Python 3.6 environment for PyTorch, I didn't put the two code snippets together.
# Pytorch Python3.6
import torch.nn as nn
import torch
import numpy as np
from collections import OrderedDict
# Prepare weights
weights = torch.from_numpy(np.load('weg.npy')).permute((3, 2, 0, 1)) # swap to NCHW
biases = torch.from_numpy(np.zeros(6))
weight_dict = OrderedDict()
weight_dict.update({'0.weight':weights})
weight_dict.update({'0.bias':biases})
inputs = torch.from_numpy(1.5*np.ones((1, 3, 10, 10), dtype=np.float32))
net = nn.Sequential(nn.Conv2d(3, 6, kernel_size=3, stride=1, padding=1))
net.load_state_dict(weight_dict)
# Compare results
m_ = net(inputs)
m=np.load('tf.npy')
print(np.linalg.norm(m - m_.permute((0, 2, 3,1)).data.numpy())) # Swap back to NHWC
So I found that the different options to initialise the weights, i.e. 'weg.npy', led to different error results (notice that the difference between the weight options is just the coefficient).
Results:
np.random.randint(1,1000,(3,3,3,6)) => 0.0
0.008*np.random.randint(1,1000,(3,3,3,6)) => 0.00031963884
10.12*np.random.randint(1,1000,(3,3,3,6)) => 0.37016743
Does anyone have any idea why this is happening? I am really confused, hoping someone can help,thanks! |
st83814 | Hello Zhou,
I have the same problems in my work and tried your approach.
In your case, changing the data type to double might help. There is a similar post to yours.
I changed your code a bit like below.
# data type
# data_type = np.float32
data_type = np.float64
# Different weights for testing
# weight = np.random.randint(1, 1000, (3, 3, 3, 6)).astype(data_type)
# weight = 0.008 * np.random.randint(1, 1000, (3, 3, 3, 6)).astype(data_type)
weight = 10.12 * np.random.randint(1, 1000, (3, 3, 3, 6)).astype(data_type)
inputs = tf.Variable(1.5 * np.ones((1, 10, 10, 3), dtype=data_type))
# Tensorflow
tf_model = tf.contrib.slim.conv2d(inputs, 6, [3, 3], stride=1,
weights_initializer=tf.constant_initializer(weight),
biases_initializer=tf.constant_initializer(0),
activation_fn=None)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
tf_output = sess.run(tf_model)
# Pytorch
weights = torch.from_numpy(weight).permute((3, 2, 0, 1))
biases = torch.from_numpy(np.zeros(6).astype(data_type))
inputs = torch.from_numpy(1.5 * np.ones((1, 3, 10, 10), dtype=data_type))
torch_model = nn.Conv2d(3, 6, kernel_size=3, stride=1, padding=1)
torch_model.weight = nn.Parameter(weights)
torch_model.bias = nn.Parameter(biases)
torch_output = torch_model(inputs)
# Compare results
print("diff max: ", (tf_output - torch_output.permute((0, 2, 3, 1)).data.numpy()).max())
print("diff norm: ", np.linalg.norm(tf_output - torch_output.permute((0, 2, 3, 1)).data.numpy()))
Here is the result.
Result in case np.float32
diff max: 0.015625
diff norm: 0.286411
Result in case np.float64
diff max: 2.9103830456733704e-11
diff norm: 7.428606633388099e-10 |
st83815 | I am new to Neural Networks and currently doing a project for university. I am trying to train a CNN using frames that portray me shooting a ball through a basket. And my aim is for the network to be able to classify the result( hit or miss) correctly. When I train the network, the training accuracy increases slowly until it reaches 100%, while the validation accuracy remains around 65% (It is important to mention here that 65% is the percentage of shots that have a Miss label. So the network gives the highest Validation accuracy when it predicts all frames are a miss) Does anyone have experience with a similar problem?
I am using PyTorch and Resnet18 (I have tried other architectures as well, but they all gave the same result). My frames are jpg images of size 224.
As an optimizer, both Adam and SGD gave the same result
Thank you in advance |
st83816 | Hi Wassim,
Sounds like your model is overfitting to the training set. You could try adding regularization or dropout during training to avoid it. You can also read up more about how else to avoid overfitting to the training set online.
Hope this helps,
Best,
Prerna |
st83817 | Hi Wassim,
In addition to what @Prerna_Dhareshwar said, do have a look at your training data to make sure there are no biases or features in the images that would allow the network to “cheat”. One example would be the ratio of hits and misses in your training data, which ideally should be 1 (called a balanced dataset). Another example: if you collected the training data for “hit” during the day, the training data for “miss” during the night, and all validation data during the night, your network could just be predicting day or night depending on the lighting conditions, and get 100% accuracy on your training data.
Hope this helps! |
st83818 | Hi! In addition to the previous answers, I would like to suggest using data augmentation. This will help you to increase your training set and will have a regularization effect. |
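For example, a minimal torchvision augmentation pipeline could look like the following (the specific transforms and normalization statistics are only placeholders to adapt to your images):

from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# validation/test data should only be converted and normalized,
# without random augmentations
val_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])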
st83819 | I am having the same issue. I am doing 3D medical image synthesis and train loss(red) and valid loss(blue) looks as below plot.
The validation loss doesn't drop. I tried standardizing and normalizing and changed the validation sets.
I would ask you to look at my recent discussions for more details (I'm having trouble pasting links from my phone).
Would appreciate any feedbacks. |
st83820 | @Mazhar_Shaikh Thank you for your input. My data is quite unbalanced (around 65% miss and 35% hit). Is the imbalance large enough to cause this error? It seems like, during validation, the model tries to predict the outcome but gets a very low accuracy, so it goes back to predicting all shots to be a miss and gets stuck on 65% accuracy. |
st83821 | Hey!
If I understand it correctly, when training RNNs using mini batch sgd, the elements in one batch should not be sequential. Rather, every index throughout the batches corresponds to one sequence. I can see that this makes sense when one has multiple sequences to train on.
Currently I’m working on a problem where I have only 1 ongoing time series, no individual sequences. Is it common practice to artificially create sequences by splitting the training data? Or should I train with batch size 1 and consider my whole training set as one sequence?
It would be cool if someone could confirm my understanding of the matter and/or maybe give some some tips on best practices for this sort of situation.
Thanks! |
st83822 | mfluegge:
every index throughout the batches corresponds to one sequence
Exactly.
Either approach would work. Training will run quicker with batches made up of subsequences. |
st83823 | I like to make sure that the subsequences in each batch are the continuations of the subsequences in the previous batch. That way I can use the final hidden state of one batch as the initial hidden state of the next batch.
If you do that then you must either
detach the hidden state between batches in order to cut off backpropagation. The fact that the hidden state is retained allows the model to learn something about the influence of previous history, but not as much as full BPTT would allow.
or use the retain_graph=True option when calling .backward(), but in this case each batch will take longer than the previous batch because it will be backpropagating all the way through time to the beginning of the first batch. |
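A minimal sketch of the first option (detaching between batches); the names lstm, loader, criterion and optimizer here are assumed placeholders, and the model is a plain nn.LSTM:

hidden = None  # the LSTM initializes the state to zeros on the first batch

for inputs, targets in loader:
    optimizer.zero_grad()
    output, hidden = lstm(inputs, hidden)
    loss = criterion(output, targets)
    loss.backward()
    optimizer.step()

    # keep the values of the final hidden state for the next batch,
    # but detach them so the next backward() stops here
    hidden = tuple(h.detach() for h in hidden)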
st83824 | @jpeg729 I am trying to train a model the way you are suggesting and I have a couple questions about it.
1- If i call loss.backward() after every batch wouldnt this reset the graph and therefore detach the hidden_state so that it won’t be used in the next batch?
2- At the beginning of each batch i reset the LSTM hidden state and cell state to 0. Is this a valid approach if I want to exclusively remember one batch?
3- If i want to retain the last hidden state from the previous batch and avoid backpropagating through the beginning. Couldn’t I just call loss.backward() at the end of one batch and leave the hidden state untouched?
I would love some clarification regarding this. Thanks in advance! |
st83825 | 1 & 3. The computation graph is created during the forward pass, and used by the backward pass. Normally the backward pass deletes the graph. The hidden state is the result of the calculation of the previous batch, and it remembers this fact even after the computation graph has been deleted, hence the need to explicitly detach the hidden state. You can try not detaching it and simply calling backward() but you will get an error saying the computation graph has already been deleted.
Yes. |
st83826 | 1 & 3. So the hidden state is not part of the graph? If it is, why is it not detached by the .backward() call? And if it's not, why not?
I thought the cell and hidden state were the result of operations applied to weights plus some non-linearity functions inside each LSTM cell; shouldn't this make them part of the graph? I feel like I am massively confused about this.
st83827 | The hidden state is part of the graph.
When you call backward after the first batch the computation graph is freed, but the hidden state still remembers that it was computed, it just no longer knows how. Which is why if you don’t explicitly detach the hidden state before the second batch, then the second call to backward will fail with an error. |
st83828 | I create new Variables for the hidden state, setting it to 0 at the beginning of each batch. Since these are new Variable objects, they are not part of the graph, right? I'm guessing this is why I don't get an error when calling .backward() on batch number 2. Is this a valid approach if I'm treating each batch individually, as explained in case 2? Or will this affect training? |
st83829 | Yep. If you create new variables, then they aren’t part of any graph yet, so you don’t need to detach them. The only downside will be that the model will have little to no understanding of dependencies in the data that are longer than the sequence length used for training. |
st83830 | Maybe that is why im not getting good results. I will try keeping the hidden state and detaching it between batches so the lstm has a notion of the previous sequences. Maybe this will solve my problem. You are a life saver! Thanks a lot!! |
st83831 | If your data is stock market data, then that won't help much. Stock market prices are full of noise with little predictability.
Best of luck. |
st83832 | I am just trying to generate text so your approach will help a lot i am guessing. |
st83833 | Thanks for the discussion and the insightful answers @jpeg729. I will try this tonight when I’m home. |
st83834 | Hey! Sorry, I got one more question.
I’m a little confused about the seq_len dimension in the RNN type modules’ inputs and outputs. I was looking at nn.LSTM and nn.GRU. From their documentation I could assume that they would accept the whole training corpus as input at once (which doesn’t exactly seem sensible), considering they have both a sequence and batch dimension. Am I maybe supposed to split my sequences into smaller subsequences, meaning one training iteration would get a batch of batches of training samples as input? I’m sorry if this is a newbie-question, I have only ever trained vanilla nets and am a little confused about the RNN API at training time.
I think I’ll look at the cell-implementations for now since they don’t seem to have this sequence-dimension requirement. |
st83835 | The cell implementations take one timestep at a time. The LSTM, RNN and GRU all take inputs with several timesteps in one go. I find it helpful to be very clear about the distinction between the batch dimension, whose indices correspond to different input sequences, and the sequence dimension whose indices correspond to different timesteps of each sequence. The term “batch” can be a little ambiguous if you are not careful.
I believe that the GPU implementation of LSTM is somewhat faster than the cell implementation, but I have never tried it myself.
I think most people either use LSTMs for short sequences (e.g. sentences to be classified) and they pass entire sequences to the LSTM in one go, or for long sequences which they cut up into subsequences.
I have found that if I take a really long sequence and divide it up into subsequences which I stack into batches, then I can get through an epoch of training much faster. That said, I didn’t start out by doing that, I started with simpler methods and I went from there. |
st83836 | I think I just have some very fundamental knowledge gaps when it comes to what RNNs inputs have to look like during training time, specifically for this single time series case I’m looking at now. I feel like blog posts or articles about RNNs usually focus on the inputs during prediction time. When they talk about training time its mostly straight to BPTT, which I guess one could infer the training data’s shape from but somehow I haven’t been able to do that. I’ll just keep trying, thanks for your help and sorry that it didn’t amount to more yet lol |
st83837 | Alright, I kinda feel bad asking all these stupid questions, but well, you asked for it
I might have either kind of understood it now or gone fully mad.
Lets say I have one “big” overall sequence of steps 1,2,…,8
If I pick batch size = 4, that means I have created 4 subsequences and 2 batches
— b1 b2
s1 1, 2,
s2 3, 4,
s3 5, 6,
s4 7, 8
bn = batch n, sn = subsequence n
I guess I can now decide how many batches I should use per training iteration; the number of batches I input into my network per training iteration determines how long the training sequence used for an update is. This number of batches is the seq_len dimension in the RNN input. I guess the appropriate term for the sequence used in one training iteration would be “subsubsequence”? So if I for example chose to only use one batch per training iteration I would use a subsequence of each of the 4 subsequences s1-s4.
Assuming what I wrote until now is actually correct and not completely insane I would do updates using subsubsequences. I guess I should reset the hidden states after inputting all the batches once and thereby finishing an epoch. But I guess I should not reset the hidden states after every single update since I only updated based on a subsubsequence and it would make sense that the following subsubsequence would use its subsequence’s previous hidden state. Given that this is sensible I suppose I would have to save the hidden state at the end of every training iteration and use it to initialize the hidden state during the next training iteration.
Is this even close to correct? If not feel free to tell me where I’ve gone wrong. Thanks! |
st83838 | I will try to clarify what I think you are asking as best as I can .
mfluegge:
If I pick batch size = 4, that means I have created 4 subsequences and 2 batches
1 - yes
mfluegge:
This number of batches is the seq_len dimension in the RNN input
2- No. The seq_len parameter is the size of your batch, the number of batches corresponds to the batch dimension in the LSTM input.
mfluegge:
I guess I can now decide how many batches I should use per training iteration; the number of batches I input into my network per training iteration determines how long the training sequence used for an update is
3- Yes. You can decide up to how many batches you want to backpropagate. If you have only 4 batches and you want to backpropagate through all of them until the beginning of your sequence then I’m guessing you can call loss.backward(retain_graph=True) at the end of every batch. I don’t think this is very common since when having lots of data it will be very slow.
mfluegge:
I guess I should reset the hidden states after inputting all the batches once and thereby finishing an epoch
4 -When to reset the hidden state depends on what you want. You should reset your hidden state when you want your RNN to not have any notion at all of what happened earlier. If you think its important to remember all of the data then you reset the hidden state after a full epoch but if one batch is independent from the rest then you can reset it after each .backward(). My questions and @jpeg729 answers on this same thread could clarify or expand on this.
That said, I am a bit of a newbie so I hope someone with more knowledge could confirm or correct my answer. Especially points 3 & 4.
Lastly, never hesitate to ask these kind of questions. This is the very purpose of these forums and furthermore they help newbies like me to understand things better. |
st83839 | mfluegge:
If I pick batch size = 4, that means I have created 4 subsequences and 2 batches
That would be right if you were using the cell implementations, if you want to use the LSTM implementation then it would be better to say that you have created “one batch of 4 subsequences of 2 timesteps each”, batch_size=4, seq_len=2.
Batches and subsequences get mixed up very easily. I don’t think I can reply to @Diego’s answer without confusing thing even more. So first some theory, then an example, then some code.
LSTM expects input of shape (seq_len, batch, input_size).
seq_len = the length of each subsequence.
batch = how many subsequences it will process in parallel.
input_size = how many values are there in each sequence at each timestep.
Now for an example
Suppose I have one big sequence of steps 0,1,2,…99 with f features at each timestep. The big sequence is a tensor of size (100, f).
If I want to train on subsequences of length 5 then I cut my big sequence into subsequences of length 5. In this case there are 20 subsequences. Then if I want to train with a batch size of 10, I take 10 of these subsequences.
In code… given a tensor sequence of shape (timesteps, features)
subsequences = sequence.split(desired_subsequence_length)
# each subsequence is of size (desired_subsequence_length, features)
# if the last subsequence is not of the same length, remove it.
if subsequences[-1].size(0) != desired_subsequence_length:
    subsequences = subsequences[:-1]
# alternatively you could pad it, but I am not sure of the best method

for b in range(0, len(subsequences), desired_batch_size):
    # stack consecutive subsequences along a new batch dimension
    training_batch = torch.stack(subsequences[b:b + desired_batch_size], dim=1)
    # training_batch is of size (desired_subsequence_length, desired_batch_size, features)
    # the last batch might contain fewer subsequences, but that doesn't matter here
    model.reset_hidden()
    output = model(training_batch)
    ...
With the above code, you must reset the hidden state before each batch because the batches don’t follow on from one another properly. If you want an example with batches that follow on properly, I can give you one, but I’ll wait until these concepts are in place. |
st83840 | Hi,
The following code is from the PyTorch master documentation, but I cannot understand it, because I expect that when we have an input with dimension [batch_size, 1, 3, 3], a filter tensor with dimension [1, 1, 2, 2] should exist. As I understand it, F.conv2d would generate a tensor with dimension [batch_size, 1, 1, 1]. May I ask you to explain the functionality of the following code, and how I could see the real convolution, like what I described, with this code?
import torch
import torch.nn.functional as F
filters = torch.randn(8,2,2,2)
inputs = torch.randn(1,2,3,3)
a = F.conv2d(inputs, filters)
print(a.shape)
Thanks |
st83841 | Solved by Nikronic in post #2
Hi,
The only way to have a output such [batch, 1, 1, 1] is to have a filter with same_size as input when padding=0 and stride=1.
The output have to be [batch, 1, 2, 2] for your example input and filters. And the reason is you can have four 2x2 squares in a 3x3 matrix.
step one:
[[x, x, 0],
[x, x… |
st83842 | Hi,
The only way to have a output such [batch, 1, 1, 1] is to have a filter with same_size as input when padding=0 and stride=1.
The output have to be [batch, 1, 2, 2] for your example input and filters. And the reason is you can have four 2x2 squares in a 3x3 matrix.
step one:
[[x, x, 0],
[x, x, 0],
[0 ,0, 0]]
step two:
[[0, y, y],
[0, y, y],
[0 ,0, 0]]
step three:
[[0, 0, 0],
[z, z, 0],
[z ,z, 0]]
step four:
[[0, 0, 0],
[0, s, s],
[0 ,s, s]]
output:
[[x, y],
[z, s]]
So as you can see, we can extract 4 numbers which will be in a 2x2 matrix.
PS: the number of filters determines the number of channels in the output. Each filter produces one channel.
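To see this numerically (a quick check, not part of the original question):

import torch
import torch.nn.functional as F

inputs = torch.randn(1, 2, 3, 3)

# a 2x2 kernel fits at four positions in a 3x3 input -> output is [1, 8, 2, 2]
out = F.conv2d(inputs, torch.randn(8, 2, 2, 2))
print(out.shape)   # torch.Size([1, 8, 2, 2])

# only a kernel as large as the input (padding=0, stride=1) collapses
# the spatial dimensions to 1x1
out = F.conv2d(inputs, torch.randn(1, 2, 3, 3))
print(out.shape)   # torch.Size([1, 1, 1, 1])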
Good luck
Nik |
st83843 | Hey guys, I've been trying to get my PyTorch segmentation model to CoreML, but it looks like I have to convert it to ONNX first, and I can't seem to get it to work with anything I've tried. Is there anyone who's really experienced in converting models? I would love your help.
link to model - https://drive.google.com/file/d/1yOkUCmlm8X8VQuh-hl1v4IeJe39PxnuP/view?usp=sharing |
st83844 | You can try the export module
https://pytorch.org/docs/stable/onnx.html
If you want some extra read
https://michhar.github.io/convert-pytorch-onnx/ |
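For reference, the core export call looks roughly like this (a sketch; the dummy input shape and the output file name are assumptions to adapt to your segmentation model):

import torch

model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # adapt to the input your model expects

torch.onnx.export(model,
                  dummy_input,
                  "segmentation.onnx",
                  input_names=['input'],
                  output_names=['output'])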
st83845 | Hi,
I’m trying to implement a negative log likelihood loss function for a bivariate Gaussian distribution using torch MultivariateNormal.
my current implementation is as follows:
import torch
from torch.distributions.multivariate_normal import MultivariateNormal as MVNormal
def Gaussian2DLikelihood(outputs, targets):
    # mux is mean of x
    # muy is mean of y
    # sx, sy are std > 0
    # corr is correlation -1 < corr < 1
    batch_size = targets.shape[0]
    mux, muy, sx, sy, corr = outputs[:,0], outputs[:,1], outputs[:,2], outputs[:,3], outputs[:,4]
    sx = torch.exp(sx)
    sy = torch.exp(sy)
    corr = torch.tanh(corr)
    mux, muy, sx, sy, corr = mux.unsqueeze(-1), muy.unsqueeze(-1), sx.unsqueeze(-1), sy.unsqueeze(-1), corr.unsqueeze(-1)
    mu_xy = torch.cat([mux, muy], 1)
    print('mu_xy.shape:', mu_xy.shape)  # mu_xy.shape: torch.Size([8, 2])
    cov_xy = torch.zeros(batch_size, 2, 2)  # Covariance matrix
    cov_xy[:,0,0] = (sx*sx).squeeze()
    cov_xy[:,0,1] = (corr*sx*sy).squeeze()
    cov_xy[:,1,0] = (corr*sx*sy).squeeze()
    cov_xy[:,1,1] = (sy*sy).squeeze()
    print('cov_xy.shape:', cov_xy.shape)  # cov_xy.shape: torch.Size([8, 2, 2])
    normal = MVNormal(loc=mu_xy, covariance_matrix=cov_xy)  # build the distribution from the predicted parameters
    loglik = -normal.log_prob(targets[0,:])
    for d in range(1, batch_size):
        loglik += -normal.log_prob(targets[d,:])
    return loglik / batch_size
A test :
Gaussian2DLikelihood(torch.rand(8,5),torch.rand(8,2))
I’m not quite sure it’s correct, I’ve seen some examples where people take the log of loglik then they use the log-sum-exp thing.
Any ideas? |
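As a side note (my own suggestion, not a confirmed fix): MultivariateNormal accepts batched parameters, so the per-sample loop can be replaced by a single batched log_prob call, assuming mu_xy is [batch_size, 2], cov_xy is [batch_size, 2, 2] and targets is [batch_size, 2]:

normal = MVNormal(loc=mu_xy, covariance_matrix=cov_xy)
loglik = -normal.log_prob(targets)   # one negative log-likelihood per sample, shape [batch_size]
loss = loglik.mean()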
st83846 | I'm trying to build PyTorch from source following this instruction. However, I encounter the error below; what am I missing?
In file included from ../caffe2/contrib/aten/aten_op.cc:1:0:
./caffe2/contrib/aten/aten_op.h: In lambda function:
./caffe2/contrib/aten/aten_op.h:9551:33: error: ‘pstrf’ is not a member of ‘at’
auto the_result = at::pstrf(self, upper, tol);
^
./caffe2/contrib/aten/aten_op.h: In lambda function:
./caffe2/contrib/aten/aten_op.h:9561:33: error: ‘pstrf’ is not a member of ‘at’
auto the_result = at::pstrf(self, upper);
^
./caffe2/contrib/aten/aten_op.h: In lambda function:
./caffe2/contrib/aten/aten_op.h:9571:33: error: ‘pstrf’ is not a member of ‘at’
auto the_result = at::pstrf(self);
^ |
st83847 | Solved by ptrblck in post #2
Could you try to sync and update the submodules again, run python setup.py clean and rebuild it again? |
st83848 | Could you try to sync and update the submodules again, run python setup.py clean and rebuild it again? |
st83849 | May I get some help to retrieve the constructor parameters (in_channels & out_channels) in nn.Conv2D from a saved Pytorch-ONNX model.
Background:
Load a pre-trained Alex-Net model from troch-vision, and save it to ONNX model.
Load the saved ONNX model and try to recover it back to a PyTorch model, but I have difficulties retrieving the required information. For example, a conv operator has only 5 attributes (dilations, group, kernel_shape, pads, strides); the required parameters like in_channels and out_channels are missing.
My goal is to recover the model back from Onnx. Is there a way to retrieve enough information to achieve my goal? If no, is there a work-around?
I understand it is extremely similar to the existing issue “Importing Onnx model”, however I still want to raise this up, since from my point of view, the saved onnx model might lack adequate info to re-load by Pytorch itself and other frameworks.
Thanks in advance! |
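One possible workaround (a sketch based on the ONNX graph layout, with a hypothetical file name): the Conv weight is stored as an initializer with shape (out_channels, in_channels / groups, kH, kW), so the constructor arguments can usually be recovered from its dims:

import onnx

model = onnx.load("alexnet.onnx")  # hypothetical path to the exported model
initializers = {init.name: init for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Conv":
        w = initializers[node.input[1]]   # the second input of a Conv node is its weight
        out_channels, in_channels_per_group = w.dims[0], w.dims[1]
        print(node.name, out_channels, in_channels_per_group, list(w.dims[2:]))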
st83850 | I am studying the link below.
https://pytorch.org/tutorials/beginner/nn_tutorial.html
But I can’t understand “log_softmax” written in this document.
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
How does this function match the figure below?
st83851 | Hi Dong Wook!
spnova12:
But I can’t understand “log_softmax” written in this document.
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
How this function match to the figure below?
My guess is that you’re being thrown off by the “log-sum-exp trick”
that is being used to rewrite the “standard” expression for
log_softmax in a (mathematically-equivalent) form that avoids
floating-point overflow / underflow problems when evaluating
the expression numerically.
See the “log-sum-exp trick for log-domain calculations” section of
the LogSumExp Wikipedia article for an explanation. (This article
is not specifically about the log_softmax function, but instead
about the related LogSumExp function. log_softmax has the
same potential overflow problems, and they are avoided using
the same log-sum-exp trick.)
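In case it helps, here is a small numerical check (a sketch, not from the tutorial) that the tutorial's expression, the stabilized log-sum-exp form, and the built-in F.log_softmax all agree:

import torch
import torch.nn.functional as F

x = torch.randn(4, 10)

# form used in the tutorial: x - log(sum(exp(x)))
naive = x - x.exp().sum(-1).log().unsqueeze(-1)

# log-sum-exp trick: subtract the row-wise max first so exp() cannot overflow
m = x.max(-1, keepdim=True)[0]
stable = x - (m + (x - m).exp().sum(-1, keepdim=True).log())

print(torch.allclose(naive, stable))                    # True
print(torch.allclose(naive, F.log_softmax(x, dim=-1)))  # True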
Good luck.
K. Frank |
st83852 | Thank you very much for your explanation. It would be nice if I could find the same trick explained elsewhere, but it is not easy to find.
st83853 | I use Windows 10, Anaconda, Python 3.7, and Spyder. The GPU is a GeForce 1050 Ti.
This is my dataset and dataloader code:
class Dataset(object):
    def __init__(self, fname, img_transform=None, mask_transform=None, edge_weight=False):
        # nothing special here, just internalizing the constructor parameters
        self.fname = fname
        self.edge_weight = edge_weight
        self.img_transform = img_transform
        self.mask_transform = mask_transform
        self.tables = tables.open_file(self.fname)
        self.numpixels = self.tables.root.numpixels[:]
        self.nitems = self.tables.root.img.shape[0]
        self.tables.close()
        self.img = None
        self.mask = None
def __getitem__(self, index):
#opening should be done in __init__ but seems to be
#an issue with multithreading so doing here
with tables.open_file(self.fname,'r') as db:
self.img=db.root.img
self.mask=db.root.mask
#get the requested image and mask from the pytable
img = self.img[index,:,:,:]
mask = self.mask[index,:,:]
#the original Unet paper assignes increased weights to the edges of the annotated objects
#their method is more sophistocated, but this one is faster, we simply dilate the mask and
#highlight all the pixels which were "added"
if(self.edge_weight):
weight = scipy.ndimage.morphology.binary_dilation(mask==1, iterations =2) & ~mask
else: #otherwise the edge weight is all ones and thus has no affect
weight = np.ones(mask.shape,dtype=mask.dtype)
mask = mask[:,:,None].repeat(3,axis=2) #in order to use the transformations given by torchvision
weight = weight[:,:,None].repeat(3,axis=2) #inputs need to be 3D, so here we convert from 1d to 3d by repetition
img_new = img
mask_new = mask
weight_new = weight
seed = random.randrange(sys.maxsize) #get a random seed so that we can reproducibly do the transofrmations
if self.img_transform is not None:
random.seed(seed) # apply this seed to img transforms
img_new = self.img_transform(img)
if self.mask_transform is not None:
random.seed(seed)
mask_new = self.mask_transform(mask)
mask_new = np.asarray(mask_new)[:,:,0].squeeze()
random.seed(seed)
weight_new = self.mask_transform(weight)
weight_new = np.asarray(weight_new)[:,:,0].squeeze()
return img_new, mask_new, weight_new
def __len__(self):
return self.nitems
#note that since we need the transofrmations to be reproducible for both masks and images
#we do the spatial transformations first, and afterwards do any color augmentations
img_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomVerticalFlip(),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color
transforms.RandomResizedCrop(size=patch_size),
transforms.RandomRotation(180),
transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=.5),
transforms.RandomGrayscale(),
transforms.ToTensor()
])
mask_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomVerticalFlip(),
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color
transforms.RandomResizedCrop(size=patch_size,interpolation=PIL.Image.NEAREST),
transforms.RandomRotation(180),
])
dataset={}
dataLoader={}
for phase in phases:  # now for each of the phases, we're creating the dataloader
    # interestingly, given the batch size, I've not seen any improvements from using num_workers > 0
    dataset[phase] = Dataset(f"./{dataname}_{phase}.pytable", img_transform=img_transform, mask_transform=mask_transform, edge_weight=edge_weight)
    dataLoader[phase] = DataLoader(dataset[phase], batch_size=batch_size, shuffle=True, num_workers=2, pin_memory=True)
======================================================
for x, y, w in dataLoader['train']:
    print(x.shape, y.shape, w.shape)
I try this and it causes an error.
ipdb> _CudaDeviceProperties(name=‘GeForce GTX 1050 Ti’, major=6, minor=1, total_memory=4096MB, multi_processor_count=6)
total params: 122466
ipdb> Traceback (most recent call last):
File “”, line 1, in
debugfile(‘C:/Users/mbmhm/Desktop/unet/train_unet.py’, wdir=‘C:/Users/mbmhm/Desktop/unet’)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 856, in debugfile
debugger.run(“runfile(%r, args=%r, wdir=%r)” % (filename, args, wdir))
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py”, line 585, in run
exec(cmd, globals, locals)
File “”, line 1, in
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 827, in runfile
execfile(filename, namespace)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 110, in execfile
exec(compile(f.read(), filename, ‘exec’), namespace)
File “c:/users/mbmhm/desktop/unet/train_unet.py”, line 200, in
for x,y,w in dataLoader[‘train’]:
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py”, line 576, in next
idx, batch = self._get_batch()
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py”, line 543, in _get_batch
success, data = self._try_get_batch()
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py”, line 519, in _try_get_batch
raise RuntimeError(‘DataLoader worker (pid(s) {}) exited unexpectedly’.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 764, 5540) exited unexpectedly
=========================================================
When I try this one, an error occurs too.
writer=SummaryWriter() #open the tensorboard visualiser
best_loss_on_test = np.Infinity
edge_weight=torch.tensor(edge_weight).to(device)
start_time = time.time()
for epoch in range(num_epochs):
    # zero out epoch based performance variables
    all_acc = {key: 0 for key in phases}
    all_loss = {key: torch.zeros(0).to(device) for key in phases}
    cmatrix = {key: np.zeros((2,2)) for key in phases}
    for phase in phases:  # iterate through both training and validation states
        if phase == 'train':
            model.train()  # Set model to training mode
        else:  # when in eval mode, we don't want parameters to be updated
            model.eval()  # Set model to evaluate mode
        for ii, [X, y, y_weight] in enumerate(dataLoader[phase]):  # for each of the batches
            X = X.to(device)  # [Nbatch, 3, H, W]
            y_weight = y_weight.type('torch.FloatTensor').to(device)
            y = y.type('torch.LongTensor').to(device)  # [Nbatch, H, W]
File “”, line 1, in
debugfile(‘C:/Users/mbmhm/Desktop/unet/train_unet.py’, wdir=‘C:/Users/mbmhm/Desktop/unet’)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 856, in debugfile
debugger.run(“runfile(%r, args=%r, wdir=%r)” % (filename, args, wdir))
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\bdb.py”, line 585, in run
exec(cmd, globals, locals)
File “”, line 1, in
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 827, in runfile
execfile(filename, namespace)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 110, in execfile
exec(compile(f.read(), filename, ‘exec’), namespace)
File “c:/users/mbmhm/desktop/unet/train_unet.py”, line 265, in
for ii , [X, y, y_weight] in enumerate(dataLoader[phase]): #for each of the batches
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py”, line 193, in iter
return _DataLoaderIter(self)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\site-packages\torch\utils\data\dataloader.py”, line 469, in init
w.start()
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\process.py”, line 112, in start
self._popen = self._Popen(self)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py”, line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\context.py”, line 322, in _Popen
return Popen(process_obj)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\popen_spawn_win32.py”, line 89, in init
reduction.dump(process_obj, to_child)
File “C:\Users\mbmhm\ansel\Anaconda3\envs\moongpu\lib\multiprocessing\reduction.py”, line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File “stringsource”, line 2, in tables.hdf5extension.Array.reduce_cython
TypeError: self.dims,self.dims_chunk,self.maxdims cannot be converted to a Python object for pickling
========================================
Please help me solve this problem. |
st83854 | I apologize for a long question and if it seems a really strange thing to ask.
If I have an N by N input image and I convolve it with a kernel that consists of (+1,-1) values, should this operation be faster than if the kernel had random numbers (say, from a standard normal distribution)? To me, it seems that during the convolution operation the multiplication of the image and kernel values leads to a simple sign change, but it still counts as regular multiplication and hence does not really affect the operation count and overall complexity. However, my colleague tells me that I'm wrong and we should see an increase in speed, which makes me sort of doubt that I am implementing everything correctly.
So far I have tried using the timing magic in Jupyter notebooks to measure the time per convolution with (+1,-1) kernels and with regular kernels. I saw no difference. My main doubt is: should there be any at all? Is there a way to speed this up for such a specific kernel?
from torch.nn import functional as f
import torch
kernel = torch.randn((255,3,3,3)).sign_().float()
image = torch.randn((1,3,224,224)).float()
%%timeit
f.conv2d(image, kernel, stride=1, padding=1,)
This gives me: 10 loops, best of 3: 78.3 ms per loop
while running without sign_() operation yields 78.9 per loop.
Should there be any significant difference? |
st83855 | I am working on binary classification with time-series data. My approach is to, given a 30-second long signal, split it into n (overlapping) 2-second long signals, feed each one of these n 2-second long signals into the model, get n predictions as output in an array, and then compute the prediction of the whole 30-second long signal by aggregating the n predictions, before using this single prediction in the loss.
My model isn’t learning, and I suspect it is because I have not adequately set .requires_grad=true in the right places during this entire computation.
Here’s my code:
input_size = 30000
window_size = 2000
window_step = 100
for i in range(batch_size):
    signal = torch.reshape(batch_x[i], (1, 1, input_size))
    steps = ((input_size - window_size) / window_step) + 1
    agg_pred = []
    loss = 0
    for j in range(steps):
        signal_segment = signal[:, :, j*window_step:j*window_step + window_size]
        y_pred = model(signal_segment)
        agg_pred.append(int(round(y_pred)))
    signal_pred = torch.tensor(np.argmax(np.bincount(np.array(agg_pred))), dtype=torch.float, device="cuda", requires_grad=True)
    loss += nn.BCELoss(signal_pred, batch_y[i])
    loss_values.append(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Any ideas as to what I could try modifying? |
st83856 | You are detaching the computation graph in a few places:
int(round(y_pred)) will detach y_pred so that the backward call cannot compute the gradients of all preceding operations
numpy operations are not differentiable in PyTorch, so you would have to use the corresponding PyTorch method or write the backward function manually
re-wrapping the output in a torch.tensor(, requires_grad=True) will also detach the input from all preceding operations |
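One possible way to keep the aggregation differentiable (a rough sketch that replaces the majority vote with an average of the window outputs, so it is a different, but trainable, formulation; criterion here stands for an instantiated nn.BCELoss(), and the model output is assumed to be a sigmoid probability):

# inside the loop over the 2-second windows
preds = []
for j in range(int(steps)):
    signal_segment = signal[:, :, j * window_step:j * window_step + window_size]
    preds.append(model(signal_segment))

# averaging the window outputs keeps everything inside the autograd graph,
# so gradients flow back through every window prediction
signal_pred = torch.stack(preds).mean()
loss = loss + criterion(signal_pred.view(1), batch_y[i].view(1))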
st83857 | Issue:
I am getting a segfault error when I train an RNN model using num_workers>0 on the Dataloader. The data is being loaded from a kyoto cabinet file, using a custom collate function.
Question:
What can I do to solve this problem?
Trying to isolate the issue:
If I comment the code related to the loss function/optimization (nn.BCELoss, backward, adam) inside the train batch generator loop, the code works fine for several epochs. If I train with num_workers=0 on the Dataloader, there are no issues either and I can also train for several epochs (very slow though). When I uncomment the loss/optimization code, it usually works for the first half of the mini-batches but then I get the memory error on the second half - always on the first epoch.
Environment:
PyTorch Version: 1.1.0
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
Anaconda version: Anaconda3-2019.03-Linux-x86_64.sh
$ python --version
Python 3.7.3
$ cat /usr/local/cuda/version.txt
CUDA Version 10.1.168
$ cat /etc/os-release
NAME=“CentOS Linux”
VERSION=“7 (Core)”
ID=“centos”
ID_LIKE=“rhel fedora”
VERSION_ID=“7”
PRETTY_NAME=“CentOS Linux 7 (Core)”
ANSI_COLOR=“0;31”
CPE_NAME=“cpe:/o:centos:centos:7”
HOME_URL=“https://www.centos.org/”
BUG_REPORT_URL=“https://bugs.centos.org/”
CENTOS_MANTISBT_PROJECT=“CentOS-7”
CENTOS_MANTISBT_PROJECT_VERSION=“7”
REDHAT_SUPPORT_PRODUCT=“centos”
REDHAT_SUPPORT_PRODUCT_VERSION=“7”
$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
Copyright © 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Error log:
*** Error in `python’: double free or corruption (fasttop): 0x00007f39fc04af40 *** time: 0:12:22
======= Backtrace: =========
/usr/lib64/libc.so.6(+0x81609)[0x7f3b58f82609]
/usr/lib64/libcuda.so.1(+0x2011f2)[0x7f3afb3c71f2]
/usr/lib64/libcuda.so.1(+0x10abc2)[0x7f3afb2d0bc2]
/usr/lib64/libcuda.so.1(+0x10ac39)[0x7f3afb2d0c39]
/usr/lib64/libcuda.so.1(cuStreamCreate+0x5b)[0x7f3afb41635b]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/…/…/…/…/libcudart.so.10.0(+0xffa2)[0x7f3b430ecfa2]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/…/…/…/…/libcudart.so.10.0(cudaStreamCreate+0x64)[0x7f3b43126874]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(_ZN17RNNBackwardFilterIfffE4initEP12cudnnContextP14cudnnRNNStructi11PerfOptions+0x3b0)[0x7f3b16a725c0]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(_Z24RNN_WGRAD_LaunchTemplateIfffE13cudnnStatus_tP12cudnnContextP14cudnnRNNStructiPKP17cudnnTensorStructPKvSA_SA_SA_mPvSA_m11PerfOptions+0x80)[0x7f3b16a76270]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(cudnnRNNBackwardWeights+0xf02)[0x7f3b16a71602]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(ZN2at6native26_cudnn_rnn_backward_weightERKNS_6TensorEN3c108ArrayRefIS1_EElS3_S3_S3_S3_lllbdbbNS5_IlEES3_S3+0xc0b)[0x7f3b142accbb]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(_ZN2at6native19_cudnn_rnn_backwardERKNS_6TensorEN3c108ArrayRefIS1_EElS3_S3_S3_S3_S3_S3_S3_lllbdbbNS5_IlEES3_S3_St5arrayIbLm4EE+0x2f6)[0x7f3b142b3106]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so(_ZNK2at8CUDAType19_cudnn_rnn_backwardERKNS_6TensorEN3c108ArrayRefIS1_EElS3_S3_S3_S3_S3_S3_S3_lllbdbbNS5_IlEES3_S3_St5arrayIbLm4EE+0x178)[0x7f3b1438a7f8]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(_ZNK5torch8autograd12VariableType19_cudnn_rnn_backwardERKN2at6TensorEN3c108ArrayRefIS3_EElS5_S5_S5_S5_S5_S5_S5_lllbdbbNS7_IlEES5_S5_St5arrayIbLm4EE+0x1101)[0x7f3b12421f81]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(_ZN5torch8autograd9generated16CudnnRnnBackward5applyEOSt6vectorINS0_8VariableESaIS4_EE+0x6d4)[0x7f3b121c9f04]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(+0x307622)[0x7f3b121ab622]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(_ZN5torch8autograd6Engine17evaluate_functionERNS0_12FunctionTaskE+0x385)[0x7f3b121a4745]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(_ZN5torch8autograd6Engine11thread_mainEPNS0_9GraphTaskE+0xc0)[0x7f3b121a6740]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1(_ZN5torch8autograd6Engine11thread_initEi+0x2b0)[0x7f3b121a39e0]
/opt/mapr/tools/python/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so(_ZN5torch8autograd6python12PythonEngine11thread_initEi+0x2a)[0x7f3b3efc228a]
/opt/mapr/tools/python/anaconda3/lib/libstdc++.so.6(+0xb8678)[0x7f3b57e88678]
/usr/lib64/libpthread.so.0(+0x7dd5)[0x7f3b592d5dd5]
/usr/lib64/libc.so.6(clone+0x6d)[0x7f3b58fff02d]
======= Memory map: ========
200000000-200200000 rw-s 00000000 00:05 21694 /dev/nvidiactl
200200000-200600000 —p 00000000 00:00 0
200600000-200800000 rw-s 00000000 00:05 21694 /dev/nvidiactl
200800000-200a00000 rw-s 00000000 00:05 27538 /dev/nvidia0
200a00000-206200000 rw-s 00000000 00:05 21694 /dev/nvidiactl
206200000-206400000 rw-s 00000000 00:05 27538 /dev/nvidia0
206400000-207400000 —p 00000000 00:00 0
207400000-207600000 rw-s 00000000 00:05 21694 /dev/nvidiactl
207600000-207800000 rw-s 00000000 00:05 21694 /dev/nvidiactl
207800000-207a00000 rw-s 207800000 00:05 19518 /dev/nvidia-uvm
207a00000-207c00000 —p 00000000 00:00 0
207c00000-207e00000 rw-s 00000000 00:05 21694 /dev/nvidiactl
207e00000-208000000 rw-s 00000000 00:04 186466 /dev/zero (deleted)
208000000-300200000 —p 00000000 00:00 0
10000000000-10004000000 —p 00000000 00:00 0
7f390c000000-7f390c021000 rw-p 00000000 00:00 0
7f390c021000-7f3910000000 —p 00000000 00:00 0
7f3914000000-7f39f8000000 —p 00000000 00:00 0
7f39f8000000-7f39f8021000 rw-p 00000000 00:00 0
7f39f8021000-7f39fc000000 —p 00000000 00:00 0
7f39fc000000-7f39fc107000 rw-p 00000000 00:00 0
7f39fc107000-7f3a00000000 —p 00000000 00:00 0
7f3a00000000-7f3a30000000 —p 00000000 00:00 0
7f3a30000000-7f3a30021000 rw-p 00000000 00:00 0
7f3a30021000-7f3a34000000 —p 00000000 00:00 0
7f3a34000000-7f3a34021000 rw-p 00000000 00:00 0
7f3a34021000-7f3a38000000 —p 00000000 00:00 0
7f3a38000000-7f3a38021000 rw-p 00000000 00:00 0
7f3a38021000-7f3a3c000000 —p 00000000 00:00 0
7f3a3c000000-7f3a3c021000 rw-p 00000000 00:00 0
7f3a3c021000-7f3a40000000 —p 00000000 00:00 0
7f3a40000000-7f3a40021000 rw-p 00000000 00:00 0
7f3a40021000-7f3a44000000 —p 00000000 00:00 0
7f3a447f9000-7f3a447fa000 —p 00000000 00:00 0
7f3a447fa000-7f3a44ffa000 rw-p 00000000 00:00 0 [stack:17157]
7f3a44ffa000-7f3a44ffb000 —p 00000000 00:00 0
7f3a44ffb000-7f3a457fb000 rw-p 00000000 00:00 0 [stack:17156]
7f3a457fb000-7f3a457fc000 —p 00000000 00:00 0
7f3a457fc000-7f3a45ffc000 rw-p 00000000 00:00 0 [stack:17113]
7f3a45ffc000-7f3a45ffd000 —p 00000000 00:00 0
7f3a45ffd000-7f3a467fd000 rw-p 00000000 00:00 0 [stack:17112]
7f3a467fd000-7f3a467fe000 —p 00000000 00:00 0
7f3a467fe000-7f3a46ffe000 rw-p 00000000 00:00 0 [stack:17111]
7f3a46ffe000-7f3a46fff000 —p 00000000 00:00 0
7f3a46fff000-7f3a477ff000 rw-p 00000000 00:00 0 [stack:17110]
7f3a477ff000-7f3a47800000 —p 00000000 00:00 0
7f3a47800000-7f3a48000000 rw-p 00000000 00:00 0 [stack:17109]
7f3a48000000-7f3a48021000 rw-p 00000000 00:00 0
7f3a48021000-7f3a4c000000 —p 00000000 00:00 0
7f3a4c000000-7f3a4c021000 rw-p 00000000 00:00 0
7f3a4c021000-7f3a50000000 —p 00000000 00:00 0
7f3a50000000-7f3a50021000 rw-p 00000000 00:00 0
7f3a50021000-7f3a54000000 —p 00000000 00:00 0
7f3a54000000-7f3a54021000 rw-p 00000000 00:00 0
7f3a54021000-7f3a58000000 —p 00000000 00:00 0
7f3a58000000-7f3a58021000 rw-p 00000000 00:00 0
7f3a58021000-7f3a5c000000 —p 00000000 00:00 0
7f3a5c000000-7f3a5c021000 rw-p 00000000 00:00 0
7f3a5c021000-7f3a60000000 —p 00000000 00:00 0
7f3a60000000-7f3a60021000 rw-p 00000000 00:00 0
7f3a60021000-7f3a64000000 —p 00000000 00:00 0
7f3a64000000-7f3a64021000 rw-p 00000000 00:00 0
7f3a64021000-7f3a68000000 —p 00000000 00:00 0
7f3a68000000-7f3a68021000 rw-p 00000000 00:00 0
7f3a68021000-7f3a6c000000 —p 00000000 00:00 0
7f3a6c000000-7f3a6c021000 rw-p 00000000 00:00 0
7f3a6c021000-7f3a70000000 —p 00000000 00:00 0
7f3a70000000-7f3a70021000 rw-p 00000000 00:00 0
7f3a70021000-7f3a74000000 —p 00000000 00:00 0
7f3a74000000-7f3a74021000 rw-p 00000000 00:00 0
7f3a74021000-7f3a78000000 —p 00000000 00:00 0
7f3a78000000-7f3a78021000 rw-p 00000000 00:00 0
7f3a78021000-7f3a7c000000 —p 00000000 00:00 0
7f3a7c000000-7f3a7c021000 rw-p 00000000 00:00 0
7f3a7c021000-7f3a80000000 —p 00000000 00:00 0
7f3a80000000-7f3a80021000 rw-p 00000000 00:00 0
7f3a80021000-7f3a84000000 —p 00000000 00:00 0
7f3a847fd000-7f3a847fe000 —p 00000000 00:00 0
7f3a847fe000-7f3a84ffe000 rw-p 00000000 00:00 0 [stack:17108]
7f3a84ffe000-7f3a84fff000 —p 00000000 00:00 0
7f3a84fff000-7f3a857ff000 rw-p 00000000 00:00 0 [stack:17107]
7f3a857ff000-7f3a85800000 —p 00000000 00:00 0
7f3a85800000-7f3a86000000 rw-p 00000000 00:00 0 [stack:17106]
7f3a86000000-7f3aa0000000 —p 00000000 00:00 0
7f3aa03ca000-7f3aa040a000 rw-p 00000000 00:00 0
7f3aa040a000-7f3aa040b000 —p 00000000 00:00 0
7f3aa040b000-7f3aa0c0b000 rw-p 00000000 00:00 0 [stack:17105]
7f3aa0c0b000-7f3aa0c0c000 —p 00000000 00:00 0
7f3aa0c0c000-7f3aa3a06000 rw-p 00000000 00:00 0 [stack:17104]
7f3aa3ffc000-7f3aa3ffd000 —p 00000000 00:00 0
7f3aa3ffd000-7f3aa47fd000 rw-p 00000000 00:00 0 [stack:17103]
7f3aa47fd000-7f3aa47fe000 —p 00000000 00:00 0
7f3aa47fe000-7f3aa4ffe000 rw-p 00000000 00:00 0 [stack:17102]
7f3aa4ffe000-7f3aa4fff000 —p 00000000 00:00 0
7f3aa4fff000-7f3aa57ff000 rw-p 00000000 00:00 0 [stack:17101]
7f3aa57ff000-7f3aa5800000 —p 00000000 00:00 0
7f3aa5800000-7f3aa6000000 rw-p 00000000 00:00 0 [stack:17100]
7f3aa6000000-7f3ab1c00000 —p 00000000 00:00 0
7f3ab1c00000-7f3ab1e00000 rw-s 00000000 00:04 189555 /dev/zero (deleted)
7f3ab1e00000-7f3ab9000000 —p 00000000 00:00 0
7f3ab9000000-7f3ab9200000 rw-s 00000000 00:04 186469 /dev/zero (deleted)
7f3ab9200000-7f3ace400000 —p 00000000 00:00 0
7f3ace400000-7f3ace600000 rw-s 00000000 00:04 186464 /dev/zero (Aborted |
st83858 | I’m implementing an algorithm that involves some tricks to allow parallelism in the forward runs and gradient computations during the training process. I’m doing these computations using the multiprocessing library, which seems to mean that I can’t set requires_grad=True for my variables even if I do not use the backward() function and calculate them myself. I wonder whether this also means I’m unable to use the optim optimization routines, such as SGD, or if there is a way to make use of them? |
st83859 | Solved by ptrblck in post #2
The optimizers should still work, as they are just using the .grad attribute of each passed parameter:
x = torch.zeros(1)
optimizer = torch.optim.SGD([x], lr=1.)
x.grad = torch.tensor([10.])
optimizer.step()
print(x)
> tensor([-10]) |
st83860 | The optimizers should still work, as they are just using the .grad attribute of each passed parameter:
x = torch.zeros(1)
optimizer = torch.optim.SGD([x], lr=1.)
x.grad = torch.tensor([10.])
optimizer.step()
print(x)
> tensor([-10]) |
st83861 | I want to write code such that, no matter what the task or input image size is, it always produces a ConvNet where the padding is set so that the size of the feature maps is equal to the size of the original image (up until the end/final layer).
How does one do that in PyTorch?
I realized this might be useful:
'Same' convolution in pytorch vision
Hi,
Is there a simple way to do ‘same’ convolution in pytorch such that the output spacial dimensions equals the input counterpart ?
Best
Cross posted with code:
stackoverflow.com: Implementing same convolution in Pytorch (python, conv-neural-network, pytorch), asked by Pinocchio on 04:03PM - 31 Jul 19 UTC
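For the stride-1, odd-kernel case the rule is simply padding = kernel_size // 2; a minimal sketch under those assumptions (even kernel sizes or strides > 1 need asymmetric padding, e.g. via F.pad):

import torch.nn as nn

def same_conv(in_channels, out_channels, kernel_size):
    # with stride 1 and an odd kernel, padding = kernel_size // 2 keeps
    # the output spatial size equal to the input spatial size
    return nn.Conv2d(in_channels, out_channels, kernel_size,
                     stride=1, padding=kernel_size // 2)

conv = same_conv(3, 16, 5)   # 224x224 input -> 224x224 output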
st83862 | I am facing a transformation error while calling the train function in a CNN model.
Here is the code that I used for the dataloader:
# make a class for the data
class OAIDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        # self.data = pd.read_csv(csv_file, header=None)
        self.data = csv_file
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img_name = os.path.join(self.root_dir, str(self.data['ID'].iloc[idx]) + '.npy')
        patches, p_id = np.load(img_name, allow_pickle=True)
        img_class = int(self.data.iloc[idx, 2])
        side = self.data.iloc[idx, 1]
        if side == 1:
            image = np.array(patches['R'].astype('uint8'), 'L')  # [image['R':side]|image['L':side]]
        else:
            image = np.array(patches['L'].astype('uint8'), 'L')
        if self.transform is not None:
            image = self.transform(image)
        sample = {'image': image, 'grade': img_class}
        return sample

# Define our data transforms
trans = transforms.Compose([transforms.ToTensor()])
# Call the dataset
# trans = transforms.ToTensor()
train_data = OAIDataset(train_df, root_dirA, transform=trans)
test_data = OAIDataset(test_df, root_dirA, transform=trans)

from sklearn.model_selection import GroupKFold  # import sklearn module

# create the kfold object and make indices
kfold = GroupKFold(n_splits=5)
snapshots = []

# iterate over folds
for train_index, val_index in kfold.split(X=train_df, y=grades_dev, groups=groups_dev):
    print("\nTRAIN:", train_index, "TEST:", val_index)
    # subsets of the dataset according to the indices
    train_set = torch.utils.data.Subset(train_data, train_index)
    val_set = torch.utils.data.Subset(train_data, val_index)
    # load data (train and validation)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=50, shuffle=True, num_workers=0)
    val_loader = torch.utils.data.DataLoader(val_set, batch_size=50, shuffle=False, num_workers=0)
model = ConvNet()

def train_val_model(model, epoch):
    model.train()  # set the model to training mode
    for epoch in range(epoch):
        losses = 0
        closs = 0
        for i, batch in enumerate(train_loader):
            image, grade = batch['image'], batch['grade']
            # image = image.unsqueeze(1).type(torch.FloatTensor)
            # image = torch.from_numpy(image).float()
            optimizer.zero_grad()
            prediction = model(image)
            loss = costFunction(prediction, grade)
            closs += loss.item()
            loss.backward()
            optimizer.step()
            # storing the loss
            losses += loss.data[0]
            # losses.append(loss.item())
            # num_times = num_times + 1
        print('epoch', epoch, 'losses', closs)
Here are my error messages:
Traceback (most recent call last):
File "/home/fatema/Downloads/assignment3/DATAfile.py", line 288, in <module>
train_val_model(model,1)
File "/home/fatema/Downloads/assignment3/DATAfile.py", line 154, in train_val_model
for i,batch in enumerate(train_loader):
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in __next__
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in <listcomp>
batch = self.collate_fn([self.dataset[i] for i in indices])
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 107, in __getitem__
return self.dataset[self.indices[idx]]
File "/home/fatema/Downloads/assignment3/DATAfile.py", line 62, in __getitem__
image = self.transform(image)
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 49, in __call__
img = t(img)
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 76, in __call__
return F.to_tensor(pic)
File "/home/fatema/miniconda3/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 48, in to_tensor
img = torch.from_numpy(pic.transpose((2, 0, 1)))
ValueError: axes don't match array
Process finished with exit code 1
I tried several transformations but could not solve it.
st83863 | The ValueError is actually raised from the numpy array pic, which apparently does not have three dimensions:
import numpy as np

x = np.random.randn(10, 10)
x.transpose((2, 0, 1))
> ValueError: axes don't match array
Could you print the shape of image before passing it to the transformation? |
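As an added note (not part of the original reply): if the array really turns out to be 2D, one common workaround is to give it an explicit channel dimension before ToTensor, or to convert it to a PIL image first. A minimal sketch under that assumption:
import numpy as np
from PIL import Image
from torchvision import transforms

img = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # stand-in 2D grayscale array

# Option 1: add a trailing channel axis so ToTensor sees an HxWxC array
t = transforms.ToTensor()(img[:, :, None])
print(t.shape)  # torch.Size([1, 64, 64])

# Option 2: go through a PIL image, which ToTensor also accepts
t = transforms.ToTensor()(Image.fromarray(img, mode='L'))
print(t.shape)  # torch.Size([1, 64, 64])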
st83864 | In __getitem__ I have img_class = int(self.data.iloc[idx, 2]), and with that it showed an error like:
Traceback (most recent call last):
File "/home/fatema/Downloads/numpy data.py", line 107, in <module>
print(i, sample['image'].shape, sample['grade'].shape)
AttributeError: 'int' object has no attribute 'shape'
When I removed the int() from img_class, I was able to print the image shape, but I did not get any information about img_class.
Here is my dataset's image shape:
print(sample['image'].shape)
print(sample['grade'].shape)
Traceback (most recent call last):
File "/home/fatema/miniconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3325, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-35-b3e87aaa979e>", line 2, in <module>
print(sample['grade'].shape)
AttributeError: 'int' object has no attribute 'shape'
(64, 64)
The shape of the image is (64, 64).
I am wondering why I am not getting the shape of 'grade'?
st83865 | Is the AttributeError thrown for sample['image'] or sample['grade']?
Are you able to get a sample before the original error is raised?
If so, could you add the print statement regarding the shape into __getitem__ directly before calling self.transform and post the output here? |
st83866 | mchowdhu:
torch.from_numpy(pic.transpose((2, 0, 1)))
ValueError: axes don’t match array
The AttributeError seems to come from sample['grade']. Whenever I call print(sample['grade']) it gives this output:
(64, 64)
2.0
I am not quite sure what you mean by printing inside __getitem__. For now I called print like this: print(sample['image']) and print(sample['grade'])
and got this result:
[[175 177 176 ... 81 79 68]
 [175 175 174 ... 79 82 79]
 [173 172 173 ... 76 75 78]
 ...
 [122 120 121 ... 45 45 46]
 [121 121 120 ... 45 45 44]
 [122 122 123 ... 44 44 43]]
2
st83867 | Hi
I have a rather small image dataset and want to augment my training images.
However, I want the training dataloader to use unaugmented images as well as augmented images.
For that I am using the ConcatDataset class. I also want to use WeightedRandomSampler, because some classes have more images than others; the number of images per class is [602, 536, 1088, 751].
datasetBasic = datasets.ImageFolder('path', transform=transformsBasic)
datasetAugmented = datasets.ImageFolder('path', transform=transformsAugment)
concatDataset = torch.utils.data.ConcatDataset((datasetBasic, datasetAugmented))
dataloaders_dict = {
    "train": torch.utils.data.DataLoader(concatDataset, batch_size=batch_size, sampler=sampler, num_workers=0, pin_memory=True),
    "val": torch.utils.data.DataLoader(datasetBasic, batch_size=batch_size, shuffle=True, num_workers=0, pin_memory=True),
}
However I do not know how I should use WeightedRandomSampler together with ConcatDataset.
Does anyone have a suggestion for how I can solve this?
This post is very similar to Sampling from a concatenated dataset, but there were no more replies on that one, so I am asking again ^^’.
PS: Great forums, the people seem to be really active here ^^ |
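Not an answer from this thread, just a minimal sketch of one possible approach: compute a weight per sample from the class counts and repeat that list once per concatenated copy, so the sampler covers the whole ConcatDataset. The class counts are the ones quoted above; datasetBasic is the ImageFolder from the snippet, and everything else is a placeholder:
import torch
from torch.utils.data import WeightedRandomSampler

# per-class image counts quoted in the question
class_counts = torch.tensor([602., 536., 1088., 751.])
class_weights = 1.0 / class_counts

# ImageFolder stores the class index of every sample in .targets;
# both concatenated copies share the same sample order, so repeat the weights twice
targets = torch.tensor(datasetBasic.targets)
sample_weights = class_weights[targets].repeat(2)

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
# this sampler can then be passed to the DataLoader built on concatDataset above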