instruction (string, 13-150 chars) | input (string, 36-29.1k chars) | output (string, 31-29.5k chars) | source (string, 45 chars)
---|---|---|---
how to deploy pytorch neural network for production code in Unity | I have a neural network trained in PyTorch that I'd like to deploy into a Unity app. What's the best way to do it? I'm also interested in allowing the user to further train the neural network in the Unity app, which I guess would require integrating some part of PyTorch into Unity (maybe there's a way to integrate PyTorch's C++ / TorchScript API with Unity?). If anybody has experience with this, I'd like to know what the best alternatives are.
| Check out the new features in Unity ML Agents. There is an inference engine within Unity ML Agents (called Barracuda) that allows you to use pretrained models within your app. AFAIK, you can convert Tensorflow and ONNX models into Barracuda. It should not be a problem as Pytorch models can be converted to the ONNX format. You may need to retrain your model if it is directly affected by the app (for example, if it is an RL agent).
EDIT: To answer your second question, you can continue to train the model but not in real time. What you may be able to do is collect data from the user, and use that to further train the model (that is how TensorFlow Serving works). You can do that by converting the PyTorch model into a TensorFlow model via ONNX.
EDIT 2: Barracuda is now a standalone, production-ready inference engine that runs exclusively on the ONNX format. Any framework that can be converted to that format (e.g. Keras, PyTorch, MXNet) will work, as long as the exported model contains only supported operators.
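For the PyTorch-to-ONNX step mentioned above, a minimal export sketch (MyModel and the input shape are placeholders, not from the question):
import torch
model = MyModel()  # your trained PyTorch model (hypothetical)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example input with your real input shape
torch.onnx.export(model, dummy_input, "model.onnx")
The resulting model.onnx file is what Barracuda can then import.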
| https://stackoverflow.com/questions/56316823/ |
CUDA runtime unknown error, maybe a driver problem? CUDA can't see my gpu | My code is very simple for now:
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.cuda.current_device()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-20-3380d2c12118> in <module>
----> 1 torch.cuda.current_device()
~/.conda/envs/tensorflow/lib/python3.6/site-packages/torch/cuda/__init__.py in current_device()
349 def current_device():
350 r"""Returns the index of a currently selected device."""
--> 351 _lazy_init()
352 return torch._C._cuda_getDevice()
353
~/.conda/envs/tensorflow/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_init()
161 "Cannot re-initialize CUDA in forked subprocess. " + msg)
162 _check_driver()
--> 163 torch._C._cuda_init()
164 _cudart = _load_cudart()
165 _cudart.cudaGetErrorName.restype = ctypes.c_char_p
RuntimeError: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCGeneral.cpp:51
Looking on the internet, it looks like a version problem, but I swear I have tried all combinations of drivers with CUDA 10.0, 10.1, tensorflow-gpu 1.13, 1.12, etc., and nothing seems to work.
NVIDIA driver: nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.14 Driver Version: 430.14 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 930MX Off | 00000000:01:00.0 Off | N/A |
| N/A 36C P8 N/A / N/A | 139MiB / 2004MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 986 G /usr/lib/xorg/Xorg 64MiB |
| 0 1242 G /usr/bin/gnome-shell 72MiB |
+-----------------------------------------------------------------------------+
CUDA VERSION nvcc --version :
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
tensorflow-gpu version: pip list | grep tensorflow:
tensorflow 1.13.1
tensorflow-estimator 1.13.0
pytorch version pip list | grep torch
pytorch-pretrained-bert 0.6.2
torch 1.1.0
torchvision 0.3.0
Can anyone see a compatibility problem, and explain why it happens and how I can fix it?
| Did you test your CUDA installation? If not, you can build and run the samples (which will take a while):
$ cd ~/NVIDIA_CUDA-10.0_Samples
$ make
And then:
$ cd ~/NVIDIA_CUDA-10.0_Samples/bin/x86_64/linux/release
$ ./deviceQuery
You should get "Test passed!" as the result.
| https://stackoverflow.com/questions/56327475/ |
Why does a multilayer perceptron outperform an RNN in CartPole? | Recently, I compared two models for a DQN on the CartPole-v0 environment. One of them is a multilayer perceptron with 3 layers and the other is an RNN built up from an LSTM and 1 fully connected layer. I have an experience replay buffer of size 200000 and the training doesn't start until it is filled up.
Although the MLP has solved the problem in a reasonable number of training steps (meaning it achieved a mean reward of 195 over the last 100 episodes), the RNN model could not converge as quickly, and its maximum mean reward did not even reach 195 either!
I have already tried increasing the batch size, adding more neurons to the LSTM's hidden state, increasing the RNN's sequence length and making the fully connected layer more complex - but every attempt failed: I saw enormous fluctuations in the mean reward, so the model hardly converged at all. Might these be the signs of early overfitting?
import torch
import torch.nn as nn

class DQN(nn.Module):
def __init__(self, n_input, output_size, n_hidden, n_layers, dropout=0.3):
super(DQN, self).__init__()
self.n_layers = n_layers
self.n_hidden = n_hidden
self.lstm = nn.LSTM(input_size=n_input,
hidden_size=n_hidden,
num_layers=n_layers,
dropout=dropout,
batch_first=True)
self.dropout= nn.Dropout(dropout)
self.fully_connected = nn.Linear(n_hidden, output_size)
def forward(self, x, hidden_parameters):
batch_size = x.size(0)
output, hidden_state = self.lstm(x.float(), hidden_parameters)
seq_length = output.shape[1]
output1 = output.contiguous().view(-1, self.n_hidden)
output2 = self.dropout(output1)
output3 = self.fully_connected(output2)
new = output3.view(batch_size, seq_length, -1)
new = new[:, -1]
return new.float(), hidden_state
def init_hidden(self, batch_size, device):
weight = next(self.parameters()).data
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().to(device),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_().to(device))
return hidden
Contrary to what I expected, the simpler model gave a much better result than the other, even though an RNN is supposed to be better at processing time series data.
Can anybody tell me what's the reason for this?
Also, I should state that I applied no feature engineering and both DQNs worked with raw data. Could the RNN outperform the MLP if both models were fed normalized features?
Is there anything you can recommend to improve training efficiency on RNNs to achieve the best results?
|
Contrary to what I expected, the simpler model gave a much better result than the other, even though an RNN is supposed to be better at processing time series data.
There is no time series in the cart-pole problem: the state contains all the information needed for an optimal decision. It would be different if, for instance, you were learning from images and needed to estimate the pole velocity from a series of frames.
Also, it is not true that a more complex model should perform better. On the contrary, it is more likely to overfit. For the cart-pole you don't even need a NN; a simple linear approximator with RBFs or random Fourier features would suffice. An RNN + LSTM is surely overkill for such a simple problem.
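For illustration, a minimal sketch of random Fourier features for the 4-dimensional CartPole state (all sizes here are hypothetical); the Q-values would then be a linear function of these features, one weight vector per action:
import numpy as np

rng = np.random.default_rng(0)
n_features = 100
W = rng.normal(scale=1.0, size=(n_features, 4))  # random projection of the 4 state variables
b = rng.uniform(0, 2 * np.pi, size=n_features)   # random phase offsets

def fourier_features(state):
    # state: np.ndarray of shape (4,) -> feature vector of shape (n_features,)
    return np.cos(W @ state + b)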
| https://stackoverflow.com/questions/56330269/ |
Treat a tuple/list of Tensors as a single Tensor | I'm using Pytorch for some robotics Reinforcement Learning tasks. I'd like to use both images and information about the state as observations for this task. The implementation I'm using does not directly support this so I'm making some amendments. Expected observations are either state, as a 1 dimensional Tensor, or images as a 3 dimensional Tensor (channels, width, height). In my task I would like the observation to be a tuple of Tensors.
In many places in my codebase, the observation is of course expected to be a single Tensor, not a tuple of Tensors. Is there an easy way to treat a tuple of Tensors as a single Tensor?
For example, I would like:
observation.to(device)
to work as normal when observation is a single Tensor, and call .to(device) on each Tensor when observation is a tuple of Tensors.
It should be simple enough to create a data type that can support this, but I'm wondering does such a data type already exist? I haven't found anything so far.
| If your tensors are all of the same size, you can use torch.stack to concatenate them into one tensor with one more dimension.
Example:
>>> import torch
>>> a=torch.randn(2,1)
>>> b=torch.randn(2,1)
>>> c=torch.randn(2,1)
>>> a
tensor([[ 0.7691],
[-0.0297]])
>>> b
tensor([[ 0.4844],
[-0.9142]])
>>> c
tensor([[ 0.0210],
[-1.1543]])
>>> torch.stack((a,b,c))
tensor([[[ 0.7691],
[-0.0297]],
[[ 0.4844],
[-0.9142]],
[[ 0.0210],
[-1.1543]]])
You can then use torch.unbind to go the other direction.
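If the tensors have different shapes and cannot be stacked, a minimal sketch of a wrapper type (hypothetical, not built into PyTorch) that forwards .to(device) to each element could look like this:
import torch

class TensorTuple(tuple):
    """Tuple of tensors whose .to() is applied element-wise."""
    def to(self, *args, **kwargs):
        return TensorTuple(t.to(*args, **kwargs) for t in self)

obs = TensorTuple((torch.randn(5), torch.randn(3, 32, 32)))
obs = obs.to("cuda") if torch.cuda.is_available() else obs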
| https://stackoverflow.com/questions/56344101/ |
Pytorch RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAType instead | I am trying to re-run a GitHub project on my computer that does recommendation using embeddings. The goal is to first embed the users and items from the MovieLens dataset, and then use the inner product to predict ratings. When I finished integrating all the components, I got an error during training.
Code:
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from lightfm.datasets import fetch_movielens
movielens = fetch_movielens()
ratings_train, ratings_test = movielens['train'], movielens['test']
def _binarize(dataset):
dataset = dataset.copy()
dataset.data = (dataset.data >= 0.0).astype(np.float32)
dataset = dataset.tocsr()
dataset.eliminate_zeros()
return dataset.tocoo()
train, test = _binarize(movielens['train']), _binarize(movielens['test'])
class ScaledEmbedding(nn.Embedding):
""" Change the scale from normal to [0,1/embedding_dim] """
def reset_parameters(self):
self.weight.data.normal_(0, 1.0 / self.embedding_dim)
if self.padding_idx is not None:
self.weight.data[self.padding_idx].fill_(0)
class ZeroEmbedding(nn.Embedding):
def reset_parameters(self):
self.weight.data.zero_()
if self.padding_idx is not None:
self.weight.data[self.padding_idx].fill_(0)
class BilinearNet(nn.Module):
def __init__(self, num_users, num_items, embedding_dim, sparse=False):
super().__init__()
self.embedding_dim = embedding_dim
self.user_embeddings = ScaledEmbedding(num_users, embedding_dim,
sparse=sparse)
self.item_embeddings = ScaledEmbedding(num_items, embedding_dim,
sparse=sparse)
self.user_biases = ZeroEmbedding(num_users, 1, sparse=sparse)
self.item_biases = ZeroEmbedding(num_items, 1, sparse=sparse)
def forward(self, user_ids, item_ids):
user_embedding = self.user_embeddings(user_ids)
item_embedding = self.item_embeddings(item_ids)
user_embedding = user_embedding.view(-1, self.embedding_dim)
item_embedding = item_embedding.view(-1, self.embedding_dim)
user_bias = self.user_biases(user_ids).view(-1, 1)
item_bias = self.item_biases(item_ids).view(-1, 1)
dot = (user_embedding * item_embedding).sum(1)
return dot + user_bias + item_bias
def pointwise_loss(net,users, items, ratings, num_items):
negatives = Variable(
torch.from_numpy(np.random.randint(0,
num_items,
len(users))).cuda()
)
positives_loss = (1.0 - torch.sigmoid(net(users, items)))
negatives_loss = torch.sigmoid(net(users, negatives))
return torch.cat([positives_loss, negatives_loss]).mean()
embedding_dim = 128
minibatch_size = 1024
n_iter = 10
l2=0.0
sparse = True
num_users, num_items = train.shape
net = BilinearNet(num_users,
num_items,
embedding_dim,
sparse=sparse).cuda()
optimizer = optim.Adagrad(net.parameters(),
weight_decay=l2)
for epoch_num in range(n_iter):
users, items, ratings = shuffle(train)
user_ids_tensor = torch.from_numpy(users).cuda()
item_ids_tensor = torch.from_numpy(items).cuda()
ratings_tensor = torch.from_numpy(ratings).cuda()
epoch_loss = 0.0
for (batch_user,
batch_item,
batch_ratings) in zip(_minibatch(user_ids_tensor,
minibatch_size),
_minibatch(item_ids_tensor,
minibatch_size),
_minibatch(ratings_tensor,
minibatch_size)):
user_var = Variable(batch_user)
item_var = Variable(batch_item)
ratings_var = Variable(batch_ratings)
optimizer.zero_grad()
loss = pointwise_loss(net,user_var, item_var, ratings_var, num_items)
epoch_loss += loss.data[0]
loss.backward()
optimizer.step()
print('Epoch {}: loss {}'.format(epoch_num, epoch_loss))
Error:
RuntimeError Traceback (most recent call last) <ipython-input-87-dcd04440363f> in <module>()
22 ratings_var = Variable(batch_ratings)
23 optimizer.zero_grad()
---> 24 loss = pointwise_loss(net,user_var, item_var, ratings_var, num_items)
25 epoch_loss += loss.data[0]
26 loss.backward()
<ipython-input-86-679e10f637a5> in pointwise_loss(net, users, items, ratings, num_items)
8
9 positives_loss = (1.0 - torch.sigmoid(net(users, items)))
---> 10 negatives_loss = torch.sigmoid(net(users, negatives))
11
12 return torch.cat([positives_loss, negatives_loss]).mean()
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in
__call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
<ipython-input-58-3946abf81d81> in forward(self, user_ids, item_ids)
16
17 user_embedding = self.user_embeddings(user_ids)
---> 18 item_embedding = self.item_embeddings(item_ids)
19
20 user_embedding = user_embedding.view(-1, self.embedding_dim)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in
__call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
115 return F.embedding(
116 input, self.weight, self.padding_idx, self.max_norm,
--> 117 self.norm_type, self.scale_grad_by_freq, self.sparse)
118
119 def extra_repr(self):
~\Anaconda3\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1504 # remove once script supports set_grad_enabled 1505
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1506 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1507 1508
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAType instead (while checking arguments for embedding)
can anyone help me please ?
| I would suggest you check the input type.
I had the same issue, which I solved by converting the input type from int32 to int64 (running on Win10).
ex:
x = torch.tensor(train).to(torch.int64)
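Applied to the question's code, the likely culprit is that on Windows np.random.randint returns int32 by default, so the negatives tensor fed to the embedding layer is not int64. A minimal sketch of the fix (assuming that is indeed the cause):
negatives = Variable(
    torch.from_numpy(np.random.randint(0, num_items, len(users)).astype(np.int64)).cuda()
)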
| https://stackoverflow.com/questions/56360644/ |
Pytorch method for conditional use of intermediate layer instead of final cnn layer output. ie: allow nn to learn to use more layers or less | I'm implementing a residual CNN (a modified, smaller version of Xception) in a low-latency environment. I've done a lot of manual tuning to minimize the runtime of my network (reducing the number of filters, removing layers, etc.).
But now I want to try allowing my network to make its classification prediction (the final fcnn layer) on the residual connection after each residual block.
basic logic-
attempt final prediction with residual connection as input
if this fcnn layer predicts a certain class with a probability > a set threshold:
return fcnn output as if it was normal final layer
else:
do next residual block like normal and try the previous conditional again unless we are already at final block
My hope is that this will allow my network to learn to solve easier problems with less computation, while still being able to run the additional layers if it is unsure of the classification.
So my basic question is: in PyTorch, what's the best way to implement this conditional so that my NN can decide at run time whether to do more processing or not?
Currently I've tested returning the intermediate x's after the blocks in the forward function, but I don't know how best to set up the conditional to choose which x to return.
Also side note: I believe I may end up needing another cnn layer between the residual and fcnn to serve as a function to convert the internal representation for processing to a representation the fcnn understands for classification.
| It has already been done and presented in ICLR 2018.
It appears as if in ResNets the first few bottlenecks learn representations (and therefore cannot be skipped) while the remaining bottlenecks refine the features and therefore can be skipped at a moderate loss of accuracy. (Stanisław Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio Residual Connections Encourage Iterative Inference, ICLR 2018).
This idea was taken to the extreme with sharing weights across bottlenecks in Sam Leroux, Pavlo Molchanov, Pieter Simoens, Bart Dhoedt, Thomas Breuel, Jan Kautz IamNN: Iterative and Adaptive Mobile Neural Network for efficient image classification, ICLR 2018.
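For what it's worth, a minimal sketch of the early-exit conditional in a forward pass (all names hypothetical; thresholding on softmax confidence, assuming batch size 1 at inference):
import torch.nn.functional as F

def forward_early_exit(blocks, heads, x, threshold=0.9):
    # blocks: the residual blocks; heads: one small classifier per block
    for block, head in zip(blocks, heads):
        x = block(x)
        logits = head(x.mean(dim=(2, 3)))  # global average pool, then classify
        if F.softmax(logits, dim=1).max().item() > threshold:
            return logits  # confident enough: exit early
    return logits  # fell through to the last head
Note that this data-dependent exit only saves computation at inference; during training one would typically compute all heads and sum their losses.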
| https://stackoverflow.com/questions/56369748/ |
Your session crashed after using all available RAM in Google Colab | I am trying to get this to run in Google Colab: https://github.com/oawiles/X2Face/blob/master/UnwrapMosaic/Face2Face_UnwrapMosaic.ipynb
I was able to get it to run and display results once, but since then I have been getting "Your session crashed after using all available RAM" at this cell:
BASE_MODEL = '/scratch/shared/slow/ow/eccv/2018/release_models/' # Change to your path
state_dict = torch.load(BASE_MODEL + 'x2face_model.pth')
model = UnwrappedFaceWeightedAverage(output_num_channels=2, input_num_channels=3, inner_nc=128)
model.load_state_dict(state_dict['state_dict'])
model = model.cuda()
model = model.eval()
I'm not sure if it is a Colab issue or not.
Here are the logs:
/usr/local/lib/python2.7/dist-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
warn("IPython.utils.traitlets has moved to a top-level traitlets package.")
| Try changing the runtime settings:
Go to Runtime -> Change runtime type -> Hardware accelerator -> select GPU or TPU.
I think this might help.
| https://stackoverflow.com/questions/56385211/ |
How to fix '_pickle.UnpicklingError: invalid load key, '<' ' error in Pytorch | I encountered this problem when running the official facebookresearch maskrcnn-benchmark code: it failed when loading the pre-trained model.
The code runs on a remote server at the school and the graphics card is an NVIDIA P100.
checkpointer = DetectronCheckpointer(
cfg, model, optimizer, scheduler, output_dir, save_to_disk)
extra_checkpoint_data = checkpointer.load(cfg.MODEL.WEIGHT)
arguments.update(extra_checkpoint_data)
I expect to run the code correctly and understand why this is the problem.
| The cause of the problem was that the previous download had not finished (an invalid load key of '<' typically means the downloaded file is actually an HTML page rather than a checkpoint). When I deleted the original file and re-downloaded it, the problem was solved.
| https://stackoverflow.com/questions/56391392/ |
Pytorch second derivative returns None | I am unable to take a second derivative of the following function. The second derivative with respect to u_s works, but with respect to x_s it doesn't.
Does anyone know what I have done wrong here?
import torch
tr = torch  # the code below mixes the tr and torch aliases
from torch.autograd import grad

def cost(xt, x_goal, u, Q, R):
return (xt - x_goal).matmul(Q).matmul((xt - x_goal).transpose(0,1)) + u.matmul(R).matmul(u)
x_s = tr.tensor([ 0.0000, -1.0000, 0.0000], dtype=torch.float64, requires_grad=True)
u_s = tr.tensor([-0.2749], dtype=torch.float64, requires_grad=True)
c = cost(x_s, x_Goal, u_s, tr.tensor(Q), tr.tensor(R))
c
output:
tensor([[4.0076]], dtype=torch.float64, grad_fn=<ThAddBackward>)
Cu = grad(c, u_s, create_graph=True)[0]
Cu
output:
tensor([-0.0550], dtype=torch.float64, grad_fn=<ThAddBackward>)
Cuu = grad(Cu, u_s, allow_unused=True)[0]
Cuu
output:
tensor([0.2000], dtype=torch.float64)
Cux = grad(Cu, x_s, allow_unused=True)
Cux
output:
(None,)
I am guessing the Cu itself is completely independent of x_s, but then the derivative should be zero at least, not None!
| You haven't done anything wrong.
Suppose I have variables x, y, and z=f(y). If I compute z.backward() and then try to ask for the gradient with respect to x, I get None. For example,
import torch
x = torch.randn(1,requires_grad=True)
y = torch.randn(1,requires_grad=True)
z = y**2
z.backward()
print(y.grad) # Outputs some non-zero tensor
print(x.grad) # None
So what does this have to do with your attempt to compute the second derivative Cux? When you write create_graph=True, PyTorch keeps track of all the operations in the derivative computations which computed Cu, and since the derivatives themselves are made up of primitive operations, you can compute the gradient of the gradient, as you are doing. The problem here is that the gradient Cu never encounters the variable x_s, so effectively Cu = f(u_s). This means that when you perform Cu.backward(), the computational graph for Cu never sees the variable x_s, so its gradient remains None, just like in the example above.
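If you need an actual zero tensor instead of None (e.g. to keep downstream code uniform), you can substitute it yourself:
Cux = grad(Cu, x_s, allow_unused=True)[0]
if Cux is None:
    Cux = torch.zeros_like(x_s)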
| https://stackoverflow.com/questions/56397270/ |
Why are there 3 pythons installed on my computer? | When I try to see the version of python installed on my computer, I see the followings:
(base) dhcp76:bin me$ python -V
Python 2.7.16 :: Anaconda, Inc.
(base) dhcp76:bin me$ python2 -V
Python 2.7.16
(base) dhcp76:bin me$ python3 -V
Python 3.7.3
Would this cause any issue? I have also installed anaconda3, but python3 doesn't point to that, and I don't know how to make it point to anaconda3.
So, my questions:
Would having 2 python versions both by brew and anaconda cause problems? If yes, should I remove one of them? (I prefer anaconda)
I installed PyTorch using this link: http://deeplizard.com/learn/video/UWlFM0R_x6I, and now when I do import torch in all three versions it works!! How is this possible if this link only installs using pip3?
Thanks!
| Yes, having different versions of Python can cause significant headaches when you're installing Python packages.
For example, if you install a package with Brew, your Anaconda installation might not be able to find it and vice-versa.
I had numerous consistency issues with maintaining all these different version of Python before I decided to completely uninstall all of them and only keep MacPorts as my general package manager.
When you use sudo pip install, that could be using a different Python than python -m pip install, which could be different from pip3 install ... etc.
There are lots of tradeoffs to each package manager.
Brew is good for people who want to get up to speed on a project quickly.
Anaconda has a great interface that allows you to minimize command line interface interactions and abstracts away some configuration stuff.
Macports has way more packages actively maintained than Brew, but requires more setup. For me, it was worth it because I've never had Python package dependency issues anymore. (Though that was also due to learning how to properly use virtualenvwrapper too.)
Most of the time, if you are fully aware of which pip/python you're calling, then you can avoid any issues and have all three.
However realistically, you may lose track of which Python versions are available with which packages in which system paths.
If things get bad, you might encounter situations where you try to pip install a package and your system says it already exists, but you still cannot import <package> from a Python file or the terminal.
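To see exactly which interpreter (and hence which site-packages) you are using at any moment, a quick check from inside Python:
import sys
print(sys.executable)  # path of the interpreter that is actually running
print(sys.version)     # its version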
| https://stackoverflow.com/questions/56401225/ |
In pytorch, how to fill a tensor with another tensor? | I'm looking for a way to expand the size of an image by adding 0 values to the right and lower edges of it. My initial plan was to use nn.padding to add the edges, until I encountered this error:
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/nn/functional.py", line 2796, in pad
assert len(pad) % 2 == 0, 'Padding length must be divisible by 2'
AssertionError: Padding length must be divisible by 2
It appears that torch tries to pad the image from both sides! Is there an easy way to override this and fill the tensor into the upper-left side of another image?
| The only way I know is:
with torch.no_grad(): # assuming it's for init
    dist = torch.distributions.MultivariateNormal(loc=torch.zeros(2), covariance_matrix=torch.eye(2))
    w.data = dist.sample()
but I doubt it's recommended.
Answering the title of the question.
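Regarding the padding error itself: the assertion is about the pad argument having an even number of entries (one pair per padded dimension), not about the padding being symmetric, so one-sided padding works fine. For example:
import torch
import torch.nn.functional as F

img = torch.ones(1, 3, 375, 1242)
padded = F.pad(img, (0, 6, 0, 9))  # (left, right, top, bottom) -> shape (1, 3, 384, 1248)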
| https://stackoverflow.com/questions/56403627/ |
Implementing STFT with Pytorch gives a slightly different result than the STFT with Librosa | I am trying to implement an STFT with PyTorch, but the output from the PyTorch implementation is slightly off when compared with the implementation from Librosa.
Librosa version
import numpy as np
from librosa.core import stft
import matplotlib.pyplot as plt
np.random.seed(3)
y = np.sin(2*np.pi*50*np.linspace(0,10,2048))+np.sin(2*np.pi*20*np.linspace(0,10,2048)) + np.random.normal(scale=1,size=2048)
S_stft = np.abs(stft(y, hop_length=512, n_fft=2048,center=False))
plt.plot(S_stft)
Pytorch version
import torch
from torch.autograd import Variable
from torch.nn.functional import conv1d
from scipy.signal.windows import hann
stride = 512
def create_filters(d,k,low=50,high=6000):
x = np.arange(0, d, 1)
wsin = np.empty((k,1,d), dtype=np.float32)
wcos = np.empty((k,1,d), dtype=np.float32)
start_freq = low
end_freq = high
# num_cycles = start_freq*d/44000.
# scaling_ind = np.log(end_freq/start_freq)/k
window_mask = hann(2048, sym=False) # same as 0.5-0.5*np.cos(2*np.pi*x/(k))
for ind in range(k):
wsin[ind,0,:] = window_mask*np.sin(2*np.pi*ind/k*x)
wcos[ind,0,:] = window_mask*np.cos(2*np.pi*ind/k*x)
return wsin,wcos
wsin, wcos = create_filters(2048,2048)
wsin_var = Variable(torch.from_numpy(wsin), requires_grad=False)
wcos_var = Variable(torch.from_numpy(wcos),requires_grad=False)
network_input = torch.from_numpy(y).float()
network_input = network_input.reshape(1,-1)
zx = np.sqrt(conv1d(network_input[:,None,:], wsin_var, stride=stride).pow(2)+conv1d(network_input[:,None,:], wcos_var, stride=stride).pow(2))
pytorch_Xs = zx.cpu().numpy()
plt.plot(pytorch_Xs[0,:1025,0])
My Question
The two graphs might look the same, but if I check the two outputs with np.allclose, we can see that they are slightly different.
np.allclose(S_stft, pytorch_Xs[0,:1025,0].reshape(1025,1))
output >>> False
Only when I raise the tolerance to 1e-5 does it give a True result:
np.allclose(S_stft, pytorch_Xs[0,:1025,0].reshape(1025,1),atol=1e-5)
output >>> True
What causes the difference in values? Is it because of the data conversion using torch.from_numpy(y).float()?
I would like to have a difference in value less than 1e-7, 1e-8 is even better.
| The difference comes from their default floating-point precision:
NumPy's default float is 64-bit.
PyTorch's default float is 32-bit.
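If you want the outputs to agree more closely, one option is to run the whole PyTorch pipeline in float64 - a sketch, assuming you also change dtype=np.float32 to np.float64 inside create_filters:
wsin_var = torch.from_numpy(wsin)  # already float64 after the dtype change
wcos_var = torch.from_numpy(wcos)
network_input = torch.from_numpy(y).double().reshape(1, -1)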
| https://stackoverflow.com/questions/56416490/ |
How to run matlab .m files in google colab | I am currently trying to run this repo
https://github.com/Fanziapril/mvfnet
which requires a step:
"Run the Matlab/ModelGeneration/ModelGenerate.m to generate the shape
model "Model_Shape.mat" and copy it to the Matlab/"
Is it possible to run a .m file in colab to do this?
Also, I have looked into the oct2py library (https://blink1073.github.io/oct2py/), but was not able to successfully run the file.
I followed this How to run a MATLAB code on Python
| You need to first install octave with
!apt install octave
Then you can run your m-file with
!octave -W file.m
Here's a minimal example.
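If you would rather call the script from Python, the oct2py route mentioned in the question looks roughly like this (the path is the one from the repo; adjust as needed):
from oct2py import Oct2Py
oc = Oct2Py()
oc.eval("run('Matlab/ModelGeneration/ModelGenerate.m')")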
| https://stackoverflow.com/questions/56416657/ |
Using tensor.share_memory_() vs multiprocessing.Queue in PyTorch when training model across multiple processes | I'm using the multiprocessing package in pytorch to split the training across multiple processes. My x and y, train and test data are CUDA tensors. I'm trying to understand the difference between using the tensor.share_memory_() and the multiprocessing.Queue method to share cuda tensors. Which is preferred and why?
Here's my current code using tensor.share_memory_(). What changes should I make?
import torch
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from sklearn.model_selection import train_test_split

def train(model, features, target, epochs=1000):
X_train, x_test, Y_train, y_test = train_test_split(features,
target,
test_size=0.4,
random_state=0)
Xtrain_ = torch.from_numpy(X_train.values).float().share_memory_()
Xtest_ = torch.from_numpy(x_test.values).float().share_memory_()
Ytrain_ = (torch.from_numpy(Y_train.values).view(1,-1)[0]).share_memory_()
Ytest_ = (torch.from_numpy(y_test.values).view(1,-1)[0]).share_memory_()
optimizer = optim.Adam(model.parameters(), lr = 0.01)
loss_fn = nn.NLLLoss()
for epoch in range(epochs):
#training code here
# target method (train) ends here
mp.set_start_method('spawn')
model = Net()
model.share_memory()
processes = []
for rank in range(1):
p = mp.Process(target=train, args=(model, features, target))
p.start()
processes.append(p)
Env details: Python-3 and Linux
| They are the same. torch.multiprocessing.Queue uses tensor.share_memory_() internally.
| https://stackoverflow.com/questions/56426326/ |
SageMaker PyTorchModel passing custom variables | When deploying a model with SageMaker through the PyTorchModel class, is it possible to pass a custom environmental variable or kwargs?
I'd like to be able to switch the functionality of the serving code via a custom argument rather than needing to write multiple serve.py to handle different training model export methods.
model = PyTorchModel(name='my_model',
model_data=estimator.model_data,
role=role,
framework_version='1.0.0',
entry_point='serve.py',
source_dir='src',
sagemaker_session=sess,
predictor_cls=ImagePredictor,
<custom_argument?>
)
| Have you tried using the env parameter in your PyTorchModel? (cf. https://sagemaker.readthedocs.io/en/stable/model.html#sagemaker.model.Model)
model = PyTorchModel(name='my_model',
model_data=estimator.model_data,
role=role,
framework_version='1.0.0',
entry_point='serve.py',
source_dir='src',
sagemaker_session=sess,
predictor_cls=ImagePredictor,
env={'ENV_VALUE': 'val'}
)
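Inside serve.py you could then read the value back with the standard library (variable name taken from the example above):
import os

env_value = os.environ.get('ENV_VALUE', 'default')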
| https://stackoverflow.com/questions/56465660/ |
Nvcc missing when installing cudatoolkit? | I have installed cuda along pytorch with
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
However, it seems like nvcc was not installed along with it. If I try to use, for example, nvcc -V, I get the error that nvcc was not found and that I should install it with sudo apt install nvidia-cuda-toolkit.
Can I do this? (I don't want to just try it and then find out that it doesn't work / messes up the whole CUDA setup.)
And is this a bug or expected behavior?
I am using Ubuntu 18.04 and have CUDA 10.2.
| I met this problem when installing cudatoolkit 10.1 with PyTorch 1.4.
There is a conda-forge package: https://anaconda.org/conda-forge/cudatoolkit-dev. After installing it, nvcc as well as the other CUDA libraries will be available at /home/li/anaconda3/envs/<env_name>/pkgs/cuda-toolkit, in bin/ and lib/.
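For reference, the install command for that package is the usual conda-forge form:
conda install -c conda-forge cudatoolkit-dev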
| https://stackoverflow.com/questions/56470424/ |
PyTorch most efficient Jacobian/Hessian calculation | I am looking for the most efficient way to get the Jacobian of a function through Pytorch and have so far come up with the following solutions:
# Setup
import torch
from torch.autograd import Variable, grad
from time import time
def func(X):
return torch.stack((X.pow(2).sum(1),
X.pow(3).sum(1),
X.pow(4).sum(1)),1)
X = Variable(torch.ones(1,int(1e5))*2.00094, requires_grad=True).cuda()
# Solution 1:
t = time()
Y = func(X)
J = torch.zeros(3, int(1e5))
for i in range(3):
J[i] = grad(Y[0][i], X, create_graph=True, retain_graph=True, allow_unused=True)[0]
print(time()-t)
>>> Output: 0.002 s
# Solution 2:
def Jacobian(f,X):
X_batch = Variable(X.repeat(3,1), requires_grad=True)
f(X_batch).backward(torch.eye(3).cuda(), retain_graph=True)
return X_batch.grad
t = time()
J2 = Jacobian(func,X)
print(time()-t)
>>> Output: 0.001 s
Since there doesn't seem to be a big difference between using a loop in the first solution and the second one, I wanted to ask if there might still be a faster way to calculate a Jacobian in PyTorch.
My other question is about the most efficient way to calculate the Hessian.
Finally, does anyone know if something like this can be done more easily or more efficiently in TensorFlow?
| functorch can speed up computations even more. E.g., this code is from the functorch docs for batched Jacobian calculation (Hessian works too):
import torch
import torch.nn.functional as F
from functorch import vmap, jacrev

batch_size = 64
Din = 31
Dout = 33
weight = torch.randn(Dout, Din)
print(f"weight shape = {weight.shape}")
bias = torch.randn(Dout)
def predict(weight, bias, x):
return F.linear(x, weight, bias).tanh()
x = torch.randn(batch_size, Din)
compute_batch_jacobian = vmap(jacrev(predict, argnums=2), in_dims=(None, None, 0))
batch_jacobian0 = compute_batch_jacobian(weight, bias, x)
| https://stackoverflow.com/questions/56480578/ |
IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number | import torch

def nms(bboxes, scores, threshold=0.5):
'''
bboxes(tensor) [N,4]
scores(tensor) [N,]
'''
x1 = bboxes[:,0]
y1 = bboxes[:,1]
x2 = bboxes[:,2]
y2 = bboxes[:,3]
areas = (x2-x1) * (y2-y1)
_,order = scores.sort(0,descending=True)
keep = []
while order.numel() > 0:
i = order[0]
keep.append(i)
if order.numel() == 1:
break
xx1 = x1[order[1:]].clamp(min=x1[i])
yy1 = y1[order[1:]].clamp(min=y1[i])
xx2 = x2[order[1:]].clamp(max=x2[i])
yy2 = y2[order[1:]].clamp(max=y2[i])
w = (xx2-xx1).clamp(min=0)
h = (yy2-yy1).clamp(min=0)
inter = w*h
ovr = inter / (areas[i] + areas[order[1:]] - inter)
ids = (ovr<=threshold).nonzero().squeeze()
if ids.numel() == 0:
break
order = order[ids+1]
return torch.LongTensor(keep)
I tried
i = order.item()
but it does not work.
| I found the solution in the GitHub issues here:
Try changing
i = order[0] # works for PyTorch 0.4.1.
to
i = order # works for PyTorch>=0.5.
| https://stackoverflow.com/questions/56483122/ |
Add padding in pytorch c++ API | I have a tensor with dimensions (1, 3, 375, 1242). I want to reshape it to (1, 3, 384, 1248) by adding padding to it. How do I do that in the PyTorch C++ API? Thank you in advance.
target = torch.zeros(1, 3, 384, 1248)
source = torch.ones(1, 3, 375, 1242)
target[: , : , :375, :1242] = source
| You can use torch::constant_pad_nd:
torch::Tensor source = torch::ones(torch::IntList{1, 3, 375, 1242});
// pad the last dimension with 6 zeros (on the right) and the third dimension with 9 zeros (at the bottom)
torch::Tensor target = torch::constant_pad_nd(source, torch::IntList{0, 6, 0, 9}, 0);
| https://stackoverflow.com/questions/56490480/ |
how to convert series numpy array into tensors using pytorch | I am trying to convert image labels convert into tensor, but I got some error please help me to convert to tensor:
Here My code:
import torch
from sklearn.model_selection import train_test_split

features_train, features_test, targets_train, targets_test = train_test_split(X,Y,test_size=0.2,
random_state=42)
X_train = torch.from_numpy(features_train)
X_test = torch.from_numpy(features_test)
Y_train =torch.from_numpy(targets_train).type(torch.IntTensor)
Y_test = torch.from_numpy(targets_test).type(torch.IntTensor)
train = torch.utils.data.TensorDataset(X_train,Y_train)
test = torch.utils.data.TensorDataset(X_test,Y_test)
train_loader = torch.utils.data.DataLoader(train, batch_size = train_batch_size, shuffle = False)
test_loader = torch.utils.data.DataLoader(test, batch_size = test_batch_size, shuffle = False)
Here my error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-f1578581ff5c> in <module>()
5 X_test = torch.from_numpy(features_test)
6
----> 7 Y_train =torch.from_numpy(targets_train).type(torch.IntTensor)
8 Y_test = torch.from_numpy(targets_test).type(torch.IntTensor)
9 train = torch.utils.data.TensorDataset(X_train,Y_train)
TypeError: expected np.ndarray (got Series)
Here are my array values (targets_train is a pandas Series):
targets_train
478 1
5099 3
1203 2
5674 2
142 1
4836 2
4031 1
1553 3
4416 1
605 5
1194 3
4319 4
1498 5
| Here is what I would do:
import torch
import numpy as np
n = np.arange(10)
print(n) #[0 1 2 3 4 5 6 7 8 9]
t1 = torch.Tensor(n) # as torch.float32
print(t1) #tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
t2 = torch.from_numpy(n) # as torch.int32
print(t2) #tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=torch.int32)
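Since the error says the input is a pandas Series rather than an ndarray, the direct fix for the question's code is to take the underlying NumPy array first:
Y_train = torch.from_numpy(targets_train.values).type(torch.IntTensor)
Y_test = torch.from_numpy(targets_test.values).type(torch.IntTensor)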
| https://stackoverflow.com/questions/56503937/ |
PyTorch LSTM input dimension | I'm trying to train a simple 2-layer neural network with PyTorch LSTMs and I'm having trouble interpreting the PyTorch documentation. Specifically, I'm not too sure how to go about the shape of my training data.
What I want to do is train my network on a very large dataset through mini-batches, where each batch is say, 100 elements long. Each data element will have 5 features. The documentation states that the input to the layer should be of shape (seq_len, batch_size, input_size). How should I go about shaping the input?
I've been following this post: https://discuss.pytorch.org/t/understanding-lstm-input/31110/3
and if I'm interpreting it correctly, each minibatch should be of shape (100, 100, 5). But in this case, what's the difference between seq_len and batch_size? Also, would this mean that the input LSTM layer should have 5 units?
Thank you!
| This is an old question, but since it has been viewed 80+ times with no response, let me take a crack at it.
An LSTM network is used to predict a sequence. In NLP, that would be a sequence of words; in economics, a sequence of economic indicators; etc.
The first parameter is the length of those sequences. If your sequence data is made of sentences, then "Tom has a black and ugly cat" is a sequence of length 7 (seq_len), one for each word, and maybe an 8th to indicate the end of the sentence.
Of course, you might object "what if my sequences are of varying length?" which is a common situation.
The two most common solutions are:
Pad your sequences with empty elements. For instance, if the longest sentence you have has 15 words, then encode the sentence above as "[Tom] [has] [a] [black] [and] [ugly] [cat] [EOS] [] [] [] [] [] [] []", where EOS stands for end of sentence. Suddenly, all your sequences become of length 15, which solves your issue. As soon as the [EOS] token is found, the model will learn quickly that it is followed by an unlimited sequence of empty tokens [], and that approach will barely tax your network.
Send mini-batches of equal lengths. For instance, train the network on all sentences with 2 words, then with 3, then with 4. Of course, seq_len will be increased at each mini batch, and the size of each mini batch will vary based on how many sequences of length N you have in your data.
A best-of-both-world approach would be to divide your data into mini batches of roughly equal sizes, grouping them by approximate length, and adding only the necessary padding. For instance, if you mini-batch together sentences of length 6, 7 and 8, then sequences of length 8 will require no padding, whereas sequence of length 6 will require only 2. If you have a large dataset with sequences of widely varying length, that's the best approach.
Option 1 is the easiest (and laziest) approach, though, and will work great on small datasets.
One last thing... Always pad your data at the end, not at the beginning.
I hope that helps.
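As for the shapes in the question: input_size is the number of features per timestep (5 here), and with seq_len = 100 and batch_size = 100 a quick sanity check looks like this (hidden size is arbitrary):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=32)
x = torch.randn(100, 100, 5)  # (seq_len, batch_size, input_size) -- PyTorch's default layout
output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([100, 100, 32])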
| https://stackoverflow.com/questions/56506412/ |
Why does PyTorch training on CUDA work much slower than on CPU? | I guess I have made a mistake somewhere in the following simple neural network with PyTorch, because it runs much slower on CUDA than on the CPU. Can you find the mistake, please? Using a function like
def backward(ctx, input):
    return backward_sigm(ctx, input)
seems to have no real impact on performance.
import torch
import torch.nn as nn
import torch.nn.functional as f
dname = 'cuda:0'
dname = 'cpu'
device = torch.device(dname)
print(torch.version.cuda)
def forward_sigm(ctx, input):
sigm = 1 / (1 + torch.exp(-input))
ctx.save_for_backward(sigm)
return sigm
def forward_step(ctx, input):
return torch.tensor(input > 0.5, dtype = torch.float32, device = device)
def backward_sigm(ctx, grad_output):
sigm, = ctx.saved_tensors
return grad_output * sigm * (1-sigm)
def backward_step(ctx, grad_output):
return grad_output
class StepAF(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
return forward_sigm(ctx, input)
@staticmethod
def backward(ctx, input):
return backward_sigm(ctx, input)
#else return grad_output
class StepNN(torch.nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(StepNN, self).__init__()
self.linear1 = torch.nn.Linear(input_size, hidden_size)
#self.linear1.cuda()
self.linear2 = torch.nn.Linear(hidden_size, output_size)
#self.linear2.cuda()
#self.StepAF = StepAF.apply
def forward(self,x):
h_line_1 = self.linear1(x)
h_thrash_1 = StepAF.apply(h_line_1)
h_line_2 = self.linear2(h_thrash_1)
output = StepAF.apply(h_line_2)
return output
inputs = torch.tensor( [[1,0,1,0],[1,0,0,1],[0,1,0,1],[0,1,1,0],[1,0,0,0],[0,0,0,1],[1,1,0,1],[0,1,0,0],], dtype = torch.float32, device = device)
expected = torch.tensor( [[1,0,0],[1,0,0],[0,1,0],[0,1,0],[1,0,0],[0,0,1],[0,1,0],[0,0,1],], dtype = torch.float32, device = device)
nn = StepNN(4,8,3)
#print(*(x for x in nn.parameters()))
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(nn.parameters(), lr=1e-3)
steps = 50000
print_steps = steps // 20
good_loss = 1e-5
for t in range(steps):
output = nn(inputs)
loss = criterion(output, expected)
if t % print_steps == 0:
print('step ',t, ', loss :' , loss.item())
if loss < good_loss:
print('step ',t, ', loss :' , loss.item())
break
optimizer.zero_grad()
loss.backward()
optimizer.step()
test = torch.tensor( [[0,1,0,1],[0,1,1,0],[1,0,1,0],[1,1,0,1],], dtype = torch.float32, device=device)
print(nn(test))
| Unless you have large enough data, you won't see any performance improvement while using GPU. The problem is that GPUs use parallel processing, so unless you have large amounts of data, the CPU can process the samples almost as fast as the GPU.
As far as I can see in your example, you are using 8 samples of size (4, 1). I would imagine maybe when having over hundreds or thousands of samples, then you would see the performance improvement on a GPU. In your case, the sample size is (4, 1), and the hidden layer size is 8, so the CPU can perform the calculations fairly quickly.
There are lots of example notebooks online of people using MNIST data (it has around 60000 images for training), so you could load one in maybe Google Colab and then try training on the CPU and then on GPU and observe the training times. You could try this link for example. It uses TensorFlow instead of PyTorch but it will give you an idea of the performance improvement of a GPU.
Note: if you haven't used Google Colab before, then you need to change the runtime type (None for CPU, GPU for GPU) in the Runtime menu at the top.
Also, I will post the results from this notebook here itself (look at the time mentioned in the brackets, and if you run it, you can see firsthand how fast it runs) :
On CPU :
INFO:tensorflow:loss = 294.3736, step = 1
INFO:tensorflow:loss = 28.285727, step = 101 (23.769 sec)
INFO:tensorflow:loss = 23.518856, step = 201 (24.128 sec)
On GPU :
INFO:tensorflow:loss = 295.08328, step = 0
INFO:tensorflow:loss = 47.37291, step = 100 (4.709 sec)
INFO:tensorflow:loss = 23.31364, step = 200 (4.581 sec)
INFO:tensorflow:loss = 9.980572, step = 300 (4.572 sec)
INFO:tensorflow:loss = 17.769928, step = 400 (4.560 sec)
INFO:tensorflow:loss = 16.345463, step = 500 (4.531 sec)
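A caveat if you time this yourself: CUDA kernel launches are asynchronous, so you should synchronize before reading the clock, roughly like this:
import time
import torch

torch.cuda.synchronize()  # wait for any pending kernels
start = time.time()
output = nn(inputs)       # the model from the question (it is named nn there)
torch.cuda.synchronize()  # make sure the work has actually finished
print(time.time() - start)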
| https://stackoverflow.com/questions/56509469/ |
How to convert torch int64 to torch LongTensor? | I am going through a course which uses a deprecated version of PyTorch, which does not change torch.int64 to torch.LongTensor as needed. The current section of code which is throwing the error is:
loss = loss_fn(Ypred, Ytrain_) # calc loss on the prediction
I believe the dtype should be changed in thhis section though:
Ytrain_ = torch.from_numpy(y_train.values).view(1, -1)[0].
When testing the data type with Ytrain_.dtype, it returns torch.int64. I have tried to convert it by applying the long() function, as in Ytrain_ = Ytrain_.long(), to no avail.
I have also tried looking for it in the documentation, but it seems to say torch.int64 OR torch.long, which I assume means torch.int64 should work.
RuntimeError Traceback (most recent call last)
----> 9 loss = loss_fn(Ypred, Ytrain_) # calc loss on the prediction
RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #2 'target'
| As stated by user8426627 you want to change the tensor type, not the data type. Therefore the solution was to add .type(torch.LongTensor) to convert it to a LongTensor.
Final code:
Ytrain_ = torch.from_numpy(Y_train.values).view(1, -1)[0].type(torch.LongTensor)
Test tensor type:
Ytrain_.type()
'torch.LongTensor'
| https://stackoverflow.com/questions/56510189/ |
converting tensor to one hot encoded tensor of indices | I have a label tensor of shape (1, 1, 128, 128, 128) in which the values might range from 0 to 24. I want to convert this to a one-hot encoded tensor, using the nn.functional.one_hot function:
n = 24
one_hot = torch.nn.functional.one_hot(indices, n)
but this expects a tensor of indices and, honestly, I am not sure how to get those. The only tensor I have is the label tensor of the shape described above, and it contains values ranging from 1 to 24, not indices.
How can I get a tensor of indices from my tensor? Thanks in advance.
| If the error you are getting is this one:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: one_hot is only applicable to index tensor.
Maybe you just need to convert to int64:
import torch
# random Tensor with the shape you said
indices = torch.Tensor(1, 1, 128, 128, 128).random_(1, 24)
# indices.shape => torch.Size([1, 1, 128, 128, 128])
# indices.dtype => torch.float32
n = 24
one_hot = torch.nn.functional.one_hot(indices.to(torch.int64), n)
# one_hot.shape => torch.Size([1, 1, 128, 128, 128, 24])
# one_hot.dtype => torch.int64
You can use indices.long() too.
| https://stackoverflow.com/questions/56513576/ |
best_state changes with the model during training in pytorch | I want to save the best model and then load it during the test. So I used the following method:
def train():
#training steps …
if acc > best_acc:
best_state = model.state_dict()
best_acc = acc
return best_state
Then, in the main function I used:
model.load_state_dict(best_state)
to resume the model.
However, I found that best_state is always the same as the last state during training, not the best state. Does anyone know the reason, and how to avoid it?
By the way, I know I can use torch.save(the_model.state_dict(), PATH) and then load the model by
the_model.load_state_dict(torch.load(PATH)).
However, I don’t want to save the parameters to file as train and test functions are in one file.
| model.state_dict() is an OrderedDict:
from collections import OrderedDict
You can use deepcopy to fix the problem:
from copy import deepcopy
Instead of:
best_state = model.state_dict()
you should use:
best_state = deepcopy(model.state_dict())
A deep (not shallow) copy means that later training updates to the model's mutable OrderedDict no longer mutate best_state as training goes on.
You may check my other answer on saving the state dict in PyTorch.
| https://stackoverflow.com/questions/56526698/ |
When to use layernorm/batch norm? | Where should you splice the normalization when designing a network? E.g. if you have a stacked Transformer or Attention network, does it make sense to normalize any time after you have a dense layer?
| What the original paper tries to explain is how to reduce overfitting by using Batch Normalization.
Where should you splice the normalization when designing a network?
Apply the normalization early, on the inputs: unbalanced inputs with extreme values can cause instability.
If you normalize only the outputs, this will not prevent the inputs from causing the instability all over again.
Here is the little code that explains what the BN do:
import torch
import torch.nn as nn
m = nn.BatchNorm1d(100, affine=False)
input = 1000*torch.randn(3, 100)
print(input)
output = m(input)
print(output)
print(output.mean()) # should be ~ 0
print(output.std()) # should be ~ 1
Does it make sense to normalize any time after you have a dense layer
Yes, you may do so, as matrix multiplication may lead to producing extreme values. The same applies after convolution layers, because these are also matrix multiplications - similar but less intense compared to a dense (nn.Linear) layer. If you, for instance, print the resnet model, you will see that a batch norm is set after every conv layer, like this:
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
To print the full resnet you may use this:
import torchvision.models as models
r = models.resnet18()
print(r)
| https://stackoverflow.com/questions/56535040/ |
How do I install Pytorch offline? | I need to install Pytorch on a computer with no internet connection.
I've tried finding information about this online but can't find a single piece of documentation.
Do you know how I can do this? Is it even possible?
| An easy way with pip:
Create an empty folder
pip download torch using the connected computer. You'll get the pytorch package and all its dependencies.
Copy the folder to the offline computer. You must be using the same python setup on both computers (this goes for virtual environments as well)
pip install * on the offline computer, in the copied folder. This installs all the packages in the correct order. You can then use pytorch.
Note that this works for (almost) any kind of python package.
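Concretely, the two commands look roughly like this (the folder name is arbitrary; --no-index/--find-links is a common alternative to pip install *):
pip download torch -d ./torch_offline                      # on the online machine
pip install --no-index --find-links ./torch_offline torch  # on the offline machine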
| https://stackoverflow.com/questions/56539865/ |
Pytorch loss of convolution is 0.0 from start | I am building a conv net that classifies dogs and cats. The architecture is pretty simple: 2 conv blocks (with batch norm, LeakyReLU, max pooling) into 1 FC layer. The input images are resized to 64, which is fine. The problem is that the loss is 0.0 from the start, and I have no clue what the cause is. I couldn't find any answer. I have written every detail that might be important; if you need anything else, please tell me and I will edit.
main.py
import torch
import torch.nn as nn
from torchvision import transforms, datasets
import PIL
import matplotlib.pyplot as plt
from Dataset import Dataset
from Network import Network
# Added to avoid torch._C._cuda_init() \n RuntimeError: CUDA error: unknown error
torch.cuda.current_device()
# Hyper Parameters
batch_size = 1
img_size = 64
learning_rate = 0.001
num_epoch = 1
# Directories
trainDir = "D:/Programming/python/Deep learning/datasets/dogs-vs-cats/train"
testDir = "D:/Programming/python/Deep learning/datasets/dogs-vs-cats/test1"
print("Initializing...")
# Device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Augmentation
transforms = transforms.Compose([
transforms.Resize((img_size, img_size)),
transforms.ColorJitter(hue=.05, saturation=.05),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(20, resample=PIL.Image.BILINEAR) ,
transforms.ToTensor()
])
trainset = datasets.ImageFolder(root=trainDir, transform=transforms)
testset = datasets.ImageFolder(root=testDir, transform=transforms)
train_loader = torch.utils.data.DataLoader(
trainset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
testset, batch_size=batch_size, shuffle=False) # test set will not be shuffled
model = Network(img_size,2).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
total_step = len(train_loader)
print("Tranining started")
for epoch in range(num_epoch):
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
# forward propagate
outputs = model(images)
loss = criterion(outputs, labels)
# backpropagte and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print(
"Epoch [{}/{}], Step[{}/{}], Loss: {}".format(
epoch+1, num_epoch, i+1, total_step, loss.item()
)
)
print("Tranining complete, validation started")
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy: {} %'.format(100 * correct / total))
#
torch.save(model.state_dict(), "model.ckpy")
Network.py
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
"""
Input size for conv
l = number of input feature maps
k = number of output feature maps
n, m = width and height of kernel
total parameter = (n*m*l+1)*k
"""
class Network(nn.Module):
def __init__(self, input_size, num_class):
super(Network, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.LeakyReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
) # output size = (128, 128, 16)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.LeakyReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
) # output size = (64, 64, 32)
self.fc1 = nn.Linear(
int((input_size/4)**2*32), num_class
)
def forward(self, x):
out = self.conv1(x)
out = self.conv2(out)
out = out.view(out.size(0), -1)
out = self.fc1(out)
return out
Output
Epoch [1/1], Step[5800/25000], Loss: 0.0
Epoch [1/1], Step[5900/25000], Loss: 0.0
Epoch [1/1], Step[6000/25000], Loss: 0.0
Epoch [1/1], Step[6100/25000], Loss: 0.0
Epoch [1/1], Step[6200/25000], Loss: 0.0
Epoch [1/1], Step[6300/25000], Loss: 0.0
Epoch [1/1], Step[6400/25000], Loss: 0.0
Epoch [1/1], Step[6500/25000], Loss: 0.0
Result after each layer
outputs of conv1,2
[[ 3.0135e-01, 3.5849e-01, 4.7758e-01, ..., 3.9759e-01,
3.7988e-01, 9.7870e-01],
[ 4.3010e-01, 6.0753e-03, 4.5642e-01, ..., -8.5486e-04,
4.4537e-02, 2.9074e-01],
[ 3.8567e-01, 7.8431e-02, 2.3859e-01, ..., -3.0013e-03,
-5.5821e-03, 1.2284e-01],
...,
[ 3.9181e-01, 3.9093e-01, 1.2053e-01, ..., -4.7156e-03,
5.6266e-01, 7.7017e-01],
outputs of fc1
[[-0.0772, 0.2166]]
| loss = criterion(output, target.view(-1)) # Flatten target
Try this.
Could you also remove these two lines?
images = images.to(device)
labels = labels.to(device)
self.conv1 and self.conv2 must be sent to CUDA: self.conv1.cuda() and self.conv2.cuda()
| https://stackoverflow.com/questions/56549448/ |
How to solve the run time error "Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment" | I want to use output variables of NN as an input in another function,but met with error like this 'Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment'.The out variables require gradient.
I tried by changing the output variables to numpy values, but in that case the back propagataion does not work because it see numpy values as variables which does not need gradient.
output = model(SOC[13])
# Three output values of NN
Rs=output[0]
R1=output[1]
C1=output[2]
# Using these variables in another function
num = [Rs*R1*C1, R1+Rs]
den = [C1*R1, 1]
G = control.tf(num,den)
It should work, but it gives the following error:
14 num=[Rs*R1*C1,R1+Rs]
15 den=[C1*R1,1]
---> 16 G = control.tf(num,den)
~\Anaconda3\lib\site-packages\control\xferfcn.py in __init__(self, *args)
106
107 """
--> 108 args = deepcopy(args)
109 if len(args) == 2:
110 # The user provided a numerator and a denominator.
~\Anaconda3\lib\site-packages\torch\tensor.py in __deepcopy__(self, memo)
16 def __deepcopy__(self, memo):
17 if not self.is_leaf:
---> 18 raise RuntimeError("Only Tensors created explicitly by the user "
19 "(graph leaves) support the deepcopy protocol at the moment")
20 if id(self) in memo:
| In PyTorch, you can use the .detach() method:
new_tensor = tensor.detach()
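Applied to the question's code, that would be something like the sketch below. Note that detaching cuts the computational graph, so gradients will not flow back through control.tf:
num = [(Rs*R1*C1).detach(), (R1+Rs).detach()]
den = [(C1*R1).detach(), 1]
G = control.tf(num, den)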
| https://stackoverflow.com/questions/56590886/ |
What is a fused kernel (or fused layer) in deep learning? | I am reading the Apex AMP documentation:
A Python-only build omits:
Fused kernels required to use apex.optimizers.FusedAdam.
Fused kernels
required to use apex.normalization.FusedLayerNorm.
Fused kernels that
improve the performance and numerical stability of
apex.parallel.SyncBatchNorm.
Fused kernels that improve the
performance of apex.parallel.DistributedDataParallel and apex.amp.
DistributedDataParallel, amp, and SyncBatchNorm will still be usable,
but they may be slower.
There also seems to be a "FusedAdam" optimizer:
The Adam optimizer in Pytorch (like all Pytorch optimizers) carries
out optimizer.step() by looping over parameters, and launching a
series of kernels for each parameter. This can require hundreds of
small launches that are mostly bound by CPU-side Python looping and
kernel launch overhead, resulting in poor device utilization.
Currently, the FusedAdam implementation in Apex flattens the
parameters for the optimization step, then carries out the
optimization step itself via a fused kernel that combines all the Adam
operations. In this way, the loop over parameters as well as the
internal series of Adam operations for each parameter are fused such
that optimizer.step() requires only a few kernel launches.
The current implementation (in Apex master) is brittle and only works
with Amp opt_level O2. I’ve got a WIP branch to make it work for any
opt_level (https://github.com/NVIDIA/apex/pull/351). I recommend
waiting until this is merged then trying it.
This partially explains it. I'm left with more questions:
What is meant by kernel? A layer or an optimizer?
Is the idea of fused layer the same as a fused optimizer?
|
"Kernel" here is for computation kernels: https://en.wikipedia.org/wiki/Compute_kernel
Operations like convolution are often implemented using compute kernels for better efficiency. Compute kernels can be written using C, CUDA, OpenCL or even assembly for maximum efficiency. It is therefore not surprising that "a Python-only build" does not support these fused kernels.
"Fusing" means commonalization of computation steps. Basically, it's an implementation trick to run code more efficiently by combining similar operations in a single hardware (GPU, CPU or TPU) operation. Therefore, a "fusedLayer" is a layer where operations benefit from a "fused" implementation.
| https://stackoverflow.com/questions/56601075/ |
Pytorch: How to implement nested transformers: a character-level transformer for words and a word-level transformer for sentences? | I have a model in mind, but I'm having a hard time figuring out how to actually implement it in Pytorch, especially when it comes to training the model (e.g. how to define mini-batches, etc.). First of all let me quickly introduce the context:
I'm working on VQA (visual question answering), in which the task is to answer questions about images, for example:
So, letting aside many details, I just want to focus here on the NLP aspect/branch of the model. In order to process the natural language question, I want to use character-level embeddings (instead of traditional word-level embeddings) because they are more robust in the sense that they can easily accommodate for morphological variations in words (e.g. prefixes, suffixes, plurals, verb conjugations, hyphens, etc.). But at the same time I don't want to lose the inductive bias of reasoning at the word level. Therefore, I came up with the following design:
As you can see in the picture above, I want to use transformers (or even better, universal transformers), but with a little twist. I want to use 2 transformers: the first one will process each word characters in isolation (character-level transformer) to produce an initial word-level embedding for each word in the question. Once we have all these initial word-level embeddings, a second word-level transformer will refine these embeddings to enrich their representation with context, thus obtaining context-aware word-level embeddings.
The full model for the whole VQA task obviously is more complex, but I just want to focus here on this NLP part. So my question is basically about which Pytorch functions should I pay attention to when implementing this. For example, since I'll be using character-level embeddings I have to define a character-level embedding matrix, but then I have to perform lookups on this matrix to generate the inputs for the character-level transformer, repeat this for each word in the question and then feed all these vectors into the word-level transformer. Moreover, words in a single question can have different lengths, and questions within a single mini-batch can have different lengths too. So in my code I have to somehow account for different lengths at the word and the question level simultaneously in a single mini-batch (during training), and I've got no idea how to do that in Pytorch or whether it's even possible at all.
Any tips on how to go about implementing this in Pytorch that could lead me in the right direction will be deeply appreciated.
| A way to implement what you say in pyTorch would require adapting the Transformer encoder:
1) Define a custom tokenizer that splits words into character embeddings (instead of word or word-piece embeddings)
2) Define a mask for each word (similar to what the original paper used to mask future tokens in the decoder), in order to force the model to be constrained to the word-context in the first stage (see the mask sketch after this list)
3) Then use a traditional Transformer with the mask (effectively restricting word-level context).
4) Then discard the mask and apply Transformer again (sentence-level context).
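A minimal sketch of the per-word mask from step 2 (the boolean convention, where True marks positions a character may NOT attend to, is an assumption matching what nn.Transformer expects):
import torch
def word_level_mask(word_lengths):
    # Block-diagonal attention mask: each word only sees its own characters
    n = sum(word_lengths)
    mask = torch.ones(n, n, dtype=torch.bool)
    start = 0
    for length in word_lengths:
        mask[start:start + length, start:start + length] = False
        start += length
    return mask
# e.g. a question tokenized into words of 2, 2, 3 and 1 characters
print(word_level_mask([2, 2, 3, 1]))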
Things to be careful about:
1) Remember that Transformer encoder's output length is always the same size as the input (the decoder is the one able to produce longer or shorter sequences). So in your first stage, you will not have word-level embeddings (as shown in your diagram) but character level embeddings. If you want to merge them into word level embeddings, you will need an additional intermediate decoder step or merge the embeddings using a custom strategy (ex: a learnt weighted sum or using something similar to BERT's token).
2) You may face efficiency issues. Remember that Transformer is O(n^2), so the longer the sequence, the more computationally expensive it is. In the original Transformer, if you had a sentence of length 10 words, then the Transformer will have to deal with a 10-token sequence. If you use word-piece embeddings, your model will work at around ~15-token sequences. But if you use character-level embeddings, I estimate that you will be dealing with ~50-token sequences, which may not be feasible for long sentences, so you may need to truncate your input (and you will be losing all the long-term dependency power of attention models).
3) Are you sure that you will have a representational contribution by adding the character-level Transformer? Transformer aims to enrich embeddings based on the context (surrounding embeddings), that's why the original implementation used word-level embeddings. BERT uses word-piece embeddings, to take advantage of language regularities in related words, and GPT-2 uses Byte-Pair Encoding (BPE), which I don't recommend in your case, because it is more suited for next-token prediction. In your case, what information do you think will be captured at the learnt character embeddings so that it can be effectively shared between the characters of the word? Do you think it will be richer than using a learnt embedding for each word or word-piece? My guess is that this is what you are trying to find out... right?
| https://stackoverflow.com/questions/56602442/ |
No module named torch.distributed | ImportError: No module named torch.distributed
File "train.py", line 4, in <module>
import torch.distributed as dist
ImportError: No module named torch.distributed
I installed CUDA and cuDNN, then created an env and ran pip3 install torch torchvision, but I am getting this error.
| This works for me:
Create a conda virtual environment:
conda create -n env_pytorch python=3.6
Active this environment create above:
source activate env_pytorch
Install PyTorch with pip or pip3:
pip install torch torchvision --user
| https://stackoverflow.com/questions/56637726/ |
What does nnz in mean in the output of torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False)? | What does nnz mean in the output of below pytorch function
torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False)
It can be found at this link https://pytorch.org/docs/stable/torch.html
i = torch.tensor([[0, 1, 1],
[2, 0, 2]])
v = torch.tensor([3, 4, 5], dtype=torch.float32)
torch.sparse_coo_tensor(i, v, [2, 4],
dtype=torch.float64,
device=torch.device('cuda:0'))
tensor(indices=tensor([[0, 1, 1],
[2, 0, 2]]),
values=tensor([3., 4., 5.]),
device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,
layout=torch.sparse_coo)
| nnz means the number of non-zero elements. In this example nnz = 3, since the sparse tensor stores the three values 3., 4. and 5.
| https://stackoverflow.com/questions/56641064/ |
How does DataParallel figure out which gpu I want to use? | I want to find a simple way to specify the gpus that my experiments run on. Currently, I know I can use prepend my python command with CUDA_VISIBLE_DEVICES=1,2,3,4 to set the gpu, and I am guessing DataParallel will then try to use all the gpu.
Is there a way to tell DataParallel directly the ids, like 4,7,9,12?
| Yes, DataParallel provides the option of directly passing the gpu ids.
As per the official documentation here, Data Parallelism is implemented using torch.nn.DataParallel. One can wrap a Module in DataParallel and it will be parallelized over multiple GPUs in the batch dimension.
torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)
In your case, you can simply do something like this:
torch.nn.DataParallel(model, device_ids=[4, 7, 9, 12])
output = net(input_var) # input_var can be on any device, including CPU
You can know more about how to pass gpu ids directly to DataParallel in below links:
MULTI-GPU EXAMPLES
DataParallel layers
| https://stackoverflow.com/questions/56641967/ |
How to fix pytorch multi processing issue on cpu? | I'm doing inference of pytorch on CPU. I found pytorch is not utilizing all the cores of CPU for prediction. How to use all cores in pytorch?
| Skeleton
Using the skeleton below I see 4 processes running. You should tweak n_train_processes. I set it to 10, which was too much as I have 8 cores. Setting it to 6 works fine.
...
import torch.multiprocessing as mp
class MyModel(nn.Module):
...
def train(model, rank):
...
def test(model):
...
n_train_processes = 3
if __name__ == '__main__':
model = MyModel()
model.share_memory()
processes = []
for rank in range(n_train_processes + 1): # + 1 for test process
if rank == 0:
p = mp.Process(target=test, args=(model,))
else:
p = mp.Process(target=train, args=(model, rank,))
p.start()
processes.append(p)
for p in processes:
p.join()
Complete example
This example is taken from https://github.com/seungeunrho/minimalRL which has some other nice RL examples. This is a3c.py.
# a3c.py
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
import torch.multiprocessing as mp
import time
n_train_processes = 6
# Hyperparameters
learning_rate = 0.0002
update_interval = 5
gamma = 0.98
max_train_ep = 300
max_test_ep = 400
class ActorCritic(nn.Module):
def __init__(self):
super(ActorCritic, self).__init__()
self.fc1 = nn.Linear(4, 256)
self.fc_pi = nn.Linear(256, 2)
self.fc_v = nn.Linear(256, 1)
def pi(self, x, softmax_dim=0):
x = F.relu(self.fc1(x))
x = self.fc_pi(x)
prob = F.softmax(x, dim=softmax_dim)
return prob
def v(self, x):
x = F.relu(self.fc1(x))
v = self.fc_v(x)
return v
def train(model, rank):
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
env = gym.make('CartPole-v1')
for n_epi in range(max_train_ep):
done = False
s = env.reset()
while not done:
s_lst, a_lst, r_lst = [], [], []
for t in range(update_interval):
prob = model.pi(torch.from_numpy(s).float())
m = Categorical(prob)
a = m.sample().item()
s_prime, r, done, info = env.step(a)
s_lst.append(s)
a_lst.append([a])
r_lst.append(r/100.0)
s = s_prime
if done:
break
R = 0.0
R_lst = []
for reward in r_lst[::-1]:
R = gamma * R + reward
R_lst.append([R])
R_lst.reverse()
done_mask = 0.0 if done else 1.0
s_batch, a_batch, R_batch, s_final = \
torch.tensor(s_lst, dtype=torch.float), torch.tensor(a_lst), \
torch.tensor(R_lst), torch.tensor(s_prime, dtype=torch.float)
td_target = R_batch + gamma * model.v(s_final) * done_mask
advantage = td_target - model.v(s_batch)
pi = model.pi(s_batch, softmax_dim=1)
pi_a = pi.gather(1, a_batch)
loss = -torch.log(pi_a) * advantage.detach() + \
F.smooth_l1_loss(td_target.detach(), model.v(s_batch))
optimizer.zero_grad()
loss.mean().backward()
optimizer.step()
env.close()
print("Training process {} reached maximum episode.".format(rank))
def test(model):
env = gym.make('CartPole-v1')
score = 0.0
print_interval = 20
for n_epi in range(max_test_ep):
done = False
s = env.reset()
while not done:
prob = model.pi(torch.from_numpy(s).float())
a = Categorical(prob).sample().item()
s_prime, r, done, info = env.step(a)
s = s_prime
score += r
if n_epi % print_interval == 0 and n_epi != 0:
print("# of episode :{}, avg score : {:.1f}".format(
n_epi, score/print_interval))
score = 0.0
time.sleep(1)
env.close()
if __name__ == '__main__':
model = ActorCritic()
model.share_memory()
processes = []
for rank in range(n_train_processes + 1): # + 1 for test process
if rank == 0:
p = mp.Process(target=test, args=(model,))
else:
p = mp.Process(target=train, args=(model, rank,))
p.start()
processes.append(p)
for p in processes:
p.join()
| https://stackoverflow.com/questions/56647897/ |
fastai.vision Import Error: How to fix the import error so I can use ImageDataBunch.from_folder? | I'm working on google colab ( python 3.6 and GPU), I import torch (1.2.0) nicely and I use the following to import fastai:
import fastai
print(fastai.__version__)
from fastai import *
from fastai.vision import *
I get the following error:
ImportError: /usr/local/lib/python3.6/dist-packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c106Device8validateEv
I have tried to install different torch versions like 1.0.0 and work with earlier python versions. I have also tried to install fastai and its dependencies manually using !pip, but nothing worked.
This is the complete code I used to install torch:
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/{accelerator}/torch_nightly.html
!pip install fastai
import torch
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.backends.cudnn.enabled)
And this is the full error message I get:
ImportError
Traceback (most recent call last)
<ipython-input-5-4b8b8d8134df> in <module>()
2 print(fastai.__version__)
3 from fastai import *
----> 4 from fastai.vision import *
8 frames
/usr/local/lib/python3.6/dist-packages/torchvision/ops/boxes.py in
<module>()
1 import torch
----> 2 from torchvision import _C
3
4
5 def nms(boxes, scores, iou_threshold):
ImportError: /usr/local/lib/python3.6/dist-
packages/torchvision/_C.cpython-36m-x86_64-linux-gnu.so: undefined
symbol: _ZN3c106Device8validateEv
I'm unable to use ImageDataBunch.from_folder because of this fastai import error. I get the error NameError: name 'ImageDataBunch' is not defined when I do.
Note: I did use the same code before and I was able to use fastai and ImageDataBunch.from_folder with no import errors , but I'm guessing that an update to fastai or torch happened.
| You don't need to install PyTorch first when you use FastAi, it will do it for you.
If you need the latest FastAi, do this:
pip3 install git+https://github.com/fastai/fastai.git
| https://stackoverflow.com/questions/56649583/ |
PyTorch Convolution `in_channels` and `out_channels` meaning? | From the PyTorch documentation for Convolution, I see the function torch.nn.Conv1d requires users to pass the parameters in_channels and out_channels.
I know these refer to "input channels" and "output channels", but I am not sure what they mean in the context of a convolution. My guess is that in_channels is equivalent to the input features and out_channels is equivalent to the output features, but I am not sure.
Could someone explain what these arguments refer to?
| Given a convolution of:
length m
over N input channels / signals / variables
outputting P channels / features / filters
you would use:
nn.Conv1d(in_channels=N, out_channels=P, kernel_size=m)
This is illustrated for 2d images below in Deep Learning with PyTorch (where the kernels are of size 3x3xN (where N=3 for an RGB image), and there are 5 such kernels for the 5 outputs desired):
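As a quick runnable sanity check of these shapes (a minimal sketch with made-up sizes N=4, P=8, m=3):
import torch
import torch.nn as nn
N, P, m = 4, 8, 3
conv = nn.Conv1d(in_channels=N, out_channels=P, kernel_size=m)
x = torch.randn(1, N, 100)   # (batch, channels, length)
print(conv.weight.shape)     # torch.Size([8, 4, 3]): P kernels, each N x m
print(conv(x).shape)         # torch.Size([1, 8, 98])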
| https://stackoverflow.com/questions/56652204/ |
How Batch learning in Pytorch is performed? | When you look at how network architecture is built inside the pytorch code, we need to extend the torch.nn.Module and inside __init__, we define the module of networks and pytorch is going to track the gradients of parameters of these modules. Then inside the forward function, we define how the forward pass should be done for our network.
The thing I do not understand here is how batch learning occurs. In none of the definitions above, including the forward function, do we care about the batch dimension of the input to our network. The only thing we need to set to perform batch learning is to add an extra dimension to the input which corresponds to the batch size, but nothing inside the network definition is going to change if we are working with batch learning. At least, this is what I have seen in the code here.
So, if everything I have explained so far is correct (I would really appreciate it if you let me know if I have misunderstood something), how is batch learning performed if nothing is declared regarding the batch size inside the definition of our network class (the class that inherits torch.nn.Module)? Specifically, I am interested to know how the batch gradient descent algorithm is implemented in pytorch when we just set nn.MSELoss with a batch dimension.
| Check this:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
print("Hi ma")
print(x)
x = F.relu(x)
return x
n = Net()
r = n(torch.tensor(-1))
print(r)
r = n.forward(torch.tensor(1)) #not planned to call directly
print(r)
out:
Hi ma
tensor(-1)
tensor(0)
Hi ma
tensor(1)
tensor(1)
One thing to remember is that forward should not be called directly.
PyTorch makes this Module object n callable. It implements __call__ like:
def __call__(self, *input, **kwargs):
for hook in self._forward_pre_hooks.values():
hook(self, input)
if torch._C._get_tracing_state():
result = self._slow_forward(*input, **kwargs)
else:
result = self.forward(*input, **kwargs)
for hook in self._forward_hooks.values():
hook_result = hook(self, input, result)
if hook_result is not None:
raise RuntimeError(
"forward hooks should never return any values, but '{}'"
"didn't return None".format(hook))
if len(self._backward_hooks) > 0:
var = result
while not isinstance(var, torch.Tensor):
if isinstance(var, dict):
var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
else:
var = var[0]
grad_fn = var.grad_fn
if grad_fn is not None:
for hook in self._backward_hooks.values():
wrapper = functools.partial(hook, self)
functools.update_wrapper(wrapper, hook)
grad_fn.register_hook(wrapper)
return result
And just n() will call forward automatically.
In general, __init__ defines the module structure and forward() defines operations on a single batch.
That operation may repeat if needed for some structure elements or you may call functions on tensors directly like we did x = F.relu(x).
You got this right: everything in PyTorch works in batches (mini-batches), since PyTorch is optimized to work this way.
This means that when you read images, you will not read a single one, but a batch of bs images.
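A minimal sketch of the nn.MSELoss part of your question: the loss is reduced over the batch dimension by default, so one backward pass already averages the gradients over the mini-batch:
import torch
import torch.nn as nn
loss_fn = nn.MSELoss()                          # reduction='mean' by default
pred = torch.randn(32, 1, requires_grad=True)   # a mini-batch of 32 predictions
target = torch.randn(32, 1)
loss = loss_fn(pred, target)                    # one scalar, averaged over the batch
loss.backward()                                 # gradients averaged over the batch too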
| https://stackoverflow.com/questions/56658935/ |
What does layout = torch.strided mean? | As I was going through pytorch documentation I came across a term layout = torch.strided in many of the functions. Can anyone help me in understanding where is it used and how. The description says it's the the desired layout of returned Tensor. What does layout mean and how many types of layout are there ?
torch.rand(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Stride is the number of steps (or jumps) needed to go from one element to the next element, in a given dimension. In computer memory, the data is stored linearly in a contiguous block of memory. What we view is just a (re)presentation.
Let's take an example tensor for understanding this:
# a 2D tensor
In [62]: tensor = torch.arange(1, 16).reshape(3, 5)
In [63]: tensor
Out[63]:
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]])
With this tensor in place, the strides are:
# get the strides
In [64]: tensor.stride()
Out[64]: (5, 1)
What this resultant tuple (5, 1) says is:
to traverse along the 0th dimension/axis (Y-axis), let's say we want to jump from 1 to 6, we should take 5 steps (or jumps)
to traverse along the 1st dimension/axis (X-axis), let's say we want to jump from 7 to 8, we should take 1 step (or jump)
The order (or index) of 5 & 1 in the tuple represents the dimension/axis. You can also pass the dimension, for which you want the stride, as an argument:
# get stride for axis 0
In [65]: tensor.stride(0)
Out[65]: 5
# get stride for axis 1
In [66]: tensor.stride(1)
Out[66]: 1
With that understanding, we might ask why this extra parameter is needed when we create the tensors. The answer to that is efficiency. (How can we store/read/access the elements in a (sparse) tensor most efficiently?)
With sparse tensors (tensors where most of the elements are just zeroes), we don't want to store the zero values. We only store the non-zero values and their indices. With a desired shape, the rest of the values can then be filled with zeroes, yielding the desired sparse tensor.
For further reading on this, the following articles might be of help:
numpy.ndarray.strides
torch.layout
torch.sparse
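A small practical consequence, continuing the session above: transposing a tensor does not move any data, it only swaps the strides:
In [67]: tensor.stride()
Out[67]: (5, 1)
In [68]: tensor.t().stride()   # transpose swaps the strides, no copy is made
Out[68]: (1, 5)
In [69]: tensor.t().is_contiguous()
Out[69]: False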
P.S: I guess there's a typo in the torch.layout documentation which says
Strides are a list of integers ...
The composite data type returned by tensor.stride() is a tuple, not a list.
| https://stackoverflow.com/questions/56659255/ |
Meaning of parameters in torch.nn.conv2d | In the fastai cutting edge deep learning for coders course lecture 7.
self.conv1 = nn.Conv2d(3,10,kernel_size = 5,stride=1,padding=2)
Does 10 there mean the number of filters or the number of activations the filter will give?
Here is what you may find in the PyTorch documentation:
torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')
Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. (Default: 1)
padding (int or tuple, optional) – Zero-padding added to both sides of the input (Default: 0)
padding_mode (string, optional) – zeros
dilation (int or tuple, optional) – Spacing between kernel elements. (Default: 1)
groups (int, optional) – Number of blocked connections from input to output channels. (Default: 1)
bias (bool, optional) – If True, adds a learnable bias to the output. (Default: True)
And this URL has helpful visualization of the process.
So the in_channels in the beginning is 3 for images with 3 channels (colored images).
For images black and white it should be 1.
Some satellite images should have 4.
The out_channels is the number of filters and you can set this arbitrary.
Let's create an example to "prove" that.
import torch
import torch.nn as nn
c = nn.Conv2d(1,3, stride = 1, kernel_size=(4,5))
print(c.weight.shape)
print(c.weight)
Out
torch.Size([3, 1, 4, 5])
Parameter containing:
tensor([[[[ 0.1571, 0.0723, 0.0900, 0.1573, 0.0537],
[-0.1213, 0.0579, 0.0009, -0.1750, 0.1616],
[-0.0427, 0.1968, 0.1861, -0.1787, -0.2035],
[-0.0796, 0.1741, -0.2231, 0.2020, -0.1762]]],
[[[ 0.1811, 0.0660, 0.1653, 0.0605, 0.0417],
[ 0.1885, -0.0440, -0.1638, 0.1429, -0.0606],
[-0.1395, -0.1202, 0.0498, 0.0432, -0.1132],
[-0.2073, 0.1480, -0.1296, -0.1661, -0.0633]]],
[[[ 0.0435, -0.2017, 0.0676, -0.0711, -0.1972],
[ 0.0968, -0.1157, 0.1012, 0.0863, -0.1844],
[-0.2080, -0.1355, -0.1842, -0.0017, -0.2123],
[-0.1495, -0.2196, 0.1811, 0.1672, -0.1817]]]], requires_grad=True)
If we would alter the number of out_channels,
c = nn.Conv2d(1,5, stride = 1, kernel_size=(4,5))
print(c.weight.shape) # torch.Size([5, 1, 4, 5])
We will get 5 filters each filter 4x5 as this is our kernel size.
If we would set 2 channels, (some images may have 2 channels only)
c = nn.Conv2d(2,5, stride = 1, kernel_size=(4,5))
print(c.weight.shape) # torch.Size([5, 2, 4, 5])
our filter will have 2 channels.
I think the course uses terms from this book, and since the book doesn't call them filters, the course doesn't use that term either.
So you are right: filters are what the conv layer is learning, and the number of filters is the number of out channels. They are set randomly at the start.
Number of activations is calculated based on bs and image dimension:
bs=16
x = torch.randn(bs, 3, 28, 28)
c = nn.Conv2d(3,10,kernel_size=5,stride=1,padding=2)
out = c(x)
print(out.nelement()) #125440 number of activations
| https://stackoverflow.com/questions/56675943/ |
Is hidden and output the same for a GRU unit in Pytorch? | I do understand conceptually what an LSTM or GRU should (thanks to this question What's the difference between "hidden" and "output" in PyTorch LSTM?) BUT when I inspect the output of the GRU h_n and output are NOT the same while they should be...
(Pdb) rnn_output
tensor([[[ 0.2663, 0.3429, -0.0415, ..., 0.1275, 0.0719, 0.1011],
[-0.1272, 0.3096, -0.0403, ..., 0.0589, -0.0556, -0.3039],
[ 0.1064, 0.2810, -0.1858, ..., 0.3308, 0.1150, -0.3348],
...,
[-0.0929, 0.2826, -0.0554, ..., 0.0176, -0.1552, -0.0427],
[-0.0849, 0.3395, -0.0477, ..., 0.0172, -0.1429, 0.0153],
[-0.0212, 0.1257, -0.2670, ..., -0.0432, 0.2122, -0.1797]]],
grad_fn=<StackBackward>)
(Pdb) hidden
tensor([[[ 0.1700, 0.2388, -0.4159, ..., -0.1949, 0.0692, -0.0630],
[ 0.1304, 0.0426, -0.2874, ..., 0.0882, 0.1394, -0.1899],
[-0.0071, 0.1512, -0.1558, ..., -0.1578, 0.1990, -0.2468],
...,
[ 0.0856, 0.0962, -0.0985, ..., 0.0081, 0.0906, -0.1234],
[ 0.1773, 0.2808, -0.0300, ..., -0.0415, -0.0650, -0.0010],
[ 0.2207, 0.3573, -0.2493, ..., -0.2371, 0.1349, -0.2982]],
[[ 0.2663, 0.3429, -0.0415, ..., 0.1275, 0.0719, 0.1011],
[-0.1272, 0.3096, -0.0403, ..., 0.0589, -0.0556, -0.3039],
[ 0.1064, 0.2810, -0.1858, ..., 0.3308, 0.1150, -0.3348],
...,
[-0.0929, 0.2826, -0.0554, ..., 0.0176, -0.1552, -0.0427],
[-0.0849, 0.3395, -0.0477, ..., 0.0172, -0.1429, 0.0153],
[-0.0212, 0.1257, -0.2670, ..., -0.0432, 0.2122, -0.1797]]],
grad_fn=<StackBackward>)
they are some transpose of each other...why?
| They are not really the same. Consider that we have the following Unidirectional GRU model:
import torch.nn as nn
import torch
gru = nn.GRU(input_size = 8, hidden_size = 50, num_layers = 3, batch_first = True)
Please make sure you observe the input shape carefully.
inp = torch.randn(1024, 112, 8)
out, hn = gru(inp)
Definitely,
torch.equal(out, hn)
False
One of the most efficient ways that helped me to understand the output vs. hidden states was to view the hn as hn.view(num_layers, num_directions, batch, hidden_size) where num_directions = 2 for bidirectional recurrent networks (and 1 other wise, i.e., our case). Thus,
hn_conceptual_view = hn.view(3, 1, 1024, 50)
As the doc states (Note the italics and bolds):
h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len (i.e., for the last timestep)
In our case, this contains the hidden vector for the timestep t = 112, where the:
output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively.
So, consequently, one can do:
torch.equal(out[:, -1], hn_conceptual_view[-1, 0, :, :])
True
Explanation: I compare the last sequence from all batches in out[:, -1] to the last layer hidden vectors from hn[-1, 0, :, :]
For Bidirectional GRU (requires reading the unidirectional first):
gru = nn.GRU(input_size = 8, hidden_size = 50, num_layers = 3, batch_first = True, bidirectional = True)
inp = torch.randn(1024, 112, 8)
out, hn = gru(inp)
View is changed to (since we have two directions):
hn_conceptual_view = hn.view(3, 2, 1024, 50)
If you try the exact code:
torch.equal(out[:, -1], hn_conceptual_view[-1, 0, :, :])
False
Explanation: This is because we are even comparing wrong shapes;
out[:, 0].shape
torch.Size([1024, 100])
hn_conceptual_view[-1, 0, :, :].shape
torch.Size([1024, 50])
Remember that for bidirectional networks, hidden states get concatenated at each time step where the first hidden_state size (i.e., out[:, 0, :50]) are the the hidden states for the forward network, and the other hidden_state size are for the backward (i.e., out[:, 0, 50:]). The correct comparison for the forward network is then:
torch.equal(out[:, -1, :50], hn_conceptual_view[-1, 0, :, :])
True
If you want the hidden states for the backward network, and since a backward network processes the sequence from time step n ... 1. You compare the first timestep of the sequence but the last hidden_state size and changing the hn_conceptual_view direction to 1:
torch.equal(out[:, 0, 50:], hn_conceptual_view[-1, 1, :, :])
True
In a nutshell, generally speaking:
Unidirectional:
rnn_module = nn.RECURRENT_MODULE(num_layers = X, hidden_size = H, batch_first = True)
inp = torch.rand(B, S, E)
output, hn = rnn_module(inp)
hn_conceptual_view = hn.view(X, 1, B, H)
Where RECURRENT_MODULE is either GRU or LSTM (at the time of writing this post), B is the batch size, S sequence length, and E embedding size.
torch.equal(output[:, -1, :], hn_conceptual_view[-1, 0, :, :])
True
Again we used the last timestep (index -1, i.e., S - 1) since the rnn_module is forward (i.e., unidirectional) and the last timestep is stored at the end of the sequence of length S.
Bidirectional:
rnn_module = nn.RECURRENT_MODULE(num_layers = X, hidden_size = H, batch_first = True, bidirectional = True)
inp = torch.rand(B, S, E)
output, hn = rnn_module(inp)
hn_conceptual_view = hn.view(X, 2, B, H)
Comparison
torch.equal(output[:, -1, :H], hn_conceptual_view[-1, 0, :, :])
True
Above is the forward network comparison, we used :H because the forward stores its hidden vector in the first H elements for each timestep.
For the backward network:
torch.equal(output[:, 0, H:], hn_conceptual_view[-1, 1, :, :])
True
We changed the direction in hn_conceptual_view to 1 to get hidden vectors for the backward network.
For all examples we used hn_conceptual_view[-1, ...] because we are only interested in the last layer.
| https://stackoverflow.com/questions/56677052/ |
How to compute the Surface Dice-Sørensen Coefficient in pytorch? | I would like to compute the Surface Dice-Sørensen Coefficient from this paper (page 19)in python3/pytorch.
I have to point out, that I do not try to implement the simple standard volumetric Dice-Sørensen Coefficient! This one would look as follows in my implementation:
import torch
def volumetric_DSC(M1, M2):
M1 = M1.view(-1)
M2 = M2.view(-1)
dividend = 2 * (M1 * M2).sum()
divisor = (M1 * M1).sum() + (M2 * M2).sum()
return dividend / divisor
if __name__ == "__main__":
m1 = torch.empty(5, 5, 5).uniform_(0, 1)
m1 = torch.bernoulli(m1)
m2 = torch.empty(5, 5, 5).uniform_(0, 1)
m2 = torch.bernoulli(m2)
loss = volumetric_DSC(m1, m2)
print("loss = {0}".format(loss))
How can I extend this code to a Surface Dice-Sørensen Coefficient loss?
| A surface dice implementation was provided here as part of this study. You can use it as an evaluation metric but not as a loss function, as it contains non-differentiable ops. You will need to provide a "tolerance" distance, i.e. a surface dice of 0.9 means that 90% of surfaces lie within the tolerance (which is best derived from the data itself, such as the inter-observer variation of the task you are solving).
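A minimal usage sketch of that implementation (the function names are taken from the linked deepmind/surface-distance repo; treat the exact signatures as assumptions and check the repo — the inputs must be boolean numpy masks, not torch tensors):
import numpy as np
import surface_distance  # from the deepmind/surface-distance repo
m1 = np.random.rand(5, 5, 5) > 0.5   # ground-truth mask
m2 = np.random.rand(5, 5, 5) > 0.5   # predicted mask
sd = surface_distance.compute_surface_distances(m1, m2, spacing_mm=(1.0, 1.0, 1.0))
score = surface_distance.compute_surface_dice_at_tolerance(sd, tolerance_mm=1.0)
print(score)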
| https://stackoverflow.com/questions/56685144/ |
How to get the filename of a sample from a DataLoader? | I need to write a file with the test results of a Convolutional Neural Network that I trained. The data consist of a speech data collection. The file format needs to be "file name, prediction", but I am having a hard time extracting the file name. I load the data like this:
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
TEST_DATA_PATH = ...
trans = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
test_dataset = torchvision.datasets.MNIST(
root=TEST_DATA_PATH,
train=False,
transform=trans,
download=True
)
test_loader = DataLoader(dataset=test_dataset, batch_size=1, shuffle=False)
and I am trying to write to the file as follows:
f = open("test_y", "w")
with torch.no_grad():
for i, (images, labels) in enumerate(test_loader, 0):
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
file = os.listdir(TEST_DATA_PATH + "/all")[i]
format = file + ", " + str(predicted.item()) + '\n'
f.write(format)
f.close()
The problem with os.listdir(TEST_DATA_PATH + "/all")[i] is that it is not synchronized with the order in which test_loader loads the files. What can I do?
| Well, it depends on how your Dataset is implemented. For instance, in the torchvision.datasets.MNIST(...) case, you cannot retrieve the filename simply because there is no such thing as the filename of a single sample (MNIST samples are loaded in a different way).
As you did not show your Dataset implementation, I'll tell you how this could be done with the torchvision.datasets.ImageFolder(...) (or any torchvision.datasets.DatasetFolder(...)):
f = open("test_y", "w")
with torch.no_grad():
for i, (images, labels) in enumerate(test_loader, 0):
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
sample_fname, _ = test_loader.dataset.samples[i]
f.write("{}, {}\n".format(sample_fname, predicted.item()))
f.close()
You can see that the path of the file is retrieved during the __getitem__(self, index), especifically here.
If you implemented your own Dataset (and perhaps would like to support shuffle and batch_size > 1), then I would return the sample_fname on the __getitem__(...) call and do something like this:
for i, (images, labels, sample_fname) in enumerate(test_loader, 0):
# [...]
This way you wouldn't need to care about shuffle. And if the batch_size is greater than 1, you would need to change the content of the loop for something more generic, e.g.:
f = open("test_y", "w")
for i, (images, labels, samples_fname) in enumerate(test_loader, 0):
outputs = model(images)
pred = torch.max(outputs, 1)[1]
f.write("\n".join([
", ".join(x)
for x in zip(map(str, pred.cpu().tolist()), samples_fname)
]) + "\n")
f.close()
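For completeness, a minimal sketch of such a custom Dataset for speech files (load_sample is a hypothetical loader standing in for whatever audio-reading code you use):
import os
import torch
class SpeechDataset(torch.utils.data.Dataset):
    def __init__(self, root):
        self.root = root
        self.files = sorted(os.listdir(root))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, index):
        fname = self.files[index]
        x, y = load_sample(os.path.join(self.root, fname))  # hypothetical loader
        return x, y, fname  # filename travels with the sample through the DataLoader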
| https://stackoverflow.com/questions/56699048/ |
Is there an function in PyTorch for converting convolutions to fully-connected networks form? | I'm trying to convert a convolution layer to a fully-connected layer.
For example, there is an example of 3×3 input and 2x2 kernel:
which is equivalent to a vector-matrix multiplication,
Is there a function in PyTorch to get the matrix B?
| I can only partially answer your question:
In your example above, you write the kernel as matrix and the input as a vector. If you are fine with writing the input as a matrix, you can use torch.nn.Unfold which explicitly calculates a convolution in the documentation:
# Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)
inp = torch.randn(1, 3, 10, 12)
w = torch.randn(2, 3, 4, 5)
inp_unf = torch.nn.functional.unfold(inp, (4, 5))
out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
out = out_unf.view(1, 2, 7, 8)
(torch.nn.functional.conv2d(inp, w) - out).abs().max()
# tensor(1.9073e-06)
If, however, you need to calculate the matrix for the kernel (the smaller matrix) you can use this function, which is based on Warren Weckesser's answer:
import numpy as np
from scipy import linalg

def toeplitz_1_ch(kernel, input_size):
# shapes
k_h, k_w = kernel.shape
i_h, i_w = input_size
o_h, o_w = i_h-k_h+1, i_w-k_w+1
# construct 1d conv toeplitz matrices for each row of the kernel
toeplitz = []
for r in range(k_h):
toeplitz.append(linalg.toeplitz(c=(kernel[r,0], *np.zeros(i_w-k_w)), r=(*kernel[r], *np.zeros(i_w-k_w))) )
# construct toeplitz matrix of toeplitz matrices (just for padding=0)
h_blocks, w_blocks = o_h, i_h
h_block, w_block = toeplitz[0].shape
W_conv = np.zeros((h_blocks, h_block, w_blocks, w_block))
for i, B in enumerate(toeplitz):
for j in range(o_h):
W_conv[j, :, i+j, :] = B
W_conv.shape = (h_blocks*h_block, w_blocks*w_block)
return W_conv
which is not in pytorch but in numpy. This is for padding = 0 but can easily be adjusted by changing h_blocks and w_blocks and W_conv[i+j, :, j, :].
Update: Multiple output channels are just multiple of these matrices, as each output has its own kernel. Multiple input channels also have their own kernels - and their own matrices - over which you average after the convolution. This can be implemented as follows:
def conv2d_toeplitz(kernel, input):
"""Compute 2d convolution over multiple channels via toeplitz matrix
Args:
kernel: shape=(n_out, n_in, H_k, W_k)
input: shape=(n_in, H_i, W_i)"""
kernel_size = kernel.shape
input_size = input.shape
output_size = (kernel_size[0], input_size[1] - (kernel_size[1]-1), input_size[2] - (kernel_size[2]-1))
output = np.zeros(output_size)
for i,ks in enumerate(kernel): # loop over output channel
for j,k in enumerate(ks): # loop over input channel
T_k = toeplitz_1_ch(k, input_size[1:])
output[i] += T_k.dot(input[j].flatten()).reshape(output_size[1:]) # sum over input channels
return output
To check the correctness:
import torch
import torch.nn.functional as F

k = np.random.randn(4*3*3*3).reshape((4,3,3,3))
i = np.random.randn(3,7,9)
out = conv2d_toeplitz(k, i)
# check correctness of convolution via toeplitz matrix
print(np.sum((out - F.conv2d(torch.tensor(i).view(1,3,7,9), torch.tensor(k)).numpy())**2))
>>> 1.0063523219807736e-28
Update 2:
It is also possible to do this without looping in one matrix:
def toeplitz_mult_ch(kernel, input_size):
"""Compute toeplitz matrix for 2d conv with multiple in and out channels.
Args:
kernel: shape=(n_out, n_in, H_k, W_k)
input_size: (n_in, H_i, W_i)"""
kernel_size = kernel.shape
output_size = (kernel_size[0], input_size[1] - (kernel_size[1]-1), input_size[2] - (kernel_size[2]-1))
T = np.zeros((output_size[0], int(np.prod(output_size[1:])), input_size[0], int(np.prod(input_size[1:]))))
for i,ks in enumerate(kernel): # loop over output channel
for j,k in enumerate(ks): # loop over input channel
T_k = toeplitz_1_ch(k, input_size[1:])
T[i, :, j, :] = T_k
T.shape = (np.prod(output_size), np.prod(input_size))
return T
The input has to be flattened and the output reshaped after multiplication.
Checking for correctness (using the same i and k as above):
T = toeplitz_mult_ch(k, i.shape)
out = T.dot(i.flatten()).reshape((1,4,5,7))
# check correctness of convolution via toeplitz matrix
print(np.sum((out - F.conv2d(torch.tensor(i).view(1,3,7,9), torch.tensor(k)).numpy())**2))
>>> 1.5486060830252635e-28
| https://stackoverflow.com/questions/56702873/ |
Is there a way to reduce the time needed to find the indices of the top k largest elements from a 1D vector? | I am trying to obtain the indices of the of the largest n and smallest n elements of an array in Pytorch, and then concatenate these elements into a return array, but the time needed to do so is extremely unreasonable as the size of the array increases exponentially.
I have tried the code that is included, as well as sorting the vector so I can just take the first and last n elements, but time is not reduced and I lose the indices of the vector, which is what I need.
def draw(n, distr):
return np.concatenate((np.array(distr.topk(k=int(n), largest=True).indices),
np.array(distr.topk(k=int(n), largest=False).indices)),
axis=0)
I'm working with a series of 1-dimensional arrays, varying in size from length 10 to length 2359296; furthermore, the variable n is an integer valued at 1/10th of the length of the array. My computer can compute the indices of the largest and smallest n elements of all the arrays in about 0.5 seconds. I would prefer to minimize this time as much as possible, preferably to less than 0.2 seconds.
| You should use torch.topk(x,k).
k=2
x = torch.arange(0,10).resize_((2,5))
print(x)
print("...")
res, ind = torch.topk(x,k)
print(res)
tensor([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
...
tensor([[4, 3],
[9, 8]])
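Applied to the question's setting, a minimal sketch that stays in torch end-to-end (sizes assumed from the question; avoiding the intermediate numpy copies is usually where the time goes):
distr = torch.randn(2359296)
n = distr.numel() // 10
top_idx = torch.topk(distr, n, largest=True).indices
bot_idx = torch.topk(distr, n, largest=False).indices
idx = torch.cat((top_idx, bot_idx), dim=0)  # indices of the n largest and n smallest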
| https://stackoverflow.com/questions/56706754/ |
How to implement Pytorch 1D crosscorrelation for long signals in fourier domain? | I have a series of signals length n = 36,000 which I need to perform crosscorrelation on. Currently, my cpu implementation in numpy is a little slow. I've heard Pytorch can greatly speed up tensor operations, and provides a way to perform computations in parallel on the GPU. I'd like to explore this option, but I'm not quite sure how to accomplish this using the framework.
Because of the length of these signals, I'd prefer to perform the crosscorrelation operation in the frequency domain.
Normally using numpy I'd perform the operation like so:
import numpy as np
from scipy.fftpack import next_fast_len
signal_length=36000
# make the signals
signal_1 = np.random.uniform(-1,1, signal_length)
signal_2 = np.random.uniform(-1,1, signal_length)
# output target length of crosscorrelation
x_cor_sig_length = signal_length*2 - 1
# get optimized array length for fft computation
fast_length = next_fast_len(x_cor_sig_length)
# move data into the frequency domain. axis=-1 to perform
# along last dimension
fft_1 = np.fft.rfft(signal_1, fast_length, axis=-1)
fft_2 = np.fft.rfft(signal_2, fast_length, axis=-1)
# take the complex conjugate of one of the spectrums. Which one you choose depends on domain specific conventions
fft_1 = np.conj(fft_1)
fft_multiplied = fft_1 * fft_2
# back to time domain.
prelim_correlation = np.fft.irfft(fft_multiplied, x_cor_sig_length, axis=-1)
# shift the signal to make it look like a proper crosscorrelation,
# and transform the output to be purely real
final_result = np.real(np.fft.fftshift(prelim_correlation, axes=-1)).astype(np.float64)
Looking at the Pytorch documentation, there doesn't seem to be an equivalent for numpy.conj(). I'm also not sure if/how I need to implement a fftshift after the irfft operation.
So how would you go about writing a 1D crosscorrelation in Pytorch using the fourier method?
| A few things to be considered.
The Python interpreter is very slow; what those vectorization libraries do is move the workload to a native implementation. To make any difference you need to be able to perform many operations in one Python instruction. Evaluating things on the GPU follows the same principle: while the GPU has more compute power, it is slower to copy data to/from the GPU.
I adapted your example to process multiple signals simulataneously.
import numpy as np
def numpy_xcorr(BATCH=1, signal_length=36000, factors=[2, 3, 5, 7], dtype=np.float64):
# make the signals
    signal_1 = np.random.uniform(-1, 1, (BATCH, signal_length)).astype(dtype)
    signal_2 = np.random.uniform(-1, 1, (BATCH, signal_length)).astype(dtype)
# output target length of crosscorrelation
x_cor_sig_length = signal_length*2 - 1
# get optimized array length for fft computation
    fast_length = next_fast_len(x_cor_sig_length, factors)
# move data into the frequency domain. axis=-1 to perform
# along last dimension
fft_1 = np.fft.rfft(signal_1, fast_length, axis=-1)
fft_2 = np.fft.rfft(signal_2 + 0.1 * signal_1, fast_length, axis=-1)
# take the complex conjugate of one of the spectrums.
fft_1 = np.conj(fft_1)
fft_multiplied = fft_1 * fft_2
# back to time domain.
prelim_correlation = np.fft.irfft(fft_multiplied, fast_length, axis=-1)
# shift the signal to make it look like a proper crosscorrelation,
# and transform the output to be purely real
final_result = np.fft.fftshift(np.real(prelim_correlation), axes=-1)
return final_result, np.sum(final_result)
Since torch 1.7 we have the torch.fft module that provides an interface similar to numpy.fft, the fftshift is missing but the same result can be obtained with torch.roll. Another point is that numpy uses by default 64-bit precision and torch will use 32-bit precision.
The fast length consists in choosing smooth numbers (those that can be factorized into small prime numbers; I suppose you are familiar with this subject).
def next_fast_len(n, factors=[2, 3, 5, 7]):
'''
Returns the minimum integer not smaller than n that can
be written as a product (possibly with repettitions) of
the given factors.
'''
best = float('inf')
stack = [1]
while len(stack):
a = stack.pop()
if a >= n:
if a < best:
best = a;
if best == n:
break; # no reason to keep searching
else:
for p in factors:
b = a * p;
if b < best:
stack.append(b)
return best;
Then the torch implementation goes
import torch;
import torch.fft
def torch_xcorr(BATCH=1, signal_length=36000, device='cpu', factors=[2,3,5], dtype=torch.float):
# torch.rand is random in the range (0, 1)
signal_1 = 1 - 2*torch.rand((BATCH, signal_length), device=device, dtype=dtype)
signal_2 = 1 - 2*torch.rand((BATCH, signal_length), device=device, dtype=dtype)
# just make the cross correlation more interesting
signal_2 += 0.1 * signal_1;
# output target length of crosscorrelation
x_cor_sig_length = signal_length*2 - 1
# get optimized array length for fft computation
    fast_length = next_fast_len(x_cor_sig_length, factors)
# the last signal_ndim axes (1,2 or 3) will be transformed
fft_1 = torch.fft.rfft(signal_1, fast_length, dim=-1)
fft_2 = torch.fft.rfft(signal_2, fast_length, dim=-1)
# take the complex conjugate of one of the spectrums. Which one you choose depends on domain specific conventions
fft_multiplied = torch.conj(fft_1) * fft_2
# back to time domain.
prelim_correlation = torch.fft.irfft(fft_multiplied, dim=-1)
# shift the signal to make it look like a proper crosscorrelation,
# and transform the output to be purely real
    final_result = torch.roll(prelim_correlation, fast_length//2, dims=-1)
return final_result, torch.sum(final_result);
And here is some code to test the results:
import time
funcs = {'numpy-f64': lambda b: numpy_xcorr(b, factors=[2,3,5], dtype=np.float64),
'numpy-f32': lambda b: numpy_xcorr(b, factors=[2,3,5], dtype=np.float32),
'torch-cpu-f64': lambda b: torch_xcorr(b, device='cpu', factors=[2,3,5], dtype=torch.float64),
'torch-cpu': lambda b: torch_xcorr(b, device='cpu', factors=[2,3,5], dtype=torch.float32),
'torch-gpu-f64': lambda b: torch_xcorr(b, device='cuda', factors=[2,3,5], dtype=torch.float64),
'torch-gpu': lambda b: torch_xcorr(b, device='cuda', factors=[2,3,5], dtype=torch.float32),
}
times ={}
for batch in [1, 10, 100]:
times[batch] = {}
for l, f in funcs.items():
t0 = time.time()
t1, t2 = f(batch)
tf = time.time()
del t1
del t2
times[batch][l] = 1000 * (tf - t0) / batch;
I obtained the following results
And what surprised me is the result when the numbers are not so smooth: e.g. using a 17-smooth length the torch implementation is so much better that I used a logarithmic scale here (with batch size 100 the torch GPU was 10000 times faster than numpy with batch size 1).
Remember that these functions generate the data on the GPU; in general we want to copy the final results to the CPU. If we consider the time spent copying the final result to the CPU, I observed times up to 10x higher than the cross-correlation computation itself (random data generation + three FFTs).
| https://stackoverflow.com/questions/56711772/ |
Why aren't torch.functional.sigmoid and torch.nn.functional.relu deprecated like torch.nn.functional.tanh? | Now when torch.autograd.Variable is merged with torch.tensor and obsolete, why did they deprecate some functions in torch.nn.functional but not others? Namely, tanhis deprecated but not sigmoid or relu.
>>> torch.__version__
'1.1.0'
>>> u
tensor(2., grad_fn=<MeanBackward0>)
>>> torch.nn.functional.tanh(u)
C:\Users\mlearning\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py:1374: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
tensor(0.9640, grad_fn=<TanhBackward>)
>>> torch.nn.functional.sigmoid(u)
tensor(0.8808, grad_fn=<SigmoidBackward>)
>>> torch.nn.functional.relu(u)
tensor(2., grad_fn=<ReluBackward0>)
Is there any difference between torch.nn.functional.relu and torch.relu, or I can use them interchangeably?
| You can check this thread where one of the few main PyTorch designers (actually a creator) set the directive.
You can also check the reasoning behind it. You may also propose the same for the other two functions.
They should be deprecated as well.
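For reference, the tensor-level functions and their torch.nn.functional counterparts compute the same values, which you can verify numerically:
import torch
import torch.nn.functional as F
x = torch.randn(5)
print(torch.allclose(torch.relu(x), F.relu(x)))        # True
print(torch.allclose(torch.sigmoid(x), F.sigmoid(x)))  # True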
| https://stackoverflow.com/questions/56723486/ |
BERT Multi-class text classification in Google Colab | I'm working on a data set of social media comments(including youtube links) as input features and the Myers-Biggs Personality Profile as the target label:
type posts
0 INFJ 'http://www.youtube.com/watch?v=qsXHcwe3krw|||...
1 ENTP 'I'm finding the lack of me in these posts ver...
2 INTP 'Good one _____ https://www.youtube.com/wat...
3 INTJ 'Dear INTP, I enjoyed our conversation the o...
4 ENTJ 'You're fired.|||That's another silly misconce...
but from what I've found, BERT wants DataFrame's to be in this format:
a label posts
0 a 8 'http://www.youtube.com/watch?v=qsXHcwe3krw|||...
1 a 3 'I'm finding the lack of me in these posts ver...
2 a 11 'Good one _____ https://www.youtube.com/wat...
3 a 10 'Dear INTP, I enjoyed our conversation the o...
4 a 2 'You're fired.|||That's another silly misconce...
The resulting output must be a prediction on a test set of comments split into four columns, one for each Personality Profile where, for example, 'Mind' = 1 is the label for Extrovert. Basically splitting a type like INFJ into 'Mind','Energy','Nature','Tactics', like such:
type post Mind Energy Nature Tactics
0 INFJ 'url-web 0 1 0 1
1 INFJ url-web 0 1 0 1
2 INFJ enfp and intj moments url-web sportscenter n... 0 1 0 1
3 INFJ What has been the most life-changing experienc... 0 1 0 1
4 INFJ url-web url-web On repeat for most of today. 0 1 0 1
I've installed pytorch-pretrained-bert using:
!pip install pytorch-pretrained-bert
I've imported the models and tried to tokenize the 'posts' column using:
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenized_train = tokenizer.tokenize(train)
but receive this error:
TypeError: ord() expected a character, but string of length 5 found
I tried this based off the pytorch-pretrained-bert GitHub Repo and a YouTube video.
I am a Data Science intern with no Deep Learning experience at all. I simply want to experiment with the BERT model in the most simplest way to predict the multi-class classified output so I can compare the results to simpler text-classification models we are currently working on. I am working in Google Colab and the resulting output should be a .csv file.
I understand that this is a complicated model and all documentation and examples surrounding the model are complex (fine tuning layers etc.) but any help for a simple implementation(if in fact there is such a thing) for a beginner Data Scientist with minimal Software Engineering experience, would be much appreciated.
| I recommend you start with a simple BERT classification task, for example following this excellent tutorial: https://mccormickml.com/2019/07/22/BERT-fine-tuning/
Then you can get into multi-label by following: https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d
Only then I would recommend you try your task on your own dataset.
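As a side note on the TypeError in the question: tokenizer.tokenize expects a single string, not a whole DataFrame, so (assuming a posts column as shown) you would tokenize row by row:
tokenized_posts = [tokenizer.tokenize(p) for p in train["posts"]]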
| https://stackoverflow.com/questions/56725445/ |
How does the groups parameter in torch.nn.conv* influence the convolution process? | I want to convolve a multichannel tensor with the same single channel weight.
I could repeat the weight along the channel dimension, but I thought there might be another way.
I thought the groups parameter might do the job. However, I don't understand the documentation.
That's why I want to ask: how does the groups parameter influence the convolution process?
| Just minor tips since I never used it.
The groups parameter splits the input channels into groups: each output channel is computed from only in_channels/groups of the input channels.
So if you set groups=2, each kernel sees half of the input channels, and the weight tensor has half as many parameters.
The definition of conv2d in PyTorch states group is 1 by default.
If you increase groups up to the number of input channels you get a depthwise convolution, where each input channel gets its own kernels.
The constraint is that both in and out channels must be divisible by the group number.
I think in TensorFlow you can read the documentation of SeparableConv2D, since this is the equivalent when groups > 1.
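A quick way to see the effect on the weight shapes (a minimal sketch):
import torch.nn as nn
c1 = nn.Conv2d(4, 8, kernel_size=3, groups=1)
print(c1.weight.shape)  # torch.Size([8, 4, 3, 3])
c2 = nn.Conv2d(4, 8, kernel_size=3, groups=2)
print(c2.weight.shape)  # torch.Size([8, 2, 3, 3]) - each kernel sees in_channels/groups
c3 = nn.Conv2d(4, 4, kernel_size=3, groups=4)  # depthwise convolution
print(c3.weight.shape)  # torch.Size([4, 1, 3, 3])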
| https://stackoverflow.com/questions/56725660/ |
TypeError: mul() argument 'other' (position 1) must be Tensor, not ReLU | I would like to add a torch.nn.ReLU() layer between fc1 and fc2 layer.
Original code:
model:
# ...
self.fc1 = nn.Linear(4096, 256)
self.fc2 = nn.Linear(256, 4096)
# ...
def forward(...):
# ...
x = x.view(-1, 4096)
x = self.fc1(x)
if a7 is not None:
x = x * a7.squeeze()
# ...
I tried
# ...
x = x.view(-1, 4096)
x = nn.ReLU(self.fc1(x))
if a7 is not None:
x = x * a7.squeeze()
# ...
and this error pops out.
| My answer assumes __init__ was a typo and it should be forward. Let me know if that is not the case and I'll delete it.
import torch
from torch import nn
class SimpleModel(nn.Module):
def __init__(self, with_relu=False):
super(SimpleModel, self).__init__()
self.fc1 = nn.Sequential(nn.Linear(3, 10), nn.ReLU(inplace=True)) if with_relu else nn.Linear(3, 10)
self.fc2 = nn.Linear(10, 3)
def forward(self, x):
x = self.fc1(x)
print(torch.min(x)) # just to show you ReLU is working...
return self.fc2(x)
# Model without ReLU
net_without_relu = SimpleModel(with_relu=False)
print(net_without_relu)
# Model with ReLU
net_with_relu = SimpleModel(with_relu=True)
print(net_with_relu)
# random input data
x = torch.randn((5, 3))
print(x)
# we expect it to print something < 0
output1 = net_without_relu(x)
# we expect it to print 0.
output2 = net_with_relu(x)
You can check the code below running on the Colab: https://colab.research.google.com/drive/1W3Dh4_KPd3iABx5FSzZm3tilm6tnJh0v
To use as you tried:
x = nn.ReLU(self.fc1(x))
you can use the functional API:
from torch.nn import functional as F
# ...
x = F.relu(self.fc1(x))
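Alternatively, since nn.ReLU is a Module, it must be instantiated before being called (the original error came from passing the linear layer's output straight to the constructor):
x = nn.ReLU()(self.fc1(x))  # the extra () creates the module, then calls it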
| https://stackoverflow.com/questions/56732759/ |
Numerical equivalence of PyTorch backpropagation | After i 'v written the simple neural network with numpy, i wanted to compare it numerically with PyTorch impementation. Running alone, seems my neural network implementation converges, so it seems to have no errors.
Also i v checked forward pass matches to PyTorch, so basic setup is correct.
But something different happens while backward pass, because the weights after one backpropagation are different.
I dont want to post full code here because its linked over several .py files, and most of the code is irrelevant to the question. I just want to know does PyTorch "basic" gradient descent or something different.
I m viewing the most simle example about full-connected weights of the last layer, cause if it is different, further will be also different:
self.weight += self.learning_rate * hidden_layer.T.dot(output_delta )
where
output_delta = self.expected - self.output
self.expected is the expected value,
self.output is the forward pass result.
No activation or further stuff here.
The torch past is:
optimizer = torch.optim.SGD(nn.parameters() , lr = 1.0)
criterion = torch.nn.MSELoss(reduction='sum')
output = nn.forward(x_train)
loss = criterion(output, y_train)
loss.backward()
optimizer.step()
optimizer.zero_grad()
So is it possible that with the SGD optimizer and MSELoss it uses some different delta or backpropagation function, not the basic one mentioned above? If so, I'd like to know how to numerically check my numpy solution against pytorch.
|
I just want to know whether PyTorch does "basic" gradient descent or something different.
If you set torch.optim.SGD, this means stochastic gradient descent.
There are different implementations of GD, but the one used in PyTorch is applied to mini-batches.
There are GD implementations that only update the parameters after a full epoch. As you may guess, they are very "slow"; that may be fine on supercomputers. There are GD implementations that update after every single sample; as you may guess, their drawback is "huge" gradient fluctuations.
These are all relative terms, so I am using ""
Note that you are using a very large learning rate (lr = 1.0), which suggests you haven't normalized your data first, but this is a skill you will pick up over time.
So is it possible that with the SGD optimizer and MSELoss it uses some different delta or backpropagation rule, not the basic one mentioned above?
It uses exactly what you told it to use.
Here is an example in PyTorch and in plain Python showing that the gradient computation (used in backpropagation) works as expected:
x = torch.tensor([5.], requires_grad=True);
print(x) # tensor([5.], requires_grad=True)
y = 3*x**2
y.backward()
print(x.grad) # tensor([30.])
How would you get this value 30 in plain python?
def y(x):
return 3*x**2
x=5
e = 0.01  # eps, a small finite-difference step
g=(y(x+e)-y(x))/e
print(g) # 30.0299
As expected we got ~30; the approximation would be even better with a smaller step e.
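One concrete way to check a numpy implementation against PyTorch is to compare gradients directly. Note that MSELoss(reduction='sum') is sum((out - y)^2), whose gradient carries a factor of 2 that the plain (expected - output) delta does not, so that alone will make the two weight updates differ by a factor of 2. A minimal sketch (with made-up shapes) verifying a torch gradient against a hand-derived formula:
import torch

torch.manual_seed(0)
W = torch.randn(3, 2, requires_grad=True)
h = torch.randn(5, 3)   # "hidden layer" activations
y = torch.randn(5, 2)   # targets

out = h @ W
loss = ((out - y) ** 2).sum()   # same as nn.MSELoss(reduction='sum')
loss.backward()

# manual gradient of sum((h W - y)^2) w.r.t. W is 2 * h^T (out - y)
manual_grad = 2 * h.t() @ (out - y).detach()
print(torch.allclose(W.grad, manual_grad))  # True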
| https://stackoverflow.com/questions/56742037/ |
Is there a way to create a graph comparing hyper-parameters vs model accuracy with TRAINS python package? | I would like to run multiple experiments, then report model accuracy per experiment.
I'm training a toy MNIST example with pytorch (v1.1.0), but the goal is, once I can compare performance for the toy problem, to have it integrated with the actual code base.
As I understand the TRAINS python package, with the "two lines of code" all my hyper-parameters are already logged (Command line argparse in my case).
What do I need to do in order to report a final scalar and then be able to sort through all the different training experiments (w/ hyper-parameters) in order to find the best one.
What I'd like to get, is a graph/s where on the X-axis I have hyper-parameter values and on the Y-axis I have the validation accuracy.
| I assume you are referring to: https://pypi.org/project/trains/ (https://github.com/allegroai/trains),
of which I'm one of the maintainers.
You can manually create a plot with a single point X-axis for the hyper-parameter value, and Y-Axis for the accuracy.
number_layers = 10
accuracy = 0.95
Task.current_task().get_logger().report_scatter2d(
"performance", "accuracy", iteration=0,
mode='markers', scatter=[(number_layers, accuracy)])
Assuming your hyper-parameter is "number_layers" with current value 10, and the accuracy for the trained model is 0.95.
Then when you compare the experiments you get something like that:
| https://stackoverflow.com/questions/56744397/ |
Pytorch model.train() and a separate train() function written in a tutorial | I am new to PyTorch and I was wondering if you could explain to me some of the key differences between the default model.train() in PyTorch and the train() function here.
The train() function below is from the official PyTorch tutorial on text classification, and I was confused as to whether the model weights are being stored at the end of training.
https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html
learning_rate = 0.005
criterion = nn.NLLLoss()
def train(category_tensor, line_tensor):
hidden = rnn.initHidden()
rnn.zero_grad()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
loss = criterion(output, category_tensor)
loss.backward()
# Add parameters' gradients to their values, multiplied by learning rate
for p in rnn.parameters():
p.data.add_(-learning_rate, p.grad.data)
return output, loss.item()
This is the function. This function is then called multiple times in this form:
n_iters = 100000
print_every = 5000
plot_every = 1000
record_every = 500
# Keep track of losses for plotting
current_loss = 0
all_losses = []
predictions = []
true_vals = []
def timeSince(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
if iter % print_every == 0:
guess, guess_i = categoryFromOutput(output)
correct = 'O' if guess == category else 'X (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
if iter % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
if iter % record_every == 0:
guess, guess_i = categoryFromOutput(output)
predictions.append(guess)
true_vals.append(category)
To me it seems that the model weights are not being saved or updated but rather being overridden at each iteration when written like this. Is this correct? Or does the model appear to be training correctly?
Additionally, if I were to use the default function model.train(), what is the main advantage and does model.train() perform more or less the same functionality as the train() function above?
| As per the source code here, model.train() sets the module in training mode. So, it basically tells your model that you are training it. This only has an effect on certain modules, like dropout and batchnorm, which behave differently in training/evaluation mode; model.train() switches those layers into their training behavior.
You can call either model.eval() or model.train(mode=False) to tell the model that it has nothing new to learn and the model is used for testing purpose.
model.train() just sets the mode. It doesn't actually train the model.
train() that you are using above is actually training the model, i.e., calculating gradient and doing backpropagation to learn the weights.
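A minimal sketch showing what the mode switch does, using dropout (one of the affected layers):
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()    # training mode: entries are randomly zeroed
print(drop(x))  # roughly half zeros, survivors scaled to 2.

drop.eval()     # evaluation mode: dropout is a no-op
print(drop(x))  # all ones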
Learn more about model.train() from official pytorch discussion forum, here.
| https://stackoverflow.com/questions/56758445/ |
How to fix "The kernel appears to have died. It will restart automatically" caused by pytorch | I have a strange problem with PyTorch. When I use torch functions on tensors, like tensor.reshape or torch.transpose, I don't have any problems; even when I create networks, it's ok. However, when I want to train a network, my Jupyter kernel crashes.
I found where the error is, but I don't know why it is there or how to fix it.
I installed pytorch using conda. I have Ubuntu 18.04.
I don't have cuda.
| If you are on Ubuntu, conda is not the only way to install PyTorch.
It can be installed via:
Conda
Pip
LibTorch
From Source
So you have multiple options.
Go to this page, set CUDA to NONE, and select LINUX, Stable 1.1, Conda.
conda install pytorch-cpu torchvision-cpu -c pytorch
If you have problems still, you may try also install PIP way.
pip3 install https://download.pytorch.org/whl/cpu/torch-1.1.0-cp36-cp36m-linux_x86_64.whl
pip3 install https://download.pytorch.org/whl/cpu/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl
Hopefully one of these ways will work.
| https://stackoverflow.com/questions/56759112/ |
AttributeError: module 'torch' has no attribute 'hub' | import torch
model = torch.hub.list('pytorch/vision')
My pytorch version is 1.0.0, but I can't load the hub, why is this?
| You will need torch >= 1.1.0 to use torch.hub attribute.
Alternatively, try by downloading this hub.py file and then try below code:
import hub
model = hub.list('pytorch/vision', force_reload=False)
Arguments:
github: Required, a string with format repo_owner/repo_name[:tag_name] with an optional tag/branch. The default branch is master if not specified.
Example: pytorch/vision[:hub]
force_reload: Optional, whether to discard the existing cache and force a fresh download.
Default is False.
| https://stackoverflow.com/questions/56764823/ |
Incorrect results obtained on running a model in LibTorch that was trained and exported from PyTorch | I am trying to export a trained model along with weights for inference in C++ using LibTorch. However, the output tensor results do not match.
The shape of the output tensor is the same.
model = FCN()
state_dict = torch.load('/content/gdrive/My Drive/model/trained_model.pth')
model.load_state_dict(state_dict)
example = torch.randn(1, 3, 768, 1024)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save('/content/gdrive/My Drive/model/mymodel.pt')
However, some warnings are generated which I think may be causing the incorrect results.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:137:
TracerWarning: Converting a tensor to a Python index might cause the
trace to be incorrect. We can't record the data flow of Python values,
so this value will be treated as a constant in the future. This means
that the trace might not generalize to other inputs!
/usr/local/lib/python3.6/dist-packages/torch/tensor.py:435:
RuntimeWarning: Iterating over a tensor might cause the trace to be
incorrect. Passing a tensor of different shape won't change the number
of iterations executed (and might lead to errors or silently give
incorrect results).'incorrect results).', category=RuntimeWarning)
Following is the LibTorch code to generate the output tensor
at::Tensor predict(std::shared_ptr<torch::jit::script::Module> model, at::Tensor &image_tensor) {
std::vector<torch::jit::IValue> inputs;
inputs.push_back(image_tensor);
at::Tensor result = model->forward(inputs).toTensor();
return result;
}
Has anyone tried using a trained PyTorch model in LibTorch?
| Just ran into the same issue, and found a solution:
add
model.eval()
before
traced_script_module = torch.jit.trace(model, example)
and the model gives the same result in c++ as in python
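Putting the fix into the export snippet from the question (paths as in the question), the sequence would look like this:
model = FCN()
state_dict = torch.load('/content/gdrive/My Drive/model/trained_model.pth')
model.load_state_dict(state_dict)
model.eval()  # switch batchnorm/dropout to inference behavior before tracing

example = torch.randn(1, 3, 768, 1024)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save('/content/gdrive/My Drive/model/mymodel.pt')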
| https://stackoverflow.com/questions/56770197/ |
Transfer learning in Pytorch using fasterrcnn_resnet50_fpn | I am looking to do object detection on a custom dataset in PyTorch.
Tutorial here provides a snippet to use pre-trained model for custom object classification
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
I tried to use similar method for Object Detection using faster rcnn model.
# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
for param in model.parameters():
param.requires_grad = False
# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 1 # 1 class (person) + background
print(model)
model = model.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model = train_model(model, criterion, optimizer_ft, exp_lr_scheduler,num_epochs=25)
PyTorch throws these errors. Is this approach correct in the first place?
Epoch 0/24
----------
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-69-527ca4db8e5d> in <module>()
----> 1 model = train_model(model, criterion, optimizer_ft, exp_lr_scheduler,num_epochs=25)
2 frames
/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
43 """
44 if self.training and targets is None:
---> 45 raise ValueError("In training mode, targets should be passed")
46 original_image_sizes = [img.shape[-2:] for img in images]
47 images, targets = self.transform(images, targets)
ValueError: In training mode, targets should be passed
Is there a way modify this example for custom object detection ?
https://www.learnopencv.com/faster-r-cnn-object-detection-with-pytorch/
| If you want to detect a person and background, you will have to set num_classes to 2.
To train your custom detection model, you need to pass, images (each pixel between 0 and 1) and targets. You can follow this Kaggle tutorial : https://www.kaggle.com/abhishek/training-fast-rcnn-using-torchvision
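For reference, the detection models in torchvision expect, in training mode, a list of images (pixels in [0, 1]) and a list of target dicts with boxes (float [x1, y1, x2, y2]) and labels (int64). A minimal sketch with dummy boxes:
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.train()

images = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
targets = [
    {'boxes': torch.tensor([[50., 60., 120., 200.]]),
     'labels': torch.tensor([1], dtype=torch.int64)},  # e.g. 1 = person
    {'boxes': torch.tensor([[30., 40., 100., 150.]]),
     'labels': torch.tensor([1], dtype=torch.int64)},
]

loss_dict = model(images, targets)  # dict of classification/regression losses
loss = sum(loss_dict.values())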
| https://stackoverflow.com/questions/56771771/ |
Adding custom labels to pytorch dataloader/dataset does not work for custom dataset | I am working on the cactus image competition on Kaggle and I am trying to use the PyTorch dataloader for my CNN. However, I am running into an issue where I cannot set the labels for the training set. The training set images are given in a folder and the labels are in a csv file. This is my code.
train = torchvision.datasets.ImageFolder(root='../input/train',
transform=transform)
train.targets = torch.from_numpy(df['has_cactus'].values)
train_loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True, num_workers=2)
for i, data in enumerate(train_loader, 0):
print(data[1])
This code outputs batch tensors of all zeros, which is clearly incorrect as the great majority of the labels(if you were to look at the dataframe) are ones. I believe that this is a problem with assigning the labels to "train.targets". If "train.targets" is printed prior to the assignment of the other labels, it returns a tensor of all zeros which is consistent with the incorrect results that I am getting. How do I fix this issue?
| I typically inherit the built-in Dataset class as follows:
from torch.utils.data import Dataset, DataLoader
class DataSet(Dataset):
def __init__(self, root):
"""Init function should not do any heavy lifting, but
must initialize how many items are available in this data set.
"""
self.ROOT = root
self.images = read_images(root + "/images")
self.labels = read_labels(root + "/labels")
def __len__(self):
"""return number of points in our dataset"""
return len(self.images)
def __getitem__(self, idx):
""" Here we have to return the item requested by `idx`
The PyTorch DataLoader class will use this method to make an iterable for
our training or validation loop.
"""
img = self.images[idx]
label = self.labels[idx]
return img, label
And now, you can create an instance of this class as,
ds = DataSet('../input/train')
Now, you can instantiate the DataLoader:
dl = DataLoader(ds, batch_size=TRAIN_BATCH_SIZE, shuffle=False, num_workers=4, drop_last=True)
This will create batches of your data that you can access as:
for image, label in dl:
print(label)
| https://stackoverflow.com/questions/56774582/ |
How to obtain input data from ONNX model? | I have exported my PyTorch model to ONNX. Now, is there a way for me to obtain the input layer from that ONNX model?
Exporting PyTorch model to ONNX
import torch.onnx
checkpoint = torch.load("./saved_pytorch_model.pth")
model.load_state_dict(checkpoint['state_dict'])
input = torch.tensor(df_X.values).float()
torch.onnx.export(model, input, "onnx_model.onnx")
Loading ONNX model
onnx_model = onnx.load('onnx_model.onnx')
I want to be able to somehow obtain the input layer from onnx_model. Is this possible?
| The ONNX model is a protobuf structure, as defined here (https://github.com/onnx/onnx/blob/master/onnx/onnx.in.proto). You can work with it using the standard protobuf methods generated for python (see: https://developers.google.com/protocol-buffers/docs/reference/python-generated). I don't understand what exactly you want to extract. But you can iterate through the nodes that make up the graph (model.graph.node). The first node in the graph may or may not correspond to what you might consider the first layer (it depends on how the translation was done). You can also get the inputs of the model (model.graph.input).
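For example, a small sketch that lists the graph inputs (name and shape) and peeks at the first node, assuming the export from the question:
import onnx

onnx_model = onnx.load('onnx_model.onnx')

# graph inputs: name and (static) shape of each input tensor
for inp in onnx_model.graph.input:
    shape = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, shape)

# first node of the graph (may or may not be the "first layer")
print(onnx_model.graph.node[0].op_type, onnx_model.graph.node[0].input)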
| https://stackoverflow.com/questions/56795995/ |
Pytorch Build from Source gives Error make: *** No rule to make target 'install'. Stop | I am following this guide to build Pytorch from scratch on a Raspberry Pi3B. For some reason, there is an error:
Building wheel torch-1.2.0a0+f13fadd
-- Building version 1.2.0a0+f13fadd
cmake --build . --target install --config Release -- -j 4
make: *** No rule to make target 'install'. Stop.
when I call python3 setup.py build. I am running Python version 3.5 and I am unsure why this seems to be failing.
| Recently I encountered this error. After some research, in
https://stackoverflow.com/a/46987554/12164529
someone mentioned something about the cache.
Therefore I guessed it was caused by some CMake cache behavior, so I ran this command:
sudo USE_ROCM=1 USE_LMDB=1 USE_OPENCV=1 MAX_JOBS=15 python setup.py clean
And the error went away.
P.S. This is my first answer on Stack Overflow, and I'm not sure if it is a good one, but I hope it helps people who find their way here.
| https://stackoverflow.com/questions/56802904/ |
ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable MASTER_ADDR expected, but not set | I am not able to initialize the process group in PyTorch for a BERT model
I had tried to initialize using following code:
import torch
import datetime
torch.distributed.init_process_group(
backend='nccl',
init_method='env://',
timeout=datetime.timedelta(0, 1800),
world_size=0,
rank=0,
store=None,
group_name=''
)
and tried to access the get_world_size() function:
num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()
full code:
train_examples = None
num_train_optimization_steps = None
if do_train:
train_examples = processor.get_train_examples(data_dir)
num_train_optimization_steps = int(
len(train_examples) / train_batch_size / gradient_accumulation_steps) * num_train_epochs
if local_rank != -1:
import datetime
torch.distributed.init_process_group(backend='nccl',init_method='env://', timeout=datetime.timedelta(0, 1800), world_size=0, rank=0, store=None, group_name='')
num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()
print(num_train_optimization_steps)
| I solved the problem by referring to https://github.com/NVIDIA/apex/issues/99.
Specifically run
python -m torch.distributed.launch xxx.py
| https://stackoverflow.com/questions/56805951/ |
Difference between "detach()" and "with torch.nograd()" in PyTorch? | I know about two ways to exclude elements of a computation from the gradient calculation backward
Method 1: using with torch.no_grad()
with torch.no_grad():
y = reward + gamma * torch.max(net.forward(x))
loss = criterion(net.forward(torch.from_numpy(o)), y)
loss.backward();
Method 2: using .detach()
y = reward + gamma * torch.max(net.forward(x))
loss = criterion(net.forward(torch.from_numpy(o)), y.detach())
loss.backward();
Is there a difference between these two? Are there benefits/downsides to either?
| tensor.detach() creates a tensor that shares storage with tensor that does not require grad. It detaches the output from the computational graph. So no gradient will be backpropagated along this variable.
The wrapper with torch.no_grad() temporarily set all the requires_grad flag to false. torch.no_grad says that no operation should build the graph.
The difference is that one refers to only a given variable on which it is called. The other affects all operations taking place within the with statement. Also, torch.no_grad will use less memory because it knows from the beginning that no gradients are needed so it doesn’t need to keep intermediary results.
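A minimal sketch showing the difference in scope:
import torch

x = torch.ones(2, requires_grad=True)

# detach: only y is cut from the graph; z still tracks gradients
y = (x * 2).detach()
z = x * 3
print(y.requires_grad, z.requires_grad)  # False True

# no_grad: nothing built inside the block tracks gradients
with torch.no_grad():
    y = x * 2
    z = x * 3
print(y.requires_grad, z.requires_grad)  # False False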
Learn more about the differences between these along with examples from here.
| https://stackoverflow.com/questions/56816241/ |
Why doesn't setting `random.seed(42)` give me identical results in pytorch? | I am setting random seed for both random and numpy.random at the beginning of my main file:
import random
import numpy as np
np.random.seed(42)
random.seed(42)
import torch
Nevertheless, when I create a Net() object with randomly initialized parameters, it gives a completely different result every time:
net=neuralnet.Net()
print ("initialized params: ", net.fc1.weight)
Note that neuralnet.Net() is in a different file, and is a class that extends torch.nn.Module. it is torch.nn.Module that is randomly initializing net.fc1.weight, not my own code.
How is it possible that when I create a Net() object with randomly initialized parameters, it gives a completely different result every time?
| try:
import torch
torch.manual_seed(0)
For further information:
https://pytorch.org/docs/stable/notes/randomness.html
| https://stackoverflow.com/questions/56817007/ |
Compute a convolution with weights that were computed by another function | I would like to perform a conv2d, but a special one, because I compute the weights of the conv2d layer in a function for each sample. Can I simply assign the weights of a given layer for each sample?
I am implementing the Spatially variant convolution of this paper : https://arxiv.org/abs/1804.00389
Thanks
| It is actually very simple: torch.nn.functional.conv2d lets you pass the weight tensor explicitly on every call!
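A minimal sketch of how that could look per sample; predict_weights is a hypothetical stand-in for whatever function computes the kernels:
import torch
import torch.nn.functional as F

def predict_weights(sample):
    # stand-in for the function that predicts per-sample kernels;
    # returns weights of shape (out_channels, in_channels, kH, kW)
    return torch.randn(8, sample.size(0), 3, 3)

batch = torch.randn(4, 3, 32, 32)
outs = []
for sample in batch:
    w = predict_weights(sample)
    outs.append(F.conv2d(sample.unsqueeze(0), w, padding=1))
out = torch.cat(outs)  # (4, 8, 32, 32)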
| https://stackoverflow.com/questions/56842889/ |
How to enable Dict/OrderedDict/NamedTuple support in pytorch 1.1.0 JIT compiler? | From the release highlight of pytorch 1.1.0. It appears that the latest JIT compiler now supports Dict type. (Source: https://jaxenter.com/pytorch-1-1-158332.html)
Dictionary and list support in TorchScript: Lists and dictionary types behave like Python lists and dictionaries.
Unfortunately I can't find a way to make this improvement to work properly. The following code is a simple example of exporting a Feature Pyramid Network (FPN) into tensorboard, which uses the JIT compiler:
from collections import OrderedDict
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
torchWriter = SummaryWriter(log_dir=".tensorboard/example1")
m = torchvision.ops.FeaturePyramidNetwork([10, 20, 30], 5)
# get some dummy data
x = OrderedDict()
x['feat0'] = torch.rand(1, 10, 64, 64)
x['feat2'] = torch.rand(1, 20, 16, 16)
x['feat3'] = torch.rand(1, 30, 8, 8)
# compute the FPN on top of x
output = m.forward(x)
print([(k, v.shape) for k, v in output.items()])
torchWriter.add_graph(m, input_to_model=x)
When I run it I got the following error:
Traceback (most recent call last):
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 276, in graph
trace, _ = torch.jit.get_trace_graph(model, args)
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/jit/__init__.py", line 231, in get_trace_graph
return LegacyTracedModule(f, _force_outplace, return_inputs)(*args, **kwargs)
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/jit/__init__.py", line 284, in forward
in_vars, in_desc = _flatten(args)
RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got collections.OrderedDict
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/peng/git-drone/gate_detection/python/gate_detection/errorcase/tb.py", line 36, in <module>
torchWriter.add_graph(m, input_to_model=x)
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 534, in add_graph
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, **kwargs))
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 279, in graph
_ = model(*args) # don't catch, just print the error message
File "/home/shared/virtualenv/dl-torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 4 were given
From the error message it appears that the support is still pending. Can I trust the release highlight? Or am I not using the API properly?
| The release notes are accurate albeit a little vague. The dictionary/list/user defined classes support described in that link (and the official release notes) only apply to the TorchScript compiler (there are some code examples in the release notes), but SummaryWriter by default will run the TorchScript tracer on whatever module you pass to it, and the tracer only supports Tensors and lists/tuples of Tensors.
So the fix would be to use the TorchScript compiler rather than the tracer, but that requires:
Access to the original code
Support for the compiled output (ScriptModule) in Tensorboard
You should file an issue for (2), and there is ongoing work to fix (1), but this won't work in the short term for that model afaik.
| https://stackoverflow.com/questions/56843726/ |
How to select subtensor from a tensor | I have a tensor A with the dimensions [N1, N2, N3/2, 2, N4, N5]. Another tensor B is a boolean index tensor with the dimensions [N1, N2, N3/2] = 1,0,0,0,1....
Now, I want to get a subtensor C with the dimensions [N1, N2, N3/2, N4, N5] using the index B.
How could I implement this with pytorch?
Note that I'd rather not use a for-loop because it is very slow.
I have looked up some functions in pytorch but found none of them suitable for my task.
| This doesn't quite add up: how can C keep the full [N1, N2, N3/2, ...] shape when B only selects part of the entries of A to copy into C?
Suppose B has k non-zero elements (that is, B selects k out of the N1*N2*N3/2 matrices of size N4*N5 in A); then C can only have shape [k, N4, N5], with k < N1*N2*N3/2.
You can use nonzero() to convert B from logical indices to k integer indices and then use these to select the k elements
C = A.view(-1, *A.shape[-2:])[B.view(-1).nonzero().squeeze(1), ...]
| https://stackoverflow.com/questions/56844517/ |
model.cuda() in pytorch | If I call model.cuda() in pytorch where model is a subclass of nn.Module, and say I have four GPUs, how will it utilize the four GPUs, and how do I know which GPUs are being used?
| If you have a custom module derived from nn.Module, then after model.cuda() all model parameters (the model.parameters() iterator can show you these) will end up on your CUDA device.
To check where your parameters are, just print them (cuda:0 in my case):
class M(nn.Module):
'custom module'
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
m = M()
m.cuda()
for _ in m.parameters():
print(_)
# Parameter containing:
# tensor([[-0.0201, 0.0282, -0.0258, ..., 0.0056, 0.0146, 0.0220],
# [ 0.0098, -0.0264, 0.0283, ..., 0.0286, -0.0052, 0.0007],
# [-0.0036, -0.0045, -0.0227, ..., -0.0048, -0.0003, -0.0330],
# ...,
# [ 0.0217, -0.0008, 0.0029, ..., -0.0213, 0.0005, 0.0050],
# [-0.0050, 0.0320, 0.0013, ..., -0.0057, -0.0213, 0.0045],
# [-0.0302, 0.0315, 0.0356, ..., 0.0259, 0.0166, -0.0114]],
# device='cuda:0', requires_grad=True)
# Parameter containing:
# tensor([-0.0027, -0.0353, -0.0349, -0.0236, -0.0230, 0.0176, -0.0156, 0.0037,
# 0.0222, -0.0332], device='cuda:0', requires_grad=True)
You can also specify the device like this:
m.cuda('cuda:0')
With torch.cuda.device_count() you may check how many devices you have.
| https://stackoverflow.com/questions/56852347/ |
Difference in shape of tensor torch.Size([]) and torch.Size([1]) in pytorch | I am new to pytorch. While playing around with tensors I observed 2 types of tensors-
tensor(58)
tensor([57.3895])
I printed their shape and the output was respectively -
torch.Size([])
torch.Size([1])
What is the difference between the two?
| You can play with tensors having the single scalar value like this:
import torch
t = torch.tensor(1)
print(t, t.shape) # tensor(1) torch.Size([])
t = torch.tensor([1])
print(t, t.shape) # tensor([1]) torch.Size([1])
t = torch.tensor([[1]])
print(t, t.shape) # tensor([[1]]) torch.Size([1, 1])
t = torch.tensor([[[1]]])
print(t, t.shape) # tensor([[[1]]]) torch.Size([1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[1]]]]) torch.Size([1, 1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[[1]]]]]) torch.Size([1, 1, 1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[[[1]]]]]]) torch.Size([1, 1, 1, 1, 1, 1])
# squeeze the dimension with index 0
t = torch.squeeze(t,dim=0)
print(t, t.shape) # tensor([[[[[1]]]]]) torch.Size([1, 1, 1, 1, 1])
#back to beginning.
t = torch.squeeze(t)
print(t, t.shape) # tensor(1) torch.Size([])
print(type(t)) # <class 'torch.Tensor'>
print(type(t.data)) # <class 'torch.Tensor'>
Tensors have a size, or shape; these are the same thing, and are actually represented by the class torch.Size.
You can write help(torch.Size) to get more info.
Any time you write t.shape or call t.size(), you get that size info.
The idea is that tensors can have different dimensionality for the data inside them, including the zero-dimensional torch.Size([]).
Any time you unsqueeze a tensor it will add another dimension of size 1.
Any time you squeeze a tensor it will remove a dimension of size 1, or in the general case all dimensions of size one.
| https://stackoverflow.com/questions/56856996/ |
Vectorized implementation of field-aware factorization | I would like to implement the field-aware factorization machine (FFM) in a vectorized way. In FFM, a prediction is made by summing, over all feature pairs j1 < j2, the interaction terms <w_{j1, f2}, w_{j2, f1}> * x_{j1} * x_{j2},
where the w's are embeddings that depend on the feature and on the field of the other feature. For more info, see equation (4) in the FFM paper.
To do so, I have defined the following parameter:
import torch
W = torch.nn.Parameter(torch.Tensor(n_features, n_fields, n_factors), requires_grad=True)
Now, given an input x of size (batch_size, n_features), I want to be able to compute the previous equation. Here is my current (non-vectorized) implementation:
total_inter = torch.zeros(x.shape[0])
for i in range(n_features):
for j in range(i + 1, n_features):
temp1 = torch.mm(
x[:, i].unsqueeze(1),
W[i, feature2field[j], :].unsqueeze(0))
temp2 = torch.mm(
x[:, j].unsqueeze(1),
W[j, feature2field[i], :].unsqueeze(0))
total_inter += torch.sum(temp1 * temp2, dim=1)
Unsurprisingly, this implementation is horribly slow since n_features can easily be as large as 1000! Note however that most of the entries of x are 0. All inputs are appreciated!
Edit:
If it can help in any ways, here are some implementations of this model in PyTorch:
pytorch-fm
ctr_model_zoo
Unfortunately, I cannot figure out exactly how they have done it.
Additional update:
I can now obtain the product of x and W in a more efficient way by doing:
temp = torch.einsum('ij, jkl -> ijkl', x, W)
Thus, my loop is now:
total_inter = torch.zeros(x.shape[0])
for i in range(n_features):
for j in range(i + 1, n_features):
temp1 = temp[:, i, feature2field[j], :]
temp2 = temp[:, j, feature2field[i], :]
total_inter += 0.5 * torch.sum(temp1 * temp2, dim=1)
It is, however, still too slow, since this loop runs for about 500,000 iterations.
|
Something that could potentially help you speed up the multiplication is using pytorch sparse tensors.
Also something that might work would be the following:
Create n arrays, one for each feature i that would hold its corresponding field factors in each row. e.g. for feature i = 0
[ W[0, feature2field[0], :],
W[0, feature2field[1], :],
W[0, feature2field[n], :]]
Then calculate the multiplication of those arrays, lets call them F, with X
R[i] = F[i] * X
So each element of R would hold the result of multiplying F[i] with X, which is itself an array.
Next you would multiply each R[i] with its transpose
R[i] = R[i] * R[i].T
Now you can do the summation in a loop like before
for i in range(n_features):
total_inter += torch.sum(R[i], dim=1)
Please take this with a grain of salt as I haven't tested it. In any case I think it will point you in the right direction.
One problem is that in the transpose multiplication each element is also multiplied with itself and then added into the sum. I don't think it will affect the classifier, but in any case you can zero out the elements on and above the diagonal.
Also, although minor, please move the first unsqueeze operation outside of the nested for loop.
I hope it helps.
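If memory allows a (batch, n_features, n_features, n_factors) intermediate, here is a hedged sketch of one fully vectorized variant built on the einsum from the question (assuming feature2field is a LongTensor):
import torch

# Fmat[i, j, :] = W[i, feature2field[j], :]
# i.e. the factor of feature i for the field of feature j
Fmat = W[torch.arange(n_features).unsqueeze(1), feature2field.unsqueeze(0), :]

# xe[b, i, j, :] = x[b, i] * W[i, field(j), :]
xe = torch.einsum('bi,ijk->bijk', x, Fmat)

# inter[b, i, j] = dot(x_i * w_{i, f(j)}, x_j * w_{j, f(i)})
inter = torch.einsum('bijk,bjik->bij', xe, xe)

# sum over i < j: half of (everything minus the diagonal i == j terms)
total_inter = 0.5 * (inter.sum(dim=(1, 2)) - torch.einsum('bii->b', inter))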
| https://stackoverflow.com/questions/56860236/ |
Runtime error when load tensorflow and pytorch models at the same time | Here is the setup
Project A
|-- Module A
|--->step1 - Loads and run tensorflow pretrained model to detect
texts
|
|--->step2 - import and Instantiates Module-B from Project B
--> Runtime Failure here.
Project B
|-- Module B
|--> step 1 - Loads pretrained Pytorch model
|--> step 2 - Runs model to classify text.
Here is the actual runtime error
self.trmodel.load_state_dict(torch.load(self.opts.saved_model, map_location='cpu'))
File/root/anaconda3/envs/py3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
self.__class__.__name__,\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
Unexpected key(s) in state_dict:module.Transformation.LocalizationNetwork.conv.0.weight",module.Transformation.LocalizationNetwork.conv.1.weight",module.Transformation.LocalizationNetwork.conv.1.bias",module.Transformation.LocalizationNetwork.conv.1.running_mean",module.Transformation.LocalizationNetwork.conv.1.running_var",module.Transformation.LocalizationNetwork.conv.1.num_batches_tracked",module.Transformation.LocalizationNetwork.conv.4.weight",module.Transformation.LocalizationNetwork.conv.5.weight",module.Transformation.LocalizationNetwork.conv.5.bias",module.Transformation.LocalizationNetwork.conv.5.running_mean",module.Transformation.LocalizationNetwork.conv.5.running_var",module.Transformation.LocalizationNetwork.conv.5.num_batches_tracked",module.Transformation.LocalizationNetwork.conv.8.weight",module.Transformation.LocalizationNetwork.conv.9.weight",module.Transformation.LocalizationNetwork.conv.9.bias",module.Transformation.LocalizationNetwork.conv.9.running_mean",module.Transformation.LocalizationNetwork.conv.9.running_var",module.Transformation.LocalizationNetwork.conv.9.num_batches_tracked",module.Transformation.LocalizationNetwork.conv.12.weight",module.Transformation.LocalizationNetwork.conv.13.weight",module.Transformation.LocalizationNetwork.conv.13.bias",module.Transformation.LocalizationNetwork.conv.13.running_mean",module.Transformation.LocalizationNetwork.conv.13.running_var",module.Transformation.LocalizationNetwork.conv.13.num_batches_tracked",module.Transformation.LocalizationNetwork.localization_fc1.0.weight",module.Transformation.LocalizationNetwork.localization_fc1.0.bias",module.Transformation.LocalizationNetwork.localization_fc2.weight",module.Transformation.LocalizationNetwork.localization_fc2.bias",module.Transformation.GridGenerator.inv_delta_C",module.Transformation.GridGenerator.P_hat",module.FeatureExtraction.ConvNet.conv0_1.weight",module.FeatureExtraction.ConvNet.bn0_1.weight",module.FeatureExtraction.ConvNet.bn0_1.bias",module.FeatureExtraction.ConvNet.bn0_1.running_mean",module.FeatureExtraction.ConvNet.bn0_1.running_var",module.FeatureExtraction.ConvNet.bn0_1.num_batches_tracked",module.FeatureExtraction.ConvNet.conv0_2.weight",module.FeatureExtraction.ConvNet.bn0_2.weight",module.FeatureExtraction.ConvNet.bn0_2.bias",module.FeatureExtraction.ConvNet.bn0_2.running_mean",module.FeatureExtraction.ConvNet.bn0_2.running_var",module.FeatureExtraction.ConvNet.bn0_2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer1.0.conv1.weight",module.FeatureExtraction.ConvNet.layer1.0.bn1.weight",module.FeatureExtraction.ConvNet.layer1.0.bn1.bias",module.FeatureExtraction.ConvNet.layer1.0.bn1.running_mean",module.FeatureExtraction.ConvNet.layer1.0.bn1.running_var",module.FeatureExtraction.ConvNet.layer1.0.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer1.0.conv2.weight",module.FeatureExtraction.ConvNet.layer1.0.bn2.weight",module.FeatureExtraction.ConvNet.layer1.0.bn2.bias",module.FeatureExtraction.ConvNet.layer1.0.bn2.running_mean",module.FeatureExtraction.ConvNet.layer1.0.bn2.running_var",module.FeatureExtraction.ConvNet.layer1.0.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer1.0.downsample.0.weight",module.FeatureExtraction.ConvNet.layer1.0.downsample.1.weight",module.FeatureExtraction.ConvNet.layer1.0.downsample.1.bias",module.FeatureExtraction.ConvNet.layer1.0.downsample.1.running_mean",module.FeatureExtraction.ConvNet.layer1.0.downsample.1.running_var",module.FeatureExtraction.ConvNet.layer1.0.downsample.1.num_batches_tr
acked",module.FeatureExtraction.ConvNet.conv1.weight",module.FeatureExtraction.ConvNet.bn1.weight",module.FeatureExtraction.ConvNet.bn1.bias",module.FeatureExtraction.ConvNet.bn1.running_mean",module.FeatureExtraction.ConvNet.bn1.running_var",module.FeatureExtraction.ConvNet.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer2.0.conv1.weight",module.FeatureExtraction.ConvNet.layer2.0.bn1.weight",module.FeatureExtraction.ConvNet.layer2.0.bn1.bias",module.FeatureExtraction.ConvNet.layer2.0.bn1.running_mean",module.FeatureExtraction.ConvNet.layer2.0.bn1.running_var",module.FeatureExtraction.ConvNet.layer2.0.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer2.0.conv2.weight",module.FeatureExtraction.ConvNet.layer2.0.bn2.weight",module.FeatureExtraction.ConvNet.layer2.0.bn2.bias",module.FeatureExtraction.ConvNet.layer2.0.bn2.running_mean",module.FeatureExtraction.ConvNet.layer2.0.bn2.running_var",module.FeatureExtraction.ConvNet.layer2.0.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer2.0.downsample.0.weight",module.FeatureExtraction.ConvNet.layer2.0.downsample.1.weight",module.FeatureExtraction.ConvNet.layer2.0.downsample.1.bias",module.FeatureExtraction.ConvNet.layer2.0.downsample.1.running_mean",module.FeatureExtraction.ConvNet.layer2.0.downsample.1.running_var",module.FeatureExtraction.ConvNet.layer2.0.downsample.1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer2.1.conv1.weight",module.FeatureExtraction.ConvNet.layer2.1.bn1.weight",module.FeatureExtraction.ConvNet.layer2.1.bn1.bias",module.FeatureExtraction.ConvNet.layer2.1.bn1.running_mean",module.FeatureExtraction.ConvNet.layer2.1.bn1.running_var",module.FeatureExtraction.ConvNet.layer2.1.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer2.1.conv2.weight",module.FeatureExtraction.ConvNet.layer2.1.bn2.weight",module.FeatureExtraction.ConvNet.layer2.1.bn2.bias",module.FeatureExtraction.ConvNet.layer2.1.bn2.running_mean",module.FeatureExtraction.ConvNet.layer2.1.bn2.running_var",module.FeatureExtraction.ConvNet.layer2.1.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.conv2.weight",module.FeatureExtraction.ConvNet.bn2.weight",module.FeatureExtraction.ConvNet.bn2.bias",module.FeatureExtraction.ConvNet.bn2.running_mean",module.FeatureExtraction.ConvNet.bn2.running_var",module.FeatureExtraction.ConvNet.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.0.conv1.weight",module.FeatureExtraction.ConvNet.layer3.0.bn1.weight",module.FeatureExtraction.ConvNet.layer3.0.bn1.bias",module.FeatureExtraction.ConvNet.layer3.0.bn1.running_mean",module.FeatureExtraction.ConvNet.layer3.0.bn1.running_var",module.FeatureExtraction.ConvNet.layer3.0.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.0.conv2.weight",module.FeatureExtraction.ConvNet.layer3.0.bn2.weight",module.FeatureExtraction.ConvNet.layer3.0.bn2.bias",module.FeatureExtraction.ConvNet.layer3.0.bn2.running_mean",module.FeatureExtraction.ConvNet.layer3.0.bn2.running_var",module.FeatureExtraction.ConvNet.layer3.0.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.0.downsample.0.weight",module.FeatureExtraction.ConvNet.layer3.0.downsample.1.weight",module.FeatureExtraction.ConvNet.layer3.0.downsample.1.bias",module.FeatureExtraction.ConvNet.layer3.0.downsample.1.running_mean",module.FeatureExtraction.ConvNet.layer3.0.downsample.1.running_var",module.FeatureExtraction.ConvNet.layer3.0.downsample.1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.1.conv1.weight",module.FeatureEx
traction.ConvNet.layer3.1.bn1.weight",module.FeatureExtraction.ConvNet.layer3.1.bn1.bias",module.FeatureExtraction.ConvNet.layer3.1.bn1.running_mean",module.FeatureExtraction.ConvNet.layer3.1.bn1.running_var",module.FeatureExtraction.ConvNet.layer3.1.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.1.conv2.weight",module.FeatureExtraction.ConvNet.layer3.1.bn2.weight",module.FeatureExtraction.ConvNet.layer3.1.bn2.bias",module.FeatureExtraction.ConvNet.layer3.1.bn2.running_mean",module.FeatureExtraction.ConvNet.layer3.1.bn2.running_var",module.FeatureExtraction.ConvNet.layer3.1.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.2.conv1.weight",module.FeatureExtraction.ConvNet.layer3.2.bn1.weight",module.FeatureExtraction.ConvNet.layer3.2.bn1.bias",module.FeatureExtraction.ConvNet.layer3.2.bn1.running_mean",module.FeatureExtraction.ConvNet.layer3.2.bn1.running_var",module.FeatureExtraction.ConvNet.layer3.2.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.2.conv2.weight",module.FeatureExtraction.ConvNet.layer3.2.bn2.weight",module.FeatureExtraction.ConvNet.layer3.2.bn2.bias",module.FeatureExtraction.ConvNet.layer3.2.bn2.running_mean",module.FeatureExtraction.ConvNet.layer3.2.bn2.running_var",module.FeatureExtraction.ConvNet.layer3.2.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.3.conv1.weight",module.FeatureExtraction.ConvNet.layer3.3.bn1.weight",module.FeatureExtraction.ConvNet.layer3.3.bn1.bias",module.FeatureExtraction.ConvNet.layer3.3.bn1.running_mean",module.FeatureExtraction.ConvNet.layer3.3.bn1.running_var",module.FeatureExtraction.ConvNet.layer3.3.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.3.conv2.weight",module.FeatureExtraction.ConvNet.layer3.3.bn2.weight",module.FeatureExtraction.ConvNet.layer3.3.bn2.bias",module.FeatureExtraction.ConvNet.layer3.3.bn2.running_mean",module.FeatureExtraction.ConvNet.layer3.3.bn2.running_var",module.FeatureExtraction.ConvNet.layer3.3.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.4.conv1.weight",module.FeatureExtraction.ConvNet.layer3.4.bn1.weight",module.FeatureExtraction.ConvNet.layer3.4.bn1.bias",module.FeatureExtraction.ConvNet.layer3.4.bn1.running_mean",module.FeatureExtraction.ConvNet.layer3.4.bn1.running_var",module.FeatureExtraction.ConvNet.layer3.4.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer3.4.conv2.weight",module.FeatureExtraction.ConvNet.layer3.4.bn2.weight",module.FeatureExtraction.ConvNet.layer3.4.bn2.bias",module.FeatureExtraction.ConvNet.layer3.4.bn2.running_mean",module.FeatureExtraction.ConvNet.layer3.4.bn2.running_var",module.FeatureExtraction.ConvNet.layer3.4.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.conv3.weight",module.FeatureExtraction.ConvNet.bn3.weight",module.FeatureExtraction.ConvNet.bn3.bias",module.FeatureExtraction.ConvNet.bn3.running_mean",module.FeatureExtraction.ConvNet.bn3.running_var",module.FeatureExtraction.ConvNet.bn3.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.0.conv1.weight",module.FeatureExtraction.ConvNet.layer4.0.bn1.weight",module.FeatureExtraction.ConvNet.layer4.0.bn1.bias",module.FeatureExtraction.ConvNet.layer4.0.bn1.running_mean",module.FeatureExtraction.ConvNet.layer4.0.bn1.running_var",module.FeatureExtraction.ConvNet.layer4.0.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.0.conv2.weight",module.FeatureExtraction.ConvNet.layer4.0.bn2.weight",module.FeatureExtraction.ConvNet.layer4.0.bn2.bias",module.FeatureExtraction.ConvNet.layer4
.0.bn2.running_mean",module.FeatureExtraction.ConvNet.layer4.0.bn2.running_var",module.FeatureExtraction.ConvNet.layer4.0.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.1.conv1.weight",module.FeatureExtraction.ConvNet.layer4.1.bn1.weight",module.FeatureExtraction.ConvNet.layer4.1.bn1.bias",module.FeatureExtraction.ConvNet.layer4.1.bn1.running_mean",module.FeatureExtraction.ConvNet.layer4.1.bn1.running_var",module.FeatureExtraction.ConvNet.layer4.1.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.1.conv2.weight",module.FeatureExtraction.ConvNet.layer4.1.bn2.weight",module.FeatureExtraction.ConvNet.layer4.1.bn2.bias",module.FeatureExtraction.ConvNet.layer4.1.bn2.running_mean",module.FeatureExtraction.ConvNet.layer4.1.bn2.running_var",module.FeatureExtraction.ConvNet.layer4.1.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.2.conv1.weight",module.FeatureExtraction.ConvNet.layer4.2.bn1.weight",module.FeatureExtraction.ConvNet.layer4.2.bn1.bias",module.FeatureExtraction.ConvNet.layer4.2.bn1.running_mean",module.FeatureExtraction.ConvNet.layer4.2.bn1.running_var",module.FeatureExtraction.ConvNet.layer4.2.bn1.num_batches_tracked",module.FeatureExtraction.ConvNet.layer4.2.conv2.weight",module.FeatureExtraction.ConvNet.layer4.2.bn2.weight",module.FeatureExtraction.ConvNet.layer4.2.bn2.bias",module.FeatureExtraction.ConvNet.layer4.2.bn2.running_mean",module.FeatureExtraction.ConvNet.layer4.2.bn2.running_var",module.FeatureExtraction.ConvNet.layer4.2.bn2.num_batches_tracked",module.FeatureExtraction.ConvNet.conv4_1.weight",module.FeatureExtraction.ConvNet.bn4_1.weight",module.FeatureExtraction.ConvNet.bn4_1.bias",module.FeatureExtraction.ConvNet.bn4_1.running_mean",module.FeatureExtraction.ConvNet.bn4_1.running_var",module.FeatureExtraction.ConvNet.bn4_1.num_batches_tracked",module.FeatureExtraction.ConvNet.conv4_2.weight",module.FeatureExtraction.ConvNet.bn4_2.weight",module.FeatureExtraction.ConvNet.bn4_2.bias",module.FeatureExtraction.ConvNet.bn4_2.running_mean",module.FeatureExtraction.ConvNet.bn4_2.running_var",module.FeatureExtraction.ConvNet.bn4_2.num_batches_tracked",module.SequenceModeling.0.rnn.weight_ih_l0",module.SequenceModeling.0.rnn.weight_hh_l0",module.SequenceModeling.0.rnn.bias_ih_l0",module.SequenceModeling.0.rnn.bias_hh_l0",module.SequenceModeling.0.rnn.weight_ih_l0_reverse",module.SequenceModeling.0.rnn.weight_hh_l0_reverse",module.SequenceModeling.0.rnn.bias_ih_l0_reverse",module.SequenceModeling.0.rnn.bias_hh_l0_reverse",module.SequenceModeling.0.linear.weight",module.SequenceModeling.0.linear.bias",module.SequenceModeling.1.rnn.weight_ih_l0",module.SequenceModeling.1.rnn.weight_hh_l0",module.SequenceModeling.1.rnn.bias_ih_l0",module.SequenceModeling.1.rnn.bias_hh_l0",module.SequenceModeling.1.rnn.weight_ih_l0_reverse",module.SequenceModeling.1.rnn.weight_hh_l0_reverse",module.SequenceModeling.1.rnn.bias_ih_l0_reverse",module.SequenceModeling.1.rnn.bias_hh_l0_reverse",module.SequenceModeling.1.linear.weight",module.SequenceModeling.1.linear.bias",module.Prediction.attention_cell.i2h.weight",module.Prediction.attention_cell.h2h.weight",module.Prediction.attention_cell.h2h.bias",module.Prediction.attention_cell.score.weight",module.Prediction.attention_cell.rnn.weight_ih",module.Prediction.attention_cell.rnn.weight_hh",module.Prediction.attention_cell.rnn.bias_ih",module.Prediction.attention_cell.rnn.bias_hh",module.Prediction.generator.weight",module.Prediction.generator.bias".
Here is the catch
Independently, both Module A and Module B run without a problem in the same Py3 env.
I used a Python shell and imported Module B after loading Module A, and it loads just fine.
Any clue on what's going on?
For reference
Module B (code where it fails)
def pre_load_model(self):
print("preloading the model with opts "+str(self.opts))
self.trmodel = Model(self.opts)
self.trmodel = torch.nn.DataParallel(self.trmodel)
if torch.cuda.is_available():
self.trmodel = self.trmodel.cuda()
self.device = torch.device('cuda:0')
else:
self.device = torch.device('cpu')
# load model
print('loading pretrained model from %s' % self.opts.saved_model)
if torch.cuda.is_available():
self.trmodel.load_state_dict(torch.load(self.opts.saved_model))
else:
self.trmodel.load_state_dict(torch.load(self.opts.saved_model, map_location='cpu'))
| I think you are running into the Python Global Interpreter Lock (GIL) problem.
| https://stackoverflow.com/questions/56861681/ |
Getting the accuracy from classification_report back into a list | I am using Sklearn's classification_report to summarize my train and test epochs.
sklearn.metrics.classification_report
I get something like this back for each epoch:
>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 2]
>>> y_pred = [0, 0, 2, 2, 1]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
precision recall f1-score support
class 0 0.50 1.00 0.67 1
class 1 0.00 0.00 0.00 1
class 2 1.00 0.67 0.80 3
accuracy 0.60 5
macro avg 0.50 0.56 0.49 5
weighted avg 0.70 0.60 0.61 5
(e.g. from sklearn script)
Now I am searching for a way, to get those accuracy for each epoch in a list to calculate the mean and std of all accuracy.
This question seems to be pretty trivial but as you can see from my questions before I am pretty new to Python/Machine Learning.
Thanks for your help
Leo
| Let's have a look at the documentation, which contains information about the input parameter output_dict:
output_dict : bool (default = False) If True, return output as dict
If you call classification_report(y_true, y_pred, target_names=target_names, output_dict=True) you get the dictionary. And then you are one Stack Overflow question away from your solution.
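A minimal sketch of the remaining step, reusing y_true, y_pred and target_names from the question and assuming you collect them once per epoch:
import numpy as np
from sklearn.metrics import classification_report

accuracies = []

# inside your epoch loop, after collecting y_true and y_pred:
report = classification_report(y_true, y_pred,
                               target_names=target_names,
                               output_dict=True)
accuracies.append(report['accuracy'])

# after training:
print(np.mean(accuracies), np.std(accuracies))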
| https://stackoverflow.com/questions/56870373/ |
Can you pass a variable to a data member in Python as done in Pytorch? | Here is the sample code from the nn.Module of pytorch documentation:
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5)
self.conv2 = nn.Conv2d(20, 20, 5)
def forward(self, x):
x = F.relu(self.conv1(x))
return F.relu(self.conv2(x))
Here, we are passing x to self.conv1() in the forward function.
However, self.conv1 is a variable, as can be seen from the line self.conv1 = nn.Conv2d(1, 20, 5) in the __init__ function.
How is this possible?
| Forward passes of anything in PyTorch, layer or network, are done in a functional way: var(x). This works in Python by overriding the __call__() built-in. Try it yourself: make a class, override __call__(), and use the instance like a function.
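A minimal sketch:
class Greeter:
    def __init__(self, name):
        self.name = name

    def __call__(self, x):
        return 'hello %s, you passed %s' % (self.name, x)

g = Greeter('conv1')
print(g(42))  # the instance is called like a function, just like self.conv1(x)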
| https://stackoverflow.com/questions/56871848/ |
Initializing convolutional layers in CNN | Is there a function to initialize weights of a convolutional layer to focus more on information closer to the center of input images?
All my input images are centered, so pixels further away from the center of an image matter less than pixels closer to the center.
| Please see the GIFs here for a demonstration of convolutions:
https://github.com/vdumoulin/conv_arithmetic#convolution-animations
As you can see, convolutions operate the same regardless of the position in the image, so weight initialization cannot change the focus of the image.
It is also not advisable to rush into assuming what the net will and won't need to learn for your task.
| https://stackoverflow.com/questions/56876321/ |
How to multiply a dense matrix by a sparse matrix element-wise in pytorch | I can use torch.sparse.mm() or torch.spmm() to do multiplication between a sparse matrix and a dense matrix directly, but which function should I choose to do element-wise multiplication?
| You can implement this multiplication yourself
def sparse_dense_mul(s, d):
i = s._indices()
v = s._values()
dv = d[i[0,:], i[1,:]] # get values from relevant entries of dense matrix
return torch.sparse.FloatTensor(i, v * dv, s.size())
Note that thanks to the linearity of the multiplication operation you do not need to worry if s is coalesced or not.
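A small usage sketch with made-up indices and values:
import torch

i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])  # row/column indices of the nonzeros
v = torch.tensor([3., 4., 5.])
s = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
d = torch.ones(2, 3) * 2

print(sparse_dense_mul(s, d).to_dense())
# tensor([[ 0.,  0.,  6.],
#         [ 8.,  0., 10.]])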
| https://stackoverflow.com/questions/56880166/ |
Getting error while installing Flair (NLP Library) | I am trying to install NLP library "Flair" using pip and getting the error message:
ERROR: Could not find a version that satisfies the requirement torch>=1.0.0 (from flair) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch>=1.0.0 (from flair)
What should I do?
I am using Python 3.6 and tried to install flair using cmd and pip in a virtualenv, but there are still error messages.
(env) C:\Users\HP>pip install flair
I want the installation to complete without any issue.
| PyTorch (torch) currently cannot be installed from PyPI on Windows.
You will need conda to accomplish that:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
OR
For older torch versions you can use a whl file; the CUDA version can be 8.0, 9.0 or 10.0:
pip install torch==1.0.1 -f https://download.pytorch.org/whl/cu90/stable # CUDA 9.0 build
| https://stackoverflow.com/questions/56880498/ |
Custom Loss Function becomes zero when backpropagated | I am trying to write my own custom loss function based on the false positive and false negative rates. I made some dummy code so you can check the first 2 definitions as well. I added the rest so you can see how it is implemented. However, somewhere the gradient still turns out to be zero. At which step does the gradient become zero, and how can I check this? I would like to know how I can fix this :).
I tried providing you with more information so you can play around as well, but if you miss anything please do let me know!
requires_grad stays True during every step. However, the loss is still not updated during training, and therefore the NN does not train.
y = Variable(torch.tensor((0, 0, 0, 1, 1,1), dtype=torch.float), requires_grad = True)
y_pred = Variable(torch.tensor((0.333, 0.2, 0.01, 0.99, 0.49, 0.51), dtype=torch.float), requires_grad = True)
x = Variable(torch.tensor((0, 0, 0, 1, 1,1), dtype=torch.float), requires_grad = True)
x_pred = Variable(torch.tensor((0.55, 0.25, 0.01, 0.99, 0.65, 0.51), dtype=torch.float), requires_grad = True)
def binary_y_pred(y_pred):
y_pred.register_hook(lambda grad: print(grad))
y_pred = y_pred+torch.tensor(0.5, requires_grad=True, dtype=torch.float)
y_pred = y_pred.pow(5) # this is my way working around using torch.where()
y_pred = y_pred.pow(10)
y_pred = y_pred.pow(15)
m = nn.Sigmoid()
y_pred = m(y_pred)
y_pred = y_pred-torch.tensor(0.5, requires_grad=True, dtype=torch.float)
y_pred = y_pred*2
y_pred.register_hook(lambda grad: print(grad))
return y_pred
def confusion_matrix(y_pred, y):
TP = torch.sum(y*y_pred)
TN = torch.sum((1-y)*(1-y_pred))
FP = torch.sum((1-y)*y_pred)
FN = torch.sum(y*(1-y_pred))
k_eps = torch.tensor(1e-12, requires_grad=True, dtype=torch.float)
FN_rate = FN/(TP + FN + k_eps)
FP_rate = FP/(TN + FP + k_eps)
return FN_rate, FP_rate
def dif_rate(FN_rate_y, FN_rate_x):
dif = (FN_rate_y - FN_rate_x).pow(2)
return dif
def custom_loss_function(y_pred, y, x_pred, x):
y_pred = binary_y_pred(y_pred)
FN_rate_y, FP_rate_y = confusion_matrix(y_pred, y)
x_pred= binary_y_pred(x_pred)
FN_rate_x, FP_rate_x = confusion_matrix(x_pred, x)
FN_dif = dif_rate(FN_rate_y, FN_rate_x)
FP_dif = dif_rate(FP_rate_y, FP_rate_x)
cost = FN_dif+FP_dif
return cost
# I added the rest so you can see how it is implemented, but this piece does not fully run well! If you want this part to run as well, I can add more code.
class FeedforwardNeuralNetModel(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
super(FeedforwardNeuralNetModel, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden_dim, output_dim)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
out = self.fc1(x)
out = self.relu1(out)
out = self.fc2(out)
out = self.sigmoid(out)
return out
model = FeedforwardNeuralNetModel(input_dim, hidden_dim, output_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001, betas=[0.9, 0.99], amsgrad=True)
criterion = torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')
for epoch in range(num_epochs):
train_err = 0
for i, (samples, truths) in enumerate(train_loader):
samples = Variable(samples)
truths = Variable(truths)
optimizer.zero_grad() # Reset gradients
outputs = model(samples) # Do the forward pass
loss2 = criterion(outputs, truths) # Calculate loss
samples_y = Variable(samples_y)
samples_x = Variable(samples_x)
y_pred = model(samples_y)
y = Variable(y, requires_grad=True)
x_pred = model(samples_x)
x= Variable(x, requires_grad=True)
cost = custom_loss_function(y_pred, y, x_pred, x)
loss = loss2*0+cost #checking only if cost works.
loss.backward()
optimizer.step()
train_err += loss.item()
train_loss.append(train_err)
I expect the model to update during training. There is no error message.
| With your definitions: TP+FN = y and TN+FP = 1-y. Then you'll get FN_rate = 1-y_pred and FP_rate = y_pred. Your cost is then FN_rate+FP_rate = 1, whose gradient is 0.
You can check this by hand or using a library for symbolic mathematics (e.g., SymPy):
from sympy import symbols
y, y_pred = symbols("y y_pred")
TP = y * y_pred
TN = (1-y)*(1-y_pred)
FP = (1-y)*y_pred
FN = y*(1-y_pred)
# let's ignore the eps for now
FN_rate = FN/(TP + FN)
FP_rate = FP/(TN + FP)
cost = FN_rate + FP_rate
from sympy import simplify
print(simplify(cost))
# output: 1
| https://stackoverflow.com/questions/56882069/ |
How can I build a good approximation of an unknown distribution when only having samples from it in order to draw from it in torch? | Say I just have random samples from the Distribution and no other data - e.g. a list of numbers - [1,15,30,4,etc.]. What's the best way to estimate the distribution to draw more samples from it in pytorch?
I am currently assuming that all samples come from a Normal distribution and just using the mean and std of the samples to build it and draw from it. The underlying distribution, however, can be arbitrary.
samples = torch.Tensor([1,2,3,4,3,2,2,1])
Normal(samples.mean(), samples.std()).sample()
| If you have enough samples (and preferably a sample dimension higher than 1), you could model the distribution using a Variational Autoencoder or a Generative Adversarial Network (though I would stick with the first approach, as it's simpler).
Basically, after correct implementation and training, you would get a deterministic decoder able to map a hidden code you pass to it (say, a vector of size 10 drawn from a normal distribution) into a value from your target distribution.
Note it might not be reliable at all though, it would be even harder if your samples are 1D only.
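Still, to make the idea concrete, here is a minimal, illustrative VAE sketch for 1D samples — the layer sizes, the stand-in data, and the training length are all assumptions, not a recommended recipe:
import torch
import torch.nn as nn

samples = torch.randn(1000, 1) * 2 + 5   # stand-in for your real 1D samples

class VAE(nn.Module):
    def __init__(self, latent_dim=2, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):
    recon, mu, logvar = model(samples)
    rec = ((recon - samples) ** 2).mean()
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = rec + kld
    opt.zero_grad(); loss.backward(); opt.step()

# draw new samples by decoding noise from the prior
with torch.no_grad():
    new_samples = model.dec(torch.randn(10, 2))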
| https://stackoverflow.com/questions/56884391/ |
Will installation from source in different conda env lead to conflict? | I want to install a nightly-version(v1.2) of pytorch from source in a new conda env, and I got v1.1 outside conda env installed from pip.
Will that lead to conflict?
| If you didn't change anything else about your environment, then no. Conda shields its environment from the host environment in the sense that it comes with its own python distribution, libraries etc., hence there should be no conflicts.
Note, however, that conda also comes with pip, so you need to make sure that the pip you used to install v1.1 is indeed your host's pip and not conda's own pip.
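A quick sanity check (just a sketch) to see which interpreter and which torch install are active in a given shell:
import sys
import torch

print(sys.executable)     # which python is running (host vs conda env)
print(torch.__version__)  # which torch version that interpreter imports
print(torch.__file__)     # where that torch is installed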
| https://stackoverflow.com/questions/56896227/ |
PyTorch, Gradient calculations | https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/neural_networks_tutorial.ipynb
Hi I am trying to understand the NN with pytorch.
I have doubts in gradient calculations..
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
From the above code, I understand that loss.backward() calculates the gradients.
I am not sure how this information is shared with the optimizer so that it can update the weights.
Can anyone explain this?
Thanks in advance!
| When you created the optimizer in this line
optimizer = optim.SGD(net.parameters(), lr=0.01)
You provided net.parameters(), i.e. all learnable parameters that will be updated based on the gradients.
The model and the optimizer are connected only because they share the same parameters.
PyTorch parameters are tensors. They are not called variables anymore.
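A small sketch of this shared-parameter mechanism — backward() fills each parameter's .grad, and step() reads those same tensors (the layer sizes and lr are arbitrary):
import torch
import torch.nn as nn

net = nn.Linear(3, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.01)

loss = net(torch.randn(4, 3)).sum()
loss.backward()                 # fills p.grad for every parameter of net
print(net.weight.grad)          # the gradient backward() stored

before = net.weight.detach().clone()
opt.step()                      # reads p.grad and updates p in place
print(torch.allclose(net.weight, before - 0.01 * net.weight.grad))  # True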
| https://stackoverflow.com/questions/56898239/ |
LibTorch, convert deeplabv3_resnet101 to c++ | I am trying to use this example code from the PyTorch website to convert a python model for use in the PyTorch c++ api (LibTorch).
Converting to Torch Script via Tracing
To convert a PyTorch model to Torch Script via tracing, you must pass an instance of your model along with an example input to the torch.jit.trace function. This will produce a torch.jit.ScriptModule object with the trace of your model evaluation embedded in the module’s forward method:
import torch
import torchvision
# An instance of your model.
model = torchvision.models.resnet18()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
This example works fine, and saves out the file as expected.
When I switch to this model:
model = models.segmentation.deeplabv3_resnet101(pretrained=True)
It gives me the following error:
File "convert.py", line 14, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])
I assume this is because the example format is wrong, but how can I get the correct one?
Based on the comments below, my new code is:
import torch
import torchvision
from torchvision import models
model = models.segmentation.deeplabv3_resnet101(pretrained=True)
model.eval()
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")
And I now get the error:
File "convert.py", line 15, in <module>
traced_script_module = torch.jit.trace(model, example)
File "C:\Python37\lib\site-packages\torch\jit\__init__.py", line 636, in trace
var_lookup_fn, _force_outplace)
RuntimeError: Only tensors and (possibly nested) tuples of tensors are supported as inputs or outputs of traced functions (toIValue at C:\a\w\1\s\windows\pytorch\torch/csrc/jit/pybind_utils.h:91)
(no backtrace available)
| (from pytorch forums)
trace only supports modules that have a tensor or a tuple of tensors as output.
According to the deeplabv3 implementation, its output is an OrderedDict. That is a problem.
To solve this, make a wrapper module:
class wrapper(torch.nn.Module):
def __init__(self, model):
super(wrapper, self).__init__()
self.model = model
def forward(self, input):
results = []
output = self.model(input)
for k, v in output.items():
results.append(v)
return tuple(results)
model = wrapper(deeplab_model)  # the deeplabv3 model loaded above
#trace...
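For completeness, a hedged sketch of how the wrapper plugs into the tracing code from the question (calling eval() also avoids the batch-norm error triggered by the single-sample example input):
import torch
from torchvision import models

model = wrapper(models.segmentation.deeplabv3_resnet101(pretrained=True))
model.eval()  # also fixes the "Expected more than 1 value per channel" error
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("model.pt")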
This got my model saving out correctly.
| https://stackoverflow.com/questions/56902458/ |
Keras vs PyTorch LSTM different results | Trying to get similar results on the same dataset with Keras and PyTorch.
Data
from numpy import array
from numpy import hstack
from sklearn.model_selection import train_test_split
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps):
X, y = list(), list()
for i in range(len(sequences)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the dataset
if end_ix > len(sequences):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
def get_data():
# define input sequence
in_seq1 = array([x for x in range(0,500,10)])/1
in_seq2 = array([x for x in range(5,505,10)])/1
out_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
# convert to [rows, columns] structure
in_seq1 = in_seq1.reshape((len(in_seq1), 1))
in_seq2 = in_seq2.reshape((len(in_seq2), 1))
out_seq = out_seq.reshape((len(out_seq), 1))
# horizontally stack columns
dataset = hstack((in_seq1, in_seq2, out_seq))
n_features = 2 # this is number of parallel inputs
n_timesteps = 3 # this is number of timesteps
# convert into input/output
X, y = split_sequences(dataset, n_timesteps)
print(X.shape, y.shape)
X_train,x_test,Y_train, y_test = train_test_split(X,y,test_size = 0.2,shuffle=False)
return X_train,x_test,Y_train, y_test
Keras
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from sklearn.metrics import mean_squared_error
import testing.TimeSeries.datacreator as dc # !!!!change this!!!!
X_train,x_test,Y_train, y_test = dc.get_data()
n_features = 2 # this is number of parallel inputs
n_timesteps = 3 # this is number of timesteps
# define model
model = Sequential()
model.add(LSTM(1024, activation='relu',
input_shape=(n_timesteps, n_features),
kernel_initializer='uniform',
recurrent_initializer='uniform'))
model.add(Dense(512, activation='relu'))
model.add(Dense(1))
opt = keras.optimizers.Adam(lr=0.001,
beta_1=0.9,
beta_2=0.999,
epsilon=keras.optimizers.K.epsilon(),
decay=0.0,
amsgrad=False)
model.compile(optimizer=opt, loss='mse')
# fit model
model.fit(X_train, Y_train, epochs=200, verbose=1,validation_data=(x_test,y_test))
yhat = model.predict(x_test, verbose=0)
mean_squared_error(y_test, yhat)
PyTorch - module class
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import mean_squared_error
import testing.TimeSeries.datacreator as dc # !!!! change this !!!!
X_train,x_test,Y_train, y_test = dc.get_data()
n_features = 2 # this is number of parallel inputs
n_timesteps = 3 # this is number of timesteps
class MV_LSTM(torch.nn.Module):
def __init__(self,n_features,seq_length):
super(MV_LSTM, self).__init__()
self.n_features = n_features # number of parallel inputs
self.seq_len = seq_length # number of timesteps
self.n_hidden = 1024 # number of hidden states
self.n_layers = 1 # number of LSTM layers (stacked)
self.l_lstm = torch.nn.LSTM(input_size = n_features,
hidden_size = self.n_hidden,
num_layers = self.n_layers,
batch_first = True)
# according to pytorch docs LSTM output is
# (batch_size,seq_len, num_directions * hidden_size)
# when considering batch_first = True
self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, 512)
# self.l_linear1 = torch.nn.Linear(512, 512)
self.l_linear2 = torch.nn.Linear(512, 1)
def init_hidden(self, batch_size):
# even with batch_first = True this remains same as docs
hidden_state = torch.zeros(self.n_layers,batch_size,self.n_hidden).to(next(self.parameters()).device)
cell_state = torch.zeros(self.n_layers,batch_size,self.n_hidden).to(next(self.parameters()).device)
self.hidden = (hidden_state, cell_state)
def forward(self, x):
batch_size, seq_len, _ = x.size()
lstm_out, self.hidden = self.l_lstm(x,self.hidden)
# lstm_out(with batch_first = True) is
# (batch_size,seq_len,num_directions * hidden_size)
# for following linear layer we want to keep batch_size dimension and merge rest
# .contiguous() -> solves tensor compatibility error
x = lstm_out.contiguous().view(batch_size,-1)
x = F.relu(x)
x = F.relu(self.l_linear(x))
# x = F.relu(self.l_linear1(x))
x = self.l_linear2(x)
return x
PyTorch - init and train
# create NN
mv_net = MV_LSTM(n_features,n_timesteps)
criterion = torch.nn.MSELoss()
import keras # for epsilon constant
optimizer = torch.optim.Adam(mv_net.parameters(),
lr=1e-3,
betas=[0.9,0.999],
eps=keras.optimizers.K.epsilon(),
weight_decay=0,
amsgrad=False)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mv_net.to(device)
train_episodes = 200
batch_size = 32
eval_batch_size = 32
for t in range(train_episodes):
# TRAIN
mv_net.train()
for b in range(0,len(X_train),batch_size):
inpt = X_train[b:b+batch_size,:,:]
target = Y_train[b:b+batch_size]
x_batch = torch.tensor(inpt,dtype=torch.float32).to(device)
y_batch = torch.tensor(target,dtype=torch.float32).to(device)
mv_net.init_hidden(x_batch.size(0))
output = mv_net(x_batch)
loss = criterion(output.view(-1), y_batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# EVAL
mv_net.eval()
mv_net.init_hidden(eval_batch_size)
acc = 0
for b in range(0,len(x_test),eval_batch_size):
inpt = x_test[b:b+eval_batch_size,:,:]
target = y_test[b:b+eval_batch_size]
x_batch = torch.tensor(inpt,dtype=torch.float32).to(device)
y_batch = torch.tensor(target,dtype=torch.float32).to(device)
mv_net.init_hidden(x_batch.size(0))
output = mv_net(x_batch)
acc += mean_squared_error(y_batch.cpu().detach().numpy(), output.view(-1).cpu().detach().numpy())
print('step:' , t , 'train loss:' , round(loss.item(),3),'eval acc:',round(acc/len(x_test),3))
mv_net.init_hidden(len(x_test))
val = torch.tensor(x_test,dtype=torch.float32).to(device)
otp = mv_net(val)
print(mean_squared_error(y_test, otp.view(-1).cpu().detach().numpy()))
Results
Keras produces a test MSE of almost 0, but PyTorch gives about 6000, which is way too different.
I have tried a couple of tweaks in the PyTorch code, but none got me anywhere close to the Keras results, even with identical optimizer parameters.
I can't see what is wrong with this (fairly tutorial-like) PyTorch code.
| I know this is almost a year too late, but I came across the same problem and I think the issue is the following. From the Keras documentation:
return_sequences: Boolean. Whether to return the last output in the
output sequence, or the full sequence.
this basically means that your self.l_linear needs to be torch.nn.Linear(1024, 512) instead of torch.nn.Linear(self.n_hidden*self.seq_len, 512).
Now you also need to do the same as keras does and only use the last output in your forward pass:
def forward(self, x):
batch_size, seq_len, _ = x.size()
lstm_out, self.hidden = self.l_lstm(x,self.hidden)
x = lstm_out[:,-1]
x = torch.nn.functional.relu(x)
x = torch.nn.functional.relu(self.l_linear(x))
x = self.l_linear2(x)
return x
when I run your example (which I needed to tweak a bit to get it to run) I get very similar training losses.
Keras:
38/38 [==============================] - 0s 6ms/step - loss: 67.6081 - val_loss: 325.9259
PyTorch:
step: 199 train loss: 41.043 eval acc: 1142.688
I hope this helps others having a similar problem.
PS also note that keras is resetting the hidden state (stateful=False) by default.
| https://stackoverflow.com/questions/56915567/ |
Use Tensorflow/PyTorch to speed up minimisation of a custom function | I do a lot of simulations, for which I often need to minimise complicated user-defined functions, generally using numpy and scipy.optimize.minimize(). The problem with this is that I need to explicitly write down a gradient function, which can sometimes be very difficult or impossible to find. And for large-dimensional vectors, the numerical derivatives calculated by scipy are prohibitively expensive.
So, I am trying to switch to Tensorflow or PyTorch to take advantage of their automatic differentiation capabilities and to be able to exploit GPUs freely. Let me give an explicit example of a function whose derivative is somewhat complicated to write down (it would require a lot of chain rule), and which thus seems ripe for Tensorflow or PyTorch -- calculating the dihedral angle between the two triangles formed by four points in 3d space:
def dihedralAngle(xyz):
## calculate dihedral angle between 4 nodes
p1, p2, p3, p4 = 0, 1, 2, 3
## get unit normal vectors
N1 = np.cross(xyz[p1]-xyz[p3] , xyz[p2]-xyz[p3])
N2 = - np.cross(xyz[p1]-xyz[p4] , xyz[p2]-xyz[p4])
n1, n2 = N1 / np.linalg.norm(N1), N2 / np.linalg.norm(N2)
angle = np.arccos(np.dot(n1, n2))
return angle
xyz1 = np.array([[0.2 , 0. , 0. ],
[0.198358 , 0.02557543, 0. ],
[0.19345897, 0.05073092, 0. ],
[0.18538335, 0.0750534 , 0. ]]) # or any (4,3) ndarray
print(dihedralAngle(xyz1)) >> 3.141
I could easily minimise this using scipy.optimize.minimize(), and I should get 0. For such a small function, I don't really need a gradient (explicit or numerical). However, if I wish to iterate over many, many nodes, and minimise some function that depends on all the dihedral angles, the overhead becomes much higher.
My questions then --
How would I implement this minimisation problem using TensorFlow or PyTorch? Both for a single dihedral angle, and for a list of such angles (i.e. we need to account for looping over lists).
Also, could I just get the gradient using automatic differentiation, to plug back into scipy.optimize.minimize() if desired? For example, scipy.optimize.minimize() allows easily for bounds and constraints, something that I haven't noticed in the Tensorflow or PyTorch optimisation modules.
| Here is a solution using automatic computation of the gradient by torch and then the minimizer of scipy, via a lib I wrote, autograd-minimize. The advantage over SGD is better precision of the estimate (thanks to second-order methods). It is probably equivalent to using LBFGS from torch:
import numpy as np
import torch
from autograd_minimize import minimize
def dihedralAngle(xyz):
## calculate dihedral angle between 4 nodes
p1, p2, p3, p4 = 0, 1, 2, 3
## get unit normal vectors
N1 = np.cross(xyz[p1]-xyz[p3] , xyz[p2]-xyz[p3])
N2 = - np.cross(xyz[p1]-xyz[p4] , xyz[p2]-xyz[p4])
n1, n2 = N1 / np.linalg.norm(N1), N2 / np.linalg.norm(N2)
angle = np.arccos(np.dot(n1, n2))
return angle
def compute_angle(p1, p2):
# inner_product = torch.dot(p1, p2)
inner_product = (p1*p2).sum(-1)
p1_norm = torch.linalg.norm(p1, axis=-1)
p2_norm = torch.linalg.norm(p2, axis=-1)
cos = inner_product / (p1_norm * p2_norm)
cos = torch.clamp(cos, -0.99999, 0.99999)
angle = torch.acos(cos)
return angle
def compute_dihedral(v1,v2,v3,v4):
ab = v1 - v2
cb = v3 - v2
db = v4 - v3
u = torch.cross(ab, cb)
v = torch.cross(db, cb)
w = torch.cross(u, v)
angle = compute_angle(u, v)
angle = torch.where(compute_angle(cb, w) > 1, -angle, angle)
return angle
def loss_func(v1,v2,v3,v4):
return ((compute_dihedral(v1,v2,v3,v4)+2)**2).mean()
x0=[np.array([-17.0490, 5.9270, 21.5340]),
np.array([-0.1608, 0.0600, -0.0371]),
np.array([-0.2000, 0.0007, -0.0927]),
np.array([-0.1423, 0.0197, -0.0727])]
res = minimize(loss_func, x0, backend='torch')
print(compute_dihedral(*[torch.tensor(v) for v in res.x]))
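As for the second question: autograd gradients can also be fed straight back into scipy.optimize.minimize by passing jac=True, which tells scipy the objective returns an (f, grad) pair, so bounds and constraints remain available. A sketch (the clamp bounds and the random starting point are illustrative assumptions):
import numpy as np
import torch
from scipy.optimize import minimize as sp_minimize

def dihedral_torch(flat):
    xyz = flat.view(4, 3)
    N1 = torch.cross(xyz[0] - xyz[2], xyz[1] - xyz[2])
    N2 = -torch.cross(xyz[0] - xyz[3], xyz[1] - xyz[3])
    n1, n2 = N1 / torch.norm(N1), N2 / torch.norm(N2)
    return torch.acos(torch.clamp(torch.dot(n1, n2), -0.999999, 0.999999))

def fun_and_grad(flat_np):
    x = torch.tensor(flat_np, dtype=torch.float64, requires_grad=True)
    f = dihedral_torch(x)
    f.backward()
    return f.item(), x.grad.numpy()

x0 = np.random.randn(12)  # or xyz1.ravel() from the question
res = sp_minimize(fun_and_grad, x0, jac=True, method="L-BFGS-B")
print(res.fun)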
| https://stackoverflow.com/questions/56918164/ |
How to convert Pytorch model parameters to long datatype? | I was able to convert pytorch model parameters to float or double but not to long.
model = model.long()
gives error, while
model = model.float()
runs.
The error I get is:
'Net' object has no attribute 'long'
| Most nn modules do not support long (integer) operations, e.g., convolutions, linear layer etc. Therefore, you cannot "cast" a model to torch.long.
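A small demonstration of what is and isn't supported (the layer is arbitrary):
import torch
import torch.nn as nn

m = nn.Linear(2, 2)
m.float()    # fine: floating-point casts are supported
m.double()   # fine
# m.long()   # AttributeError -- nn.Module defines no .long(), matching the error above
w_long = m.weight.data.long()  # the raw tensor can be cast, but gradients need floats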
| https://stackoverflow.com/questions/56921455/ |
Is there a function in pytorch similar to tf.contrib.distributions.percentile of tensorflow? | Is there a function in PyTorch that does the same as tf.contrib.distributions.percentile of Tensorflow?
| Interestingly, it seems PyTorch is not providing any operator on its own for this, at least not according to its search function.
Fortunately, though, PyTorch Tensors can easily be used with NumPy functions, so that you can simply call numpy.percentile, see the example below:
import torch as t
import numpy as np
x = t.Tensor([1,2,3])
print(np.percentile(x, 30)) # 30-th percentile of x
# 1.6
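If you need to stay on-device in pure torch, a small sketch reproducing numpy's default linear interpolation (not an official operator):
import torch

def torch_percentile(t, q):
    # linear interpolation between closest ranks, mirroring numpy's default
    flat = torch.sort(t.flatten())[0]
    pos = q / 100 * (flat.numel() - 1)
    lo = int(pos)
    hi = min(lo + 1, flat.numel() - 1)
    return flat[lo] + (pos - lo) * (flat[hi] - flat[lo])

x = torch.tensor([1., 2., 3.])
print(torch_percentile(x, 30))  # tensor(1.6000), matching numpy above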
| https://stackoverflow.com/questions/56944889/ |
Add extra dimension to an axis | I have a batch of segmentation masks of shape [5,1,100,100] (batch_size x dims x ht x wd) which I have to display in tensorboardX with an RGB image batch [5,3,100,100]. I want to repeat the mask along the second axis to make it [5,3,100,100] so there will not be any dimension mismatch error when I pass it to torch.utils.make_grid. I have tried unsqueeze, expand and view, but I am not able to do it. Any suggestions?
| You can use expand, repeat, or repeat_interleave:
import torch
x = torch.randn((5, 1, 100, 100))
x1_3channels = x.expand(-1, 3, -1, -1)
x2_3channels = x.repeat(1, 3, 1, 1)
x3_3channels = x.repeat_interleave(3, dim=1)
print(x1_3channels.shape) # torch.Size([5, 3, 100, 100])
print(x2_3channels.shape) # torch.Size([5, 3, 100, 100])
print(x3_3channels.shape) # torch.Size([5, 3, 100, 100])
Note that, as stated in the docs:
torch.expand():
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
torch.repeat():
Unlike expand(), this function copies the tensor’s data.
| https://stackoverflow.com/questions/56952598/ |
printing image paths from the dataloader in pytorch | I am trying to learn One-shot learning with pytorch. I am experimenting with this Siamese Network in Pytorch example. Using that notebook as a guide, I simply would like to print out the image file paths for each pair of images, in addition to the dissimilarity scores.
From what I've been reading, it looks like I need to make some alterations to the dataloader in order to achieve this, as indicated here.
I do not have much experience yet in all of this. I would appreciate some guidance. I imported the altered dataloader (as in this gist) into my code.
The altered dataloader:
import torch
from torchvision import datasets
class ImageFolderWithPaths(datasets.ImageFolder):
"""Custom dataset that includes image file paths. Extends
torchvision.datasets.ImageFolder
"""
# override the __getitem__ method. this is the method that dataloader calls
def __getitem__(self, index):
# this is what ImageFolder normally returns
original_tuple = super(ImageFolderWithPaths, self).__getitem__(index)
# the image file path
path = self.imgs[index][0]
# make a new tuple that includes original and the path
tuple_with_path = (original_tuple + (path,))
return tuple_with_path
and the example usage:
data_dir = "your/data_dir/here"
dataset = ImageFolderWithPaths(data_dir) # our custom dataset
dataloader = torch.utils.data.DataLoader(dataset)
# iterate over data
for inputs, labels, paths in dataloader:
# use the above variables freely
print(inputs, labels, paths)
my code:
from pytorch_image_folder_with_file_paths import ImageFolderWithPaths
folder_dataset_test = dset.ImageFolder(root=Config.testing_dir)
siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset_test,
transform=transforms.Compose([transforms.Resize((100,100)),
transforms.ToTensor()
])
,should_invert=False)
test_dataloader = DataLoader(siamese_dataset,num_workers=6,batch_size=1,shuffle=True)
dataiter = iter(test_dataloader)
x0,_,_ = next(dataiter)
for i in range(10):
_,x1,label2 = next(dataiter)
concatenated = torch.cat((x0,x1),0)
output1,output2 = net(Variable(x0).cuda(),Variable(x1).cuda())
euclidean_distance = F.pairwise_distance(output1, output2)
imshow(torchvision.utils.make_grid(concatenated),'Dissimilarity: {:.2f}'.format(euclidean_distance.item()))
for inputs, labels, paths in test_dataloader:
print(inputs, labels, paths)
I get the paired images with the dissimilarity score, but I don't get the paths; I get
tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]]]) tensor([[[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
...etc.
Thanks
| Thank you Anubhav Singh for sorting me out.
This works:
from pytorch_image_folder_with_file_paths import ImageFolderWithPaths
folder_dataset_test = ImageFolderWithPaths(root=Config.testing_dir)
siamese_dataset = SiameseNetworkDataset(imageFolderDataset=folder_dataset_test,
transform=transforms.Compose([transforms.Resize((100,100)),
transforms.ToTensor()
])
,should_invert=False)
test_dataloader = DataLoader(siamese_dataset,num_workers=6,batch_size=1,shuffle=True)
dataiter = iter(test_dataloader)
x0,_,_ = next(dataiter)
for i in range(10):
_,x1,label2 = next(dataiter)
concatenated = torch.cat((x0,x1),0)
output1,output2 = net(Variable(x0).cuda(),Variable(x1).cuda())
euclidean_distance = F.pairwise_distance(output1, output2)
imshow(torchvision.utils.make_grid(concatenated),'Dissimilarity: {:.2f}'.format(euclidean_distance.item()))
for paths in folder_dataset_test:
    # each item from ImageFolderWithPaths is an (image, label, path) tuple
    print(paths)
Incidentally, I'm working in Google Colab which doesn't allow me to edit files directly, so for the dataloader, I made a new cell and used %%writefile to get it into my notebook:
%%writefile pytorch_image_folder_with_file_paths.py
import torch
import torchvision.datasets as dset
class ImageFolderWithPaths(dset.ImageFolder):
"""Custom dataset that includes image file paths. Extends
torchvision.datasets.ImageFolder
"""
# override the __getitem__ method. this is the method that dataloader calls
def __getitem__(self, index):
# this is what ImageFolder normally returns
original_tuple = super(ImageFolderWithPaths, self).__getitem__(index)
# the image file path
path = self.imgs[index][0]
# make a new tuple that includes original and the path
tuple_with_path = (original_tuple + (path,))
return tuple_with_path
| https://stackoverflow.com/questions/56962318/ |
I am getting an error while installing PATE from SYFT | I am doing a Differential Privacy Course and have to work with syft. When running the below command, I encountered an error:
I have already installed the syft package required for the analysis, and I am working with Anaconda on Windows 10.
from syft.frameworks.torch.differential_privacy import pate
WARNING:tf_encrypted:Falling back to insecure randomness since the required custom op could not be found for the installed version of TensorFlow (1.13.1). Fix this by compiling custom ops.
| It is not an error, it is a warning. It comes from tf_encrypted, which builds on TensorFlow. The problem is that secure randomness is not supported on Windows (according to https://github.com/tf-encrypted/tf-encrypted/issues/513). And as you use PyTorch, you can just ignore it.
| https://stackoverflow.com/questions/56962453/ |
How to encode categorical data that have variable length so could be fetched to nn.Embedding in PyTorch | Let's say i have a data field named movie_genre for each sample movie, it is selected from the following genres:
Action
Adventure
Animation
Comedy
...
And for each movie, it might contain multiple genres:
mid genres
1 Action | Adventure
2 Animation
3 Comedy | Adventure | Action
which means each movie's genre list has variable length.
If I use a one-hot vector to encode the genre, Action can be encoded as (1, 0, 0, 0), Adventure can be encoded as (0, 1, 0, 0), and so on.
So movie with mid1 can be encoded as (1, 1, 0, 0), mid2's genre can be encoded as (0, 0, 1, 0), and so on.
However, the pytorch embedding layer nn.Embedding takes a tensor of indices as input, not a one-hot vector. So how should I encode the data so that it can be fed into the embedding layer?
| At the moment I can think of two ways to proceed:
Transform your multilabel problem into a multicategorical problem. That is to say, for each combination of labels create a new label (for example, Action | Adventure becomes its own label), then embed those new labels as usual.
Embed each category separately and sum the embeddings of all the categories appearing in your list.
Edit : You can use pytorch nn.EmbeddingBag to perform the second operation in an efficient manner : https://pytorch.org/docs/stable/nn.html?highlight=nn%20e#torch.nn.EmbeddingBag
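If you take the second route, a hedged sketch of nn.EmbeddingBag — the four-genre vocabulary, the indices, and the embedding size are illustrative assumptions:
import torch
import torch.nn as nn

# hypothetical vocabulary: 0=Action, 1=Adventure, 2=Animation, 3=Comedy
emb = nn.EmbeddingBag(num_embeddings=4, embedding_dim=8, mode="sum")

# movie 1: Action|Adventure -> [0, 1]; movie 2: Animation -> [2]
flat_indices = torch.tensor([0, 1, 2])  # all genre indices, concatenated
offsets = torch.tensor([0, 2])          # where each movie's index list starts
out = emb(flat_indices, offsets)        # shape (2, 8): one summed embedding per movie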
| https://stackoverflow.com/questions/56972512/ |
How to modify Pretrained Torchvision Models in PyTorch to return two outputs for multilabel Image Classification | Input: A set of ten "Vowels", a set of ten "Consonants", and an image dataset where every image contains one written vowel and one written consonant.
Task: To identify the vowel and the consonant in a given image.
Approach: First apply CNN hidden layers to the image, then apply two parallel fully connected/dense layers, where one classifies the vowel in the image and the other classifies the consonant.
Problem: I am using a pretrained model like VGG or GoogLeNet. How can I modify that pretrained model to apply two parallel dense layers and return two outputs?
I have tried two different models, but my question is whether we can modify a pretrained model for this task.
Right now my model is having only one "fc" layer. I have modified number of neurons in final "fc" layer, like this
final_in_features = googlenet.fc.in_features
googlenet.fc = nn.Linear(final_in_features, 10)
But I need to add one more fc layer so that both "fc" layers connect to the hidden layers in parallel.
Right now model is returning only one output.
outputs1 = googlenet(inputs)
The task is to return two outputs from both "fc" layers, so that it should look like this:
outputs1, outputs2 = googlenet(inputs)
| Here is the source for a Linear Layer in Pytorch :
class Linear(Module):
r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`
Args:
in_features: size of each input sample
out_features: size of each output sample
bias: If set to ``False``, the layer will not learn an additive bias.
Default: ``True``
Shape:
- Input: :math:`(N, *, H_{in})` where :math:`*` means any number of
additional dimensions and :math:`H_{in} = \text{in\_features}`
- Output: :math:`(N, *, H_{out})` where all but the last dimension
are the same shape as the input and :math:`H_{out} = \text{out\_features}`.
Attributes:
weight: the learnable weights of the module of shape
:math:`(\text{out\_features}, \text{in\_features})`. The values are
initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
:math:`k = \frac{1}{\text{in\_features}}`
bias: the learnable bias of the module of shape :math:`(\text{out\_features})`.
If :attr:`bias` is ``True``, the values are initialized from
:math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
:math:`k = \frac{1}{\text{in\_features}}`
Examples::
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
"""
__constants__ = ['bias']
def __init__(self, in_features, out_features, bias=True):
super(Linear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
@weak_script_method
def forward(self, input):
return F.linear(input, self.weight, self.bias)
def extra_repr(self):
return 'in_features={}, out_features={}, bias={}'.format(
self.in_features, self.out_features, self.bias is not None
)
You can create a class DoubleLinear like this:
class DoubleLinear(Module):
    def __init__(self, Linear1, Linear2):
        super(DoubleLinear, self).__init__()  # required before registering submodules
        self.Linear1 = Linear1
        self.Linear2 = Linear2

    def forward(self, input):
        return self.Linear1(input), self.Linear2(input)
Then, create your two Linear layers :
Linear_vow = nn.Linear(final_in_features, 10)
Linear_con = nn.Linear(final_in_features, 10)
final_layer = DoubleLinear(Linear_vow, Linear_con)
now outputs1, outputs2 = final_layer(inputs) will work as expected.
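For completeness, a hedged sketch of wiring this double head into the pretrained backbone (aux-logits handling differs across torchvision versions, so treat this as illustrative):
import torch
import torch.nn as nn
from torchvision import models

backbone = models.googlenet(pretrained=True, aux_logits=False)
in_feats = backbone.fc.in_features
backbone.fc = DoubleLinear(nn.Linear(in_feats, 10),   # vowel head
                           nn.Linear(in_feats, 10))   # consonant head
backbone.eval()

x = torch.rand(2, 3, 224, 224)
vowel_logits, consonant_logits = backbone(x)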
| https://stackoverflow.com/questions/56976775/ |
How to use multi-gpu during inference in pytorch framework | I am trying to make model predictions from a unet3d built on the PyTorch framework. I am using multiple GPUs:
import torch
import os
import torch.nn as nn
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2'
model = unet3d()
model = nn.DataParallel(model)
model = model.to('cuda')
result = model.forward(torch.tensor(input).to('cuda').float())
But the model still uses only one GPU (the first one) and I get a memory error.
CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 11.00 GiB total capacity; 8.43 GiB already allocated; 52.21 MiB free; 5.17 MiB cached)
How should I use multiple GPUs during the inference phase? What is the mistake in my script above?
| DataParallel handles sending the data to gpu.
import torch
import os
import torch.nn as nn
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2'
model = unet3d()
model = nn.DataParallel(model.cuda())
result = model.forward(torch.tensor(input).float())
if this doesn't work, please provide more details about input. Note that DataParallel splits the input along the batch dimension, so a batch of size 1 will only ever occupy a single GPU.
[EDIT]:
Try this:
with torch.no_grad():
result = model(torch.tensor(input).float())
| https://stackoverflow.com/questions/56979461/ |
weighted mse loss in pytorch | def weighted_mse_loss(input_tensor, target_tensor, weight = 1):
observation_dim = input_tensor.size()[-1]
streched_tensor = ((input_tensor - target_tensor) ** 2).view(-1, observation_dim)
entry_num = float(streched_tensor.size()[0])
non_zero_entry_num = torch.sum(streched_tensor[:,0] != 0).float()
weighted_tensor = torch.mm(
((input_tensor - target_tensor)**2).view(-1, observation_dim),
(torch.diag(weight.float().view(-1)))
)
return torch.mean(weighted_tensor) * weight.nelement() * entry_num / non_zero_entry_num
I can't understand how the code gives weighted Mean Square Error loss.
I get that observation_dim is the final output dimension, (the class number I guess), and after that line, I don't get it. Could someone help me figure out how the code calculates the loss?
Thanks a lot.
| def weighted_mse_loss(input, target, weight):
return (weight * (input - target) ** 2).mean()
try this, hope this can help.
All arguments need to be tensors.
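A quick usage sketch with made-up numbers:
import torch

input = torch.tensor([1., 2., 3.])
target = torch.tensor([1., 1., 1.])
weight = torch.tensor([0.1, 0.3, 0.6])
print(weighted_mse_loss(input, target, weight))
# (0.1*0 + 0.3*1 + 0.6*4) / 3 = tensor(0.9000)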
| https://stackoverflow.com/questions/57004498/ |
slow Inference time for Neural Net | I have written a simple fully connected neural network in PyTorch. I saved the model and loaded it in C++ using LibTorch, but my inference time is pretty slow for my application. Inference time right now is about 10 ms. Is that normal, or am I doing something wrong?
I measured the inference time in Python first; then, to maybe make it faster, I loaded the network in C++, but it didn't help.
Here is the code for network
class network(nn.Module):
def __init__(self):
super(network,self).__init__()
input_nodes = 362
hidden_nodes1 = 50
hidden_nodes2 = 30
output_nodes = 1
self.fc1 = nn.Linear(input_nodes,hidden_nodes1)
nn.init.xavier_uniform_(self.fc1.weight)
self.bn1 = nn.BatchNorm1d(num_features=hidden_nodes1)
self.fc2 = nn.Linear(hidden_nodes1,hidden_nodes2)
nn.init.xavier_uniform_(self.fc2.weight)
self.bn2 = nn.BatchNorm1d(num_features = hidden_nodes2)
self.fc3 = nn.Linear(hidden_nodes2,output_nodes)
nn.init.xavier_uniform_(self.fc3.weight)
self.out_act = nn.Sigmoid();
def forward(self,X):
X = F.relu(self.bn1(self.fc1(X)))
X = self.fc2(X)
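# note: F.dropout2d defaults to training=True, so the next line stays active at inference; consider passing training=self.training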
X = F.dropout2d(X,p=0.3)
X = F.relu(X)
X = self.fc3(X)
out = self.out_act(X)
return out
I want inference to take somewhere around 0.01 milliseconds.
| How much data did you use for the inference? If it is only a few data points, I think there will not be much difference in execution time between Python and C++. Maybe try with much more data?
Also, the architecture you are using is straightforward; it can probably run on CPU very well for inference. Don't forget to give feedback with your tests! I also want to know what is happening. :)
| https://stackoverflow.com/questions/57012213/ |
Can't Backpropagate in Pytorch module with .backwards(); weights are not updated | I want to create a network estimating the next sample of a signal. I've started with a simple sine signal. But when I run the code I get noise as the output. I then checked the layer weights and figured out that they are not updating. I can't find the mistake here.
class Model(nn.Module):
def __init__(self,in_dim,hidden_dim,num_classes):
super(Model, self).__init__()
self.layer1 = nn.Linear(in_dim,hidden_dim)
self.layer2 = nn.Linear(hidden_dim,hidden_dim)
self.layer3 = nn.Linear(hidden_dim,num_classes)
self.relu = nn.ReLU()
def forward(self,x):
a = self.relu(self.layer1(x))
a = self.relu(self.layer2(a))
return self.relu(self.layer3(a))
train:
def train(epoch,L,depth):
criteria = nn.MSELoss()
learning_rate = 1e-3
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
t = np.linspace(0,2,L+2)
fs = L+2
trn_loss = list()
for f in range(0,epoch):
phase = f/np.pi
x = np.sin(2*np.pi*t*fs+phase)
x = torch.from_numpy(x).detach().float()
optimizer.zero_grad()
x_hat = model(x[:-2])
currentCost = criteria(x_hat,x[-2])
trn_loss.append(currentCost.item())
print(model.layer1.weight.data.clone())
currentCost.backward()
optimizer.step()
print(model.layer1.weight.data.clone())
sys.exit('DEBUG')
output:
tensor([[-0.1715, -0.1696, 0.0424, ..., 0.0154, 0.1450, -0.0544],
[ 0.0368, 0.1427, -0.1419, ..., 0.0966, 0.0298, -0.0659],
[-0.1641, -0.1551, 0.0570, ..., -0.0227, -0.1426, -0.0648],
...,
[-0.0684, -0.1707, -0.0711, ..., 0.0788, 0.1386, 0.1546],
[ 0.1401, -0.0922, -0.0104, ..., -0.0490, 0.0404, 0.1038],
[-0.0604, -0.0517, 0.0715, ..., -0.1200, 0.0014, 0.0215]])
tensor([[-0.1715, -0.1696, 0.0424, ..., 0.0154, 0.1450, -0.0544],
[ 0.0368, 0.1427, -0.1419, ..., 0.0966, 0.0298, -0.0659],
[-0.1641, -0.1551, 0.0570, ..., -0.0227, -0.1426, -0.0648],
...,
[-0.0684, -0.1707, -0.0711, ..., 0.0788, 0.1386, 0.1546],
[ 0.1401, -0.0922, -0.0104, ..., -0.0490, 0.0404, 0.1038],
[-0.0604, -0.0517, 0.0715, ..., -0.1200, 0.0014, 0.0215]])
| Your final layer in forward call uses ReLU activation. This limits outputs of the network to [0, +inf) range.
Please notice your target is in the [-1, 1] range, so the network cannot output half (negative) of the values (and for the positive part it has to crunch +inf possible values into [0, 1] space).
You should change return self.relu(self.layer3(a)) to return self.layer3(a) in forward.
Better yet, in order to help your network accommodate to [-1, 1] range, use torch.tanh activation, so return torch.tanh(self.layer3(a)) should work best.
| https://stackoverflow.com/questions/57019590/ |
How to calculate unbalanced weights for BCEWithLogitsLoss in pytorch | I am trying to solve a multilabel problem with 270 labels, and I have converted the target labels into one-hot encoded form. I am using BCEWithLogitsLoss(). Since the training data is unbalanced, I am using the pos_weight argument, but I am a bit confused.
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.
Do I need to give the total count of positive values for each label as a tensor, or do they mean something else by weights?
| The PyTorch documentation for BCEWithLogitsLoss recommends the pos_weight to be a ratio between the negative counts and the positive counts for each class.
So, if len(dataset) is 1000, element 0 of your multihot encoding has 100 positive counts, then element 0 of the pos_weights_vector should be 900/100 = 9. That means that the binary crossent loss will behave as if the dataset contains 900 positive examples instead of 100.
Here is my implementation:
(new, based on this post)
pos_weight = (y==0.).sum()/y.sum()
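# for a per-class vector in the multilabel case, sum over the batch dimension instead:
# pos_weight = (y==0.).sum(dim=0) / y.sum(dim=0)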
(original)
def calculate_pos_weights(class_counts):
    # note: `data` is the full dataset from the enclosing scope
    pos_weights = np.ones(len(class_counts), dtype=np.float64)  # float, so the ratios are not truncated
    neg_counts = [len(data) - pos_count for pos_count in class_counts]
    for cdx, (pos_count, neg_count) in enumerate(zip(class_counts, neg_counts)):
        pos_weights[cdx] = neg_count / (pos_count + 1e-5)
    return torch.as_tensor(pos_weights, dtype=torch.float)
Where class_counts is just a column-wise sum of the positive samples. I posted it on the PyTorch forum and one of the PyTorch devs gave it his blessing.
| https://stackoverflow.com/questions/57021620/ |
GAN measurement - FID score | I have two GANs and I want to compare their results using FID (Fréchet Inception Distance).
I have trained the networks with the same dataset of frog images, and by looking at the results (the generated images) one network yields better results, but its FID score is higher.
I computed the FID score between the original dataset and the generated images of each network.
I have read that lower FID values mean better image quality and diversity,
which is not consistent with the results I have seen.
Is there an explanation for that?
| You can have a look at this paper: https://arxiv.org/pdf/1904.06991v3.pdf. It clearly describes the problems of FID scores. However, FID scores are also dependent on the number of samples. If you are using fewer samples to estimate it, then the outcomes are not consistent.
| https://stackoverflow.com/questions/57053952/ |
PyTorch Dimension out of range (expected to be in range of [-1, 0], but got 1) | I have the following PyTorch tensors:
predicted = torch.tensor([4, 4, 4, 1, 1, 1, 1, 1, 1, 4, 4, 1, 1, 1, 4, 1, 1, 4, 0, 4, 4, 1, 4, 1])
target = torch.tensor([3, 0, 0, 1, 1, 0, 1, 1, 1, 3, 2, 4, 1, 1, 1, 0, 1, 1, 2, 1, 1, 1, 1, 1,])
I want to compute the Cross Entropy Loss (as part of a Logistic Regression implementation) between them with the following lines:
loss = nn.CrossEntropyLoss()
computed_loss = loss(predicted, target)
However, when my code runs, I get the following IndexError:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Any suggestions on what I'm doing wrong?
/ ##################################################################### /
Here is the full TraceBack:
-----------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-208-3cdb253d6620> in <module>
1 batch_size = 1000
2 train_class = Train((training_set.shape[1]-1), number_of_target_labels, 0.01, 1000)
----> 3 train_class.train_model(training_set, batch_size)
<ipython-input-207-f3e2c7f7979a> in train_model(self, training_data, n_iters)
42 out = self.model(x)
43 _, predicted = torch.max(out.data, 1)
---> 44 loss = self.criterion(predicted, y)
45 self.optimizer.zero_grad()
46 loss.backward()
/anaconda3/envs/malicious_ml/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/anaconda3/envs/malicious_ml/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
940 def forward(self, input, target):
941 return F.cross_entropy(input, target, weight=self.weight,
--> 942 ignore_index=self.ignore_index, reduction=self.reduction)
943
944
/anaconda3/envs/malicious_ml/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2054 if size_average is not None or reduce is not None:
2055 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2056 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2057
2058
/anaconda3/envs/malicious_ml/lib/python3.6/site-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel, dtype)
1348 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
1349 if dtype is None:
-> 1350 ret = input.log_softmax(dim)
1351 else:
1352 ret = input.log_softmax(dim, dtype=dtype)
/ ##################################################################### /
If you are interested in seeing the rest of my code, here it is:
import torch
import torch.nn as nn
from torch.autograd import Variable
class LogisticRegressionModel(nn.Module):
def __init__(self, in_dim, num_classes):
super().__init__()
self.linear = nn.Linear(in_dim, num_classes)
def forward(self, x):
return self.linear(x)
class Train(LogisticRegressionModel):
def __init__(self, in_dim, num_classes, lr, batch_size):
super().__init__(in_dim, num_classes)
self.batch_size = batch_size
self.learning_rate = lr
self.input_layer_dim = in_dim
self.output_layer_dim = num_classes
self.criterion = nn.CrossEntropyLoss()
self.model = LogisticRegressionModel(self.input_layer_dim, self.output_layer_dim)
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
self.model = self.model.to(self.device)
self.optimizer = torch.optim.SGD(self.model.parameters(), lr = self.learning_rate)
def epochs(self, iterations, train_dataset, batch_size):
epochs = int(iterations/(len(train_dataset)/batch_size))
return epochs
def train_model(self, training_data, n_iters):
batch = self.batch_size
epochs = self.epochs(n_iters, training_data, batch)
training_data = torch.utils.data.DataLoader(dataset = training_data, batch_size = batch, shuffle = True)
for epoch in range(epochs):
for i, data in enumerate(training_data):
X_train = data[:, :-1]
Y_train = data[:, -1]
if torch.cuda.is_available():
x = Variable(torch.Tensor(X_train).cuda())
y = Variable(torch.Tensor(Y_train).cuda())
else:
x = Variable(torch.Tensor(X_train.float()))
y = Variable(torch.Tensor(Y_train.float()))
out = self.model(x)
_, predicted = torch.max(out.data, 1)
loss = self.criterion(predicted, y)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
if i % 100 == 0:
print('[{}/{}] Loss: {:.6f}'.format(epoch + 1, epochs, loss))
| It seems you are not quite using Cross Entropy Loss the way it is designed. CEL is primarily used for classification problems, where you have a probability distribution over some number of classes:
predicted = torch.tensor([[1,2,3,4]]).float()
(in this case, there are four classes, and the model is indicating its confidence of those four classes)
and then the target is simply an index indicating which class is correct:
target = torch.tensor([1]).long()
then, we can compute:
lossfxn = nn.CrossEntropyLoss()
loss = lossfxn(predicted, target)
print(loss) # outputs tensor(2.4402)
now, if we change the prediction to align with the target:
predicted = torch.tensor([[1,10,3,4]]).float()
target = torch.tensor([1]).long()
lossfxn = nn.CrossEntropyLoss()
loss = lossfxn(predicted, target)
print(loss) # outputs tensor(0.0035)
now the loss is much lower, because the prediction is correct!
Please consider the loss functions available and determine which is appropriate for your task: https://pytorch.org/docs/stable/nn.html#loss-functions (perhaps MSELoss?)
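Applied to the posted training loop, a minimal fix sketch (assuming the task really is multi-class classification, so CrossEntropyLoss stays): pass the raw model outputs to the criterion and keep the argmax only for accuracy reporting:
out = self.model(x)                    # raw, unnormalized scores of shape (batch, n_classes)
loss = self.criterion(out, y.long())   # CrossEntropyLoss wants logits + class-index targets
_, predicted = torch.max(out.data, 1)  # argmax is only for measuring accuracy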
| https://stackoverflow.com/questions/57065070/ |
How to get Docker to recognize NVIDIA drivers? | I have a container that loads a Pytorch model. Every time I try to start it up, I get this error:
Traceback (most recent call last):
File "server/start.py", line 166, in <module>
start()
File "server/start.py", line 94, in start
app.register_blueprint(create_api(), url_prefix="/api/1")
File "/usr/local/src/skiff/app/server/server/api.py", line 30, in create_api
atomic_demo_model = DemoModel(model_filepath, comet_dir)
File "/usr/local/src/comet/comet/comet/interactive/atomic_demo.py", line 69, in __init__
model = interactive.make_model(opt, n_vocab, n_ctx, state_dict)
File "/usr/local/src/comet/comet/comet/interactive/functions.py", line 98, in make_model
model.to(cfg.device)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 381, in to
return self._apply(convert)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 187, in _apply
module._apply(fn)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 193, in _apply
param.data = fn(param.data)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 379, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
_check_driver()
File "/usr/local/lib/python3.7/site-packages/torch/cuda/__init__.py", line 82, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx
I know that nvidia-docker2 is working.
$ docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
Tue Jul 16 22:09:40 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39 Driver Version: 418.39 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:1A:00.0 Off | N/A |
| 0% 44C P0 72W / 260W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:1B:00.0 Off | N/A |
| 0% 44C P0 66W / 260W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... Off | 00000000:1E:00.0 Off | N/A |
| 0% 44C P0 48W / 260W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... Off | 00000000:3E:00.0 Off | N/A |
| 0% 41C P0 54W / 260W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce RTX 208... Off | 00000000:3F:00.0 Off | N/A |
| 0% 42C P0 48W / 260W | 0MiB / 10989MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
| 5 GeForce RTX 208... Off | 00000000:41:00.0 Off | N/A |
| 0% 42C P0 1W / 260W | 0MiB / 10989MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
However, I keep getting the error above.
I've tried the following:
Setting "default-runtime": nvidia in /etc/docker/daemon.json
Using docker run --runtime=nvidia <IMAGE_ID>
Adding the variables below to my Dockerfile:
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
LABEL com.nvidia.volumes.needed="nvidia_driver"
I expect this container to run - we have a working version in production without these issues. And I know that Docker can find the drivers, as the output above shows. Any ideas?
| In order for docker to use the host GPU drivers and GPUs, some steps are necessary.
Make sure an nvidia driver is installed on the host system
Follow the steps here to setup the nvidia container toolkit
Make sure cuda, cudnn is installed in the image
Run a container with the --gpus flag (as explained in the link above)
I guess you have done the first 3 points because nvidia-docker2 is working. So, since you don't have a --gpus flag in your run command, this could be the issue.
I usually run my containers with the following command
docker run --name <container_name> --gpus all -it <image_name>
-it is just that the container is interactive and starts a bash environment.
| https://stackoverflow.com/questions/57066162/ |
Compare the example of Pytorch and Keras on Cifar10 data | I use the CIFAR10 dataset to learn how to code using Keras and PyTorch.
The environment is Python 3.6.7, Torch 1.0.0, Keras 2.2.4, Tensorflow 1.14.0.
I use the same batch size, number of epochs, learning rate and optimizer.
I use DenseNet121 as the model.
After training, Keras gets 69% accuracy on the test data.
PyTorch only gets 54% on the test data.
I know the results are different, but why is the result so bad in PyTorch?
Here is the Keras code:
import os, keras
from keras.datasets import cifar10
from keras.applications.densenet import DenseNet121
batch_size = 32
num_classes = 10
epochs = 20
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# model
model = DenseNet121(include_top=True, weights=None, input_shape=(32,32,3), classes=10)
# initiate RMSprop optimizer
opt = keras.optimizers.SGD(lr=0.001, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
Here is the Pytorch code:
import torch
import torchvision
import torchvision.transforms as transforms
from torch import flatten
import torch.optim as optim
from torchvision import transforms, models
from torch.nn import Linear, Softmax, Module, Sequential, CrossEntropyLoss
import numpy as np
from tqdm import tqdm
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
transform = transforms.Compose([transforms.ToTensor()])
trainset = torchvision.datasets.CIFAR10(root='./DataSet', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='./DataSet', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=0)
import torch.nn as nn
import torch.nn.functional as F
class Net(Module):
def __init__(self):
super(Net, self).__init__()
self.funFeatExtra = Sequential(*[i for i in list(models.densenet121().children())[:-1]])
self.funFlatten = flatten
self.funOutputLayer = Linear(1024, 10)
self.funSoftmax = Softmax(dim=1)
def forward(self, x):
x = self.funFeatExtra(x)
x = self.funFlatten(x, 1)
x = self.funOutputLayer(x)
x = self.funSoftmax(x)
return x
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(20): # loop over the dataset multiple times
running_loss = 0.0
for i, data in tqdm(enumerate(trainloader, 0)):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net.cuda()(inputs.cuda())
loss = criterion(outputs, labels.cuda())
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
# if i % 2000 == 1999: # print every 2000 mini-batches
# print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
# running_loss = 0.0
print('Finished Training')
########################################################################
# The results seem pretty good.
#
# Let us look at how the network performs on the whole dataset.
correct = 0
total = 0
with torch.no_grad():
for data in tqdm(testloader):
images, labels = data
outputs = net.cpu()(images.cpu())
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
| You are not supposed to softmax the model output before you pass it to CrossEntropyLoss. Per the documentation:
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
...
The input is expected to contain raw, unnormalized scores for each class.
You can softmax them separately (outside of forward()) when calculating accuracy.
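Concretely, a hedged sketch of what that means for the forward() above — return raw logits and keep the softmax out of the loss path:
def forward(self, x):
    x = self.funFeatExtra(x)
    x = self.funFlatten(x, 1)
    return self.funOutputLayer(x)  # raw logits, as nn.CrossEntropyLoss expects

# for accuracy, argmax works directly on the logits (softmax preserves ordering):
# _, predicted = torch.max(net(images), 1)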
| https://stackoverflow.com/questions/57071035/ |
Combined GRU and CNN network always returns the same value for all inputs | I am trying to train a combined CNN and GRU/LSTM to find out the number of objects in a series of pictures that move and the number of objects that do not move. For this reason I am using a CNN to process my images and subsequently a GRU.
My problem is that the GRU always returns the same value for each input set. What could be the reasons for that?
I have already tried to use different learning rates and adding linear layers after the GRU.
My network:
class GRU(nn.Module):
def __init__(self, **kwargs):
super(GRU, self).__init__()
self.n_class = int(kwargs.get("n_class"))
self.seq_length = int(kwargs.get("seq_length"))
self.input_shape = int(kwargs.get("input_shape"))
self.n_channels = int(kwargs.get("n_channels"))
self.conv1 = nn.Conv2d(in_channels=1 * self.seq_length, out_channels=4 * self.seq_length, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=4 * self.seq_length, out_channels=8 * self.seq_length, kernel_size=5)
self.conv3 = nn.Conv2d(in_channels=8 * self.seq_length, out_channels=16 * self.seq_length, kernel_size=5)
self.rnn = nn.GRU(
input_size=self.seq_length,
hidden_size=64,
num_layers=1,
batch_first=True)
self.linear = nn.Linear(64, 2)
def forward(self, t):
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# second conv layer
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=4, stride=4)
# third conv layer
t = self.conv3(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=3, stride=3)
t = t.reshape(-1 , self.seq_length, 16 * 20 ** 2)
t = t.permute(0,2,1)
t, (h_n) =self.rnn(t)
t = self.linear(t[:,-1])
return t
and this is my training:
for epoch in range(number_epochs):
for batch in get_batch_generator(batch_size, rootdir, seq_length=seq_length):
current_batch = batch[0].cuda()
current_labels = batch[1].cuda()
pre = nw(current_batch)
loss_func = torch.nn.MSELoss()
loss = loss_func(pre, current_labels)
loss.backward()
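# note: constructing the optimizer here, inside the batch loop, resets Adam's moment estimates every step; build it once before the loops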
optimizer = optim.Adam(nw.parameters(), lr=learning_rate)
optimizer.step()
Here is an example of the output, actual labels:
tensor([[ 4., 5.],
[10., 0.],
[10., 0.],
[ 2., 9.],
[ 5., 1.],
[10., 0.]], device='cuda:0')
Prediction of my network:
tensor([[2.0280, 1.1517],
[2.0175, 1.1593],
[2.0323, 1.1434],
[2.0333, 1.1557],
[2.0200, 1.1546],
[2.0069, 1.1687]], device='cuda:0', grad_fn=<AddmmBackward>)
So the output is the same for every input, for both classes (moving and not-moving objects), which should not be the case.
| Finally, I found out that it is necessary to set the gradients to zero for every batch. For some reason this did not cause problems when I was training normal CNNs without an LSTM.
The command that needs to be added to every training loop before the back-propagation:
optimizer.zero_grad()
or
nw.zero_grad()
| https://stackoverflow.com/questions/57078048/ |
load pickle file obtained from GPU to CPU | I ran into a problem when loading a pickle file on a CPU-only machine. Searching the internet, the advice is to add the map_location parameter. However, after I add this parameter, the problem still exists.
the code is as follows:
torch.__version__
torch.load('featurs.pkl',map_location='cpu')
>>>
'1.0.1.post2'
Attempting to deserialize object on a CUDA device
but torch.cuda.is_available() is False. If you are running
on a CPU-only machine, please use torch.load with map_location='cpu'
to map your storages to the CPU.
I know it is because of the device mismatch, but I followed the instruction in the error message and it still fails, so I do not know what to try next.
Thanks in advance!
| The error message suggests using map_location=torch.device('cpu') but even that doesn't work. One workaround is to use the pickle library and implement a custom unpickler.
import pickle
import io
class CPU_Unpickler(pickle.Unpickler):
def find_class(self, module, name):
if module == 'torch.storage' and name == '_load_from_bytes':
return lambda b: torch.load(io.BytesIO(b), map_location='cpu')
else:
return super().find_class(module, name)
#contents = pickle.load(f) becomes...
contents = CPU_Unpickler(f).load()
Source: Github
| https://stackoverflow.com/questions/57081727/ |