Creating and training only specified weights in TensorFlow or PyTorch
I am wondering if there is a way in TensorFlow, PyTorch or some other library to selectively connect neurons. I want to make a network with a very large number of neurons in each layer, but that has very few connections between layers. Note that I do not think this is a duplicate of this answer: Selectively zero weights in TensorFlow?. I implemented a custom keras layer using essentially the same method that appears in that question - by creating a dense layer where all but the specified weights are ignored in training and evaluation. This fulfills part of what I want to do by not training specified weights, and not using them for prediction. But the problem is that I still waste memory saving the untrained weights, and I waste time calculating the gradients of the zeroed weights. What I would like is for the computation of the gradient matrices to involve only sparse matrices, so that I do not waste time and memory. Is there a way to selectively create and train weights without wasting memory? If my question is unclear or there is more information that it would be helpful for me to provide, please let me know. I would like to be helpful as a question-asker.
The usual, simple solution is to initialize your weight matrices to have zeros where there should be no connection. You store a mask of the location of these zeros, and set the weights at these positions to zero after each weight update. You need to do this as the gradient for zero weights may be nonzero, and this would introduce nonzero weights (i.e. connections) where you don't want any. Pseudocode: # setup network weights = sparse_init() # only nonzero for existing connections zero_mask = where(weights == 0) # train for e in range(num_epochs): train_operation() # may lead to introduction of new connections weights[zero_mask] = 0 # so we set them to zero again
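A minimal PyTorch sketch of this masking idea (the layer sizes and the 5% connectivity below are made up for illustration, not taken from the question): zero the masked weights once at initialization and again after every optimizer step. Note that this still stores a dense weight matrix and computes dense gradients, which is exactly the memory/compute waste the question complains about; truly sparse training would need sparse tensor support.

import torch
import torch.nn as nn

layer = nn.Linear(100, 100)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.01)

# Hypothetical sparse connectivity: keep roughly 5% of the connections.
mask = (torch.rand_like(layer.weight) < 0.05).float()
with torch.no_grad():
    layer.weight.mul_(mask)              # zero out the non-connections initially

for step in range(100):
    x = torch.randn(32, 100)
    loss = layer(x).pow(2).mean()        # dummy objective, stands in for the real loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        layer.weight.mul_(mask)          # re-zero weights the update may have re-introduced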
https://stackoverflow.com/questions/52866049/
PyTorch: new_ones vs ones
In PyTorch what is the difference between new_ones() vs ones(). For example, x2.new_ones(3,2, dtype=torch.double) vs torch.ones(3,2, dtype=torch.double)
For the sake of this answer, I am assuming that your x2 is a previously defined torch.Tensor. If we then head over to the PyTorch documentation, we can read the following on new_ones(): Returns a Tensor of size size filled with 1. By default, the returned Tensor has the same torch.dtype and torch.device as this tensor. Whereas ones() Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument sizes. So, essentially, new_ones allows you to quickly create a new torch.Tensor on the same device and data type as a previously existing tensor (with ones), whereas ones() serves the purpose of creating a torch.Tensor from scratch (filled with ones).
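A short illustration of the difference (the values are arbitrary):

import torch

x2 = torch.tensor([[1., 2.], [3., 4.]], dtype=torch.double)

a = x2.new_ones(3, 2)    # inherits dtype (float64) and device from x2
b = torch.ones(3, 2)     # built from scratch: default dtype (float32) on the CPU

print(a.dtype)           # torch.float64
print(b.dtype)           # torch.float32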
https://stackoverflow.com/questions/52866333/
What does the "greater than" operator ">" mean for PyTorch tensors?
I have a tensor it defined as: import torch it = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0') Given this definition, what does it > 0 then mean?
Using the > operator is the same as using the torch.gt() function. In other words, it > 0 is the same as torch.gt(it, 0) and it returns a ByteTensor (a Boolean tensor) of the same shape as it where out[i] is True if it[i] > 0 and False otherwise.
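A quick check of the equivalence (using a small CPU tensor instead of the CUDA tensor from the question):

import torch

it = torch.tensor([0, 1, -2, 3, 0])

print(it > 0)            # tensor([False,  True, False,  True, False])
print(torch.gt(it, 0))   # identical result
# Recent PyTorch versions return a BoolTensor here; older ones (e.g. 0.4) returned
# a ByteTensor of 1s and 0s, which is what the answer above calls a "Boolean tensor".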
https://stackoverflow.com/questions/52868102/
Is it possible to make CUDA deterministic?
I am trying to refactor an application and testing new against old version by comparing outputs. At the first pytorch conv layer using identical input I get identical output when using CPU. However using cuda on GPU the output tensors differ in the 5th decimal place on some cells. Is there any way to eliminate this difference? I am already setting: random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True
The CPU and GPU can't produce exactly the same results even if the seeds are set equal. Refer to this and this.
https://stackoverflow.com/questions/52874480/
Pytorch : significance of grad_h_relu.clone() in Backprop
I am learning Pytorch; while looking at the tutorial on their site I can't understand the significance of grad_h = grad_h_relu.clone(). h = x.mm(w1) h_relu = h.clamp(min=0) y_pred = h_relu.mm(w2) # Compute and print loss loss = (y_pred - y).pow(2).sum().item() print(t, loss) # Backprop to compute gradients of w1 and w2 with respect to loss grad_y_pred = 2.0 * (y_pred - y) grad_w2 = h_relu.t().mm(grad_y_pred) grad_h_relu = grad_y_pred.mm(w2.t()) grad_h = grad_h_relu.clone() # what is the significance of this line? grad_h[h < 0] = 0 grad_w1 = x.t().mm(grad_h) # Update weights using gradient descent w1 -= learning_rate * grad_w1 w2 -= learning_rate * grad_w2
grad_h = grad_h_relu.clone() It means that you are making a copy of the gradient of relu such that it does not share memory with the original grad_h_relu. Then you perform some operation on it. As they are stored in two different locations, changing the value of grad_h with the following operation will not affect grad_h_relu. grad_h[h < 0] = 0 grad_w1 = x.t().mm(grad_h) This grad_w1 is needed to update the parameters of your network.
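A minimal demonstration of why the copy matters (the tensors below are made-up stand-ins for the tutorial's values):

import torch

grad_h_relu = torch.tensor([1., -2., 3.])
h = torch.tensor([0.5, -1.0, 2.0])

grad_h = grad_h_relu.clone()   # separate storage
grad_h[h < 0] = 0              # in-place masking only touches the copy

print(grad_h_relu)             # tensor([ 1., -2.,  3.]) -- unchanged
print(grad_h)                  # tensor([1., 0., 3.])

# Without clone(), grad_h = grad_h_relu would just be another name for the same tensor,
# and the masking above would silently overwrite grad_h_relu as well.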
https://stackoverflow.com/questions/52875989/
What is the equivalent of torch.nn.functional.grid_sample in Tensorflow / Numpy?
I am new to PyTorch and have been trying to convert some code. Can't find this particular functionality. Does it exist in TensorFlow?
I do not think there is anything like that provided in TensorFlow. Here is a possible implementation for the 2D case (I have not considered padding, but the code should behave like the border mode). Note that, unlike the PyTorch version, I am assuming the input dimension order is (batch_size, height, width, channels) (as is common in TensorFlow). import tensorflow as tf import numpy as np import matplotlib.pyplot as plt def grid_sample_2d(inp, grid): in_shape = tf.shape(inp) in_h = in_shape[1] in_w = in_shape[2] # Find interpolation sides i, j = grid[..., 0], grid[..., 1] i = tf.cast(in_h - 1, grid.dtype) * (i + 1) / 2 j = tf.cast(in_w - 1, grid.dtype) * (j + 1) / 2 i_1 = tf.maximum(tf.cast(tf.floor(i), tf.int32), 0) i_2 = tf.minimum(i_1 + 1, in_h - 1) j_1 = tf.maximum(tf.cast(tf.floor(j), tf.int32), 0) j_2 = tf.minimum(j_1 + 1, in_w - 1) # Gather pixel values n_idx = tf.tile(tf.range(in_shape[0])[:, tf.newaxis, tf.newaxis], tf.concat([[1], tf.shape(i)[1:]], axis=0)) q_11 = tf.gather_nd(inp, tf.stack([n_idx, i_1, j_1], axis=-1)) q_12 = tf.gather_nd(inp, tf.stack([n_idx, i_1, j_2], axis=-1)) q_21 = tf.gather_nd(inp, tf.stack([n_idx, i_2, j_1], axis=-1)) q_22 = tf.gather_nd(inp, tf.stack([n_idx, i_2, j_2], axis=-1)) # Interpolation coefficients di = tf.cast(i, inp.dtype) - tf.cast(i_1, inp.dtype) di = tf.expand_dims(di, -1) dj = tf.cast(j, inp.dtype) - tf.cast(j_1, inp.dtype) dj = tf.expand_dims(dj, -1) # Compute interpolations q_i1 = q_11 * (1 - di) + q_21 * di q_i2 = q_12 * (1 - di) + q_22 * di q_ij = q_i1 * (1 - dj) + q_i2 * dj return q_ij # Test it inp = tf.placeholder(tf.float32, [None, None, None, None]) grid = tf.placeholder(tf.float32, [None, None, None, 2]) res = grid_sample_2d(inp, grid) with tf.Session() as sess: # Make test image im_grid_i, im_grid_j = np.meshgrid(np.arange(6), np.arange(10), indexing='ij') im = im_grid_i + im_grid_j im = im / im.max() im = np.stack([im] * 3, axis=-1) # Test grid 1: complete image grid1 = np.stack(np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 18), indexing='ij'), axis=-1) # Test grid 2: lower right corner grid2 = np.stack(np.meshgrid(np.linspace(0, 1, 15), np.linspace(.5, 1, 18), indexing='ij'), axis=-1) # Run res1, res2 = sess.run(res, feed_dict={inp: [im, im], grid: [grid1, grid2]}) # Plot image and sampled grids plt.figure() plt.imshow(im) plt.figure() plt.imshow(res1) plt.figure() plt.imshow(res2) Here are the resulting images, first the input: First grid result, which is the first image but with different shape: Second grid result, which spans a region in the lower right corner:
https://stackoverflow.com/questions/52888146/
PyTorch autocomplete (code completion) in Sublime Text 3
I'm a fan of Sublime Text 3 and would like to get code autocompletion for PyTorch. However, I can't get this working yet. Any suggestions or starting points where I can begin to get this working? I have searched in the packages repository of Sublime Text but unfortunately there's none. Note: I have looked at a related question here IDE autocomplete for pytorch but that's only for VS Code.
Not a Sublime user, but Jedi autocompletion handles PyTorch suggestions just fine. Jedi is an autocompletion engine and should work for any Python package, including the ones in a virtual environment if you set it up, and including PyTorch/TensorFlow. Anyway, the Sublime version seems to be missing a few things like result caching, hence you may have to wait a few seconds after typing torch. (the library is quite heavy, which is why you will most probably experience some lag). If you'd like to speed it up, I see no other possibility than changing it manually (the vim plugin for jedi I am using has this option implemented; you may check how they've done it here; both jedi plugins seem to be written in Python and hence should be tunable/fixable). Oh, and if something isn't working (or maybe you would like to submit a PR to the jedi team/sublime jedi team), the community around it is quite vibrant and you should get some help (definitely better and more in-depth than here on StackOverflow).
https://stackoverflow.com/questions/52957809/
Error while executing RNN code in PyTorch?
I am running a code on Binary Addition of two strings using PyTorch. However, while training the model I am getting the following error: can't convert np.ndarray of type numpy.object. The only supported types are: double, float, float16, int64, int32, and uint8. Can anyone help me? Here is my code: featDim=2 # two bits each from each of the String outputDim=1 # one output node which would output a zero or 1 lstmSize=10 lossFunction = nn.MSELoss() model =Adder(featDim, lstmSize, outputDim) print ('model initialized') #optimizer = optim.SGD(model.parameters(), lr=3e-2, momentum=0.8) optimizer=optim.Adam(model.parameters(),lr=0.001) epochs=500 ### epochs ## totalLoss= float("inf") while totalLoss > 1e-5: print(" Avg. Loss for last 500 samples = %lf"%(totalLoss)) totalLoss=0 for i in range(0,epochs): # average the loss over 200 samples stringLen=4 testFlag=0 x,y=getSample(stringLen, testFlag) model.zero_grad() x_var=autograd.Variable(torch.from_numpy(x).unsqueeze(1).float()) #convert to torch tensor and variable # unsqueeze() is used to add the extra dimension since # your input need to be of t*batchsize*featDim; you cant do away with the batch in pytorch seqLen=x_var.size(0) #print (x_var) x_var= x_var.contiguous() y_var=autograd.Variable(torch.from_numpy(y).float()) ##ERROR ON THIS LINE finalScores = model(x_var) #finalScores=finalScores. loss=lossFunction(finalScores,y_var) totalLoss+=loss.data[0] optimizer.zero_grad() loss.backward() optimizer.step() totalLoss=totalLoss/epochs
The main problem here is the type of your y. You haven't given any information about this, so this will be more general: but obviously your ndarray doesn't contain numeric data types. You have to use one of those mentioned in your error message: The only supported types are: double, float, float16, int64, int32, and uint8. So here is a short example to demonstrate the issue: If you use one of the previously mentioned data types it works just fine: import torch import numpy as np a = np.ndarray(shape=(2,2), dtype=np.float) # data type np.float print(a) print(torch.autograd.Variable(torch.from_numpy(a).float())) Output: [[2.16641777e-314 2.16641777e-314] [2.16641777e-314 2.16641777e-314]] Variable containing: 0 0 0 0 [torch.FloatTensor of size 2x2] But if you use some other numpy data type (like np.object) you will get this error message: import torch import numpy as np a = np.ndarray(shape=(2,2), dtype=np.object) # data type np.object print(a) print(torch.autograd.Variable(torch.from_numpy(a).float())) This results in: [[None None] [None None]] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-27-01e1e4bec020> in <module>() 3 a = np.ndarray(shape=(2,2), dtype=np.object) 4 print(a) ----> 5 print(torch.autograd.Variable(torch.from_numpy(a).float())) RuntimeError: can't convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8. You probably haven't specified the data type np.object directly. I guess this is maybe a result of some nested arrays or so. But you need to bring your numpy array y into a proper shape with a numeric data type; then it should work for you.
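As a sketch of the likely fix (whether this cast is valid depends on what the question's getSample actually returns; the array below is only a stand-in): force y to a supported numeric dtype before handing it to torch:

import numpy as np
import torch

# Stand-in for what getSample() might return: an object-dtype array.
y = np.array([0.0, 1.0, 1.0, 0.0], dtype=object)

# Cast to a supported numeric dtype first; this only works if every element is
# actually numeric and the array has a regular (non-ragged) shape.
y = np.asarray(y, dtype=np.float32)
y_var = torch.from_numpy(y).float()
print(y_var)   # tensor([0., 1., 1., 0.])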
https://stackoverflow.com/questions/52966950/
How can I visualize what happens during loss.backward()?
I am confident in my understanding of the forward pass of my model; how can I control its backward pass? This is not a theoretical question about what back-propagation is. The question is a practical one, about whether or not there are tools suited to visualize/track/control what happens during back-propagation. Ideally, this tool would allow me to visualize the structure of the computational graph of the model (a graph of the model's operations), its inputs and its trainable parameters. Now, I do: loss.backward() and I would like to visualize what happens in that step.
There has already been mention of pytorchviz, which lets you visualize the graph. Here is a small example that might help you to understand how pytorchviz traces the graph using the grad_fn: import torch from torch import nn d = 5 x = torch.rand(d, requires_grad=True) print('Tensor x:', x) y = torch.ones(d, requires_grad=True) print('Tensor y:', y) loss = torch.sum(x*y)*3 del x print() print('Tracing back tensors:') def getBack(var_grad_fn): print(var_grad_fn) for n in var_grad_fn.next_functions: if n[0]: try: tensor = getattr(n[0], 'variable') print(n[0]) print('Tensor with grad found:', tensor) print(' - gradient:', tensor.grad) print() except AttributeError as e: getBack(n[0]) loss.backward() getBack(loss.grad_fn) Output: Tensor x: tensor([0.0042, 0.5376, 0.7436, 0.2737, 0.4848], requires_grad=True) Tensor y: tensor([1., 1., 1., 1., 1.], requires_grad=True) Tracing back tensors: <MulBackward object at 0x1201bada0> <SumBackward0 object at 0x1201bacf8> <ThMulBackward object at 0x1201bae48> <AccumulateGrad object at 0x1201badd8> Tensor with grad found: tensor([0.0042, 0.5376, 0.7436, 0.2737, 0.4848], requires_grad=True) - gradient: tensor([3., 3., 3., 3., 3.]) <AccumulateGrad object at 0x1201bad68> Tensor with grad found: tensor([1., 1., 1., 1., 1.], requires_grad=True) - gradient: tensor([0.0125, 1.6129, 2.2307, 0.8211, 1.4543]) Further, you should definitely take a look at how the autograd functions (which are used by the backward() function) actually work! Here is a tutorial from the pytorch site with an easy and short example: PyTorch: Defining New autograd Functions Hope this helps a bit!
https://stackoverflow.com/questions/52988876/
KL Divergence of Normal and Laplace isn't Implemented in TensorFlow Probability and PyTorch
In both TensorFlow Probability (v0.4.0) and PyTorch (v0.4.1) the KL Divergence of the Normal distribution (tfp, PyTorch) and the Laplace distribution (tfp, PyTorch) isn't implemented resulting in a NotImplementedError error being thrown. >>> import tensorflow as tf >>> import tensorflow_probability as tfp >>> tfd = tfp.distributions >>> import torch >>> >>> tf.__version__ '1.11.0' >>> tfp.__version__ '0.4.0' >>> torch.__version__ '0.4.1' >>> >>> p = tfd.Normal(loc=0., scale=1.) >>> q = tfd.Laplace(loc=0., scale=1.) >>> tfd.kl_divergence(p, q) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/miniconda/envs/example/lib/python3.6/site-packages/tensorflow/python/ops/distributions/kullback_leibler.py", line 95, in kl_divergence % (type(distribution_a).__name__, type(distribution_b).__name__)) NotImplementedError: No KL(distribution_a || distribution_b) registered for distribution_a type Normal and distribution_b type Laplace >>> >>> a = torch.distributions.normal.Normal(loc=0., scale=1.) >>> b = torch.distributions.laplace.Laplace(loc=0., scale=1.) >>> torch.distributions.kl.kl_divergence(a,b) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/miniconda/envs/example/lib/python3.6/site-packages/torch/distributions/kl.py", line 161, in kl_divergence raise NotImplementedError NotImplementedError I assume as this is missing from both of these libraries that there is some good reason for this and that the user would be expected to implement it themselves with tfp.distributions.RegisterKL in TensorFlow Probability and torch.distributions.kl.register_kl in PyTorch. Is this the correct assumption? If so, can someone explain why the KL Divergence wouldn't be implemented for given distribution classes? I think I am missing something very basic about this. If my assumption is wrong, can someone explain how to properly have TensorFlow and PyTorch implement these operations? For additional reference, using for this example an older version of TensorFlow that works with Edward, pip install tensorflow==1.7 pip install edward In this minimal example above, I'm trying to implement the equivalent of the following edward toy example code in tfp (or in torch). import tensorflow as tf import edward as ed p = ed.models.Normal(loc=0., scale=1.) s = tf.Variable(1.) q = ed.models.Laplace(loc=0., scale=s) inference = ed.KLqp({p: q}) inference.run(n_iter=5000)
IIRC, Edward's KLqp tries to use the analytic form, and if that is not available, switches to using the sample KL. For TFP, and I think PyTorch, kl_divergence only works for registered distribution pairs and, unlike Edward, only computes the analytic KL. As you mention, these aren't implemented in TFP, and I would say that's mostly because the common cases (such as KL(MultivariateNormal || MultivariateNormal)) have been implemented. To register the KL divergence, you would do something like: https://github.com/tensorflow/probability/blob/07878168731e0f6d3d0e7c878bdfd5780c16c8d4/tensorflow_probability/python/distributions/gamma.py#L275. (It would be great if you could file a PR at https://github.com/tensorflow/probability!). If it turns out that there isn't a suitable analytic form of this (off the top of my head, I don't know if there is one), then one can form the sample KL and do optimization with that. That can be done explicitly in TFP (by sampling and computing the sample KL). Also please file a PR if you would like this to be done more automatically as well. This is something some of us on TFP are interested in. It would be interesting to see for what cases analytic KLs can be automated. For instance, if q and p come from the same exponential family, then there is a nice form for the KL divergence in terms of sufficient statistics and the normalizer. But for KLs that are across exponential families (or even not exponential families), I'm not aware of results on classes of distributions where you can calculate the KL within the class semi-automatically.
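For the PyTorch side, a hedged sketch of registering a custom KL (the body below is a Monte Carlo estimate, not a verified analytic form for KL(Normal || Laplace)):

import torch
from torch.distributions import Normal, Laplace, kl_divergence, register_kl

@register_kl(Normal, Laplace)
def _kl_normal_laplace(p, q):
    # Sample-based estimate of E_p[log p(x) - log q(x)]; swap in a closed form
    # if you derive one. rsample keeps the estimate differentiable w.r.t. p.
    x = p.rsample(torch.Size([1000]))
    return (p.log_prob(x) - q.log_prob(x)).mean(0)

a = Normal(loc=0., scale=1.)
b = Laplace(loc=0., scale=1.)
print(kl_divergence(a, b))   # now returns the estimate instead of raising NotImplementedError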
https://stackoverflow.com/questions/53036403/
Pytorch is not using GPU even though it detects the GPU
I made my windows 10 jupyter notebook as a server and running some trains on it. I've installed CUDA 9.0 and cuDNN properly, and python detects the GPU. This is what I've got on the anaconda prompt. >>> torch.cuda.get_device_name(0) 'GeForce GTX 1070' And I also placed my model and tensors on cuda by .cuda() model = LogPPredictor(1, 58, 64, 128, 1, 'gsc') if torch.cuda.is_available(): torch.set_default_tensor_type(torch.cuda.DoubleTensor) model.cuda() else: torch.set_default_tensor_type(torch.FloatTensor) list_train_loss = list() list_val_loss = list() acc = 0 mse = 0 optimizer = args.optim(model.parameters(), lr=args.lr, weight_decay=args.l2_coef) data_train = DataLoader(args.dict_partition['train'], batch_size=args.batch_size, pin_memory=True, shuffle=args.shuffle) data_val = DataLoader(args.dict_partition['val'], batch_size=args.batch_size, pin_memory=True, shuffle=args.shuffle) for epoch in tqdm_notebook(range(args.epoch), desc='Epoch'): model.train() epoch_train_loss = 0 for i, batch in enumerate(data_train): list_feature = torch.tensor(batch[0]).cuda() list_adj = torch.tensor(batch[1]).cuda() list_logP = torch.tensor(batch[2]).cuda() list_logP = list_logP.view(-1,1) optimizer.zero_grad() list_pred_logP = model(list_feature, list_adj) list_pred_logP.require_grad = False train_loss = args.criterion(list_pred_logP, list_logP) epoch_train_loss += train_loss.item() train_loss.backward() optimizer.step() list_train_loss.append(epoch_train_loss/len(data_train)) model.eval() epoch_val_loss = 0 with torch.no_grad(): for i, batch in enumerate(data_val): list_feature = torch.tensor(batch[0]).cuda() list_adj = torch.tensor(batch[1]).cuda() list_logP = torch.tensor(batch[2]).cuda() list_logP = list_logP.view(-1,1) list_pred_logP = model(list_feature, list_adj) val_loss = args.criterion(list_pred_logP, list_logP) epoch_val_loss += val_loss.item() list_val_loss.append(epoch_val_loss/len(data_val)) data_test = DataLoader(args.dict_partition['test'], batch_size=args.batch_size, pin_memory=True, shuffle=args.shuffle) model.eval() with torch.no_grad(): logP_total = list() pred_logP_total = list() for i, batch in enumerate(data_val): list_feature = torch.tensor(batch[0]).cuda() list_adj = torch.tensor(batch[1]).cuda() list_logP = torch.tensor(batch[2]).cuda() logP_total += list_logP.tolist() list_logP = list_logP.view(-1,1) list_pred_logP = model(list_feature, list_adj) pred_logP_total += list_pred_logP.tolist() mse = mean_squared_error(logP_total, pred_logP_total) But on the Process Manager of Windows, whenever I start training, only CPU usage goes up to 25% and GPU usage remains 0. How can I fix this???
I had a similar problem using PyTorch on CUDA. After looking for possible solutions, I found the following post by Soumith himself, which I found very helpful. https://discuss.pytorch.org/t/gpu-supposed-to-be-used-but-isnt/2883 The bottom line is, at least in my case, that I could not put enough load on the GPUs. There was a bottleneck in my application. Try another example, or increase the batch size; it should be OK.
https://stackoverflow.com/questions/53043713/
Importing pytorch in Spyder crashes kernel after installing matplotlib
I created an environment in Anaconda3 and installed pytorch and spyder on a Linux machine. Here are the specifications: spyder 3.3.1 ipython 7.0.1 python 3.7.0 pytorch 0.4.1 torchvision 0.2.1 When I open spyder and import torch, it works. Afterwards I installed matplotlib 3.0.1. Restarting spyder and importing again pytorch results in a message on the ipython window in spyder: An error ocurred while starting the kernel terminate called after throwing an instance of 'std::runtime_error' what(): expected ) but found 'ident' here: aten::_addmv(Tensor self, Tensor mat, Tensor vec, *, Scalar beta=1, Scalar alpha=1) ‑> Tensor ~~~~~~ <‑‑‑ HERE On the bash terminal, I get the message: js: Not allowed to load local resource: file:///home/user/anaconda3/envs/myenv/lib/python3.7/site-packages/spyder/utils/help/static/css/default.css I have been using all these packages in another environment for months (so they are in an older version), so it must be something with the new versions. If I run ipython or python on the terminal, importing works, so I am concluding it has something to do with spyder. The 'solution' is obvious: install older versions of the packages, but is there any other more sustainable solution?
I have ipython 7.0.1 and matplotlib 2.0.2 and the same problem; it seems like ipython crashes after the following two commands: %matplotlib auto followed by import torch. This happens both in Spyder and in Jupyter notebook when the two commands are in separate blocks. What worked for me was: first making sure that Spyder's graphics backend is set to inline: Tools -> Preferences -> IPython console -> Graphics backend to Inline. Then import torch followed by switching from inline to external plotting with %matplotlib auto. Note that this does not happen any more with ipython 7.2.0 and matplotlib 3.0.2
https://stackoverflow.com/questions/53047983/
Pytorch with 3 GPUs can only use 2 of them to train
I have three 1080 Tis, but when training I can only use 2 of them. Code: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.cuda() criterion = nn.CrossEntropyLoss().cuda() optimizer_conv = optim.SGD(model.classifier.parameters(), lr=0.0001, momentum=0.9) exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1) train part: outputs = nn.parallel.data_parallel(model,inputs,device_ids=[0,1,2]) With "CUDA_VISIBLE_DEVICES="1,2,3" python train.py" Got this: | 22% 35C P8 10W / 250W | 12MiB / 11178MiB | 0% | 43% 59C P2 92W / 250W | 1169MiB / 11178MiB | 49% | 44% 60C P2 91W / 250W | 1045MiB / 11175MiB | 54% With "CUDA_VISIBLE_DEVICES="0,1,2" python train.py" Got this: | 21% 38C P2 95W / 250W | 1169MiB / 11178MiB | 78% Default | | 42% 63C P2 93W / 250W | 777MiB / 11178MiB | 76% Default | | 43% 64C P0 85W / 250W | 282MiB / 11175MiB | 0% Default |
I found the reason: my batch size was 4 while there are three GPUs, so making the batch size bigger solves this "weird" problem.
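A likely explanation (an assumption based on how DataParallel scatters batches, not something stated in the answer): the batch is chunked along dimension 0, and chunking 4 samples into 3 parts produces only two chunks, so the third GPU receives no data at all:

import torch

# torch.chunk with 3 requested chunks uses a chunk size of ceil(4/3) = 2,
# so a batch of 4 yields only two chunks -> only two GPUs get work.
print([t.shape[0] for t in torch.chunk(torch.zeros(4, 10), 3)])   # [2, 2]
print([t.shape[0] for t in torch.chunk(torch.zeros(6, 10), 3)])   # [2, 2, 2]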
https://stackoverflow.com/questions/53078632/
PyTorch Cuda with anaconda not available
I'm using anaconda to regulate my environment, for a project i have to use my GPU for network training. I use pytorch for my project and i'm trying to get CUDA working. I installed cudatoolkit, numba, cudnn still, when i try this command: torch.cuda.is_available() I get "False" as output. This is my environment: # Name Version Build Channel blas 1.0 mkl bzip2 1.0.6 h470a237_2 conda-forge ca-certificates 2018.03.07 0 cairo 1.14.12 he6fea26_5 conda-forge certifi 2018.8.24 py35_1 cffi 1.11.5 py35he75722e_1 cloudpickle 0.5.5 py35_0 cudatoolkit 9.2 0 anaconda cudnn 7.2.1 cuda9.2_0 anaconda cycler 0.10.0 py_1 conda-forge cython 0.28.5 py35hf484d3e_0 anaconda dask-core 0.19.2 py35_0 dbus 1.13.0 h3a4f0e9_0 conda-forge decorator 4.3.0 py35_0 expat 2.2.5 hfc679d8_2 conda-forge ffmpeg 4.0.2 ha0c5888_1 conda-forge fontconfig 2.13.1 h65d0f4c_0 conda-forge freetype 2.9.1 h6debe1e_4 conda-forge gettext 0.19.8.1 h5e8e0c9_1 conda-forge giflib 5.1.4 h470a237_1 conda-forge glib 2.55.0 h464dc38_2 conda-forge gmp 6.1.2 hfc679d8_0 conda-forge gnutls 3.5.19 h2a4e5f8_1 conda-forge graphite2 1.3.12 hfc679d8_1 conda-forge gst-plugins-base 1.12.5 hde13a9d_0 conda-forge gstreamer 1.12.5 h61a6719_0 conda-forge harfbuzz 1.9.0 h08d66d9_0 conda-forge hdf5 1.10.2 hc401514_2 conda-forge icu 58.2 hfc679d8_0 conda-forge imageio 2.4.1 py35_0 intel-openmp 2019.0 118 jasper 1.900.1 hff1ad4c_5 conda-forge jpeg 9c h470a237_1 conda-forge kiwisolver 1.0.1 py35h2d50403_2 conda-forge libedit 3.1.20170329 h6b74fdf_2 libffi 3.2.1 hd88cf55_4 libgcc-ng 8.2.0 hdf63c60_1 libgfortran 3.0.0 1 conda-forge libgfortran-ng 7.3.0 hdf63c60_0 libiconv 1.15 h470a237_3 conda-forge libopenblas 0.3.3 h5a2b251_3 libpng 1.6.35 ha92aebf_2 conda-forge libstdcxx-ng 8.2.0 hdf63c60_1 libtiff 4.0.9 he6b73bb_2 conda-forge libuuid 2.32.1 h470a237_2 conda-forge libwebp 0.5.2 7 conda-forge libxcb 1.13 h470a237_2 conda-forge libxml2 2.9.8 h422b904_5 conda-forge llvmlite 0.24.0 py35hdbcaa40_0 matplotlib 3.0.0 py35h0b34cb6_1 conda-forge mkl 2019.0 118 mkl_fft 1.0.6 py35_0 conda-forge mkl_random 1.0.1 py35_0 conda-forge ncurses 6.1 hf484d3e_0 nettle 3.3 0 conda-forge networkx 2.1 py35_0 ninja 1.8.2 py35h6bb024c_1 numba 0.39.0 py35h04863e7_0 numpy 1.15.2 py35h1d66e8a_0 numpy-base 1.15.2 py35h81de0dd_0 olefile 0.46 py35_0 openblas 0.2.20 8 conda-forge opencv 3.4.1 py35h6fd60c2_1 opencv-python 3.4.3.18 <pip> openh264 1.7.0 0 conda-forge openssl 1.0.2p h14c3975_0 pandas 0.23.4 py35h04863e7_0 pcre 8.41 hfc679d8_3 conda-forge pillow 5.2.0 py35heded4f4_0 Pillow 5.3.0 <pip> pip 10.0.1 py35_0 pixman 0.34.0 h470a237_3 conda-forge pthread-stubs 0.4 h470a237_1 conda-forge pycparser 2.19 py35_0 pyparsing 2.2.2 py_0 conda-forge pyqt 5.6.0 py35h8210e8a_7 conda-forge python 3.5.6 hc3d631a_0 python-dateutil 2.7.3 py_0 conda-forge pytorch 0.4.1 py35_py27__9.0.176_7.1.2_2 pytorch pytz 2018.5 py35_0 pywavelets 1.0.0 py35hdd07704_0 qt 5.6.2 hf70d934_9 conda-forge readline 7.0 h7b6447c_5 scikit-image 0.14.0 py35hf484d3e_1 scipy 1.1.0 py35hfa4b5c9_1 setuptools 40.2.0 py35_0 sip 4.18.1 py35hfc679d8_0 conda-forge six 1.11.0 py35_1 conda-forge sqlite 3.25.2 h7b6447c_0 tk 8.6.8 hbc83047_0 toolz 0.9.0 py35_0 torchvision 0.1.9 py35h72e4c6f_1 soumith tornado 5.1.1 py35h470a237_0 conda-forge wheel 0.31.1 py35_0 x264 1!152.20180717 h470a237_1 conda-forge xorg-kbproto 1.0.7 h470a237_2 conda-forge xorg-libice 1.0.9 h470a237_4 conda-forge xorg-libsm 1.2.2 h8c8a85c_6 conda-forge xorg-libx11 1.6.6 h470a237_0 conda-forge xorg-libxau 1.0.8 h470a237_6 conda-forge xorg-libxdmcp 1.1.2 h470a237_7 conda-forge 
xorg-libxext 1.3.3 h470a237_4 conda-forge xorg-libxrender 0.9.10 h470a237_2 conda-forge xorg-renderproto 0.11.1 h470a237_2 conda-forge xorg-xextproto 7.3.0 h470a237_2 conda-forge xorg-xproto 7.0.31 h470a237_7 conda-forge xz 5.2.4 h14c3975_4 zlib 1.2.11 ha838bed_2 My desktop has an NVIDIA GeForce GTX 970 (so it is CUDA capable). Also, for some reason, as you can see here, my graphics card doesn't show; however, when using the lspci -v command, I can see my graphics card there. Don't know if that has something to do with it. Does anyone know how I can fix this?
You need to install pytorch "in one go" using https://pytorch.org/get-started/locally/ to construct the anaconda command. With the standard configuration in anaconda, you get: conda install pytorch torchvision cudatoolkit=10.2 -c pytorch (Please always check on https://pytorch.org/get-started/locally/ whether this command is still up to date.) You seem to need the right cuda version 10.2 package to be aligned with what pytorch can handle. This is what is meant by @RussellGallop's helpful message. We can see that installing pytorch and cuda separately is not recommended, and that the Anaconda installation is recommended, contrary to your answer: Anaconda is our recommended package manager since it installs all dependencies. Uninstalling and reinstalling is better than repairing. In case of problems, you had better uninstall all covered packages and apply https://pytorch.org/get-started/locally/ to get the command again, instead of trying to fix it with separate installations. Thus, if you want to uninstall, you need to use exactly the same command as for the installation, but with "conda uninstall" instead. For the example above, the uninstall command would be: conda uninstall pytorch torchvision cudatoolkit=10.2 -c pytorch This need to uninstall "in one go" is again a hint at how sensitive the pytorch installation is, and that separate installation is risky. (See How can I uninstall PyTorch?)
https://stackoverflow.com/questions/53102436/
Error in training PyTorch classifier from the 60 minute blitz in GPU
I've started to learn pytorch with their official 60 minute blitz tutorial in a jupyter lab (using their .ipynb file, link to the tutorial), and have completed it successfully until the conversion and training of the classifier using the gpu. I think that I have managed to change the device for the net, inputs and labels according to these results: net=net.to(device) net.fc1.weight.type() With output: 'torch.cuda.FloatTensor' And: inputs, labels = inputs.to(device), labels.to(device) inputs.type(),labels.type() With output: ('torch.cuda.FloatTensor', 'torch.cuda.LongTensor') After running these cells, I ran the cell for training the model, containing this code: for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') And received this error: RuntimeError Traceback (most recent call last) <ipython-input-55-fe85c778b0e6> in <module>() 10 11 # forward + backward + optimize ---> 12 outputs = net(inputs) 13 loss = criterion(outputs, labels) 14 loss.backward() ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 475 result = self._slow_forward(*input, **kwargs) 476 else: --> 477 result = self.forward(*input, **kwargs) 478 for hook in self._forward_hooks.values(): 479 hook_result = hook(self, input, result) <ipython-input-52-725d44154459> in forward(self, x) 14 15 def forward(self, x): --->16 x=self.conv1(x) 17 x = self.pool(F.relu(x)) 18 x = self.pool(F.relu(self.conv2(x))) ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 475 result = self._slow_forward(*input, **kwargs) 476 else: --> 477 result = self.forward(*input, **kwargs) 478 for hook in self._forward_hooks.values(): 479 hook_result = hook(self, input, result) ~\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input) 299 def forward(self, input): 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 303 RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight' Why did I receive this error and how can I fix it?
You also need to move your inputs and labels to GPU inside the training loop. for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # move to GPU inputs = inputs.to(device) labels = labels.to(device) ...
https://stackoverflow.com/questions/53106369/
Pytorch: is there a function similar to torch.argmax which can really keep the dimension of the original data?
For example, the code is input = torch.randn(3, 10) result = torch.argmax(input, dim=0, keepdim=True) input is tensor([[ 1.5742, 0.8183, -2.3005, -1.1650, -0.2451], [ 1.0553, 0.6021, -0.4938, -1.5379, -1.2054], [-0.1728, 0.8372, -1.9181, -0.9110, 0.2422]]) and result is tensor([[ 0, 2, 1, 2, 2]]) However, I want a result like this tensor([[ 1, 0, 0, 0, 0], [ 0, 0, 1, 0, 0], [ 0, 1, 0, 1, 1]])
Finally, I solved it. But this solution may not be efficient. Code as follows: input = torch.randn(3, 10) result = torch.argmax(input, dim=0, keepdim=True) result_0 = result == 0 result_1 = result == 1 result_2 = result == 2 result = torch.cat((result_0, result_1, result_2), 0)
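A possibly more general sketch (not from the original answer) that avoids hard-coding the number of rows, using scatter_ to build the same kind of one-hot mask:

import torch

input = torch.randn(3, 5)
idx = torch.argmax(input, dim=0, keepdim=True)        # shape (1, 5)

onehot = torch.zeros_like(input, dtype=torch.uint8)   # same shape as input
onehot.scatter_(0, idx, 1)                            # 1 at each column's argmax row
print(onehot)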
https://stackoverflow.com/questions/53116477/
Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'
Firstly I have used like 'model.cuda()' to convert model and data to cuda. But it still has such a problem. I debug every layer of the model, and weights for every module has iscuda=True. So does anyone know why there is such a problem? I have two models, one is resnet50 and another one which contains the first one as backbone. class FC_Resnet(nn.Module): def __init__(self, model, num_classes): super(FC_Resnet, self).__init__() # feature encoding self.features = nn.Sequential( model.conv1, model.bn1, model.relu, model.maxpool, model.layer1, model.layer2, model.layer3, model.layer4) # classifier num_features = model.layer4[1].conv1.in_channels self.classifier = nn.Sequential( nn.Conv2d(num_features, num_classes, kernel_size=1, bias=True)) def forward(self, x): # children=self.features.children() # for child in children: # if child.weight is not None: # print(child.weight.device) x = self.features(x) x = self.classifier(x) return x def fc_resnet50(num_classes=20, pre_trained=True): model = FC_Resnet(models.resnet50(pre_trained), num_classes) return model And another one: class PeakResponseMapping(nn.Sequential): def __init__(self, *args, **kargs): super(PeakResponseMapping, self).__init__(*args) ... def forward(self, input, class_threshold=0, peak_threshold=30, retrieval_cfg=None): assert input.dim() == 4 if self.inferencing: input.requires_grad_() class_response_maps = super(PeakResponseMapping, self).forward(input) return class_response_maps And the main is very simple: def main(): dataset = VOC(img_transform=image_transform()) dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True) model = peak_response_mapping(fc_resnet50(), win_size=3, sub_pixel_locating_factor=8, enable_peak_stimulation=True) model=model.cuda() for step, (b_x, b_y) in enumerate(dataloader): b_x.cuda() b_y.cuda() result = model.forward(b_x)
Somewhere down in the stack trace, Torch is expecting a CPU tensor (torch.FloatTensor) but is getting a GPU / CUDA tensor (torch.cuda.FloatTensor). Given a tensor tensor: tensor.to('cpu') returns the CPU version of the tensor tensor.to('cuda') returns the CUDA version of the tensor To write hardware-agnostic code: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") Then you can do: tensor.to(device) For the OP, this becomes: result = model.forward(b_x.to(device))
https://stackoverflow.com/questions/53156298/
How do you convert a .onnx to tflite?
I've exported my model to ONNX via: # Export the model torch_out = torch.onnx._export(learn.model, # model being run x, # model input (or a tuple for multiple inputs) EXPORT_PATH + "mnist.onnx", # where to save the model (can be a file or file-like object) export_params=True) # store the trained parameter weights inside the model file And now I am trying to convert the model to a Tensorflow Lite file so that I can do inference on Android. Unfortunately, PyTorch/Caffe2 support is fairly lacking or too complex for Android but Tensorflow appears much simpler. The documentation for ONNX to Tflite is pretty light on this. I've tried exporting to a Tensorflow GraphDef proto via: tf_rep.export_graph(EXPORT_PATH + 'mnist-test/mnist-tf-export.pb') And then running toco: toco \ --graph_def_file=mnist-tf-export.pb \ --input_format=TENSORFLOW_GRAPHDEF \ --output_format=TFLITE \ --inference_type=FLOAT \ --input_type=FLOAT \ --input_arrays=0 \ --output_arrays=add_10 \ --input_shapes=1,3,28,28 \ --output_file=mnist.tflite` When I do though I get the following error: File "anaconda3/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 172, in toco_convert_protos "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr)) tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info. 2018-11-06 16:28:33.864889: I tensorflow/lite/toco/import_tensorflow.cc:1268] Converting unsupported operation: PyFunc 2018-11-06 16:28:33.874130: F tensorflow/lite/toco/import_tensorflow.cc:114] Check failed: attr.value_case() == AttrValue::kType (1 vs. 6) Further, even when I run the command I don't know what to specify for the input_arrays or output_arrays since the model was originally built in PyTorch. Has anyone successfully converted their ONNX model to TFlite? Here's the ONNX file I'm trying to convert: https://drive.google.com/file/d/1sM4RpeBVqPNw1WeCROpKLdzbSJPWSK79/view?usp=sharing Extra info Python 3.6.6 :: Anaconda custom (64-bit) onnx.version = '1.3.0' tf.version = '1.13.0-dev20181106' torch.version = '1.0.0.dev20181029'
I think the ONNX file, i.e. model.onnx, that you have given is corrupted; I don't know what the issue is, but it is not doing any inference on ONNX Runtime. Now you can run PyTorch models directly on mobile phones; check out PyTorch Mobile's documentation here. This answer is for TensorFlow version 1; for TensorFlow version 2 or higher click link. The best way to convert the model from a protobuf FreezeGraph to TFLite is to use the official TensorFlow Lite converter documentation. According to TensorFlow Docs, TocoConverter has been deprecated: This class (tf.compat.v1.lite.TocoConverter) has been deprecated. Please use lite.TFLiteConverter instead. Convert from PyTorch to ONNX model The best practice to convert the model from PyTorch to ONNX is to add the following parameters to specify the names of the input and output layer of your model in the torch.onnx.export() function # Export the model from PyTorch to ONNX torch_out = torch.onnx._export(model, # model being run x, # model input (or a tuple for multiple inputs) EXPORT_PATH + "mnist.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file input_names=['main_input'], # specify the name of the input layer in the onnx model output_names=['main_output']) # specify the name of the output layer in the onnx model So in your case: Now export this model to a TensorFlow protobuf FreezeGraph using onnx-tf Please note that this method only works when tensorflow_version < 2 Convert from ONNX to TensorFlow FreezeGraph To convert the model please install onnx-tf version 1.5.0 with the below command pip install onnx-tf==1.5.0 Now to convert the .onnx model to a TensorFlow freeze graph run this command in the shell onnx-tf convert -i "mnist.onnx" -o "mnist.pb" Convert from TensorFlow FreezeGraph .pb to TFLite Now to convert this model from a .pb file to a tflite model use this code import tensorflow as tf # make a converter object from the saved tensorflow file converter = tf.lite.TFLiteConverter.from_frozen_graph('mnist.pb', #TensorFlow freezegraph .pb model file input_arrays=['main_input'], # name of input arrays as defined in torch.onnx.export function before. output_arrays=['main_output'] # name of output arrays defined in torch.onnx.export function before. ) # tell converter which type of optimization techniques to use converter.optimizations = [tf.lite.Optimize.DEFAULT] # to view the best option for optimization read documentation of tflite about optimization # go to this link https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional # convert the model tf_lite_model = converter.convert() # save the converted model open('mnist.tflite', 'wb').write(tf_lite_model) To choose which option is best for optimization for your model use case see this official guide about TensorFlow Lite optimization https://www.tensorflow.org/lite/guide/get_started#4_optimize_your_model_optional Note: You can try my Jupyter Notebook Convert ONNX model to Tensorflow Lite on Google Colaboratory link
https://stackoverflow.com/questions/53182177/
Making custom non-trivial loss function in pytorch
I've just started with pytorch and am trying to understand how to deal with custom loss functions, especially with some non-trivial ones. Problem 1. I'd like to stimulate my nn to maximize the true positive rate and at the same time minimize the false discovery rate. For example, increase the total score by +2 for a true positive, and decrease it by -5 for a false positive. def tp_fp_loss(yhat, y): total_score = 0 for i in range(y.size()): if is_tp(yhat[i],y[i]): total_score += 2 if is_fp(yhat[i],y[i]): total_score -= 5 return -total_score Problem 2. In the case when y is a list of positive and negative rewards (y = [10,-5, -40, 23, 11, -7]), stimulate the nn to maximize the sum of rewards. def max_reward_loss(yhat,y): r = torch.autograd.Variable(torch.Tensor(y[yhat >= .5]), requires_grad=True).sum() return -r Maybe I don't completely understand some autograd mechanics; the functions I implemented correctly calculate the loss, but learning with them doesn't work :( What am I doing wrong? Can anybody help me with a working solution to either of these problems?
@Shai already summed it up: Your loss function is not differentiable. One way to think about it is that your loss function should be plottable, and the "downhill" slope should "roll" toward the desired model output. In order to plot your loss function, fix y_true=1 then plot [loss(y_pred) for y_pred in np.linspace(0, 1, 101)] where loss is your loss function, and make sure your plotted loss function has the slope as desired. In your case, it sounds like you want to weight the loss more strongly when it is on the wrong side of the threshold. As long as you can plot it, and the slope is always downhill toward your target value (no flat spots or uphill slopes on the way from a valid prediction to the target value), your model should learn from it. Also note that if you're just trying to take into account some business objective which prioritizes precision over recall, you could accomplish this by training to convergence with cross entropy or some well-known loss function, and then by tuning your model threshold based on your use case. A higher threshold would normally prioritize precision, and a lower threshold would normally prioritize recall. After you've trained, you can then evaluate your model at a variety of thresholds and choose the most appropriate.
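As one concrete, hedged sketch of a differentiable surrogate for Problem 1 (only one of many possible choices): weight the per-sample binary cross entropy so that errors on negative targets, the source of false positives, cost more than gains on positive targets:

import torch
import torch.nn.functional as F

def weighted_bce(yhat, y, pos_weight=2.0, neg_weight=5.0):
    # Differentiable everywhere, unlike the +2/-5 counting loss:
    # each sample's cross entropy is rescaled according to its true label.
    w = torch.where(y > 0.5,
                    torch.full_like(y, pos_weight),
                    torch.full_like(y, neg_weight))
    return F.binary_cross_entropy(yhat, y, weight=w)

yhat = torch.rand(8, requires_grad=True)    # dummy predictions in (0, 1)
y = torch.randint(0, 2, (8,)).float()       # dummy binary targets
loss = weighted_bce(yhat, y)
loss.backward()                             # gradients flow back to yhat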
https://stackoverflow.com/questions/53215816/
CUDNN_STATUS_MAPPING_ERROR when training with pose2body
I'm trying to train https://github.com/NVIDIA/vid2vid. I'm... ...executing with pretty much the vanilla parametrization shown in the readme, I had to change the number of GPUs though and increased the number of threads for reading the dataset. Command: python train.py \ --name pose2body_256p \ --dataroot datasets/pose \ --dataset_mode pose \ --input_nc 6 \ --num_D 2 \ --resize_or_crop ScaleHeight_and_scaledCrop \ --loadSize 384 \ --fineSize 256 \ --gpu_ids 0,1 \ --batchSize 1 \ --max_frames_per_gpu 3 \ --no_first_img \ --n_frames_total 12 \ --max_t_step 4 \ --nThreads 6 ...training on the supplied example datasets. ...running a docker container built with the scripts in vid2vid/docker, e. g. with CUDA 9.0 and CUDNN 7. ...using two NVIDIA V100 GPUs. Whenever I start training the script crashes after a couple of minutes with the message RuntimeError: CUDNN_STATUS_MAPPING_ERROR. Full error message: Traceback (most recent call last): File "train.py", line 329, in <module> train() File "train.py", line 104, in train fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last = modelG(input_A, input_B, inst_A, fake_B_last) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 114, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply raise output File "/usr/local/lib/python3.5/dist-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker output = module(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/vid2vid/models/vid2vid_model_G.py", line 130, in forward fake_B, fake_B_raw, flow, weight = self.generate_frame_train(netG, real_A_all, fake_B_prev, start_gpu, is_first_frame) File "/vid2vid/models/vid2vid_model_G.py", line 175, in generate_frame_train fake_B_feat, flow_feat, fake_B_fg_feat, use_raw_only) File "/vid2vid/models/networks.py", line 171, in forward downsample = self.model_down_seg(input) + self.model_down_img(img_prev) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward input = module(input) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 301, in forward self.padding, self.dilation, self.groups) RuntimeError: CUDNN_STATUS_MAPPING_ERROR From reading the issues in the vid2vid using two V100 should work with this setup. The error also occurs if CUDA 8/CUDNN 6 are used. I checked the flags but haven't found any indication of further necessary changes to the arguments supplied to train.py. Any ideas on how to solve (or work around) this?
In case anybody deals with the same issue: training on P100 cards worked. It seems like the V100 architecture clashes with the version of pytorch used in the supplied Dockerfile at some point. Not quite a solution, but a workaround.
https://stackoverflow.com/questions/53222648/
Trouble using openCV to load a net from ONNX (python/pytorch)
I'm trying to load a trained .onnx model (from a neural-style-transfer algorithm) into cv2. I've seen that there is a cv.dnn.readNetFromONNX() function, but there is no such function in cv2. I can't seem to import or load opencv as cv, and as such can't seem to load my model in cv2. Does anyone know a solution? I've basically trained a model with https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py#L122-L150 this script, and made an export of an onnx model by adding torch.onnx.export(style_model, dummy_input, "chipsoft_mod.onnx", verbose=True) Now I want to run the trained model through the cv2 reader, but I fail spectacularly.
Update your opencv to a newer version. It should help. pip install opencv-python==4.1.0.25
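After upgrading, loading the exported model should look roughly like this (the file name is taken from the question; this has not been verified against that exact model):

import cv2

print(cv2.__version__)                               # expect a 4.x version here
net = cv2.dnn.readNetFromONNX("chipsoft_mod.onnx")   # available in cv2.dnn in newer builds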
https://stackoverflow.com/questions/53224685/
What does flatten_parameters() do?
I saw many Pytorch examples using flatten_parameters in the forward function of the RNN self.rnn.flatten_parameters() I saw this RNNBase and it is written that it Resets parameter data pointer so that they can use faster code paths What does that mean?
It may not be a full answer to your question, but if you take a look at flatten_parameters's source code, you will notice that it calls _cudnn_rnn_flatten_weight in ... NoGradGuard no_grad; torch::_cudnn_rnn_flatten_weight(...) ... which is the function that does the job. You will find that what it actually does is copy the model's weights into a vector<Tensor> (check the params_arr declaration) in: // Slice off views into weight_buf std::vector<Tensor> params_arr; size_t params_stride0; std::tie(params_arr, params_stride0) = get_parameters(handle, rnn, rnn_desc, x_desc, w_desc, weight_buf); MatrixRef<Tensor> weight{weight_arr, static_cast<size_t>(weight_stride0)}, params{params_arr, params_stride0}; and then copies the weights in // Copy weights _copyParams(weight, params); Also note that they update (or "reset", as they explicitly say in the docs) the original pointers of weight with the new pointers of params by doing an in-place operation .set_ (_ is their notation for in-place operations) in orig_param.set_(new_param.view_as(orig_param)); // Update the storage for (size_t i = 0; i < weight.size(0); i++) { for (auto orig_param_it = weight[i].begin(), new_param_it = params[i].begin(); orig_param_it != weight[i].end() && new_param_it != params[i].end(); orig_param_it++, new_param_it++) { auto orig_param = *orig_param_it, new_param = *new_param_it; orig_param.set_(new_param.view_as(orig_param)); } } And according to n2798 (a draft of C++0x), © ISO/IEC N3092, 23.3.6 Class template vector: A vector is a sequence container that supports random access iterators. In addition, it supports (amortized) constant time insert and erase operations at the end; insert and erase in the middle take linear time. Storage management is handled automatically, though hints can be given to improve efficiency. The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size(). In some situations you may see the warning UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). They explicitly advise people in code warnings to have a contiguous chunk of memory.
https://stackoverflow.com/questions/53231571/
Error in Google Colaboratory - AttributeError: module 'PIL.Image' has no attribute 'register_decoder'
I am running this code on Google Colaboratory and I am getting an error about register_decoder image_data = dset.ImageFolder(root="drive/SemanticDataset/train/", transform = transforms.Compose([ transforms.Scale(size=img_size), transforms.CenterCrop(size=(img_size,img_size*2)), transforms.ToTensor(), ])) label_data = dset.ImageFolder(root="drive/SemanticDataset/label/", transform = transforms.Compose([ transforms.Scale(size=img_size), transforms.CenterCrop(size=(img_size,img_size*2)), transforms.ToTensor(), ])) image_batch = data.DataLoader(image_data, batch_size=batch_size, shuffle=False, num_workers=2) label_batch = data.DataLoader(label_data, batch_size=batch_size, shuffle=False, num_workers=2) for i in range(epoch): for _, (image, label) in enumerate(zip(image_batch, label_batch)): optimizer.zero_grad() x = Variable(image, requires_grad=True).cuda() y = Variable(label).cuda() out = model.forward(x) loss = loss_func(out, y) loss.backward() optimizer.step() if _ % 100 == 0: print("Epoch: "+i+"| Loss: " , loss) Here is the screenshot of the error
First, check the version of pillow you have by using: import PIL print(PIL.PILLOW_VERSION) and make sure you have the newest version; the one I am using right now is 5.3.0. If you have something like 4.0.0, install a new version by using: !pip install Pillow==5.3.0 in the Colab environment. Second, restart your Google Colab environment, and check the version again; it should be updated. I had the same problem, and I spent some time trying to solve it. Note: I was using PyTorch 0.4. I hope this will solve your problem.
https://stackoverflow.com/questions/53237161/
Pytorch Chatbot Tutorial problem: How can I solve List Index Out of Range
I'm new to pytorch and have been following the many tutorials available. But when I did the chatbot tutorial, it did not work, as in the figure below. What should I do and what is causing this?
A little late, but I hit this same error at this line before on this same tutorial. I am also running on 3.6 on Windows, had no issues loading PyTorch or running anything with CUDA. The issue for me was in the data. It was because there was a blank line somewhere in the source data, so when it went to split the words in the list for that function, it threw this error - there were no words to split.
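A hedged sketch of a possible guard (the file name and the split step are illustrative; where to filter depends on where the empty string actually shows up in the tutorial's pipeline): skip blank lines before splitting so the indexing never hits an empty list:

# Illustrative only: drop empty lines from the corpus before building pairs.
with open("formatted_movie_lines.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

pairs = [line.split('\t') for line in lines]   # each non-empty line now splits safely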
https://stackoverflow.com/questions/53247475/
Python - importing fastai results in SyntaxError
I am trying to set up an ML model using fastai and have to do the following imports: import fastai.models import fastai.nlp import fastai.dataset However, the fastai imports give me the following error. Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34) [GCC 7.3.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import fastai.nlp Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/fastai/nlp.py", line 172 if os.path.isdir(path): paths=glob(f'{path}/*.*') ^ SyntaxError: invalid syntax Apparently, the character f in glob(f'{path}/*.*') is causing the error. I fixed the error by removing f, but it seems that there are lots of these errors in the fastai library. My current thought is that I am using an incorrect python version. Could anyone give me some pointers?
Strings in the shape of: f'{path}/*.*' are called f-strings and were introduced in Python3.6. That's why you get the SyntaxError - for versions lower than Python3.6 a SyntaxError will be raised, as this syntax just doesn't exist in lower versions. So obviously fast-ai is programmed for Python3.6 or higher. When you take a look at the installation issues (you have to scroll down a bit), you can see under Is My System Supported? the first point: Python: You need to have python 3.6 or higher So I'm afraid updating your python is the easiest way to solve the problem! If you like to learn more about f-strings you can take a look here: https://www.python.org/dev/peps/pep-0498/
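For reference, the f-string the traceback points at can be written in older-Python syntax like this (only relevant if you really cannot upgrade; fastai itself still requires Python 3.6 or higher):

path = "/some/dir"

# Python 3.6+ f-string, as used inside fastai:
pattern = f'{path}/*.*'

# Equivalent that also works on Python 2.7 / 3.5:
pattern = '{}/*.*'.format(path)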
https://stackoverflow.com/questions/53258219/
pytorch RuntimeError: CUDA error: device-side assert triggered
I've a notebook on google colab that fails with following error --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics) 93 exception = e ---> 94 raise e 95 finally: cb_handler.on_train_end(exception) /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics) 83 xb, yb = cb_handler.on_batch_begin(xb, yb) ---> 84 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler) 85 if cb_handler.on_batch_end(loss): break /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler) 24 if opt is not None: ---> 25 loss = cb_handler.on_backward_begin(loss) 26 loss.backward() /usr/local/lib/python3.6/dist-packages/fastai/callback.py in on_backward_begin(self, loss) 223 for cb in self.callbacks: --> 224 a = cb.on_backward_begin(**self.state_dict) 225 if a is not None: self.state_dict['last_loss'] = a /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in on_backward_begin(self, smooth_loss, **kwargs) 266 if self.pbar is not None and hasattr(self.pbar,'child'): --> 267 self.pbar.child.comment = f'{smooth_loss:.4f}' 268 /usr/local/lib/python3.6/dist-packages/torch/tensor.py in __format__(self, format_spec) 377 if self.dim() == 0: --> 378 return self.item().__format__(format_spec) 379 return object.__format__(self, format_spec) RuntimeError: CUDA error: device-side assert triggered During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) <ipython-input-33-dd390b1c8108> in <module>() ----> 1 lr_find(learn) 2 learn.recorder.plot() /usr/local/lib/python3.6/dist-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, **kwargs) 26 cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div) 27 a = int(np.ceil(num_it/len(learn.data.train_dl))) ---> 28 learn.fit(a, start_lr, callbacks=[cb], **kwargs) 29 30 def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner: /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks) 160 callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks) 161 fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics, --> 162 callbacks=self.callbacks+callbacks) 163 164 def create_opt(self, lr:Floats, wd:Floats=0.)->None: /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics) 93 exception = e 94 raise e ---> 95 finally: cb_handler.on_train_end(exception) 96 97 loss_func_name2activ = {'cross_entropy_loss': partial(F.softmax, dim=1), 'nll_loss': torch.exp, 'poisson_nll_loss': torch.exp, /usr/local/lib/python3.6/dist-packages/fastai/callback.py in on_train_end(self, exception) 254 def on_train_end(self, exception:Union[bool,Exception])->None: 255 "Handle end of training, `exception` is an `Exception` or False if no exceptions during training." --> 256 self('train_end', exception=exception) 257 258 class AverageMetric(Callback): /usr/local/lib/python3.6/dist-packages/fastai/callback.py in __call__(self, cb_name, call_mets, **kwargs) 185 "Call through to all of the `CallbakHandler` functions." 
186 if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics] --> 187 return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks] 188 189 def on_train_begin(self, epochs:int, pbar:PBar, metrics:MetricFuncList)->None: /usr/local/lib/python3.6/dist-packages/fastai/callback.py in <listcomp>(.0) 185 "Call through to all of the `CallbakHandler` functions." 186 if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics] --> 187 return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks] 188 189 def on_train_begin(self, epochs:int, pbar:PBar, metrics:MetricFuncList)->None: /usr/local/lib/python3.6/dist-packages/fastai/callbacks/lr_finder.py in on_train_end(self, **kwargs) 45 # restore the valid_dl we turned of on `__init__` 46 self.data.valid_dl = self.valid_dl ---> 47 self.learn.load('tmp') 48 if hasattr(self.learn.model, 'reset'): self.learn.model.reset() 49 print('LR Finder complete, type {learner_name}.recorder.plot() to see the graph.') /usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in load(self, name, device) 202 "Load model `name` from `self.model_dir` using `device`, defaulting to `self.data.device`." 203 if device is None: device = self.data.device --> 204 self.model.load_state_dict(torch.load(self.path/self.model_dir/f'{name}.pth', map_location=device)) 205 return self 206 /usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module) 356 f = open(f, 'rb') 357 try: --> 358 return _load(f, map_location, pickle_module) 359 finally: 360 if new_fd: /usr/local/lib/python3.6/dist-packages/torch/serialization.py in _load(f, map_location, pickle_module) 527 unpickler = pickle_module.Unpickler(f) 528 unpickler.persistent_load = persistent_load --> 529 result = unpickler.load() 530 531 deserialized_storage_keys = pickle_module.load(f) /usr/local/lib/python3.6/dist-packages/torch/serialization.py in persistent_load(saved_id) 493 if root_key not in deserialized_objects: 494 deserialized_objects[root_key] = restore_location( --> 495 data_type(size), location) 496 storage = deserialized_objects[root_key] 497 if view_metadata is not None: /usr/local/lib/python3.6/dist-packages/torch/serialization.py in restore_location(storage, location) 376 elif isinstance(map_location, torch.device): 377 def restore_location(storage, location): --> 378 return default_restore_location(storage, str(map_location)) 379 else: 380 def restore_location(storage, location): /usr/local/lib/python3.6/dist-packages/torch/serialization.py in default_restore_location(storage, location) 102 def default_restore_location(storage, location): 103 for _, _, fn in _package_registry: --> 104 result = fn(storage, location) 105 if result is not None: 106 return result /usr/local/lib/python3.6/dist-packages/torch/serialization.py in _cuda_deserialize(obj, location) 84 'to an existing device.'.format( 85 device, torch.cuda.device_count())) ---> 86 return obj.cuda(device) 87 88 /usr/local/lib/python3.6/dist-packages/torch/_utils.py in _cuda(self, device, non_blocking, **kwargs) 74 else: 75 new_type = getattr(torch.cuda, self.__class__.__name__) ---> 76 return new_type(self.size()).copy_(self, non_blocking) 77 78 RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorCopy.cpp:20 There is no information about the real cause, I tried to get the stack trace by forcing cuda to run on one gpu (as 
suggested here) using a cell like this: !export CUDA_LAUNCH_BLOCKING=1 But this does not seem to work; I still get the same error. Is there another way that works with Google Colab?
Make sure your target values range from zero to (number of classes - 1). For example, if you have 100 classes, your targets should be integers from 0 to 99.
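As a small illustrative sketch (the labels tensor here is hypothetical), you can shift and check your targets like this:
import torch

labels = torch.tensor([1, 7, 100])   # hypothetical targets that start at 1
labels = labels - 1                  # shift so they run from 0 to 99
num_classes = 100
assert int(labels.min()) >= 0 and int(labels.max()) <= num_classes - 1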
https://stackoverflow.com/questions/53268442/
When using Pytorch-GPU in Anaconda, is it not necessary to install CUDA?
I found that after installing the PyTorch 0.4 GPU version in Anaconda, you don't need to install CUDA locally to get GPU acceleration. When running code, GPU utilization can exceed 90%. Edit: I used it on Windows 10. I don't know if it works on Linux.
@talonmies Thanks for your url. It seems that pytorch don't need cuda in Windows, since its dependencies are cffi, mkl, numpy, and python. I entered this command conda search -c pytorch pytorch=0.4.0 --info in Anaconda Prompt and it says Loading channels: done pytorch 0.4.0 py35_cuda80_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py35_cuda80_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py35_cuda80_cudnn7he774522_1 build number: 1 size : 528.5 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py35_cuda80_cudnn7he774522_1.tar.bz2 md5 : 7db3971bb054079d7c7ff84b6286c58e dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.5,<3.6.0a0 pytorch 0.4.0 py35_cuda90_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py35_cuda90_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py35_cuda90_cudnn7he774522_1 build number: 1 size : 578.5 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py35_cuda90_cudnn7he774522_1.tar.bz2 md5 : 8200c9841f9cad6f2e605015812aa3f2 dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.5,<3.6.0a0 pytorch 0.4.0 py35_cuda91_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py35_cuda91_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py35_cuda91_cudnn7he774522_1 build number: 1 size : 546.1 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py35_cuda91_cudnn7he774522_1.tar.bz2 md5 : 79d99a825f66b55b1aa6f04d22d68aac dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.5,<3.6.0a0 pytorch 0.4.0 py36_cuda80_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py36_cuda80_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py36_cuda80_cudnn7he774522_1 build number: 1 size : 529.2 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py36_cuda80_cudnn7he774522_1.tar.bz2 md5 : 27d20c9869fb57ffe0d6d014cf348855 dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.6,<3.7.0a0 pytorch 0.4.0 py36_cuda90_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py36_cuda90_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py36_cuda90_cudnn7he774522_1 build number: 1 size : 577.6 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py36_cuda90_cudnn7he774522_1.tar.bz2 md5 : 138dcca8eeff1d58a8fd9b1febf702f6 dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.6,<3.7.0a0 pytorch 0.4.0 py36_cuda91_cudnn7he774522_1 ------------------------------------------ file name : pytorch-0.4.0-py36_cuda91_cudnn7he774522_1.tar.bz2 name : pytorch version : 0.4.0 build string: py36_cuda91_cudnn7he774522_1 build number: 1 size : 546.4 MB arch : x86_64 constrains : () platform : Platform.win license : BSD 3-Clause subdir : win-64 url : https://conda.anaconda.org/pytorch/win-64/pytorch-0.4.0-py36_cuda91_cudnn7he774522_1.tar.bz2 md5 : 326265665000de6f7501160b10b089c8 
dependencies: - cffi - mkl >=2018 - numpy >=1.11 - python >=3.6,<3.7.0a0
https://stackoverflow.com/questions/53292107/
Pytorch Exception in Thread: ValueError: signal number 32 out of range
I'm getting this error: Exception in Thread: ValueError: signal number 32 out of range The specific tutorial that raises an issue for me is the training a classifier (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html), the specific line is: dataiter = iter(trainloader) and the full error traceback is: Exception in thread Thread-5: Traceback (most recent call last): File "/home/chenchen/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/home/chenchen/anaconda3/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/home/chenchen/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 139, in _serve signal.pthread_sigmask(signal.SIG_BLOCK, range(1, signal.NSIG)) File "/home/chenchen/anaconda3/lib/python3.6/signal.py", line 60, in pthread_sigmask sigs_set = _signal.pthread_sigmask(how, mask) ValueError: signal number 32 out of range My operation system is Ubuntu 18.10 and my python env is Anaconda3 for python 3.6. I installed pytorch from the latest source. My cuda version is 10.0.
I faced a similar issue, and it was resolved when I set num_workers=0 in the DataLoader.
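For reference, a minimal sketch of where that argument goes in the tutorial's loader (the dataset and batch size are placeholders):
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)  # 0 disables worker processes
dataiter = iter(trainloader)
images, labels = next(dataiter)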
https://stackoverflow.com/questions/53300965/
Pytorch speed comparison - GPU slower than CPU
I was trying to find out if GPU tensor operations are actually faster than CPU ones. So, I wrote this particular code below to implement a simple 2D addition of CPU tensors and GPU cuda tensors successively to see the speed difference: import torch import time ###CPU start_time = time.time() a = torch.ones(4,4) for _ in range(1000000): a += a elapsed_time = time.time() - start_time print('CPU time = ',elapsed_time) ###GPU start_time = time.time() b = torch.ones(4,4).cuda() for _ in range(1000000): b += b elapsed_time = time.time() - start_time print('GPU time = ',elapsed_time) To my surprise, the CPU time was 0.93 sec and the GPU time was as high as 63 seconds. Am I doing the cuda tensor operation properly or is the concept of cuda tensors works faster only in very highly complex operations, like in neural networks? Note: My GPU is NVIDIA 940MX and torch.cuda.is_available() call returns True.
GPU acceleration works by heavy parallelization of computation. On a GPU you have a huge amount of cores; each of them is not very powerful, but the huge amount of cores here matters. Frameworks like PyTorch do their best to make it possible to compute as much as possible in parallel. In general matrix operations are very well suited for parallelization, but still it isn't always possible to parallelize computation! In your example you have a loop: b = torch.ones(4,4).cuda() for _ in range(1000000): b += b You have 1000000 operations, but due to the structure of the code it is impossible to parallelize much of these computations. If you think about it, to compute the next b you need to know the value of the previous (or current) b. So you have 1000000 operations, but each of these has to be computed one after another. Possible parallelization is limited to the size of your tensor. This size though is not very large in your example: torch.ones(4,4) So you can only parallelize 16 operations (additions) per iteration. As the CPU has few, but much more powerful cores, it is just much faster for the given example! But things change if you change the size of the tensor, then PyTorch is able to parallelize much more of the overall computation. I changed the iterations to 1000 (because I did not want to wait so long :), but you can put in any value you like, the relation between CPU and GPU should stay the same. Here are the results for different tensor sizes:
#torch.ones(4,4) - the size you used CPU time = 0.00926661491394043 GPU time = 0.0431208610534668
#torch.ones(40,40) - CPU gets slower, but still faster than GPU CPU time = 0.014729976654052734 GPU time = 0.04474186897277832
#torch.ones(400,400) - CPU now much slower than GPU CPU time = 0.9702610969543457 GPU time = 0.04415607452392578
#torch.ones(4000,4000) - GPU much faster than CPU CPU time = 38.088677167892456 GPU time = 0.044649362564086914
So as you see, where it is possible to parallelize stuff (here the addition of the tensor elements), the GPU becomes very powerful. GPU time is not changing at all for the given calculations, the GPU can handle much more! (as long as it doesn't run out of memory :)
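One additional caveat worth noting (my addition, not part of the answer above): CUDA kernels are launched asynchronously, so for stricter timing you should synchronize before reading the clock, roughly like this:
import time
import torch

b = torch.ones(4000, 4000).cuda()
torch.cuda.synchronize()               # wait until the tensor is actually on the GPU
start_time = time.time()
for _ in range(1000):
    b += b
torch.cuda.synchronize()               # wait for all queued GPU work before stopping the clock
print('GPU time = ', time.time() - start_time)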
https://stackoverflow.com/questions/53325418/
pytorch delete model from gpu
I want to make a cross validation in my project based on Pytorch. And I didn't find any method that pytorch provided to delete the current model and empty the memory of GPU. Could you tell that how can I do it?
Freeing memory in PyTorch works as it does with the normal Python garbage collector. This means once all references to a Python object are gone, it will be deleted. You can delete references by using the del operator: del model You have to make sure, though, that there is no reference to the respective object left, otherwise the memory won't be freed. So once you've deleted all references to your model, it should be deleted and the memory freed. If you want to learn more about memory management you can take a look here: https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management
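A minimal sketch of that pattern, including the optional step of releasing PyTorch's cached GPU memory back to the driver:
import gc
import torch

del model                    # drop the (last) reference to the model
gc.collect()                 # make sure Python's garbage collector has run
torch.cuda.empty_cache()     # optional: release cached blocks so other processes can reuse the memory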
https://stackoverflow.com/questions/53350905/
Issues loading from Drive with pytorch' datasets.DatasetFolder
The loading works great using jupyter and local files, but when I adapted to Colab, fetching data from a Drive folder, datasets.DatasetFolder always loads 9500 odd datapoints, never the full 10 000. Anyone had similar issues? train_data = datasets.DatasetFolder('/content/drive/My Drive/4 - kaggle/data', np.load, list(('npy')) ) print(train_data.__len__) Returns <bound method DatasetFolder.__len__ of Dataset DatasetFolder Number of datapoints: 9554 Root Location: /content/drive/My Drive/4 - kaggle/data Transforms (if any): None Target Transforms (if any): None> Where I would get the full 10 000 elements usually.
Loading lots of files from a single folder in Drive is likely to be slow and error-prone. You'll probably end up much happier if you either stage the data on GCS or upload an archive (.zip or .tar.gz) to Drive and copy that one file to your colab VM, unarchive it there, and then run your code over the local data.
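A rough sketch of that workflow in a Colab cell (the archive name and paths are placeholders, and this assumes Drive is already mounted at /content/drive):
# copy the single archive instead of thousands of small files
!cp "/content/drive/My Drive/4 - kaggle/data.zip" /content/
!unzip -q /content/data.zip -d /content/data

# then point the dataset at the fast local copy
train_data = datasets.DatasetFolder('/content/data', np.load, ['npy'])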
https://stackoverflow.com/questions/53363663/
Theoretical underpinning behind Hardmax operator
In the tensor flow Github repository, in the file attentionwrapper.py, hardmax operator has been defined. On the docs, it has been mentioned tf.contrib.seq2seq.hardmax I want to know what's the theoretical underpinning behind providing this functionality for hardmax operator. Prima facie google searches for past few weeks haven't led me to concrete understanding of the concept. If softmax is differentiable (soft), why would hardmax be ever used? If it can't be used in back propagation (due to non-differentiability required in gradient calculation), where else can it be used? Reinforcement learning literature talks about Soft vs Hard attention. However I couldn't see concrete examples nor explanations of where the tf.contrib.seq2seq.hardmax can be actually used in some RL model. By the looks of it, since it is mentioned in seq2seq, it should be obviously having some application in Natural Language Processing. But exactly where? There are tonnes of NLP tasks. Couldn't find any direct task SOTA algorithm which uses hardmax.
Hardmax is used when you have no choice but to make a decision nonprobabalistically. For example, when you are using a model to generate a neural architecture as in neural module networks, you have to make a discrete choice. To make this trainable (since this would be non-differentiable as you state), you can use REINFORCE (an algorithm in RL) to train via policy gradient and estimate this loss contribution via Monte Carlo sampling. Neural module networks are an NLP construct and depend on seq2seq. I'm sure there are many examples, but this is one that immediately came to mind.
https://stackoverflow.com/questions/53367724/
PyTorch element-wise product of vectors / matrices / tensors
In PyTorch, how do I get the element-wise product of two vectors / matrices / tensors? For googlers, this is product is also known as: Hadamard product Schur product Entrywise product
Given two tensors A and B you can use either:
A * B
torch.mul(A, B)
A.mul(B)
Note: for matrix multiplication, you want to use A @ B, which is equivalent to torch.matmul().
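A quick illustration that the three forms agree (and that @ is something different):
import torch

A = torch.tensor([[1., 2.], [3., 4.]])
B = torch.tensor([[10., 20.], [30., 40.]])
print(A * B)             # element-wise (Hadamard) product
print(torch.mul(A, B))   # same result
print(A.mul(B))          # same result
print(A @ B)             # different: matrix multiplication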
https://stackoverflow.com/questions/53369667/
PyTorch mapping operators to functions
What are all the PyTorch operators, and what are their function equivalents? Eg, is a @ b equivalent to a.mm(b) or a.matmul(b)? I'm after a canonical listing of operator -> function mappings. I'd be happy to be given a PyTorch documentation link as an answer - my googlefu couldn't track it down.
The Python documentation table Mapping Operators to Functions provides canonical mappings from operator -> __function__(). E.g. for matrix multiplication: a @ b maps to matmul(a, b). Elsewhere on the page, you will see the __matmul__ name as an alternate to matmul. The definitions of the PyTorch __functions__ are found either in:
The torch.Tensor module documentation
python_variable_methods.cpp
You can look up the documentation for the named functions at: https://pytorch.org/docs/stable/torch.html?#torch.<FUNCTION-NAME>
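A quick sanity check of the mapping for matrix multiplication:
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
print(torch.equal(a @ b, a.matmul(b)))          # True
print(torch.equal(a @ b, torch.matmul(a, b)))   # True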
https://stackoverflow.com/questions/53370003/
Get the data type of a PyTorch tensor
I understand that PyTorch tensors are homogenous, ie, each of the elements are of the same type. How do I find out the type of the elements in a PyTorch tensor?
There are three kinds of things:
dtype || CPU tensor || GPU tensor
torch.float32 || torch.FloatTensor || torch.cuda.FloatTensor
The first one you get with print(t.dtype) if t is your tensor; otherwise you use t.type() for the other two.
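For example:
import torch

t = torch.ones(2, 2)
print(t.dtype)    # torch.float32
print(t.type())   # torch.FloatTensor (torch.cuda.FloatTensor if t lives on the GPU)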
https://stackoverflow.com/questions/53374499/
Check if PyTorch tensors are equal within epsilon
How do I check if two PyTorch tensors are semantically equal? Given floating point errors, I want to know if the the elements differ only by a small epsilon value.
At the time of writing, this is an undocumented function in the latest stable release (0.4.1), but the documentation is in the master (unstable) branch. torch.allclose() returns a boolean indicating whether all element-wise differences are within a given margin of error. Additionally, there's the undocumented isclose(): >>> torch.isclose(torch.Tensor([1]), torch.Tensor([1.00000001])) tensor([1], dtype=torch.uint8)
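For example, with the default tolerances:
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([1.0 + 1e-6, 2.0])
print(torch.allclose(a, b))                       # True: within the default rtol/atol
print(torch.allclose(a, b, rtol=0, atol=1e-12))   # False: tolerances tightened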
https://stackoverflow.com/questions/53374928/
How does pytorch's parallel method and distributed method work?
I'm not an expert in distributed system and CUDA. But there is one really interesting feature that PyTorch support which is nn.DataParallel and nn.DistributedDataParallel. How are they actually implemented? How do they separate common embeddings and synchronize data? Here is a basic example of DataParallel. import torch.nn as nn from torch.autograd.variable import Variable import numpy as np class Model(nn.Module): def __init__(self): super().__init__( embedding=nn.Embedding(1000, 10), rnn=nn.Linear(10, 10), ) def forward(self, x): x = self.embedding(x) x = self.rnn(x) return x model = nn.DataParallel(Model()) model.forward(Variable.from_numpy(np.array([1,2,3,4,5,6], dtype=np.int64)).cuda()).cpu() PyTorch can split the input and send them to many GPUs and merge the results back. How does it manage embeddings and synchronization for a parallel model or a distributed model? I wandered around PyTorch's code but it's very hard to know how the fundamentals work.
That's a great question. The PyTorch DataParallel paradigm is actually quite simple and the implementation is open-sourced here. Note that this paradigm is not recommended today, as it bottlenecks at the master GPU and is not efficient in data transfer. This container parallelizes the application of the given :attr:module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module. As for DistributedDataParallel, that's more tricky. This is currently the more advanced approach and it is quite efficient (see here). This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. There are several approaches towards how to average the gradients from each node. I would recommend this paper to get a real sense of how things work. Generally speaking, there is a trade-off between transferring the data from one GPU to another, regarding bandwidth and speed, and we want that part to be really efficient. So one possible approach is to connect each pair of GPUs with a really fast protocol in a circle, and to pass only part of the gradients from one to another, s.t. in total, we transfer less data, more efficiently, and all the nodes get all the gradients (or their average at least). There will still be a master GPU in that situation, or at least a process, but now there is no bottleneck on any GPU; they all share the same amount of data (up to...). Now this can be further optimized if we don't wait for all the batches to finish computing and start doing a time-sharing thing where each node sends its portion when it's ready. Don't hold me to the details, but it turns out that if we don't wait for everything to end, and do the averaging as soon as we can, it might also speed up the gradient averaging. Please refer to the literature for more information about that area as it is still developing (as of today). PS 1: Usually this kind of distributed training works better on machines that are set up for that task, e.g. AWS deep learning instances that implement those protocols in HW. PS 2: Disclaimer: I really don't know what protocol the PyTorch devs chose to implement and what is chosen according to what. I work with distributed training and prefer to follow PyTorch best practices without trying to outsmart them. I recommend you do the same unless you are really into researching this area. References: [1] Distributed Training of Deep Learning Models: A Taxonomic Perspective
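To make the usage side a bit more concrete, here is a very rough sketch of a DistributedDataParallel setup (one process per GPU; the backend and init method are typical choices, the launcher is assumed to set the usual environment variables, and Model is the class from the question - this is not a complete recipe):
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

dist.init_process_group(backend='nccl', init_method='env://')  # rank/world size come from the launcher
model = Model().cuda()                     # each process builds its own replica on its GPU
model = DistributedDataParallel(model)     # gradients are averaged across processes during backward()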
https://stackoverflow.com/questions/53375422/
load test data in pytorch
All is in the title, I just want to know, how can I load my own test data (image.jpg) in pytorch in order to test my CNN.
You need to feed images to the net the same way as in training: that is, you should apply exactly the same transformations to get similar results. Assuming your net was trained using this code (or similar), you can see that an input image (for validation) undergoes the following transformations: transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), normalize, ])), Following the torchvision.transforms docs you can see that an input image goes through: Resizing to 256x256 pix Cropping 224x224 rect from the center of the image The image is converted from uint8 datatype to float in range [0, 1], and transposed to 3-by-224-by-224 array The image is normalized by subtracting mean and dividing by std. You can do all this manually to any image import numpy as np from PIL import Image pil_img = Image.open('image.jpg').resize((256, 256), Image.BILINEAR) # read and resize # center crop w, h = pil_img.size i = int(round((h - 224) / 2.)) j = int(round((w - 224) / 2.)) pil_img = pil_img.crop((j, i, j+224, i+224)) np_img = np.array(pil_img).astype(np.float32) / 255. np_img = np.transpose(np_img, (2, 0, 1)) # normalize mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] for c in range(3): np_img[c, ...] = (np_img[c, ...] - mean[c]) / std[c] # assign back into the channel, don't overwrite np_img Once you have np_img ready, convert it to a tensor and run a feed forward pass: import torch inp = torch.from_numpy(np_img)[None, ...] # note that we add a singleton leading dim for batch pred = model(inp)
https://stackoverflow.com/questions/53379887/
Wasserstein GAN critic training ambiguity
I'm running a DCGAN-based GAN, and am experimenting with WGANs, but am a bit confused about how to train the WGAN. In the official Wasserstein GAN PyTorch implementation, the discriminator/critic is said to be trained Diters (usually 5) times per each generator training. Does this mean that the critic/discriminator trains on Diters batches or the whole dataset Diters times? If I'm not mistaken, the official implementation suggests the discriminator/critic is trained on the whole dataset Diters times, but other implementations of WGAN (in PyTorch and TensorFlow etc.) do the opposite. Which is correct? The WGAN paper (to me, at least), indicates that it is Diters batches. Training on the whole dataset is obviously orders of magnitude slower. Thanks in advance!
The correct interpretation is to consider an iteration as a batch. In the original paper, for each iteration of the critic/discriminator they sample a batch of size m of the real data and a batch of size m of prior samples p(z) to work with. After the critic is trained over Diters iterations, they train the generator, which also starts by sampling a batch of prior samples of p(z). Therefore, each iteration is working on a batch. In the official implementation this is also happening. What may be confusing is that they use the variable name niter to represent the number of epochs to train the model. Although they use a different scheme to set Diters at lines 162-166: # train the discriminator Diters times if gen_iterations < 25 or gen_iterations % 500 == 0: Diters = 100 else: Diters = opt.Diters they are, as in the paper, training the critic over Diters batches.
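In pseudocode (the helper names here are made up, just to show the structure):
while gen_iterations < total_generator_iterations:
    # critic: Diters iterations, each on ONE batch
    for _ in range(Diters):
        real_batch = next(data_iterator)    # batch of size m from the real data
        z = sample_prior(m)                 # batch of size m from p(z)
        update_critic(real_batch, z)
    # generator: a single update, also on one batch of prior samples
    z = sample_prior(m)
    update_generator(z)
    gen_iterations += 1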
https://stackoverflow.com/questions/53401431/
Apply transformation for Tensor without including it in Backward
Let's say I have n layered neural network. After running l layers, I want to apply some transformation to the l^th layer output, without including that transformation in backpropagation. For e.g. : output_layer_n = self.LinearLayer(output_layer_prev) #apply some transformation to output_layer_n, but don't want to take autograd w.r.t. this transformation, basically this transformation function doesn't have any parameter output_layer_n.data = TransformationFunction(output_layer_n.data) So how should I go about implementing it? What I want is not to take gradient accounted for TransformationFunction() in my code.
If you just don't want to compute gradients for your TransformationFunction, it is easiest to turn off gradient computation for all parameters involved in this computation by setting the requires_grad flag to False. Excluding subgraphs from backward: If there’s a single input to an operation that requires gradient, its output will also require gradient. Conversely, only if all inputs don’t require gradient, the output also won’t require it. Backward computation is never performed in the subgraphs, where all Tensors didn’t require gradients. This is especially useful when you want to freeze part of your model, or you know in advance that you’re not going to use gradients w.r.t. some parameters. For example if you want to finetune a pretrained CNN, it’s enough to switch the requires_grad flags in the frozen base, and no intermediate buffers will be saved, until the computation gets to the last layer, where the affine transform will use weights that require gradient, and the output of the network will also require them. Here is a small example which would do so: import torch import torch.nn as nn # define layers normal_layer = nn.Linear(5, 5) TransformationFunction = nn.Linear(5, 5) # disable gradient computation for parameters of TransformationFunction # here weight and bias TransformationFunction.weight.requires_grad = False TransformationFunction.bias.requires_grad = False # input inp = torch.rand(1, 5) # do computation out = normal_layer(inp) out = TransformationFunction(out) # loss loss = torch.sum(out) # backward loss.backward() # gradient for l1 print('Gradients for "normal_layer"', normal_layer.weight.grad, normal_layer.bias.grad) # gradient for l2 print('Gradients for "TransformationFunction"', TransformationFunction.weight.grad, TransformationFunction.bias.grad) Output: Gradients for "normal_layer" tensor([[0.1607, 0.0215, 0.0192, 0.2595, 0.0811], [0.0788, 0.0105, 0.0094, 0.1272, 0.0398], [0.1552, 0.0207, 0.0186, 0.2507, 0.0784], [0.1541, 0.0206, 0.0184, 0.2489, 0.0778], [0.2945, 0.0393, 0.0352, 0.4756, 0.1486]]) tensor([0.2975, 0.1458, 0.2874, 0.2853, 0.5452]) Gradients for "TransformationFunction" None None I hope this is what you were looking for, if not please edit your question with more detail!
https://stackoverflow.com/questions/53424056/
How can I change the padded input size per channel in Pytorch?
I am trying to set up an image classifier using Pytorch. My sample images have 4 channels and are 28x28 pixels in size. I am trying to use the built-in torchvision.models.inception_v3() as my model. Whenever I try to run my code, I get this error: RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (3 x 3). Kernel size can't greater than actual input size at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THNN/generic/SpatialConvolutionMM.c:48 I can't find how to change the padded input size per channel or quite figure out what the error means. I figure that I must modify the padded input size per channel since I can't edit the Kernel size in the pre-made model. I have tried padding, but it didn't help. Here is a shortened part of my code that throws the error when I call train(): import torch import torchvision as tv import torch.optim as optim from torch import nn from torch.utils.data import DataLoader model = tv.models.inception_v3() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.0001, weight_decay=0) lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.9) trn_dataset = tv.datasets.ImageFolder( "D:/tests/classification_test_data/trn", transform=tv.transforms.Compose([tv.transforms.RandomRotation((0,275)), tv.transforms.RandomHorizontalFlip(), tv.transforms.ToTensor()])) trn_dataloader = DataLoader(trn_dataset, batch_size=32, num_workers=4, shuffle=True) for epoch in range(0, 10): train(trn_dataloader, model, criterion, optimizer, lr_scheduler, 6, 32) print("End of training") def train(train_loader, model, criterion, optimizer, scheduler, num_classes, batch_size): model.train() scheduler.step() for index, data in enumerate(train_loader): inputs, labels = data optimizer.zero_grad() outputs = model(inputs) outputs_flatten = flatten_outputs(outputs, num_classes) loss = criterion(outputs_flatten, labels) loss.backward() optimizer.step() def flatten_outputs(predictions, number_of_classes): logits_permuted = predictions.permute(0, 2, 3, 1) logits_permuted_cont = logits_permuted.contiguous() outputs_flatten = logits_permuted_cont.view(-1, number_of_classes) return outputs_flatten
It could be due to the following. The PyTorch documentation for the Inception_v3 model notes that the model expects input of shape Nx3x299x299. This is because the architecture contains a fully connected layer with a fixed shape. Important: In contrast to the other models, inception_v3 expects tensors with a size of N x 3 x 299 x 299, so ensure your images are sized accordingly. https://pytorch.org/docs/stable/torchvision/models.html#inception-v3
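If you want to keep using inception_v3, one possible (untested) adjustment is to resize the images in your transform pipeline, e.g. as below; note that the architecture also expects 3-channel inputs, so your 4-channel images would need converting as well, and upsampling 28x28 images this aggressively may not give good results:
trn_dataset = tv.datasets.ImageFolder(
    "D:/tests/classification_test_data/trn",
    transform=tv.transforms.Compose([
        tv.transforms.Resize((299, 299)),          # resize to what inception_v3 expects
        tv.transforms.RandomRotation((0, 275)),
        tv.transforms.RandomHorizontalFlip(),
        tv.transforms.ToTensor()]))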
https://stackoverflow.com/questions/53438146/
How to handle Multi Label DataSet from Directory for image captioning in PyTorch
I need a help in PyTorch, Regarding Dataloader, and dataset Can someone aid/guide me Here is my query : I am trying for Image Captioning using https://github.com/yunjey/pytorch-tutorial/tree/master/tutorials/03-advanced/image_captioning. Here they have used Standard COCO Dataset. I have dataset as images/ and captions/ directory . Example Directory Structure: images/T001.jpg images/T002.jpg ... ... captions/T001.txt captions/T002.txt .... .... The above is the relation. Caption file has 'n' number of captions in each separate line. I am able to create a custom Dataset class, in that the complete caption file content is being returned. But I want only one line alone gas to be returned. Any guidance/suggestion on how to achieving this. ++++++++++++++++++++++++++++++++++++++++++++++++ Here is the class that i have designed: from __future__ import print_function import torch from torchvision import datasets, models, transforms from torchvision import transforms from torch.autograd import Variable from torch.nn.utils.rnn import pack_padded_sequence import torch.optim as optim import torch.nn as nn #from torch import np import numpy as np import utils_c from data_loader_c import get_cust_data_loader from models import CNN, RNN from vocab_custom import Vocabulary, load_vocab import os class ImageCaptionDataSet(data.Dataset): def __init__(self, path, json, vocab=None, transform=None): self.vocab = vocab self.transform = transform self.img_dir_path = path self.cap_dir_path = json self.all_imgs_path = glob.glob(os.path.join(self.img_dir_path,'*.jpg')) self.all_caps_path = glob.glob(os.path.join(self.cap_dir_path,'*.txt')) pass def __getitem__(self,index): vocab = self.vocab img_path = self.all_imgs_path[index] img_base_name = os.path.basename(img_path) cap_base_name = img_base_name.replace(".jpg",".txt") cap_path = os.path.join(self.cap_dir_path,cap_base_name) caption_all_for_a_image = open(cap_path).read().split("\n") image = Image.open(img_path) image = image.convert('RGB') if self.transform != None: # apply image preprocessing image = self.transform(image) #captions_combined = [] #max_len = 0 #for caption in caption_all_for_a_image: # caption_str = str(caption).lower() # tokens = nltk.tokenize.word_tokenize(caption_str) # m = len(tokens) + 2 # if m>max_len: # max_len = m # caption = torch.Tensor([vocab(vocab.start_token())] + # [vocab(token) for token in tokens] + # [vocab(vocab.end_token())]) # captions_combined.append(caption) # #yield image, caption #return image,torch.Tensor(captions_combined) caption_str = str(caption_all_for_a_image).lower() tokens = nltk.tokenize.word_tokenize(caption_str) caption = torch.Tensor([vocab(vocab.start_token())] + [vocab(token) for token in tokens] + [vocab(vocab.end_token())]) return image,caption def __len__(self): return len(self.all_imgs_path) +++++++++++++++++++++++++++++++++
First, using str() to convert the list of captions into a single string (caption_str = str(caption_all_for_a_image)) is a bad idea: cap = ['a sentence', 'bla bla bla'] str(cap) Returns this string: "['a sentence', 'bla bla bla']" Note that [', and ', ' are part of the resulting string! You can pick one of the captions at random: import random ... cap_idx = random.randint(0, len(caption_all_for_a_image)-1) # pick one at random caption_str = caption_all_for_a_image[cap_idx].lower() # actual selection
https://stackoverflow.com/questions/53442510/
Convert PyTorch CUDA tensor to NumPy array
How do I convert a torch.Tensor (on GPU) to a numpy.ndarray (on CPU)?
Use .detach() to remove the tensor from the computation graph, .cpu() to move it to host memory, and .numpy() to convert it: tensor.detach().cpu().numpy()
https://stackoverflow.com/questions/53467215/
one of the variables needed for gradient computation has been modified by an inplace operation
Here is my LossFunction; when I use this function, it produces this error. I have tested that using nn.L1Loss() instead of my LossFunction, the network is OK. What should I do? Thanks for your help! class LossV1(nn.Module): def __init__(self,weight=1,pos_weight=1,scale_factor=2.5): super(LossV1,self).__init__() self.weight = weight self.pos_weight = pos_weight self.scale_factor = scale_factor def forward(self,pred,truth): objmask = torch.tensor(truth[:,:6,:,:],dtype=torch.float32,requires_grad=False) # boxes without objects: confidence loss * 0.4 objmask[objmask<0.65] = 0.4 # auxiliary boxes: coefficient 0.8 objmask[(objmask>0.649)*(objmask<0.949)] = 0.8 objLoss = torch.sum(objmask*self.myBCEWithLogitsLoss(pred[:,:6,:,:],truth[:,:6,:,:])) # boxes without objects: only compute the confidence loss objmask[objmask<0.41] = 0 personLoss = torch.sum(objmask*self.myBCEWithLogitsLoss(pred[:,6:12,:,:],truth[:,6:12,:,:])) carLoss = torch.sum(objmask*self.myBCEWithLogitsLoss(pred[:,12:18,:,:],truth[:,12:18,:,:])) wLoss = torch.sum(objmask*self.myL2Loss(pred[:,18:24,:,:],truth[:,18:24,:,:])) hLoss = torch.sum(objmask*self.myL2Loss(pred[:,24:,:,:],truth[:,24:,:,:])) return objLoss+personLoss+carLoss+wLoss+hLoss def myBCEWithLogitsLoss(self,x,y): # pos_weight>1 increases recall, pos_weight<1 improves precision return -self.weight*(self.pos_weight*y*torch.log(torch.sigmoid(x))+(1-y)*torch.log(1-torch.sigmoid(x))) def myL2Loss(self,x,y): return torch.pow(self.scale_factor*torch.sigmoid(x/self.scale_factor) - y,2)
I just removed objmask from the loss, computed it in my data-generation function instead, and then passed it to the LossFunction together with the truth label, and the network works. I still don't understand why PyTorch computes a gradient there even though I already set requires_grad=False.
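A minimal sketch of what that workaround can look like (building the mask outside the autograd graph; the slicing follows the question's code, and this is only one way to express it):
with torch.no_grad():                                   # nothing in here is tracked by autograd
    objmask = truth[:, :6, :, :].clone().float()
    objmask[objmask < 0.65] = 0.4
    objmask[(objmask > 0.649) & (objmask < 0.949)] = 0.8
# then pass objmask into the loss function alongside pred and truth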
https://stackoverflow.com/questions/53467460/
Anaconda does not allow Torchvision to be installed
I tried the following commands conda create -n torch_env -c pytorch pytorch torchvision conda install -c soumith/label/pytorch torchvision conda install -c soumith torchvision provided by Anaconda, but none of them works. Please help me, I am stuck! Error message: Solving environment: failed PackagesNotFoundError: The following packages are not available from current channels: torchvision Current channels: https://conda.anaconda.org/pytorch/win-64 https://conda.anaconda.org/pytorch/noarch https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/free/win-64 https://repo.anaconda.com/pkgs/free/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/pro/win-64 https://repo.anaconda.com/pkgs/pro/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page.
I encountered the same problem. https://github.com/pytorch/pytorch/issues/12210 Then I found that torchvision should be installed together with pytorch in anaconda. conda install -c pytorch torchvision
https://stackoverflow.com/questions/53471393/
PyTorch: get input layer size
I want to programatically find the size of my input layer. If my first layer is called fc1, how do I find out its input?
Assuming your model is called model, this will give the number of input features of the fc1 layer: model.fc1.in_features This is useful inside the .forward() method: def forward(self, x): x = x.view(-1, self.fc1.in_features) # resize the input to match the input layer ...
https://stackoverflow.com/questions/53475580/
Increasingly large, positive WGAN-GP loss
I'm investigating the use of a Wasserstein GAN with gradient penalty in PyTorch, but consistently get large, positive generator losses that increase over epochs. I'm heavily borrowing from Caogang's implementation, but am using the discriminator and generator losses used in this implementation because I get Invalid gradient at index 0 - expected shape[] but got [1] if I try to call .backward() with the one and mone args used in the Caogang implementation. I'm training on a augmented WikiArt dataset (>400k 64x64 images) and CIFAR-10, and have gotten a normal WGAN (with weight clipping to work) [i.e. it produces passable images after 25 epochs], despite the fact that the D and G losses both hover around 3 [I calculate them using torch.mean(D_real) etc.] for all epochs. However, in the WGAN-GP version, the generator loss increases dramatically on both the WikiArt and CIFAR-10 datasets, and completely fails to generate anything other than noise on WikiArt. Here's an example of the loss after 25 epochs on CIFAR-10: I don't use any tricks like one-sided label smoothing, and I train with the default learning rate of 0.001, the Adam optimizer and I train the discriminator 5 times for every generator update. Why does this crazy loss behaviour happen, and why does the normal weight-clipping WGAN still 'work' on WikiArt but WGANGP completely fail? This happens irrespective of the structure, whether both G and D are DCGANs or when using this modified DCGAN, the Creative Adversarial Network, which requires that D be able to classify images and G generate ambiguous images. Below is the relevant part of my current trainmethod: self.generator = Can64Generator(self.z_noise, self.channels, self.num_gen_filters).to(self.device) self.discriminator =WCan64Discriminator(self.channels,self.y_dim, self.num_disc_filters).to(self.device) style_criterion = nn.CrossEntropyLoss() self.disc_optimizer = optim.Adam(self.discriminator.parameters(), lr=self.lr, betas=(self.beta1, 0.9)) self.gen_optimizer = optim.Adam(self.generator.parameters(), lr=self.lr, betas=(self.beta1, 0.9)) while i < len(dataloader): j = 0 disc_loss_epoch = [] gen_loss_epoch = [] if self.type == "can": disc_class_loss_epoch = [] gen_class_loss_epoch = [] if self.gradient_penalty == False: # critic training methodology in official WGAN implementation if gen_iterations < 25 or (gen_iterations % 500 == 0): disc_iters = 100 else: disc_iters = self.disc_iterations while j < disc_iters and (i < len(dataloader)): # if using wgan with weight clipping if self.gradient_penalty == False: # Train Discriminator for param in self.discriminator.parameters(): param.data.clamp_(self.lower_clamp,self.upper_clamp) for param in self.discriminator.parameters(): param.requires_grad_(True) j+=1 i+=1 data = data_iterator.next() self.discriminator.zero_grad() real_images, image_labels = data # image labels are the the image's classes (e.g. 
Impressionism) real_images = real_images.to(self.device) batch_size = real_images.size(0) real_image_labels = torch.LongTensor(batch_size).to(self.device) real_image_labels.copy_(image_labels) labels = torch.full((batch_size,),real_label,device=self.device) if self.type == 'can': predicted_output_real, predicted_styles_real = self.discriminator(real_images.detach()) predicted_styles_real = predicted_styles_real.to(self.device) disc_class_loss = style_criterion(predicted_styles_real,real_image_labels) disc_class_loss.backward(retain_graph=True) else: predicted_output_real = self.discriminator(real_images.detach()) disc_loss_real = -torch.mean(predicted_output_real) # fake noise = torch.randn(batch_size,self.z_noise,1,1,device=self.device) with torch.no_grad(): noise_g = noise.detach() fake_images = self.generator(noise_g) labels.fill_(fake_label) if self.type == 'can': predicted_output_fake, predicted_styles_fake = self.discriminator(fake_images) else: predicted_output_fake = self.discriminator(fake_images) disc_gen_z_1 = predicted_output_fake.mean().item() disc_loss_fake = torch.mean(predicted_output_fake) #via https://github.com/znxlwm/pytorch-generative-model-collections/blob/master/WGAN_GP.py if self.gradient_penalty: # gradient penalty alpha = torch.rand((real_images.size()[0], 1, 1, 1)).to(self.device) x_hat = alpha * real_images.data + (1 - alpha) * fake_images.data x_hat.requires_grad_(True) if self.type == 'can': pred_hat, _ = self.discriminator(x_hat) else: pred_hat = self.discriminator(x_hat) gradients = grad(outputs=pred_hat, inputs=x_hat, grad_outputs=torch.ones(pred_hat.size()).to(self.device), create_graph=True, retain_graph=True, only_inputs=True)[0] gradient_penalty = lambda_ * ((gradients.view(gradients.size()[0], -1).norm(2, 1) - 1) ** 2).mean() disc_loss = disc_loss_fake + disc_loss_real + gradient_penalty else: disc_loss = disc_loss_fake + disc_loss_real if self.type == 'can': disc_loss += disc_class_loss.mean() disc_x = disc_loss.mean().item() disc_loss.backward(retain_graph=True) self.disc_optimizer.step() # train generator for param in self.discriminator.parameters(): param.requires_grad_(False) self.generator.zero_grad() labels.fill_(real_label) if self.type == 'can': predicted_output_fake, predicted_styles_fake = self.discriminator(fake_images) predicted_styles_fake = predicted_styles_fake.to(self.device) else: predicted_output_fake = self.discriminator(fake_images) gen_loss = -torch.mean(predicted_output_fake) disc_gen_z_2 = gen_loss.mean().item() if self.type == 'can': fake_batch_labels = 1.0/self.y_dim * torch.ones_like(predicted_styles_fake) fake_batch_labels = torch.mean(fake_batch_labels,1).long().to(self.device) gen_class_loss = style_criterion(predicted_styles_fake,fake_batch_labels) gen_class_loss.backward(retain_graph=True) gen_loss += gen_class_loss.mean() gen_loss.backward() gen_iterations += 1 This is the code for the (DCGAN) generator: class Can64Generator(nn.Module): def __init__(self, z_noise, channels, num_gen_filters): super(Can64Generator,self).__init__() self.ngpu = 1 self.main = nn.Sequential( nn.ConvTranspose2d(z_noise, num_gen_filters * 16, 4, 1, 0, bias=False), nn.BatchNorm2d(num_gen_filters * 16), nn.ReLU(True), nn.ConvTranspose2d(num_gen_filters * 16, num_gen_filters * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(num_gen_filters * 4), nn.ReLU(True), nn.ConvTranspose2d(num_gen_filters * 4, num_gen_filters * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(num_gen_filters * 2), nn.ReLU(True), nn.ConvTranspose2d(num_gen_filters * 2, num_gen_filters, 4, 2, 
1, bias=False), nn.BatchNorm2d(num_gen_filters), nn.ReLU(True), nn.ConvTranspose2d(num_gen_filters, 3, 4, 2, 1, bias=False), nn.Tanh() ) def forward(self, inp): output = self.main(inp) return output And this is the (current) CAN discriminator, which has extra layers for style (image class) classification): class Can64Discriminator(nn.Module): def __init__(self, channels,y_dim, num_disc_filters): super(Can64Discriminator, self).__init__() self.ngpu = 1 self.conv = nn.Sequential( nn.Conv2d(channels, num_disc_filters // 2, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(num_disc_filters // 2, num_disc_filters, 4, 2, 1, bias=False), nn.BatchNorm2d(num_disc_filters), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(num_disc_filters, num_disc_filters * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(num_disc_filters * 2), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(num_disc_filters * 2, num_disc_filters * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(num_disc_filters * 4), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(num_disc_filters * 4, num_disc_filters * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(num_disc_filters * 8), nn.LeakyReLU(0.2, inplace=True), ) # was this #self.final_conv = nn.Conv2d(num_disc_filters * 8, num_disc_filters * 8, 4, 2, 1, bias=False) self.real_fake_head = nn.Linear(num_disc_filters * 8, 1) # no bn and lrelu needed self.sig = nn.Sigmoid() self.fc = nn.Sequential() self.fc.add_module("linear_layer{0}".format(num_disc_filters*16),nn.Linear(num_disc_filters*8,num_disc_filters*16)) self.fc.add_module("linear_layer{0}".format(num_disc_filters*8),nn.Linear(num_disc_filters*16,num_disc_filters*8)) self.fc.add_module("linear_layer{0}".format(num_disc_filters),nn.Linear(num_disc_filters*8,y_dim)) self.fc.add_module('softmax',nn.Softmax(dim=1)) def forward(self, inp): x = self.conv(inp) x = x.view(x.size(0),-1) real_out = self.sig(self.real_fake_head(x)) real_out = real_out.view(-1,1).squeeze(1) style = self.fc(x) #style = torch.mean(style,1) # CrossEntropyLoss requires input be (N,C) return real_out,style The only differences between the WGANGP version and the WGAN version of my GAN is the WGAN version uses RMSprop with lr=0.00005 and clips the weights of the discriminator, as per the WGAN paper. What could be causing this? I'd like to make as minimal change as possible, as I want to compare loss functions alone. The same problem is encountered even when using an unmodified DCGAN discriminator on CIFAR-10. Am I encountering this perhaps because I am only training currently for 25 epochs, or is there another reason? Interestingly, my GAN also completely fails to generate anything other than noise when using LSGAN (nn.MSELoss()). Thanks in advance!
Batch Normalization in the discriminator breaks Wasserstein GANs with gradient penalty. The authors themselves advocate the usage of layer normalization instead, and this is clearly written in bold in their paper (https://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf). It is hard to say if there are other bugs in your code, but I urge you to thoroughly read the DCGAN and the Wasserstein GAN paper and really take notes on the hyperparameters. Getting them wrong really destroys the performance of the GAN and doing a hyperparameter search gets expensive quite quickly. By the way, transposed convolutions produce checkerboard artifacts in your output images. Use image resizing instead. For an in-depth explanation of that phenomenon I can recommend the following resource (https://distill.pub/2016/deconv-checkerboard/).
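Purely as an illustration of the substitution (not the asker's exact architecture), a critic block without batch normalization could look like the sketch below; InstanceNorm2d is used here as a readily available stand-in, while the paper itself recommends layer normalization:
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),
    nn.InstanceNorm2d(128, affine=True),   # no cross-sample batch statistics in the critic
    nn.LeakyReLU(0.2, inplace=True),
)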
https://stackoverflow.com/questions/53479523/
Pytorch RuntimeError: CUDA error: out of memory at loss.backward() , No error when using CPU
I'm training a Fully convolutional network (FCN32) for semantic segmentation on Tesla K80 with more than 11G memory. The input image is pretty large: 352x1216. Network structure is shown below. I used batch_size=1, but still encounter the out_of_memory error. Criterion is nn.BCEWithLogitsLoss() The network works fine when I run on CPU. Layer (type) Output Shape # Param Conv2d-1 [-1, 64, 352, 1216] 1,792 Conv2d-2 [-1, 64, 352, 1216] 36,928 MaxPool2d-3 [-1, 64, 176, 608] 0 Conv2d-4 [-1, 128, 176, 608] 73,856 Conv2d-5 [-1, 128, 176, 608] 147,584 MaxPool2d-6 [-1, 128, 88, 304] 0 Conv2d-7 [-1, 256, 88, 304] 295,168 Conv2d-8 [-1, 256, 88, 304] 590,080 Conv2d-9 [-1, 256, 88, 304] 590,080 MaxPool2d-10 [-1, 256, 44, 152] 0 Conv2d-11 [-1, 512, 44, 152] 1,180,160 Conv2d-12 [-1, 512, 44, 152] 2,359,808 Conv2d-13 [-1, 512, 44, 152] 2,359,808 MaxPool2d-14 [-1, 512, 22, 76] 0 Conv2d-15 [-1, 512, 22, 76] 2,359,808 Conv2d-16 [-1, 512, 22, 76] 2,359,808 Conv2d-17 [-1, 512, 22, 76] 2,359,808 MaxPool2d-18 [-1, 512, 11, 38] 0 Conv2d-19 [-1, 4096, 11, 38] 102,764,544 Conv2d-20 [-1, 4096, 11, 38] 16,781,312 Conv2d-21 [-1, 1, 11, 38] 4,097 ConvTranspose2d-22 [-1, 1, 352, 1216] 4,096 Error message: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () 36 print (loss) 37 #torch.cuda.empty_cache() ---> 38 loss.backward() 39 optimizer.step() 40 /anaconda/envs/py35/lib/python3.5/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 91 products. Defaults to False. 92 """ ---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph) 94 95 def register_hook(self, hook): /anaconda/envs/py35/lib/python3.5/site-packages/torch/autograd/init.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 88 Variable._execution_engine.run_backward( 89 tensors, grad_tensors, retain_graph, create_graph, ---> 90 allow_unreachable=True) # allow_unreachable flag 91 92 RuntimeError: CUDA error: out of memory
Usually this happens because of limited memory on your GPU. If you have more powerful GPUs, your problem could be solved (as you mentioned in your answer). But if you do not, you can scale your images down to around 256 x N sizes. That is also good practice for performance's sake.
https://stackoverflow.com/questions/53494498/
Gradient is none in pytorch when it shouldn't
I am trying to get/trace the gradient of a variable using pytorch, where I have that variable, pass it to a first function that looks for some minimum value of some other variable, then the output of the first function is inputted to a second function, and the whole thing repeats multiple times. Here is my code: import torch def myFirstFunction(parameter_current_here): optimalValue = 100000000000000 Optimal = 100000000000000 for j in range(2, 10): i = torch.ones(1, requires_grad=True)*j with torch.enable_grad(): optimalValueNow = i*parameter_current_here.sum() if (optimalValueNow < optimalValue): optimalValue = optimalValueNow Optimal = i return optimalValue, Optimal def mySecondFunction(Current): with torch.enable_grad(): y = (20*Current)/2 + (Current**2)/10 return y counter = 0 while counter < 5: parameter_current = torch.randn(2, 2, requires_grad=True) outputMyFirstFunction = myFirstFunction(parameter_current) outputmySecondFunction = mySecondFunction(outputMyFirstFunction[1]) outputmySecondFunction.backward() print("outputMyFirstFunction after backward:", outputMyFirstFunction) print("outputmySecondFunction after backward:", outputmySecondFunction) print("parameter_current Gradient after backward:", parameter_current.grad) counter = counter + 1 The parameter_current.grad is none for all iterations when it obviously shouldn't be none. What am I doing wrong? And how can I fix it? Your help on this would be highly appreciated. Thanks a lot! Aly
I have a similar experience with this. Reference: https://pytorch.org/docs/stable/tensors.html For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None. Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad(). Example: >>> a = torch.tensor([[1,1],[2,2]], dtype=torch.float, requires_grad=True) >>> a.is_leaf True >>> b = a * a >>> b.is_leaf False >>> c = b.mean() >>> c.backward() >>> print(c.grad) None >>> print(b.grad) None >>> print(a.grad) tensor([[0.5000, 0.5000], [1.0000, 1.0000]]) >>> b = a * a >>> c = b.mean() >>> b.retain_grad() >>> c.retain_grad() >>> c.backward() >>> print(a.grad) tensor([[1., 1.], [2., 2.]]) >>> print(b.grad) tensor([[0.2500, 0.2500], [0.2500, 0.2500]]) >>> print(c.grad) tensor(1.)
https://stackoverflow.com/questions/53507346/
Concatenate Two Tensors in Pytorch
How do I pad a tensor of shape [71 32 1] with zero vectors to make it [100 32 1]? RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 2. Got 32 and 71 in dimension 0 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:87 I tried by concatenating a padding vector of zeros of shape [29 32 1]. I get the error above. I try with a padding vector of zeros of shape [29 32 1], I still get an error.
In order to help you better, you need to post the code that caused the error, without it we are just guessing here... Guessing from the error message you got: 1. Sizes of tensors must match except in dimension 2 pytorch tries to concat along the 2nd dimension, whereas you try to concat along the first. 2. Got 32 and 71 in dimension 0 It seems like the dimensions of the tensor you want to concat are not as you expect, you have one with size (72, ...) while the other is (32, ...). You need to check this as well. Working code Here's an example of concat import torch x = torch.rand((71, 32, 1)) # x.shape = torch.Size([71, 32, 1]) px = torch.cat((torch.zeros(29, 32, 1, dtype=x.dtype, device=x.device), x), dim=0) # px.shape = torch.Size([100, 32, 1]) Alternatively, you can use functional.pad: from torch.nn import functional as F px = F.pad(x, (0, 0, 0, 0, 29, 0))
https://stackoverflow.com/questions/53512281/
RuntimeError: reduce failed to synchronize: device-side assert triggered
File "/home/username/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 433, in forward reduce=self.reduce) File "/home/username/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1483, in binary_cross_entropy return torch._C._nn.binary_cross_entropy(input, target, weight, size_average, reduce) RuntimeError: reduce failed to synchronize: device-side assert triggered
When using CUDA you may get this generic error, which is not very helpful. Try switching to the CPU device instead, device = torch.device("cpu"), to see the actual error stack trace. In my case, the issue was that binary cross entropy expects input values between 0 and 1, but I was sending in values between -1 and 1. Applying a sigmoid to the output resolved the issue.
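A small sketch of the two usual fixes (logits and targets are placeholders):
import torch
import torch.nn as nn

logits = torch.randn(8, 1)    # raw network outputs, can be any real value
targets = torch.rand(8, 1)    # must be in [0, 1] for BCE

# option 1: squash the outputs into [0, 1] before BCELoss
loss = nn.BCELoss()(torch.sigmoid(logits), targets)

# option 2: let the loss apply the sigmoid internally (numerically more stable)
loss = nn.BCEWithLogitsLoss()(logits, targets)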
https://stackoverflow.com/questions/53530810/
Is there any way to copy all parameters of one Pytorch model to another specially Batch Normalization mean and std?
I have found many correct ways online to copy one pytorch model parameters to another but somehow the copy-paste operation always misses the batch normalization parameters. Everything works fine as long as I only use modules such as conv2d, linear, drop out, max pool etc in my model. But as soon as I add Batch normalization in pytorch model, the below-given script stop working and accuracy at test time is different : net = model() copy_net = model() for param in net.module.parameters(): copy_param.append(param.clone().detach()) count = 0 for param in copy_net.module.parameters(): param.data = copy_param[count] param.requires_grad = False count = count +1 Can anybody give me a possible solution to copying batch normalization also ?
net.load_state_dict(copy_net.state_dict()) should work. As per @dxtx, in pytorch's philosophy the state dict should cover all the state in a 'module'; e.g. in a batch norm module the running mean and var, if I remember correctly, are part of the state dict. But if you wrote a module like batch norm yourself, you would have to override the 'state_dict' method accordingly.
https://stackoverflow.com/questions/53568501/
Should I run a separate thread to save the model in Pytorch
Consider the following snippet from PyTorch Imagenet Example (Please notice the "#AREA OF INTEREST"): try: for epoch in range(1, args.epochs+1): epoch_start_time = time.time() train() val_loss = evaluate(val_data) # Save the model if the validation loss is the best we've seen so far. if not best_val_loss or val_loss < best_val_loss: # AREA OF INTEREST ########## with open(args.save, 'wb') as f: torch.save(model, f) ############################# best_val_loss = val_loss else: # Anneal the learning rate if no improvement has been seen in the validation dataset. lr /= 4.0 The question is: should I run a separate thread that saves the model in order to save time, or am I overkilling the process? i.e., it does not worth running a separate thread? I've checked the doc of torch.save but I did not find what I want.
It depends on your model size. If you have slow IO and a big model it may take time. But the usual filesystem cache is big enough to store a whole model.
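For completeness, a rough sketch (not from the original answer) of saving in a background thread; the CPU snapshot is taken first so later optimizer steps don't race with the file write. Whether this is worth the complexity depends on the model size and IO speed, as noted above.

import threading
import torch

def async_save(model, path):
    # snapshot the weights on CPU so the training loop can keep updating the live model
    state = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}
    t = threading.Thread(target=torch.save, args=(state, path))
    t.start()
    return t  # keep a handle so you can join() before exiting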
https://stackoverflow.com/questions/53569555/
Documentation for PyTorch .to('cpu') or .to('cuda')
I've searched through the PyTorch documentation, but can't find anything for .to() which moves a tensor to CPU or CUDA memory. I remember seeing somewhere that calling to() on a nn.Module is an in-place operation, but not so on a tensor. Is there an in-place version for Tensors? Where do I find the documentation for to() for both nn.Module and Tensor (and possibly elsewhere)?
You already found the documentation! great. .to is not an in-place operation for tensors. However, if no movement is required it returns the same tensor. In [10]: a = torch.rand(10) In [11]: b = a.to(torch.device("cuda")) In [12]: b is a Out[12]: False In [18]: c = b.to(torch.device("cuda")) In [19]: c is b Out[19]: True Since b is already on gpu and hence no change is done and c is b results in True. However, for models, it is an in-place operation which also returns a model. In [8]: import torch In [9]: model = torch.nn.Sequential (torch.nn.Linear(10,10)) In [10]: model_new = model.to(torch.device("cuda")) In [11]: model_new is model Out[11]: True It makes sense to keep it in-place for models as parameters of the model need to be moved to another device and not model object. For tensor, it seems new object is created.
https://stackoverflow.com/questions/53570334/
Calculate the output size in convolution layer
How do I calculate the output size in a convolution layer? For example, I have a 2D convolution layer that takes a 3x128x128 input and has 40 filters of size 5x5.
You can use this formula: [(W−K+2P)/S]+1. W is the input size - in your case 128. K is the kernel size - in your case 5. P is the padding - in your case 0, I believe. S is the stride - which you have not provided. So, we plug into the formula: Output_Shape = (128-5+0)/1+1 = 124, giving an output of (124, 124, 40). NOTE: Stride defaults to 1 if not provided and the 40 in (124, 124, 40) is the number of filters provided by the user.
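A quick sanity check of the formula with PyTorch itself (assuming stride 1 and no padding; note that PyTorch reports shapes channel-first):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=40, kernel_size=5)  # stride=1, padding=0 by default
x = torch.randn(1, 3, 128, 128)
print(conv(x).shape)  # torch.Size([1, 40, 124, 124]) -> (128 - 5 + 0)/1 + 1 = 124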
https://stackoverflow.com/questions/53580088/
How does pytorch calculate matrix pairwise distance? Why isn't the 'self' distance zero?
If this is a naive question, please forgive me, my test code like this: import torch from torch.nn.modules.distance import PairwiseDistance list_1 = [[1., 1.,],[1., 1.]] list_2 = [[1., 1.,],[2., 1.]] mtrxA=torch.tensor(list_1) mtrxB=torch.tensor(list_2) print "A-B distance :",PairwiseDistance(2).forward(mtrxA, mtrxB) print "A 'self' distance:",PairwiseDistance(2).forward(mtrxA, mtrxA) print "B 'self' distance:",PairwiseDistance(2).forward(mtrxB, mtrxB) Result: A-B distance : tensor([1.4142e-06, 1.0000e+00]) A 'self' distance: tensor([1.4142e-06, 1.4142e-06]) B 'self' distance: tensor([1.4142e-06, 1.4142e-06]) Questions are: How does pytorch calculate pairwise distance? Is it to calculate row vectors distance? Why isn't 'self' distance 0? Update After changing list_1 and list_2 to this: list_1 = [[1., 1.,1.,],[1., 1.,1.,]] list_2 = [[1., 1.,1.,],[2., 1.,1.,]] Result becomes: A-B distance : tensor([1.7321e-06, 1.0000e+00]) A 'self' distance: tensor([1.7321e-06, 1.7321e-06]) B 'self' distance: tensor([1.7321e-06, 1.7321e-06])
Looking at the documentation of nn.PairWiseDistance, pytorch expects two 2D tensors of N vectors in D dimensions, and computes the distances between the N pairs. Why "self" distance is not zero - probably because of floating point precision and because of eps = 1e-6.
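A small check that the eps term accounts for the observed values: PairwiseDistance effectively takes the p-norm of (x1 - x2 + eps), so for identical rows of dimension D the result is sqrt(D) * 1e-6, which matches both outputs above.

import torch

print(torch.full((2,), 1e-6).norm())  # tensor(1.4142e-06), the 2-dimensional case
print(torch.full((3,), 1e-6).norm())  # tensor(1.7321e-06), the 3-dimensional case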
https://stackoverflow.com/questions/53607710/
Loss Function & Its Inputs For Binary Classification PyTorch
I'm trying to write a neural Network for binary classification in PyTorch and I'm confused about the loss function. I see that BCELoss is a common function specifically geared for binary classification. I also see that an output layer of N outputs for N possible classes is standard for general classification. However, for binary classification it seems like it could be either 1 or 2 outputs. So, should I have 2 outputs (1 for each label) and then convert my 0/1 training labels into [1,0] and [0,1] arrays, or use something like a sigmoid for a single-variable output? Here are the relevant snippets of code so you can see: self.outputs = nn.Linear(NETWORK_WIDTH, 2) # 1 or 2 dimensions? def forward(self, x): # other layers omitted x = self.outputs(x) return F.log_softmax(x) # <<< softmax over multiple vars, sigmoid over one, or other? criterion = nn.BCELoss() # <<< Is this the right function? net_out = net(data) loss = criterion(net_out, target) # <<< Should target be an integer label or 1-hot vector? Thanks in advance.
For binary outputs you can use 1 output unit, so then: self.outputs = nn.Linear(NETWORK_WIDTH, 1) Then you use sigmoid activation to map the values of your output unit to a range between 0 and 1 (of course you need to arrange your training data this way too): def forward(self, x): # other layers omitted x = self.outputs(x) return torch.sigmoid(x) Finally you can use the torch.nn.BCELoss: criterion = nn.BCELoss() net_out = net(data) loss = criterion(net_out, target) This should work fine for you. You can also use torch.nn.BCEWithLogitsLoss; this loss function already includes the sigmoid function, so you could leave it out in your forward. If you want to use 2 output units, this is also possible. But then you need to use torch.nn.CrossEntropyLoss instead of BCELoss. The Softmax activation is already included in this loss function. Edit: I just want to emphasize that there is a real difference in doing so. Using 2 output units gives you twice as many weights compared to using 1 output unit. So these two alternatives are not equivalent.
https://stackoverflow.com/questions/53628622/
Pytorch Adam optimizer's awkward behavior? better with restart?
I'm trying to train a CNN text classifier with Pytorch. I'm using the Adam optimizer like this. optimizer = torch.optim.Adam(CNN_Text.parameters(), lr=args.lr) I figured out that the optimizer converges really fast, and then it keeps on slowly dropping on accuracy. (the validation loss decreases a lot in 1-2 minutes, then it keeps on increasing slowly) So, I implemented learning-rate decay, If curr_loss > val_loss: prev_lr = param_group['lr'] param_group['lr'] = prev_lr/10 I found out that it didn't really help a lot. But if I manually save the model, load it, and run the training with decreased learning rate, it really gets way better performance! This gets me in hard time because I need to keep on watching the gradient descent and manually change the options. I tried SGD and other optimizers because I thought this was Adam's problem, but I couldn't find out a good way. Can anyone help me with it?
What is param_group? With that code snippet it looks like a variable not associated with the optimizer in any way. What you need to modify is the 'lr' entry of each element of optimizer.param_groups, which is what ADAM actually looks at. Either way, unless you have a good reason to hand-roll it yourself, I suggest you use the LR scheduler provided with PyTorch. And if you do need to reimplement it, check out its code and take inspiration from there.
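As a sketch of the built-in route, ReduceLROnPlateau does roughly what the question describes (drop the learning rate when the validation loss stops improving); names like CNN_Text, args, train_one_epoch and validate are placeholders taken from or added to the question's setup:

import torch

optimizer = torch.optim.Adam(CNN_Text.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=2)

for epoch in range(num_epochs):
    train_one_epoch()         # placeholder for your training loop
    val_loss = validate()     # placeholder for your validation pass
    scheduler.step(val_loss)  # lowers the lr of all param groups when val_loss plateaus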
https://stackoverflow.com/questions/53644632/
How to create Datasets Like MNIST in Pytorch?
I have looked Pytorch source code of MNIST dataset but it seems to read numpy array directly from binaries. How can I just create train_data and train_labels like it? I have already prepared images and txt with labels. I have learned how to read image and label and write get_item and len, what really confused me is how to make train_data and train_labels, which is torch.Tensor. I tried to arrange them into python lists and convert to torch.Tensor but failed: for index in range(0,len(self.files)): fn, label = self.files[index] img = self.loader(fn) if self.transform is not None: img = self.transform(img) train_data.append(img) self.train_data = torch.tensor(train_data) ValueError: only one element tensors can be converted to Python scalars
There are two ways to go. First, the manual way. Torchvision.datasets states the following: datasets are subclasses of torch.utils.data.Dataset i.e, they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader which can load multiple samples parallelly using torch.multiprocessing workers. So you can just implement your own class which scans for all the images and labels, keeps a list of their paths (so that you don't have to keep them in RAM) and has a __getitem__ method which, given index i, reads the i-th file and its label and returns them. This minimal interface is enough to work with the parallel dataloader in torch.utils.data. Secondly, if your data directory can be rearranged into either of those structures, you can use the DatasetFolder and ImageFolder pre-built loaders. This will save you some coding and automatically provide support for data augmentation routines from torchvision.transforms.
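A minimal sketch of the manual option, reusing the files and transform names from the question (files is assumed to be a list of (path, label) pairs and transform a torchvision transform):

import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class MyImageDataset(Dataset):
    def __init__(self, files, transform=None):
        self.files = files          # list of (path, label) pairs, only paths kept in RAM
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        path, label = self.files[index]
        img = Image.open(path).convert('RGB')  # read lazily, one file at a time
        if self.transform is not None:
            img = self.transform(img)
        return img, label

# train_loader = DataLoader(MyImageDataset(files, transform), batch_size=64, shuffle=True)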
https://stackoverflow.com/questions/53665225/
Load pytorch model from 0.4.1 to 0.4.0?
I trained a DENSENET161 model using pytorch 0.4.1 (GPU) and on testing environment I have to load it in pytorch version 0.4.0 (CPU). I am already using model.cpu() but when I am loading static dictionary model.load_state_dict(checkpoint['state_dict']) I am getting following error: RuntimeError: Error(s) in loading state_dict for DenseNet: Unexpected key(s) in state_dict: "features.norm0.num_batches_tracked", "features.denseblock1.denselayer1.norm1.num_batches_tracked", "features.denseblock1.denselayer1.norm2.num_batches_tracked", "features.denseblock1.denselayer2.norm1.num_batches_tracked",...
It seems to stem from the difference in implementation of normalization layers between PyTorch 0.4.1 and 0.4 - the former tracks some state variable called num_batches_tracked, which pytorch 0.4 does not expect. Assuming there are only unexpected keys and no missing keys (which I can't tell for sure since you've clipped the error message), you can just delete the extraneous ones and hopefully the model will load. Therefore try model_dict = checkpoint['state_dict'] filtered = { k: v for k, v in model_dict.items() if 'num_batches_tracked' not in k } model.load_state_dict(filtered) Please note, there may have been changes to the internals of normalization other than just what you're seeing here, so even if this fix suppresses the exception, the model may still silently misbehave.
https://stackoverflow.com/questions/53678133/
Pytorch parameter matrix from loss function of transformation
I have a k x (n+k-1) pytorch tensor w with requires_grad=True. I want to transform it into a k x n tensor p such that p[i] = w[i][i:i+n]. How do I do this, such that by calling backward() on a loss function of p in the end, I will learn w?
Any sort of indexing operation would do, with the backward function being <CopySlices> A naive way of doing this would be using simple python indexing: w_unrolled = torch.zeros(p.size()) for i in range(w.shape[0]): w_unrolled[i] = w[i][i:i+n] loss = criterion(w_unrolled, p) You can then reduce your loss via mean/sum on whichever axis. Note that while this will work, it is inefficient; the optimal way would be to use a native indexing function to speed things up.
https://stackoverflow.com/questions/53683076/
Pytorch, efficient way to extend a tensor by its first and last element
I have a tensor in pytorch. I want to extend it on a specific dimension from the beginning and the end by k positions with the first and last elements of that dimension respectively. Say I have the tensor with data [[0, 0, 0], [1, 1, 1], [2, 2, 2]]. Operation extend(dim, k) would change it in this way: extend(0, 1): [[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2]] extend(1, 1): [[0, 0, 0, 0, 0], [1, 1, 1, 1, 1], [2, 2, 2, 2, 2]] What is an efficient way to do this (compliant with tensor.requires_grad=true)?
You are looking for torch.nn.functional.pad, with mode='replicate'. However, there are two things you need to pay attention to to get this to work: 1. pad does not work with 2D tensors. Thus, you need to add leading singleton dimensions before pad and squeeze them afterwards. 2. The order of pad values pad expects is opposite to dim order. import torch from torch.nn import functional as f x = torch.tensor([[0, 0, 0],[1, 1, 1], [2, 2, 2]], dtype=torch.float) # expand along dim=0 by k=2 f.pad(x[None,None,...], (0, 0, 2, 2), mode='replicate').squeeze() Out[]: tensor([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [1., 1., 1.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]]) # expand along dim=1 by k=2 f.pad(x[None,None,...], (2, 2, 0, 0), mode='replicate').squeeze() Out[]: tensor([[0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1., 1.], [2., 2., 2., 2., 2., 2., 2.]])
https://stackoverflow.com/questions/53688217/
OpenAI gym 0.10.9 'module' object has no attribute 'benchmark_spec'
benchmark = gym.benchmark_spec('Atari40M') AttributeError: 'module' object has no attribute 'benchmark_spec' I just got this error for gym-0.10.9. Any idea? Thx
According to this post on GitHub, the function 'benchmark_spec' is no longer supported.
https://stackoverflow.com/questions/53689624/
Pytorch: Learnable threshold for clipping activations
What is the proper way to clip ReLU activations with a learnable threshold? Here's how I implemented it, however I'm not sure if this is correct: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.act_max = nn.Parameter(torch.Tensor([0]), requires_grad=True) self.conv1 = nn.Conv2d(3, 32, kernel_size=5) self.conv2 = nn.Conv2d(32, 64, kernel_size=5) self.pool = nn.MaxPool2d(2, 2) self.relu = nn.ReLU() self.linear = nn.Linear(64 * 5 * 5, 10) def forward(self, input): conv1 = self.conv1(input) pool1 = self.pool(conv1) relu1 = self.relu(pool1) relu1[relu1 > self.act_max] = self.act_max conv2 = self.conv2(relu1) pool2 = self.pool(conv2) relu2 = self.relu(pool2) relu2 = relu2.view(relu2.size(0), -1) linear = self.linear(relu2) return linear model = Net() torch.nn.init.kaiming_normal_(model.parameters) nn.init.constant(model.act_max, 1.0) model = model.cuda() optimizer = torch.optim.SGD(model.parameters(), lr=0.001) for epoch in range(100): for i in range(1000): output = model(input) loss = nn.CrossEntropyLoss()(output, label) optimizer.zero_grad() loss.backward() optimizer.step() model.act_max.data = model.act_max.data - 0.001 * model.act_max.grad.data I had to add the last line because without it the value would not update for some reason. UPDATE: I am now trying a method to compute the uppper bound (act_max) based on the gradients for activations: For all activations above the threshold (relu1[relu1 > self.act_max]), look at their gradients: compute the average direction all these gradients point to. For all positive activations below the threshold, compute the average gradient of which direction they want to change to. The sum of these average gradients determines the direction and magnitude of the change for act_max.
There are two problems with that code. The implementation-level one is that you're using an in-place operation which generally doesn't work well with autograd. Instead of relu1[relu1 > self.act_max] = self.act_max you should use an out-of-place operation like relu1 = torch.where(relu1 > self.act_max, self.act_max, relu1) The other is more general : neural networks are generally trained with gradient descent methods and threshold values can have no gradient - the loss function is not differentiable with respect to thresholds. In your model you're using a dirty hackaround (whether you write is as it is or use torch.where) - model.act_max.grad.data is only defined because for some elements their value is set to model.act_max. But this gradient knows nothing about why they were set to that value. To make things more concrete, lets define cutoff operation C(x, t) which defines whether x is above or below threshold t C(x, t) = 1 if x < t else 0 and write your clipping operation as a product clip(x, t) = C(x, t) * x + (1 - C(x, t)) * t you can then see that the threshold t has twofold meaning: it controls when to cutoff (inside C) and it controls the value above cutoff (the trailing t). We can therefore generalize the operation as clip(x, t1, t2) = C(x, t1) * x + (1 - C(x, t1)) * t2 The problem with your operation is that it is only differentiable with respect to t2 but not t1. Your solution ties the two together so that t1 == t2, but it is still the case that gradient descent will act as if there was no changing the threshold, only changing the above-the-threshold-value. For this reason, in general your thresholding operation may not be learning the value you would hope it learns. This is something to keep in mind when developing your operations, but not a guarantee of failure - in fact, if you consider the standard ReLU on biased output of some linear unit, we get a similar picture. We define the cutoff operation H H(x, t) = 1 if x > t else 0 and ReLU as ReLU(x + b, t) = (x + b) * H(x + b, t) = (x + b) * H(x, t - b) where we could again generalize to ReLU(x, b, t) = (x + b) * H(x, t) and again we can only learn b and t is implicitly following b. Yet it seems to work :)
https://stackoverflow.com/questions/53698950/
pytorch batch normalization in distributed train
Wondering how distributed pytorch handles batch norm: when I add a batch norm layer, will the pytorch engine use the same allreduce call to sync the data across nodes, or does the batch norm only happen on the local node?
Similarly to DataParallel (check the first Warning box). It will compute the norm separately for each node (or, more precisely, each GPU). It will not sync the rolling estimates of the norm either, but it will keep the values from one of the GPUs in the end. So assuming the examples are distributed across your cluster randomly, your BatchNorm will work roughly as expected, except its estimates of the normalization factors will have higher variance due to smaller effective sample sizes.
https://stackoverflow.com/questions/53709406/
vgg pytorch: is the probability distribution supposed to add up to 1?
I've trained a vgg16 model to predict 102 classes of flowers. It works however now that I'm trying to understand one of it's predictions I feel it's not acting normally. model layout # Imports here import os import numpy as np import torch import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import json from pprint import pprint from scipy import misc %matplotlib inline data_dir = 'flower_data' train_dir = data_dir + '/train' test_dir = data_dir + '/valid' json_data=open('cat_to_name.json').read() main_classes = json.loads(json_data) main_classes = {int(k):v for k,v in classes.items()} train_transform_2 = transforms.Compose([transforms.RandomResizedCrop(224), transforms.RandomRotation(30), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) test_transform_2= transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()]) # TODO: Load the datasets with ImageFolder train_data = datasets.ImageFolder(train_dir, transform=train_transform_2) test_data = datasets.ImageFolder(test_dir, transform=test_transform_2) # define dataloader parameters batch_size = 20 num_workers=0 # TODO: Using the image datasets and the trainforms, define the dataloaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers, shuffle=True) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers, shuffle=True) vgg16 = models.vgg16(pretrained=True) # Freeze training for all "features" layers for param in vgg16.features.parameters(): param.requires_grad = False import torch.nn as nn n_inputs = vgg16.classifier[6].in_features # add last linear layer (n_inputs -> 102 flower classes) # new layers automatically have requires_grad = True last_layer = nn.Linear(n_inputs, len(classes)) vgg16.classifier[6] = last_layer import torch.optim as optim # specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer (stochastic gradient descent) and learning rate = 0.001 optimizer = optim.SGD(vgg16.classifier.parameters(), lr=0.001) pre_trained_model=torch.load("model.pt") new=list(pre_trained_model.items()) my_model_kvpair=vgg16.state_dict() count=0 for key,value in my_model_kvpair.items(): layer_name, weights = new[count] my_model_kvpair[key] = weights count+=1 # number of epochs to train the model n_epochs = 6 # initialize tracker for minimum validation loss valid_loss_min = np.Inf # set initial "min" to infinity for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### # model by default is set to train vgg16.train() for batch_i, (data, target) in enumerate(train_loader): # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = vgg16(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item() if batch_i % 20 == 19: # print training loss every specified number of mini-batches print('Epoch %d, Batch %d loss: %.16f' % (epoch, batch_i + 1, train_loss / 20)) train_loss = 0.0 ###################### # validate the model # ###################### vgg16.eval() # prep model for evaluation for data, target in 
test_loader: # forward pass: compute predicted outputs by passing inputs to the model output = vgg16(data) # calculate the loss loss = criterion(output, target) # update running validation loss valid_loss += loss.item() # print training/validation statistics # calculate average loss over an epoch train_loss = train_loss/len(train_loader.dataset) valid_loss = valid_loss/len(test_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch+1, train_loss, valid_loss )) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(vgg16.state_dict(), 'model.pt') valid_loss_min = valid_loss testing on a single image tensor = torch.from_numpy(test_image) reshaped = tensor.permute(2, 0, 1).unsqueeze(0) floatified = reshaped.to(torch.float32) / 255 vgg16(floatified) >>> tensor([[ 2.5686, -1.1964, -0.0872, -1.7010, -1.6669, -1.0638, 0.4515, 0.1124, 0.0166, 0.3156, 1.1699, 1.5374, 1.8720, 2.5184, 2.9046, -0.8241, -1.1949, -0.5700, 0.8692, -1.0485, 0.0390, -1.3783, -3.4632, -0.0143, 1.0986, 0.2667, -1.1127, -0.8515, 0.7759, -0.7528, 1.6366, -0.1170, -0.4983, -2.6970, 0.7545, 0.0188, 0.1094, 0.5002, 0.8838, -0.0006, -1.7993, -1.3706, 0.4964, -0.3251, -1.7313, 1.8731, 2.4963, 1.1713, -1.5726, 1.5476, 3.9576, 0.7388, 0.0228, 0.3947, -1.7237, -1.8350, -2.0297, 1.4088, -1.3469, 1.6128, -1.0851, 2.0257, 0.5881, 0.7498, 0.0738, 2.0592, 1.8034, -0.5468, 1.9512, 0.4534, 0.7746, -1.0465, -0.7254, 0.3333, -1.6506, -0.4242, 1.9529, -0.4542, 0.2396, -1.6804, -2.7987, -0.6367, -0.3599, 1.0102, 2.6319, 0.8305, -1.4333, 3.3043, -0.4021, -0.4877, 0.9125, 0.0607, -1.0326, 1.3186, -2.5861, 0.1211, -2.3177, -1.5040, 1.0416, 1.4008, 1.4225, -2.7291]], grad_fn=<ThAddmmBackward>) sum([ 2.5686, -1.1964, -0.0872, -1.7010, -1.6669, -1.0638, 0.4515, 0.1124, 0.0166, 0.3156, 1.1699, 1.5374, 1.8720, 2.5184, 2.9046, -0.8241, -1.1949, -0.5700, 0.8692, -1.0485, 0.0390, -1.3783, -3.4632, -0.0143, 1.0986, 0.2667, -1.1127, -0.8515, 0.7759, -0.7528, 1.6366, -0.1170, -0.4983, -2.6970, 0.7545, 0.0188, 0.1094, 0.5002, 0.8838, -0.0006, -1.7993, -1.3706, 0.4964, -0.3251, -1.7313, 1.8731, 2.4963, 1.1713, -1.5726, 1.5476, 3.9576, 0.7388, 0.0228, 0.3947, -1.7237, -1.8350, -2.0297, 1.4088, -1.3469, 1.6128, -1.0851, 2.0257, 0.5881, 0.7498, 0.0738, 2.0592, 1.8034, -0.5468, 1.9512, 0.4534, 0.7746, -1.0465, -0.7254, 0.3333, -1.6506, -0.4242, 1.9529, -0.4542, 0.2396, -1.6804, -2.7987, -0.6367, -0.3599, 1.0102, 2.6319, 0.8305, -1.4333, 3.3043, -0.4021, -0.4877, 0.9125, 0.0607, -1.0326, 1.3186, -2.5861, 0.1211, -2.3177, -1.5040, 1.0416, 1.4008, 1.4225, -2.7291]) >>> 5.325799999999998 given this as how I test it on a single image (and the model as usual is trained and tested on batches it returns a prediction matrix that doesn't seem to be normalized or add up to 1. Is this normal?
I cannot tell with certainty without seeing your training code, but it's most likely your model was trained with cross-entropy loss and as such it outputs logits rather than class probabilities. You can turn them into proper probabilities by applying the softmax function.
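A short sketch of the conversion, reusing the vgg16 and floatified names from the question:

import torch.nn.functional as F

output = vgg16(floatified)             # raw logits, shape [1, 102]
probs = F.softmax(output, dim=1)       # each row now sums to 1
top_p, top_idx = probs.topk(3, dim=1)  # e.g. the three most likely classes with their probabilities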
https://stackoverflow.com/questions/53711470/
Dice Loss and samples containing no data in target tensors?
I have a highly imbalanced 3D dataset, where about 80% of the volume is background data, I am only interested in the foreground elements which constitute about 20% of the total volume at random locations. These locations are noted in the label tensor given to the network. The target tensor is binary where 0 represents the background and 1 represents the areas we are interested in or want to segment. The size of each volume is [30,512,1024]. I am iterating over each volume using blocks of size [30,64,64]. Thus most of my blocks have only 0 values in the target tensor. I read that DiceLoss is perfect for such problems and is used successfully in segmenting 3D MRI scans. One simple implementation is from here: https://github.com/pytorch/pytorch/issues/1249#issuecomment-305088398 def dice_loss(input, target): smooth = 1. iflat = input.view(-1) tflat = target.view(-1) intersection = (iflat * tflat).sum() return 1 - ((2. * intersection + smooth) / (iflat.sum() + tflat.sum() + smooth)) This is not working for me, I mean for a patch where all I have are background the tflat.sum() would be 0. This would make intersection 0 as well, thus for majority of my patches or blocks I will get a return of 1. Is this right? This is not how it is supposed to work. But I am struggling with this as this is my network output: idx: 0 of 312 - Training Loss: 1.0 - Training Accuracy: 3.204042239857152e-11 idx: 5 of 312 - Training Loss: 0.9876335859298706 - Training Accuracy: 0.0119545953348279 idx: 10 of 312 - Training Loss: 1.0 - Training Accuracy: 7.269467666715101e-11 idx: 15 of 312 - Training Loss: 0.7320756912231445 - Training Accuracy: 0.22638492286205292 idx: 20 of 312 - Training Loss: 0.3599294424057007 - Training Accuracy: 0.49074622988700867 idx: 25 of 312 - Training Loss: 1.0 - Training Accuracy: 1.0720428988975073e-09 idx: 30 of 312 - Training Loss: 1.0 - Training Accuracy: 1.19782361807097e-09 idx: 35 of 312 - Training Loss: 1.0 - Training Accuracy: 1.956790285362331e-09 idx: 40 of 312 - Training Loss: 1.0 - Training Accuracy: 1.6055999862985004e-09 idx: 45 of 312 - Training Loss: 1.0 - Training Accuracy: 7.580232552761856e-10 idx: 50 of 312 - Training Loss: 1.0 - Training Accuracy: 9.510597864803572e-10 idx: 55 of 312 - Training Loss: 1.0 - Training Accuracy: 1.341515676323013e-09 idx: 60 of 312 - Training Loss: 0.7165247797966003 - Training Accuracy: 0.02658153884112835 idx: 65 of 312 - Training Loss: 1.0 - Training Accuracy: 4.528208030762926e-09 idx: 70 of 312 - Training Loss: 0.3205708861351013 - Training Accuracy: 0.6673439145088196 idx: 75 of 312 - Training Loss: 0.9305377006530762 - Training Accuracy: 2.3437689378624782e-05 idx: 80 of 312 - Training Loss: 1.0 - Training Accuracy: 5.305786885401176e-07 idx: 85 of 312 - Training Loss: 1.0 - Training Accuracy: 4.0612556517771736e-07 idx: 90 of 312 - Training Loss: 0.8207412362098694 - Training Accuracy: 0.0344742126762867 idx: 95 of 312 - Training Loss: 0.7463213205337524 - Training Accuracy: 0.19459737837314606 idx: 100 of 312 - Training Loss: 1.0 - Training Accuracy: 4.863646818620282e-09 idx: 105 of 312 - Training Loss: 0.35790306329727173 - Training Accuracy: 0.608722984790802 idx: 110 of 312 - Training Loss: 1.0 - Training Accuracy: 3.3852198821904267e-09 idx: 115 of 312 - Training Loss: 1.0 - Training Accuracy: 1.5268487585373691e-09 idx: 120 of 312 - Training Loss: 1.0 - Training Accuracy: 3.46353523639209e-09 idx: 125 of 312 - Training Loss: 1.0 - Training Accuracy: 2.5878148582347826e-11 idx: 130 of 312 - Training Loss: 1.0 - Training 
Accuracy: 2.3601216467272756e-11 idx: 135 of 312 - Training Loss: 1.0 - Training Accuracy: 1.1504343033763575e-09 idx: 140 of 312 - Training Loss: 0.4516671299934387 - Training Accuracy: 0.13879922032356262 I dont think the network is learning anything from this.. Now I'm confused, as my problem should not be too complex as I am sure MRI scans have target tensors as well where majority of them signify background.. What am I doing wrong? Thanks
You will get a return value of 1 if your algorithm predicts that all background voxels should have a value of exactly 0, but if it predicts any positive value (which it will surely do if you're using sigmoid activation) it can still improve the loss by outputting as little as possible. In other words, the numerator cannot go above smooth, but the algorithm can still learn to keep the denominator as small as possible. If you're unsatisfied with your algorithm's behavior you can try to either increase your batch size (so the chance of none of the volumes having any foreground drops) or straight up skip such batches. It may or may not help learning. That being said, I've personally never had any success learning segmentation with Dice/IoU as loss functions and generally opt for binary cross entropy or similar losses, keeping the former as validation metrics.
https://stackoverflow.com/questions/53711976/
Best way to handle OOV words when using pretrained embeddings in PyTorch
I am using word2vec pretrained embedding in PyTorch (following code here). However, it does not seem to handle unseen words. Is there any good way to solve it?
FastText builds character ngram vectors as part of model training. When it finds an OOV word, it sums the character ngram vectors in the word to produce a vector for the word. You can find more detail here.
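As an illustration (not part of the original answer), gensim's FastText implementation shows this behaviour; keyword names such as vector_size differ slightly between gensim versions, so treat this as a sketch:

from gensim.models import FastText

sentences = [["the", "cat", "sat"], ["a", "dog", "ran"]]
model = FastText(sentences, vector_size=50, min_count=1, epochs=10)

# "kitten" never occurred in training, but a vector is still assembled from its character n-grams
vec = model.wv["kitten"]
print(vec.shape)  # (50,)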
https://stackoverflow.com/questions/53715016/
Pytorch modify dataset label
This is a code snippet for loading images as dataset from pytorch transfer learning tutorial: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} And this is one of the examples in dataset: image_datasets['val'][0]: (tensor([[[ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489], [ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489], [ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489], ..., [ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489], [ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489], [ 2.2489, 2.2489, 2.2489, ..., 2.2489, 2.2489, 2.2489]], [[ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286], [ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286], [ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286], ..., [ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286], [ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286], [ 2.4286, 2.4286, 2.4286, ..., 2.4286, 2.4286, 2.4286]], [[ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400], [ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400], [ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400], ..., [ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400], [ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400], [ 2.6400, 2.6400, 2.6400, ..., 2.6400, 2.6400, 2.6400]]]), 0) Is there any method (best practices) to change the example data in dataset, for example change label 0 to label 1. The following does not work: image_datasets['val'][0] = (image_datasets['val'][0][0], 1)
Yes, though not (easily) programmatically. The labels are coming from torchvision.datasets.ImageFolder and reflect the directory structure of your dataset (as seen on your HDD). Firstly, I suspect you may want to know the directory name as a string. This is poorly documented but the dataloader has a classes attribute which stores those. So img, lbl = image_datasets['val'][0] directory_name = image_datasets['val'].classes[lbl] If you're looking to consistently return those instead of class IDs, you can use the target_transform api as follows: image_datasets['val'].target_transform = lambda id: image_datasets['val'].classes[id] which will make the loader return strings instead of IDs from now on. If you're looking for something more advanced you can reimplement/inherit from ImageFolder or DatasetFolder and implement your own semantics. The only methods you need to provide are __len__ and __getitem__.
https://stackoverflow.com/questions/53751882/
Trouble using transforms.FiveCrop()/TenCrop() in PyTorch
I am trying to increase my CNN’s performance and thus i decided to “play” with some transformations in order to see how they affect my model. I read that FiveCrop() and TenCrop() might help because they generate extra data to train on. However, when i try to train the model, using one of the transformations mentioned above, i get the following error: TypeError: pic should be PIL Image or ndarray. Got < class ‘tuple’> The documentation of those transformations, only states a note for the test procedure, any idea how to fix this? Thanks in advance! train_transform = transforms.Compose( [transforms.ColorJitter(), transforms.TenCrop(32), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261)) ]) train = datasets.CIFAR10(root = './data', train = True, transform = train_transform, download = True) train_loader = torch.utils.data.DataLoader(dataset = train, batch_size = 1250, shuffle = True) for epoch in range(num_of_iterations): correct = 0 acc = 0.0 running_loss = 0.0 for i, (images, labels) in enumerate(train_loader): images = images.requires_grad_().to(device) labels = labels.to(device) The error occurs on the line of the second for-loop
Right, your error is coming from transforms.ToTensor(), which is directly downstream of your TenCrop in the composed transformation. It expects an image but gets a tuple of crops instead. You should follow a procedure similar to the one shown in the documentation not only for testing but also for training in order to reorganize your images into the expected format of [batch, feature_maps, width, height]. As a side note, CIFAR10 is already 32x32 pixels so taking 32x32 crops is an identity operation. Your FiveCrop effectively just repeats the same image 5 times and TenCrop repeats it 5 times plus adds 5 flipped versions. You should either reduce the size of the crops or find a different data augmentation scheme to see improvement in your network's generalization.
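A sketch of the pattern from the TenCrop documentation adapted to this pipeline (the crop size is reduced to 24 so the crops are not simply the whole 32x32 image; model and train_loader are placeholders for your own objects):

import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.TenCrop(24),
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.Normalize((0.4914, 0.4822, 0.4465), (0.247, 0.243, 0.261))(transforms.ToTensor()(crop))
         for crop in crops])),
])

# inside the training loop: fold the crop dimension into the batch dimension
for images, labels in train_loader:
    bs, ncrops, c, h, w = images.size()
    outputs = model(images.view(-1, c, h, w))       # shape [bs * ncrops, n_classes]
    outputs = outputs.view(bs, ncrops, -1).mean(1)  # average the predictions over the 10 crops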
https://stackoverflow.com/questions/53763758/
How to change a Pytorch CNN to take color images instead of black and white?
This code I found has a neural net that is setup to take black and white images. (It's a siamese network but that part's not relevant). When I change it to take my images and NOT convert them to black and white I get an error shown below. I tried changing the first Conv2d, the sixth line down from a 1 to a 3 class SiameseNetwork(nn.Module): def __init__(self): super(SiameseNetwork, self).__init__() self.cnn1 = nn.Sequential( nn.ReflectionPad2d(1), # was nn.Conv2d(1, 4, kernel_size=3), nn.Conv2d(3, 4, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(4), nn.ReflectionPad2d(1), nn.Conv2d(4, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8), nn.ReflectionPad2d(1), nn.Conv2d(8, 8, kernel_size=3), nn.ReLU(inplace=True), nn.BatchNorm2d(8)) self.fc1 = nn.Sequential( nn.Linear(8*300*300, 500), nn.ReLU(inplace=True), nn.Linear(500, 500), nn.ReLU(inplace=True), nn.Linear(500, 5)) def forward_once(self, x): output = self.cnn1(x) output = output.view(output.size()[0], -1) output = self.fc1(output) return output def forward(self, input1, input2): output1 = self.forward_once(input1) output2 = self.forward_once(input2) return output1, output2 My error when the images are NOT converted to black and white and remain in color. RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 1 in dimension 1 at /opt/conda/conda-bld/pytorch-nightly_1542963753679/work/aten/src/TH/generic/THTensorMoreMath.cpp:1319 I checked the shapes of the images as arrays (right before they go into the model) as black and white vs in color... B&W torch.Size([1, 1, 300, 300]) In color torch.Size([1, 3, 300, 300]) Here is a link to a Jupyter Notebook of the entire original code I am working with... https://github.com/harveyslash/Facial-Similarity-with-Siamese-Networks-in-Pytorch/blob/master/Siamese-networks-medium.ipynb EDIT: UPDATE: I seemed to have solved it by converting the images to RBG in the SiameseNetworkDataset part of the code img0 = img0.convert("L") changed to img0 = img0.convert("RGB") I just had the line commented out before and thought this left it in RGB but it was something else the model didn't understand. Also, the change in the OP was needed. nn.Conv2d(1, 4, kernel_size=3), changed to nn.Conv2d(3, 4, kernel_size=3), If you'd like to answer with an explaination of what the model is doing that makes it clear I'll give you the green check. Don't really understand nn.Conv2d
The error seems to be in the fully connected part below: self.fc1 = nn.Sequential( nn.Linear(8*100*100, 500), nn.ReLU(inplace=True), nn.Linear(500, 500), nn.ReLU(inplace=True), nn.Linear(500, 5)) It seems the output of the cnn is of shape [8, 300, 300] and not [8, 100, 100]. To solve this, either change the input image to [n_channel, 100, 100] or change the input size of the fc layer to 8*300*300.
https://stackoverflow.com/questions/53769948/
How to predict a label in MultiClass classification model in pytorch?
I am currently working on my mini-project, where I predict movie genres based on their posters. So in the dataset that I have, each movie can have from 1 to 3 genres, therefore each instance can belong to multiple classes. I have total of 15 classes(15 genres). So now I am facing with the problem of how to do predictions using pytorch for this particular problem. In pytorch CIFAR-tutorial, where each instance can have only one class ( for example, if image is a car it should belong to class of cars) and there are 10 classes in total. So in this case, model prediction is defined in the following way(copying code snippet from pytorch website): import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') Question 1(for training part). What could you suggest to use as an activation function. I was thinking about BCEWithLogitsLoss() but I am not sure how good it will be. and then the accuracy of prediction for testset is defined in the following way: for the entire network: correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) and for each class: class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) where the output is as follows: Accuracy of plane : 36 % Accuracy of car : 40 % Accuracy of bird : 30 % Accuracy of cat : 19 % Accuracy of deer : 28 % Accuracy of dog : 17 % Accuracy of frog : 34 % Accuracy of horse : 43 % Accuracy of ship : 57 % Accuracy of truck : 35 % Now here is question 2: How can I determine the accuracy so it would look in the following way: For example: The Matrix (1999) ['Action: 91%', 'Drama: 25%', 'Adventure: 13%'] The Others (2001) ['Drama: 76%', 'Horror: 65%', 'Action: 41%'] Alien: Resurrection (1997) ['Horror: 67%', 'Action: 64%', 'Drama: 43%'] The Martian (2015) ['Drama: 95%', 'Adventure: 81%'] Considering that every movie does not always have 3 genres, sometimes is 2 and sometimes is 1. So as I see it, I should find 3 maximum values, 2 maximum values or 1 maximum value of my output list , which is list of 15 genres so, for example, if my predicted genres are [Movie, Adventure] then some_kind_of_function(outputs) should give me output of [1 0 0 0 0 0 0 0 0 0 0 1 0 0 0] , which I can compare afterwards with ground_truth. 
I don't think torchmax will work in this case, cause it gives only one max value from [weigts array], so What's the best way to implement it? Thank you in advance, appreciate any help or suggestion:)
You're right, you're looking to perform binary classification (is poster X a drama movie or not? Is it an action movie or not?) for each poster-genre pair. BinaryCrossEntropy(WithLogits) is the way to go. Regarding the best metric to evaluate your resulting algorithm, it's up to you, what are you looking for. But you may want to investigate ideas like precision and recall or f1 score. Personally, I would probably pick the top 3 for each genre (since that's at max number of genres assigned to each poster) and look if the ones to be expected show up with high probability and if the unexpected ones (in case of a movie with 2 "ground truth" genres) show at the last places, with significantly less probability assigned.
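A sketch of the inference step described above, assuming the network ends in a plain linear layer with 15 outputs (one per genre); net and images are placeholders:

import torch

logits = net(images)                   # shape [batch, 15]
probs = torch.sigmoid(logits)          # independent probability per genre
top_p, top_idx = probs.topk(3, dim=1)  # the three most likely genres per poster

predicted = (probs > 0.5).int()        # multi-hot vector such as [1 0 0 ... 1 0], comparable to the ground truth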
https://stackoverflow.com/questions/53788828/
computing gradients for every individual sample in a batch in PyTorch
I'm trying to implement a version of differentially private stochastic gradient descent (e.g., this), which goes as follows: Compute the gradient with respect to each point in the batch of size L, then clip each of the L gradients separately, then average them together, and then finally perform a (noisy) gradient descent step. What is the best way to do this in pytorch? Preferably, there would be a way to simulataneously compute the gradients for each point in the batch: x # inputs with batch size L y #true labels y_output = model(x) loss = loss_func(y_output,y) #vector of length L loss.backward() #stores L distinct gradients in each param.grad, magically But failing that, compute each gradient separately and then clip the norm before accumulating, but x # inputs with batch size L y #true labels y_output = model(x) loss = loss_func(y_output,y) #vector of length L for i in range(loss.size()[0]): loss[i].backward(retain_graph=True) torch.nn.utils.clip_grad_norm(model.parameters(), clip_size) accumulates the ith gradient, and then clips, rather than clipping before accumulating it into the gradient. What's the best way to get around this issue?
I don't think you can do much better than the second method in terms of computational efficiency, you're losing the benefits of batching in your backward and that's a fact. Regarding the order of clipping, autograd stores the gradients in .grad of parameter tensors. A crude solution would be to add a dictionary like clipped_grads = {name: torch.zeros_like(param) for name, param in net.named_parameters()} Run your for loop like for i in range(loss.size(0)): loss[i].backward(retain_graph=True) torch.nn.utils.clip_grad_norm_(net.parameters(), clip_size) for name, param in net.named_parameters(): clipped_grads[name] += param.grad / loss.size(0) net.zero_grad() and then, after the loop, for name, param in net.named_parameters(): param.grad = clipped_grads[name] optimizer.step() where I omitted much of the detach, requires_grad=False and similar business which may be necessary to make it behave as expected. The disadvantage of the above is that you end up storing 2x the memory for your parameter gradients. In principle you could take the "raw" gradient, clip it, add to clipped_gradient, and then discard as soon as no downstream operations need it, whereas here you retain the raw values in grad until the end of a backward pass. It may be that register_backward_hook allows you to do that if you go against the guidelines and actually modify the grad_input, but you would have to verify with someone more intimately acquainted with autograd.
https://stackoverflow.com/questions/53798023/
RuntimeError: size mismatch m1: [a x b], m2: [c x d]
Can anyone help me in this.? I am getting below error. I use Google Colab. How to Solve this error.? size mismatch, m1: [64 x 100], m2: [784 x 128] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:2070 Below Code I am trying to Run. import torch from torch import nn import torch.nn.functional as F from torchvision import datasets, transforms # Define a transform to normalize the data transform = transforms.Compose([transforms.CenterCrop(10),transforms.ToTensor(),]) # Download the load the training data trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Build a feed-forward network model = nn.Sequential(nn.Linear(784, 128),nn.ReLU(),nn.Linear(128, 64),nn.ReLU(),nn.Linear(64, 10)) # Define the loss criterion = nn.CrossEntropyLoss() # Get our data images, labels = next(iter(trainloader)) # Faltten images images = images.view(images.shape[0], -1) # Forward pass, get our logits logits = model(images) # Calculate the loss with the logits and the labels loss = criterion(logits, labels) print(loss)
All you have to care about is that b = c and you are done: m1: [a x b], m2: [c x d] m1 is [a x b] which is [batch size x in features], m2 is [c x d] which is [in features x out features]. In your case m1 is [64 x 100] because transforms.CenterCrop(10) leaves 10x10 = 100 pixels per image, while the first nn.Linear layer expects 784 = 28x28 inputs, hence the mismatch.
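Two possible fixes for this particular case, sketched under the assumption that the rest of the script stays the same:

# option 1: keep the full 28x28 MNIST images so the flattened size really is 784
transform = transforms.Compose([transforms.ToTensor()])

# option 2: keep the CenterCrop(10) and shrink the first layer to 10*10 = 100 inputs instead
model = nn.Sequential(nn.Linear(100, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10))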
https://stackoverflow.com/questions/53828518/
How does adaptive pooling in pytorch work?
Adaptive pooling is a great function, but how does it work? It seems to be inserting pads or shrinking/expanding kernel sizes in what seems like a pattered but fairly arbitrary way. The pytorch documentation I can find is not more descriptive than "put desired output size here." Does anyone know how this works or can point to where it's explained? Some test code on a 1x1x6 tensor, (1,2,3,4,5,6), with an adaptive output of size 8: import torch import torch.nn as nn class TestNet(nn.Module): def __init__(self): super(TestNet, self).__init__() self.avgpool = nn.AdaptiveAvgPool1d(8) def forward(self,x): print(x) x = self.avgpool(x) print(x) return x def test(): x = torch.Tensor([[[1,2,3,4,5,6]]]) net = TestNet() y = net(x) return y test() Output: tensor([[[ 1., 2., 3., 4., 5., 6.]]]) tensor([[[ 1.0000, 1.5000, 2.5000, 3.0000, 4.0000, 4.5000, 5.5000, 6.0000]]]) If it mirror pads by on the left and right (operating on (1,1,2,3,4,5,6,6)), and has a kernel of 2, then the outputs for all positions except for 4 and 5 make sense, except of course the output isn't the right size. Is it also padding the 3 and 4 internally? If so, it's operating on (1,1,2,3,3,4,4,5,6,6), which, if using a size 2 kernel, produces the wrong output size and would also miss a 3.5 output. Is it changing the size of the kernel? Am I missing something obvious about the way this works?
In general, pooling reduces dimensions. If you want to increase dimensions, you might want to look at interpolation. Anyway, let's talk about adaptive pooling in general. You can look at the source code here. Some claimed that adaptive pooling is the same as standard pooling with stride and kernel size calculated from input and output size. Specifically, the following parameters are used: Stride = (input_size//output_size) Kernel size = input_size - (output_size-1)*stride Padding = 0 These are inversely worked from the pooling formula. While they DO produce output of the desired size, its output is not necessarily the same as that of adaptive pooling. Here is a test snippet: import torch import torch.nn as nn in_length = 5 out_length = 3 x = torch.arange(0, in_length).view(1, 1, -1).float() print(x) stride = (in_length//out_length) avg_pool = nn.AvgPool1d( stride=stride, kernel_size=(in_length-(out_length-1)*stride), padding=0, ) adaptive_pool = nn.AdaptiveAvgPool1d(out_length) print(avg_pool.stride, avg_pool.kernel_size) y_avg = avg_pool(x) y_ada = adaptive_pool(x) print(y_avg) print(y_ada) Output: tensor([[[0., 1., 2., 3., 4.]]]) (1,) (3,) tensor([[[1., 2., 3.]]]) tensor([[[0.5000, 2.0000, 3.5000]]]) Error: 1.0 Average pooling pools from elements (0, 1, 2), (1, 2, 3) and (2, 3, 4). Adaptive pooling pools from elements (0, 1), (1, 2, 3) and (3, 4). (Change the code a bit to see that it is not pooling from (2) only) You can tell adaptive pooling tries to reduce overlapping in pooling. The difference can be mitigated using padding with count_include_pad=True, but in general I don't think they can be exactly the same for 2D or higher for all input/output sizes. I would imagine using different paddings for left/right. This is not supported in pooling layers for the moment. From a practical perspective it should not matter much. Check the code for actual implementation.
https://stackoverflow.com/questions/53841509/
Pytorch Convolutional Autoencoders
How one construct decoder part of convolutional autoencoder? Suppose I have this (input -> conv2d -> maxpool2d -> maxunpool2d -> convTranspose2d -> output): # CIFAR images shape = 3 x 32 x 32 class ConvDAE(nn.Module): def __init__(self): super().__init__() # input: batch x 3 x 32 x 32 -> output: batch x 16 x 16 x 16 self.encoder = nn.Sequential( nn.Conv2d(3, 16, 3, stride=1, padding=1), # batch x 16 x 32 x 32 nn.ReLU(), nn.BatchNorm2d(16), nn.MaxPool2d(2, stride=2) # batch x 16 x 16 x 16 ) # input: batch x 16 x 16 x 16 -> output: batch x 3 x 32 x 32 self.decoder = nn.Sequential( # this line does not work # nn.MaxUnpool2d(2, stride=2, padding=0), # batch x 16 x 32 x 32 nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1, output_padding=1), # batch x 16 x 32 x 32 nn.ReLU(), nn.BatchNorm2d(16), nn.ConvTranspose2d(16, 3, 3, stride=1, padding=1, output_padding=0), # batch x 3 x 32 x 32 nn.ReLU() ) def forward(self, x): print(x.size()) out = self.encoder(x) print(out.size()) out = self.decoder(out) print(out.size()) return out Pytorch specific question: why can't I use MaxUnpool2d in decoder part. This gives me the following error: TypeError: forward() missing 1 required positional argument: 'indices' And the conceptual question: Shouldn't we do in decoder inverse of whatever we did in encoder? I saw some implementations and it seems they only care about the dimensions of input and output of decoder. Here and here are some examples.
For the torch part of the question, unpool modules have as a required positional argument the indices returned from the pooling modules which will be returned with return_indices=True. So you could do class ConvDAE(nn.Module): def __init__(self): super().__init__() # input: batch x 3 x 32 x 32 -> output: batch x 16 x 16 x 16 self.encoder = nn.Sequential( nn.Conv2d(3, 16, 3, stride=1, padding=1), # batch x 16 x 32 x 32 nn.ReLU(), nn.BatchNorm2d(16), nn.MaxPool2d(2, stride=2, return_indices=True) ) self.unpool = nn.MaxUnpool2d(2, stride=2, padding=0) self.decoder = nn.Sequential( nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(), nn.BatchNorm2d(16), nn.ConvTranspose2d(16, 3, 3, stride=1, padding=1, output_padding=0), nn.ReLU() ) def forward(self, x): print(x.size()) out, indices = self.encoder(x) out = self.unpool(out, indices) out = self.decoder(out) print(out.size()) return out As for the general part of the question, I don't think state of the art is to use a symmetric decoder part, as it has been shown that devonvolution/transposed convolution produces checkerboard effects and many approaches tend to use upsampling modules instead. You will find more info faster through PyTorch channels.
https://stackoverflow.com/questions/53858626/
How to get all the tensors in a graph?
I would like to access all the tensors instances of a graph. For example, I can check if a tensor is detached or I can check the size. It can be done in tensorflow. I don't want visualization of the graph.
You can get access to the entirety of the computation graph at runtime. To do so, you can use hooks. These are functions plugged onto nn.Modules both for inference and when backpropagating. At inference, you can hook a callback function with register_forward_hook. Similarly for backpropagation, you can use register_full_backward_hook. Note: as of PyTorch version 1.8.0 register_backward_hook has been deprecated. With these two functions, you will basically have access to any tensor on the computation graph. It's entirely up to you whether you want to print all tensors, print the shapes, or even insert breakpoints to investigate. Here is a possible implementation: def forward_hook(module, input, output): # ... Argument input is passed by PyTorch as a tuple and will contain all arguments passed to the forward function of the hooked module. def backward_hook(module, grad_input, grad_output): # ... For the backward hook, both grad_input and grad_output will be tuples and will have varying shapes depending on your model's layers. Then you can hook these callbacks on any existing nn.Module. For example, you could loop over all child modules from your model: for module in model.children(): module.register_forward_hook(forward_hook) module.register_full_backward_hook(backward_hook) To get the names of the modules, you can wrap the hook to enclose the name and loop on your model's named_modules: def forward_hook(name): def hook(module, x, y): print(f'{name}: {[tuple(i.shape) for i in x]} -> {list(y.shape)}') return hook for name, module in model.named_children(): module.register_forward_hook(forward_hook(name)) Which could print the following on inference: fc1: [(1, 100)] -> (1, 10) fc2: [(1, 10)] -> (1, 5) fc3: [(1, 5)] -> (1, 1) As for the model's parameter, you can easily access the parameters for a given module in both hooks by calling module.parameters. This will return a generator.
https://stackoverflow.com/questions/53878476/
PyTorch transfer learning with pre-trained ImageNet model
I want to create an image classifier using transfer learning on a model already trained on ImageNet. How do I replace the final layer of a torchvision.models ImageNet classifier with my own custom classifier?
Get a pre-trained ImageNet model (resnet152 has the best accuracy): from torchvision import models # https://pytorch.org/docs/stable/torchvision/models.html model = models.resnet152(pretrained=True) Print out its structure so we can compare to the final state: print(model) Remove the last module (generally a single fully connected layer) from model: classifier_name, old_classifier = model._modules.popitem() Freeze the parameters of the feature detector part of the model so that they are not adjusted by back-propagation: for param in model.parameters(): param.requires_grad = False Create a new classifier: classifier_input_size = old_classifier.in_features classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(classifier_input_size, hidden_layer_size)), ('activation', nn.SELU()), ('dropout', nn.Dropout(p=0.5)), ('fc2', nn.Linear(hidden_layer_size, output_layer_size)), ('output', nn.LogSoftmax(dim=1)) ])) The module name for our classifier needs to be the same as the one which was removed. Add our new classifier to the end of the feature detector: model.add_module(classifier_name, classifier) Finally, print out the structure of the new network: print(model)
https://stackoverflow.com/questions/53884692/
TypeError: can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
I am using a modified predict.py for testing a pruned SqueezeNet Model [phung@archlinux SqueezeNet-Pruning]$ python predict.py --image 3_100.jpg --model model_prunned --num_class 2 prediction in progress Traceback (most recent call last): File “predict.py”, line 66, in prediction = predict_image(imagepath) File “predict.py”, line 52, in predict_image index = output.data.numpy().argmax() TypeError: can’t convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. [phung@archlinux SqueezeNet-Pruning]$ I understand that numpy does not support GPU yet. How shall I modify the code to get away from this error without invoking tensor copy data operation, tensor.cpu() ?
Change index = output.data.numpy().argmax() to index = output.cpu().data.numpy().argmax() This means data is first moved to cpu and then converted to numpy array.
https://stackoverflow.com/questions/53900910/
How to transfer weights of own model to the same network but a different number of classes in the last layer?
I have my own network in Pytorch. It was first trained as a binary classifier (2 classes). After 10k epochs, I obtained the trained weights as 10000_model.pth. Now, I want to use the model for a 4-class classification problem using the same network. Thus, I want to transfer all trained weights from the binary classifier to the 4-class problem, except for the last layer, which will be randomly initialized. How could I do it? This is my model class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 20, 5, 1) self.conv2 = nn.Conv2d(20, 50, 5, 1) self.conv_classify= nn.Conv2d(50, 2, 1, 1, bias=True) # number of class def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv_classify(x)) return x This is what I did model = Net () checkpoint_dict = torch.load('10000_model.pth') pretrained_dict = checkpoint_dict['state_dict'] model_dict = model.state_dict() # 1. filter out unnecessary keys pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict} # 2. overwrite entries in the existing state dict model_dict.update(pretrained_dict) # 3. load the new state dict model.load_state_dict(model_dict) For now, I have to manually delete the entries from pretrained_dict by name. pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict} pretrained_dict.pop('conv_classify.weight', None) pretrained_dict.pop('conv_classify.bias', None) It means pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict} does not do anything. What is wrong? I am using pytorch 1.0. Thanks
Both networks have the same layers and therefore the same keys in state_dict, so indeed pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict} does nothing. The difference between the two is the weight tensors (their shape) and not their names. In other words, you can distinguish the two by [v.shape for v in model.state_dict().values()] but not model.state_dict().keys(). Your "workaround" approach is correct. If you want to make this a bit less manual, I would use merged_dict = {} for key in model_dict.keys(): if 'conv_classify' in key: # or perhaps a more complex criterion merged_dict[key] = model_dict[key] else: merged_dict[key] = pretrained_dict[key]
https://stackoverflow.com/questions/53901603/
In tensorflow, how to enumerate training data (compared to pytorch)
In pytorch, this is how I enumerate training data. for epoch in range(0, args.epoches): for i, batch in enumerate(train_data): model.update(batch) train_data contains multiple batches; the batches are enumerated and used to update the model, which is very clear to me. I think this is a basic example of how tensorflow treats the batches. for step in range(num_steps): batch_data, batch_labels = generate_batch(batch_size, num_skips, skip_window) feed_dict = {train_dataset : batch_data, train_labels : batch_labels} _, l = session.run([optimizer, loss], feed_dict=feed_dict) Maybe this is a very obvious question, but I'm not clear how enumerating training batches is handled by session.run in tensorflow. I can't see where the batches are being looped through in the code. All I see is feed_dict, and I assume it handles the looping. Can someone shed some light on this?
TensorFlow has a History object for this purpose. You get a History object as the return value of the model.fit() method. A History object and its History.history attribute are a record of training loss and metric values at successive epochs, as well as validation loss and validation metric values (if applicable). Hope this is what you need.
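For example, a minimal Keras sketch (model, x_train and y_train are assumed to be an already-compiled model and numpy arrays):
history = model.fit(x_train, y_train, batch_size=32, epochs=10, validation_split=0.1)
# history.history maps each metric name to a list with one value per epoch
print(history.history['loss'])
print(history.history['val_loss'])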
https://stackoverflow.com/questions/53906008/
Problem with missing and unexpected keys while loading my model in Pytorch
I'm trying to load the model using this tutorial: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-for-inference . Unfortunately I'm very beginner and I face some problems. I have created checkpoint: checkpoint = {'epoch': epochs, 'model_state_dict': model.state_dict(), 'optimizer_state_dict': optimizer.state_dict(),'loss': loss} torch.save(checkpoint, 'checkpoint.pth') Then I wrote class for my network and I wanted to load the file: class Network(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(9216, 4096) self.fc2 = nn.Linear(4096, 1000) self.fc3 = nn.Linear(1000, 102) def forward(self, x): x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = log(F.softmax(x, dim=1)) return x Like that: def load_checkpoint(filepath): checkpoint = torch.load(filepath) model = Network() model.load_state_dict(checkpoint['model_state_dict']) optimizer.load_state_dict(checkpoint['optimizer_state_dict']) epoch = checkpoint['epoch'] loss = checkpoint['loss'] model = load_checkpoint('checkpoint.pth') I got this error (edited to show whole communicate): RuntimeError: Error(s) in loading state_dict for Network: Missing key(s) in state_dict: "fc1.weight", "fc1.bias", "fc2.weight", "fc2.bias", "fc3.weight", "fc3.bias". Unexpected key(s) in state_dict: "features.0.weight", "features.0.bias", "features.3.weight", "features.3.bias", "features.6.weight", "features.6.bias", "features.8.weight", "features.8.bias", "features.10.weight", "features.10.bias", "classifier.fc1.weight", "classifier.fc1.bias", "classifier.fc2.weight", "classifier.fc2.bias", "classifier.fc3.weight", "classifier.fc3.bias". This is my model.state_dict().keys(): odict_keys(['features.0.weight', 'features.0.bias', 'features.3.weight', 'features.3.bias', 'features.6.weight', 'features.6.bias', 'features.8.weight', 'features.8.bias', 'features.10.weight', 'features.10.bias', 'classifier.fc1.weight', 'classifier.fc1.bias', 'classifier.fc2.weight', 'classifier.fc2.bias', 'classifier.fc3.weight', 'classifier.fc3.bias']) This is my model: AlexNet( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2)) (1): ReLU(inplace) (2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2)) (4): ReLU(inplace) (5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (7): ReLU(inplace) (8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (9): ReLU(inplace) (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace) (12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False) ((classifier): Sequential( (fc1): Linear(in_features=9216, out_features=4096, bias=True) (relu1): ReLU() (fc2): Linear(in_features=4096, out_features=1000, bias=True) (relu2): ReLU() (fc3): Linear(in_features=1000, out_features=102, bias=True) (output): LogSoftmax() ) ) It's my first network ever and I'm blundering along. Thanks for steering me into right direction!
So your Network is essentially the classifier part of AlexNet and you're looking to load pretrained AlexNet weights into it. The problem is that the keys in state_dict are "fully qualified", which means that if you look at your network as a tree of nested modules, a key is just a list of modules in each branch, joined with dots like grandparent.parent.child. You want to Keep only the tensors with name starting with "classifier." Remove the "classifier." part of keys so try model = Network() loaded_dict = checkpoint['model_state_dict'] prefix = 'classifier.' n_clip = len(prefix) adapted_dict = {k[n_clip:]: v for k, v in loaded_dict.items() if k.startswith(prefix)} model.load_state_dict(adapted_dict)
https://stackoverflow.com/questions/53907073/
Truncated Backpropagation Through Time (BPTT) in Pytorch
In pytorch, I train a RNN/GRU/LSTM network by starting the Backpropagation (Through Time) with : loss.backward() When the sequence is long, I'd like to do a Truncated Backpropagation Through Time instead of a normal Backpropagation Through Time where the whole sequence is used. But I can't find in the Pytorch API any parameters or functions to set up the truncated BPTT. Did I miss it? Am I supposed to code it myself in Pytorch ?
Here is an example: for t in range(T): y = lstm(y) if T - t == k: y = y.detach() loss = criterion(y, target) loss.backward() Reassigning y to its detached version when k steps remain cuts the computation graph at that point, so the backward pass only runs through the last k timesteps (criterion and target stand in for your loss function and labels). So in this example, k is the parameter you use to control how many timesteps you want to unroll.
https://stackoverflow.com/questions/53912956/
Pytorch inputs for nn.CrossEntropyLoss()
I am trying to perform a Logistic Regression in PyTorch on a simple 0,1 labelled dataset. The criterion or loss is defined as: criterion = nn.CrossEntropyLoss(). The model is: model = LogisticRegression(1,2) I have a data point which is a pair: dat = (-3.5, 0), the first element is the datapoint and the second is the corresponding label. Then I convert the first element of the input to a tensor: tensor_input = torch.Tensor([dat[0]]). Then I apply the model to the tensor_input: outputs = model(tensor_input). Then I convert the label to a tensor: tensor_label = torch.Tensor([dat[1]]). Now, when I try to do this, the thing breaks: loss = criterion(outputs, tensor_label). It gives and error: RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) import torch import torch.nn as nn class LogisticRegression(nn.Module): def __init__(self, input_size, num_classes): super(LogisticRegression, self).__init__() self.linear = nn.Linear(input_size, num_classes) def forward(self, x): out = self.linear(x) return out model = LogisticRegression(1,2) criterion = nn.CrossEntropyLoss() dat = (-3.5,0) tensor_input = torch.Tensor([dat[0]]) outputs = binary_model(tensor_input) tensor_label = torch.Tensor([dat[1]]) loss = criterion(outputs, tensor_label) I can't for the life of me figure it out.
For the most part, the PyTorch documentation does an amazing job to explain the different functions; they usually do include expected input dimensions, as well as some simple examples. You can find the description for nn.CrossEntropyLoss() here. To walk through your specific example, let us start by looking at the expected input dimension: Input: (N,C) where C = number of classes. [...] To add to this, N generally refers to the batch size (number of samples). To compare this to what you currently have: outputs.shape >>> torch.Size([2]) I.e. currently we only have an input dimension of (2,), and not (1,2), as is expected by PyTorch. We can alleviate this by adding a "fake" dimension to our current tensor, by simply using .unsqueeze() like so: outputs = binary_model(tensor_input).unsqueeze(dim=0) outputs.shape >>> torch.Size([1,2]) Now that we got that, let us look at the expected input for the targets: Target: (N) [...] So we already got the right shape for this. If we try this, though, we will still encounter an error, though: RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target'. Again, the error message is rather expressive. The problem here is that PyTorch tensors (by default) are interpreted as torch.FloatTensors, but the input should be integers (or Long) instead. We can simply do this by specifying the exact type during tensor creations: tensor_label = torch.LongTensor([dat[1]]) I'm using PyTorch 1.0 under Linux fyi.
https://stackoverflow.com/questions/53936136/
TypeError: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
I am testing a ResNet-34 trained_model using Pytorch and fastai on a linux system with the latest anaconda3. To run it as a batch job, I commented out the gui related lines. It started to run for a few hrs, then stopped in the Validation step, the error message is as below. ... ^M100%|█████████▉| 452/453 [1:07:07<00:08, 8.75s/it, loss=1.23]^[[A^[[A^[[A ^MValidation: 0%| | 0/40 [00:00<?, ?it/s]^[[A^[[A^[[ATraceback (most recent call last): File "./resnet34_pretrained_PNG_nogui_2.py", line 279, in <module> learner.fit(lr,1,callbacks=[f1_callback]) File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 302, in fit return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs) File "/project/6000192/jemmyhu/resnet_png/fastai/learner.py", line 249, in fit_gen swa_eval_freq=swa_eval_freq, **kwargs) File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 162, in fit vals = validate(model_stepper, cur_data.val_dl, metrics, epoch, seq_first=seq_first, validate_skip = validate_skip) File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in validate res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics]) File "/project/6000192/jemmyhu/resnet_png/fastai/model.py", line 241, in <listcomp> res.append([to_np(f(datafy(preds), datafy(y))) for f in metrics]) File "./resnet34_pretrained_PNG_nogui_2.py", line 237, in __call__ self.TP += (preds*targs).float().sum(dim=0) TypeError: add(): argument 'other' (position 1) must be Tensor, not numpy.ndarray The link for the original code is https://www.kaggle.com/iafoss/pretrained-resnet34-with-rgby-0-460-public-lb lines 279 and 237 in my copy are shown below: 226 class F1: 227 __name__ = 'F1 macro' 228 def __init__(self,n=28): 229 self.n = n 230 self.TP = np.zeros(self.n) 231 self.FP = np.zeros(self.n) 232 self.FN = np.zeros(self.n) 233 234 def __call__(self,preds,targs,th=0.0): 235 preds = (preds > th).int() 236 targs = targs.int() 237 self.TP += (preds*targs).float().sum(dim=0) 238 self.FP += (preds > targs).float().sum(dim=0) 239 self.FN += (preds < targs).float().sum(dim=0) 240 score = (2.0*self.TP/(2.0*self.TP + self.FP + self.FN + 1e-6)).mean() 241 return score 276 lr = 0.5e-2 277 with warnings.catch_warnings(): 278 warnings.simplefilter("ignore") 279 learner.fit(lr,1,callbacks=[f1_callback]) Could anyone have a clue for the issue? Many thanks, Jemmy
OK, the error seems to be specific to the latest pytorch-1.0.0. After downgrading to pytorch-0.4.1, the code works (it gets past the failing lines at this point). I still have no idea how to make the code work with pytorch-1.0.0.
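One possible direction, offered only as an untested sketch: keep the accumulators as torch tensors instead of numpy arrays, so the in-place additions stay tensor-to-tensor under pytorch-1.0.0:
import torch

class F1:
    __name__ = 'F1 macro'
    def __init__(self, n=28):
        # accumulate in tensors rather than numpy arrays
        self.TP = torch.zeros(n)
        self.FP = torch.zeros(n)
        self.FN = torch.zeros(n)

    def __call__(self, preds, targs, th=0.0):
        preds = (preds > th).int()
        targs = targs.int()
        self.TP += (preds * targs).float().sum(dim=0).cpu()
        self.FP += (preds > targs).float().sum(dim=0).cpu()
        self.FN += (preds < targs).float().sum(dim=0).cpu()
        return (2.0 * self.TP / (2.0 * self.TP + self.FP + self.FN + 1e-6)).mean()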
https://stackoverflow.com/questions/53938977/
How to install CUDA in Google Colab - Cannot initialize CUDA without ATen_cuda library
I am trying to use cuda in Goolge Colab but while running my program I get the following error. RuntimeError: Cannot initialize CUDA without ATen_cuda library. PyTorch splits its backend into two shared libraries: a CPU library and a CUDA library; this error has occurred because you are trying to use some CUDA functionality, but the CUDA library has not been loaded by the dynamic linker for some reason. The CUDA library MUST be loaded, EVEN IF you don't directly use any symbols from the CUDA library! One common culprit is a lack of -Wl,--no-as-needed in your link arguments; many dynamic linkers will delete dynamic library dependencies if you don't depend on any of their symbols. You can check if this has occurred by using ldd on your binary to see if there is a dependency on *_cuda.so library. I have the following libraries installed. from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1- {platform}-linux_x86_64.whl torchvision %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import time import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models !pip install Pillow==5.3.0 # import the new one import PIL And I am trying to run the following code. for device in ['cpu', 'cuda']: criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) start = time.time() outputs = model.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() if ii==3: break print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
Have you tried the following? Go to Menu > Runtime > Change runtime type and set the hardware accelerator to GPU. See also: How to install CUDA in Google Colab GPU's
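After switching the runtime you can verify that PyTorch actually sees the GPU, for example:
import torch
print(torch.cuda.is_available())      # should print True on a GPU runtime
print(torch.cuda.get_device_name(0))  # name of the GPU Colab assigned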
https://stackoverflow.com/questions/53946404/
How to optimize the inference script to get a faster prediction of the classifier?
I have written the following prediction code that predicts from a trained classifier model. Now the prediction time is around 40s which I want to reduce as much as possible. Can I do any optimization to my inference script or should I look for developments in training script? import torch import torch.nn as nn from torchvision.models import resnet18 from torchvision.transforms import transforms import matplotlib.pyplot as plt import numpy as np from torch.autograd import Variable import torch.functional as F from PIL import Image import os import sys import argparse import time import json parser = argparse.ArgumentParser(description = 'To Predict from a trained model') parser.add_argument('-i','--image', dest = 'image_name', required = True, help='Path to the image file') args = parser.parse_args() def predict_image(image_path): print("prediciton in progress") image = Image.open(image_path) transformation = transforms.Compose([ transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) image_tensor = transformation(image).float() image_tensor = image_tensor.unsqueeze_(0) if cuda: image_tensor.cuda() input = Variable(image_tensor) output = model(input) index = output.data.numpy().argmax() return index def parameters(): hyp_param = open('param_predict.txt','r') param = {} for line in hyp_param: l = line.strip('\n').split(':') def class_mapping(index): with open("class_mapping.json") as cm: data = json.load(cm) if index == -1: return len(data) else: return data[str(index)] def segregate(): with open("class_mapping.json") as cm: data = json.load(cm) try: os.mkdir(seg_dir) print("Directory " , seg_dir , " Created ") except OSError: print("Directory " , seg_dir , " already created") for x in range (0,len(data)): dir_path="./"+seg_dir+"/"+data[str(x)] try: os.mkdir(dir_path) print("Directory " , dir_path , " Created ") except OSError: print("Directory " , dir_path , " already created") path_to_model = "./models/"+'trained.model' checkpoint = torch.load(path_to_model) seg_dir="segregation_folder" cuda = torch.cuda.is_available() num_class = class_mapping(index=-1) print num_class model = resnet18(num_classes = num_class) if cuda: model.load_state_dict(checkpoint) else: model.load_state_dict(checkpoint, map_location = 'cpu') model.eval() if __name__ == "__main__": imagepath = "./Predict_Image/"+args.image_name since = time.time() img = Image.open(imagepath) prediction = predict_image(imagepath) name = class_mapping(prediction) print("Time taken = ",time.time()-since) print("Predicted Class: ",name) The entire project can be found at https://github.com/amrit-das/custom_image_classifier_pytorch/
Without output from your profiler it's difficult to tell how much of that is because of inefficiencies in your code. That being said, PyTorch has a lot of startup overhead - in other words, it is slow to initialize the library, build the model, load the weights and transfer them to the GPU, compared to the inference time for a single image. This makes it a pretty poor fit for a CLI utility doing single-image prediction. If your use-case really requires working with single images instead of batch processing, there is not much potential for optimization. Two options I see are: first, it may be worth skipping GPU execution altogether and saving on GPU allocations and transfers (a rough sketch of this follows below); second, you will get better performance writing this code in C++ using LibTorch, although that is a lot of development work.
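A rough sketch of the first option (reusing path_to_model, num_class and a preprocessed image_tensor from the question's script): keep everything on the CPU and skip autograd during prediction:
import torch
from torchvision.models import resnet18

checkpoint = torch.load(path_to_model, map_location='cpu')  # never touch the GPU
model = resnet18(num_classes=num_class)
model.load_state_dict(checkpoint)
model.eval()

with torch.no_grad():                     # no gradient bookkeeping at inference time
    output = model(image_tensor)          # image_tensor: shape (1, 3, 224, 224)
    index = output.argmax(dim=1).item()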
https://stackoverflow.com/questions/53954653/
Use Pytorch SSIM loss function in my model
I am trying out this SSIM loss, implemented by this repo, for image restoration. Following the original sample code on the author's GitHub, I tried: model.train() for epo in range(epoch): for i, data in enumerate(trainloader, 0): inputs = data inputs = Variable(inputs) optimizer.zero_grad() inputs = inputs.view(bs, 1, 128, 128) top = model.upward(inputs) outputs = model.downward(top, shortcut = True) outputs = outputs.view(bs, 1, 128, 128) if i % 20 == 0: out = outputs[0].view(128, 128).detach().numpy() * 255 cv2.imwrite("/home/tk/Documents/recover/SSIM/" + str(epo) + "_" + str(i) + "_re.png", out) loss = - criterion(inputs, outputs) ssim_value = - loss.data.item() print (ssim_value) loss.backward() optimizer.step() However, the results didn't come out as I expected. After the first 10 epochs, the printed output images were all black. loss = - criterion(inputs, outputs) is what the author proposes; however, classical Pytorch training code would be loss = criterion(y_pred, target), so it should be loss = criterion(inputs, outputs) here. However, I tried loss = criterion(inputs, outputs) and the results are still the same. Can anyone share some thoughts about how to properly utilize SSIM loss? Thanks.
The author is trying to maximize the SSIM value. The natural understanding of the pytorch loss function and optimizer working is to reduce the loss. But the SSIM value is quality measure and hence higher the better. Hence the author uses loss = - criterion(inputs, outputs) You can instead try using loss = 1 - criterion(inputs, outputs) as described in this paper. Modified code (max_ssim.py) for testing the above thing using this repo import pytorch_ssim import torch from torch.autograd import Variable from torch import optim import cv2 import numpy as np npImg1 = cv2.imread("einstein.png") img1 = torch.from_numpy(np.rollaxis(npImg1, 2)).float().unsqueeze(0)/255.0 img2 = torch.rand(img1.size()) if torch.cuda.is_available(): img1 = img1.cuda() img2 = img2.cuda() img1 = Variable( img1, requires_grad=False) img2 = Variable( img2, requires_grad = True) print(img1.shape) print(img2.shape) # Functional: pytorch_ssim.ssim(img1, img2, window_size = 11, size_average = True) ssim_value = 1-pytorch_ssim.ssim(img1, img2).item() print("Initial ssim:", ssim_value) # Module: pytorch_ssim.SSIM(window_size = 11, size_average = True) ssim_loss = pytorch_ssim.SSIM() optimizer = optim.Adam([img2], lr=0.01) while ssim_value > 0.05: optimizer.zero_grad() ssim_out = 1-ssim_loss(img1, img2) ssim_value = ssim_out.item() print(ssim_value) ssim_out.backward() optimizer.step() cv2.imshow('op',np.transpose(img2.cpu().detach().numpy()[0],(1,2,0))) cv2.waitKey()
https://stackoverflow.com/questions/53956932/
About autograd in pyorch, Adding new user-defined layers, how should I make its parameters update?
Hi everyone! My task is an optical-flow-generation problem. I have two raw images and optical flow data as ground truth; my algorithm generates optical flow from the raw images, and the Euclidean distance between the generated optical flow and the ground truth can be defined as a loss value, so backpropagation can be used to update the parameters. I treat it as a regression problem, and I have two ideas now: I can set every parameter to requires_grad = True and compute a loss, then call loss.backward() to obtain the gradients, but I don't know how to add these parameters to an optimizer so they get updated. I can write my algorithm as a model. If I design a "custom" model, I can initialize several layers such as nn.Conv2d(), nn.Linear() in def __init__() and I can update their parameters via something like torch.optim.Adam(model.parameters()), but if I define new layers by myself, how should I add these layers' parameters to the collection of parameters being updated??? This problem has confused me for several days. Are there any good methods for updating user-defined parameters? I would be very grateful if you could give me some advice!
Tensor values have their gradients calculated if they Have requires_grad == True Are used to compute some value (usually loss) on which you call .backward(). The gradients will then be accumulated in their .grad parameter. You can manually use them in order to perform arbitrary computation (including optimization). The predefined optimizers accept an iterable of parameters and model.parameters() does just that - it returns an iterable of parameters. If you have some custom "free-floating" parameters you can pass them as my_params = [my_param_1, my_param_2] optim = torch.optim.Adam(my_params) and you can also merge them with the other parameter iterables like below: model_params = list(model.parameters()) my_params = [my_param_1, my_param_2] optim = torch.optim.Adam(model_params + my_params) In practice however, you can usually structure your code to avoid that. There's the nn.Parameter class which wraps tensors. All subclasses of nn.Module have their __setattr__ overridden so that whenever you assign an instance of nn.Parameter as its property, it will become a part of Module's .parameters() iterable. In other words class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.my_param_1 = nn.Parameter(torch.tensor(...)) self.my_param_2 = nn.Parameter(torch.tensor(...)) will allow you to write module = MyModule() optim = torch.optim.Adam(module.parameters()) and have the optim update module.my_param_1 and module.my_param_2. This is the preferred way to go, since it helps keep your code more structured You won't have to manually include all your parameters when creating the optimizer You can call module.zero_grad() and zero out the gradient on all its children nn.Parameters. You can call methods such as module.cuda() or module.double() which, again, work on all children nn.Parameters instead of requiring to manually iterate through them.
https://stackoverflow.com/questions/53958898/
Windows 10, CUDA 9: CUDA driver version is insufficient for CUDA runtime version at ..\src\THC\THCG
My env configuration: python 3.6, tensorflow-GPU 1.3, CUDA 9.0, VS 2013, torch 0.4.0. Running the CUDA 9.0 samples succeeds, but when I run the pytorch code, I get the following error: File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 249, in return self._apply(lambda t: t.cuda(device)) RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at ..\src\THC\THCGeneral.cpp:70 I have tried reinstalling CUDA, but the error still exists.
It is telling you the GPU driver you have installed cannot cope with the CUDA version you are using. Either update your driver to the latest version or downgrade your CUDA runtime to a version supported by your GPU driver.
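To see the two versions being compared, you can check which CUDA runtime PyTorch was built against and compare it with the driver version reported by nvidia-smi, for example:
import torch
print(torch.version.cuda)               # CUDA runtime version PyTorch was built with
print(torch.backends.cudnn.version())   # bundled cuDNN version
# compare with the "Driver Version" line printed by `nvidia-smi` on the command line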
https://stackoverflow.com/questions/53959762/
Pytorch: How does SGD with momentum works when optimizer has to call zero_grad() to help accumulation of gradients?
In pytorch, the backward() function accumulates gradients and we have to reset them every mini-batch by calling optimizer.zero_grad(). In this case, how does SGD with momentum work, given that momentum SGD updates the weights using an exponential average over past mini-batches? As a beginner in Pytorch, I am confused. Doesn't it require past gradients to perform updates?
When using momentum you need to store a one-element history for each parameter; other solvers (e.g. ADAM) require even more. The optimizer knows how to store this history data and accumulate new gradients in an orderly fashion. You do not have to worry about it. So why zero_grad(), you probably ask yourself? Well, sometimes an entire minibatch does not fit into GPU memory and you want to split its processing into several "mini"-minibatches, but without decreasing the effective batch size used for computing the gradients and weight updates. In that case, you call zero_grad() once, do forward and backward for all the mini-minibatches and only then call optimizer.step() - this step uses the gradients accumulated over all the mini-minibatches, so you get an effective update as if you ran a single minibatch. See this thread for more details. Some more information about gradients and optimizers in pytorch can be found here and here.
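A minimal sketch of that pattern (split() and the variable names are placeholders for your own data handling):
optimizer.zero_grad()                                     # reset accumulated gradients once
for inputs, targets in split(batch, n_chunks):            # placeholder chunking of one large batch
    loss = criterion(model(inputs), targets) / n_chunks   # keep the overall loss scale
    loss.backward()                                       # gradients accumulate across the chunks
optimizer.step()                                          # one update, as if the full batch was processed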
https://stackoverflow.com/questions/53981485/
How can I select single indices over a dimension in pytorch?
Assume I have a tensor sequences of shape [8, 12, 2]. Now I would like to make a selection of that tensor for each first dimension which results in a tensor of shape [8, 2]. The selection over dimension 1 is specified by indices stored in a long tensor indices of shape [8]. I tried this, however it selects each index in indices for each first dimension in sequences instead of only one. sequences[:, indices] How can I make this query without a slow and ugly for loop?
sequences[torch.arange(sequences.size(0)), indices]
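For example, this picks one position per row along dimension 1:
import torch

sequences = torch.randn(8, 12, 2)
indices = torch.randint(0, 12, (8,))   # one index per sequence
selected = sequences[torch.arange(sequences.size(0)), indices]
print(selected.shape)                  # torch.Size([8, 2])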
https://stackoverflow.com/questions/53986301/
How does torch.empty calculate the values?
Every time I run torch.empty(5, 3) I get one of those two results: >>> torch.empty(5, 3) tensor([[ 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000], [ 0.0000, -0.0000, 0.0000], [ 0.0000, 0.0000, -50716.6250]]) >>> torch.empty(5, 3) tensor([[0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]]) I tried this multiple times and I still get one of those two results. I tried changing the size number -50716.6250 appeared again. Are the values here random? Why are those numbers reoccurring?
torch.empty returns "a tensor filled with uninitialized data": the values are simply whatever happened to be in the block of memory that was allocated, which is why they are not meaningfully random and why the same leftover values often reappear across calls. If you want a tensor filled with zeros, use torch.zeros.
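For example:
import torch

a = torch.empty(5, 3)          # whatever bytes were already in the allocated memory
b = torch.zeros(5, 3)          # explicitly filled with 0.0
c = torch.full((5, 3), 7.0)    # or any other constant value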
https://stackoverflow.com/questions/53987380/
Too many files open
import PIL as Image Image.fromarray(cv2.imread(link, cv2.IMREAD_GRAYSCALE)) I'm currently trying to complete a project but I'm constantly getting too many files open error on my linux GPU server which crashes the server. I'm loading 3 images for CNN classification using the code as shown above. Anyone facing the same problem have a solution to this? Thank you.
Try switching to the file_system sharing strategy by adding this to your script: import torch.multiprocessing torch.multiprocessing.set_sharing_strategy('file_system')
https://stackoverflow.com/questions/54000317/
AssertionError: Torch not compiled with CUDA enabled
From https://pytorch.org/ to install pytorch on MacOS the following is stated : conda install pytorch torchvision -c pytorch # MacOS Binaries dont support CUDA, install from source if CUDA is needed Why would want to install pytorch without cuda enabled ? Reason I ask is I receive error : --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) in () 78 # predicted = outputs.data.max(1)[1] 79 ---> 80 output = model(torch.tensor([[1,1]]).float().cuda()) 81 predicted = output.data.max(1)[1] 82 ~/anaconda3/lib/python3.6/site-packages/torch/cuda/init.py in _lazy_init() 159 raise RuntimeError( 160 "Cannot re-initialize CUDA in forked subprocess. " + msg) --> 161 _check_driver() 162 torch._C._cuda_init() 163 _cudart = _load_cudart() ~/anaconda3/lib/python3.6/site-packages/torch/cuda/init.py in _check_driver() 73 def _check_driver(): 74 if not hasattr(torch._C, '_cuda_isDriverSufficient'): ---> 75 raise AssertionError("Torch not compiled with CUDA enabled") 76 if not torch._C._cuda_isDriverSufficient(): 77 if torch._C._cuda_getDriverVersion() == 0: AssertionError: Torch not compiled with CUDA enabled when attempting to execute code : x = torch.tensor([[0,0] , [0,1] , [1,0]]).float() print(x) y = torch.tensor([0,1,1]).long() print(y) my_train = data_utils.TensorDataset(x, y) my_train_loader = data_utils.DataLoader(my_train, batch_size=2, shuffle=True) # Device configuration device = 'cpu' print(device) # Hyper-parameters input_size = 2 hidden_size = 100 num_classes = 2 learning_rate = 0.001 train_dataset = my_train train_loader = my_train_loader pred = [] for i in range(0 , model_iters) : # Fully connected neural network with one hidden layer class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_size, num_classes) def forward(self, x): out = self.fc1(x) out = self.relu(out) out = self.fc2(out) return out model = NeuralNet(input_size, hidden_size, num_classes).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): # Move tensors to the configured device images = images.reshape(-1, 2).to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item())) output = model(torch.tensor([[1,1]]).float().cuda()) To fix this error I need to install pytorch from source with cuda already installed ?
To summarize and expand on the comments: CUDA is an Nvidia proprietary (apparently unlicensed) technology to allow general computing on GPU processors. Very few MacBook Pros have an Nvidia CUDA-capable GPU. Take a look here to see whether your MBP has an Nvidia GPU, then look at the table here to see if that GPU supports CUDA. The same goes for the iMac, iMac Pro and Mac Pro. Therefore, PyTorch is installed without CUDA support by default on MacOS. This PyTorch github issue mentions that very few Macs have Nvidia processors: https://github.com/pytorch/pytorch/issues/30664 If your Mac does have a CUDA-capable GPU, then to use CUDA commands on MacOS you'll need to recompile pytorch from source with the correct command line options.
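If you want the same script (reusing the NeuralNet definition from the question) to run whether or not a CUDA GPU is present, a common pattern is to pick the device at runtime instead of calling .cuda() unconditionally:
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
output = model(torch.tensor([[1, 1]]).float().to(device))
predicted = output.data.max(1)[1]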
https://stackoverflow.com/questions/54014220/
Pylint with pytorch: Is there a way to tell pylint to look for module in different place?
I am using pytorch and pylint does not recognize few functions for ex: torch.stack however, if I do import torch._C as torch it seems to work fine. If I do above, actual modules that exist inside torch package like torch.cuda or torch.nn need to imported individually as simply doing torch.cuda would point to torch._C.cuda and hence won't work. Is there a way to tell pylint to look at both torch and torch._C when I do import torch or even whenever it sees torch? I don't think I would use torch to reference any other thing in my code.
A solution for now is to add torch to generated-members: pylint --generated-members="torch.*" ... or in pylintrc under the [TYPECHECK] section: generated-members=torch.* I found this solution in a reply to the github discussion of the pytorch issue [Minor Bug] Pylint E1101 Module 'torch' has no 'from_numpy' member #701. Less satisfying than whitelisting, because I guess it won't catch if you reference something that actually isn't a member, but it's the best solution I've come across so far.
https://stackoverflow.com/questions/54030714/
Tensorflow finfo (numeric limits)
How do I access something like numpy.finfo (Or torch.finfo) in the tensorflow Python API? I want to look up things like the smallest increment or largest finite value of the given type (for example tf.float32). Some attributes are accessible directly; tf.float32.max >> -3.4028235e+38 tf.float32.min >> -3.4028235e+38 But how about epsilon, infinity or similar? I would expect a straightforward interface to c++ std::numeric_limits, but I can't seem to find it.
You can always check this via PyTorch. Should be the same as in TF. import torch print(torch.finfo(torch.float16).eps) #0.0009765625 print(torch.finfo(torch.float32).eps) #1.1920928955078125e-07 print(torch.finfo(torch.float64).eps) #2.220446049250313e-16 print(torch.finfo(torch.double).eps) #2.220446049250313e-16 Else this will be keras epsilon way: import tensorflow as tf tf.keras.backend.epsilon() #1e-07 There is also set_epsilon() you can play with.
https://stackoverflow.com/questions/54047155/
Speed up datasets loading on Google Colab
I am working on image classification on the German Traffic Sign Dataset on Google Colab with Pytorch. Here is the structure of the dataset: GTSRB Training 00000/ *.ppmm … 00043/ *.ppmm Test *.ppmm … labels.csv I have managed to upload the whole dataset to my drive (it took a long time!!!). I have used the ImageFolder class and the Dataset class to load the training and test images respectively. However, training my model is really slow, and the GPU is not used efficiently. After many searches, I discovered that file transfer from drive to Colab is at fault here. Does anyone know how I can use an HDF5 dataset (or other techniques) to first store all training and test images for later preprocessing?
If your problem truly is the network speed between Colab and Drive, you should try uploading the files directly to the Google Colab instance, rather than accessing them from Drive. from google.colab import files dataset_file_dict = files.upload() Doing this will save the files directly to your Colab instance, allowing your code to access the files locally. However, I'd suspect that there might be other problems besides the network latency – perhaps your model has lots of parameters, or somehow there was a bug in the code to get CUDA going. Sometimes I would forget to change my runtime to a GPU runtime under the "Runtime" menu tab, "Change Runtime Type". Hope this helps!
https://stackoverflow.com/questions/54049440/
Can't send pytorch tensor to cuda
I create a torch tensor and I want it to go to GPU but it doesn't. This is so broken. What's wrong? def test_model_works_on_gpu(): with torch.cuda.device(0) as cuda: some_random_d_model = 2 ** 9 five_sentences_of_twenty_words = torch.from_numpy(np.random.random((5, 20, T * d))).float() five_sentences_of_twenty_words_mask = torch.from_numpy(np.ones((5, 1, 20))).float() pytorch_model = make_sentence_model(d_model=some_random_d_model, T_sgnn=T, d_sgnn=d) five_sentences_of_twenty_words.to(cuda) five_sentences_of_twenty_words_mask.to(cuda) print(type(five_sentences_of_twenty_words), type(five_sentences_of_twenty_words_mask)) print(five_sentences_of_twenty_words.is_cuda, five_sentences_of_twenty_words_mask.is_cuda) pytorch_model.to(cuda) output_before_match = pytorch_model(five_sentences_of_twenty_words, five_sentences_of_twenty_words_mask) assert output_before_match.shape == (5, some_random_d_model) print(type(output_before_match)) print(output_before_match.is_cuda, output_before_match.get_device()) tests/test_model.py:58: RuntimeError <class 'torch.Tensor'> <class 'torch.Tensor'> False False <class 'torch.Tensor'> > print(output_before_match.is_cuda, output_before_match.get_device()) E RuntimeError: get_device is not implemented for tensors with CPU backend Also: >>> torch.cuda.is_available() True >>> torch.cuda.device_count() 2 And: pip freeze | grep -i torch torch==1.0.0 torchvision==0.2.1
Your issue is the following lines: five_sentences_of_twenty_words.to(cuda) five_sentences_of_twenty_words_mask.to(cuda) .to(device) only operates in place when applied to a model. When applied to a tensor, it must be assigned: five_sentences_of_twenty_words = five_sentences_of_twenty_words.to(cuda) five_sentences_of_twenty_words_mask = five_sentences_of_twenty_words_mask.to(cuda)
https://stackoverflow.com/questions/54060499/
How do I add a layer to a neural network in PyTorch
I want to add a layer to a neural network programmatically, but it returned this error TypeError: forward() missing 1 required positional argument: 'x' class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.fc1 = nn.Linear(1, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x def num_flat_features(self, x): size = x.size()[1:] num_features = 1 for s in size: num_features *= s return num_features netz =Net() print(netz) netz = nn.Sequential([nn.Linear(10, 120), netz()]) print(netz) The same error happened when I was loading it with netz = torch.load(). The line which seems to cause the error is netz = nn.Sequential([nn.Linear(10, 120), netz()]). How do I make it work?
Ok, so there are several things. To begin with, why are you calling netz()? You already instantiated the object earlier with netz = Net(), so this makes no sense. Second, nn.Sequential expects *args as its "constructor" arguments, so you pass module instances directly: netz = nn.Sequential(Net(), nn.Linear(100,100)) or you unpack a list of them: nn.Sequential(*[nn.Linear(100,100), Net()]). You can also add multiple modules using an OrderedDict, as is well documented in the PyTorch docs (which you should consult by the way - they're there for a reason!) model = nn.Sequential(OrderedDict([ ('conv1', nn.Conv2d(1,20,5)), ('relu1', nn.ReLU()), ('conv2', nn.Conv2d(20,64,5)), ('relu2', nn.ReLU()) ])) You can also add a module to an existing collection of ordered modules with my_modules.add_module("my_module_name", Net()).
https://stackoverflow.com/questions/54062495/