instruction | input | output | source
---|---|---|---|
How to upgrade to pytorch-nightly in google colab? | I ran into a problem. How can I solve it?
I want to run pytorch-nightly on Colab; all my code targets the pytorch-nightly version because of the new packages in it.
I searched for a solution and tried the code below, but it does not work even after restarting the runtime:
from os.path import exists
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install torchvision_nightly
!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/{accelerator}/torch_nightly.html
Help!!
| You're using the wrong package name. As mentioned on the PyTorch website, use this:
!pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html -U
Here, the -U option is for upgrading (since a stable PyTorch version is already installed on Colab).
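You can verify which build is active after the install and runtime restart (nightly version strings carry a .devYYYYMMDD suffix):
import torch
print(torch.__version__)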
| https://stackoverflow.com/questions/64382934/ |
How can I extract the weight and bias of Linear layers in PyTorch? | In model.state_dict(), model.parameters() and model.named_parameters(), the weights and biases of nn.Linear() modules are contained separately, e.g. fc1.weight and fc1.bias. Is there a simple pythonic way to get both of them?
Expected example looks similar to this:
layer = model['fc1']
print(layer.weight)
print(layer.bias)
| You can recover the named parameters for each linear layer in your model like so:
from torch import nn
for layer in model.children():
    if isinstance(layer, nn.Linear):
        print(layer.state_dict()['weight'])
        print(layer.state_dict()['bias'])
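If the layer's attribute name is known (as with fc1 in the question), a shorter route is to look the module up by name and read its weight and bias attributes directly; a minimal sketch:
layer = dict(model.named_modules())['fc1']
print(layer.weight)
print(layer.bias)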
| https://stackoverflow.com/questions/64390904/ |
How do I create a shear matrix for PyTorch's F.affine_grid & F.grid_sample? | I need to create a shear matrix that is autograd compatible, works on B,C,H,W tensors, and takes input values (possibly generated randomly) for the shear values. How can I generate the shear matrix for this?
import torch
import torch.nn.functional as F
import torchvision.transforms as transforms
from PIL import Image
# Load image
def preprocess_simple(image_name, image_size):
    Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
    image = Image.open(image_name).convert('RGB')
    return Loader(image).unsqueeze(0)
# Save image
def deprocess_simple(output_tensor, output_name):
    output_tensor.clamp_(0, 1)
    Image2PIL = transforms.ToPILImage()
    image = Image2PIL(output_tensor.squeeze(0))
    image.save(output_name)
def get_shear_mat(theta):
    ...
    return shear_mat
def shear_img(x, theta, dtype):
    shear_mat = get_shear_mat(theta)
    grid = F.affine_grid(shear_mat, x.size()).type(dtype)
    x = F.grid_sample(x, grid)
    return x
# Shear tensor
test_input = # Test image
shear_values = (3,4) # Example values
sheared_tensor = shear_img(test_input, shear_values)
| Say m is the shear factor, then theta = atan(1/m) is the shear angle.
You can now pick either horizontal shear or vertical shear. Here's how you implement get_shear_mat such that you can pick horizontal shear by setting ax=0 and vertical shear by setting ax=1:
def get_shear_mat(theta, ax=0):
    assert ax in [0, 1]
    m = 1 / torch.tan(torch.tensor(theta))
    if ax == 0:  # Horizontal shear
        shear_mat = torch.tensor([[1, m, 0],
                                  [0, 1, 0]])
    else:  # Vertical shear
        shear_mat = torch.tensor([[1, 0, 0],
                                  [m, 1, 0]])
    return shear_mat
Notice that a shear mapping is just a mapping of point (x,y) in the original image to the point (x+my,y) for horizontal shear, and (x,y+mx) for vertical shear. This is exactly what we do here by defining the shear_mat as above.
An optional modification to shear_img to support batched input in the first dimension. Also adding an argument ax to shear_img to define whether we want a horizontal (ax=0) or vertical (ax=1) shear:
def shear_img(x, ax, theta, dtype):
    shear_mat = get_shear_mat(theta, ax)[None, ...].type(dtype).repeat(x.shape[0], 1, 1)
    grid = F.affine_grid(shear_mat, x.size()).type(dtype)
    x = F.grid_sample(x.type(dtype), grid)
    return x
Let's test this implementation on an image:
# Let im be a 4D tensor of shape BxCxHxW (an image or a batch of images):
import numpy as np
import matplotlib.pyplot as plt
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor  # Set type of data
sheared_im = shear_img(im, 0, np.pi/4, dtype)  # Horizontal shear by shear angle of pi/4
plt.imshow(sheared_im.squeeze(0).permute(1, 2, 0) / 255)
plt.show()
If im is our dancing cat with a skirt:
Then our plot will be:
If we want a vertical shear:
sheared_im = shear_img(im, 1, np.pi/4, dtype) # Vertical shear by shear angle of pi/4
plt.imshow(sheared_im.squeeze(0).permute(1, 2, 0)/255)
plt.show()
We obtain:
Hooray!
| https://stackoverflow.com/questions/64394325/ |
How can I make my deeplearning chatbot use the correct path to my dataset? | I am trying to get the code from a deep learning chatbot to work. This chatbot uses PyTorch and the dataset from the Cornell movie corpus. But the code can't seem to find the path to the dataset, and I don't know how to code it in. This is the source for the deep learning chatbot code: https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/chatbot_tutorial.ipynb
This is as far as I've gotten with it.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")
corpus_name = "cornell movie-dialogs corpus"
corpus = os.path.join("data", corpus_name)
def printLines(file, n=10):
    with open(file, 'rb') as datafile:
        lines = datafile.readlines()
    for line in lines[:n]:
        print(line)
printLines(os.path.join(corpus, "movie_lines.txt"))
And this is my error log
D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py:26: SyntaxWarning: 'str' object is not callable; perhaps you missed a comma?
corpus = "D:\Documents\Python\intents\cornell_movie_dialogs_corpus.zip\cornell movie-dialogs corpus\\"("data", corpus_name)
Traceback (most recent call last):
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 26, in <module>
corpus = "D:\Documents\Python\intents\cornell_movie_dialogs_corpus.zip\cornell movie-dialogs corpus\\"("data", corpus_name)
TypeError: 'str' object is not callable
I hope there is a solution that doesn't alter the source code too much, but any tips or help are welcome.
| I solved my problem by removing everything but the cornell movie-dialogs corpus folder, and then replacing the word "data" in line 26 with the directory of that folder. This fixed it for me.
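A minimal sketch of what the fixed line 26 could look like, assuming the corpus folder was extracted to the path mentioned in the question:
corpus_name = "cornell movie-dialogs corpus"
corpus = os.path.join(r"D:\Documents\Python\intents", corpus_name)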
| https://stackoverflow.com/questions/64395047/ |
Low accuracy binary classification with Pytorch | I am practicing deep learning for binary classification with PyTorch on the Breast Cancer Wisconsin Diagnostic dataset.
I've tried different approaches, and the best I can get is below; the accuracy is still low at 61%.
What can I do to improve the accuracy?
Thank you.
import pandas as pd
import io
dataset = pd.read_excel(base_dir + "Breast-Cancer-Wisconsin-Diagnostic.xlsx")
number_of_columns = dataset.shape[1]
# training and testing split of 70:30
dataset['diagnosis'] = pd.Categorical(dataset['diagnosis']).codes
dataset = dataset.sample(frac=1, random_state=1234)
train_input = dataset.values[:398, :number_of_columns-1]
train_target = dataset.values[:398, number_of_columns-1]
test_input = dataset.values[398:, :number_of_columns-1]
test_target = dataset.values[398:, number_of_columns-1]
import torch
torch.manual_seed(1234)
hidden_units = 5
net = torch.nn.Sequential(
    torch.nn.Linear(number_of_columns-1, hidden_units),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_units, 2))
# choose optimizer and loss function
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1,momentum=0.9)
# train
epochs = 50
for epoch in range(epochs):
    inputs = torch.autograd.Variable(torch.Tensor(train_input).float())
    targets = torch.autograd.Variable(torch.Tensor(train_target).long())
    optimizer.zero_grad()
    out = net(inputs)
    loss = criterion(out, targets)
    loss.backward()
    optimizer.step()
    if epoch == 0 or (epoch + 1) % 10 == 0:
        print('Epoch %d Loss: %.4f' % (epoch + 1, loss.item()))
# Epoch 1 Loss: 412063.1250
# Epoch 10 Loss: 0.6628
# Epoch 20 Loss: 0.6639
# Epoch 30 Loss: 0.6592
# Epoch 40 Loss: 0.6587
# Epoch 50 Loss: 0.6588
import numpy as np
inputs = torch.autograd.Variable(torch.Tensor(test_input).float())
targets = torch.autograd.Variable(torch.Tensor(test_target).long())
optimizer.zero_grad()
out = net(inputs)
_, predicted = torch.max(out.data, 1)
error_count = test_target.size - np.count_nonzero((targets == predicted).numpy())
print('Errors: %d; Accuracy: %d%%' % (error_count, 100 * torch.sum(targets == predicted) // test_target.size))
# Errors: 65; Accuracy: 61%
| The features representing the samples are on different scales, so the first thing you should do is normalize the data (see the sketch after these notes).
You should plot the loss and accuracy over the training epochs for the training and validation/test datasets, to understand whether the model overfits or underfits on the training data.
Furthermore, you can try a more complex (deeper) model. And since your training dataset has a small number of samples, you can consider augmentation and transfer learning as well, if possible.
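A minimal sketch of the normalization step, assuming the train_input/test_input numpy arrays from the question:
import numpy as np
train_input = train_input.astype(np.float64)
test_input = test_input.astype(np.float64)
mean = train_input.mean(axis=0)
std = train_input.std(axis=0) + 1e-8  # guard against zero-variance features
train_input = (train_input - mean) / std
test_input = (test_input - mean) / std  # reuse the training-set statistics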
| https://stackoverflow.com/questions/64398085/ |
PyTorch: defined layer was not involved in the forward propagation but influenced the loss value | Recently, I encountered a confusing phenomenon when I used PyTorch to do simple experiments on logistic regression.
The issue appears when I fix the random seed like this:
def set_seed(seed, cuda=True):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if cuda:
        torch.cuda.manual_seed(seed)
and defined the following model with 2 layers:
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.hidden = nn.Linear(784, 100)
        self.output = nn.Linear(100, 10)
    def forward(self, x):
        x = self.hidden(x)
        x = self.output(x)
        return x
trained the network with:
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
The original loss value was 0.6422, which is reproducible.
However, when I added an additional layer that was not involved in the forward pass, like this:
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.hidden = nn.Linear(784, 100)
        self.output = nn.Linear(100, 10)
        self.add = nn.Linear(10, 10)
    def forward(self, x):
        x = self.hidden(x)
        x = self.output(x)
        return x
the original loss value changed to 0.7431, which is not equal to the previous one, and the model performance dropped simultaneously.
I really wonder what the reason for this is. Thank you!
| This is completely expected if there are other sources of randomness (something that consumes the RNG) before computing the loss. As you didn't provide a Minimal, Reproducible Example, I'd guess that you're using a DataLoader with shuffle=True. In this case, even though you do not use the self.add layer, initializing it consumes the RNG, therefore leading to a different order of the samples.
If the randomness is coming from a DataLoader with shuffle=True, you can control that by providing a different RNG to the DataLoader. Something like this:
import numpy as np
import torch
from torch import nn
import torchvision
from torchvision.transforms import ToTensor
def set_seed(seed, cuda=True):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if cuda:
        torch.cuda.manual_seed(seed)
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.hidden = nn.Linear(784, 100)
        self.output = nn.Linear(100, 10)
        # self.add = nn.Linear(10, 10)  # try with and without
    def forward(self, x):
        x = self.hidden(x)
        x = self.output(x)
        return x
set_seed(0)
m = net()
bs = 4
ds = torchvision.datasets.MNIST(root=".", train=True, transform=ToTensor(), download=True)
rng_dl = torch.Generator()
dl = torch.utils.data.DataLoader(ds, batch_size=bs, shuffle=True, num_workers=0, generator=rng_dl)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(m.parameters(), lr=0.1)
for x, y in dl:
    y_hat = m(x.view(bs, -1))
    l = criterion(y_hat, y)
    print(l)
    exit()
Keep in mind that it could be several other things, such as data augmentation and other calls to functions that rely on random ops. If you can provide an MRE, I could try and give a more specific answer.
| https://stackoverflow.com/questions/64400630/ |
torchvision.transforms.Normalize() slows down learning when added to torchvision.transforms.Compose() | When I use
train_transforms = torchvision.transforms.Compose([
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize((0.1307,), (0.3081,))
])
for loading the MNIST dataset, it slows down learning even with mean = 0 and std = 1.
| The transformations are performed on the CPU, and the cost is incurred no matter the mean/std values (BTW, don't set std to 0). To speed up the transform you have two options:
If you don't have any data augmentations in your flow, just transform the data once and save it as normalized tensors (pickled or something).
You can also use torch.utils.data.DataLoader with some arguments: for example, num_workers specifies how many CPU processes to use to transform the data. There is also pin_memory, which will speed up the whole thing if you are using CUDA.
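A minimal sketch of the DataLoader options mentioned above, assuming an MNIST dataset built with the train_transforms from the question:
from torch.utils.data import DataLoader
train_loader = DataLoader(
    train_dataset,    # e.g. torchvision.datasets.MNIST(..., transform=train_transforms)
    batch_size=64,
    shuffle=True,
    num_workers=4,    # run the transforms in 4 worker processes
    pin_memory=True,  # faster host-to-GPU copies when using CUDA
)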
| https://stackoverflow.com/questions/64405333/ |
Space efficient way to store and read massive 3d dataset? | I am trying to train a neural network on sequential data. My dataset will consist of 3.6 million training examples. Each example will be a 30 x 32 ndarray (32 features observed over 30 days).
My question is what is the most space-efficient way to write and read this data?
Essentially it will have shape (3.6m, 30, 32) and np.save() seems convenient but I can't hold this whole thing in memory so I can't really save it using np.save() (or load it back using np.load()). CSV also won't work because my data has 3 dimensions.
My plan to create the thing is to process entries in batches and append them to some file so that I can keep memory free as I go.
Eventually, I am going to use the data file as an input for a PyTorch IterableDataset so it must be something that can be loaded one line at a time (like a .txt file, but I'm hoping there is some better way to save this data that is more true to its tabular, 3-dimensional nature). Any ideas are appreciated!
| Since you are planning on using an iterable dataset you shouldn't need random access (IterableDataset doesn't support shuffle samplers). In that case, why not just write everything to a binary file and iterate over that? I find in practice this often is much faster than alternative solutions. This should be much faster than saving as a text file since you avoid the overhead of converting text to numbers.
An example implementation may look something like the following. First we could build a binary file as follows (containing random data as a placeholder)
import numpy as np
from tqdm import tqdm
filename = 'data.bin'
num_samples = 3600000
rows, cols = 30, 32
dtype = np.float32
# format: <num_samples> <rows> <cols> <sample0> <sample1>...
with open(filename, 'wb') as fout:
    # write a header that contains the total number of samples and the rows and columns per sample
    fout.write(np.array((num_samples, rows, cols), dtype=np.int32).tobytes())
    for i in tqdm(range(num_samples)):
        # random placeholder
        sample = np.random.randn(rows, cols).astype(dtype)
        # write data to file
        fout.write(sample.tobytes())
Then we could define an IterableDataset as follows
import numpy as np
from torch.utils.data import IterableDataset, DataLoader
from tqdm import tqdm
def binary_reader(filename, start=None, end=None, dtype=np.float32):
    itemsize = np.dtype(dtype).itemsize
    with open(filename, 'rb') as fin:
        num_samples, rows, cols = np.frombuffer(fin.read(3 * np.dtype(np.int32).itemsize), dtype=np.int32)
        start = start if start is not None else 0
        end = end if end is not None else num_samples
        blocksize = itemsize * rows * cols
        start_offset = start * blocksize
        fin.seek(start_offset, 1)
        for _ in range(start, end):
            yield np.frombuffer(fin.read(blocksize), dtype=dtype).reshape(rows, cols).copy()
class BinaryIterableDataset(IterableDataset):
    def __init__(self, filename, start=None, end=None, dtype=np.float32):
        super().__init__()
        self.filename = filename
        self.start = start
        self.end = end
        self.dtype = dtype
    def __iter__(self):
        return binary_reader(self.filename, self.start, self.end, self.dtype)
From a quick test of this dataset on my system (which uses SSD storage) I find I am able to iterate over all 3.6 million samples in about 10 seconds
dataset = BinaryIterableDataset('data.bin')
for sample in tqdm(dataset):
    pass
3600000it [00:09, 374026.17it/s]
Using a DataLoader with batch_size=256 it takes me about 20 seconds to iterate over the whole dataset (converting to tensors and creating batches has some overhead). For this dataset I found that the overhead of transferring data to and from shared memory when using parallel loading is actually quite a bit slower than just using 0 workers. Therefore I recommend using num_workers=0. As with any iterable dataset you would need to add extra logic to support num_workers > 1, though I'm not sure it would be worth it in this case.
loader = DataLoader(dataset, batch_size=256, num_workers=0)
for batch in tqdm(loader):
    # batch is a tensor of shape (256, 30, 32)
    pass
14063it [00:19, 710.49it/s]
Note that the data.bin file would not be portable across systems that use different byte order. Though modifications could be made to support that.
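A minimal sketch of one such modification: pin the byte order explicitly (via numpy's newbyteorder) so the file reads identically on little- and big-endian machines; the same dtypes must then be used in binary_reader:
le_int32 = np.dtype(np.int32).newbyteorder('<')      # little-endian header dtype
le_float32 = np.dtype(np.float32).newbyteorder('<')  # little-endian sample dtype
fout.write(np.array((num_samples, rows, cols), dtype=le_int32).tobytes())
fout.write(sample.astype(le_float32).tobytes())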
| https://stackoverflow.com/questions/64407272/ |
How do I create a scale matrix for rescaling a PyTorch tensor, and then how do I use it? | I need to create a scale matrix that is autograd compatible, works on B,C,H,W tensors, and takes input values (possibly generated randomly) for controlling the scaling. How can I generate and use a scale matrix for this?
import torch
import torch.nn.functional as F
import torchvision.transforms as transforms
from PIL import Image
# Load image
def preprocess_simple(image_name, image_size):
    Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
    image = Image.open(image_name).convert('RGB')
    return Loader(image).unsqueeze(0)
# Save image
def deprocess_simple(output_tensor, output_name):
    output_tensor.clamp_(0, 1)
    Image2PIL = transforms.ToPILImage()
    image = Image2PIL(output_tensor.squeeze(0))
    image.save(output_name)
def get_scale_mat(theta):
    ...
    return scale_mat
def scale_img(x, theta, dtype):
    scale_mat = get_scale_mat(theta)
    # Can F.affine_grid & F.grid_sample be used with a scale matrix?
    grid = F.affine_grid(scale_mat, x.size()).type(dtype)
    x = F.grid_sample(x, grid)
    return x
# Scale tensor
test_input = # Test image
scale = 5 # Example value
scaled_tensor = scale_img(test_input, scale)
| This is how you create and use a 3x2 scale matrix with F.affine_grid and F.grid_sample:
def get_scale_mat(m, device, dtype):
    scale_mat = torch.tensor([[m, 0., 0.],
                              [0., m, 0.]], device=device, dtype=dtype)
    return scale_mat
def scale_tensor(x, scale):
    assert scale > 0
    scale_matrix = get_scale_mat(scale, x.device, x.dtype)[None, ...].repeat(x.shape[0], 1, 1)
    grid = F.affine_grid(scale_matrix, x.size())
    x = F.grid_sample(x, grid)
    return x
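A short usage sketch, assuming a 4D test_input tensor produced by preprocess_simple above; note that with affine_grid a factor above 1 samples a wider area (the content shrinks), while a factor below 1 zooms in:
scaled_tensor = scale_tensor(test_input, 0.5)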
| https://stackoverflow.com/questions/64407726/ |
How can I unroll a PyTorch Tensor? | I have a tensor:
t1 = torch.randn(564, 400)
I want to unroll it to a 1-d tensor that's 225600 long.
How can I do this?
| Note the difference between view and reshape, as suggested by Kris.
From reshape's docstring:
When possible, the returned tensor will be a view
of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying...
So if your tensor is not contiguous, calling reshape handles what you would otherwise have had to handle yourself when using view; that is, you would have to call t1.contiguous().view(...) for non-contiguous tensors.
Also, one could use flatten: t1 = t1.flatten(), an equivalent of view(-1) that is more readable.
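A minimal demonstration of the options above:
import torch
t1 = torch.randn(564, 400)
a = t1.reshape(-1)            # a view when possible, a copy otherwise
b = t1.contiguous().view(-1)  # explicit handling of non-contiguous tensors
c = t1.flatten()              # equivalent to view(-1), arguably more readable
print(a.shape)  # torch.Size([225600])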
| https://stackoverflow.com/questions/64415250/ |
How could I get predictions for my PyTorch Image Classifier? | I implemented this code for my Keras model, where it worked accurately, but now I want to use it for my PyTorch model. I am unable to configure it for making predictions. Below is my full code; I need help with the classify function:
# importing libraries
import tkinter as tk
from tkinter import *
# import PIL
from tkinter import filedialog
import numpy
from PIL import Image, ImageTk
import torch
# importing model
model=torch.load('corelK_model_0.pt')
classes = {
    0: 'africa',
    1: 'beach',
    2: 'tallbuilding',
    3: 'buses',
    4: 'dinosaurs',
    5: 'elephants',
    6: 'Roses',
    7: 'horses',
    8: 'mountains',
    9: 'food'
}
def upload_image():
    file_path = filedialog.askopenfilename()
    uploaded = Image.open(file_path)
    uploaded.thumbnail(((top.winfo_width() / 2.25, (top.winfo_height() / 2.25))))
    im = ImageTk.PhotoImage(uploaded)
    sign_image.configure(image=im)
    sign_image.image = im
    label.configure(text=' ')
    show_classify_button(file_path)
def show_classify_button(file_path):
    classify_btn = Button(top, text="Classify Image", command=lambda: classify(file_path), padx=10, pady=5)
    classify_btn.configure(background="#364156", foreground="white", font=('arial', 10, 'bold'))
    classify_btn.place(relx=0.79, rely=0.46)
def classify(file_path):
    image = Image.open(file_path)
    image = image.resize((32, 32))
    image = numpy.expand_dims(image, axis=0)
    image = numpy.array(image)
    pred = model.predict_classes([image])[0]
    sign = classes[pred]
    print(sign)
    label.configure(foreground='#011638', text=sign)
# initialize GUI
top = tk.Tk() # calling the constructor or creating the object of tk class
top.geometry('800x600') # set height and width
top.title("Image Classification CIFAR10")
top.configure(background="#CDCDCD")
# set Heading
heading = Label(top, text="Image Classifier", pady=20, font=('arial', 20, 'bold'))
heading.configure(background="#CDCDCD", foreground='#364156')
heading.pack()
upload = Button(top, text="Upload an image", command=upload_image, padx=10, pady=5)
heading.configure(background="#364156", foreground='white', font=('arial', 10, 'bold'))
upload.pack(side=BOTTOM, pady=50)
# upload image
sign_image = Label(top)
sign_image.pack(side=BOTTOM, expand=True)
# predicted class
label = Label(top, background="#CDCDCD", font=('arial', 15, 'bold'))
label.pack(side=BOTTOM, expand=True)
top.mainloop()
The model I used is VGG16 after transfer learning, and I saved it using torch.save.
| Can you try the following?
After loading the model, set it to evaluation mode using this statement:
model=torch.load('corelK_model_0.pt')
model.eval()
You're not applying the same image transformations (on the test image) that were used during model training. Your classify function should look like this:
def classify(file_path):
    image = Image.open(file_path)
    image = image.resize((32, 32))
    # Start of transformations
    # Apply the same transforms here that were used during training
    # (e.g. normalization); a minimal stand-in that converts the PIL
    # image to a (1, C, H, W) float tensor:
    image = numpy.array(image, dtype=numpy.float32) / 255.0
    image = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)
    # End of transformations
    # PyTorch models have no predict_classes; run a forward pass and take the argmax
    with torch.no_grad():
        out = model(image)
    pred = out.argmax(dim=1).item()
    sign = classes[pred]
    print(sign)
    label.configure(foreground='#011638', text=sign)
| https://stackoverflow.com/questions/64417242/ |
When I run train.py with YOLACT, I get the error KeyError: 0 | I'm new to machine learning and programming.
Now I'm trying to train YOLACT on my own data.
However, when I run train.py, I get the following error and cannot train.
What can I do to overcome this error?
(yolact) tmori@tmori-Lenovo-Legion-Y740-15IRHg:~/yolact$ python train.py --config=can_config --save_interval=2000
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Initializing weights...
Begin training!
/home/tmori/yolact/utils/augmentations.py:309: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
mode = random.choice(self.sample_options)
[the two warning lines above repeat before every iteration; repeats omitted]
[ 0] 0 || B: 4.840 | C: 16.249 | M: 4.682 | S: 2.749 | T: 28.521 || ETA: 9:18:44 || timer: 3.352
[ 1] 10 || B: 4.535 | C: 9.228 | M: 4.379 | S: 1.867 | T: 20.008 || ETA: 3:25:24 || timer: 0.864
Computing validation mAP (this may take a while)...
Traceback (most recent call last):
File "train.py", line 504, in <module>
train()
File "train.py", line 371, in train
compute_validation_map(epoch, iteration, yolact_net, val_dataset, log if args.log else None)
File "train.py", line 492, in compute_validation_map
val_info = eval_script.evaluate(yolact_net, dataset, train_mode=True)
File "/home/tmori/yolact/eval.py", line 956, in evaluate
prep_metrics(ap_data, preds, img, gt, gt_masks, h, w, num_crowd, dataset.ids[image_idx], detections)
File "/home/tmori/yolact/eval.py", line 427, in prep_metrics
detections.add_bbox(image_id, classes[i], boxes[i,:], box_scores[i])
File "/home/tmori/yolact/eval.py", line 315, in add_bbox
'category_id': get_coco_cat(int(category_id)),
File "/home/tmori/yolact/eval.py", line 293, in get_coco_cat
return coco_cats[transformed_cat_id]
KeyError: 0
I'm trying to develop an AI that finds cans and segments them.
First I annotated only a single class, "can", with labelme, and then created a COCO-format json file with labelme2coco.py.
After that, I modified config.py according to "Custom Datasets" on YOLACT's GitHub and ran train.py.
My development environment is as follows.
OS: Ubuntu 20.04 LTS
Anaconda:4.8.3
Python: 3.6.12
Pytorch: 1.4.0
CUDA Toolkit: 10.1
cuDNN: 7.6.5
| Your class ids in annotations.json should start from 1, not 0. If they start from 0, add a label_map to your "my_custom_dataset" entry in config.py, like this:
'label_map': { 0: 1, 1: 2, 2: 3, ... and so on }
In this case there are 3 classes!
Also, in yolact_base_config in the same script, num_classes should be 1 greater than your number of classes; in this example it would be 4.
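A minimal sketch of the relevant config.py entries for the single "can" class from the question (field names follow YOLACT's custom-dataset instructions, so treat the exact fields as assumptions):
my_custom_dataset = dataset_base.copy({
    'name': 'my_custom_dataset',
    'class_names': ('can',),
    'label_map': {0: 1},  # only needed if the annotation ids start at 0
})
# and in yolact_base_config: 'num_classes': 2  # 1 class + background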
| https://stackoverflow.com/questions/64420059/ |
How do I use two losses for two datasets in one pytorch nn? | I am quite new to PyTorch and deep learning. Here is my question: I have two different datasets with the same feature domain sharing one neural network for a regression problem. The input is the features and the output is the target value. The first dataset uses a normal loss, while for the second dataset I am trying to create a new loss.
I have searched multi-loss problems; people usually have two losses summed up for the backward pass. But I want to use the losses in turn. (When I train on the first dataset, the nn uses the first loss, and when I train on the second dataset, the nn uses the other loss.)
Is this possible to do? I'd appreciate it if anyone has some ideas.
| The loss function does not necessarily have to do with network topology. You can use the corresponding loss with each dataset you use, e.g.
if first_task:
    dataloader = torch.utils.data.DataLoader(first_dataset)
    loss_fn = first_loss_fn
else:
    dataloader = torch.utils.data.DataLoader(second_dataset)
    loss_fn = second_loss_fn
# The pytorch training loop, very roughly
for batch in dataloader:
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(network.forward(x), y)  # calls the corresponding loss function
    loss.backward()
    optimizer.step()
You can do this for the two datasets sequentially (meaning you interleave by epochs):
for batch in dataloader_1:
    ...
    loss = first_loss_fn(...)
for batch in dataloader_2:
    ...
    loss = second_loss_fn(...)
or better
dataset = torch.utils.data.ChainDataset([first_dataset, second_dataset])
dataloader = torch.utils.data.DataLoader(dataset)
You can also do it simultaneously (interleave by examples). The standard way, I think, would be to use torch.utils.data.ConcatDataset
dataset = torch.utils.data.ConcatDataset([first_dataset, second_dataset])
dataloader = torch.utils.data.DataLoader(dataset)
Note that here you need each sample to store information about the dataset it comes from so you can determine which cost to apply.
A simpler way would be to interleave by batches (then you apply the same cost to the entire batch). For this case, one proposed approach is to use separate dataloaders (this way you get flexibility on how often to sample each of them), as sketched below.
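A minimal sketch of batch-wise interleaving with two separate dataloaders, assuming the loaders, losses, network and optimizer from above (zip stops at the shorter loader):
for (x1, y1), (x2, y2) in zip(dataloader_1, dataloader_2):
    # one update with the first dataset and its loss
    optimizer.zero_grad()
    first_loss_fn(network(x1), y1).backward()
    optimizer.step()
    # one update with the second dataset and its loss
    optimizer.zero_grad()
    second_loss_fn(network(x2), y2).backward()
    optimizer.step()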
| https://stackoverflow.com/questions/64427227/ |
Deep learning chatbot specific Index error list index out of range | I am trying to follow a tutorial on how to make a deep learning chatbot with PyTorch. However, this code is quite complex for me and it has stopped with an "IndexError: list index out of range". I looked the error up and got the gist of what it usually means, but seeing as this code is very complex for me, I can't figure out how to solve the error.
This is the source tutorial: https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/chatbot_tutorial.ipynb#scrollTo=LTzdbPF-OBL9
Line 198 seems to be causing the error:
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
This is the error log
Start preparing training data ...
Reading lines...
Traceback (most recent call last):
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 221, in <module>
voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 209, in loadPrepareData
pairs = filterPairs(pairs)
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 202, in filterPairs
return [pair for pair in pairs if filterPair(pair)]
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 202, in <listcomp>
return [pair for pair in pairs if filterPair(pair)]
File "D:\Documents\Python\python pycharm files\pythonProject4\3.9 Chatbot.py", line 198, in filterPair
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
IndexError: list index out of range
Read 442563 sentence pairs
Process finished with exit code 1
And this is my code, copied from PyCharm, up to the block with the error. Seeing as it is huge, I could not copy the entire code; the rest can be found in the GitHub source link above.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")
corpus_name = "cornell movie-dialogs corpus"
corpus = os.path.join("D:\Documents\Python\intents", corpus_name)
def printLines(file, n=10):
    with open(file, 'rb') as datafile:
        lines = datafile.readlines()
    for line in lines[:n]:
        print(line)
printLines(os.path.join(corpus, "movie_lines.txt"))
# Splits each line of the file into a dictionary of fields
def loadLines(fileName, fields):
    lines = {}
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            lineObj = {}
            for i, field in enumerate(fields):
                lineObj[field] = values[i]
            lines[lineObj['lineID']] = lineObj
    return lines
# Groups fields of lines from `loadLines` into conversations based on *movie_conversations.txt*
def loadConversations(fileName, lines, fields):
    conversations = []
    with open(fileName, 'r', encoding='iso-8859-1') as f:
        for line in f:
            values = line.split(" +++$+++ ")
            # Extract fields
            convObj = {}
            for i, field in enumerate(fields):
                convObj[field] = values[i]
            # Convert string to list (convObj["utteranceIDs"] == "['L598485', 'L598486', ...]")
            lineIds = eval(convObj["utteranceIDs"])
            # Reassemble lines
            convObj["lines"] = []
            for lineId in lineIds:
                convObj["lines"].append(lines[lineId])
            conversations.append(convObj)
    return conversations
# Extracts pairs of sentences from conversations
def extractSentencePairs(conversations):
    qa_pairs = []
    for conversation in conversations:
        # Iterate over all the lines of the conversation
        for i in range(len(conversation["lines"]) - 1):  # We ignore the last line (no answer for it)
            inputLine = conversation["lines"][i]["text"].strip()
            targetLine = conversation["lines"][i+1]["text"].strip()
            # Filter wrong samples (if one of the lists is empty)
            if inputLine and targetLine:
                qa_pairs.append([inputLine, targetLine])
    return qa_pairs
# Define path to new file
datafile = os.path.join(corpus, "formatted_movie_lines.txt")
delimiter = '\t'
# Unescape the delimiter
delimiter = str(codecs.decode(delimiter, "unicode_escape"))
# Initialize lines dict, conversations list, and field ids
lines = {}
conversations = []
MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"]
MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"]
# Load lines and process conversations
print("\nProcessing corpus...")
lines = loadLines(os.path.join(corpus, "movie_lines.txt"), MOVIE_LINES_FIELDS)
print("\nLoading conversations...")
conversations = loadConversations(os.path.join(corpus, "movie_conversations.txt"),
                                  lines, MOVIE_CONVERSATIONS_FIELDS)
# Write new csv file
print("\nWriting newly formatted file...")
with open(datafile, 'w', encoding='utf-8') as outputfile:
    writer = csv.writer(outputfile, delimiter=delimiter)
    for pair in extractSentencePairs(conversations):
        writer.writerow(pair)
# Print a sample of lines
print("\nSample lines from file:")
printLines(datafile)
# Default word tokens
PAD_token = 0  # Used for padding short sentences
SOS_token = 1  # Start-of-sentence token
EOS_token = 2  # End-of-sentence token
class Voc:
    def __init__(self, name):
        self.name = name
        self.trimmed = False
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3  # Count SOS, EOS, PAD
    def addSentence(self, sentence):
        for word in sentence.split(' '):
            self.addWord(word)
    def addWord(self, word):
        if word not in self.word2index:
            self.word2index[word] = self.num_words
            self.word2count[word] = 1
            self.index2word[self.num_words] = word
            self.num_words += 1
        else:
            self.word2count[word] += 1
    # Remove words below a certain count threshold
    def trim(self, min_count):
        if self.trimmed:
            return
        self.trimmed = True
        keep_words = []
        for k, v in self.word2count.items():
            if v >= min_count:
                keep_words.append(k)
        print('keep_words {} / {} = {:.4f}'.format(
            len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index)
        ))
        # Reinitialize dictionaries
        self.word2index = {}
        self.word2count = {}
        self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"}
        self.num_words = 3  # Count default tokens
        for word in keep_words:
            self.addWord(word)
MAX_LENGTH = 10  # Maximum sentence length to consider
# Turn a Unicode string to plain ASCII, thanks to
# http://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
    )
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = re.sub(r"([.!?])", r" \1", s)
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
    s = re.sub(r"\s+", r" ", s).strip()
    return s
# Read query/response pairs and return a voc object
def readVocs(datafile, corpus_name):
    print("Reading lines...")
    # Read the file and split into lines
    lines = open(datafile, encoding='utf-8').\
        read().strip().split('\n')
    # Split every line into pairs and normalize
    pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
    voc = Voc(corpus_name)
    return voc, pairs
# Returns True iff both sentences in a pair 'p' are under the MAX_LENGTH threshold
def filterPair(p):
    # Input sequences need to preserve the last word for EOS token
    return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
# Filter pairs using filterPair condition
def filterPairs(pairs):
    return [pair for pair in pairs if filterPair(pair)]
# Using the functions defined above, return a populated voc object and pairs list
def loadPrepareData(corpus, corpus_name, datafile, save_dir):
    print("Start preparing training data ...")
    voc, pairs = readVocs(datafile, corpus_name)
    print("Read {!s} sentence pairs".format(len(pairs)))
    pairs = filterPairs(pairs)
    print("Trimmed to {!s} sentence pairs".format(len(pairs)))
    print("Counting words...")
    for pair in pairs:
        voc.addSentence(pair[0])
        voc.addSentence(pair[1])
    print("Counted words:", voc.num_words)
    return voc, pairs
# Load/Assemble voc and pairs
save_dir = os.path.join("data", "save")
voc, pairs = loadPrepareData(corpus, corpus_name, datafile, save_dir)
# Print some pairs to validate
print("\npairs:")
for pair in pairs[:10]:
    print(pair)
MIN_COUNT = 3 # Minimum word count threshold for trimming
I really hope someone can help me fix this problem and help me understand why it happens.
| In the end I changed
return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
to
try:
    return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH
except:
    return False
And now the code seems to be working.
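The IndexError most likely comes from lines in the formatted file that contain no tab, so splitting them yields a one-element pair and p[1] fails. An equivalent fix that filters the malformed pairs explicitly (a sketch, not from the original answer) would be:
def filterPair(p):
    # Keep only well-formed pairs whose sentences are under the MAX_LENGTH threshold
    return len(p) == 2 and len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH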
| https://stackoverflow.com/questions/64428188/ |
torch.cat along negative dimension | In the following,
x_6 = torch.cat((x_1, x_2_1, x_3_1, x_5_1), dim=-3)
Sizes of tensors x_1, x_2_1, x_3_1, x_5_1 are
torch.Size([1, 256, 7, 7])
torch.Size([1, 256, 7, 7])
torch.Size([1, 256, 7, 7])
torch.Size([1, 256, 7, 7]) respectively.
The size of x_6 turns out to be torch.Size([1, 1024, 7, 7])
I couldn't understand & visualise this concatenation along a negative dimension (-3 in this case).
What exactly is happening here?
How does the same go if dim = 3?
Is there any constraint on dim for a given set of tensors?
| The answer by danin is not completely correct; it is actually wrong when looked at from the perspective of tensor algebra, since it indicates that the problem has to do with accessing or indexing a Python list. It doesn't.
The -3 means that we concatenate the tensors along the 2nd dimension. (You could've very well used 1 instead of the confusing -3.)
From taking a closer look at the tensor shapes, it seems that they represent (b, c, h, w) where b stands for batch_size, c stands for number of channels, h stands for height and w stands for width.
This is usually the case somewhere at the final stages of encoding (possibly) images in a deep neural network, where we arrive at these feature maps.
The torch.cat() operation with dim=-3 is meant to say that we concatenate these 4 tensors along the dimension of channels c (see above).
4 * 256 => 1024
Hence, the resultant tensor ends up with a shape torch.Size([1, 1024, 7, 7]).
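A minimal demonstration that dim=-3 and dim=1 address the same axis for these 4D tensors:
import torch
x = torch.randn(1, 256, 7, 7)
a = torch.cat((x, x, x, x), dim=-3)
b = torch.cat((x, x, x, x), dim=1)
print(a.shape)            # torch.Size([1, 1024, 7, 7])
print(torch.equal(a, b))  # True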
Notes: It is hard to visualize a 4-dimensional space since we humans live in an inherently 3D world. Nevertheless, here are some answers that I wrote a while ago which will help to get a mental picture:
How to understand the term tensor in TensorFlow?
Very Basic Numpy array dimension visualization
| https://stackoverflow.com/questions/64430159/ |
How to view all possible distributions that can be used in PyTorch kl_divergence function? | I would like to know all of the possible combinations of distributions that I can plug into PyTorch's kl_divergence function out of the box. How can I view all of the possibilities?
| Here is a function to display all of the registered combinations:
from torch.distributions.kl import _KL_REGISTRY
def view_kl_options():
    """
    Displays all combinations of distributions that can be used in
    torch's kl_divergence function. Iterates through the registry
    and prints out the registered name combos.
    """
    names = [(k[0].__name__, k[1].__name__) for k in _KL_REGISTRY.keys()]
    max_name_len = max([len(t[0]) for t in names])
    for arg1, arg2 in sorted(names):
        print(f" {arg1:>{max_name_len}} || {arg2}")
Calling view_kl_options() will give you an output like
Bernoulli || Bernoulli
Bernoulli || Poisson
Beta || Beta
Beta || ContinuousBernoulli
Beta || Exponential
Beta || Gamma
Beta || Normal
Beta || Pareto
Beta || Uniform
Binomial || Binomial
Categorical || Categorical
Cauchy || Cauchy
ContinuousBernoulli || ContinuousBernoulli
ContinuousBernoulli || Exponential
ContinuousBernoulli || Normal
ContinuousBernoulli || Pareto
ContinuousBernoulli || Uniform
Dirichlet || Dirichlet
Exponential || Beta
Exponential || ContinuousBernoulli
Exponential || Exponential
Exponential || Gamma
Exponential || Gumbel
Exponential || Normal
Exponential || Pareto
Exponential || Uniform
ExponentialFamily || ExponentialFamily
Gamma || Beta
Gamma || ContinuousBernoulli
Gamma || Exponential
Gamma || Gamma
Gamma || Gumbel
Gamma || Normal
Gamma || Pareto
Gamma || Uniform
Geometric || Geometric
Gumbel || Beta
Gumbel || ContinuousBernoulli
Gumbel || Exponential
Gumbel || Gamma
Gumbel || Gumbel
Gumbel || Normal
Gumbel || Pareto
Gumbel || Uniform
HalfNormal || HalfNormal
Independent || Independent
Laplace || Beta
Laplace || ContinuousBernoulli
Laplace || Exponential
Laplace || Gamma
Laplace || Laplace
Laplace || Normal
Laplace || Pareto
Laplace || Uniform
LowRankMultivariateNormal || LowRankMultivariateNormal
LowRankMultivariateNormal || MultivariateNormal
MultivariateNormal || LowRankMultivariateNormal
MultivariateNormal || MultivariateNormal
Normal || Beta
Normal || ContinuousBernoulli
Normal || Exponential
Normal || Gamma
Normal || Gumbel
Normal || Normal
Normal || Pareto
Normal || Uniform
OneHotCategorical || OneHotCategorical
Pareto || Beta
Pareto || ContinuousBernoulli
Pareto || Exponential
Pareto || Gamma
Pareto || Normal
Pareto || Pareto
Pareto || Uniform
Poisson || Bernoulli
Poisson || Binomial
Poisson || Poisson
TransformedDistribution || TransformedDistribution
Uniform || Beta
Uniform || ContinuousBernoulli
Uniform || Exponential
Uniform || Gamma
Uniform || Gumbel
Uniform || Normal
Uniform || Pareto
Uniform || Uniform
| https://stackoverflow.com/questions/64431058/ |
PyTorch equivalent of numpy reshape function | Hi, I have these two functions to flatten my complex-valued data to feed it to a NN and to reconstruct the NN's prediction back to the original form.
def flatten_input64(Input):  # convert (:,4,4,2) complex matrix to (:,64) real vector
    Input1 = Input.reshape(-1, 32, order='F')
    Input_vector = np.zeros([19957, 64], dtype=np.float64)
    Input_vector[:, 0:32] = Input1.real
    Input_vector[:, 32:64] = Input1.imag
    return Input_vector
def convert_output64(Output):  # convert (:,64) real vector to (:,4,4,2) complex matrix
    Output1 = Output[:, 0:32] + 1j * Output[:, 32:64]
    output_matrix = Output1.reshape(-1, 4, 4, 2, order='F')
    return output_matrix
I am writing a customized loss that requires all operations to be in torch, so I should rewrite my conversion functions in PyTorch. The problem is that PyTorch doesn't have an 'F'-order reshape. I tried to write my own version of the F-order reshape, but it doesn't work.
Do you have any idea what my mistake is?
def convert_output64_torch(input):
    # number_of_samples = defined
    for i in range(0, number_of_samples):
        Output1 = input[i, 0:32] + 1j * input[i, 32:64]
        Output2 = Output1.view(-1, 4, 4, 2).permute(3, 2, 1, 0)
        if i == 0:
            Output3 = Output2
        else:
            Output3 = torch.cat((Output3, Output2), 0)
    return Output3
Update: following @a_guest's comment, I tried to recreate my matrix with transpose and reshape, and I got this code working the same as the F-order reshape in numpy:
def convert_output64_torch(input):
    Output1 = input[:, 0:32] + 1j * input[:, 32:64]
    shape = (-1, 4, 4, 2)
    Output3 = torch.transpose(torch.transpose(torch.reshape(torch.transpose(Output1, 0, 1), shape[::-1]), 1, 2), 0, 3)
    return Output3
| In both NumPy and PyTorch, you can get the equivalent with the following operation: a.T.reshape(shape[::-1]).T (where a is either an array or a tensor):
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
>>> shape = (2, 8)
>>> a.reshape(shape, order='F')
array([[ 0,  8,  1,  9,  2, 10,  3, 11],
       [ 4, 12,  5, 13,  6, 14,  7, 15]])
>>> a.T.reshape(shape[::-1]).T
array([[ 0,  8,  1,  9,  2, 10,  3, 11],
       [ 4, 12,  5, 13,  6, 14,  7, 15]])
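The same trick carries over to PyTorch (a sketch; reshape copies when the transposed tensor is not contiguous):
>>> import torch
>>> t = torch.arange(16).reshape(4, 4)
>>> t.T.reshape((2, 8)[::-1]).T
tensor([[ 0,  8,  1,  9,  2, 10,  3, 11],
        [ 4, 12,  5, 13,  6, 14,  7, 15]])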
| https://stackoverflow.com/questions/64433896/ |
Convert a list of numpy arrays to a list of torch tensors | The numpy arrays in the list are 2D arrays that have different sizes, let's say:
1x1, 4x4, 8x8, etc.
There are about 7 arrays in total.
I know how to convert each on of them, by:
torch.from_numpy(a1by1).type(torch.FloatTensor)
torch.from_numpy(a4by4).type(torch.FloatTensor)
etc..
Is there a way to convert the entire list in one command?
I found these 2 questions:
How to convert a list or numpy array to a 1d torch tensor?
How to convert a list of tensors into a torch::Tensor?
but it's not what Im' looking for
| If by one command you mean a one-liner, then we can use a list comprehension:
lst = [a1by1, a4by4, a8by8]
lst = [torch.from_numpy(item).float() for item in lst]
| https://stackoverflow.com/questions/64435900/ |
Get the polygon coordinates of predicted output mask in YOLACT/YOLACT++ | I am using Yolact https://github.com/dbolya/yolact, an instance segmentation algorithm which outputs the test image with a mask on the detected object. As the input images are given with the coordinates of polygons around the input classes in the annotations.json, I want to get an output like this. But I can't figure out how to extract the coordinates of those contours/polygons.
As far as I understood from this script https://github.com/dbolya/yolact/blob/master/eval.py, the output is a list of tensors for the detected objects. It contains the classes, scores, boxes and masks for the evaluated image. The eval.py script returns the recognized image with all this information. The recognition is saved in 'preds' in the evalimg function (line 595), and the post-processing of the prediction result is in "def prep_display" (line 135).
Now how do I extract those polygon coordinates and save them in a .json file or whatever else?
I also tried to look at these, but sadly couldn't figure it out!
https://github.com/dbolya/yolact/issues/286
and
https://github.com/dbolya/yolact/issues/256
| You need to create a complete post-processing pipeline that is specific to your task. Here's a small pseudocode snippet that could be added to prep_display() in eval.py:
with timer.env('Copy'):
    if cfg.eval_mask_branch:
        # Add the below line to get all the predicted objects as a list
        all_objects_mask = t[3][:args.top_k]
        # Convert each object mask to binary and then
        # use OpenCV's findContours() method to extract the contour points for each object
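A hedged sketch of that last step for a single predicted mask (variable names are assumptions, not from the original answer); the resulting polygons list can then be dumped to a .json file with json.dump:
import cv2
import numpy as np
# `mask` is one (H, W) mask from all_objects_mask, thresholded to binary
mask_np = (mask.cpu().numpy() > 0.5).astype(np.uint8) * 255
contours, _ = cv2.findContours(mask_np, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = [c.reshape(-1, 2).tolist() for c in contours]  # [[x, y], ...] per object contour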
| https://stackoverflow.com/questions/64440857/ |
Pytorch - RuntimeError: invalid multinomial distribution (encountering probability entry < 0) | I am using Stable Baselines 3 to train an agent to play the Connect 4 game. I am trying to handle the case when the agent starts a game as the second player.
self.env = self.ks_env.train([opponent, None])
When I am trying to run the code, I am getting the following error:
invalid multinomial distribution (encountering probability entry < 0)
/opt/conda/lib/python3.7/site-packages/torch/distributions/categorical.py in sample(self, sample_shape)
samples_2d = torch.multinomial(probs_2d, sample_shape.numel(), True).T
However, there is no problem when the agent is the first player:
self.env = self.ks_env.train([None, opponent])
I think the problem is related to the PyTorch library. My question is: how can I fix this issue?
| After checking your provided code, the problem doesn't seem to come from which agent starts the game but from not restarting the environment after a game is done.
I just changed your step function as shown:
def step(self, action):
    # Check if agent's move is valid
    is_valid = (self.obs['board'][int(action)] == 0)
    if is_valid:  # Play the move
        self.obs, old_reward, done, _ = self.env.step(int(action))
        reward = self.change_reward(old_reward, done)
    else:  # End the game and penalize agent
        reward, done, _ = -10, True, {}
    if done:
        self.reset()
    return (board_flip(self.obs.mark,
                       np.array(self.obs['board']).reshape(1, self.rows, self.columns) / 2),
            reward, done, _)
With this, the model was able to train and you can check that it works as expected with the following snippet:
done = True
for step in range(500):
    if done:
        state = env.reset()
    state, reward, done, info = env.step(env.action_space.sample())
    print(reward)
Link to my version of your notebook
| https://stackoverflow.com/questions/64441708/ |
Problems loading model; Pytorch | I am trying to load a model I saved with the following and it gives me this error:
import torch
model = torch.load('./grunet.pkl')
model.eval()
This is the error:
Traceback (most recent call last):
File ".\eval.py", line 4, in <module>
model.eval()
AttributeError: 'NoneType' object has no attribute 'eval'
Please help!
| You are missing one step:
model = YourModelClass()
model.load_state_dict(torch.load("./grunet.pkl"))
model.eval()
# do something with the model
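For reference, the matching save-side call stores only the parameters rather than the whole model object:
torch.save(model.state_dict(), './grunet.pkl')
(The None you loaded suggests the file does not actually contain a model object, which is why loading the state dict into a freshly constructed model is the safer pattern.)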
| https://stackoverflow.com/questions/64443185/ |
How can I compute number of FLOPs and Params for 1-d CNN? Use pytorch platform | My network is a 1d CNN and I want to compute the number of FLOPs and params. I used the public 'flops_counter' utility, but I am not sure about the size of the input. When I run it with size (128,1,50), I get the error 'Expected 3-dimensional input for 3-dimensional weight [128, 1, 50], but got 4-dimensional input of size [1, 128, 1, 50] instead'. When I run it with size (128,50), I get the error 'RuntimeError: Given groups=1, weight of size [128, 1, 50], expected input[1, 128, 50] to have 1 channels, but got 128 channels instead'.
import torch
from models.cnn import net
from flops_counter import get_model_complexity_info
model = net()
# Flops&params
flops, params = get_model_complexity_info(model, (128,1,50), as_strings=True, print_per_layer_stat=True)
print('Flops: ' + flops)
print('Params: ' + params)
Here is my 1d CNN.
from __future__ import print_function
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
# create cnn
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.conv1 = nn.Conv1d(1, 128, 50, stride=3)
        self.conv2 = nn.Conv1d(128, 32, 7, stride=1)
        self.conv3 = nn.Conv1d(32, 32, 9, stride=1)
        self.fc1 = nn.Linear(32, 128)
        self.fc2 = nn.Linear(128, 5)
        self.bn1 = nn.BatchNorm1d(128)
        self.bn2 = nn.BatchNorm1d(32)
        self.dropout = nn.Dropout2d(0.5)
        self.faltten = nn.Flatten()
    # forward propagation
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.bn1(x)
        x = F.max_pool1d(x, 2, stride=3)
        x = self.dropout(F.relu(self.conv2(x)))
        # x = F.relu(self.conv2(x))
        x = self.bn2(x)
        x = F.max_pool1d(x, 2, stride=2)
        x = self.dropout(F.relu(self.conv3(x)))
        x = self.faltten(x)
        x = self.dropout(self.fc1(x))
        output = self.fc2(x)
        return output
Here is the code of flops_counter:
'''
Copyright (C) 2019 Sovrasov V. - All Rights Reserved
* You may use, distribute and modify this code under the
* terms of the MIT license.
* You should have received a copy of the MIT license with
* this file. If not visit https://opensource.org/licenses/MIT
'''
import sys
from functools import partial
import torch
import torch.nn as nn
import numpy as np
def get_model_complexity_info(model, input_res,
                              print_per_layer_stat=True,
                              as_strings=True,
                              input_constructor=None, ost=sys.stdout,
                              verbose=False, ignore_modules=[],
                              custom_modules_hooks={}):
    assert type(input_res) is tuple
    assert len(input_res) >= 1
    assert isinstance(model, nn.Module)
    global CUSTOM_MODULES_MAPPING
    CUSTOM_MODULES_MAPPING = custom_modules_hooks
    flops_model = add_flops_counting_methods(model)
    flops_model.eval()
    flops_model.start_flops_count(ost=ost, verbose=verbose, ignore_list=ignore_modules)
    if input_constructor:
        input = input_constructor(input_res)
        _ = flops_model(**input)
    else:
        try:
            batch = torch.ones(()).new_empty((1, *input_res),
                                             dtype=next(flops_model.parameters()).dtype,
                                             device=next(flops_model.parameters()).device)
        except StopIteration:
            batch = torch.ones(()).new_empty((1, *input_res))
        _ = flops_model(batch)
    flops_count, params_count = flops_model.compute_average_flops_cost()
    if print_per_layer_stat:
        print_model_with_flops(flops_model, flops_count, params_count, ost=ost)
    flops_model.stop_flops_count()
    CUSTOM_MODULES_MAPPING = {}
    if as_strings:
        return flops_to_string(flops_count), params_to_string(params_count)
    return flops_count, params_count
def flops_to_string(flops, units='GMac', precision=2):
    if units is None:
        if flops // 10**9 > 0:
            return str(round(flops / 10.**9, precision)) + ' GMac'
        elif flops // 10**6 > 0:
            return str(round(flops / 10.**6, precision)) + ' MMac'
        elif flops // 10**3 > 0:
            return str(round(flops / 10.**3, precision)) + ' KMac'
        else:
            return str(flops) + ' Mac'
    else:
        if units == 'GMac':
            return str(round(flops / 10.**9, precision)) + ' ' + units
        elif units == 'MMac':
            return str(round(flops / 10.**6, precision)) + ' ' + units
        elif units == 'KMac':
            return str(round(flops / 10.**3, precision)) + ' ' + units
        else:
            return str(flops) + ' Mac'
def params_to_string(params_num, units=None, precision=2):
    if units is None:
        if params_num // 10 ** 6 > 0:
            return str(round(params_num / 10 ** 6, 2)) + ' M'
        elif params_num // 10 ** 3:
            return str(round(params_num / 10 ** 3, 2)) + ' k'
        else:
            return str(params_num)
    else:
        if units == 'M':
            return str(round(params_num / 10.**6, precision)) + ' ' + units
        elif units == 'K':
            return str(round(params_num / 10.**3, precision)) + ' ' + units
        else:
            return str(params_num)
def print_model_with_flops(model, total_flops, total_params, units='GMac',
                           precision=3, ost=sys.stdout):
    def accumulate_params(self):
        if is_supported_instance(self):
            return self.__params__
        else:
            sum = 0
            for m in self.children():
                sum += m.accumulate_params()
            return sum
    def accumulate_flops(self):
        if is_supported_instance(self):
            return self.__flops__ / model.__batch_counter__
        else:
            sum = 0
            for m in self.children():
                sum += m.accumulate_flops()
            return sum
    def flops_repr(self):
        accumulated_params_num = self.accumulate_params()
        accumulated_flops_cost = self.accumulate_flops()
        return ', '.join([params_to_string(accumulated_params_num, units='M', precision=precision),
                          '{:.3%} Params'.format(accumulated_params_num / total_params),
                          flops_to_string(accumulated_flops_cost, units=units, precision=precision),
                          '{:.3%} MACs'.format(accumulated_flops_cost / total_flops),
                          self.original_extra_repr()])
    def add_extra_repr(m):
        m.accumulate_flops = accumulate_flops.__get__(m)
        m.accumulate_params = accumulate_params.__get__(m)
        flops_extra_repr = flops_repr.__get__(m)
        if m.extra_repr != flops_extra_repr:
            m.original_extra_repr = m.extra_repr
            m.extra_repr = flops_extra_repr
            assert m.extra_repr != m.original_extra_repr
    def del_extra_repr(m):
        if hasattr(m, 'original_extra_repr'):
            m.extra_repr = m.original_extra_repr
            del m.original_extra_repr
        if hasattr(m, 'accumulate_flops'):
            del m.accumulate_flops
    model.apply(add_extra_repr)
    print(model, file=ost)
    model.apply(del_extra_repr)
def get_model_parameters_number(model):
    params_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return params_num
def add_flops_counting_methods(net_main_module):
    # adding additional methods to the existing module object,
    # this is done this way so that each function has access to self object
    net_main_module.start_flops_count = start_flops_count.__get__(net_main_module)
    net_main_module.stop_flops_count = stop_flops_count.__get__(net_main_module)
    net_main_module.reset_flops_count = reset_flops_count.__get__(net_main_module)
    net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__(net_main_module)
    net_main_module.reset_flops_count()
    return net_main_module
def compute_average_flops_cost(self):
    """
    A method that will be available after add_flops_counting_methods() is called
    on a desired net object.
    Returns current mean flops consumption per image.
    """
    batches_count = self.__batch_counter__
    flops_sum = 0
    params_sum = 0
    for module in self.modules():
        if is_supported_instance(module):
            flops_sum += module.__flops__
    params_sum = get_model_parameters_number(self)
    return flops_sum / batches_count, params_sum
def start_flops_count(self, **kwargs):
    """
    A method that will be available after add_flops_counting_methods() is called
    on a desired net object.
    Activates the computation of mean flops consumption per image.
Call it before you run the network.
"""
add_batch_counter_hook_function(self)
seen_types = set()
def add_flops_counter_hook_function(module, ost, verbose, ignore_list):
if type(module) in ignore_list:
seen_types.add(type(module))
if is_supported_instance(module):
module.__params__ = 0
elif is_supported_instance(module):
if hasattr(module, '__flops_handle__'):
return
if type(module) in CUSTOM_MODULES_MAPPING:
handle = module.register_forward_hook(CUSTOM_MODULES_MAPPING[type(module)])
else:
handle = module.register_forward_hook(MODULES_MAPPING[type(module)])
module.__flops_handle__ = handle
seen_types.add(type(module))
else:
if verbose and not type(module) in (nn.Sequential, nn.ModuleList) and not type(module) in seen_types:
print('Warning: module ' + type(module).__name__ + ' is treated as a zero-op.', file=ost)
seen_types.add(type(module))
self.apply(partial(add_flops_counter_hook_function, **kwargs))
def stop_flops_count(self):
"""
A method that will be available after add_flops_counting_methods() is called
on a desired net object.
Stops computing the mean flops consumption per image.
Call whenever you want to pause the computation.
"""
remove_batch_counter_hook_function(self)
self.apply(remove_flops_counter_hook_function)
def reset_flops_count(self):
"""
A method that will be available after add_flops_counting_methods() is called
on a desired net object.
Resets statistics computed so far.
"""
add_batch_counter_variables_or_reset(self)
self.apply(add_flops_counter_variable_or_reset)
# ---- Internal functions
def empty_flops_counter_hook(module, input, output):
module.__flops__ += 0
def upsample_flops_counter_hook(module, input, output):
output_size = output[0]
batch_size = output_size.shape[0]
output_elements_count = batch_size
for val in output_size.shape[1:]:
output_elements_count *= val
module.__flops__ += int(output_elements_count)
def relu_flops_counter_hook(module, input, output):
active_elements_count = output.numel()
module.__flops__ += int(active_elements_count)
def linear_flops_counter_hook(module, input, output):
input = input[0]
output_last_dim = output.shape[-1] # pytorch checks dimensions, so here we don't care much
module.__flops__ += int(np.prod(input.shape) * output_last_dim)
def pool_flops_counter_hook(module, input, output):
input = input[0]
module.__flops__ += int(np.prod(input.shape))
def bn_flops_counter_hook(module, input, output):
module.affine
input = input[0]
batch_flops = np.prod(input.shape)
if module.affine:
batch_flops *= 2
module.__flops__ += int(batch_flops)
def deconv_flops_counter_hook(conv_module, input, output):
# Can have multiple inputs, getting the first one
input = input[0]
batch_size = input.shape[0]
input_height, input_width = input.shape[2:]
kernel_height, kernel_width = conv_module.kernel_size
in_channels = conv_module.in_channels
out_channels = conv_module.out_channels
groups = conv_module.groups
filters_per_channel = out_channels // groups
conv_per_position_flops = kernel_height * kernel_width * in_channels * filters_per_channel
active_elements_count = batch_size * input_height * input_width
overall_conv_flops = conv_per_position_flops * active_elements_count
bias_flops = 0
if conv_module.bias is not None:
output_height, output_width = output.shape[2:]
bias_flops = out_channels * batch_size * output_height * output_height
overall_flops = overall_conv_flops + bias_flops
conv_module.__flops__ += int(overall_flops)
def conv_flops_counter_hook(conv_module, input, output):
# Can have multiple inputs, getting the first one
input = input[0]
batch_size = input.shape[0]
output_dims = list(output.shape[2:])
kernel_dims = list(conv_module.kernel_size)
in_channels = conv_module.in_channels
out_channels = conv_module.out_channels
groups = conv_module.groups
filters_per_channel = out_channels // groups
conv_per_position_flops = int(np.prod(kernel_dims)) * in_channels * filters_per_channel
active_elements_count = batch_size * int(np.prod(output_dims))
overall_conv_flops = conv_per_position_flops * active_elements_count
bias_flops = 0
if conv_module.bias is not None:
bias_flops = out_channels * active_elements_count
overall_flops = overall_conv_flops + bias_flops
conv_module.__flops__ += int(overall_flops)
def batch_counter_hook(module, input, output):
batch_size = 1
if len(input) > 0:
# Can have multiple inputs, getting the first one
input = input[0]
batch_size = len(input)
else:
pass
print('Warning! No positional inputs found for a module, assuming batch size is 1.')
module.__batch_counter__ += batch_size
def rnn_flops(flops, rnn_module, w_ih, w_hh, input_size):
# matrix matrix mult ih state and internal state
flops += w_ih.shape[0]*w_ih.shape[1]
# matrix matrix mult hh state and internal state
flops += w_hh.shape[0]*w_hh.shape[1]
if isinstance(rnn_module, (nn.RNN, nn.RNNCell)):
# add both operations
flops += rnn_module.hidden_size
elif isinstance(rnn_module, (nn.GRU, nn.GRUCell)):
# hadamard of r
flops += rnn_module.hidden_size
# adding operations from both states
flops += rnn_module.hidden_size*3
# last two hadamard product and add
flops += rnn_module.hidden_size*3
elif isinstance(rnn_module, (nn.LSTM, nn.LSTMCell)):
# adding operations from both states
flops += rnn_module.hidden_size*4
# two hadamard product and add for C state
flops += rnn_module.hidden_size + rnn_module.hidden_size + rnn_module.hidden_size
# final hadamard
flops += rnn_module.hidden_size + rnn_module.hidden_size + rnn_module.hidden_size
return flops
def rnn_flops_counter_hook(rnn_module, input, output):
"""
Takes into account batch goes at first position, contrary
to pytorch common rule (but actually it doesn't matter).
IF sigmoid and tanh are made hard, only a comparison FLOPS should be accurate
"""
flops = 0
inp = input[0] # input is a tuble containing a sequence to process and (optionally) hidden state
batch_size = inp.shape[0]
seq_length = inp.shape[1]
num_layers = rnn_module.num_layers
for i in range(num_layers):
w_ih = rnn_module.__getattr__('weight_ih_l' + str(i))
w_hh = rnn_module.__getattr__('weight_hh_l' + str(i))
if i == 0:
input_size = rnn_module.input_size
else:
input_size = rnn_module.hidden_size
flops = rnn_flops(flops, rnn_module, w_ih, w_hh, input_size)
if rnn_module.bias:
b_ih = rnn_module.__getattr__('bias_ih_l' + str(i))
b_hh = rnn_module.__getattr__('bias_hh_l' + str(i))
flops += b_ih.shape[0] + b_hh.shape[0]
flops *= batch_size
flops *= seq_length
if rnn_module.bidirectional:
flops *= 2
rnn_module.__flops__ += int(flops)
def rnn_cell_flops_counter_hook(rnn_cell_module, input, output):
flops = 0
inp = input[0]
batch_size = inp.shape[0]
w_ih = rnn_cell_module.__getattr__('weight_ih')
w_hh = rnn_cell_module.__getattr__('weight_hh')
input_size = inp.shape[1]
flops = rnn_flops(flops, rnn_cell_module, w_ih, w_hh, input_size)
if rnn_cell_module.bias:
b_ih = rnn_cell_module.__getattr__('bias_ih')
b_hh = rnn_cell_module.__getattr__('bias_hh')
flops += b_ih.shape[0] + b_hh.shape[0]
flops *= batch_size
rnn_cell_module.__flops__ += int(flops)
def add_batch_counter_variables_or_reset(module):
module.__batch_counter__ = 0
def add_batch_counter_hook_function(module):
if hasattr(module, '__batch_counter_handle__'):
return
handle = module.register_forward_hook(batch_counter_hook)
module.__batch_counter_handle__ = handle
def remove_batch_counter_hook_function(module):
if hasattr(module, '__batch_counter_handle__'):
module.__batch_counter_handle__.remove()
del module.__batch_counter_handle__
def add_flops_counter_variable_or_reset(module):
if is_supported_instance(module):
if hasattr(module, '__flops__') or hasattr(module, '__params__'):
print('Warning: variables __flops__ or __params__ are already '
'defined for the module' + type(module).__name__ +
' ptflops can affect your code!')
module.__flops__ = 0
module.__params__ = get_model_parameters_number(module)
CUSTOM_MODULES_MAPPING = {}
MODULES_MAPPING = {
# convolutions
nn.Conv1d: conv_flops_counter_hook,
nn.Conv2d: conv_flops_counter_hook,
nn.Conv3d: conv_flops_counter_hook,
# activations
nn.ReLU: relu_flops_counter_hook,
nn.PReLU: relu_flops_counter_hook,
nn.ELU: relu_flops_counter_hook,
nn.LeakyReLU: relu_flops_counter_hook,
nn.ReLU6: relu_flops_counter_hook,
# poolings
nn.MaxPool1d: pool_flops_counter_hook,
nn.AvgPool1d: pool_flops_counter_hook,
nn.AvgPool2d: pool_flops_counter_hook,
nn.MaxPool2d: pool_flops_counter_hook,
nn.MaxPool3d: pool_flops_counter_hook,
nn.AvgPool3d: pool_flops_counter_hook,
nn.AdaptiveMaxPool1d: pool_flops_counter_hook,
nn.AdaptiveAvgPool1d: pool_flops_counter_hook,
nn.AdaptiveMaxPool2d: pool_flops_counter_hook,
nn.AdaptiveAvgPool2d: pool_flops_counter_hook,
nn.AdaptiveMaxPool3d: pool_flops_counter_hook,
nn.AdaptiveAvgPool3d: pool_flops_counter_hook,
# BNs
nn.BatchNorm1d: bn_flops_counter_hook,
nn.BatchNorm2d: bn_flops_counter_hook,
nn.BatchNorm3d: bn_flops_counter_hook,
# FC
nn.Linear: linear_flops_counter_hook,
# Upscale
nn.Upsample: upsample_flops_counter_hook,
# Deconvolution
nn.ConvTranspose2d: deconv_flops_counter_hook,
# RNN
nn.RNN: rnn_flops_counter_hook,
nn.GRU: rnn_flops_counter_hook,
nn.LSTM: rnn_flops_counter_hook,
nn.RNNCell: rnn_cell_flops_counter_hook,
nn.LSTMCell: rnn_cell_flops_counter_hook,
nn.GRUCell: rnn_cell_flops_counter_hook
}
def is_supported_instance(module):
if type(module) in MODULES_MAPPING or type(module) in CUSTOM_MODULES_MAPPING:
return True
return False
def remove_flops_counter_hook_function(module):
if is_supported_instance(module):
if hasattr(module, '__flops_handle__'):
module.__flops_handle__.remove()
del module.__flops_handle__
| Here is working code using the ptflops package. You need to take care of the length of your input sequence. The pytorch doc for Conv1d gives the output length as L_out = floor((L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1),
which lets you backtrace the input size you need from the first fully connected layer (see my comments in the model definition).
from ptflops import get_model_complexity_info
import torch.nn as nn
import torch.nn.functional as F
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.conv1 = nn.Conv1d(1, 128, 50, stride=3) # Lin = 260
# max_pool1d(x, 2, stride=3) # Lin = 71
self.conv2 = nn.Conv1d(128, 32, 7, stride=1) # Lin = 24
# max_pool1d(x, 2, stride=2) # Lin = 18
self.conv3 = nn.Conv1d(32, 32, 9, stride=1) # Lin = 9
self.fc1 = nn.Linear(32, 128)
self.fc2 = nn.Linear(128, 5)
self.bn1 = nn.BatchNorm1d(128)
self.bn2 = nn.BatchNorm1d(32)
self.dropout = nn.Dropout2d(0.5)
self.flatten = nn.Flatten()
# forward propagation
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.bn1(x)
x = F.max_pool1d(x, 2, stride=3)
x = self.dropout(F.relu(self.conv2(x)))
# x = F.relu(self.conv2(x))
x = self.bn2(x)
x = F.max_pool1d(x, 2, stride=2)
x = self.dropout(F.relu(self.conv3(x)))
x = self.flatten(x)
x = self.dropout(self.fc1(x))
output = self.fc2(x)
return output
macs, params = get_model_complexity_info(net(), (1, 260), as_strings=False,
print_per_layer_stat=True, verbose=True)
print('{:<30} {:<8}'.format('Computational complexity: ', macs))
print('{:<30} {:<8}'.format('Number of parameters: ', params))
output:
net(
0.05 M, 100.000% Params, 0.001 GMac, 100.000% MACs,
(conv1): Conv1d(0.007 M, 13.143% Params, 0.0 GMac, 45.733% MACs, 1, 128, kernel_size=(50,), stride=(3,))
(conv2): Conv1d(0.029 M, 57.791% Params, 0.001 GMac, 50.980% MACs, 128, 32, kernel_size=(7,), stride=(1,))
(conv3): Conv1d(0.009 M, 18.619% Params, 0.0 GMac, 0.913% MACs, 32, 32, kernel_size=(9,), stride=(1,))
(fc1): Linear(0.004 M, 8.504% Params, 0.0 GMac, 0.404% MACs, in_features=32, out_features=128, bias=True)
(fc2): Linear(0.001 M, 1.299% Params, 0.0 GMac, 0.063% MACs, in_features=128, out_features=5, bias=True)
(bn1): BatchNorm1d(0.0 M, 0.515% Params, 0.0 GMac, 1.793% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn2): BatchNorm1d(0.0 M, 0.129% Params, 0.0 GMac, 0.114% MACs, 32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout): Dropout2d(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, p=0.5, inplace=False)
(flatten): Flatten(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, )
)
Computational complexity: 1013472.0
Number of parameters: 49669
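As a cross-check, the per-layer lengths in the comments above can be reproduced with the Conv1d output-length formula. A minimal sketch (the conv_len helper is my own, not part of ptflops):
import math

def conv_len(l_in, kernel, stride, padding=0, dilation=1):
    # PyTorch's output-length formula for Conv1d and MaxPool1d
    return math.floor((l_in + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)

l = 260                 # input length
l = conv_len(l, 50, 3)  # conv1 -> 71
l = conv_len(l, 2, 3)   # max_pool1d(2, stride=3) -> 24
l = conv_len(l, 7, 1)   # conv2 -> 18
l = conv_len(l, 2, 2)   # max_pool1d(2, stride=2) -> 9
l = conv_len(l, 9, 1)   # conv3 -> 1
print(l * 32)           # 32 flattened features, matching fc1's in_features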
| https://stackoverflow.com/questions/64443666/ |
PyTorch: Perform add/sub/mul operations using 1-D tensor and multi-channel (3-D) image tensor | Note: I am looking for the fastest/optimal way of doing or improving on this, as my constraint is time
Using PyTorch, I have an image as a 3D tensor, let's say of dimension 64 x 400 x 400, where 64 refers to channels and 400 x 400 are the image dimensions. With this, I have a 1D tensor of length 64, all with different values, where the intention is to use one value per channel. I want each value of the 1D tensor to apply to the entire 400x400 block of the corresponding channel. So for example, when I want to add 3d_tensor + 1d_tensor, I want 1d_tensor[i] to be added to all 400x400 = 160000 values in 3d_tensor[i], with [i] ranging from 0 to 63.
What I previously did:
I tried doing it directly, by using the operators only:
output_add = 1d_tensor + 3d_tensor
This returned an error that the dimension of 3d_tensor (400) and 1d_tensor (64) are incompatible.
So my current form is using a for loop
for a, b in zip(3d_tensor, 1d_tensor):
a += b
However at one stage I have four different 1D tensors to use at once in either addition, subtraction or multiplication, so is this for loop method the most efficient? I'm also planning on doing it 20+ times per image, so speed is key. I tried maybe extending the 1d tensor to a dimension of 64 x 400 x 400 also, so it could be used directly, but I could not get this right using tensor.repeat()
| You should add dimensions to the 1D tensor: convert its shape from (64) to (64 x 1 x 1):
output_add = 1d_tensor[:, None, None] + 3d_tensor
With this None-type indexing you can add a dimension anywhere.
The [:, None, None] adds two additional dimensions to the 1D tensor after the existing dimension.
Or you can use view for the same result:
output_add = 1d_tensor.view(-1, 1, 1) + 3d_tensor
The reshaped tensor has the same number of dimensions as the 3D tensor, with shape (64, 1, 1), so PyTorch can use broadcasting.
Here is a good explanation of broadcasting: How does pytorch broadcasting work?
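Since the question mentions applying four different 1D tensors at once, here is a minimal sketch of how the same broadcasting extends to chained add/sub/mul operations (tensor names are made up):
import torch

x = torch.rand(64, 400, 400)
a, b, c, d = (torch.rand(64) for _ in range(4))

# each (64,) tensor is viewed as (64, 1, 1) and broadcasts over the 400x400 blocks
out = (x + a[:, None, None]) * b[:, None, None] - c[:, None, None] * d[:, None, None]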
| https://stackoverflow.com/questions/64444616/ |
Difference between PyTorch and exact analytical expression | While using Multivariate normal distribution in PyTorch I decided to compare it with exact analytical expression.
To my surprise, there was a small difference between them.
Is there any reason for this behaviour?
Firstly, calculate probabilities using MultivariateNormal:
from torch.distributions.multivariate_normal import MultivariateNormal
import torch
sigma = 2
m = MultivariateNormal(torch.zeros(2, dtype=torch.float32), torch.eye(2, dtype=torch.float32)*sigma**2)
values_temp = torch.zeros(size=(1,2), dtype=torch.float32)
out_torch = torch.exp(m.log_prob(values_temp))
out_torch
Out: tensor([0.0398])
Secondly, one can write exact formula for this case:
import numpy as np
out_exact = 1/(2*np.pi*sigma**2) * torch.exp(-torch.pow(values_temp, 2).sum(dim=-1)/(2*sigma**2))
out_exact
Out: tensor([0.0398])
There is a difference between them:
(out_torch - out_exact).sum()
Out: tensor(3.7253e-09)
Can someone help me understand the behavior of these two snippets? Which of these two expressions is more precise? Maybe someone can underline my mistake in any part of the code?
| Most modern systems use the IEEE 754 standard to represent fixed precision floating point values. Because of this, we can be sure that neither the result provided by pytorch nor the "exact" value you've computed is actually exactly equal to the analytical expression. We know this because the actual value of the expression is certainly irrational, and IEEE 754 cannot exactly represent any irrational number. This is a general phenomenon when using fixed precision floating point representations, which you can read more about on Wikipedia and this question.
Upon further analysis we find that the normalized difference you're seeing is on the order of machine epsilon (i.e. 3.7253e-09 / 0.0398 is approximately equal to torch.finfo(torch.float32).eps) indicating that the difference is likely just a result of inaccuracies of floating point arithmetic.
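That claim is easy to verify directly (a quick sketch):
import torch

ratio = 3.7253e-09 / 0.0398
print(ratio, torch.finfo(torch.float32).eps)  # ~9.4e-08 vs ~1.2e-07, the same order of magnitude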
For a further demonstration we can write a mathematically equivalent expression to the one you have as
out_exact = torch.exp(np.log(1/ (2*np.pi*sigma**2)) + (-torch.pow(values_temp, 2).sum(dim=-1)/2/sigma**2))
which agrees exactly with value given by my current installation of pytorch.
| https://stackoverflow.com/questions/64453010/ |
Arbitrary shaped Feedforward Neural Network in Pytorch | I am making a script that has some generative aspect to it, and I need to generate arbitrary shaped feedforward NNs. The idea is to pass a list With the number of number of neurons in each layer, and the number of layers is determined by the length of the list:
shape = [784,64,64,64,10]
I tried something like this:
shapenn = [784,64,64,64,10]
class Net(nn.Module):
def __init__(self, shapenn):
super().__init__()
self.shapenn = shapenn
self.fcl = [] # list with fully connected layers
for i in range(len(self.shapenn) - 1):
self.fcl.append(nn.Linear(self.shapenn[i], self.shapenn[i+1]))
net = Net(shapenn)
While the fully connected layers are created correctly in the list fcl, net is not initialized properly; for example, it has no net.parameters().
I am sure there is a correct way to do this, thank you very much in advance.
| You need to use nn.ModuleList in place of the built-in python list (similarly use nn.ModuleDict in place of python dictionaries). These behave like a normal list except that they must only contain instances that subclass nn.Module, and using them signals that the modules contained in the list should be considered submodules of your module. For example
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super().__init__()
self.fcl = nn.ModuleList()
for i in range(5):
self.fcl.append(nn.Linear(10, 10))
net = Net()
print([name for name, val in net.named_parameters()])
prints
['fcl.0.weight', 'fcl.0.bias', 'fcl.1.weight', 'fcl.1.bias', 'fcl.2.weight', 'fcl.2.bias', 'fcl.3.weight', 'fcl.3.bias', 'fcl.4.weight', 'fcl.4.bias']
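Applied to the original shape list, the same pattern might look like the sketch below (the ReLU between layers is my own assumption, since the question does not specify activations):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, shapenn):
        super().__init__()
        # nn.ModuleList registers each layer as a submodule,
        # so net.parameters() now includes all weights and biases
        self.fcl = nn.ModuleList(
            nn.Linear(shapenn[i], shapenn[i + 1]) for i in range(len(shapenn) - 1)
        )

    def forward(self, x):
        for layer in self.fcl[:-1]:
            x = torch.relu(layer(x))
        return self.fcl[-1](x)

net = Net([784, 64, 64, 64, 10])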
| https://stackoverflow.com/questions/64453392/ |
(image, mask) pairs do not match one another in a semantic segmentation task | I am writing a simple custom DataLoader (which I will add more features to later) for a segmentation dataset, but the (image, mask) pairs I return using the __getitem__() method do not match; the returned mask belongs to a different image than the one which is returned. My directory structure is /home/bohare/data/images and /home/bohare/data/masks .
Following is the code I have:
import torch
from torch.utils.data.dataset import Dataset
from PIL import Image
import glob
import os
import matplotlib.pyplot as plt
class CustomDataset(Dataset):
def __init__(self, folder_path):
self.img_files = glob.glob(os.path.join(folder_path,'images','*.png'))
self.mask_files = glob.glob(os.path.join(folder_path,'masks','*.png'))
def __getitem__(self, index):
image = Image.open(self.img_files[index])
mask = Image.open(self.mask_files[index])
return image, mask
def __len__(self):
return len(self.img_files)
data = CustomDataset(folder_path = '/home/bohare/data')
len(data)
This code correctly gives out the total size of the dataset.
But when I use:
img, msk = data.__getitem__(n) where n is the index of any (image, mask) pair and I plot the image and mask, they do not correspond to one another.
How can I modify/what can I add to the code to make sure the (image, mask) pair are returned correctly? Thanks for the help.
| glob.glob returns the files in arbitrary order, since it internally calls os.listdir:
os.listdir(path) Return a list containing the names of the entries in
the directory given by path. The list is in arbitrary order. It does
not include the special entries '.' and '..' even if they are present
in the directory.
To solve it, you can just sort both so that the order will be the same:
self.img_files = sorted(glob.glob(os.path.join(folder_path,'images','*.png')))
self.mask_files = sorted(glob.glob(os.path.join(folder_path,'masks','*.png')))
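If the image and mask files share the same filenames, you could additionally assert the pairing explicitly; a small sketch (this assumes identical basenames in both folders):
import os

for img, msk in zip(self.img_files, self.mask_files):
    assert os.path.basename(img) == os.path.basename(msk), (img, msk)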
| https://stackoverflow.com/questions/64458201/ |
Trying to understand what "save_for_backward" is in Pytorch | I have some knowledge of PyTorch, but I don't really understand the mechanisms of classes within PyTorch.
For example in the link: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
you can find the following code:
import torch
class MyReLU(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return input.clamp(min=0)
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
grad_input = grad_output.clone()
grad_input[input < 0] = 0
return grad_input
I am only focusing on the forward method of this class, and I am wondering what
ctx.save_for_backward(input)
does. Whether the previous line of code is present or not seems irrelevant when I try the forward method on a concrete example:
a=torch.eye(3)
rel=MyReLU()
print(rel.forward(rel,a))
as I get the same result in both cases. Could someone explain to me what is happening and why it is useful to add save_for_backward?
Thank you in advance.
| The ctx.save_for_backward method is used to store values generated during forward() that will be needed later when performing backward(). The saved values can be accessed during backward() from the ctx.saved_tensors attribute.
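As a minimal illustration (note that a custom Function is normally invoked through MyReLU.apply(...) rather than by calling forward directly, so that autograd records the operation):
import torch

x = torch.randn(5, requires_grad=True)
y = MyReLU.apply(x)   # forward() runs and stores x via save_for_backward
y.sum().backward()    # backward() reads x back from ctx.saved_tensors
print(x.grad)         # 1.0 where x >= 0, otherwise 0.0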
| https://stackoverflow.com/questions/64460017/ |
"view_as_windows" from skimage but in Pytorch | Is there any Pytorch version of view_as_windows from skimage? I want to create the view while the tensor is on the GPU.
| I needed the same functionality from Pytorch and ended up implementing it myself:
def view_as_windows_torch(image, shape, stride=None):
"""View tensor as overlapping rectangular windows, with a given stride.
Parameters
----------
image : `~torch.Tensor`
4D image tensor, with the last two dimensions
being the image dimensions
shape : tuple of int
Shape of the window.
stride : tuple of int
Stride of the windows. By default it is half of the window size.
Returns
-------
windows : `~torch.Tensor`
Tensor of overlapping windows
"""
if stride is None:
stride = shape[0] // 2, shape[1] // 2
windows = image.unfold(2, shape[0], stride[0])
return windows.unfold(3, shape[1], stride[1])
Essentially it is just two lines of Pytorch code relying on torch.Tensor.unfold. You can easily convince yourself that it does the same as skimage.util.view_as_windows:
import torch
x = torch.arange(16).reshape((1, 1, 4, 4))
patches = view_as_windows_torch(image=x, shape=(2, 2))
print(patches)
Gives:
tensor([[[[[[ 0, 1],
[ 4, 5]],
[[ 1, 2],
[ 5, 6]],
[[ 2, 3],
[ 6, 7]]],
[[[ 4, 5],
[ 8, 9]],
[[ 5, 6],
[ 9, 10]],
[[ 6, 7],
[10, 11]]],
[[[ 8, 9],
[12, 13]],
[[ 9, 10],
[13, 14]],
[[10, 11],
[14, 15]]]]]])
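Since unfold only creates a view, the same call should work directly on a GPU tensor without copying, e.g.:
x_gpu = torch.arange(16, device='cuda').reshape((1, 1, 4, 4))
patches_gpu = view_as_windows_torch(image=x_gpu, shape=(2, 2))  # result stays on the GPU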
I hope this helps!
| https://stackoverflow.com/questions/64462917/ |
Convert from Keras to Pytorch - conv2d | I am trying to convert the following Keras code into PyTorch.
tf.keras.Sequential([
Conv2D(128, 1, activation=tf.nn.relu),
Conv2D(self.channel_n, 1, activation=None),
])
When creating the model summary with self.channels=16 i get the following summary.
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (1, 3, 3, 128) 6272
_________________________________________________________________
conv2d_1 (Conv2D) (1, 3, 3, 16) 2064
=================================================================
Total params: 8,336
Trainable params: 8,336
Non-trainable params: 0
How would one convert?
I have attempted it as such:
import torch
from torch import nn
class CellCA(nn.Module):
def __init__(self, channels, dim=128):
super().__init__()
self.net = nn.Sequential(
nn.Conv2d(in_channels=channels,out_channels=dim, kernel_size=1),
nn.ReLU(),
nn.Conv2d(in_channels=dim, out_channels=channels, kernel_size=1),
)
def forward(self, x):
return self.net(x)
However, I get 4240 params
| The attempt above is correct if you configure the input channels correctly (48 in this case: the first Conv2D in the Keras summary has 6272 = 128 * (48 + 1) parameters, which implies 48 input channels).
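A quick sketch to confirm the parameter count then matches the Keras summary (this splits the in/out channel counts apart instead of reusing a single channels argument):
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(in_channels=48, out_channels=128, kernel_size=1),  # 128 * (48 + 1) = 6272 params
    nn.ReLU(),
    nn.Conv2d(in_channels=128, out_channels=16, kernel_size=1),  # 16 * (128 + 1) = 2064 params
)
print(sum(p.numel() for p in net.parameters()))  # 8336, as in the Keras summary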
| https://stackoverflow.com/questions/64463810/ |
How do I make sure Vulkan is using the same GPU as CUDA? | I'm using an application that uses both vulkan and cuda (specifically pytorch) on an HPC cluster (univa grid engine).
When a job is submitted, the cluster scheduler sets an environment variable SGE_HGR_gpu which contains a GPU ID for the job to use (so other jobs run by other users do not use the same GPU)
The typical way to tell an application that uses CUDA to use a specific GPU is to set CUDA_VISIBLE_DEVICES=n
As i'm also using Vulkan, I dont know how to make sure that I choose the same device from those that are listed with vkEnumeratePhysicalDevices.
I think that the order of the values that 'n' can take is the same as the order of the devices on the PCI BUS, however I dont know if the order of the devices returned by vkEnumeratePhysicalDevices are in this order, and the documentation does not specify what this order is.
So how can I go about making sure i'm choosing the same physical GPU for both Vulkan and CUDA?
| With VkPhysicalDeviceIDPropertiesKHR (Vulkan 1.1) or, respectively, VkPhysicalDeviceVulkan11Properties (Vulkan 1.2) you can get the device UUID, which is one of the formats CUDA_VISIBLE_DEVICES seems to use. You should also be able to convert index to UUID (or vice versa) with nvidia-smi -L (or with the NVML library).
Or other way around, cudaDeviceProp includes PCI info which could be compared to VK_EXT_pci_bus_info extensions output.
Whether the order matches in Vulkan is best asked of NVidia directly; I cannot find info on how NV orders them. IIRC from the Vulkan Loader implementation, the order should match the order from the registry, and then the order the NV driver itself gives them. Even so, you would have to filter non-NV GPUs from the list in generic code, and you do not know if the NV Vulkan ICD implementation matches CUDA without asking NV.
| https://stackoverflow.com/questions/64471080/ |
Pytorch CrossEntropyLoss from single dimensional Tensors | I am missing something here: why does CrossEntropyLoss not work with single dimensional Tensors?
from torch import nn, Tensor
X =Tensor([1.0,2.0,3.0])
labs = Tensor([2,2,3])
loss = nn.CrossEntropyLoss().forward(X,labs)
_stacklevel, dtype)
1315 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
1316 if dtype is None:
-> 1317 ret = input.log_softmax(dim)
1318 else:
1319 ret = input.log_softmax(dim, dtype=dtype)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Why does that fail and what should be changed to get the desired result?
| If you look at the documentation here:
Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.
So it expects the input to be a 2D tensor and the target a 1D tensor of class indices:
import torch
from torch import nn, Tensor
X =Tensor([[1.0,2.0,3.0]]) #2D
labs = torch.LongTensor([2]) # 0 <= targets[i] <= C-1
loss = nn.CrossEntropyLoss().forward(X,labs)
| https://stackoverflow.com/questions/64475041/ |
(Efficiently) expanding a feature mask tensor to match embedding dimensions | I have a B (batch size) by F (feature count) mask tensor M, which I'd like to apply (element-wise multiply) to an input x.
...Thing is, my x has had its raw feature columns transformed into embeddings of non-constant width, so its total dimensions are B by E (embedded size).
My draft code is along the lines of:
# Given something like:
M = torch.Tensor([[0.2, 0.8], [0.5, 0.5], [0.6, 0.4]]) # B=3, F=2
x = torch.Tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 0], [11, 12, 13, 14, 15]]) # E=5
feature_sizes = [2, 3] # (Feature 0 embedded to 2 cols, feature 1 to 3)
# In forward() pass:
components = []
for ix, size in enumerate(feature_sizes):
components.append(M[:, ix].view(-1, 1).expand(-1, size))
M_x = torch.cat(components, dim=1)
# Now M_x is (B, E) and can be mapped with x
> M_x = torch.Tensor([
> [0.2, 0.4, 2.4, 3.6, 4],
> [3, 3.5, 4, 4.5, 0],
> [6.6, 7.2, 5.2, 5.6, 6],
> ])
My question is: are there any obvious optimizations I'm missing here? Is that for loop the right way to go about it, or is there some more direct way to achieve this?
I have control over the embedding process, so can store whatever representation is helpful and am e.g. not tied to feature_sizes list of ints.
| Duh, I forgot: An indexing operation could do this!
Given the above situation (but I'll take a more complex feature_sizes to show a bit more clearly), we can pre-compute an index tensor with something like:
# Given that:
feature_sizes = [1, 3, 1, 2]
# Produce nested list e.g. [[0], [1, 1, 1], [2], [3, 3]]:
ixs_per_feature = [[ix] * size for ix, size in enumerate(feature_sizes)]
# Flatten out into a vector e.g. [0, 1, 1, 1, 2, 3, 3]:
mask_ixs = torch.LongTensor(
[item for sublist in ixs_per_feature for item in sublist]
)
# Now can directly produce M_x by indexing M:
M_x = M[:, mask_ixs]
I got a modest speedup by using this method instead of the for loops.
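For what it's worth, torch.repeat_interleave can achieve the same per-column expansion in a single call (not something I benchmarked here):
import torch

repeats = torch.tensor(feature_sizes)              # e.g. [1, 3, 1, 2]
M_x = torch.repeat_interleave(M, repeats, dim=1)   # (B, F) -> (B, E)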
| https://stackoverflow.com/questions/64475723/ |
Same padding when kernel size is even | When the kernel size is odd, we can manually calculate the necessary padding to get an output with the same dimensions as the input, i.e. 'same' padding.
But how can we calculate the padding for kernels with even sizes (e.g. 2x2)?
|
I used padding of 1 and dilation of 2, which resulted in same padding: with a 2x2 kernel, dilation 2 gives an effective kernel size of 3, so padding 1 at stride 1 preserves the input size.
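A quick check of that combination (a minimal sketch; the channel counts are arbitrary):
import torch
import torch.nn as nn

x = torch.rand(1, 3, 32, 32)
conv = nn.Conv2d(3, 8, kernel_size=2, padding=1, dilation=2)
print(conv(x).shape)  # torch.Size([1, 8, 32, 32]) -- same spatial size as the input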
| https://stackoverflow.com/questions/64476632/ |
Is there an analogy for Python's array slicing in C++ (libtorch)? | In Python with PyTorch, if you have an array:
torch.linspace(0, 10, 10)
you can use e.g. only the first three elements by saying
reduced_tensor = torch.linspace(0, 10, 10)[:4].
Is there an analog to the [:] array slicing in C++/libtorch? If not, how can I achieve this easily?
| Yes, you can use Slice and index in libtorch. You can do:
auto tensor = torch::linspace(0, 10, 10).index({ Slice(None, 4) });
You can read more about indexing here.
Basically as its indicated in the documentation :
The main difference is that, instead of using the []-operator similar
to the Python API syntax, in the C++ API the indexing methods are:
torch::Tensor::index (link)
torch::Tensor::index_put_ (link)
It’s also important to note that index types such as None / Ellipsis /
Slice live in the torch::indexing namespace, and it’s recommended to
put using namespace torch::indexing before any indexing code for
convenient use of those index types.
For convenience, here are some of the Python vs C++ conversions taken from the link I just gave:
Here are some examples of translating Python indexing code to C++:
Getter
------
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| Python | C++ (assuming using namespace torch::indexing ) |
+==========================================================+======================================================================================+
| tensor[None] | tensor.index({None}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[Ellipsis, ...] | tensor.index({Ellipsis, "..."}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[1, 2] | tensor.index({1, 2}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[True, False] | tensor.index({true, false}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[1::2] | tensor.index({Slice(1, None, 2)}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[torch.tensor([1, 2])] | tensor.index({torch::tensor({1, 2})}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
| tensor[..., 0, True, 1::2, torch.tensor([1, 2])] | tensor.index({"...", 0, true, Slice(1, None, 2), torch::tensor({1, 2})}) |
+----------------------------------------------------------+--------------------------------------------------------------------------------------+
Translating between Python/C++ index types
------------------------------------------
The one-to-one translation between Python and C++ index types is as follows:
+-------------------------+------------------------------------------------------------------------+
| Python | C++ (assuming using namespace torch::indexing ) |
+=========================+========================================================================+
| None | None |
+-------------------------+------------------------------------------------------------------------+
| Ellipsis | Ellipsis |
+-------------------------+------------------------------------------------------------------------+
| ... | "..." |
+-------------------------+------------------------------------------------------------------------+
| 123 | 123 |
+-------------------------+------------------------------------------------------------------------+
| True | true |
+-------------------------+------------------------------------------------------------------------+
| False | false |
+-------------------------+------------------------------------------------------------------------+
| : or :: | Slice() or Slice(None, None) or Slice(None, None, None) |
+-------------------------+------------------------------------------------------------------------+
| 1: or 1:: | Slice(1, None) or Slice(1, None, None) |
+-------------------------+------------------------------------------------------------------------+
| :3 or :3: | Slice(None, 3) or Slice(None, 3, None) |
+-------------------------+------------------------------------------------------------------------+
| ::2 | Slice(None, None, 2) |
+-------------------------+------------------------------------------------------------------------+
| 1:3 | Slice(1, 3) |
+-------------------------+------------------------------------------------------------------------+
| 1::2 | Slice(1, None, 2) |
+-------------------------+------------------------------------------------------------------------+
| :3:2 | Slice(None, 3, 2) |
+-------------------------+------------------------------------------------------------------------+
| 1:3:2 | Slice(1, 3, 2) |
+-------------------------+------------------------------------------------------------------------+
| torch.tensor([1, 2]) | torch::tensor({1, 2}) |
+-------------------------+------------------------------------------------------------------------+
| https://stackoverflow.com/questions/64482223/ |
No module named 'parse_config' while tryhing to load checkpoint in PyTorch | I have a checkpoint file saved after training a model in Pytorch. I have to inspect it in a different module so I tried to load the checkpoint using the following code.
map_location = lambda storage, loc: storage
checkpoint = torch.load("model.pt", map_location=map_location)
But it raises a ModuleNotFoundError, which I couldn't find a way to resolve.
The error traceback:
Traceback (most recent call last):
File "main.py", line 11, in <module>
model = loadmodel(hook_feature)
File "/home/../model_loader.py", line 21, in loadmodel
checkpoint = torch.load(settings.MODEL_FILE, map_location=map_location)
File "/home/../.conda/envs/envreporting/lib/python3.6/site-packages/torch/serialization.py", line 584, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/../.conda/envs/envreporting/lib/python3.6/site-packages/torch/serialization.py", line 842, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'parse_config'
I couldn't find an already existing issue relevant to this one.
| Is it possible that you used https://github.com/victoresque/pytorch-template for training the model? In that case, the project also saves its config in the checkpoint, and you need to import the parse_config module so that unpickling can resolve it when loading.
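A minimal sketch of that workaround (the repository path below is hypothetical; the point is that a module named parse_config must be importable when unpickling):
import sys
sys.path.append('/path/to/pytorch-template')  # hypothetical checkout of the template repo
import parse_config  # lets pickle resolve the config class stored in the checkpoint
import torch

checkpoint = torch.load("model.pt", map_location=lambda storage, loc: storage)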
| https://stackoverflow.com/questions/64483065/ |
Use pre-trained Nodes from past runs - Pytorch Biggraph | After struggling with this amazing facebookresearch/PyTorch-BigGraph project and its impossible API, I managed to get a grip on how to run it (thanks to a stand-alone simple example).
My system restrictions do not allow me to train the dense (embedding) representation of all edges at once, and I need from time to time to load past embeddings and train the model using both new edges and existing nodes; notice that nodes in the past and new edge lists do not necessarily overlap.
I tried to understand how to do it from here (see the context section), so far with no success.
Following is stand-alone PBG code that turns batch_edges into an embedded node list; however, I need it to use the pre-trained node list past_trained_nodes.
import os
import shutil
from pathlib import Path
from torchbiggraph.config import parse_config
from torchbiggraph.converters.importers import TSVEdgelistReader, convert_input_data
from torchbiggraph.train import train
from torchbiggraph.util import SubprocessInitializer, setup_logging
DIMENSION = 4
DATA_DIR = 'data'
GRAPH_PATH = DATA_DIR + '/output1.tsv'
MODEL_DIR = 'model'
raw_config = dict(
entity_path=DATA_DIR,
edge_paths=[DATA_DIR + '/edges_partitioned', ],
checkpoint_path=MODEL_DIR,
entities={"n": {"num_partitions": 1}},
relations=[{"name": "doesnt_matter", "lhs": "n", "rhs": "n", "operator": "complex_diagonal", }],
dynamic_relations=False, dimension=DIMENSION, global_emb=False, comparator="dot",
num_epochs=7, num_uniform_negs=1000, loss_fn="softmax", lr=0.1, eval_fraction=0.,)
batch_edges = [["A", "B"], ["B", "C"], ["C", "D"], ["D", "B"], ["B", "D"]]
# I want the model to use these pre-trained nodes; notice that node A exists and F does not.
# I don't have all past nodes, as some are gained from the data.
past_trained_nodes = {'A': [0.5, 0.3, 1.5, 8.1], 'F': [3, 0.6, 1.2, 4.3]}
try:
shutil.rmtree('data')
except:
pass
try:
shutil.rmtree(MODEL_DIR)
except:
pass
os.makedirs(DATA_DIR, exist_ok=True)
with open(GRAPH_PATH, 'w') as f:
for edge in batch_edges:
f.write('\t'.join(edge) + '\n')
setup_logging()
config = parse_config(raw_config)
subprocess_init = SubprocessInitializer()
input_edge_paths = [Path(GRAPH_PATH)]
convert_input_data(config.entities, config.relations, config.entity_path, config.edge_paths,
input_edge_paths, TSVEdgelistReader(lhs_col=0, rel_col=None, rhs_col=1),
dynamic_relations=config.dynamic_relations, )
train(config, subprocess_init=subprocess_init)
How can I use my pre-trained nodes in the current model?
Thanks in advance!
| Since torchbiggraph is file based, you can modify the saved files to load pre-trained embeddings and add new nodes. I wrote a function to achieve this
import json
import h5py
def pretrained_and_new_nodes(pretrained_nodes,new_nodes,entity_name,data_dir,embeddings_path):
"""
pretrained_nodes:
A dictionary of nodes and their embeddings
new_nodes:
A list of new nodes,each new node must have an embedding in pretrained_nodes.
If no new nodes, use []
entity_name:
The entity's name, for example, WHATEVER_0
data_dir:
The path to the files that record graph nodes and edges
embeddings_path:
The path to the .h5 file of embeddings
"""
with open('%s/entity_names_%s.json' % (data_dir,entity_name),'r') as source:
nodes = json.load(source)
dist = {item:ind for ind,item in enumerate(nodes)}
if len(new_nodes) > 0:
# modify both the node names and the node count
extended = nodes.copy()
extended.extend(new_nodes)
with open('%s/entity_names_%s.json' % (data_dir,entity_name),'w') as source:
json.dump(extended,source)
with open('%s/entity_count_%s.txt' % (data_dir,entity_name),'w') as source:
source.write('%i' % len(extended))
if len(new_nodes) == 0:
# if no new nodes are added, we won't bother create a new .h5 file, but just modify the original one
with h5py.File(embeddings_path,'r+') as source:
for node,embedding in pretrained_nodes.items():
if node in nodes:
source['embeddings'][dist[node]] = embedding
else:
# if there are new nodes, then we must create a new .h5 file
# see https://stackoverflow.com/a/47074545/8366805
with h5py.File(embeddings_path,'r+') as source:
embeddings = list(source['embeddings'])
optimizer = list(source['optimizer'])
for node,embedding in pretrained_nodes.items():
if node in nodes:
embeddings[dist[node]] = embedding
# append new nodes in order
for node in new_nodes:
if node not in list(pretrained_nodes.keys()):
raise ValueError
else:
embeddings.append(pretrained_nodes[node])
# write a new .h5 file for the embedding
with h5py.File(embeddings_path,'w') as source:
source.create_dataset('embeddings',data=embeddings,)
optimizer = [item.encode('ascii') for item in optimizer]
source.create_dataset('optimizer',data=optimizer)
After you have trained a model (let's say the stand-alone simple example you linked in your post), suppose you want to change the learned embedding of node A to [0.5, 0.3, 1.5, 8.1]. Moreover, you also want to add a new node F to the graph with embedding [3, 0.6, 1.2, 4.3] (this newly added node F has no connections with other nodes). You can run my function with
past_trained_nodes = {'A': [0.5, 0.3, 1.5, 8.1], 'F': [3, 0.6, 1.2, 4.3]}
pretrained_and_new_nodes(pretrained_nodes=past_trained_nodes,
new_nodes=['F'],
entity_name='WHATEVER_0',
data_dir='data/example_1',
embeddings_path='model_1/embeddings_WHATEVER_0.v7.h5')
After you ran this function, you can check the modified file of embeddings embeddings_WHATEVER_0.v7.h5
filename = "model_1/embeddings_WHATEVER_0.v7.h5"
with h5py.File(filename, "r") as source:
embeddings = list(source['embeddings'])
embeddings
and you will see that the embedding of A is changed and the embedding of F is added (the order of the embeddings is consistent with the order of nodes in entity_names_WHATEVER_0.json).
With the files modified, you can use the pre-trained nodes in a new training session.
| https://stackoverflow.com/questions/64483856/ |
torch.cuda.is_available() True in (base), but False in other conda env | I run this code in Anaconda prompt, and it returns True.
(base) C:\User
torch.cuda.is_available()
True
But when I run it in another conda environment, it just doesn't work.
(pytorch_project) C:\User
torch.cuda.is_available()
False
The problem seems to be different results of torch.version.cuda.
(base) torch.version.cuda = 10.1
(pytorch_project) torch.version.cuda = 10.2
But I don't know why they are different...
How do I bring 10.2 down to 10.1 and make is_available() == True?
Here is my info.
Windows 10 / nvidia-smi=425.31 / CUDA ver=10.1 / pytorch=1.4.0 / torchvision=0.5.0
conda list
| Thank you for your answers and comments. <3
I've solved the problem.
I use Visual Studio Code as my development environment, but as shown in the picture I uploaded, conda list pointed at exactly the same directory, which means I didn't actually activate my environment.
It should look like (pytorch) C:\User, but it's PS C:\User (PowerShell) instead.
To solve this, I went to Settings → Terminal > Integrated > Shell Args: Windows and edited settings.json with
"terminal.integrated.shellArgs.windows": ["-ExecutionPolicy", "ByPass", "-NoExit","-Command","& 'C:\\Users\\user\\miniconda3\\shell\\condabin\\conda-hook.ps1'" ]
Reference: https://blog.lcarbon.idv.tw/vscode-設定-anaconda-路徑至-visual-studio-code-終端機中windows/
By starting a new terminal I got (pytorch) PS C:\User correctly, and saw my torch=1.5.0 using conda list.
Then I run conda install pytorch torchvision cudatoolkit=10.1 -c pytorch to change the pytorch version.
And Voilà!!
torch.cuda.is_available()
True
| https://stackoverflow.com/questions/64483963/ |
Directly update the optimizer learning rate | I have a specific learning rate schedule in mind. It is based on the epoch but differs from the generally available ones I am aware of including StepLR.
Is there something that would perform the equivalent to:
optimizer.set_lr(lr)
or
optimizer.set_param('lr,',lr)
I would then simply invoke that method at the end of each epoch (or possibly even more frequently)
Context: I am using the adam optimizer as so:
optimizer = torch.optim.Adam(model.parameters(), lr=LrMax, weight_decay=decay) # , betas=(args.beta1, args.beta2)
Update: I found this information https://discuss.pytorch.org/t/change-learning-rate-in-pytorch/14653:
for param_group in optimizer.param_groups:
param_group['lr'] = lr
Is there a way to ascertain that the adam optimizer being used is employing the new learning rate?
| You can do this in this way:
for param_group in optimizer.param_groups:
param_group['lr'] = lr
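To wire this into an epoch-based schedule, and to confirm the optimizer actually picked up the new value, something like this sketch could work (my_schedule and num_epochs are made-up placeholders for your own schedule and loop):
def my_schedule(epoch, lr_max):
    return lr_max / (1 + epoch)  # hypothetical decay rule

for epoch in range(num_epochs):
    lr = my_schedule(epoch, LrMax)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
    print(optimizer.param_groups[0]['lr'])  # verify Adam is now using the new rate
    # ... run your training loop for this epoch ...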
| https://stackoverflow.com/questions/64489548/ |
Neural Network : Epoch and Batch Size | I am trying to train a neural network to classify words into different categories.
I notice two things:
When I use a smaller batch_size (like 8, 16, 32) the loss is not decreasing, but rather sporadically varying. When I use a larger batch_size (like 128, 256), the loss is going down, but very slowly.
More importantly, when I use a larger EPOCH value, my model does a good job at reducing the loss. However I'm using a really large value (EPOCHS = 10000).
Question:
How to get the optimal EPOCH and batch_size values?
| There is no way to decide on these values based on some rules. Unfortunately, the best choices depend on the problem and the task. However, I can give you some insights.
When you train a network, you calculate a gradient which would reduce the loss. In order to do that, you need to backpropagate the loss. Now, ideally, you compute the loss based on all of the samples in your data, because then you consider basically every sample and you come up with a gradient that captures all of your samples. In practice, this is not possible due to the computational complexity of calculating the gradient on all samples: for every update, you would have to compute a forward pass over all your samples. That case would be batch_size = N, where N is the total number of data points you have.
Therefore, we use small batch_size as an approximation! The idea is instead of considering all the samples, we say I compute the gradient based on some small set of samples but the thing is I am losing information regarding the gradient.
Rule of thumb:
Smaller batch sizes give noisy gradients, but they converge faster because you make more updates per epoch. If your batch size is 1 you will have N updates per epoch. If it is N, you will only have 1 update per epoch. On the other hand, larger batch sizes give a more informative gradient but converge more slowly.
That is the reason why for smaller batch sizes, you observe varying losses because the gradient is noisy. And for larger batch sizes, your gradient is informative but you need a lot of epochs since you update less frequently.
The ideal batch size should be the one that gives you informative gradients but also small enough so that you can train the network efficiently. You can only find it by trying actually.
| https://stackoverflow.com/questions/64493769/ |
How to writte a raster plot? | I'm using pytorch on google colab.
I've got a tensor matrix below as an example; the actual matrix is about 50 neurons by 30,000~50,000 time steps.
a= torch.tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 1., 0.]])
each of values are,
a= torch.tensor([[Neuron1(t=1), N2(t=1), N3(t=1), N4(t=1), N5(t=1)],
[N1(t=2), N2(t=2), N3(t=2), N4(t=2), N5(t=2)],
[N1(t=3), N2(t=3), N3(t=3), N4(t=3), N5(t=3)]])
and 1 means that neuron fire, 0 means not fire.
So Neuron5(t=2), Neuron2(t=3) and Neuron4(t=3) are firing.
Then, I want to make a raster plot or scatter plot like below using this matrix,
The dots show the firing neuron.
neuron number
1|
2| *
3|
4| *
5|__ *_____time
1 2 3
What would be the best python code to do this?
I have no idea now.
Thank you for reading.
| You can do it easily as follows:
import matplotlib.pyplot as plt
a= torch.tensor([[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 1., 0.]],device='cuda')
plt.scatter(*torch.where(a.cpu()))
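If axis labels help, the same idea with explicit names (plt.show() is needed outside notebooks):
t, n = torch.where(a.cpu())
plt.scatter(t + 1, n + 1)  # +1 so time and neuron numbering start at 1, as in the sketch
plt.xlabel('time')
plt.ylabel('neuron number')
plt.show()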
| https://stackoverflow.com/questions/64494641/ |
One instance with multple GPUs or multiple instances with one GPU | I am running multiple models using GPUs and all jobs combined can be run on 4 GPUs, for example. Multiple jobs can be run on the same GPU since the GPU memory can handle it.
Is it a better idea to spin up a powerful instance with all 4 GPUs as part of it and run all the jobs on one instance? Or should I go the route of having multiple instances with 1 GPU on each?
There are a few factors I'm thinking of:
Latency of reading files. Having a local disk on one machine should be faster latency-wise, but it would be quite a few reads from one source. Would this cause any issues?
I would need quite a few vCPUs and a lot of memory to scale the IOPS, since GCP apparently scales IOPS that way. What is the best way to approach this? If anyone has any more on this, I would appreciate pointers.
If in the future I need to downgrade to save costs/downgrade performance, I could simply stop the instance and change my specs.
Having everything on one machine would be easier to work with. I know in production I would want a more distributed approach, but this is strictly experimentation.
Those are my main thoughts. Am I missing something? Thanks for all of the help.
| Ended up going with one machine with multiple GPUs. Just assigned the jobs to the different GPUs to make the memory work.
| https://stackoverflow.com/questions/64495113/ |
How to use a numpy-based external library function within a Tensorflow 2/keras custom layer? | I would be very grateful if someone would help me understanding the following problem.
I am trying to implement a custom layer in tensorflow 2 by using keras (it's a layer derived from the class Layer).
I have overriden the build and the call functions.
When I write my call function I need to invoke a method from an external library which only accepts numpy array. It's a pretty complex function and of course it does not use tensorflow functions.
My call function takes a tensor as input, converts it to numpy (via the .numpy() function), invokes the external method from another library, then converts the numpy array back to a tensor.
Here I have some issues and tried different solutions.
If I run the code, it tells me that ...ops.Tensor has no method .numpy(). If I understood correctly, this is due to the type of tensor the call function receives as input, and I need to use eager mode.
If I compile by setting the run_eagerly flag to true, then during the fit it tells me that the gradient is missing. I was hoping that some form of auto-grad computation was implemented, but maybe that is not what is currently happening.
Source code:
class SimpleLayer(tf.keras.layers.Layer):
def __init__(self):
super(SimpleLayer, self).__init__()
def build(self, input_shape):
shape_a = input_shape[0] #batch_size
shape_b = int(input_shape[1]) #height
shape_c = int(input_shape[2]) #width
shape_d = input_shape[3] #channel_number
self.w = self.add_weight(shape=(shape_d*shape_a, 2, shape_b, shape_c),
initializer='random_uniform', dtype='float64',
trainable=True)
def call(self, inputs):
tf_inputs = inputs.numpy()
sft = tr.transform(tf_inputs) # call to the external library
transformed = tf.math.multiply(sft, self.w)
conv = tf.zeros(np.shape(inputs))
conv = tr.inv_transform(inputs, transformed) # another call to a function that also converts to tensor
return conv. # this is a tensor!
.....
model = tf.keras.Sequential([
layers.experimental.preprocessing.Rescaling(1./255),
pl.SimpleLayer(),
layers.Activation('relu'),
layers.MaxPooling2D(),
pl.SimpleLayer(),
layers.Activation('relu'),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
model.compile(
optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
run_eagerly=True, # adding this to avoid graph building
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=3, batch_size=20,
validation_data=(test_images, test_labels))
Is there any workaround for that? Can I only use tensorflow 2 functions to process the tensor or maybe I am missing something?
I am also considering to switch to pyTorch (which I do not know yet) but I fear to have the same issue. What would you suggest?
| Gradient cannot flow through numpy functions as... numpy has no autograd capabilities of any sort.
Your custom function will not work in either tensorflow2.x nor in pytorch as the operation on numpy arrays are not recorded.
For your call code (in PyTorch it would be forward, otherwise really similar), see comments:
def call(self, inputs):
tf_inputs = inputs.numpy() # gradient breaks
sft = tr.transform(tf_inputs) # still numpy
transformed = tf.math.multiply(sft, self.w)
conv = tf.zeros(np.shape(inputs))
conv = tr.inv_transform(inputs, transformed)
return conv # Tensor with no gradient history
And this would happen for all your SimpleLayer layers.
Possible solutions
Code those operations in differentiable manner (assuming they are differentiable and do not use operations like argmax) either in tensorflow or pytorch (the latter is easier and more intuitive and you would have easier time with it IMO).
If your complicated functions use numpy and are differentiable then you could code them with PyTorch by rewriting np.op to torch.op, as their APIs are pretty similar and interoperate almost seamlessly (see the sketch after this list)
You may go with JAX instead of tf and pytorch as "JAX can automatically differentiate native Python and NumPy functions". Depending on your application it might be worth a shot
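To make the second option concrete, here is a toy sketch with a hypothetical numpy function rewritten in torch ops so that autograd can record it (transform here is an invented stand-in for your external call):
import torch
# hypothetical numpy version: def transform(x): return np.abs(x) * np.tanh(x)
def transform_torch(x):
    # identical math, but every op is recorded by autograd
    return torch.abs(x) * torch.tanh(x)
x = torch.randn(4, requires_grad=True)
transform_torch(x).sum().backward()
print(x.grad)  # gradients flow; the numpy version breaks the graph at .numpy()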
| https://stackoverflow.com/questions/64497818/ |
Validate on entire validation set when using ddp backend with PyTorch Lightning | I'm training an image classification model with PyTorch Lightning and running on a machine with more than one GPU, so I use the recommended distributed backend for best performance ddp (DataDistributedParallel). This naturally splits up the dataset, so each GPU will only ever see one part of the data.
However, for validation, I would like to compute metrics like accuracy on the entire validation set and not just on a part. How would I do that? I found some hints in the official documentation, but they do not work as expected or are confusing to me. What's happening is that validation_epoch_end is called num_gpus times with 1/num_gpus of the validation data each. I would like to aggregate all results and only run the validation_epoch_end once.
In this section they state that when using dp/ddp2 you can add an additional function called like this
def validation_step(self, batch, batch_idx):
loss, x, y, y_hat = self.step(batch)
return {"val_loss": loss, 'y': y, 'y_hat': y_hat}
def validation_step_end(self, self, *args, **kwargs):
# do something here, I'm not sure what,
# as it gets called in ddp directly after validation_step with the exact same values
return args[0]
However, the results are not being aggregated and validation_epoch_end is still called num_gpu times. Is this kind of behavior not available for ddp? Is there some other way how achieve this aggregation behavior?
| training_epoch_end() and validation_epoch_end() receive data that is aggregated from all training / validation batches of the particular process. They simply receive a list of what you returned in each training or validation step.
When using the DDP backend, there's a separate process running for every GPU. There's no simple way to access the data that another process is processing, but there's a mechanism for synchronizing a particular tensor between the processes.
The easiest approach for computing a metric on the entire validation set is to calculate the metric in pieces and then synchronize the resulting tensor, for example by taking the average. self.log() calls will automatically synchronize the value between GPUs when you use sync_dist=True. How the value is synchronized is determined by the reduce_fx argument, which by default is torch.mean.
If you're happy with averaging the metric over batches too, you don't need to override training_epoch_end() or validation_epoch_end() — self.log() will do the averaging for you.
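For example, a hedged sketch reusing the self.step helper from the question:
def validation_step(self, batch, batch_idx):
    loss, x, y, y_hat = self.step(batch)
    acc = (y_hat.argmax(dim=-1) == y).float().mean()
    # sync_dist=True averages each value across the DDP processes before logging
    self.log('val_loss', loss, sync_dist=True)
    self.log('val_acc', acc, sync_dist=True)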
If the metric cannot be calculated separately for each GPU and then averaged, it can get a bit more challenging. It's possible to update some state variables at each step, and then synchronize the state variables at the end of an epoch and calculate the metric. The recommended way is to create a class that derives from the Metric class from the TorchMetrics project. Add the state variables in the constructor using add_state() and override the update() and compute() methods. The API will take care of synchronizing the state variables between the GPU processes.
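A hedged sketch of such a metric, following the TorchMetrics API with add_state(), update() and compute() (the class name is illustrative):
import torch
from torchmetrics import Metric

class GlobalAccuracy(Metric):
    def __init__(self):
        super().__init__()
        # dist_reduce_fx tells TorchMetrics how to combine each state across GPUs
        self.add_state('correct', default=torch.tensor(0), dist_reduce_fx='sum')
        self.add_state('total', default=torch.tensor(0), dist_reduce_fx='sum')

    def update(self, preds, target):
        self.correct += (preds.argmax(dim=-1) == target).sum()
        self.total += target.numel()

    def compute(self):
        return self.correct.float() / self.total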
There's already an accuracy metric in TorchMetrics and the source code is a good example of how to use the API.
| https://stackoverflow.com/questions/64499294/ |
How to check which pytorch version fits torchvision | I am trying to clone and run this repository:
https://github.com/switchablenorms/CelebAMask-HQ
The demo runs with PyTorch 0.4.1, and I am trying to find the corresponding version, how can I find it?
| The corresponding torchvision version for 0.4.1 is 0.2.1.
The easiest way is to look it up in the previous versions section. Only if you couldn't find it, you can have a look at the torchvision release data and pytorch's version. There you can find which version, got release with which version!
| https://stackoverflow.com/questions/64502474/ |
How to implement replication_pad2d layer into coremltools converters as torch_op | I've been trying to convert my pytorch model into CoreML format. However, one of the layers, replication_pad2d, is currently not supported, so I was trying to reimplement it for coremltools.converters using the register operator decorator @register_torch_op. However, I'm struggling to understand the input types well enough to implement the function. I got this implementation, roughly translated from pytorch, but it's not working:
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil import register_torch_op
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
@register_torch_op
def replication_pad2d(context, node):
inputs = _get_inputs(context, node)
x = inputs[0]
a = len(x)
L_list, R_list = [], []
U_list, D_list = [], []
for i in range(a):#i:0, 1
l = x[:, :, :, (a-i):(a-i+1)]
L_list.append(l)
r = x[:, :, :, (i-a-1):(i-a)]
R_list.append(r)
L_list.append(x)
x = mb.concat(L_list+R_list[::-1], axis=3, name=node.name)
for i in range(a):
u = x[:, :, (a-i):(a-i+1), :]
U_list.append(u)
d = x[:, :, (i-a-1):(i-a), :]
D_list.append(d)
U_list.append(x)
x = mb.concat(U_list+D_list[::-1], axis=3, name=node.name)
context.add(x)
but getting the following error
<ipython-input-12-cf14ed84cb93> in replication_pad2d(context, node)
59 inputs = _get_inputs(context, node)
60 x = inputs[0]
---> 61 a = len(x)
62 L_list, R_list = [], []
63 U_list, D_list = [], []
TypeError: object of type 'Var' has no len()
It would be great if someone could help me understand this better, especially the input types node and context.
| I think you can use the existing padding layer as:
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil import register_torch_op
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
@register_torch_op(torch_alias=["replication_pad2d"])
def HackedReplication_pad2d(context, node):
inputs = _get_inputs(context, node)
x = inputs[0]
pad = inputs[1].val
x_pad = mb.pad(x=x, pad=[pad[2], pad[3], pad[0], pad[1]], mode='replicate')
context.add(x_pad, node.name)
The documentation of the padding operation is not that great, so ordering of padding parameters is a guessing game.
| https://stackoverflow.com/questions/64504209/ |
Defining named parameters for a customized NN module in Pytorch | This question is about how to appropriately define the parameters of a customized layer in Pytorch. I am wondering how one can make the parameter to end up being named parameter?
class My_layer(torch.nn.Module):
def __init__(self):
self.mu = torch.nn.Parameter(torch.tensor([[0.0],[1.0]]))
So that when I print the parameters as below, the p.name is not empty.
my_layer = My_Layer()
for p in my_layer.parameters():
print(p.name, p.data, p.requires_grad)
| You are registering your parameter properly, but you should use nn.Module.named_parameters rather than nn.Module.parameters to access the names. Currently you are attempting to access Parameter.name, which is probably not what you want. The name attribute of Parameter and Tensor do not appear to be documented, but as far as I can tell, they refer to an internal attribute of the _C._TensorBase class and cannot be modified externally.
Every time you assign a Parameter to an attribute of your module it is registered with a name (this occurs in nn.Module.__setattr__ here). The parameter always takes the same name as the attribute itself, so "mu" in this case. To iterate over all the parameters and their associated names use nn.Module.named_parameters. For example,
my_layer = My_Layer()
for n, p in my_layer.named_parameters():
print('Parameter name:', n)
print(p.data)
print('requires_grad:', p.requires_grad)
which prints
Parameter name: mu
tensor([[0.],
[1.]])
requires_grad: True
| https://stackoverflow.com/questions/64507404/ |
Pytorch, Pre-trained model: How to use feature and classifier at the same time | I'm using vgg16 to extract image feature vectors. I want to get a 1x4096 vector from the 2nd-to-last layer.
My code:
def get_model():
model = models.vgg16(pretrained=True)#.features[:].classifier[:4]
model = model.eval()
# model.cuda() # send the model to GPU, DO NOT include this line if you haven't a GPU
return model
But I can only get a 1x1000 vector from the last layer.
I know how to use features and classifier, but I don't know how to use them at the same time.
use classifier only:
use features only:
use them at the same time:
log:
Traceback (most recent call last):
File "/mnt/c/Users/sunji/PycharmProjects/image_cluster_pytorch/main.py", line 7, in <module>
model = calc.get_model()
File "/mnt/c/Users/sunji/PycharmProjects/image_cluster_pytorch/imagecluster/calc.py", line 17, in get_model
model = models.vgg16(pretrained=True).features[:].classifier[:4]
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 771, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'Sequential' object has no attribute 'classifier'
| I've found the solution myself.
Sorry for bothering Stack Overflow.
here is the code:
def get_model():
model = models.vgg16(pretrained=True)
model.features = model.features[:]
model.classifier = model.classifier[:4]
model = model.eval()
# model.cuda() # send the model to GPU, DO NOT include this line if you haven't a GPU
return model
result:
I think this is the right answer.
| https://stackoverflow.com/questions/64509012/ |
Inconsitencies/Inaccuracies between torchvision.datasets.MNIST and Michael Nielsens neuralnetworksanddeeplearning | I printed the vectorized form of the first training image of the mnist dataset of pytorch and https://github.com/mnielsen/neural-networks-and-deep-learning/tree/master/data .
The difference seems too big for just floating point precision error.
Full Diff of first mnist train image:
https://www.diffchecker.com/6y6YTFiN
Code to reproduce:
import mnist_loader
import numpy
numpy.set_printoptions(formatter={'float': '{: 0.4f}'.format})
d = mnist_loader.load_data()
print numpy.array(d[0][0][0]).reshape(784,1)
pytorch:
import torch
import torchvision
from torchvision import transforms as tf
dtype = torch.float32
transforms = tf.Compose([tf.ToTensor()])
mnist_data = torchvision.datasets.MNIST("./mnist", transform=transforms, download=True)
data_loader = torch.utils.data.DataLoader(mnist_data,
batch_size=500,
shuffle=False,
num_workers=1)
data, target = next(data_loader.__iter__())
print(data[0].reshape(784,1))
Anyone an idea what's causing this ?
| MNIST images consist of pixel values that are integers in the range 0 to 255 (inclusive). To produce the tensors you are looking at, those integer values have been normalised to lie between 0.0 and 1.0, by dividing them all by some constant factor. It appears that your two sources chose different normalising factors: 255 in one case and 256 in the other.
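You can see the size of the effect directly; it is far larger than floating point noise:
import numpy as np
pixels = np.array([0, 128, 255])
print(pixels / 255)  # [0.         0.50196078 1.        ]
print(pixels / 256)  # [0.         0.5        0.99609375]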
| https://stackoverflow.com/questions/64512021/ |
PyTorch not updating weights when using autograd in loss function | I am trying to use the gradient of a network with respect to its inputs as part of my loss function. However, whenever I try to calculate it, the training proceeds but the weights do not update
import torch
import torch.optim as optim
import torch.autograd as autograd
ic = torch.rand((25, 3))
ic = torch.tensor(ic, requires_grad=True)
optimizer = optim.RMSprop([ic], lr=1e-2)
for itr in range(1, 50):
optimizer.zero_grad()
sol = torch.tanh(.5*torch.stack(100*[ic])) # simplified for minimal working example
dx = sol[-1, :, 0]
dxdxy, = autograd.grad(dx,
inputs=ic,
grad_outputs = torch.ones(ic.shape[0]), # batchwise
retain_graph=True
)
dxdxy = torch.tensor(dxdxy, requires_grad=True)
loss = torch.sum(dxdxy)
loss.backward()
optimizer.step()
if itr % 5 == 0:
print(loss)
What am I doing wrong?
| When you run autograd.grad without setting flag create_graph to True then you won't obtain an output which is connected to the computational graph, which means that you won't be able to further optimize w.r.t ic (and obtain a higher order derivative as you wish to do here).
From torch.autograd.grad's docstring:
create_graph (bool, optional): If True, graph of the derivative will
be constructed, allowing to compute higher order derivative products.
Default: False.
Using dxdxy = torch.tensor(dxdxy, requires_grad=True) as you've tried here won't help since the computational graph which is connected to ic has been lost by then (since create_graph is False), and all you do is create a new computational graph with dxdxy being a leaf node.
See the solution attached below (note that when you create ic you can set requires_grad=True and hence the second line is redundant (that's not a logical problem but just longer code):
import torch
import torch.optim as optim
import torch.autograd as autograd
ic = torch.rand((25, 3),requires_grad=True) #<-- requires_grad to True here
#ic = torch.tensor(ic, requires_grad=True) #<-- redundant
optimizer = optim.RMSprop([ic], lr=1e-2)
for itr in range(1, 50):
optimizer.zero_grad()
sol = torch.tanh(.5 * torch.stack(100 * [ic])) # simplified for minimal working example
dx = sol[-1, :, 0]
dxdxy, = autograd.grad(dx,
inputs=ic,
grad_outputs=torch.ones(ic.shape[0]), # batchwise
retain_graph=True, create_graph=True # <-- important
)
#dxdxy = torch.tensor(dxdxy, requires_grad=True) #<-- won't do the trick. Remove
loss = torch.sum(dxdxy)
loss.backward()
optimizer.step()
if itr % 5 == 0:
print(loss)
| https://stackoverflow.com/questions/64513183/ |
How to define neural network neuron-by-neuron? | So we want to have a random brain-like neuron mess. Meaning:
We have AxB inputs and CxD outputs.
We want to have K (where K >= CxD) neurons that are connected to each other at random.
Yet so that all K neurons are connected to at least one of AxB inputs,
And all K neurons are connected to at least one CxD output.
Something like ths (here AxB=5, K=4, CxD=2):
Operations that neurons shall do are a weighted sum + some reduction like LeakyReLU.
So one can imagine that when the connection randomness is controlled so that connections are localised over image patches, like the layers of a CNN, it can produce interesting results.
How one could do such thing (handle neurons one-by-one) in PyTorch?
| There are two issues here: The major issue is how to randomly sample the connections, and a minor issue of how to optimize a sparse linear layer.
As for the minor issue, you can implement a sparse fully-connected layer based on the linked answer.
As for the random connectivity, you'll have to implement it in a way that there are no "loops" (cycles) in the connectivity graph, so that a forward pass stays well defined.
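For the sparse part, a hedged sketch of a fixed-random-connectivity linear layer: a dense weight matrix multiplied by a frozen binary mask (the class name and the density parameter are illustrative):
import torch

class MaskedLinear(torch.nn.Module):
    # dense weights times a frozen random binary mask = fixed sparse connectivity
    def __init__(self, in_features, out_features, density=0.1):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))
        mask = (torch.rand(out_features, in_features) < density).float()
        # guarantee every output neuron receives at least one input connection
        mask[torch.arange(out_features), torch.randint(in_features, (out_features,))] = 1.0
        self.register_buffer('mask', mask)

    def forward(self, x):
        return torch.nn.functional.linear(x, self.weight * self.mask, self.bias)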
| https://stackoverflow.com/questions/64515718/ |
how to find out the method definition of torch.nn.Module.parameters() | I am following this notebook:
One of the method:
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x n_hidden,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
return hidden
I would like to see what type weight is and how to use the new() method, so I was trying to find out about the parameters() method, as the data attribute comes from the parameters() method.
Surprisingly, I cannot find where it comes from after reading the source code of the nn module in PyTorch.
How do you figure out where to see the definition of methods you saw from PyTorch?
|
so I was trying to find out about the parameters() method, as the data
attribute comes from the parameters() method.
Surprisingly, I cannot find where it comes from after reading the
source code of the nn module in PyTorch.
You can see the module definition under torch/nn/modules/module.py here at line 178.
You can then easily spot the parameters() method here.
How do you guys figure out where to see the definition of methods you
saw from PyTorch?
The easiest way that I myself always use, is to use VSCode's Go to Definition or its Peek -> Peek definition feature.
I believe Pycharm has a similar functionality as well.
| https://stackoverflow.com/questions/64519933/ |
PyTorch: What is numpy.linalg.multi_dot() equivalent in PyTorch | I am trying to perform matrix multiplication of multiple matrices in PyTorch and was wondering what is the equivalent of numpy.linalg.multi_dot() in PyTorch?
If there isn't one, what is the next best way (in terms of speed and memory) I can do this in PyTorch?
Code:
import numpy as np
import torch
A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
C = np.random.rand(3, 3)
results = np.linalg.multi_dot([A, B, C])  # multi_dot takes a sequence of arrays
A_tsr = torch.tensor(A)
B_tsr = torch.tensor(B)
C_tsr = torch.tensor(C)
# What is the PyTorch equivalent of np.linalg.multi_dot()?
Many thanks!
| ~~Looks like one can send tensors into multi_dot~~
Looks like the numpy implementation casts everything into numpy arrays. If your tensors are on the cpu and detached this should work. Otherwise, the conversion to numpy would fail.
So in general - likely there isn't an alternative. I think your best shot is to take the multi_dot implementation, e.g. from here for numpy v1.19.0 and adjust it to handle tensors / skip the cast to numpy. Given the similar interface and the code simplicity I think that this should be pretty straightforward.
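Until then, a minimal workaround is to chain torch.matmul yourself; note this uses plain left-to-right ordering and loses multi_dot's optimal parenthesization (and, if I recall correctly, newer PyTorch releases later added torch.linalg.multi_dot natively):
from functools import reduce
import torch

tensors = [A_tsr, B_tsr, C_tsr]
results = reduce(torch.matmul, tensors)  # computes (A @ B) @ C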
| https://stackoverflow.com/questions/64520994/ |
How to replicate PyTorch's nn.functional.unfold function in Tensorflow? | I want to use tensorflow to rewrite the pytorch's torch.nn.functional.unfold function:
#input x:[16, 1, 50, 36]
x = torch.nn.functional.unfold(x, kernel_size=(5, 36), stride=3)
#output x:[16, 180, 16]
I tried to use the function tf.extract_image_patches():
x = tf.extract_image_patches(x,ksizes=[1, 1,5, 98],strides=[1, 1, 3, 1], rates=[1, 1, 1, 1],padding='VALID')
The input x.shape:[16,1,64,98]
I get the output x.shape:[16,1,20,490]
Then I reshape X to [16,490,20], which is what I expect.
But I get the error when I feed the data:
UnimplementedError (see above for traceback): Only support ksizes across space.
[[Node:hcn/ExtractImagePatches = ExtractImagePatches[T=DT_FLOAT, ksizes=[1, 1, 5, 98], padding="VALID", rates=[1, 1, 1, 1], strides=[1, 1, 3, 1], _device="/job:localhost/replica:0/task:0/device:GPU:0"](hcn/Reshape)]]
How can I use tensorflow to rewrite the pytorch torch.nn.functional.unfold function to transform X this way?
| x = tf.reshape(x, [16, 50, 36, 1])
x = tf.extract_image_patches(x, ksizes=[1, 4, 98, 1], strides=[1, 4, 1, 1], rates=[1, 1, 1, 1], padding='VALID')
| https://stackoverflow.com/questions/64523441/ |
Can you accelerate torch DL training on anything other than "cuda" like "hip" or "OpenCL"? | I've noticed that torch.device can accept a range of arguments, precisely cpu, cuda, mkldnn, opengl, opencl, ideep, hip, msnpu.
However, when training deep learning models, I've only ever seen cuda or cpu being used. Very often the code looks something like this
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
I've never seen any of the others being used, and was wondering if they can be used and how. The latest MacBooks with an AMD graphic card I believe should be able to use "hip", but is that true? And will the training speed be similar to that of using one CUDA GPU? If not, what is the point in torch.device accepting so many options if they cannot actually be used?
| If you want to use a GPU for deep learning there is selection between CUDA and CUDA...
More broad answer, yes there is AMD's hip and some OpenCL implementation:
The is hip by AMD - CUDA like interface with ports of pytorch, hipCaffe, tensorflow, but
AMD's hip/rocm is supported only on Linux - there is no Windows or Mac OS support provided by rocm
Even if you want to use Linux with an AMD GPU + ROCm, you have to stick to GCN discrete devices (i.e. cards like the RX 580, Vega 56/64 or Radeon VII); there is no hip/rocm support for RDNA devices (a year since their release), it does not look like there will be any time soon, and APUs aren't supported by hip either.
The only popular frameworks that support OpenCL are Caffe and Keras+PlaidML. But
Caffe's issues:
Caffe seems to not be actively developed any more and is somewhat outdated by today's standards
Performance of the Caffe OpenCL implementation is about 1/2 of what is provided by nVidia's cuDNN and AMD's MIOpen, but it works quite OK and I used it in many cases.
The latest version had an even greater performance hit https://github.com/BVLC/caffe/issues/6585 but at least you can run a version a few changes behind that works
Also, while Caffe/OpenCL works, there are still some bugs I fixed manually for OpenCL over AMD. https://github.com/BVLC/caffe/issues/6239
Keras/Plaid-ML
Keras on its own is a much weaker framework in terms of the ability to access lower-level functionality
PlaidML performance is still 1/2 to 1/3 of optimized nVidia cuDNN & AMD MIOpen-ROCm - and slower than Caffe OpenCL in the tests I did
The future of non-TF backends for Keras is not clear, since from 2.4 on it requires TF...
Bottom line:
If you have a GCN discrete AMD GPU and you run Linux, you can use ROCm+HIP. Yet it isn't as stable as CUDA
You can try OpenCL Caffe or Keras-PlaidML - they may be slower and not as optimal as other solutions, but you have a higher chance of making them work.
Edit 2021-09-14: there is a new project dlprimitives:
https://github.com/artyom-beilis/dlprimitives
that has better performance than both Caffe-OpenCL and Keras - it reaches ~75% of Keras/TF2's training performance - however it is under early development and at this point has a much more limited set of layers than Caffe/Keras-PlaidML
The connection to pytorch is work in progress with some initial results: https://github.com/artyom-beilis/pytorch_dlprim
Disclaimer: I'm the author of this project
| https://stackoverflow.com/questions/64523498/ |
How to invert a PyTorch Embedding? | I have an multi-task encoder/decoder model in PyTorch with a (trainable) torch.nn.Embedding embedding layer at the input.
In one particular task, I'd like to pre-train the model self-supervised (to re-construct masked input data) and use it for inference (to fill in gaps in data).
I guess for training time I can just measure loss as the distance between the input embedding and the output embedding... But for inference, how do I invert an Embedding to reconstruct the proper category/token the output corresponds to? I can't see e.g. a "nearest" function on the Embedding class...
| You can do it quite easily:
import torch
embeddings = torch.nn.Embedding(1000, 100)
my_sample = torch.randn(1, 100)
distance = torch.norm(embeddings.weight.data - my_sample, dim=1)
nearest = torch.argmin(distance)
Assuming you have 1000 tokens with 100 dimensionality this would return nearest embedding based on euclidean distance. You could also use other metrics in similar manner.
| https://stackoverflow.com/questions/64523788/ |
How to multiply torch[N,1] and torch[1,N]? | I'd like to calculate a matrix (shape[N,N]) by multiplying 2 torch vectors A(shape[N,1]) and B=A'(shape[1,N]).
When I use torch.matmul or torch.mm, I get errors or A'A (shape [1,1]).
If A is denoted as
A = [a_1, a_2, ..., a_N]', I want to calculate a matrix C whose (i,j)element is
for i in range(N):
for j in range(N):
C(i,j) = a_i * a_j
I'd like to calculate this quickly. Do you have any ideas?
thank you for your help!
| If I understood you correctly you can do something like this :
A = torch.randint(0,5,(5,))
C = (A.view(1, -1) * A.view(-1, 1)).to(torch.float)
it produces :
tensor([[ 1., 4., 3., 3., 0.],
[ 4., 16., 12., 12., 0.],
[ 3., 12., 9., 9., 0.],
[ 3., 12., 9., 9., 0.],
[ 0., 0., 0., 0., 0.]])
which is equivalent to writing:
D = torch.zeros((5,5))
for i in range(5):
for j in range(5):
D[i][j] = A[i] * A[j]
which results in:
tensor([[ 1., 4., 3., 3., 0.],
[ 4., 16., 12., 12., 0.],
[ 3., 12., 9., 9., 0.],
[ 3., 12., 9., 9., 0.],
[ 0., 0., 0., 0., 0.]])
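As a side note, PyTorch ships an outer-product primitive for exactly this pattern (torch.ger in older releases, torch.outer in newer ones), which produces the same matrix:
C = torch.outer(A.to(torch.float), A.to(torch.float))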
| https://stackoverflow.com/questions/64530828/ |
Retrieving original data from PyTorch nn.Embedding | I'm passing a dataframe with 5 categories (ex. car, bus, ...) into nn.Embedding.
When I do embedding.parameters(), I can see that there are 5 tensors, but how do I know which index corresponds to the original input (ex. car, bus, ...)?
| You can't as tensors are unnamed (only dimensions can be named, see PyTorch's Named Tensors).
You have to keep the names in separate data container, for example (4 categories here):
import pandas as pd
import torch
df = pd.DataFrame(
{
"bus": [1.0, 2, 3, 4, 5],
"car": [6.0, 7, 8, 9, 10],
"bike": [11.0, 12, 13, 14, 15],
"train": [16.0, 17, 18, 19, 20],
}
)
df_data = df.to_numpy().T
df_names = list(df)
embedding = torch.nn.Embedding(df_data.shape[0], df_data.shape[1])
embedding.weight.data = torch.from_numpy(df_data)
Now you can simply use it with any index you want:
index = 1
embedding(torch.tensor(index)), df_names[index]
This would give you (tensor[6, 7, 8, 9, 10], "car") so the data and respective column name.
| https://stackoverflow.com/questions/64531656/ |
PyTorch get_device_capability() Output Explanation | when running torch.cuda.get_device_capability() on my GTX 1070 I get the following output: (6, 1).
Could someone explain what this means?
| As stated in the comment, those are actully your GPU's compute capability version/indicator.
To put it simply, it describes the features supported by your GPU. You can also view your GPU compute capability using GPU-Z if you are on windows.
For more information concerning the exact feature set differences you can have a look Here.
As you can see, this allows developers to know which feature set is available to them, and thus to enable some features for hardware that supports them while otherwise falling back to other implementations.
This may be useful as well.
| https://stackoverflow.com/questions/64535324/ |
How can I see summary statistics (e.g. number of samples; type of data) of HuggingFace datasets? | I'm looking for suitable datasets to test a few new machine learning ideas. Is there any way to see summary statistics (e.g. number of samples; type of data) of HuggingFace datasets?
They provide descriptions here https://huggingface.co/datasets , but it's a bit hard to filter them.
| Not sure if I have missed the obvious, but I think you have to code it by yourself. When you use list_datasets, you only get general information for each dataset:
from datasets import list_datasets
list_datasets(with_details=True)[1].__dict__
Output:
{'id': 'ag_news',
'key': 'datasets/datasets/ag_news/ag_news.py',
'lastModified': '2020-09-15T08:26:31.000Z',
'description': "AG is a collection of more than 1 million news articles. News articles have been\ngathered from more than 2000 news sources by ComeToMyHead in more than 1 year of\nactivity. ComeToMyHead is an academic news search engine which has been running\nsince July, 2004. The dataset is provided by the academic comunity for research\npurposes in data mining (clustering, classification, etc), information retrieval\n(ranking, search, etc), xml, data compression, data streaming, and any other\nnon-commercial activity. For more information, please refer to the link\nhttp://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .\n\nThe AG's news topic classification dataset is constructed by Xiang Zhang\n([email protected]) from the dataset above. It is used as a text\nclassification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann\nLeCun. Character-level Convolutional Networks for Text Classification. Advances\nin Neural Information Processing Systems 28 (NIPS 2015).",
'citation': '@inproceedings{Zhang2015CharacterlevelCN,\n title={Character-level Convolutional Networks for Text Classification},\n author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},\n booktitle={NIPS},\n year={2015}\n}',
'size': 3991,
'etag': '"560ac59ac8cb6f76ac4180562a7f9342"',
'siblings': [datasets.S3Object('ag_news.py'),
datasets.S3Object('dataset_infos.json'),
datasets.S3Object('dummy/0.0.0/dummy_data.zip')],
'author': None,
'numModels': 1}
What you are actually looking for is the information that is provided by load_dataset:
from datasets import load_dataset
squad = load_dataset('squad')
squad
Output:
DatasetDict({'train': Dataset(features: {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}, num_rows: 87599), 'validation': Dataset(features: {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}, num_rows: 10570)})
Here you get the number of samples for each split (num_rows) and the datatype of each feature. But load_dataset will load the whole dataset, which can be undesired behavior and should therefore be avoided for performance reasons.
An alternative is the following (unless I have overlooked a parameter that allows loading only the dataset_infos.json of each dataset):
import datasets
import requests
from datasets import list_datasets
from datasets.utils.file_utils import REPO_DATASETS_URL
sets = list_datasets()
version = datasets.__version__
name = 'dataset_infos.json'
summary =[]
for d in sets:
print('loading {}'.format(d))
try:
r = requests.get(REPO_DATASETS_URL.format(version=version, path=d, name=name))
summary.append(r.json())
except:
print('Could not load {}'.format(d))
#the features and splits values are probably interesting for you
print(summary[0]['default']['features'])
print(summary[0]['default']['splits'])
Output:
{'email_body': {'dtype': 'string', 'id': None, '_type': 'Value'}, 'subject_line': {'dtype': 'string', 'id': None, '_type': 'Value'}}
{'test': {'name': 'test', 'num_bytes': 1384177, 'num_examples': 1906, 'dataset_name': 'aeslc'}, 'train': {'name': 'train', 'num_bytes': 11902668, 'num_examples': 14436, 'dataset_name': 'aeslc'}, 'validation': {'name': 'validation', 'num_bytes': 1660730, 'num_examples': 1960, 'dataset_name': 'aeslc'}}
P.S.: I haven't checked the dataset_infos.json of the datasets that weren't loaded. They probably have a more complex structure or errors inside.
| https://stackoverflow.com/questions/64542535/ |
Explain CUDA out of memory in Pytorch | Can anybody help me understand the meaning of this common problem in Pytorch?
Model: EfficientDet-D4
GPU: RTX 2080Ti
Batch size: 2
CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 11.00 GiB total capacity; 8.32 GiB already allocated; 2.59 MiB free; 8.37 GiB reserved in total by PyTorch)
Anyway, I think the model and GPU are not important here and I know the solution should be to reduce the batch size, turn off gradients while validating, etc. But I just want to know what is the meaning of 8.32 GiB when I have 11 GiB but cannot allocate 14.00 MiB more?
Addition: I tried watching nvidia-smi while training with batch size = 1; it took 9.5 GiB on my GPU.
| I got the answer from @ptrblck in the PyTorch community, where I described my question in more detail than in this question.
Please check the answer here.
| https://stackoverflow.com/questions/64548199/ |
How to convert a PyTorch sparse_coo_tensor into a PyTorch dense tensor? | I create a sparse_coo tensor in PyTorch:
import torch
# create indices
i = torch.tensor([[0, 1, 1],
[2, 0, 2]])
# create values
v = torch.tensor([3, 4, 5], dtype=torch.float32)
# create sparse_coo_tensor
sparse_tensor = torch.sparse_coo_tensor(i, v, [2, 4])
Now I want to convert a PyTorch sparse tensor into a PyTorch dense tensor. Which function can be used?
| You can use to_dense as suggested in this example:
s = torch.sparse_coo_tensor(i, v, [2, 4])
s_dense = s.to_dense()
And by the way, the documentation is here
| https://stackoverflow.com/questions/64553148/ |
Train network coded with nn.ModuleDict | I was assigned to write a simple network with nn.ModuleDict. So, here it is:
third_model = torch.nn.ModuleDict({
'flatten': torch.nn.Flatten(),
'fc1': torch.nn.Linear(32 * 32 * 3, 1024),
'relu': torch.nn.ReLU(),
'fc2': torch.nn.Linear(1024, 240),
'relu': torch.nn.ReLU(),
'fc3': torch.nn.Linear(240, 10)})
Then I tried to train it (with cuda):
third_model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(third_model.parameters(), lr=0.001, momentum=0.9)
train(third_model, criterion, optimizer, train_dataloader, test_dataloader)
Function "train(model, criterion, optimizer, train_dataloader, test_dataloader)" trains the model and visualizes loss and accuracy of the model. It works properly.
Train:
def train(model, criterion, optimizer, train_dataloader, test_dataloader):
train_loss_log = []
train_acc_log = []
val_loss_log = []
val_acc_log = []
for epoch in range(NUM_EPOCH):
model.train()
train_loss = 0.
train_size = 0
train_acc = 0.
for imgs, labels in train_dataloader:
imgs, labels = imgs.to(device), labels.to(device)
optimizer.zero_grad()
y_pred = model(imgs)
loss = criterion(y_pred, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
train_size += y_pred.size(0)
train_loss_log.append(loss.data / y_pred.size(0))
_, pred_classes = torch.max(y_pred, 1)
train_acc += (pred_classes == labels).sum().item()
train_acc_log.append(np.mean((pred_classes == labels).cpu().numpy()))
val_loss = 0.
val_size = 0
val_acc = 0.
model.eval()
with torch.no_grad():
for imgs, labels in test_dataloader:
imgs, labels = imgs.to(device), labels.to(device)
pred = model(imgs)
loss = criterion(pred, labels)
val_loss += loss.item()
val_size += pred.size(0)
_, pred_classes = torch.max(pred, 1)
val_acc += (pred_classes == labels).sum().item()
val_loss_log.append(val_loss / val_size)
val_acc_log.append(val_acc / val_size)
clear_output()
plot_history(train_loss_log, val_loss_log, 'loss')
plot_history(train_acc_log, val_acc_log, 'accuracy')
print('Train loss:', train_loss / train_size)
print('Train acc:', train_acc / train_size)
print('Val loss:', val_loss / val_size)
print('Val acc:', val_acc / val_size)
I've already trained models coded with nn.Sequential and everything is okay. However, with nn.ModuleDict I get an error:
TypeError Traceback (most recent call last)
<ipython-input-144-8b33ad3aad2c> in <module>()
2 optimizer = optim.SGD(third_model.parameters(), lr=0.001, momentum=0.9)
3
----> 4 train(third_model, criterion, optimizer, train_dataloader, test_dataloader)
1 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
TypeError: forward() takes 1 positional argument but 2 were given
Tried to find any documentation on nn.ModuleDict, but it seems like there are no examples of coding networks with it.
It seems like the problem might be with linear layers, although I do not know why.
So, I hope someone can explain where the mistake is. I would be very grateful for any advice.
| An nn.ModuleDict is a container and its forward function is not defined. It should be used to store sub-modules/networks.
You should use nn.Sequential initialized with an ordered dictionary, OrderedDict:
from collections import OrderedDict

third_model = torch.nn.Sequential(
    OrderedDict([
        ('flatten', torch.nn.Flatten()),
        ('fc1', torch.nn.Linear(32 * 32 * 3, 1024)),
        ('relu1', torch.nn.ReLU()),  # keys must be unique; a repeated 'relu' key
        ('fc2', torch.nn.Linear(1024, 240)),
        ('relu2', torch.nn.ReLU()),  # would silently overwrite the first entry
        ('fc3', torch.nn.Linear(240, 10))]))
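If you do want to keep nn.ModuleDict, a hedged alternative sketch is to wrap it in a module that defines its own forward and walks the entries in insertion order (ModuleDict preserves it; keys must still be unique):
class DictNet(torch.nn.Module):
    def __init__(self, module_dict):
        super().__init__()
        self.layers = torch.nn.ModuleDict(module_dict)

    def forward(self, x):
        for layer in self.layers.values():  # iterate in insertion order
            x = layer(x)
        return x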
| https://stackoverflow.com/questions/64554137/ |
Why did my loaded Pytorch model's loss increase sharply? | I'm trying to train Arcface based on a reference implementation.
As far as I know, Arcface requires more than 200 training epochs on CASIA-webface with a large batch size.
Within 100 epochs of training, I stopped the training for a while because I needed to use the GPU for other tasks, and the checkpoints of the model (ResNet) and margin were saved. Before it was stopped, its loss was between 0.3~1.0 and the training accuracy had climbed to 80~95%.
When I resume the Arcface training by loading the checkpoint files using load_state, it seems normal when the first batch is processed. But suddenly the loss increases sharply and the accuracy becomes very low.
Its loss suddenly increased. How did this happen? I had no other option, so I continued the training anyway, but I don't think the loss is decreasing well even though the model has been trained for over 100 epochs...
When I searched for similar issues, they said the problem was that the optimizer was not saved (the reference GitHub page didn't save the optimizer, and neither did I). Is that true?
My losses after loading
| If you look at this line, you are decaying the learning rate of each parameter group by gamma.
This altered your learning rate once you reached the 100th epoch, and moreover you had not saved your optimizer state while saving your model.
That made your code start again with the initial lr, i.e. 0.1, after resuming training.
And this spiked your loss again.
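A minimal sketch of checkpointing that avoids this, assuming model, optimizer and scheduler are your existing objects (the file name is illustrative):
# saving
torch.save({
    'epoch': epoch,
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),
}, 'checkpoint.pkl')

# resuming
ckpt = torch.load('checkpoint.pkl')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])   # restores momentum buffers
scheduler.load_state_dict(ckpt['scheduler'])   # restores the decayed lr
start_epoch = ckpt['epoch'] + 1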
Vote if you find this useful
| https://stackoverflow.com/questions/64554881/ |
How to learn two functions simultaneously in using python (either pytorch or tensorflow)? | I have three series of observations, namely Y, T, and X. I would like to study the differences between the predicted values of the two models. The first model is to learn g such that Y=g(T, X). The second model is to learn L and f such that Y=L(T)f(X). I have no problem in learning the first model using the PyTorch package or the Tensorflow package. However, I am not sure how to learn L and f. In using the PyTorch package, I can set up two feedforward MLPs with different hidden layers and inputs. For simplicity, I define a Feedforward MLP class as follows:
class Feedforward(t.nn.Module): # the definition of a feedforward neural network
# Basic definition
def __init__(self, input_size, hidden_size):
super(Feedforward, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.fc1 = t.nn.Linear(self.input_size, self.hidden_size)
self.relu = t.nn.ReLU()
self.fc2 = t.nn.Linear(self.hidden_size, 1)
self.sigmoid = t.nn.Sigmoid()
# Advance definition
def forward(self, x):
hidden = self.fc1(x)
relu = self.relu(hidden)
output = self.fc2(relu)
output = self.sigmoid(output)
return output
Suppose L=Feedforward(2,10) and f=Feedforward(3,9). From my understanding, I can only learn either L or f, but not both simultaneously. Is it possible to learn L and f simultaneously using Y, T, and X?
| I may be missing something, but I think you can :
from torch.optim import Adam  # assumed import; the snippet below uses Adam directly

L = Feedforward(2,10)
f = Feedforward(3,9)
L_opt = Adam(L.parameters(), lr=...)
f_opt = Adam(f.parameters(), lr=...)
for (x,t,y) in dataset:
L.zero_grad()
f.zero_grad()
y_pred = L(t)*f(x)
loss = (y-y_pred)**2
loss.backward()
L_opt.step()
f_opt.step()
You can also fuse them together in one single model :
class ProductModel(t.nn.Module):
    def __init__(self, L, f):
        super(ProductModel, self).__init__()  # required so sub-modules get registered
        self.L = L
        self.f = f
def forward(self, x,t):
return self.L(t)*self.f(x)
and then train this model like you trained g
| https://stackoverflow.com/questions/64555178/ |
dimension extension with pytorch tensors | What is the way to do dimension extension with pytorch tensors?
-before:
torch.Size([3, 3, 3])
tensor([[[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.]],
[[ 9., 10., 11.],
[12., 13., 14.],
[15., 16., 17.]],
[[18., 19., 20.],
[21., 22., 23.],
[24., 25., 26.]]], device='cuda:0', dtype=torch.float64)
-after:
torch.Size([2, 3, 3, 3])
tensor([[[[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.]],
[[ 9., 10., 11.],
[12., 13., 14.],
[15., 16., 17.]],
[[18., 19., 20.],
[21., 22., 23.],
[24., 25., 26.]]],
[[[0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.]],
[[ 9., 10., 11.],
[12., 13., 14.],
[15., 16., 17.]],
[[18., 19., 20.],
[21., 22., 23.],
[24., 25., 26.]]]], device='cuda:0', dtype=torch.float64)
under numpy would work like this:
b = np.broadcast_to(a1[None, :,:,:], (2,3,3,3))
How does this work under pytorch? I want to take advantage of the gpu. Thanks in advance for the help!
| A new dimension can be added with unsqueeze (0 is used below to specify the first dimension, i.e., position 0), followed by repeating the data twice along that dimension (and once, i.e., no repetitions, along the other dimensions).
before = torch.tensor(..., dtype=torch.float64, device='cuda:0')
after = before.unsqueeze(0).repeat(2, 1, 1, 1)
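If you want the closer analogue of np.broadcast_to, a broadcasted view that allocates no new memory, expand does that (just don't write to the result in place, since the repeated rows share storage):
after_view = before.unsqueeze(0).expand(2, 3, 3, 3)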
| https://stackoverflow.com/questions/64565021/ |
Should the parameters of the BatchNorm layer be regularized | When I use pytorch to train my CNN, L2 regularization is used to penalize the parameters in the model. But pytorch's "weight decay" will apply L2 to all the parameters that can be updated. Should the parameters in BatchNorm layers be penalized by L2, too?
| If you are using BatchNorm right after nn.Conv2d or nn.Linear, you can "fold" the learned weight and bias into the conv/linear layer.
Hence, the learned weight and bias have a direct effect on the actual L2 norm of the "effective" weights of your network.
Therefore, if you are using L2 regularization to push your weights towards zero - you must also regularize the batch norm parameters.
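Whichever convention you settle on, optimizer parameter groups give explicit per-group control over the decay. A hedged sketch, matching BatchNorm parameters by module type since name patterns vary between models:
decay, no_decay = [], []
for module in model.modules():
    params = list(module.parameters(recurse=False))
    if isinstance(module, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d)):
        no_decay += params
    else:
        decay += params

optimizer = torch.optim.SGD(
    [{"params": decay, "weight_decay": 1e-4},
     {"params": no_decay, "weight_decay": 1e-4}],  # set to 0.0 to exempt BN instead
    lr=0.1)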
| https://stackoverflow.com/questions/64569419/ |
How to load a dataset starting from list of images Pytorch | I have a service that receives images in a binary format from another service (let's call it service B):
from PIL import Image
img_list = []
img_bin = get_image_from_service_B()
image = Image.open(io.BytesIO(img_bin)) # Convert bytes to image using PIL
When an image is successfully converted thanks to PIL it is also appended to a list of images.
img_list.append(image)
When I've enough images I want to load my list of images using Pytorch as if it was a dataset
if img_list.__len__() == 500:
### Load dataset and do a transform operation on the data
In a previous version of the software the requirement was simply to retrieve the images from a folder, so it was quite simple to load all the images
my_dataset = datasets.ImageFolder("path/to/images/folder/", transform=transform)
dataset_iterator = DataLoader(my_dataset, batch_size=1)
Now my issue is how to perform the transform and load the dataset from a list.
| You can simply write a custom dataset:
class MyDataset(torch.utils.data.Dataset):
def __init__(self, img_list, augmentations):
super(MyDataset, self).__init__()
self.img_list = img_list
self.augmentations = augmentations
def __len__(self):
return len(self.img_list)
def __getitem__(self, idx):
img = self.img_list[idx]
return self.augmentations(img)
You can now plug this custom dataset into DataLoader and you are done.
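For completeness, plugging it in looks just like the old folder-based version:
my_dataset = MyDataset(img_list, transform)
dataset_iterator = DataLoader(my_dataset, batch_size=1)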
| https://stackoverflow.com/questions/64574142/ |
Accuracy and loss does not change with RMSprop optimizer | The dataset is CIFAR10. I've created a VGG-like network:
class FirstModel(nn.Module):
def __init__(self):
super(FirstModel, self).__init__()
self.vgg1 = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Conv2d(16, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.vgg2 = nn.Sequential(
nn.Conv2d(16, 32, 3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 32, 3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.vgg3 = nn.Sequential(
nn.Conv2d(32, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.fc1 = nn.Linear(4 * 4 * 64, 4096)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(4096, 4096)
self.fc3 = nn.Linear(4096, 10)
self.softmax = nn.Softmax()
self.dropout = nn.Dropout(0.5)
def forward(self, x):
x = self.vgg3(self.vgg2(self.vgg1(x)))
x = nn.Flatten()(x)
x = self.relu(self.fc1(x))
x = self.dropout(x)
x = self.relu(self.fc2(x))
x = self.dropout(x)
x = self.softmax(self.fc3(x))
return x
Then I train it and visualize loss and accuracy:
import matplotlib.pyplot as plt
from IPython.display import clear_output
def plot_history(train_history, val_history, title='loss'):
plt.figure()
plt.title('{}'.format(title))
plt.plot(train_history, label='train', zorder=1)
points = np.array(val_history)
steps = list(range(0, len(train_history) + 1, int(len(train_history) / len(val_history))))[1:]
plt.scatter(steps, val_history, marker='*', s=180, c='red', label='val', zorder=2)
plt.xlabel('train steps')
plt.legend(loc='best')
plt.grid()
plt.show()
def train_model(model, optimizer, train_dataloader, test_dataloader):
criterion = nn.CrossEntropyLoss()
train_loss_log = []
train_acc_log = []
val_loss_log = []
val_acc_log = []
for epoch in range(NUM_EPOCH):
model.train()
train_loss = 0.
train_size = 0
train_acc = 0.
for inputs, labels in train_dataloader:
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
y_pred = model(inputs)
loss = criterion(y_pred, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
train_size += y_pred.size(0)
train_loss_log.append(loss.data / y_pred.size(0))
_, pred_classes = torch.max(y_pred, 1)
train_acc += (pred_classes == labels).sum().item()
train_acc_log.append(np.mean((pred_classes == labels).cpu().numpy()))
# validation block
val_loss = 0.
val_size = 0
val_acc = 0.
model.eval()
with torch.no_grad():
for inputs, labels in test_dataloader:
inputs, labels = inputs.to(device), labels.to(device)
y_pred = model(inputs)
loss = criterion(y_pred, labels)
val_loss += loss.item()
val_size += y_pred.size(0)
_, pred_classes = torch.max(y_pred, 1)
val_acc += (pred_classes == labels).sum().item()
val_loss_log.append(val_loss/val_size)
val_acc_log.append(val_acc/val_size)
clear_output()
plot_history(train_loss_log, val_loss_log, 'loss')
plot_history(train_acc_log, val_acc_log, 'accuracy')
print('Train loss:', train_loss / train_size)
print('Train acc:', train_acc / train_size)
print('Val loss:', val_loss / val_size)
print('Val acc:', val_acc / val_size)
Then I train the model:
first_model = FirstModel()
first_model.to(device)
optimizer = optim.RMSprop(first_model.parameters(), lr=0.001, momentum=0.9)
train_model(first_model, optimizer, train_dataloader, test_dataloader)
The loss and accuracy do not change (accuracy stays at the level of 0.1). However, if the optimizer is SGD with momentum, everything works fine (loss and accuracy change). I've already tried changing momentum and lr, but it does not help.
What should be fixed? Would be grateful for any possible advice!
| So first of all, you don't have to use softmax in the "model", as nn.CrossEntropyLoss already applies log-softmax internally; applying softmax before it squashes the logits and stalls learning. (Note that torch.optim.RMSprop does in fact accept a momentum argument, so the optimizer call itself is valid, but RMSprop is far more sensitive to the learning rate than SGD, so you may also need to lower lr.)
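A sketch of the corrected forward, returning raw logits:
def forward(self, x):
    x = self.vgg3(self.vgg2(self.vgg1(x)))
    x = nn.Flatten()(x)
    x = self.relu(self.fc1(x))
    x = self.dropout(x)
    x = self.relu(self.fc2(x))
    x = self.dropout(x)
    return self.fc3(x)  # raw logits; nn.CrossEntropyLoss applies log-softmax itself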
| https://stackoverflow.com/questions/64574222/ |
Sentence embedding using T5 | I would like to use the state-of-the-art LM T5 to get a sentence embedding vector.
I found this repository https://github.com/UKPLab/sentence-transformers
As I know, in BERT I should take the first token as [CLS] token, and it will be the sentence embedding.
In this repository I see the same behaviour on T5 model:
cls_tokens = output_tokens[:, 0, :] # CLS token is first token
Does this behaviour correct? I have taken encoder from T5 and encoded two phrases with it:
"I live in the kindergarden"
"Yes, I live in the kindergarden"
The cosine similarity between them was only "0.2420".
I just need to understand how sentence embeddings work - should I train the network on a similarity task to reach correct results? Or is the base pretrained language model enough?
| In order to obtain the sentence embedding from T5, you need to take the last_hidden_state from the T5 encoder output:
output = model.encoder(input_ids=s, attention_mask=attn, return_dict=True)
pooled_sentence = output.last_hidden_state # shape is [batch_size, seq_len, hidden_size]
# pooled_sentence will represent the embeddings for each word in the sentence
# you need to sum/average the pooled_sentence
pooled_sentence = torch.mean(pooled_sentence, dim=1)
You now have sentence embeddings from T5.
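For reference, a hedged end-to-end sketch (assuming your transformers version ships T5EncoderModel; "t5-base" is the stock checkpoint):
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5EncoderModel.from_pretrained("t5-base")

def embed(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    output = model(input_ids=enc.input_ids,
                   attention_mask=enc.attention_mask,
                   return_dict=True)
    return output.last_hidden_state.mean(dim=1)  # mean-pool over tokens

a = embed("I live in the kindergarden")
b = embed("Yes, I live in the kindergarden")
print(torch.nn.functional.cosine_similarity(a, b))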
| https://stackoverflow.com/questions/64579258/ |
Passing 3 dimensional and one dimensional features to neural network with PyTorch Dataloader | I have examples with size 2x8x8 as tensor and I'm using PyTorch Dataloader for them. But now I want to add an additional 1 dim tensor with size 1 (a single number) as input too.
So I have two input parameters for the neural network, a multidimensional for convolutional layers and an additional one that I will concatenate later.
Probably I could use two dataloader, for every tensor shape one, but than I could not shuffle them.
How can I use a single PyTorch Dataloader for this two different input tensors?
| This is not about the dataloader; this should be done in your dataset. Implement your own dataset by making it inherit from torch.utils.data.Dataset (you need to implement __len__ and __getitem__). Make your __getitem__ method return both your tensors, and you should be fine.
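A minimal sketch of such a dataset, assuming you keep the 2x8x8 tensors, the scalar inputs and the targets in three aligned containers (all names are illustrative):
import torch

class PairedDataset(torch.utils.data.Dataset):
    def __init__(self, features, extras, targets):
        self.features = features  # tensor of shape [N, 2, 8, 8]
        self.extras = extras      # tensor of shape [N, 1], the scalar inputs
        self.targets = targets
    def __len__(self):
        return len(self.features)
    def __getitem__(self, idx):
        return self.features[idx], self.extras[idx], self.targets[idx]
A DataLoader over this dataset will shuffle all three parts together and yield three batched tensors per iteration, which you can then feed to the two network branches.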
You can follow this tutorial if you need.
| https://stackoverflow.com/questions/64580276/ |
Problem running GRU model; missing argument for forward() | I am working on a GRU and when I try to make predictions I get an error indicating that I need to define h for forward(). I have tried several things and ran out of patience after googling and scouring stack overflow for hours.
This is the class:
class GRUNet(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim, n_layers, drop_prob = 0.2):
super(GRUNet, self).__init__()
self.hidden_dim = hidden_dim
self.n_layers = n_layers
self.gru = nn.GRU(input_dim, hidden_dim, n_layers, batch_first=True, dropout=drop_prob)
self.fc = nn.Linear(hidden_dim, output_dim)
self.relu = nn.ReLU()
def forward(self, x, h):
out, h = self.gru(x,h)
out = self.fc(self.relu(out[:,-1]))
return out, h
def init_hidden(self, batch_size):
weight = next(self.parameters()).data
hidden = weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().to(device)
return hidden
and then this is where I load the model and try to make a prediction. Both of these are in the same script.
inputs = np.load('.//Pred//input_list.npy')
print(inputs.ndim, inputs.shape)
Gmodel = GRUNet(24,256,1,2)
Gmodel = torch.load('.//GRU//GRU_1028_48.pkl')
Gmodel.eval()
pred = Gmodel(inputs)
Without any other arguments to Gmodel I get the following:
Traceback (most recent call last):
File ".\grunet.py", line 136, in <module>
pred = Gmodel(inputs)
File "C:\Users\ryang\Anaconda-3\envs\tf-gpu\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'h'
| You need to provide the hidden state as well which is usually initially all zeros or simply None!
That is you either need to explicitly provide one like this :
hidden_state = torch.zeros(size=(num_layers*direction, batch_size, hidden_dim)).to(device)
pred = Gmodel(inputs, hidden_state)
or simply do :
hidden_state = None
pred = Gmodel(inputs, hidden_state)
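(Two further caveats with that last call, unrelated to the missing h: forward returns a tuple, so you would unpack it as pred, h = Gmodel(...), and the numpy inputs need converting to a tensor first, e.g. torch.from_numpy(inputs).float().)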
| https://stackoverflow.com/questions/64584493/ |
Adding class objects to Pytorch Dataloader: batch must contain tensors | I have a custom Pytorch dataset that returns a dictionary containing a class object "queries".
class QueryDataset(torch.utils.data.Dataset):
def __init__(self, queries, values, targets):
super(QueryDataset).__init__()
self.queries = queries
self.values = values
self.targets = targets
def __len__(self):
return self.values.shape[0]
def __getitem__(self, idx):
sample = DeviceDict({'query': self.queries[idx],
"values": self.values[idx],
"targets": self.targets[idx]})
return sample
The problem is that when I put the queries in a data loader I get default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'query.Query'>. Is there a way to have a class object in my data loader? It blows up at next(iterator) in the code below.
train_queries = QueryDataset(train_queries)
train_loader = torch.utils.data.DataLoader(train_queries,
batch_size=10,
shuffle=True,
drop_last=False)
for i in range(epochs):
iterator = iter(train_loader)
for i in range(len(train_loader)):
batch = next(iterator)
out = model(batch)
loss = criterion(out["pred"], batch["targets"])
self.optimizer.zero_grad()
loss.sum().backward()
self.optimizer.step()
| You need to define your own collate_fn in order to do this.
A sloppy approach, just to show how stuff works here, would be something like this:
import torch
class DeviceDict:
def __init__(self, data):
self.data = data
def print_data(self):
print(self.data)
class QueryDataset(torch.utils.data.Dataset):
def __init__(self, queries, values, targets):
super(QueryDataset).__init__()
self.queries = queries
self.values = values
self.targets = targets
def __len__(self):
return 5
def __getitem__(self, idx):
sample = {'query': self.queries[idx],
"values": self.values[idx],
"targets": self.targets[idx]}
return sample
def custom_collate(batch):  # batch is the list of samples collected by the DataLoader
    return DeviceDict(batch)

dt = QueryDataset("q","v","t")
dl = torch.utils.data.DataLoader(dt, batch_size=1, collate_fn=custom_collate)
t = next(iter(dl))
t.print_data()
Basically collate_fn allows you to achieve custom batching or to add support for custom data types, as explained in the link I provided earlier.
As you see it just shows the concept, you need to change it based on your own needs.
| https://stackoverflow.com/questions/64586575/ |
numpy.empty() data type not understood error on Google colab | numpy.empty() raises a "data type not understood" exception with Google Colab's default numpy library. I checked all questions on Stack Overflow but didn't see any related to this problem, since I am using Google Colab. Here is the full code and exception output:
Full code:
def diffMask(img1=None, img2=None, opt=None, dataset=None, args=None):
netG = args[0]
netB = args[1]
netD = args[2]
f = args[3]
res_path = opt.results_Stage3
res_folders = ['temp_masks',
'temp_Stage2',
'temp_ref',
'temp_diff',
'temp_Stage3',
'temp_skel',
'temp_res',
'temp_Stage1',
'temp_src']
for x in res_folders:
if os.path.isdir("{}{}".format(res_path, x)) == False:
os.mkdir("{}{}".format(res_path, x))
save_masks = "{}{}".format(res_path, "temp_masks")
save_Stage2 = "{}{}".format(res_path, "temp_Stage2")
save_ref = "{}{}".format(res_path, "temp_ref")
save_diff = "{}{}".format(res_path, "temp_diff")
save_Stage3 = "{}{}".format(res_path, "temp_Stage3")
save_skel = "{}{}".format(res_path, "temp_skel")
save_res = "{}{}".format(res_path, "temp_res")
save_Stage1 = "{}{}".format(res_path, "temp_Stage1")
save_src = "{}{}".format(res_path, "temp_src")
resize2 = transforms.Resize(size=(128, 128))
src, mask, style_img, target, gt_cloth, skel, cloth = dataset.get_img("{}_0.jpg".format(img1[:-6]),
"{}_1.jpg".format(img1[:-6]))
src, mask, style_img, target, gt_cloth, skel, cloth = src.unsqueeze(0), mask.unsqueeze(0), style_img.unsqueeze(
0), target.unsqueeze(0), gt_cloth.unsqueeze(0), skel.unsqueeze(0), cloth.unsqueeze(0) # , face.unsqueeze(0)
src1, mask1, style_img1, target1, gt_cloth1, skel1, cloth1 = Variable(src.cuda()), Variable(mask.cuda()), Variable(
style_img.cuda()), Variable(target.cuda()), Variable(gt_cloth.cuda()), Variable(skel.cuda()), Variable(
cloth.cuda()) # , Variable(face.cuda())
src, mask, style_img, target, gt_cloth, skel, cloth = dataset.get_img("{}_0.jpg".format(img2[:-6]),
"{}_1.jpg".format(img2[:-6]))
src, mask, style_img, target, gt_cloth, skel, cloth = src.unsqueeze(0), mask.unsqueeze(0), style_img.unsqueeze(
0), target.unsqueeze(0), gt_cloth.unsqueeze(0), skel.unsqueeze(0), cloth.unsqueeze(0) # , face.unsqueeze(0)
src2, mask2, style_img2, target2, gt_cloth2, skel2, cloth2 = Variable(src.cuda()), Variable(mask.cuda()), Variable(
style_img.cuda()), Variable(target.cuda()), Variable(gt_cloth.cuda()), Variable(skel.cuda()), Variable(
cloth.cuda())
gen_targ_Stage1, s_128, s_64, s_32, s_16, s_8, s_4 = netG(skel1, cloth2) # gen_targ11 is structural change cloth
gen_targ_Stage2, s_128, s_64, s_32, s_16, s_8, s_4 = netB(src1, gen_targ_Stage1,
skel1) # gen_targ12 is Stage2 image
# saving structural
pic_Stage2 = (torch.cat([gen_targ_Stage2], dim=0).data + 1) / 2.0
# save_dir = "/home/np9207/PolyGan_res/temp_Stage2/"
save_image(pic_Stage2, '%s/%d_%s_%d.jpg' % (save_Stage2, f, img1[:-6], 0), nrow=1)
msk1 = mask1[0, :, :, :].detach().cpu().permute(1, 2, 0)
plt.imsave("{}/{}_{}_mask.jpg".format(save_masks, f, img1[:-6]), msk1, cmap="gray")
I got the exception on this line:
plt.imsave("{}/{}_{}_mask.jpg".format(save_masks, f, img1[:-6]), msk1, cmap="gray")
Exception Output:
Traceback (most recent call last):
File "test.py", line 208, in test
diffMask(image1, image2, opt, test_loader, args)
File "test.py", line 96, in diffMask
plt.imsave("{}/{}_{}_mask.jpg".format(save_masks, f, img1[:-6]), msk1, cmap="gray")
File "/usr/local/lib/python3.6/dist-packages/matplotlib/pyplot.py", line 2066, in imsave
return matplotlib.image.imsave(fname, arr, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/matplotlib/image.py", line 1550, in imsave
rgba = sm.to_rgba(arr, bytes=True)
File "/usr/local/lib/python3.6/dist-packages/matplotlib/cm.py", line 217, in to_rgba
xx = np.empty(shape=(m, n, 4), dtype=x.dtype)
TypeError: data type not understood
Numpy Version : 1.18.5
Thank you in advance.
| Thanks to @user2357112, @Rika and @hpaulj's great help, I found the cause of the problem: msk1 is still a torch tensor, and matplotlib does not understand its dtype. Convert it to a NumPy array first. Here is the solution for anyone struggling with the same issue:
msk1 = msk1.numpy()
But then you will probably get a ValueError: Floating point image RGB values must be in the 0..1 range. exception, so you have to normalize your array first:
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
return np.array((x - np.min(x)) / (np.max(x) - np.min(x)))
msk1 = normalize(msk1.numpy())
| https://stackoverflow.com/questions/64589649/ |
Activate pytorch network nodes without variable assignments? | In tutorials for PyTorch networks, we typically see an implementation such as:
from torch.nn.functional import hardtanh, sigmoid
import torch.nn as nn
class great_network(nn.Module):
def __init__(self):
super(great_network, self).__init__()
self.layer1 = nn.Conv2d(2, 2, 3)
self.pool_1 = nn.MaxPool2d(1, 1)
self.layer3 = nn.ConvTranspose2d(2, 2, 3)
self.out_layer = nn.Conv2d(1, 1, 3)
def forward(self, x):
x = hardtanh(self.layer1(x))
x = self.pool_1(x)
x = hardtanh(self.layer3(x))
x = sigmoid(self.out_layer(x))
return x
net = great_network()
print(net)
great_network(
(layer1): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(pool_1): MaxPool2d(kernel_size=1, stride=1, padding=0, dilation=1, ceil_mode=False)
(layer3): ConvTranspose2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(out_layer): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1))
)
If I were to dynamically change the size of this network to run multiple experiments, I'd have to replicate the code above with its data-clump bloat of one assignment per layer; I'd rather avoid those repeated assignments.
Something like this might do:
from torch.nn.functional import hardtanh, sigmoid
import torch.nn as nn
import numpy as np
class not_so_great_network(nn.Module):
def __init__(self, n):
super(not_so_great_network, self).__init__()
self.pre_layers = self.generate_pre_layers(n)
self.post_layers = self.generate_post_layers(n)
self.pool = nn.MaxPool2d(1, 1)
self.out = nn.Conv2d(1, 1, 3)
def generate_pre_layers(self, layer_num):
layers = np.empty(layer_num, dtype = object)
for lay in range(0, len(layers)):
layers[lay] = nn.Conv2d(2, 2, 3)
return layers
def generate_post_layers(self, layer_num):
layers = np.empty(layer_num, dtype = object)
for lay in range(0, len(layers)):
layers[lay] = nn.Conv2d(2, 2, 3)
return layers
def forward(self, x):
for pre in self.pre_layers:
x = hardtanh(pre(x))
x = self.pool(x)
for post in self.post_layers:
x = hardtanh(post(x))
x = sigmoid(self.out(x))
return x
However, not all layers are there:
if __name__ == '__main__':
layer_num = 5
net = not_so_great_network(layer_num)
print(net)
not_so_great_network(
(pool): MaxPool2d(kernel_size=1, stride=1, padding=0, dilation=1, ceil_mode=False)
(out): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1))
)
I am not assigning the variables because this could be more powerful if I can generate networks of varying sizes without copying and pasting. How can I emulate the output so that I may activate nodes with activation functions later?
| An alternative would be using ModuleList, which (unlike a NumPy array of modules) registers its contents as submodules, so their parameters are tracked and the layers show up when you print the network:
from torch import nn
from torch.nn.functional import hardtanh, sigmoid
class maybe_great_network(nn.Module):
def __init__(self, n):
super().__init__()
self.pre_layers = self.generate_pre_layers(n)
self.post_layers = self.generate_post_layers(n)
self.pool = nn.MaxPool2d(1, 1)
self.out = nn.Conv2d(1, 1, 3)
def generate_pre_layers(self, layer_num):
return nn.ModuleList([
nn.Conv2d(2, 2, 3)
for l in range(0, layer_num)
])
def generate_post_layers(self, layer_num):
return nn.ModuleList([
nn.Conv2d(2, 2, 3)
for l in range(0, layer_num)
])
def forward(self, x):
for pre in self.pre_layers:
x = hardtanh(pre(x))
x = self.pool(x)
for post in self.post_layers:
x = hardtanh(post(x))
x = sigmoid(self.out(x))
return x
Then:
>>> m = maybe_great_network(3)
>>> m
maybe_great_network(
(pre_layers): ModuleList(
(0): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(2): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
)
(post_layers): ModuleList(
(0): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(1): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
(2): Conv2d(2, 2, kernel_size=(3, 3), stride=(1, 1))
)
(pool): MaxPool2d(kernel_size=1, stride=1, padding=0, dilation=1, ceil_mode=False)
(out): Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1))
)
| https://stackoverflow.com/questions/64592818/ |
Filter out NaN values from a PyTorch N-Dimensional tensor | This question is very similar to filtering np.nan values from pytorch in a -Dimensional tensor. The difference is that I want to apply the same concept to tensors of 2 or higher dimensions.
I have a tensor that looks like this:
import torch
tensor = torch.Tensor(
[[1, 1, 1, 1, 1],
[float('nan'), float('nan'), float('nan'), float('nan'), float('nan')],
[2, 2, 2, 2, 2]]
)
>>> tensor.shape
>>> [3, 5]
I would like to find the most pythonic / PyTorch way to filter out (remove) the rows of the tensor which are nan. By filtering this tensor along the first (0th) axis, I want to obtain a filtered_tensor which looks like this:
>>> print(filtered_tensor)
>>> torch.Tensor(
[[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2]]
)
>>> filtered_tensor.shape
>>> [2, 5]
| Use PyTorch's isnan() together with any() to slice tensor's rows using the obtained boolean mask as follows:
filtered_tensor = tensor[~torch.any(tensor.isnan(),dim=1)]
Note that this will drop any row that has a nan value in it. If you want to drop only rows where all values are nan replace torch.any with torch.all.
For an N-dimensional tensor you could just flatten all the dims apart from the first dim and apply the same procedure as above:
#Flatten:
shape = tensor.shape
tensor_reshaped = tensor.reshape(shape[0],-1)
#Drop all rows containing any nan:
tensor_reshaped = tensor_reshaped[~torch.any(tensor_reshaped.isnan(),dim=1)]
#Reshape back:
tensor = tensor_reshaped.reshape(tensor_reshaped.shape[0],*shape[1:])
| https://stackoverflow.com/questions/64594493/ |
‘DataParallel’ object has no attribute ‘conv1’ | I am trying to visualize cnn network features map for conv1 layer based on the code and architecture below. It’s working properly without DataParallel, but when I am activating model = nn.DataParallel(model) it raised with error: ‘DataParallel’ object has no attribute ‘conv1’. Any suggestion appreciated.
class Model(nn.Module):
def __init__(self, kernel, num_filters, res = ResidualBlock):
super(Model, self).__init__()
self.conv0 = nn.Sequential(
nn.Conv2d(4, num_filters, kernel_size = kernel*3,
padding = 4),
nn.BatchNorm2d(num_filters),
nn.ReLU(inplace=True))
self.conv1 = nn.Sequential(
nn.Conv2d(num_filters, num_filters*2, kernel_size = kernel,
stride=2, padding = 1),
nn.BatchNorm2d(num_filters*2),
nn.ReLU(inplace=True))
self.conv2 = nn.Sequential(
nn.Conv2d(num_filters*2, num_filters*4, kernel_size = kernel, stride=2, padding = 1),
nn.BatchNorm2d(num_filters*4),
nn.ReLU(inplace=True))
self.tsconv0 = nn.Sequential(
nn.ConvTranspose2d(num_filters*4, num_filters*2, kernel_size = kernel, padding = 1),
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
nn.ReLU(inplace=True),
nn.BatchNorm2d(num_filters*2))
self.tsconv1 = nn.Sequential(
nn.ConvTranspose2d(num_filters*2, num_filters, kernel_size = kernel, padding = 1),
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
nn.ReLU(inplace=True),
nn.BatchNorm2d(num_filters))
self.tsconv2 = nn.Sequential(
nn.Conv2d(num_filters, 1, kernel_size = kernel*3, padding = 4, bias=False),
nn.ReLU(inplace=True))
model = Model(kernel, num_filters)
model = nn.DataParallel(model)
The code for feature map visualization:
def get_activation(name):
def hook(model, x_train_batch, y_train_pred):
activation[name] = y_train_pred.detach()
return hook
model.conv3.register_forward_hook(get_activation('conv3'))
x_train_batch[0,0,:,:]
y_train_pred = model(x_train_batch)
act = activation['conv3'].squeeze()
act1 = act.cpu().detach().numpy()
act=act[0,:,:,:]
fig, axarr = plt.subplots(6,16)
k = 0
for idx in range(act.size(0)//16):
for idy in range(act.size(0)//6):
axarr[idx, idy].imshow(act[k])
k += 1
| When you use DataParallel, your model gets wrapped in an extra module level. So instead of doing model.conv3, simply write model.module.conv3.
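For example, a minimal sketch (reusing the Model class, kernel, num_filters and the get_activation hook from the question):
import torch.nn as nn

model = Model(kernel, num_filters)
model = nn.DataParallel(model)

# DataParallel stores the wrapped network under .module
model.module.conv1.register_forward_hook(get_activation('conv1'))

# If the same code must also work without DataParallel:
net = model.module if isinstance(model, nn.DataParallel) else model
net.conv1.register_forward_hook(get_activation('conv1'))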
| https://stackoverflow.com/questions/64595999/ |
Pytorch input tensor size with wrong dimension Conv1D | def train(epoch):
model.train()
train_loss = 0
for batch_idx, (data, _) in enumerate(train_loader):
data = data[None, :, :]
print(data.size()) # something seems to change between here
data = data.to(device)
optimizer.zero_grad()
recon_batch, mu, logvar = model(data) # and here???
loss = loss_function(recon_batch, data, mu, logvar)
loss.backward()
train_loss += loss.item()
optimizer.step()
if batch_idx % 1000 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader),
loss.item() / len(data)))
print('====> Epoch: {} Average loss: {:.4f}'.format(epoch, train_loss / len(train_loader.dataset)))
for epoch in range(1, 4):
train(epoch)
This is very strange: looking at the training loop, it does recognize that the size is [1, 1, 1998], but then something seems to change after it is sent to the device?
torch.Size([1, 1, 1998])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-138-70cca679f91a> in <module>()
27
28 for epoch in range(1, 4):
---> 29 train(epoch)
5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
255 _single(0), self.dilation, self.groups)
256 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 257 self.padding, self.dilation, self.groups)
258
259
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [12, 1, 1], but got 2-dimensional input of size [1, 1998] instead
Also, here is my model (I recognize there are likely a couple of other issues here, but I am asking about the tensor size not registering):
class VAE(nn.Module):
def __init__(self):
super(VAE, self).__init__()
self.conv1 = nn.Conv1d( 1,12, kernel_size=1,stride=5,padding=0)
self.conv1_drop = nn.Dropout2d()
self.pool1 = nn.MaxPool1d(kernel_size=3, stride=2)
self.fc21 = nn.Linear(198, 1)
self.fc22 = nn.Linear(198, 1)
self.fc3 = nn.Linear(1, 198)
self.fc4 = nn.Linear(198, 1998)
def encode(self, x):
h1 = self.conv1(x)
h1 = self.conv1_drop(h1)
h1 = self.pool1(h1)
h1 = F.relu(h1)
h1 = h1.view(1, -1) # 1 is the batch size
return self.fc21(h1), self.fc22(h1)
def reparameterize(self, mu, logvar):
std = torch.exp(0.5*logvar)
eps = torch.rand_like(std)
return mu + eps*std
def decode(self, z):
h3 = F.relu(self.fc3(z))
return torch.sigmoid(self.fc4(h3))
def forward(self, x):
mu, logvar = self.encode(x.view(-1, 1998))
z = self.reparameterize(mu, logvar)
return self.decode(z), mu, logvar
So why doesn't PyTorch keep the dimensions after reshaping, and would that be the correct tensor size if it did?
| I just found my mistake: in forward() I call self.encode(x.view(-1, 1998)), which reshapes the tensor from [1, 1, 1998] to [1, 1998] and drops the channel dimension that Conv1d expects.
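For completeness, a minimal sketch of the fix: since x already has the [1, 1, 1998] shape that Conv1d expects, just pass it through unflattened (the fully-connected layer sizes elsewhere may still need adjusting):
def forward(self, x):
    mu, logvar = self.encode(x)  # keep the (batch, channels, length) shape
    z = self.reparameterize(mu, logvar)
    return self.decode(z), mu, logvar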
| https://stackoverflow.com/questions/64601301/ |
Loading data from Custom Data-Loader in pytorch only if the data specifies a certain condition | I have a CSV file with filename in the first column and a label for the filename in the second column. I also have a third column, which specifies something about the data (whether the data meets a specific condition). It will look something like,
+-----------------------------+
| Filepath 1 Label 1 'n' |
| |
+-----------------------------+
| Filepath 2 Label 2 'n' |
| |
| |
+-----------------------------+
| Filepath 3 Label 3 'n'|
| |
+-----------------------------+
| Filepath 4 Label 4 'y'|
+------------------------------+
I want to be able to load the custom dataset via __getitem__ only when the attribute column == 'y'. However, I get the following error:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
My code is as follows:
class InterDataset(Dataset):
def __init__(self, csv_file, mode, root_dir = None, transform = None, run = None):
self.annotations = pd.read_csv(csv_file, header = None)
self.root_dir = root_dir
self.transform = transform
self.mode = mode
self.run = run
def __len__(self):
return len(self.annotations)
def __getitem__(self, index):
if self.mode == 'train':
if (self.annotations.iloc[index, 2] == 'n'):
img_path = self.annotations.iloc[index,0]
image = cv2.imread(img_path,1)
y_label = self.annotations.iloc[index,1]
if self.transform:
image = self.transform(image)
if (index+1)%300 == 0:
print('Loop {0} done'.format(index))
return [image, y_label]
| You get that error because the dataloader has to return something. Here are three solutions:
There is a library called nonechucks which lets you create dataloaders in which you can skip samples.
Usually you could preprocess/clean your data and kick the unwanted samples out.
You could return some indicator that the sample is unwanted, for example
if "y":
return data, target
else:
return -1
And then you could check in your train loop if the "data" is -1 and skip the iteration.
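A sketch of that idea: with batches larger than 1, the default collate can't mix the -1 sentinels with real samples, so a custom collate_fn that drops them first is more robust (skip_collate is a hypothetical helper, and the variable names are illustrative):
from torch.utils.data import DataLoader
from torch.utils.data.dataloader import default_collate

def skip_collate(batch):
    # drop the -1 sentinels before batching
    batch = [sample for sample in batch if sample != -1]
    return default_collate(batch) if batch else None

loader = DataLoader(dataset, batch_size=32, collate_fn=skip_collate)

for batch in loader:
    if batch is None:  # every sample in this batch was unwanted
        continue
    images, labels = batch
    # ... usual training step ...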
I hope this was helpful :)
| https://stackoverflow.com/questions/64603234/ |
Unable to Load Images to train model on Custom Datasets | I have just stuck with Image Instance Segmentation for a while. I am trying to train the Yolact model for my custom data. Here is some brief information about what I have done so far
I have annotated the image using labelme annotation tool
I have converted the annotation files for each split (train & validation) using labelme2coco -> train.json & test.json
I made changes in the config.py file as needed and expected by yolact
As I was following this repository, I encountered an error, Argument 'bb' has incorrect type, which I solved with the approach stated in this closed issue
After completing the above tasks, I am stuck with the issue stated below.
Scaling parameters by 0.12 to account for a batch size of 1.
Per-GPU batch size is less than the recommended limit for batch norm. Disabling batch norm.
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
/usr/local/lib/python3.6/dist-packages/torch/jit/_recursive.py:165: UserWarning: 'lat_layers' was found in ScriptModule constants, but it is a non-constant submodule. Consider removing it.
" but it is a non-constant {}. Consider removing it.".format(name, hint))
/usr/local/lib/python3.6/dist-packages/torch/jit/_recursive.py:165: UserWarning: 'pred_layers' was found in ScriptModule constants, but it is a non-constant submodule. Consider removing it.
" but it is a non-constant {}. Consider removing it.".format(name, hint))
/usr/local/lib/python3.6/dist-packages/torch/jit/_recursive.py:165: UserWarning: 'downsample_layers' was found in ScriptModule constants, but it is a non-constant submodule. Consider removing it.
" but it is a non-constant {}. Consider removing it.".format(name, hint))
Initializing weights...
Begin training!
Traceback (most recent call last):
File "train.py", line 504, in <module>
train()
File "train.py", line 270, in train
for datum in data_loader:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 989, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
data.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/content/yolact/data/coco.py", line 94, in __getitem__
im, gt, masks, h, w, num_crowds = self.pull_item(index)
File "/content/yolact/data/coco.py", line 141, in pull_item
assert osp.exists(path), 'Image path does not exist: {}'.format(path)
AssertionError: Image path does not exist: data/YolaDataset/train/6.JPG
NOTE: I have moved my test & train data to yolact/data/YolactDataset (also, CWD is /yolact/)
Here is the Log file
Yolact Leather Defect Config.log
Here is the content of config.py
yolact_leather_defect_dataset = Config({
'name': 'Yolact Leather Defect',
# Training images and annotations
'train_images': './data/YolaDataset/train/',
'train_info': './data/train.json',
# Validation images and annotations.
'valid_images': './data/YolaDataset/test/',
'valid_info': './data/test.json',
# Whether or not to load GT. If this is False, eval.py quantitative evaluation won't work.
'has_gt': True,
# A list of names for each of you classes.
'class_names': ("FC", "LF", "OC", "PM"),
# COCO class ids aren't sequential, so this is a bandage fix. If your ids aren't sequential,
# provide a map from category_id -> index in class_names + 1 (the +1 is there because it's 1-indexed).
# If not specified, this just assumes category ids start at 1 and increase sequentially.
'label_map': {
1:1, 2:2, 3:3, 4:4,
}
})
yolact_leather_defect_config = coco_base_config.copy({
'name': 'Yolact Leather Defect Config',
# Dataset stuff
'dataset': yolact_leather_defect_dataset,
'num_classes': len(yolact_leather_defect_dataset.class_names) + 1,
# Image Size
'max_size': 550,
# Training params
'lr_steps': (280000, 600000, 700000, 750000),
'max_iter': 800000,
# Backbone Settings
'backbone': resnet101_backbone.copy({
'selected_layers': list(range(1, 4)),
'use_pixel_scales': True,
'preapply_sqrt': False,
'use_square_anchors': True, # This is for backward compatability with a bug
'pred_aspect_ratios': [ [[1, 1/2, 2]] ]*5,
'pred_scales': [[24], [48], [96], [192], [384]],
}),
# FPN Settings
'fpn': fpn_base.copy({
'use_conv_downsample': True,
'num_downsample': 2,
}),
# Mask Settings
'mask_type': mask_type.lincomb,
'mask_alpha': 6.125,
'mask_proto_src': 0,
'mask_proto_net': [(256, 3, {'padding': 1})] * 3 + [(None, -2, {}), (256, 3, {'padding': 1})] + [(32, 1, {})],
'mask_proto_normalize_emulate_roi_pooling': True,
# Other stuff
'share_prediction_module': True,
'extra_head_net': [(256, 3, {'padding': 1})],
'positive_iou_threshold': 0.5,
'negative_iou_threshold': 0.4,
'crowd_iou_threshold': 0.7,
'use_semantic_segmentation_loss': True,
})
File Structure
https://drive.google.com/file/d/1GDaDNBayMsADnxKmbIn5OL9eU17SI6eV/view?usp=sharing
I have tried to resolve the issue as best I can myself. Any help would be appreciated.
Thank You!
[I originally posted this issue on the yolact repo.]
| You misspelled your folder name :) YolaDataset needs to be renamed to YolactDataset in these config paths:
# Training images and annotations
'train_images': './data/YolaDataset/train/',
'train_info': './data/train.json',
# Validation images and annotations.
'valid_images': './data/YolaDataset/test/',
'valid_info': './data/test.json',
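With the typo fixed (to match the yolact/data/YolactDataset folder in your file structure), these lines should read:
# Training images and annotations
'train_images': './data/YolactDataset/train/',
'train_info': './data/train.json',
# Validation images and annotations.
'valid_images': './data/YolactDataset/test/',
'valid_info': './data/test.json',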
| https://stackoverflow.com/questions/64605776/ |
BERT-based NER model giving inconsistent prediction when deserialized | I am trying to train an NER model using the HuggingFace transformers library on Colab cloud GPUs, pickle it and load the model on my own CPU to make predictions.
Code
The model is the following:
from transformers import BertForTokenClassification
model = BertForTokenClassification.from_pretrained(
"bert-base-cased",
num_labels=NUM_LABELS,
output_attentions = False,
output_hidden_states = False
)
I am using this snippet to save the model on Colab
import torch
torch.save(model.state_dict(), FILENAME)
Then load it on my local CPU using
# Initiating an instance of the model type
model_reload = BertForTokenClassification.from_pretrained(
"bert-base-cased",
num_labels=len(tag2idx),
output_attentions = False,
output_hidden_states = False
)
# Loading the model
model_reload.load_state_dict(torch.load(FILENAME, map_location='cpu'))
model_reload.eval()
The code snippet used to tokenize the text and make actual predictions is the same both on the Colab GPU notebook instance and my CPU notebook instance.
Expected Behavior
The GPU-trained model behaves correctly and classifies the following tokens perfectly:
O [CLS]
O Good
O morning
O ,
O my
O name
O is
B-per John
I-per Kennedy
O and
O I
O am
O working
O at
B-org Apple
O in
O the
O headquarters
O of
B-geo Cupertino
O [SEP]
Actual Behavior
When loading the model and use it to make predictions on my CPU, the predictions are totally wrong:
I-eve [CLS]
I-eve Good
I-eve morning
I-eve ,
I-eve my
I-eve name
I-eve is
I-geo John
B-eve Kennedy
I-eve and
I-eve I
I-eve am
I-eve working
I-eve at
I-gpe Apple
I-eve in
I-eve the
I-eve headquarters
I-eve of
B-org Cupertino
I-eve [SEP]
Does anyone have ideas why it doesn't work? Did I miss something?
| I fixed it, there were two problems:
The index-label mapping for the tags was wrong; for some reason the list() function produced a different ordering on Colab's GPU instance than on my CPU (??)
The snippet used to save the model was not correct: for models based on the huggingface-transformers library, you can't just save the state_dict() and load it later as I did; you need to use the save_pretrained() method of your model class, and load it later using from_pretrained().
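A minimal sketch of that second fix (SAVE_DIR is a placeholder directory path):
# On Colab: save both the config (including num_labels) and the weights
model.save_pretrained(SAVE_DIR)

# On the local CPU machine: rebuild the model from that directory
from transformers import BertForTokenClassification
model_reload = BertForTokenClassification.from_pretrained(SAVE_DIR)
model_reload.eval()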
| https://stackoverflow.com/questions/64610841/ |
Torch tensor with i-th element as product of all previous | I have a torch tensor like this one:
A = torch.tensor([1,2,3,4,5,6])
is there a simple way in torch to generate a tensor like this?
[1, 1*2, 1*2*3, 1*2*3*4, 1*2*3*4*5, 1*2*3*4*5*6]
Thanks a lot
| You can use torch.cumprod:
import torch
t = torch.tensor([1, 2, 3, 4, 5, 6])
torch.cumprod(t, dim=0)
which outputs:
tensor([1, 2, 6, 24, 120, 720])
| https://stackoverflow.com/questions/64612004/ |
multiply many matrices and many vectors pytorch | I am trying to multiply the following:
A batch of matrices N x M x D
A batch of vectors N x D x 1
To get a result: N x M x 1
as if I were doing N dot products of M x D with D x 1.
I can't seem to find the correct function in PyTorch.
torch.bmm as far as I can tell only works for a batch of vectors and a single matrix. If I have to use torch.einsum then so be it, but I'd rather not!
| It's pretty straightforward and intuitive with einsum:
torch.einsum('ijk, ikl->ijl', mats, vecs)
But your operation is just:
mats @ vecs
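For example, with made-up sizes N=8, M=4, D=6:
import torch

mats = torch.randn(8, 4, 6)  # N x M x D
vecs = torch.randn(8, 6, 1)  # N x D x 1

out = mats @ vecs            # batched matmul, same as torch.bmm(mats, vecs)
print(out.shape)             # torch.Size([8, 4, 1])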
| https://stackoverflow.com/questions/64613180/ |
Pytorch nn.CrossEntropyLoss giving, ValueError: Expected target size (x, y), got torch.Size([x, z]) for 3d tensor | I am following the example here, where the documentation says:
Input: (N, C) where C = number of classes
Target: (N) where each value is 0 ≤ targets[i] ≤ C−1
And this is the case with the example given for a 2d tensor
loss = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)
output = loss(input, target)
output.backward()
But for a 2d tensor, I am getting an error
import torch.nn as nn
import torch
loss = nn.CrossEntropyLoss(ignore_index=0)
inputs = torch.rand(32, 128, 3)
targets = torch.ones(32, 128)
loss(inputs, targets.long())
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-61e7f03039a6> in <module>
7 targets = torch.ones(32, 128)
8
----> 9 loss(inputs, targets.long())
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
959
960 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 961 return F.cross_entropy(input, target, weight=self.weight,
962 ignore_index=self.ignore_index, reduction=self.reduction)
963
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2466 if size_average is not None or reduce is not None:
2467 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2469
2470
/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2271 out_size = (n,) + input.size()[2:]
2272 if target.size()[1:] != input.size()[2:]:
-> 2273 raise ValueError('Expected target size {}, got {}'.format(
2274 out_size, target.size()))
2275 input = input.contiguous()
ValueError: Expected target size (32, 3), got torch.Size([32, 128])
As far as I can tell, I am doing everything right regarding setting up the dimensions. The error message seems to think that I am giving a 2d vector, but I gave it a 3d vector, the 128 size dimension is missing.
Is there something that I didn't set up correctly for this loss function?
| This is what the documentation says about K-dimensional loss:
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1 , where K is the number of dimensions, and a target of appropriate shape (see below).
The correct input should have a (32, 3, 128) shape, if you have 3 classes:
import torch.nn as nn
import torch
loss = nn.CrossEntropyLoss(ignore_index=0)
inputs = torch.rand(32, 3, 128)
targets = torch.ones(32, 128)
loss(inputs, targets.long())
Or the target should have a (32, 3) shape, if you have 128 classes:
inputs = torch.rand(32, 128, 3)
targets = torch.ones(32, 3)
| https://stackoverflow.com/questions/64615990/ |
not able to install easyocr (pytorch error) | I am trying to install easyocr for python using pip
pip install easyocr
but it's not installing
it's giving me this in the terminal:
ERROR: torchvision 0.5.0 has requirement torch==1.4.0, but you'll have torch 0.1.2.post2 which is incompatible.
Installing collected packages: torch, torchvision, python-bidi, scipy, cycler, kiwisolver, python-dateutil, certifi, pyparsing, matplotlib, PyWavelets, imageio, decorator, networkx, tifffile, scikit-image, easyocr
Running setup.py install for torch ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\murari\appdata\local\programs\python\python37\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Murari\\AppData\\Local\\Temp\\pip-install-jl3470hv\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\Murari\\AppData\\Local\\Temp\\pip-install-jl3470hv\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Murari\AppData\Local\Temp\pip-record-8j183bzh\install-record.txt' --single-version-externally-managed --compile
cwd: C:\Users\Murari\AppData\Local\Temp\pip-install-jl3470hv\torch\
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Murari\AppData\Local\Temp\pip-install-jl3470hv\torch\setup.py", line 265, in <module>
description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
File "c:\users\murari\appdata\local\programs\python\python37\lib\site-packages\setuptools\__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "c:\users\murari\appdata\local\programs\python\python37\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\users\murari\appdata\local\programs\python\python37\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "c:\users\murari\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\Murari\AppData\Local\Temp\pip-install-jl3470hv\torch\setup.py", line 99, in run
self.run_command('build_deps')
File "c:\users\murari\appdata\local\programs\python\python37\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\users\murari\appdata\local\programs\python\python37\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\Murari\AppData\Local\Temp\pip-install-jl3470hv\torch\setup.py", line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\murari\appdata\local\programs\python\python37\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Murari\\AppData\\Local\\Temp\\pip-install-jl3470hv\\torch\\setup.py'"'"'; __file__='"'"'C:\\Users\\Murari\\AppData\\Local\\Temp\\pip-install-jl3470hv\\torch\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Murari\AppData\Local\Temp\pip-record-8j183bzh\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
WARNING: You are using pip version 19.2.3, however version 20.2.4 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
| I had the same issue and finally fixed it by creating a separate conda environment. Here is the code:
!conda create -n easyocr python=3.8
!conda activate easyocr
pip install easyocr
If this resolves the issue, please vote for the answer.
| https://stackoverflow.com/questions/64618079/ |
KeyError: 'true ' error when I try to convert labels to 0 and 1 | I am using AraBERT (a pretrained BERT for the Arabic language) for binary classification labeled as true and false. I am trying to change the labels from "true" and "false" to 0 and 1; I used the code:
import pandas as pd
Data=pd.read_csv("/content/500-instances.csv")
DATA_COLUMN = 'sent'
LABEL_COLUMN = 'label'
Data.columns = [DATA_COLUMN, LABEL_COLUMN]
label_map = {
'fake' : 0,
'true' : 1
}
Data[DATA_COLUMN] = Data[DATA_COLUMN].apply(lambda x: preprocess(x, do_farasa_tokenization=False, use_farasapy = False))
Data[LABEL_COLUMN] = Data[LABEL_COLUMN].apply(lambda x: label_map[x])
I get an error, KeyError: 'true ', on the last line.
Do you have any solution?!
Thanks in advance
There is a trailing space in 'true '; that's why there is no match in label_map. Try:
Data[LABEL_COLUMN] = Data[LABEL_COLUMN].apply(lambda x: label_map[x.strip()])
EDIT
If you are not sure what lies in Data[LABEL_COLUMN], I would suggest catching unknown values with a default output value using label_map.get(x.strip(), <DEFAULT_VALUE>).
| https://stackoverflow.com/questions/64620423/ |
AdamW and Adam with weight decay | Is there any difference between torch.optim.Adam(weight_decay=0.01) and torch.optim.AdamW(weight_decay=0.01)?
Link to the docs: torch.optim
| Yes, Adam and AdamW weight decay are different.
Loshchilov and Hutter pointed out in their paper (Decoupled Weight Decay Regularization) that the way weight decay is implemented in Adam in every library seems to be wrong, and proposed a simple way (which they call AdamW) to fix it.
In Adam, the weight decay is usually implemented by adding wd*w (wd is weight decay here) to the gradients (Ist case), rather than actually subtracting from weights (IInd case).
# Ist: L2 regularization, i.e. the penalty is added to the loss
# (this is effectively what Adam's weight_decay does, via the gradients)
final_loss = loss + wd * all_weights.pow(2).sum() / 2
# IInd: (decoupled) weight decay, subtracted directly from the weights
# (shown here for SGD; this is what AdamW does)
w = w - lr * w.grad - lr * wd * w
These two methods are the same for vanilla SGD, but as soon as we add momentum, or use a more sophisticated optimizer like Adam, L2 regularization (first equation) and weight decay (second equation) become different.
AdamW follows the second equation for weight decay.
In Adam
weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
In AdamW
weight_decay (float, optional) – weight decay coefficient (default: 1e-2)
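In code, the two optimizers are constructed the same way; only the decay semantics differ (a sketch, assuming an existing model):
import torch

# L2 regularization: the decay term is folded into the gradients
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-2)

# Decoupled weight decay: the decay is applied directly to the weights
opt_adamw = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)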
Read more on the fastai blog.
| https://stackoverflow.com/questions/64621585/ |
Why doesn't converting the dtype of a Tensor fix "RuntimeError: expected scalar type Double but found Float"? | The important part of my code looks like this:
def forward(self, x):
x = T.tensor(x).to(self.device)
x = x.type(T.DoubleTensor)
x = self.conv1(x)
...
Yet I still get the error expected scalar type Double but found Float on the last line of that snippet. The line x = x.type(T.DoubleTensor) should fix that, right? I've also tried x = x.double() and x = T.tensor(x, dtype = T.double).to(self.device) for the earlier line, and I still get the error. I'm at a loss, what am I doing wrong?
| PyTorch expects the input to a layer to have the same device and data type (dtype) as the parameters of the layer. For most layers, including conv layers, the default data type is torch.float32, i.e. a FloatTensor.
To fix your issue you can cast x to be the same type as the weight or bias parameters of the self.conv1 layer (assuming this is a nn.Conv*d layer).
def forward(self, x):
x = T.tensor(x, device=self.device, dtype=self.conv1.weight.dtype)
x = self.conv1(x)
...
Most likely self.conv1.weight.dtype will just be torch.float32. Unless you've explicitly changed your model parameter types using something like model.to(dtype=torch.float64) then you could equivalently just use
def forward(self, x):
x = T.tensor(x, device=self.device, dtype=torch.float32)
x = self.conv1(x)
...
| https://stackoverflow.com/questions/64622234/ |
How to know input/output layer names and sizes for Pytorch model? | I have Pytorch model.pth using Detectron2's COCO Object Detection Baselines pretrained model R50-FPN.
I am trying to convert the .pth model to onnx.
My code is as follows.
import io
import numpy as np
from torch import nn
import torch.utils.model_zoo as model_zoo
import torch.onnx
from torchvision import models
model = torch.load('output_object_detection/model_final.pth')
x = torch.randn(1, 3, 1080, 1920, requires_grad=True)#0, in_cha, in_h, in_w
torch_out = torch_model(x)
print(model)
torch.onnx.export(torch_model, # model being run
x, # model input (or a tuple for multiple inputs)
"super_resolution.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['cls_score','bbox_pred'], # the model's output names
dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes
'output' : {0 : 'batch_size'}})
Is this the correct way to convert to an ONNX model?
If it is the right way, how do I find out the input_names and output_names?
I used Netron to inspect the input and output, but the graph doesn't show the input/output layers.
| Try this:
import io
import numpy as np
from torch import nn
import torch.utils.model_zoo as model_zoo
import torch.onnx
from torchvision import models
model = torch.load('model_final.pth')
model.eval()
print('Finished loading model!')
print(model)
device = torch.device("cpu" if args.cpu else "cuda")
model = model.to(device)
# ------------------------ export -----------------------------
output_onnx = 'super_resolution.onnx'
print("==> Exporting model to ONNX format at '{}'".format(output_onnx))
input_names = ["input0"]
output_names = ["output0","output1"]
inputs = torch.randn(1, 3, 1080, 1920).to(device)
torch.onnx.export(model, inputs, output_onnx, export_params=True, verbose=False,
                  input_names=input_names, output_names=output_names)
| https://stackoverflow.com/questions/64623277/ |
minimize the cosine similarity of two tensors and output one scalar. Pytorch | I use PyTorch's cosine similarity function as follows. I have two feature vectors and my goal is to make them dissimilar to each other. So, I thought I could minimize their cosine similarity. I have some doubts about the way I have coded this. I appreciate your suggestions on the following questions.
I don't know why there are some negative values in val1.
I have done three steps to convert val1 to a scalar. Am I doing it the right way? Is there any other way?
To minimize the similarity, I have used 1/val1. Is this a standard way to do it? Would it be correct to use 1 - val1 instead?
def loss_func(feat1, feat2):
cosine_loss = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
val1 = cosine_loss(feat1, feat2).tolist()
# 1. calculate the absolute values of each element,
# 2. sum all values together,
# 3. divide it by the number of values
val1 = 1/(sum(list(map(abs, val1)))/int(len(val1)))
val1 = torch.tensor(val1, device='cuda', requires_grad=True)
return val1
| Do not convert your loss function to a list. This breaks autograd so you won't be able to optimize your model parameters using pytorch.
(As for your first question: negative values in val1 are expected, since cosine similarity ranges over [-1, 1].) A loss function is already something to be minimized. If you want to minimize the similarity then you probably just want to return the average cosine similarity. If instead you want to minimize the magnitude of the similarity (i.e. encourage the features to be orthogonal) then you can return the average absolute value of the cosine similarity.
It seems like what you've implemented will attempt to maximize the similarity. But that doesn't appear to be in line with what you've stated. Also, to turn a minimization problem into an equivalent maximization problem you would usually just negate the measure. There's nothing wrong with a negative loss value. Taking the reciprocal of a strictly positive measure does convert it from minimization to a maximization problem, but also changes the behavior of the measure and probably isn't what you want.
Depending on what you actually want, one of these is likely to meet your needs
import torch.nn.functional as F
def loss_func(feat1, feat2):
# minimize average magnitude of cosine similarity
return F.cosine_similarity(feat1, feat2).abs().mean()
def loss_func(feat1, feat2):
# minimize average cosine similarity
return F.cosine_similarity(feat1, feat2).mean()
def loss_func(feat1, feat2):
# maximize average magnitude of cosine similarity
return -F.cosine_similarity(feat1, feat2).abs().mean()
def loss_func(feat1, feat2):
# maximize average cosine similarity
return -F.cosine_similarity(feat1, feat2).mean()
| https://stackoverflow.com/questions/64627117/ |
Pass weights into CrossEntropyLoss in correct order | I'm trying to use weight in torch.nn.CrossEntropyLoss,
but I'm not sure in which order I should put the weights,
e.g.
weight = torch.tensor([1.0, 52337/34649, 52337/11066]).to(device)
criterion = nn.CrossEntropyLoss(weight=weight)
My class0 has 52337 examples and is labeled as 0 in the target values, so I take 1.0
My class1 has 34649 examples and is labeled as 2 in the target values, so I take 52337/34649
My class2 has 11066 examples and is labeled as 1 in the target values, so I take 52337/11066
but I'm not sure in which order to put them in the weight array.
My question is
Is there a way to tell CrossEntropyLoss which weight belongs to class0, class1 and class2?
Or will CrossEntropyLoss figure out the mapping by itself?
| after you define your criterion as above you will have to call it like:
loss = criterion(inputs, targets)
The targets are encoded 0 .. C-1, where C is the number of classes (0, 1 or 2 in your case). So the order and number of your weights must correspond to those target labels: weight[i] is applied to samples whose target label is i.
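So, given the class sizes you describe, a sketch:
import torch
import torch.nn as nn

# weight[i] is applied to samples whose target label is i
weight = torch.tensor([
    1.0,            # label 0: class0 (52337 examples)
    52337 / 11066,  # label 1: class2 (11066 examples)
    52337 / 34649,  # label 2: class1 (34649 examples)
]).to(device)
criterion = nn.CrossEntropyLoss(weight=weight)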
| https://stackoverflow.com/questions/64627576/ |
Pytorch transform.ToTensor() changes image | I want to convert images to tensors using torchvision.transforms.ToTensor(). After processing, I printed the image, but it looked very strange. Here is my code:
trans = transforms.Compose([
transforms.ToTensor()])
demo = Image.open(img)
demo_img = trans(demo)
demo_array = demo_img.numpy()*255
print(Image.fromarray(demo_array.astype(np.uint8)))
The original image is this
But after processing, it is shown like this
Did I write something wrong or miss something?
| It seems that the problem is with the channel axis.
If you look at torchvision.transforms docs, especially on ToTensor()
Converts a PIL Image or numpy.ndarray (H x W x C) in the range
[0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]
So once you perform the transformation and return to numpy.array your shape is: (C, H, W) and you should change the positions, you can do the following:
demo_array = np.moveaxis(demo_img.numpy()*255, 0, -1)
This will transform the array to shape (H, W, C) and then when you return to PIL and show it will be the same image.
So in total:
import numpy as np
from PIL import Image
from torchvision import transforms
trans = transforms.Compose([transforms.ToTensor()])
demo = Image.open(img)
demo_img = trans(demo)
demo_array = np.moveaxis(demo_img.numpy()*255, 0, -1)
print(Image.fromarray(demo_array.astype(np.uint8)))
| https://stackoverflow.com/questions/64629702/ |
How can I add new layers on pre-trained model with PyTorch? (Keras example given.) | I am working with Keras and trying to analyze the effects on accuracy that models which are built with some layers with meaningful weights, and some layers with random initializations.
Keras:
I load VGG19 pre-trained model with include_top = False parameter on load method.
model = keras.applications.VGG19(include_top=False, weights="imagenet", input_shape=(img_width, img_height, 3))
PyTorch:
I load VGG19 pre-trained model until the same layer with the previous model which loaded with Keras.
model = torch.hub.load('pytorch/vision:v0.6.0', 'vgg19', pretrained=True)
new_base = (list(model.children())[:-2])[0]
After loaded models following images shows summary of them. (Pytorch, Keras)
So far there is no problem. After that, I want to add a Flatten layer and a Fully connected layer on these pre-trained models. I did it with Keras but I couldn't with PyTorch.
The output of new_model.summary() is that:
My question is, How can I do add a new layer in PyTorch?
| If all you want to do is to replace the classifier section, you can simply do so. That is :
model = torch.hub.load('pytorch/vision:v0.6.0', 'vgg19', pretrained=True)
model.classifier = nn.Linear(model.classifier[0].in_features, 4096)
print(model)
will give you:
Before:
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(17): ReLU(inplace=True)
(18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(24): ReLU(inplace=True)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): ReLU(inplace=True)
(27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(31): ReLU(inplace=True)
(32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(33): ReLU(inplace=True)
(34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(35): ReLU(inplace=True)
(36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Sequential(
(0): Linear(in_features=25088, out_features=4096, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=4096, out_features=4096, bias=True)
(4): ReLU(inplace=True)
(5): Dropout(p=0.5, inplace=False)
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
)
After:
VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU(inplace=True)
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(13): ReLU(inplace=True)
(14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(15): ReLU(inplace=True)
(16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(17): ReLU(inplace=True)
(18): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(19): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(20): ReLU(inplace=True)
(21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(22): ReLU(inplace=True)
(23): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(24): ReLU(inplace=True)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): ReLU(inplace=True)
(27): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(29): ReLU(inplace=True)
(30): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(31): ReLU(inplace=True)
(32): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(33): ReLU(inplace=True)
(34): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(35): ReLU(inplace=True)
(36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
(classifier): Linear(in_features=25088, out_features=4096, bias=True)
)
Also note that when you want to alter an existing architecture, you have two phases. You first get the modules you want (that's what you have done there), and then you must wrap them in an nn.Sequential, because your list does not implement forward() and thus you can't really feed it anything; it's just a collection of modules.
So you need to do something like this in general (as an example):
features = nn.ModuleList(your_model.children())[:-1]
model = nn.Sequential(*features)
# carry on with what other changes you want to perform on your model
Note that if you want to create a new model and you intend on using it like:
output = model(imgs)
You need to wrap your features and new layers in a second sequential. That is, do something like this:
features = nn.ModuleList(your_model.children())[:-1]
model_features = nn.Sequential(*features)
some_more_layers = nn.Sequential(Layer1,
Layer2,
... )
model = nn.Sequential(model_features,
some_more_layers)
#
output = model(imgs)
Otherwise you would have to do something like:
features_output = model.features(imgs)
output = model.classifier(features_output)
| https://stackoverflow.com/questions/64631086/ |
What is the difference in RobertaTokenizer() and from_pretrained() way of initialising RobertaTokenizer? | I am a newbie to huggingface transformers and am facing the issue below while training a RobertaForMaskedLM LM from scratch:
First, I have trained and saved a ByteLevelBPETokenizer as follows:
tokenizer = ByteLevelBPETokenizer()
print('Saving tokenizer at:', training_file)
tokenizer.train(files=training_file, vocab_size=VOCAB_SIZE, min_frequency=2,
special_tokens=["<s>","<pad>","</s>","<unk>","<mask>"])
tokenizer.save_model(tokenizer_mdl_dir)
Then, trained RobertaForMaskedLM using this tokenizer by creating a RobertaTokenizer as follows:
roberta_tokenizer = RobertaTokenizer(tokenizer_mdl + "/vocab.json", tokenizer_mdl + "/merges.txt")
But now, when I try to test the trained LM using a fill-mask pipeline,
fill_mask_pipeline = pipeline("fill-mask", model=roberta_model, tokenizer=roberta_tokenizer)
I got the below error:
PipelineException: No mask_token () found on the input
So, I realized, the tokenizer that I have loaded, is tokenizing the <mask> token as well. But I couldn't understand why it is doing so. Please help me understand this.
After trying several things, I loaded the tokenizer differently,
roberta_tokenizer = RobertaTokenizer.from_pretrained(tokenizer_mdl)
And, now the fill_mask_pipeline runs without errors. So, what is the difference between loading a tokenizer using RobertaTokenizer() and using the .from_pretrained() method?
| When you compare the property unique_no_split_tokens, you will see that this is initialized for the from_pretrained tokenizer but not for the other.
#from_pretrained
t1.unique_no_split_tokens
['</s>', '<mask>', '<pad>', '<s>', '<unk>']
#__init__
t2.unique_no_split_tokens
[]
This property is filled by _add_tokens() that is called by from_pretrained but not by __init__. I'm actually not sure if this is a bug or a feature. from_pretrained is the recommended method to initialize a tokenizer from a pretrained tokenizer and should therefore be used.
| https://stackoverflow.com/questions/64631665/ |
Best Loss Function for Multi-Class Multi-Target Classification Problem | I have a classification problem and I don't know how to categorize this classification problem. As per my understanding,
A Multiclass classification problem is where you have multiple mutually exclusive classes and each data point in the dataset can only be labelled by one class. For example, in an Image Classification task for fruits, a fruit data point labelled as an apple cannot be an orange and an orange cannot be a banana and so on. Each data point, in this case can only be any one of the fruits of the fruits class and so is labelled accordingly.
Whereas ...
A Multilabel classification is a problem where you have multiple sets of mutually exclusive classes of which the data point can be labelled simultaneously. For example, in an Image Classification task for Cars, a car data point labelled as a sedan cannot be a hatchback and a hatchback cannot be a SUV and so on for the type of car. At the same time, the same car data point can be labelled one from VW, Ford, Mercedes, etc. as the car manufacturer. So in this case, the car data point is labeled from two different sets of mutually exclusive classes.
Please correct my understanding if I am thinking about this the wrong way.
Now to my problem: a classification problem with multiple classes, let's say A, B, C, D and E. Here each data point can have one or more classes from the set, as shown below on the left:
|-------------|----------| |-------------|-----------------|
| X | y | | X | One-Hot-Y |
|-------------|----------| |-------------|-----------------|
| DP1 | A, B | | DP1 | [1, 1, 0, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP2 | C | | DP2 | [0, 0, 1, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP3 | B, E | | DP3 | [0, 1, 0, 0, 1] |
|-------------|----------| |-------------|-----------------|
| DP4 | A, C | | DP4 | [1, 0, 1, 0, 0] |
|-------------|----------| |-------------|-----------------|
| DP5 | D | | DP5 | [0, 0, 0, 1, 0] |
|-------------|----------| |-------------|-----------------|
I one-hot encoded the labels for training, as shown above on the right. My questions are:
What Loss function (preferably in PyTorch) can I use for training the model to optimize for the One-Hot encoded output
What do we call such a classification problem? Multi-label or Multi-class?
Thank you for your answers!
|
What Loss function (preferably in PyTorch) can I use for training the
model to optimize for the One-Hot encoded output
You can use torch.nn.BCEWithLogitsLoss (or MultiLabelSoftMarginLoss, as they are equivalent) and see how this one works out. This is the standard approach; another possibility could be MultilabelMarginLoss.
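A minimal sketch with your one-hot targets (raw logits go in, so no sigmoid on the model output):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(2, 5)                      # raw model outputs for 2 data points
targets = torch.tensor([[1., 1., 0., 0., 0.],   # DP1: A, B
                        [0., 1., 0., 0., 1.]])  # DP3: B, E
loss = criterion(logits, targets)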
What do we call such a classification problem? Multi-label or Multi-class?
It is multilabel (as multiple labels can be present simultaneously). In one-hot encoding:
[1, 1, 0, 0, 0], [0, 1, 0, 0, 1] - multilabel
[0, 0, 1, 0, 0] - multiclass
[1], [0] - binary (special case of multiclass)
multiclass cannot have more than one 1, as the labels are mutually exclusive.
| https://stackoverflow.com/questions/64634902/ |
Pytorch RuntimeError: expected scalar type Float but found Byte | I am working on the classic digits example. I want to create my first neural network that predicts the labels of digit images {0,1,2,3,4,5,6,7,8,9}. The first column of train.txt has the labels and all the other columns are the features of each label. I have defined a class to import my data:
class DigitDataset(Dataset):
"""Digit dataset."""
def __init__(self, file_path, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.data = pd.read_csv(file_path, header = None, sep =" ")
self.transform = transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
labels = self.data.iloc[idx,0]
images = self.data.iloc[idx,1:-1].values.astype(np.uint8).reshape((1,16,16))
if self.transform is not None:
sample = self.transform(sample)
return images, labels
And then I am running these commands to split my dataset into batches, to define a model and a loss:
train_dataset = DigitDataset("train.txt")
train_loader = DataLoader(train_dataset, batch_size=64,
shuffle=True, num_workers=4)
# Model creation with neural net Sequential model
model=nn.Sequential(nn.Linear(256, 128), # 1 layer:- 256 input 128 o/p
nn.ReLU(), # Defining Regular linear unit as activation
nn.Linear(128,64), # 2 Layer:- 128 Input and 64 O/p
nn.Tanh(), # Defining Regular linear unit as activation
nn.Linear(64,10), # 3 Layer:- 64 Input and 10 O/P as (0-9)
nn.LogSoftmax(dim=1) # Defining the log softmax to find the probabilities for the last output unit
)
# defining the negative log-likelihood loss for calculating loss
criterion = nn.NLLLoss()
images, labels = next(iter(train_loader))
images = images.view(images.shape[0], -1)
logps = model(images) #log probabilities
loss = criterion(logps, labels) #calculate the NLL-loss
And I take the error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-7f4160c1f086> in <module>
47 images = images.view(images.shape[0], -1)
48
---> 49 logps = model(images) #log probabilities
50 loss = criterion(logps, labels) #calculate the NLL-loss
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self,
*input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self,
*input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1688 if input.dim() == 2 and bias is not None:
1689 # fused op is marginally faster
-> 1690 ret = torch.addmm(bias, input, weight.t())
1691 else:
1692 output = input.matmul(weight.t())
RuntimeError: expected scalar type Float but found Byte
Do you know what is wrong? Thank you for your patience and help!
| This line is the cause of your error:
images = self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))
images are uint8 (byte) while the neural network needs inputs in floating point in order to calculate gradients (you can't calculate gradients for backprop using integers, as those are neither continuous nor differentiable).
You can use torchvision.transforms.functional.to_tensor to convert the image to float and scale it into [0, 1]. Note that to_tensor expects a numpy array in H x W x C layout and returns a C x H x W tensor, so reshape accordingly:
import torchvision
images = torchvision.transforms.functional.to_tensor(
self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((16, 16, 1))
)
or simply divide by 255 to get values into [0, 1].
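For example, a minimal in-place fix to __getitem__ (keeping the original slicing, and assuming torch and numpy are already imported in that file) would be:
images = torch.from_numpy(
    self.data.iloc[idx, 1:-1].values.astype(np.float32).reshape((1, 16, 16))
) / 255.0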
| https://stackoverflow.com/questions/64635630/ |
word synonym / antonym detection | I need to create a classifier that takes 2 words and determines if they are synonyms or antonyms. I tried nltk's antsyn-net but it doesn't have enough data.
example:
capitalism <-[antonym]-> socialism
capitalism =[synonym]= free market
god <-[antonym]-> atheism
political correctness <-[antonym]-> free speech
advertising =[synonym]= marketing
I was thinking about taking a BERT model, because some of the relations may already be embedded in it, and transfer-learning on a dataset that I found.
| I would suggest the following pipeline:
Construct a training set from an existing dataset of synonyms and antonyms (taken e.g. from the WordNet thesaurus). You'll need to craft negative examples carefully.
Take a pretrained model such as BERT and fine-tune it on your task. If you choose BERT, it should probably be BertForNextSentencePrediction, where you use your words/phrases instead of sentences, and predict 1 if they are synonyms and 0 if they are not; same for antonyms.
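A rough sketch of that setup with the Hugging Face transformers library; the model name and the labels keyword are assumptions (older versions use next_sentence_label instead), so check the API of your installed version:
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

# Treat the two words/phrases as a "sentence pair"; map synonym pairs to one class, non-pairs to the other
inputs = tokenizer("capitalism", "free market", return_tensors="pt")
labels = torch.LongTensor([0])          # e.g. 0 = related pair, 1 = unrelated
outputs = model(**inputs, labels=labels)
outputs.loss.backward()                 # fine-tune by stepping an optimizer on this loss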
| https://stackoverflow.com/questions/64636782/ |
Pytorch error when computing loss between two tensors. TypeError: __init__() takes 1 positional argument but 3 were given | When trying to compute the loss between two tensors rPPG (shape torch.Size([4, 128])) and BVP_label (shape torch.Size([4, 128])) using the following function:
class Neg_Pearson(nn.Module): # Pearson range [-1, 1] so if < 0, abs|loss| ; if >0, 1- loss
def __init__(self):
super(Neg_Pearson,self).__init__()
return
def forward(self, preds, labels): # tensor [Batch, Temporal]
loss = 0
for i in range(preds.shape[0]):
sum_x = torch.sum(preds[i]) # x
sum_y = torch.sum(labels[i]) # y
sum_xy = torch.sum(preds[i]*labels[i]) # xy
sum_x2 = torch.sum(torch.pow(preds[i],2)) # x^2
sum_y2 = torch.sum(torch.pow(labels[i],2)) # y^2
N = preds.shape[1]
pearson = (N*sum_xy - sum_x*sum_y)/(torch.sqrt((N*sum_x2 - torch.pow(sum_x,2))*(N*sum_y2 - torch.pow(sum_y,2))))
print(N)
#if (pearson>=0).data.cpu().numpy(): # torch.cuda.ByteTensor --> numpy
# loss += 1 - pearson
#else:
# loss += 1 - torch.abs(pearson)
loss += 1 - pearson
loss = loss/preds.shape[0]
return loss
#3. Calculate the loss
loss_ecg = Neg_Pearson(rPPG, BVP_label)
I keep getting the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-f14cbf0fc84b> in <module>
1 #3. Calculate the loss
----> 2 loss_ecg = Neg_Pearson(rPPG, BVP_label)
TypeError: __init__() takes 1 positional argument but 3 were given
I am new to PyTorch and I am not sure what is going on here. Any suggestions?
| You are passing the tensors to the class constructor instead of instantiating the loss module first and then calling the instance (an nn.Module is called on its inputs, which routes them through forward). Instead try:
neg_pears_loss = Neg_Pearson()
loss = neg_pears_loss(rPPG, BVP_label)
| https://stackoverflow.com/questions/64638693/ |
Install Pytorch for only inference time | I have already done some research but could not find any useful information. Is it possible to install something like a mini PyTorch just to load a pre-trained model and call a prediction method?
The entire PyTorch library is too big, so I would like to avoid installing all of it.
Does anyone have idea? Thanks in advance.
| For production you need to use libtorch; the whole package is around 160 MB compressed, I guess.
You ship the DLLs that your application requires, and at the very minimum I guess it could be around 170-200 MB if you only use torch.dll.
And there is no PyTorch mini or anything like that, as far as I know.
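If you go the libtorch route, the Python side of the workflow is exporting your trained model to TorchScript. A minimal sketch (the resnet18 here is just an example model):
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)       # dummy input with the model's expected shape
traced = torch.jit.trace(model, example)   # record the forward pass as TorchScript
traced.save("model.pt")                    # load this file from C++ with torch::jit::load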
| https://stackoverflow.com/questions/64643258/ |
I can not import torch | When I try "import torch" I get this error. How can I fix it?
I attached the error as a screenshot.
| The directory in the screenshot shows Anaconda3. Try installing PyTorch using the conda command from the link below:
https://anaconda.org/pytorch/pytorch
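For example, the generic command looks like the one below, but the exact variant (CPU-only vs. a specific CUDA toolkit) depends on your system, so copy the one the page generates for you:
conda install pytorch torchvision -c pytorch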
| https://stackoverflow.com/questions/64644745/ |
Why does the same PyTorch code (different implementation) give different loss? | I was tackling the Fashion MNIST dataset problem on Udacity. However, my implementation gives a drastically different loss compared to the solution shared by the Udacity team. I believe the only difference is the definition of the neural network; apart from that, everything is the same. I cannot figure out the reason for such a drastic difference in loss.
Code 1: My solution:
import torch.nn as nn
from torch import optim
images, labels = next(iter(trainloader))
model = nn.Sequential(nn.Linear(784,256),
nn.ReLU(),
nn.Linear(256,128),
nn.ReLU(),
nn.Linear(128,64),
nn.ReLU(),
nn.Linear(64,10),
nn.LogSoftmax(dim=1))
# Flatten images
optimizer = optim.Adam(model.parameters(),lr=0.003)
criterion = nn.NLLLoss()
for i in range(10):
running_loss = 0
for images,labels in trainloader:
images = images.view(images.shape[0], -1)
output = model.forward(images)
loss = criterion(output,labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss}")
# Loss is coming around 4000
Code 2: Official Solution:
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
log_ps = model(images)
loss = criterion(log_ps, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss}")
# Loss is coming around 200
Is there any explanation for the vast difference in loss ?
| You forgot to zero out/clear the gradients in your implementation. That is, you are missing:
optimizer.zero_grad()
PyTorch accumulates gradients on every backward() call by default, so without clearing them, each optimizer step applies the sum of the gradients from all previous batches; this derails training and explains the much larger loss. In other words, simply do:
for i in range(10):
running_loss = 0
for images,labels in trainloader:
images = images.view(images.shape[0], -1)
output = model.forward(images)
loss = criterion(output,labels)
# missed this!
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss}")
and you are good to go!
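The reason this matters is that backward() accumulates into .grad rather than overwriting it; a quick sanity check illustrates the effect:
import torch

w = torch.ones(1, requires_grad=True)
(w * 2).backward()
(w * 2).backward()   # second backward() without zeroing adds to the existing gradient
print(w.grad)        # tensor([4.]), not tensor([2.])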
| https://stackoverflow.com/questions/64650459/ |