instruction | input | output | source
---|---|---|---|
Pytorch for Raspberry Pi 3 B+ | I have successfully cross-compiled PyTorch 1.7 for the Raspberry Pi 3 B+ and the following wheel was generated as a result:
torch-1.7.0a0-cp37-cp37m-linux_armv7l.whl
However, when I try to install the wheel on the pi, I get this error:
ERROR: torch-1.7.0a0-cp37-cp37m-linux_armv7l.whl is not a supported wheel on this platform.
When I do lscpu, I see that armv7l was correctly used as the target hw:
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Model: 4
Model name: ARMv7 Processor rev 4 (v7l)
CPU max MHz: 1400.0000
CPU min MHz: 600.0000
BogoMIPS: 38.40
Flags: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
And here is the output of pep425tags:
[('cp38', 'cp38', 'linux_armv7l'), ('cp38', 'abi3', 'linux_armv7l'),
('cp38', 'none', 'linux_armv7l'), ('cp37', 'abi3', 'linux_armv7l'),
('cp36', 'abi3', 'linux_armv7l'), ('cp35', 'abi3', 'linux_armv7l'),
('cp34', 'abi3', 'linux_armv7l'), ('cp33', 'abi3', 'linux_armv7l'),
('cp32', 'abi3', 'linux_armv7l'), ('py3', 'none', 'linux_armv7l'),
('cp38', 'none', 'any'), ('cp3', 'none', 'any'), ('py38', 'none',
'any'), ('py3', 'none', 'any'), ('py37', 'none', 'any'), ('py36',
'none', 'any'), ('py35', 'none', 'any'), ('py34', 'none', 'any'),
('py33', 'none', 'any'), ('py32', 'none', 'any'), ('py31', 'none',
'any'), ('py30', 'none', 'any')]
So I am wondering, what am I missing?
| I eventually found the answer: I had Python 3.8 on my target hw (Raspberry Pi 3 B+) and Python 3.7 on my build system. Downgrading Python 3.8 to Python 3.7 on the target hw fixed the issue.
| https://stackoverflow.com/questions/64073994/ |
AttributeError: 'Function' object has no attribute 'block_variable' | I have written a subclass of torch_fenics. In this, the input is a vector from DG space. I use this input in the weak formulation and then calculate the solution. Further, I need the gradient of the solution with respect to the given input.
I get the following error log when running it:
~/miniconda3/envs/py37/lib/python3.7/site-packages/torch/autograd/function.py in apply(self, *args)
75
76 def apply(self, *args):
---> 77 return self._forward_cls.backward(self, *args)
78
79
~/miniconda3/envs/py37/lib/python3.7/site-packages/torch_fenics/torch_fenics.py in backward(ctx, *grad_outputs)
88 # Check which gradients need to be computed
89 controls = list(map(fenics_adjoint.Control,
---> 90 (c for g, c in zip(ctx.needs_input_grad[1:], ctx.fenics_inputs) if g)))
91
92 # Compute and accumulate gradient for each output with respect to each input
~/miniconda3/envs/py37/lib/python3.7/site-packages/pyadjoint/control.py in __init__(self, control)
38 def __init__(self, control):
39 self.control = control
---> 40 self.block_variable = control.block_variable
41
42 def data(self):
AttributeError: 'Function' object has no attribute 'block_variable'
| Don't import dolfin in your code. It will resolve the issue.
| https://stackoverflow.com/questions/64077834/ |
Creating a Pseudo-Cyclic signal using a cyclic signal | I am looking for a way using numpy or pytorch to skew a tensor.
For example, given an array of samples of sin(x), I hope to get a skewed version of it (preferably the same size) such that the cycle of the function is either stretched or shrunk, or even both (if it can interpolate randomly), so that in some parts the frequency is higher and in some parts lower.
I need to create "pseudo-cyclic" signals which means that they are almost cyclic but not perfectly.
| You can play around with (for instance) a locally varying frequency. Considering a sine function as the base periodic function, using a locally varying frequency can give a "stretched" and/or "dilated" effect.
Example 1: chirp function with linear frequency change (check the wiki page for more information):
import matplotlib.pyplot as plt
import numpy as np
# initialize
n = 1000
x = np.linspace(0, 10, n)
# varying frequency between 2 and 4
f = np.linspace(2, 4, n)
y = np.sin(f * x)
# plot local frequency values and signal
plt.subplot(211)
plt.plot(x, f)
plt.ylabel('Local frequency')
plt.subplot(212)
plt.plot(x, y)
plt.ylabel('signal')
plt.xlabel('samples')
plt.show()
leading to a pseudo-periodic signal of increasing frequency:
Example 2: varying frequency with arbitrary function (here polynomial):
import matplotlib.pyplot as plt
import numpy as np
# initialize
n = 1000
x = np.linspace(0, 10, n)
# varying frequency given by a polynomial in x
f = .5 * (2 + (x - 5) ** 2 + 2 * x)
y = np.sin(f * x)
# plot local frequency values and signal
plt.subplot(211)
plt.plot(x, f)
plt.ylabel('Local frequency')
plt.subplot(212)
plt.plot(x, y)
plt.ylabel('signal')
plt.xlabel('samples')
plt.show()
with the following output
Playing around with the frequency function can help in getting other pseudo-periodic signal shapes. Also note that scipy comes with a bank of signal generators, like sweep_poly, which you can start from in order to get randomly varying signal frequencies.
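As a minimal sketch of that idea (the polynomial coefficients below are arbitrary assumptions, not from the original example):
import numpy as np
from scipy.signal import sweep_poly
t = np.linspace(0, 10, 1000)
poly = np.poly1d([0.05, -0.75, 2.5])  # frequency as an (arbitrary) polynomial of time
y = sweep_poly(t, poly)  # cosine whose local frequency follows poly(t)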
| https://stackoverflow.com/questions/64087503/ |
deep neural network model stops learning after one epoch | I am training an unsupervised NN model and for some reason, after exactly one epoch (80 steps), the model stops learning.
Do you have any idea why this might happen and what I should do to prevent it?
This is more info about my NN:
I have a deep NN that tries to solve an optimization problem. My loss function is customized and it is my objective function in the optimization problem.
So if my optimization problem is min f(x) ==> loss, then in my DNN, loss = f(x). I have 64 inputs, 64 outputs, and 3 layers in between:
self.l1 = nn.Linear(input_size, hidden_size)
self.relu1 = nn.LeakyReLU()
self.BN1 = nn.BatchNorm1d(hidden_size)
and last layer is:
self.l5 = nn.Linear(hidden_size, output_size)
self.tan5 = nn.Tanh()
self.BN5 = nn.BatchNorm1d(output_size)
to scale my network.
With more layers and nodes (doubled: 8 layers, each with 200 nodes), I can get a little more progress toward lower error, but again after 100 steps the training error becomes flat!
| The symptom is that the training loss stops improving relatively early. Assuming that your problem is learnable at all, there are many reasons for this behavior. The following are the most relevant:
Improper preprocessing of input: Neural networks prefer input with zero mean. E.g., if the input is all positive, it will restrict the weights to be updated in the same direction, which may not be desirable (https://youtu.be/gYpoJMlgyXA).
Therefore, you may want to subtract the mean from all the images (e.g., subtract 127.5 from each of the 3 channels). Scaling to make unit standard deviation in each channel may also be helpful.
Generalization ability of the network: The network is not complicated or deep enough for the task.
This is very easy to check. You can train the network on just a few images (say from 3 to 10). The network should be able to overfit the data and drive the loss to almost 0. If that is not the case, you may have to add more layers, such as using more than 1 Dense layer.
Another good idea is to use pre-trained weights (see the Applications section of the Keras documentation). You may adjust the Dense layers at the top to fit your problem.
Improper weight initialization: Improper weight initialization can prevent the network from converging (https://youtu.be/gYpoJMlgyXA, the same video as before).
For the ReLU activation, you may want to use He initialization instead of the default Glorot initialization. I find that this may be necessary sometimes, but not always.
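Since the question is about PyTorch, a minimal sketch of He initialization there could look as follows (the layer sizes are assumptions):
import torch.nn as nn
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')  # He initialization
        nn.init.zeros_(m.bias)
model = nn.Sequential(nn.Linear(64, 200), nn.ReLU(), nn.Linear(200, 64))
model.apply(init_weights)  # applies init_weights to every submodule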
Lastly, you can use debugging tools for Keras such as keras-vis, keplr-io, deep-viz-keras. They are very useful to open the blackbox of convolutional networks.
I faced the same problem, and then I did the following:
After going through a blog post, I managed to determine that my problem resulted from the encoding of my labels. Originally I had them as one-hot encodings which looked like [[0, 1], [1, 0], [1, 0]] and in the blog post they were in the format [0 1 0 0 1]. Changing my labels to this and using binary crossentropy has gotten my model to work properly. Thanks to Ngoc Anh Huynh and rafaelvalle!
| https://stackoverflow.com/questions/64095558/ |
Char RNN classification with batch size | I'm replicating this example for a classification with a Pytorch char-rnn.
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
I see that in every epoch only 1 random example is taken. I would like each epoch to go through the whole dataset with a specific batch size of examples. I can adjust the code to do this myself, but I was wondering if some flags already exist.
Thank you
| If you construct a Dataset class by inheriting from the PyTorch Dataset class and then feed it into the PyTorch DataLoader class, then you can set a parameter batch_size to determine how many examples you will get out in each iteration of your training loop.
I have followed the same tutorial as you. I can show you how I have used the PyTorch classes above to get the data in batches.
# load data into a DataFrame using the findFiles function as in the tutorial
files = findFiles('data/names') # load the files as in the tutorial into a dataframe
df_names = pd.concat([
pd.read_table(f, names = ["names"], header = None)\
.assign(lang = f.stem)\
for f in files]).reset_index(drop = True)
print(df_names.head())
# output:
# names lang
# 0 Abe Japanese
# 1 Abukara Japanese
# 2 Adachi Japanese
# 3 Aida Japanese
# 4 Aihara Japanese
# Make train and test data
from sklearn.model_selection import train_test_split
X_train, X_dev, y_train, y_dev = train_test_split(df_names.names, df_names.lang,
train_size = 0.8)
df_train = pd.concat([X_train, y_train], axis=1)
df_val = pd.concat([X_dev, y_dev], axis=1)
Now I construct a modified Dataset class using the dataframe(s) above by inheriting from the PyTorch Dataset class.
import torch
from torch.utils.data import Dataset, DataLoader
class NameDatasetReader(Dataset):
def __init__(self, df: pd.DataFrame):
self.df = df
def __len__(self):
return len(self.df)
def __getitem__(self, idx: int):
row = self.df.loc[idx] # gets a row from the df
input_name = list(row.names) # turns name into a list of chars
len_name = len(input_name) # length of name (used to pad packed sequence)
labels = row.label # target (assumes the language column has been label-encoded into an integer column named 'label')
return input_name, len_name, labels
train_dat = NameDatasetReader(df_train) # make dataset from dataframe with training data
Now, the thing is that when you want to work with batches and sequences you need the sequences to be of equal length in each batch. That is why I also get the length of the extracted name from the dataframe in the __getitem__() function above. This is to be used in a function that modifies the training examples used in each batch.
This is called a collate_batch function and in this example it modifies each batch of your training data such that the sequences in a given batch are of equal length.
# Dictionary of all letters (as in the original tutorial,
# I have just inserted also an entry for the padding token)
all_letters_dict = dict(zip(all_letters, range(1, len(all_letters) + 2)))
all_letters_dict['<PAD>'] = 0
# function to turn name into a tensor
def line_to_tensor(line):
"""turns name into a tensor of one hot encoded vectors"""
tensor = torch.zeros(len(line),
len(all_letters_dict.keys())) # (name_len x vocab_size) - <PAD> is part of vocab
for li, letter in enumerate(line):
tensor[li][all_letters_dict[letter]] = 1
return tensor
from typing import Tuple # needed for the type hints below
def collate_batch_lstm(input_data: Tuple) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Combines multiple name samples into a single batch
:param input_data: The combined input_ids, seq_lens, and labels for the batch
:return: A tuple of tensors (input_ids, seq_lens, labels)
"""
# loops over batch input and extracts vals
names = [i[0] for i in input_data]
seq_names_len = [i[1] for i in input_data]
labels = [i[2] for i in input_data]
max_length = max(seq_names_len) # longest sequence aka. name
# Pad all of the input samples to the max length
names = [(name + ["<PAD>"] * (max_length - len(name))) for name in names]
input_ids = [line_to_tensor(name) for name in names] # turn each list of chars into a tensor with one hot vecs
# Make sure each sample is max_length long
assert (all(len(i) == max_length for i in input_ids))
return torch.stack(input_ids), torch.tensor(seq_names_len), torch.tensor(labels)
Now, I can construct a dataloader by inserting the dataset object from above, the collate_batch_lstm() function above, and a given batch_size into the DataLoader class.
train_dat_loader = DataLoader(train_dat, batch_size = 4, collate_fn = collate_batch_lstm)
You can now iterate over train_dat_loader which returns a training batch with 4 names in each iteration.
Consider a given batch from train_dat_loader:
seq_tensor, seq_lengths, labels = next(iter(train_dat_loader))
print(seq_tensor.shape, seq_lengths.shape, labels.shape)
print(seq_tensor)
print(seq_lengths)
print(labels)
# output:
# torch.Size([4, 11, 59]) torch.Size([4]) torch.Size([4])
# tensor([[[0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# ...,
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.]],
# [[0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# ...,
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.]],
# [[0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# ...,
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.]],
# [[0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# ...,
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.],
# [1., 0., 0., ..., 0., 0., 0.]]])
# tensor([11, 3, 8, 7])
# tensor([14, 1, 14, 2])
It gives us a tensor of size (4 x 11 x 59).
4 because we have specified that we want a batch size of 4.
11 is the length of the longest name in the given batch (all other names have been padded with zeros so that they are of equal length).
59 is the number of characters in our vocabulary.
The next thing is to incorporate this into your training routine and use a packing routine to avoid doing redundant calculations on the zeros that you have padded your data with :)
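As a hedged sketch (the LSTM hyperparameters below are assumptions, not part of the tutorial), packing the padded batch for an nn.LSTM could look like this:
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence
lstm = nn.LSTM(input_size=59, hidden_size=32, batch_first=True)  # 59 = vocab size from above
packed = pack_padded_sequence(seq_tensor, seq_lengths, batch_first=True, enforce_sorted=False)
packed_out, (h_n, c_n) = lstm(packed)  # the LSTM skips the padded time steps
# h_n[-1] holds the final hidden state of each sequence, unaffected by padding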
| https://stackoverflow.com/questions/64098364/ |
Input fixed length sequence of frames to CNN | I want my pytorch CNN to take as input a sequence of length SEQ_LEN of 32x32 RGB images concatenated along the channels dimension. Therefore, a single input to the network has shape (32, 32, 3, SEQ_LEN). How should I define my CNN input layer?
The common way
SEQ_LEN = 10
input_conv = nn.Conv2d(in_channels=SEQ_LEN, out_channels=32, kernel_size=3)
BATCH_SIZE = 64
frames = np.random.randint(0, 255, size=(BATCH_SIZE, SEQ_LEN, 3, 32, 32))
frames_tensor = torch.tensor(frames)
input_conv(frames_tensor)
gives the error
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 10, 3, 3], but got 5-dimensional input of size [64, 10, 3, 32, 32] instead
| Given your comments, it sounds like your data is not fit for a 2D convolutional neural network at all, and that a 3D one (Conv3d) would be more appropriate. As you can see from its documentation, its input shape is what you would expect.
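As an illustrative sketch (treating RGB as the channels and SEQ_LEN as the depth; the layer sizes are assumptions):
import torch
import torch.nn as nn
conv3d = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=3)
frames = torch.randn(64, 3, 10, 32, 32)  # (batch, channels, SEQ_LEN as depth, H, W)
out = conv3d(frames)  # -> torch.Size([64, 32, 8, 30, 30])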
| https://stackoverflow.com/questions/64100096/ |
PyTorch LSTM not learning in training | I have the following simple LSTM network:
class LSTMModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.rnn = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_dim)
self.batch_size = None
self.hidden = None
def forward(self, x):
h0, c0 = self.init_hidden(x)
out, (hn, cn) = self.rnn(x, (h0, c0))
out = self.fc(out[:, -1, :])
return out
def init_hidden(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
return [t for t in (h0, c0)]
I am initialising this model as:
model = LSTMModel(28, 10, 6, 1)
i.e. each input instance has 6 time steps and the dimension of each time step is 28, and the hidden dimension is 10. The inputs are being mapped to an output dim of 1.
The training data is being prepared in batches of size 16, meaning that the data passed in the training loop has the shape:
torch.Size([16, 6, 28])
With labels of shape:
batches[1][0].size()
An example of the input is:
tensor([[-0.3674, 0.0347, -0.2169, -0.0821, -0.3673, -0.1773, 1.1840, -0.2669,
-0.4202, -0.1473, -0.1132, -0.4756, -0.3565, 0.5010, 0.1274, -0.1147,
0.2783, 0.0836, -1.3251, -0.8067, -0.6447, -0.7396, -0.3241, 1.3329,
1.3801, 0.8198, 0.6098, 0.0697],
[-0.2710, 0.1596, -0.2524, -0.0821, -0.3673, -0.1773, 0.0302, -0.2099,
-0.4550, 0.1451, -0.4561, -0.5207, -0.5657, -0.5287, -0.2690, -0.1147,
-0.0346, -0.1043, -0.7515, -0.8392, -0.4745, -0.7396, -0.3924, 0.8122,
-0.1624, -1.2198, 0.0326, -0.9306],
[-0.1746, 0.0972, -0.2702, -0.0821, -0.3673, -0.1773, -0.0468, -1.1225,
-0.4480, -0.4397, 0.4011, -1.1073, -1.0536, -0.1855, -0.7502, -0.1147,
-0.0146, -0.1545, -0.1919, -0.1674, 0.0930, -0.7396, 0.8106, 1.1594,
0.4546, -1.2198, -0.5446, -1.2640],
[-0.2710, 0.0660, -0.2524, -0.0821, -0.4210, -0.1773, 1.8251, -0.5236,
-0.4410, -0.7321, 0.4011, -0.6110, -0.2171, 1.1875, -0.2973, -0.1147,
-0.1278, 0.7728, -0.9334, -0.5141, -2.1202, 1.3521, -0.9393, 0.5085,
-0.4709, 0.8198, -1.1218, 0.0697],
[-0.3674, -0.0277, -0.2347, -0.0821, -0.0448, -0.1773, 0.2866, -0.1386,
-0.4271, 0.4375, -0.2847, -0.1146, -0.4262, -0.3571, -0.0425, -0.1147,
-0.4207, -0.4552, -0.5277, -0.9584, -0.4177, -0.7396, -0.2967, 0.5085,
0.4546, -1.2198, -0.3522, -1.2640],
[-0.3674, -0.1447, -0.1991, -0.0821, 0.1701, -0.1773, 0.0430, 0.1324,
-0.4271, 0.7299, -0.4561, 0.2915, -0.5657, -0.1855, -0.2123, -0.1147,
-0.0413, -0.8311, -0.6396, -1.0451, -0.4177, -0.7396, -0.2967, -0.4028,
0.7631, -1.2198, -0.3522, -1.2640]])
When I train the model as:
Epochs = 10
batch_size = 32
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
for epoch in range(Epochs):
print(f"Epoch {epoch + 1}")
for n, (X, y) in enumerate(batches):
model.train()
optimizer.zero_grad()
y_pred = model(X)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
model.eval()
accurate = 0
for X_instance, y_instance in zip(test_X, test_y):
if y_instance == round(model(X_instance.view(-1, 6, 28)).detach().item()):
accurate += 1
print(f"Accuracy test set: {accurate/len(test_X)}")
The accuracy does not converge:
Epoch 1
Accuracy test set: 0.23169107856191745
Sample params:
tensor([-0.3356, -0.0105, -0.3405, -0.0049, 0.0037, 0.1707, 0.2685, -0.3893,
-0.4707, -0.2872, -0.1544, -0.1455, 0.0393, 0.0774, -0.4194, 0.0780,
-0.2177, -0.3829, -0.4679, 0.0370, -0.0794, 0.0455, -0.1331, -0.0169,
-0.1551, -0.0348, 0.1746, -0.5163], grad_fn=<SelectBackward>)
tensor([ 0.2137, -0.2558, 0.1509, -0.0975, 0.5591, 0.0907, -0.1249, 0.3095,
0.2112, 0.3134, -0.1581, -0.3051, -0.3559, -0.0177, 0.1485, 0.4397,
-0.1441, 0.1705, 0.3230, -0.3236, 0.0692, 0.0920, -0.2691, -0.3695,
-0.0692, 0.3747, 0.0149, 0.5216], grad_fn=<SelectBackward>)
Epoch 2
Accuracy test set: 0.23049267643142476
Sample params:
tensor([-0.3483, -0.0144, -0.3512, 0.0213, -0.0081, 0.1777, 0.2674, -0.4031,
-0.4628, -0.3041, -0.1651, -0.1511, 0.0216, 0.0513, -0.4320, 0.0839,
-0.2602, -0.3629, -0.4541, 0.0398, -0.0768, 0.0432, -0.1150, -0.0160,
-0.1346, -0.0727, 0.1801, -0.5253], grad_fn=<SelectBackward>)
tensor([ 0.1879, -0.2534, 0.1461, -0.1141, 0.5735, 0.0872, -0.1286, 0.3273,
0.2084, 0.3037, -0.1535, -0.2934, -0.3870, -0.0252, 0.1492, 0.4752,
-0.1709, 0.1776, 0.3390, -0.3318, 0.0734, 0.1077, -0.2790, -0.3777,
-0.0518, 0.3726, 0.0228, 0.5404], grad_fn=<SelectBackward>)
Epoch 3
Accuracy test set: 0.22982689747003995
Sample params:
tensor([-0.3725, -0.0069, -0.3623, 0.0393, -0.0167, 0.1748, 0.2577, -0.4183,
-0.4681, -0.3196, -0.1657, -0.1613, 0.0122, 0.0268, -0.4361, 0.0838,
-0.2962, -0.3566, -0.4344, 0.0366, -0.0822, 0.0486, -0.1150, -0.0295,
-0.1080, -0.1094, 0.1841, -0.5336], grad_fn=<SelectBackward>)
tensor([ 0.1664, -0.2456, 0.1477, -0.1332, 0.5820, 0.0819, -0.1228, 0.3426,
0.2066, 0.2985, -0.1464, -0.2824, -0.4199, -0.0323, 0.1530, 0.5057,
-0.1991, 0.1856, 0.3407, -0.3347, 0.0800, 0.1203, -0.2791, -0.3863,
-0.0426, 0.3760, 0.0327, 0.5641], grad_fn=<SelectBackward>)
Epoch 4
Accuracy test set: 0.23249001331557922
Sample params:
tensor([-0.3945, 0.0032, -0.3765, 0.0600, -0.0248, 0.1713, 0.2442, -0.4297,
-0.4741, -0.3311, -0.1653, -0.1667, 0.0029, 0.0066, -0.4373, 0.0738,
-0.3320, -0.3530, -0.4136, 0.0390, -0.0731, 0.0552, -0.1117, -0.0517,
-0.0871, -0.1455, 0.1841, -0.5359], grad_fn=<SelectBackward>)
tensor([ 0.1495, -0.2292, 0.1524, -0.1473, 0.5938, 0.0661, -0.1157, 0.3626,
0.2013, 0.2927, -0.1350, -0.2661, -0.4558, -0.0411, 0.1562, 0.5381,
-0.2279, 0.1927, 0.3319, -0.3431, 0.0852, 0.1402, -0.2747, -0.4026,
-0.0297, 0.3757, 0.0396, 0.5856], grad_fn=<SelectBackward>)
Have I made a mistake in the model definition?
| So normally 6 layers in your LSTM are way too many. The input dimension is 28 (are you training MNIST, or are the inputs letters?), so 10 as the hidden dimension is actually way too small. Try the following parameters:
hidden_dim = 128 to 512
layer_dim = 2 to max. 4
I see your output shape is 1 and you don't use an activation function. Are you trying to predict integers (like 1 for class "dog", 2 for class "cat")? If so, you should switch to one-hot encoding, so that your output shape is equal to the number of classes you want to predict. And then use softmax as the activation for your last layer.
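As a rough sketch of those suggestions applied to your code (num_classes is an assumption for your task):
model = LSTMModel(input_dim=28, hidden_dim=128, layer_dim=2, output_dim=num_classes)
# nn.CrossEntropyLoss expects raw logits and integer class labels; it applies
# log-softmax internally, so no explicit softmax layer is needed with this loss
criterion = nn.CrossEntropyLoss()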
| https://stackoverflow.com/questions/64103683/ |
In 2020 what is the optimal way to train a model in Pytorch on more than one GPU on one computer? | What are the best practices for training one neural net on more than one GPU on one machine?
I'm a little confused by the different options: nn.DataParallel vs putting different layers on different GPUs with .to('cuda:0') and .to('cuda:1'). I see in the PyTorch docs that the latter method dates from 2017. Is there a standard, or does it depend on preference or the type of model?
Method 1
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = torch.nn.Linear(10, 10)
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(10, 5)
def forward(self, x):
x = self.relu(self.net1(x))
return self.net2(x)
model = ToyModel().to('cuda')
model = nn.DataParallel(model)
Method 2
class ToyModel(nn.Module):
def __init__(self):
super(ToyModel, self).__init__()
self.net1 = torch.nn.Linear(10, 10).to('cuda:0')
self.relu = torch.nn.ReLU()
self.net2 = torch.nn.Linear(10, 5).to('cuda:1')
def forward(self, x):
x = self.relu(self.net1(x.to('cuda:0')))
return self.net2(x.to('cuda:1'))
I'm not sure there aren't more ways Pytorch provides to train on more than one GPU.
Both of these methods seem to cause my system to freeze, depending on what model I use them with. In Jupyter the cell stays at a [*] and if I don't restart the kernel the screen freezes and I have to do a hard reset. A few tutorials on multi-GPU cause my system to hang and freeze like this.
| If you cannot fit all the layers of your model on a single GPU, then you can use model parallel (that article describes model parallel on a single machine, with layer0.to('cuda:0') and layer1.to('cuda:1') like you mentioned).
If you can, then you can try distributed data parallel - each worker will hold its own copy of the entire model (all layers), and will work on a small portion of the data in each batch. DDP is recommended instead of DP, even if you only use a single machine.
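A minimal hedged sketch of single-machine DDP (assuming a launch via python -m torch.distributed.launch --use_env --nproc_per_node=2 train.py; the launch mechanics vary across versions):
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
dist.init_process_group(backend="nccl")  # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
model = ToyModel().to(local_rank)
model = DDP(model, device_ids=[local_rank])  # gradients sync automatically during backward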
Do you have some examples that can reproduce the issues you're having?
Have you tried running your code with tiny inputs, and adding print statements to see whether progress is being made?
| https://stackoverflow.com/questions/64105986/ |
Why is looping through pytorch tensors so slow (compared to Numpy)? | I've been working with image transformations recently and came to a situation where I have a large array (shape of 100,000 x 3) where each row represents a point in 3D space like:
pnt = [x y z]
All I'm trying to do is iterate through each point and matrix-multiply it with a matrix called T (shape = 3 x 3).
Test with Numpy:
import numpy as np
# arr is assumed to be preallocated, e.g. arr = np.zeros(pnt_cloud.shape[0])
def transform(pnt_cloud, T):
i = 0
for pnt in pnt_cloud:
xyz_pnt = np.dot(T, pnt)
if xyz_pnt[0] > 0:
arr[i] = xyz_pnt[0]
i += 1
return arr
Calling the following code and calculating runtime (using %time) gives the output:
Out[190]: CPU times: user 670 ms, sys: 7.91 ms, total: 678 ms
Wall time: 674 ms
Test with Pytorch Tensor:
import torch
tensor_cld = torch.tensor(pnt_cloud)
tensor_T = torch.tensor(T)
def transform(pnt_cloud, T):
depth_array = torch.tensor(np.zeros(pnt_cloud.shape[0]))
i = 0
for pnt in pnt_cloud:
xyz_pnt = torch.matmul(T, pnt)
if xyz_pnt[0] > 0:
depth_array[i] = xyz_pnt[0]
i += 1
return depth_array
Calling the following code and calculating runtime (using %time) gives the output:
Out[199]: CPU times: user 6.15 s, sys: 28.1 ms, total: 6.18 s
Wall time: 6.09 s
NOTE: Doing the same with torch.jit only reduces the time by about 2 s.
I would have thought that PyTorch tensor computations would be much faster due to the way PyTorch breaks its code down in the compiling stage. What am I missing here?
Would there be any faster way to do this other than using Numba?
| Why are you using a for loop??
Why do you compute a 3x3 dot product and only use the first element of the result??
You can do all the math in a single matmul:
with torch.no_grad():
depth_array = torch.matmul(pnt_cloud, T[:1, :].T) # nx3 dot 3x1 -> nx1
# since you only want non negative results
depth_array = torch.clamp(depth_array, min=0) # clamp takes a scalar bound; torch.maximum would need a tensor
Since you want to compare runtime to numpy, you should disable gradient tracking (hence the torch.no_grad block).
| https://stackoverflow.com/questions/64136656/ |
How to load checkpoints across different versions of pytorch (1.3.1 and 1.6.x) using ppc64le and x86? | As I outlined here I am stuck using old versions of pytorch and torchvision due to hardware e.g. using ppc64le IBM architectures.
For this reason, I am having issues when sending and receiving checkpoints between different computers, clusters and my personal mac. I wonder if there is any way to load models in a way to avoid this issue? e.g. perhaps saving models in with a old and new format when using 1.6.x. Of course for the 1.3.1 to 1.6.x is impossible but at leat I was hoping something would work.
Any advice? Of course my ideal solution is that I don't have to worry about it and I can always load and save my checkpoints and everything I usually pickle uniformly across all my hardware.
The first error I got was a zip jit error:
RuntimeError: /home/miranda9/data/f.pt is a zip archive (did you mean to use torch.jit.load()?)
so I used that (and other pickle libraries):
# %%
import torch
from pathlib import Path
def load(path):
import torch
import pickle
import dill
path = str(path)
try:
db = torch.load(path)
f = db['f']
except Exception as e:
db = torch.jit.load(path)
f = db['f']
#with open():
# db = pickle.load(open(path, "r+"))
# db = dill.load(open(path, "r+"))
#raise ValueError(f'FAILED: {e}')
return db, f
p = "~/data/f.pt"
path = Path(p).expanduser()
db, f = load(path)
Din, nb_examples = 1, 5
x = torch.distributions.Normal(loc=0.0, scale=1.0).sample(sample_shape=(nb_examples, Din))
y = f(x)
print(y)
print('Success!\a')
but I get complaints about the different pytorch versions which I am forced to use:
Traceback (most recent call last):
File "hal_pg.py", line 27, in <module>
db, f = load(path)
File "hal_pg.py", line 16, in load
db = torch.jit.load(path)
File "/home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/jit/__init__.py", line 239, in load
cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /opt/anaconda/conda-bld/pytorch-base_1581395437985/work/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /opt/anaconda/conda-bld/pytorch-base_1581395437985/work/caffe2/serialize/inline_container.cc:131)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xbc (0x7fff7b527b9c in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1d98 (0x7fff1d293c78 in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x88 (0x7fff1d2950d8 in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #3: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x64 (0x7fff1e624664 in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x70e210 (0x7fff7c0ae210 in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x28efc4 (0x7fff7bc2efc4 in /home/miranda9/.conda/envs/wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #26: <unknown function> + 0x25280 (0x7fff84b35280 in /lib64/libc.so.6)
frame #27: __libc_start_main + 0xc4 (0x7fff84b35474 in /lib64/libc.so.6)
Any ideas how to make everything consistent across the clusters? I can't even open the pickle files.
maybe this is just impossible with the current pytorch version I am forced to use :(
RuntimeError: version_number <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /opt/anaconda/conda-bld/pytorch-base_1581395437985/work/caffe2/serialize/inline_container.cc:131, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 1. Your PyTorch installation may be too old. (init at /opt/anaconda/conda-bld/pytorch-base_1581395437985/work/caffe2/serialize/inline_container.cc:131)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xbc (0x7fff83ba7b9c in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1d98 (0x7fff25993c78 in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x88 (0x7fff259950d8 in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #3: torch::jit::import_ir_module(std::shared_ptr<torch::jit::script::CompilationUnit>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x64 (0x7fff26d24664 in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #4: <unknown function> + 0x70e210 (0x7fff8472e210 in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x28efc4 (0x7fff842aefc4 in /home/miranda9/.conda/envs/automl-meta-learning_wmlce-v1.7.0-py3.7/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #23: <unknown function> + 0x25280 (0x7fff8d335280 in /lib64/libc.so.6)
frame #24: __libc_start_main + 0xc4 (0x7fff8d335474 in /lib64/libc.so.6)
using code:
from pathlib import Path
import torch
path = '/home/miranda9/data/dataset/'
path = Path(path).expanduser() / 'fi_db.pt'
path = str(path)
# db = torch.load(path)
# torch.jit.load(path)
db = torch.jit.load(str(path))
print(db)
related links:
How to load checkpoints across different versions of pytorch (1.3.1 and 1.6.x) using ppc64le and x86?
https://discuss.pytorch.org/t/how-to-load-checkpoints-across-different-versions-of-pytorch-1-3-1-and-1-6-x-using-ppc64le-and-x86/97829
related gitissue: https://github.com/pytorch/pytorch/issues/43766
reddit: https://www.reddit.com/r/pytorch/comments/jvza7v/how_to_load_checkpoints_across_different_versions/
| I believe what the developers intend is that you pass a flag to save as a pickle; it is just a default-behavior change.
For previously checkpointed files, reload the zip-format saved weights in the newer env (with pytorch >= 1.6), and then checkpoint again as a pickle (no need to re-train);
update your code and add the flag from next time.
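A minimal sketch of that conversion (the file names are placeholders):
import torch
state = torch.load('f.pt', map_location='cpu')  # the newer env can read the zip format
torch.save(state, 'f_legacy.pt', _use_new_zipfile_serialization=False)  # old envs can read this pickle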
Deprecation from ver 1.6:
We have switched torch.save to use a zip file-based format by default
rather than the old Pickle-based format. torch.load has retained the
ability to load the old format, but use of the new format is
recommended. The new format is:
more friendly for inspection and building tooling for manipulating the save files
fixes a long-standing issue wherein serialization (__getstate__, __setstate__) functions on Modules that depended on serialized Tensor values were getting the wrong data
the same as the TorchScript serialization format, making serialization more consistent across PyTorch
Usage is as follows:
m = MyMod()
torch.save(m.state_dict(), 'mymod.pt') # Saves a zipfile to mymod.pt
To use the old format, pass the flag _use_new_zipfile_serialization=False
m = MyMod()
torch.save(m.state_dict(), 'mymod.pt', _use_new_zipfile_serialization=False) # Saves pickle
| https://stackoverflow.com/questions/64141188/ |
Fine tuning of Bert word embeddings | I would like to load a pre-trained Bert model and to fine-tune it and particularly the word embeddings of the model using a custom dataset.
The task is to use the word embeddings of chosen words for further analysis.
It is important to mention that the dataset consists of tweets and there are no labels.
Therefore, I used the BertForMaskedLM model.
Is it OK for this task to use the input ids (the tokenized tweets) as the labels?
I have no labels. There are just tweets in randomized order.
From this point, I present the code I wrote:
First, I cleaned the dataset of emojis, non-ASCII characters, etc., as described in the following link (Section 2.3):
https://www.kaggle.com/jaskaransingh/bert-fine-tuning-with-pytorch
Second, the code of the fine tuning process:
import torch
import pandas as pd
from transformers import BertTokenizer, BertForMaskedLM, AdamW
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.to(device)
model.train()
lr = 1e-2
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False)
max_len = 82
chunk_size = 20
epochs = 20
max_grad_norm = 1.0 # assumed value; this variable was undefined in the original snippet
for epoch in range(epochs):
epoch_losses = []
for j, batch in enumerate(pd.read_csv(path + file_name, chunksize=chunk_size)):
tweets = batch['content_cleaned'].tolist()
encoded_dict = tokenizer.batch_encode_plus(
tweets, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = max_len, # Pad & truncate all sentences.
pad_to_max_length = True,
truncation=True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
)
input_ids = encoded_dict['input_ids'].to(device)
# Is it correct? or should I train it in another way?
loss, _ = model(input_ids, labels=input_ids)
loss_score = loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
optimizer.zero_grad()
model.save_pretrained(path + "Fine_Tuned_BertForMaskedLM")
The loss starts at around 50 and decreases to 2.3.
| Since the objective of the masked language model is to predict the masked token, the label and the inputs are the same. So, whatever you have written is correct.
However, I would like to add something on the concept of comparing word embeddings. BERT is not a static word-embeddings model; it is contextual, in the sense that the same word can have different embeddings in different contexts. Example: the word 'talk' will have different embeddings in the sentences "I want to talk" and "I will attend a talk". So, there is no single vector of embeddings for each word. (This makes BERT different from word2vec or fastText.) Masked language modeling (MLM) on a pre-trained BERT is usually performed when you have a small new corpus and want your BERT model to adapt to it. However, I am not sure about the performance gain that you would get by using MLM and then fine-tuning on a specific task, compared to directly fine-tuning the pre-trained model with a task-specific corpus on a downstream task.
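For the further analysis you mention, a hedged sketch of extracting contextual embeddings from the fine-tuned encoder (rather than the MLM head; it reuses the tokenizer and path from the snippet above, and the example sentence is a placeholder) could be:
from transformers import BertModel
bert = BertModel.from_pretrained(path + "Fine_Tuned_BertForMaskedLM")  # loads the saved encoder weights
bert.eval()
enc = tokenizer("some tweet text", return_tensors='pt')
with torch.no_grad():
    hidden_states = bert(**enc)[0]  # (1, seq_len, 768): one contextual vector per token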
| https://stackoverflow.com/questions/64145666/ |
BERT NER Python | I am using the BERT model for a Named Entity Recognition task.
I have torch version - 1.2.0+cu9.2
torch vision version - 0.4.0+cu9.2
Nvidia drivers compatible with cuda 9.2
When I try to train my model using the command
loss, scores = model(b_input_ids.type(torch.cuda.LongTensor), token_type_ids=None,attention_mask=b_input_mask.to(device), labels=b_labels.type(torch.cuda.LongTensor))
I am getting the error below:
C:/w/1/s/windows/pytorch/aten/src/THC/THCTensorIndex.cu:361: block: [35,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Can somebody help me with this one?
| A bit of googling turned up this hint, with the following suggestions:
This is due to an out of bounds index in the embedding matrix.
If you are seeing this error using an nn.Embedding layer, you might add a print statement which shows the min and max values for each input. Some batches might have an out of bounds index.
Once you find the erroneous batch, you should have a look at how it was created so that you can fix this error.
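A minimal sketch of that check (the loader and variable names are assumptions based on your snippet):
for b_input_ids, b_input_mask, b_labels in train_dataloader:  # hypothetical DataLoader
    print(b_input_ids.min().item(), b_input_ids.max().item())
    # every id must be < the embedding's num_embeddings (the tokenizer's vocab size)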
Without seeing your code nobody is going to be able to help more.
| https://stackoverflow.com/questions/64156127/ |
Pytorch issue: torch.load() does not correctly load a saved model from file after closing and reopening Spyder IDE | I followed the most basic code procedure for saving and loading neural network model parameters and it works perfectly fine. After training the network, it is saved to a specified file in a specified folder in the package using the standard torch.save(model.state_dict(), file) method; when I need to rerun the program to evaluate instead of train, it is loaded using the standard model.load_state_dict(torch.load(file)) method. However, as soon as I close the Spyder IDE application and reopen the IDE and project, torch.load does not result in the desired saved model. Note that I am testing the 'correctness' of the model by feeding inputs and checking the outputs. I am not sure if this is a problem with Spyder or with Pytorch, though I'm leaning towards Spyder because the loading does work.
In short, the program works if I run it consecutively while Spyder is open, but as soon as Spyder closes and reopens, it ceases to work properly. Does anyone have a clue?
| It might be due to the fact that the state of your program is known while the IDE is running, but when closing it the state is lost, resulting in the inability to know how to load the model (because the IDE doesn't know what model you are using). To solve this, try defining a new model and loading the parameters into it via load, like so:
my_model = MyModelClass(parameters).to(device)
checkpoint = torch.load(path)
my_model.load_state_dict(checkpoint)
This way the IDE knows what class is your model and can treat it properly.
| https://stackoverflow.com/questions/64163602/ |
How to do 'same' padding in PyTorch if (n - 1) / 2 is no integer value | I am trying to reconstruct a neural network written in tensorflow. For the convolutional layer, they just use padding='SAME'. This doesn't exist in pytorch. I know that I can calculate the padding with p = (n - 1) / 2 for stride=1. But what if this doesn't result in an integer value? In my case, n is 4 and I always want to achieve same padding.
| Use math.floor function to round down to the nearest integer or the math.ceil function to round up to the nearest integer:
import math
# for flooring
p = math.floor((n - 1) / 2)
# for ceiling
p = math.ceil((n - 1) / 2)
For example, by default, pytorch uses flooring for MaxPool layers. So, I think flooring is a good starting point.
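For an even kernel like n = 4, you can also combine both: pad the floored amount on one side and the ceiled amount on the other to get an exact 'same' output (a sketch, assuming stride=1):
import math
import torch
import torch.nn.functional as F
n = 4
left, right = math.floor((n - 1) / 2), math.ceil((n - 1) / 2)
x = torch.randn(1, 1, 8, 8)
x_padded = F.pad(x, (left, right, left, right))  # order: (W_left, W_right, H_top, H_bottom)
conv = torch.nn.Conv2d(1, 1, kernel_size=n, stride=1, padding=0)
y = conv(x_padded)  # spatial size stays 8x8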
| https://stackoverflow.com/questions/64163825/ |
Pytorch - Trick AutoGrad into thinking another output is the final outcome | The scenario:
I have a simple torch CNN network that predicts if a given image input is a dog or a cat.
After getting the output of the neural network, I need to apply a modifier of X to each prediction. For example, if the neural network returns [0.6, 0.4] and I want to apply a modifier of [0.05, -0.03], I need the result to be [0.65, 0.37].
The desired result:
I would like AutoGrad to think the final output is [0.65, 0.37]. That is, AutoGrad shouldn't consider the modifier addition at all. In fact, it needs to be tricked into thinking the last operation's result is [0.65, 0.37] instead of [0.6, 0.4], and apply the backpropagation with that in mind.
Here's what I need to know:
How am I supposed to do this? I know Torch records every single operation and computes a dynamic graph accordingly. I don't want this operation to be recorded at all, and I can't use the torch.no_grad() wrapper, because I need to be able to do the backpropagation afterwards.
Edit 1
@trsvchn Here's what happens when I do your method. Is this expected? Will autograd use the "T" value instead of the "data" value when doing the backpropagation?
| Use the torch.no_grad context manager to deactivate autograd; the following operation will not be recorded. But you should use an in-place operation, otherwise you will add another tensor without a grad_fn and break the graph. In your case:
out = model(inputs) # [0.6, 0.4]
with torch.no_grad():
out.add_(torch.tensor([0.05, -0.03])) # inplace add_ op
| https://stackoverflow.com/questions/64178913/ |
Huggingface transformers unusual memory use | I have the following code attempting to use XL transformers to vectorize text:
text = "Some string about 5000 characters long"
tokenizer = TransfoXLTokenizerFast.from_pretrained('transfo-xl-wt103', cache_dir=my_local_dir, local_files_only=True)
model = TransfoXLModel.from_pretrained("transfo-xl-wt103", cache_dir=my_local_dir, local_files_only=True)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
This produces:
output = model(**encoded_input)
File "/home/user/w/default/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/w/default/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 863, in forward
output_attentions=output_attentions,
File "/home/user/w/default/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/w/default/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 385, in forward
dec_inp, r, attn_mask=dec_attn_mask, mems=mems, head_mask=head_mask, output_attentions=output_attentions,
File "/home/user/w/default/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/w//default/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 338, in forward
attn_score = attn_score.float().masked_fill(attn_mask[:, :, :, None], -1e30).type_as(attn_score)
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 2007869696 bytes. Error code 12 (Cannot allocate memory)
I'm a little perplexed by this because it is asking for 2007869696 bytes, which is only 2 GB, and this machine has 64 GB of RAM. So I both don't understand why it is asking for this and, even more, why it is failing to get it.
Where can I change the setting that controls this and allow this process more RAM? This is such a small invocation of the example code, and I just see very few places that would even accept this argument.
| Are you sure you are using the gpu instead of cpu?
Try to run the python script with CUDA_LAUNCH_BLOCKING=1 python script.py. This will produce the correct python stack trace (as CUDA calls are asynchronous)
Also you can set the CUDA_VISIBLE_DEVICES using export CUDA_VISIBLE_DEVICES=device_number.
There is also an issue still open on the pytorch github, try to check it out.
| https://stackoverflow.com/questions/64180517/ |
Runtime Error : both arguments to matmul need to be at least 1d but they are 0d and 2d | This is the code I have written; I have tried modifying here and there and have always gotten the same error. As I am a beginner to PyTorch, I am just trying things out to see if machine learning will work on a linear dataset. So, with random, I initialized a dataset. Then I made a single linear neural network and trained it on the linear data. Nevertheless, I get an error stating that matmul expects at least 1d data, while my data is at least 1d but is read as 0d instead.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random
class LinearNetwork(nn.Module):
def __init__(self):
super(LinearNetwork, self).__init__()
self.fc1 = nn.Linear(1, 1)
def forward(self, input):
x = self.fc1(input)
return x;
#Generate a Dataset to train on. Just a linear function, and we wanna see if this thing is working
DataSet = []
grad = random.randint(0,100)
const = random.randint(0,100)
for i in range(0, 1000):
DataSet.append([i, grad*i + const])
DataSet = torch.tensor(DataSet,dtype = torch.double, requires_grad = True)
#Declare a Linear Network
Net = LinearNetwork()
criterion = nn.MSELoss()
optimizer = optim.SGD(Net.parameters(), lr = 0.01)
print(DataSet)
#Train Model
for i, data in enumerate(DataSet, 0):
Input, Target = data
optimizer.zero_grad()
Output = Net(Input)
loss = criterion(Output, Target)
loss.backward()
optimizer.step()
print("Gradient of the Function is : " + str(grad))
print("Constant Value of the Function is : " + str(const))
print("Learned Gradient and Constant : " + list(Net.parameters()))
| There are two errors in your code, first regarding shapes, second regarding dtypes. BTW. Please use snake_case for variables (e.g. my_dataset, net) and CamelCase for classes as it's a common Python convention.
Shape error
This one lies here:
for i, data in enumerate(DataSet, 0):
input, target = data
optimizer.zero_grad()
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()
When you print input.shape you get torch.Size([]), which is a 0d tensor. Matrix multiplication needs a 1d tensor, so you should unsqueeze it so it has this dimension. Change the above output = net(input) to:
output = net(input.unsqueeze(dim=0))
dtype error
Here:
DataSet = torch.tensor(DataSet,dtype = torch.double, requires_grad = True)
You create tensor of torch.double type while your neural network is (by default) of type torch.float. The latter is usually used as that's enough numerical precision and saves memory. So instead of the above you should do:
DataSet = torch.tensor(DataSet,dtype = torch.float, requires_grad = True)
Shape again
Coming back to shapes, neural networks should operate on at least two dimensions: (batch, features). In your case batch=1 and features=1, so it should be unsqueezed once more (and I advise you to try your code using batches).
| https://stackoverflow.com/questions/64192810/ |
Pytorch listed by conda but cannot import | I am well aware similar questions have been asked at least twice, but none of the answers seems to solve the issue at hand.
My configuration
Windows 10.0.18363,
Anaconda 4.8.5,
Cuda 10.1.243
conda env create -n torch -y python 3.7
conda activate torch
conda install conda -y
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch -y
Here's the bug
python -c "import torch"
Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torch'
What I tried
Verifying python and conda
where python
C:\ProgramData\Anaconda3\envs\torch\python.exe
C:\ProgramData\Anaconda3\python.exe
C:\csvn\Python25\python.exe
python -c "import site; print(site.getsitepackages())"
['C:\ProgramData\Anaconda3',
'C:\ProgramData\Anaconda3\lib\site-packages']
conda update -n base -y conda
conda update --all -y
conda init
...
No action taken.
Verifying the torch installation
conda list | findstr torch
_pytorch_select 0.1 cpu_0
pytorch 1.6.0 cpu_py37h538a6d7_0
torchvision 0.7.0 py37_cu102 pytorch
| More a suggestion than a solution: you can at least reduce the problem surface by working with a YAML instead of using a series of create/activate/install commands. Create the file:
torch.yaml
name: torch
channels:
- pytorch
- defaults
dependencies:
- python=3.7
- pytorch
- torchvision
- cudatoolkit=10.2
Then simply use
conda env create -f torch.yaml
The result should be equivalent to the environment you indicated, covering both env creation and installation of all packages in a single command.[1] Plus, you don't need all those pesky --yes|-y flags.
Any problems still persisting are most likely due to PATH or other environment variable management issues.
[1] I excluded conda from the YAML because that package should only be installed in base. Perhaps you meant the anaconda package?
| https://stackoverflow.com/questions/64197273/ |
How does Pytorch's `autograd` handle non-mathematical functions? | During the course of my training process, I tend to use a lot of calls to torch.cat() and copying tensors into new tensors. How are these operations handled by autograd? Is the gradient value affected by these operations?
| As pointed out in the comments, cat is a mathematical function. For example, we could write the following (special-case, two-vector) definition of cat in more traditional mathematical notation as
f(x, y) = cat(x, y) = (x_1, ..., x_m, y_1, ..., y_n)^T
The Jacobian of this function w.r.t. either of its inputs can be expressed in block form as
df/dx = [ I_m ; 0_(n x m) ] and df/dy = [ 0_(m x n) ; I_n ]
(an identity block selecting the entries copied from that input, and zero blocks elsewhere).
Since the Jacobian is well defined you can, of course, apply back-propagation.
In reality you generally wouldn't define these operations with such notation, and a general definition of the cat operation used by pytorch in such a way would be cumbersome.
That said, internally autograd uses backward algorithms that take into account the gradients of such "index style" operations just like any other function.
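A quick sketch confirming that autograd handles cat like any other function:
import torch
a = torch.ones(2, requires_grad=True)
b = torch.ones(3, requires_grad=True)
torch.cat([a, b]).sum().backward()
print(a.grad, b.grad)  # tensor([1., 1.]) tensor([1., 1., 1.])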
| https://stackoverflow.com/questions/64211037/ |
Kernel keeps dying when plotting a graph after importing the torch library | I'm trying to run the following code:
import matplotlib.pyplot as plt
%matplotlib inline
import torch
x = y = torch.tensor([1,2,3]).numpy()
plt.plot(x,y);
I keep getting the message: The kernel appears to have died. It will restart automatically. and a restart and a red "Dead kernel" tag on the toolbar.
But the weird thing is, if I import matplotlib.pyplot and plot some random graph first, the above code plots just fine. In other words, the following code works fine.
import matplotlib.pyplot as plt
%matplotlib inline
plt.subplots(figsize=(0.01,0.01))
plt.gca().set_visible(False);
import torch
x = torch.tensor([1,2,3]).numpy()
plt.plot(x,x);
What is going on here? I'm using numpy 1.18.5, pytorch 1.6.0, matplotlib 3.2.2 on Python 3.7.7, if it matters. Thank you.
| import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" # allow duplicate OpenMP runtimes to coexist, a common cause of this crash
Run this first; it'll resolve your problem. Although I guess it is a temporary solution; you can refer to this link: https://www.programmersought.com/article/53286415201/
| https://stackoverflow.com/questions/64216189/ |
How to monitor GPU memory usage when training a DNN? | I give a result example (a graph of GPU memory usage over training time). I want to ask how to get data like this graph.
| You can use pytorch commands such as torch.cuda.memory_stats to get information about current GPU memory usage and then create a temporal graph based on these reports.
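A minimal sketch of that idea (num_steps and train_step are placeholders for your own loop):
import torch
import matplotlib.pyplot as plt
samples = []
for step in range(num_steps):  # your training loop
    train_step()               # placeholder for one optimization step
    samples.append(torch.cuda.memory_allocated() / 2**20)  # currently allocated memory, in MiB
plt.plot(samples)
plt.xlabel('step')
plt.ylabel('GPU memory (MiB)')
plt.show()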
| https://stackoverflow.com/questions/64221308/ |
How is cross entropy loss work in pytorch? | I am experimenting with some of the pytorch codes. With cross entropy loss I found some interesting results and I have used both binary cross entropy loss and cross entropy loss of pytorch.
import torch
import torch.nn as nn
X = torch.tensor([[1,0],[1,0],[0,1],[0,1]],dtype=torch.float)
softmax = nn.Softmax(dim=1)
bce_loss = nn.BCELoss()
ce_loss= nn.CrossEntropyLoss()
pred = softmax(X)
bce_loss(X,X) # tensor(0.)
bce_loss(pred,X) # tensor(0.3133)
bce_loss(pred,pred) # tensor(0.5822)
ce_loss(X,torch.argmax(X,dim=1)) # tensor(0.3133)
I expected the cross entropy loss for the same input and output to be zero. Here X, pred and torch.argmax(X,dim=1) are the same/similar up to some transformations. This reasoning only worked for bce_loss(X,X) # tensor(0.), whereas all the others resulted in a loss greater than zero. I expected the outputs of bce_loss(pred,X), bce_loss(pred,pred) and ce_loss(X,torch.argmax(X,dim=1)) to be zero as well.
What is the mistake here?
| The reason that you are seeing this is that nn.CrossEntropyLoss accepts logits and targets; i.e., X should contain logits, but yours is already between 0 and 1. X should be much bigger, because after softmax it will be squashed between 0 and 1.
ce_loss(X * 1000, torch.argmax(X,dim=1)) # tensor(0.)
nn.CrossEntropyLoss works with logits, to make use of the log sum trick.
The way you are currently doing it, after X gets activated your predictions become about [0.73, 0.27].
The binary cross entropy example works since it accepts already-activated values (probabilities). By the way, you probably want to use nn.Sigmoid for activating binary cross entropy logits. For the 2-class example, softmax is also ok.
| https://stackoverflow.com/questions/64221896/ |
What is a subspace of a dimension in pytorch? | The documentation of torch.Tensor.view says:
each new view dimension must either be a subspace of an original dimension, or only span across original dimensions ...
https://pytorch.org/docs/stable/tensors.html?highlight=view#torch.Tensor.view
What is a subspace of a dimension?
| The 'subspace of an original dimension' dilemma
In order to use tensor.view() the tensor must satisfy two conditions -
each new view dimension must either be a subspace of an original dimension,
or only span across original dimensions ...
Lets discuss this one by one,
First, regarding the subspace of an original dimension, you need to understand the concept behind a subspace. Not going into mathematical detail, but in short: a subspace is a subset of the (infinite) set of n-dimensional vectors (R^n), and the vectors inside the subspace must follow 2 rules:
i) The subspace will contain the zero vector (0_n)
ii) It must satisfy closure under scalar multiplication and addition
To visualise this in mind you can consider a 2D plane containing infinite lines. So the subspace of that 2D vector space will be set of lines which will pass through origin. These lines satisfies above two conditions.
Now there is a concept called projection of subspaces. Without digging into too much mathematical detail you can consider it as a regular line projection but for subspaces.
Now back to the point: if you have a tensor of size (4,5), you can actually consider it as a 20-dimensional vector. In that 20D space the subspaces pass through the origin, and if you make a projection of a line l1 from a subspace with respect to any 2 axes, tensor.view() can output the projection of that line as (2,10).
For the second part, you need to understand the concept of contiguous and non-contiguous memory allocation in PyTorch. As it is out of the scope of the question, I will explain it only briefly. If you view an n-dimensional vector as (n/k, k) and run tensor.stride() on the new tensor, it will show you the memory-allocation stride in the x and y directions. Now if you run view() again with different dimensions, then (as the documentation states) the contiguity condition stride[i] = stride[i+1] * size[i+1] must hold across the spanned dimensions for the conversion to succeed without copying, because of non-contiguous memory allocation.
I tried my best to explain this briefly; let me know if you have more questions.
| https://stackoverflow.com/questions/64225965/ |
pyTorch gradient becomes none when dividing by scalar | Consider the following code block:
import torch as torch
n=10
x = torch.ones(n, requires_grad=True)/n
y = torch.rand(n)
z = torch.sum(x*y)
z.backward()
print(x.grad) # results in None
print(y)
As written, x.grad is None. However, if I change the definition of x by removing the scalar multiplication (x = torch.ones(n, requires_grad=True)) then indeed I got a non-None gradient that is equivalent to y.
I've googled a bunch looking for this issue, and I think it reflects something fundamental in what I don't understand about how the computational graph in torch. I'd love some clarification. Thanks!
| When you set x to a tensor divided by some scalar, x is no longer what is called a "leaf" Tensor in PyTorch. A leaf Tensor is a tensor at the beginning of the computation graph (which is a DAG with nodes representing objects such as tensors, and edges representing mathematical operations). More specifically, it is a tensor which was not created by a computational operation tracked by the autograd engine.
In your example - torch.ones(n, requires_grad=True) is a leaf tensor, but you can't access it directly in your code.
The reasoning behind not keeping the grad for non-leaf tensors is that typically, when you train a network, the weights and biases are leaf tensors and they are what we need the gradient for.
If you want to access the gradients of a non-leaf tensor, you should call the retain_grad function, which means in your code you should add:
x.retain_grad()
after the assignment to x.
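A minimal sketch of your example with this fix applied:
import torch

n = 10
x = torch.ones(n, requires_grad=True) / n
x.retain_grad()  # keep the gradient on this non-leaf tensor
y = torch.rand(n)
z = torch.sum(x * y)
z.backward()
print(x.grad)  # now equals y, since dz/dx = y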
| https://stackoverflow.com/questions/64233099/ |
I want to use Conv1D and MaxPool1D in pytorch for a 3-d tensor to its third dimension | For example, there is a 3-d tensor, I want to run the conv1d calculation on its third dimension,
import torch
import torch.nn as nn
x = torch.rand(4,5,6)
conv1d =nn.Conv1d(in_channels=1,out_channels=2,kernel_size=5,stride=3,padding=0)
y = conv1d(x)
I hope the shape of y is (4,5,2,-1), but I get an error
Given groups=1, weight of size [2, 1, 5], expected input[4, 5, 6] to have 1 channels, but got 5 channels instead
Then I modified the code,
import torch
import torch.nn as nn
x = torch.rand(4,5,6)
conv1d =nn.Conv1d(in_channels=1,out_channels=2,kernel_size=5,stride=3,padding=0)
x = x.unsqueeze(2)
y = conv1d(x)
There is another error:
Expected 3-dimensional input for 3-dimensional weight [2, 1, 5], but got 4-dimensional input of size [4, 5, 1, 6] instead
And if I want to run the maxpoo1d calulation in a tensor whose shape is (4,5,2,-1) ,in its last two dimension, what should I do?
I have been searching the net for a long time, but to no avail. Please help or give some ideas on how to achieve this. Thank you all for your help.
I made an attempt, but I feel it doesn't meet the actual needs. I want to know whether it is good practice to do this, and what the best way to do it would be:
import torch
import torch.nn as nn
x = torch.rand(4,5,6)
conv1d =nn.Conv1d(in_channels=1,out_channels=2,kernel_size=2,stride=3,padding=0)
x = x.unsqueeze(2)
for i in range(4):
y = conv1d(x[i,:,:,:])
y = y.unsqueeze(0)
if i==0:
z = y
else:
z = torch.cat((z,y),0)
print(y)
print(z.size())
| To use Conv1d you need your input to have 3 dimensions:
[batch_size, in_channels, data_dimension]
So, this would work:
x = torch.rand(4, 1, 50) # [batch_size=4, in_channels=1, data_dimension=50]
conv1d = nn.Conv1d(in_channels=1,out_channels=2,kernel_size=2,stride=3,padding=0)
x = conv1d(x)
print(x.shape) # Will output [4, 2, 16] 4=batch_size, 2=channels, 16=data_dimension
You can use MaxPool1d in the same way:
maxpool1d = nn.MaxPool1d(5)
x = maxpool1d(x)
print(x.shape) # Will output [4, 2, 3] 4=batch_size, 2=channels, 3=data_dimension
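For your original (4, 5, 6) tensor, a loop-free sketch of your attempt is to fold the first two dimensions into the batch, convolve, and then reshape back:
import torch
import torch.nn as nn

x = torch.rand(4, 5, 6)
conv1d = nn.Conv1d(in_channels=1, out_channels=2, kernel_size=2, stride=3, padding=0)
y = conv1d(x.reshape(4 * 5, 1, 6))  # [20, 2, 2]
y = y.reshape(4, 5, 2, -1)          # [4, 5, 2, 2]
print(y.shape)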
| https://stackoverflow.com/questions/64240012/ |
Why does the evaluation loss increase when training a huggingface transformers NER model? | When training a huggingface transformers NER model according to the documentation, the evaluation loss increases after a few epochs, but the other scores (accuracy, precision, recall, f1) keep getting better. This behaviour seems unexpected; is there a simple explanation for this effect? Can it depend on the given data?
model = TokenClassificationModel.from_pretrained('roberta-base', num_labels=len(tag_values))
model.train()
model.zero_grad()
for epoch in range(epochs):
for batch in range(batches):
-- train --
...
train_loss = model.evaluate(train_data)
validation_loss = model.evaluate(validation_data)
| Accuracy and loss are not necessarily exactly (inversely) correlated.
The loss function is often an approximation of the accuracy function - unlike accuracy, the loss function must be differentiable.
A good explanation can be found here.
| https://stackoverflow.com/questions/64313576/ |
element-wise operation in pytorch | I have two Tensors A and B, A.shape is (b,c,100,100), B.shape is (b,c,80,80),
how can I get tensor C with shape (b,c,21,21) subject to
C[:, :, i, j] = torch.mean(A[:, :, i:i+80, j:j+80] - B)?
I wonder whether there's an efficient way to solve this?
Thanks very much.
| You should use an average pool to compute the sliding window mean operation.
It is easy to see that:
mean(A[..., i:i+80, j:j+80] - B) = mean(A[..., i:i+80, j:j+80]) - mean(B)
Using avg_pool2d:
import torch.nn.functional as nnf
C = nnf.avg_pool2d(A, kernel_size=80, stride=1, padding=0) - torch.mean(B, dim=(2,3), keepdim=True)
If you are looking for a more general way of performing sliding window operations in PyTorch, you should look at fold and unfold.
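For completeness, a sketch of the same computation with unfold; note that it materializes all sliding windows, so it needs much more memory than the pooling approach:
import torch
import torch.nn.functional as nnf

A = torch.rand(2, 3, 100, 100)
B = torch.rand(2, 3, 80, 80)
win = nnf.unfold(A, kernel_size=80)                # [b, c*80*80, 21*21]
win = win.view(A.size(0), A.size(1), 80 * 80, -1)  # [b, c, 80*80, 441]
C = win.mean(dim=2).view(A.size(0), A.size(1), 21, 21) - B.mean(dim=(2, 3), keepdim=True)
print(C.shape)  # torch.Size([2, 3, 21, 21])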
| https://stackoverflow.com/questions/64313895/ |
The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1 | I am trying to do text classification using a pretrained BERT model. I trained the model on my dataset, and in the testing phase I know that BERT can only take up to 512 tokens, so I wrote an if condition to check the length of the test sentence in my dataframe. If it is longer than 512, I split the sentence into sequences, each of which has 512 tokens. And then do the tokenizer encode. The length of each sequence is 512; however, after doing the tokenizer encode, the length becomes 707 and I get this error.
The size of tensor a (707) must match the size of tensor b (512) at non-singleton dimension 1
Here is the code I used to do the preivous steps:
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
import math
pred=[]
if (len(test_sentence_in_df.split())>512):
n=math.ceil(len(test_sentence_in_df.split())/512)
for i in range(n):
if (i==(n-1)):
print(i)
test_sentence=' '.join(test_sentence_in_df.split()[i*512::])
else:
print("i in else",str(i))
test_sentence=' '.join(test_sentence_in_df.split()[i*512:(i+1)*512])
#print(len(test_sentence.split())) ##here's the length is 512
tokenized_sentence = tokenizer.encode(test_sentence)
input_ids = torch.tensor([tokenized_sentence]).cuda()
print(len(tokenized_sentence)) #### here's the length is 707
with torch.no_grad():
output = model(input_ids)
label_indices = np.argmax(output[0].to('cpu').numpy(), axis=2)
pred.append(label_indices)
print(pred)
| This is because BERT uses WordPiece tokenization. When a word is not in the vocabulary, it is split into word pieces. For example: if the word playing is not in the vocabulary, it can be split into play, ##ing. This increases the number of tokens in a given sentence after tokenization.
You can specify certain parameters to get fixed length tokenization:
tokenized_sentence = tokenizer.encode(test_sentence, padding=True, truncation=True, max_length=50, add_special_tokens=True)
| https://stackoverflow.com/questions/64320883/ |
Correct way to feed data to RNN in PyTorch | I have a data sequence a which is of shape [seq_len, 2], seq_len is the length of the sequence. There is time correlation among elements of a[:, 0] and a[:, 1], but a[:, 0] and a[:, 1] are independent of each other. For training I prepare data of shape [batch_size, seq_len, 2]. The initialization of BRNN that I use is
birnn_layer = nn.RNN(input_size=2, hidden_size=100, batch_first=True, bidirectional=True)
From the docs,
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
What does "number of expected features" mean? Since there is correlation along the seq_len axis should my input_size be set as seq_len and the input be permuted? Thanks.
| The question is how, if at all, your data contributes to the overall optimization problem. You said that elements of a[:, 0] are time-correlated and elements of a[:, 1] are time-correlated. Are a[i, 0] and a[i, 1] correlated with each other? Does it make sense to feed both sequences together?
If, for example, you are trying to predict whether a certain electrical machine is going to malfunction based on sequences of voltage applied to the machine a[:, 0] and humidity in the room a[:, 1] over time, and these signals were collected at the same time, that is fine. But if they were collected at different times, does it still make sense? Or if you had measured something other than humidity, would it help you predict the malfunction?
number of expected features means the number of features in a single timestep, so to speak. Going along with my previous analogy: how many signals (voltage, humidity, ...) I measure simultaneously.
Of course this is only an example; you do not have to have a classification-over-time problem, it can be anything else. The bottom line is how your RNN and your data work together.
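A minimal shape sketch of how input_size maps onto the data layout (the values here are just illustrative):
import torch
import torch.nn as nn

batch_size, seq_len = 4, 50
rnn = nn.RNN(input_size=2, hidden_size=100, batch_first=True, bidirectional=True)
x = torch.randn(batch_size, seq_len, 2)  # 2 features per timestep (e.g. voltage, humidity)
out, h = rnn(x)
print(out.shape)  # torch.Size([4, 50, 200]); 2 * hidden_size because it is bidirectional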
| https://stackoverflow.com/questions/64329449/ |
How to stop importing line arguments with imported file | I have two programs train.py and predict.py and I am importing a trained model from train to predict.
Both programs accept line arguments and train runs fine, but when I run predict with its line arguments, an error occurs that I haven't typed in the arguments required by train.py.
How I can solve this?
| Your question could use some more context, but here is what I suspect might be happening:
context
When you import a file (module), its content is executed. If your file only contains declarations (such as variable, class, and function definitions), all is good, and you can use them from the place you wrote your import statement.
Now if the module you import contains actual code, like function calls, it is going to run!
It's likely that your train.py file contains such top-level code that fails (apparently because it expects command-line arguments that are missing).
The usual solution to avoid this is to wrap all calls in a if __name__ == "__main__": clause. That way, it will only be executed if that file is called directly (as opposed to imported).
tl;dr:
Look into your train.py file for function calls
def a_function():
pass
class SomeClass:
pass
a_function() # <--- this is a call
my_var = SomeClass() # <--- this too !
and put them in that clause
def a_function():
pass
class SomeClass:
pass
if __name__ == "__main__":
#only executed if you call this file (python train.py)
a_function()
my_var = SomeClass()
| https://stackoverflow.com/questions/64355327/ |
Python3 Pytorch RuntimeError on GCP - no msg | My System
I am running neural network training using Python 3.6.9 with PyTorch 1.6.0.
I am using a google cloud platform N1 Server with a Tesla T4, 2 cores CPU, 12GB RAM.
This is on an Ubuntu 18.04 image.
Problem
When my code reaches the training line I get the following RuntimeError with no real explanation that I can see:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/or/.local/share/virtualenvs/or-M3_AaJfY/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/or/my_model/train.py", line 88, in train_and_eval
train(rank, epoch, hps, generator, optimizer_g, train_loader, logger, writer)
File "/home/or/my_model/train.py", line 117, in train
scaled_loss.backward()
File "/home/or/.local/share/virtualenvs/or-M3_AaJfY/lib/python3.6/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/or/.local/share/virtualenvs/or-M3_AaJfY/lib/python3.6/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError
This happens while the 2 CPU cores are being used at 100% for a long while.
The RAM and GPU usage, though going up (as expected while training), do not come close to their limit.
I checked journalctl to see if this was an operating system issue but there is nothing there. I also did not find anything relevant in the /var/log/ directory or using dmesg.
I would be happy to provide more log data, but I am not aware (after searching) of any Python logs I can look at, or any other system logs.
Please let me know of ways I can get more information if you have any ideas.
The exact same code works 100% fine on other physical machines I have tested, and a GPU only version of it runs fine on another cloud computing provider
What I am looking for
Ways to get more information about this problem and figure out why it is happening.
Ways to fix this problem
Thanks in advance for your time and any help you may be able to provide.
| Anthony Leo thank you so much for your detailed answer!
Unfortunately this ended up being a problem with one of the modules I installed while setting up my server.
This did not end up being a problem with the server itself or with my code; I had just installed a module incorrectly while setting up.
I am sorry for all the time other people spent on this issue.
| https://stackoverflow.com/questions/64356499/ |
Command Errored Exit Status 1: - Pytorch Object Detection | I tried to follow this tutorial to learn how to run my own object detection, but I am running into an error that I can't seem to fix. I found a solution on some git hubs issues pages. They suggested running: !pip install git+https://github.com/philferriere/cocoapi.git, but I still get the same error. I am using Google Colab. Any suggestions?
input:
!pip install git+https://github.com/cocodataset/cocoapi.git
output:
Collecting git+https://github.com/cocodataset/cocoapi.git
Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-u21a5wjs
Running command git clone -q https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-u21a5wjs
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
| This is what is written in https://github.com/cocodataset/cocoapi
To install:
For Python, run "make" under coco/PythonAPI
So you cannot do pip install even in local.
Follow these steps in your google colaboratory to install,
!git clone https://github.com/cocodataset/cocoapi.git
# by default you are in /content in google colab use !ls to check
!ls
cd cocoapi
!ls
# you will see these, common license.txt LuaAPI MatlabAPI PythonAPI README.txt results
cd PythonAPI
!make
!make install
Now your cocoapi github repository is installed and is ready to use.
As a sample code to test you can try these lines, it should show no errors.
from pycocotools.coco import COCO
| https://stackoverflow.com/questions/64356845/ |
Batch-wise beam search in pytorch | I'm trying to implement a beam search decoding strategy in a text generation model. This is the function that I am using to decode the output probabilities.
def beam_search_decoder(data, k):
sequences = [[list(), 0.0]]
# walk over each step in sequence
for row in data:
all_candidates = list()
for i in range(len(sequences)):
seq, score = sequences[i]
for j in range(len(row)):
candidate = [seq + [j], score - torch.log(row[j])]
all_candidates.append(candidate)
# sort candidates by score
ordered = sorted(all_candidates, key=lambda tup:tup[1])
sequences = ordered[:k]
return sequences
Now, you can see this function is implemented with batch_size 1 in mind. Adding another loop for the batch size would make the algorithm O(n^4). It is slow as it is now. Is there any way to improve the speed of this function? My model output is usually of size (32, 150, 9907), which follows the format (batch_size, max_len, vocab_size).
| Below is my implementation, which may be a little bit faster than the for loop implementation.
import torch
def beam_search_decoder(post, k):
"""Beam Search Decoder
Parameters:
post(Tensor) – the posterior of network.
k(int) – beam size of decoder.
Outputs:
indices(Tensor) – a beam of index sequence.
log_prob(Tensor) – a beam of log likelihood of sequence.
Shape:
post: (batch_size, seq_length, vocab_size).
indices: (batch_size, beam_size, seq_length).
log_prob: (batch_size, beam_size).
Examples:
>>> post = torch.softmax(torch.randn([32, 20, 1000]), -1)
>>> indices, log_prob = beam_search_decoder(post, 3)
"""
batch_size, seq_length, _ = post.shape
log_post = post.log()
log_prob, indices = log_post[:, 0, :].topk(k, sorted=True)
indices = indices.unsqueeze(-1)
for i in range(1, seq_length):
log_prob = log_prob.unsqueeze(-1) + log_post[:, i, :].unsqueeze(1).repeat(1, k, 1)
log_prob, index = log_prob.view(batch_size, -1).topk(k, sorted=True)
indices = torch.cat([indices, index.unsqueeze(-1)], dim=-1)
return indices, log_prob
| https://stackoverflow.com/questions/64356953/ |
RuntimeError: Can only calculate the mean of floating types. Got Byte instead. for mean += images_data.mean(2).sum(0) | I have the following pieces of code:
# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
seed = 42
np.random.seed(seed)
torch.manual_seed(seed)
# split the dataset into validation and test sets
len_valid_set = int(0.1*len(dataset))
len_train_set = len(dataset) - len_valid_set
print("The length of Train set is {}".format(len_train_set))
print("The length of Test set is {}".format(len_valid_set))
train_dataset , valid_dataset, = torch.utils.data.random_split(dataset , [len_train_set, len_valid_set])
# shuffle and batch the datasets
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=8, shuffle=True, num_workers=4)
print("LOADERS",
len(dataloader),
len(train_loader),
len(test_loader))
The length of Train set is 720
The length of Test set is 80
LOADERS 267 90 10
mean = 0.0
std = 0.0
nb_samples = 0.0
for data in train_loader:
images, landmarks = data["image"], data["landmarks"]
batch_samples = images.size(0)
images_data = images.view(batch_samples, images.size(1), -1)
mean += images_data.mean(2).sum(0)
std += images_data.std(2).sum(0)
nb_samples += batch_samples
mean /= nb_samples
std /= nb_samples
And I get this error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-9e47ddfeff5e> in <module>
7
8 images_data = images.view(batch_samples, images.size(1), -1)
----> 9 mean += images_data.mean(2).sum(0)
10 std += images_data.std(2).sum(0)
11 nb_samples += batch_samples
RuntimeError: Can only calculate the mean of floating types. Got Byte instead.
The fixed code is taken from https://stackoverflow.com/a/64349380/2414957; it worked for dataloader but not for train_loader.
Also, these are the results of
print(type(images_data))
print(images_data)
We have:
<class 'torch.Tensor'>
tensor([[[74, 74, 74, ..., 63, 63, 63],
[73, 73, 73, ..., 61, 61, 61],
[75, 75, 75, ..., 61, 61, 61],
...,
[74, 74, 74, ..., 38, 38, 38],
[75, 75, 75, ..., 39, 39, 39],
[72, 72, 72, ..., 38, 38, 38]],
[[75, 75, 75, ..., 65, 65, 65],
[75, 75, 75, ..., 62, 62, 62],
[75, 75, 75, ..., 63, 63, 63],
...,
[71, 71, 71, ..., 39, 39, 39],
[74, 74, 74, ..., 38, 38, 38],
[73, 73, 73, ..., 37, 37, 37]],
[[72, 72, 72, ..., 62, 62, 62],
[74, 74, 74, ..., 63, 63, 63],
[75, 75, 75, ..., 61, 61, 61],
...,
[74, 74, 74, ..., 38, 38, 38],
[74, 74, 74, ..., 39, 39, 39],
[73, 73, 73, ..., 37, 37, 37]],
...,
[[75, 75, 75, ..., 63, 63, 63],
[73, 73, 73, ..., 63, 63, 63],
[74, 74, 74, ..., 62, 62, 62],
...,
[74, 74, 74, ..., 38, 38, 38],
[73, 73, 73, ..., 39, 39, 39],
[73, 73, 73, ..., 37, 37, 37]],
[[73, 73, 73, ..., 62, 62, 62],
[75, 75, 75, ..., 62, 62, 62],
[74, 74, 74, ..., 63, 63, 63],
...,
[73, 73, 73, ..., 39, 39, 39],
[74, 74, 74, ..., 38, 38, 38],
[74, 74, 74, ..., 38, 38, 38]],
[[74, 74, 74, ..., 62, 62, 62],
[74, 74, 74, ..., 63, 63, 63],
[74, 74, 74, ..., 62, 62, 62],
...,
[74, 74, 74, ..., 38, 38, 38],
[73, 73, 73, ..., 38, 38, 38],
[72, 72, 72, ..., 36, 36, 36]]], dtype=torch.uint8)
When I tried
images_data = images_data.float()
mean += images_data.mean(2).sum(0)
I didn't get a tensor of 3 values for the mean and 3 values for the std like I expected, but got a very large tensor instead (each of torch.Size([600])).
| As the error says, your images_data is a ByteTensor, i.e. has dtype uint8. Torch refuses to compute the mean of integers. You can convert the data to float with:
(images_data * 1.0).mean(2)
Or
torch.Tensor.float(images_data).mean(2)
| https://stackoverflow.com/questions/64358283/ |
Run pytorch in pyodide? | Is there any way I can run the python library pytorch in pyodide? I tried installing pytorch with micropip but it gives this error message:
Couldn't find a pure Python 3 wheel for 'pytorch'
| In Pyodide, micropip only allows installing pure Python wheels (i.e. ones that don't have compiled extensions). The filename for those wheels ends with none-any.whl (see PEP 427).
If you look at Pytorch wheels currently available on PyPi, their filenames ends with e.g. x86_64.whl so it means that they would only work on the x86_64 architecture and not in the WebAssembly VM.
The general solution to this is to add a package to the Pyodide build system. However, in the case of pytorch there is a blocker: cffi is currently not supported in Pyodide (GH-pyodide#761), while it is required at runtime by pytorch (see an example of a build setup from conda-forge). So it is unlikely that pytorch will be available in Pyodide in the near future.
| https://stackoverflow.com/questions/64358372/ |
I get a tensor of 600 values instead of 3 values for mean and std of train_loader in PyTorch | I am trying to normalize my image data, and for that I need to find the mean and std for train_loader.
mean = 0.0
std = 0.0
nb_samples = 0.0
for data in train_loader:
images, landmarks = data["image"], data["landmarks"]
batch_samples = images.size(0)
images_data = images.view(batch_samples, images.size(1), -1)
mean += torch.Tensor.float(images_data).mean(2).sum(0)
std += torch.Tensor.float(images_data).std(2).sum(0)
###mean += images_data.mean(2).sum(0)
###std += images_data.std(2).sum(0)
nb_samples += batch_samples
mean /= nb_samples
std /= nb_samples
the mean and std here are each a torch.Size([600])
When I tried (almost) same code on dataloader, it worked as expected:
# code from https://discuss.pytorch.org/t/about-normalization-using-pre-trained-vgg16-networks/23560/6?u=mona_jalal
mean = 0.0
std = 0.0
nb_samples = 0.0
for data in dataloader:
images, landmarks = data["image"], data["landmarks"]
batch_samples = images.size(0)
images_data = images.view(batch_samples, images.size(1), -1)
mean += images_data.mean(2).sum(0)
std += images_data.std(2).sum(0)
nb_samples += batch_samples
mean /= nb_samples
std /= nb_samples
and I got:
mean is: tensor([0.4192, 0.4195, 0.4195], dtype=torch.float64), std is: tensor([0.1182, 0.1184, 0.1186], dtype=torch.float64)
So my dataloader is:
class MothLandmarksDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.landmarks_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.landmarks_frame.iloc[idx, 1:]
landmarks = np.array([landmarks])
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
transformed_dataset = MothLandmarksDataset(csv_file='moth_gt.csv',
root_dir='.',
transform=transforms.Compose(
[
Rescale(256),
RandomCrop(224),
ToTensor()
]
)
)
dataloader = DataLoader(transformed_dataset, batch_size=3,
shuffle=True, num_workers=4)
and train_loader is:
# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
seed = 42
np.random.seed(seed)
torch.manual_seed(seed)
# split the dataset into validation and test sets
len_valid_set = int(0.1*len(dataset))
len_train_set = len(dataset) - len_valid_set
print("The length of Train set is {}".format(len_train_set))
print("The length of Test set is {}".format(len_valid_set))
train_dataset , valid_dataset, = torch.utils.data.random_split(dataset , [len_train_set, len_valid_set])
# shuffle and batch the datasets
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=8, shuffle=True, num_workers=4)
Please let me know if more information is needed.
I basically need to get 3 values for mean of train_loader and 3 values for std of train_loader to use as args for Normalize.
images_data in dataloader is torch.Size([3, 3, 50176]) inside the loop and images_data in train_loader is torch.Size([8, 600, 2400])
| First, the weird shape you get for your mean and std ([600]) is unsurprising; it is due to your data having the shape [8, 600, 800, 3]. Basically, the channel dimension is the last one here, so when you try to flatten your images with
# (N, 600, 800, 3) -> [view] -> (N, 600, 2400 = 800*3)
images_data = images.view(batch_samples, images.size(1), -1)
You actually perform a weird operation that fuses together the width and channel dimensions of your image which is now [8, 600, 2400]. Thus, applying
# (8, 600, 2400) -> [mean(2)] -> (8, 600) -> [sum(0)] -> (600)
data.mean(2).sum(0)
Creates a tensor of size [600] which is what you indeed get.
There are two quite simple solutions:
Either you start by permuting the dimensions to make the 2nd dimension the channel one:
batch_samples = images.size(0)
# (N, H, W, C) -> (N, C, H, W)
reordered = images.permute(0, 3, 1, 2)
# flatten image into (N, C, H*W)
images_data = reordered.view(batch_samples, reordered.size(1), -1)
# mean is now (C) = (3)
mean += images_data.mean(2).sum(0)
Or you change the axes along which to apply mean and sum
batch_samples = images.size(0)
# flatten image into (N, H*W, C), careful this is not what you did
images_data = images.view(batch_samples, -1, images.size(1))
# mean is now (C) = (3)
mean += images_data.mean(1).sum(0)
Finally, why did dataloader and trainloader behave differently? Well, I think it's because one is using dataset while the other is using transformedDataset. In TransformedDataset, you apply the ToTensor transform, which casts a PIL image into a torch tensor, and I think that pytorch is smart enough to permute your dimensions during this operation (and put the channels in the second dimension). In other words, your two datasets just do not yield images with an identical format; they differ by a permutation of the axes.
| https://stackoverflow.com/questions/64362402/ |
how to fix the torch.cuda.is_available() False problem without restarting the machine? | I have:
$ python
Python 3.7.6 (default, Jan 8 2020, 19:59:22)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>> quit()
$ nvidia-smi
Wed Oct 14 21:28:50 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N/A |
| N/A 47C P8 9W / N/A | 1257MiB / 7982MiB | 11% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1424 G /usr/lib/xorg/Xorg 823MiB |
| 0 N/A N/A 1767 G /usr/bin/gnome-shell 407MiB |
| 0 N/A N/A 6420 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 6949 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 7447 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 8888 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 9218 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 9282 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 65854 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 70801 G /usr/lib/firefox/firefox 2MiB |
+-----------------------------------------------------------------------------+
Is there a way I could fix this problem without having to restart my machine?
I have Ubuntu 20.04 and PyTorch 1.6.0
After I restarted the machine, here's what I get:
| This happens quite often to Ubuntu users (I am not so sure about other distros). I have noticed this behavior especially when I leave my machine in sleep mode. Without restarting, you could run the following commands, as mentioned in this thread:
sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm
| https://stackoverflow.com/questions/64363633/ |
Training Sparse Autoencoders | My dataset consists of vectors that are massive. The data points are all mostly zeros, with ~3% of the features being 1. Essentially my data is super sparse, and I am attempting to train an autoencoder; however, my model is learning just to recreate vectors of all zeros.
Are there any techniques to prevent this? I have tried replacing mean squared error with dice loss but it completely stopped learning. My other thoughts would be to use a loss function that favors guessing 1s correctly rather than zeros. I have also tried using a sigmoid and linear last activation with no clear winner. Any ideas would be awesome.
| It seems like you are facing a severe "class imbalance" problem.
Have a look at focal loss. This loss is designed for binary classification with severe class imbalance; a minimal sketch follows below.
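A minimal binary focal-loss sketch (the alpha value is just an illustrative assumption matching your ~3% positives):
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.97):
    # targets in {0, 1}; alpha up-weights the rare positive class
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # the model's probability of the true class
    weight = alpha * targets + (1 - alpha) * (1 - targets)
    return (weight * (1 - p_t) ** gamma * bce).mean()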
Consider "hard negative mining": that is, propagate gradients only for part of the training examples - the "hard" ones.
see, e.g.:
Abhinav Shrivastava, Abhinav Gupta and Ross Girshick Training Region-based Object Detectors with Online Hard Example Mining (CVPR 2016).
| https://stackoverflow.com/questions/64364684/ |
PyTorch Circular Padding in one Dimension | For a convolution I want to apply circular padding in one dimension and zero padding in all other dimensions. How can I do this?
For the convolution there are 28 channels, and the data is described in spherical bins: 20 bins for radius times 20 bins for polar angle times 20 bins for inclination.
The circular padding should only be applied for the inclination.
Small Example
# Example:
x = torch.tensor([[1,2,3],
[4,5,6],
[7,8,9]])
y = sphere_pad(x, pad=(0, 1))
# y is now tensor([[3, 1, 2, 3, 1],
# [6, 4, 5, 6, 4],
# [9, 7, 8, 9, 7]])
I have tried to apply
def sphere_pad(x, pad=(1,1)):
return x.repeat(*x.shape)[
(x.shape[0]-pad[0]):(2*x.shape[0]+pad[0]),
(x.shape[1]-pad[1]):(2*x.shape[1]+pad[1])]
and then apply a convolution with normal zero padding (and no padding in the last dimension).
This works for a small example, but this method exceeds the GPU memory for the actual problem size.
Are there other ways?
| Using numpy, you could do a wrap padding so the array gets wrapped along the second axis:
np.pad(x, ((0,0),(1,1)), mode='wrap')
array([[3, 1, 2, 3, 1],
[6, 4, 5, 6, 4],
[9, 7, 8, 9, 7]])
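If you want to stay in PyTorch, a sketch of the same wrap padding uses F.pad with mode='circular', which expects batch and channel dimensions:
import torch
import torch.nn.functional as F

x = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
y = F.pad(x[None, None], (1, 1, 0, 0), mode='circular')[0, 0]
print(y)
# tensor([[3., 1., 2., 3., 1.],
#         [6., 4., 5., 6., 4.],
#         [9., 7., 8., 9., 7.]])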
| https://stackoverflow.com/questions/64368682/ |
Trying to use Tensorboard on Google Colab | The page below gives informations about Tensorboard:
https://pytorch.org/docs/stable/tensorboard.html
I am using Google Colab and when i write the following
instructions(which are in the link above):
!pip install tensorboard
tensorboard --logdir=runs
it sends me the following error message:
File "<ipython-input-111-949c7e8e565e>", line 2
tensorboard --logdir=runs
^
SyntaxError: can't assign to operator
so when i copy paste their
own example:
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
writer.add_scalar('y=2x', i * 2, i)
writer.close()
it does not return the expected graph.
Could someone help me fix this problem?Thank you in advance!
| As explained in How to use Tensorboard with PyTorch in Google Colab.
In Google Colab you should start Tensorboard magic at the start of your code with:
%load_ext tensorboard
and after you define a summary file you need to instantiate Tensorboard with
%tensorboard --logdir $tensorboard_dir
where tensorboard_dir is a Python variable of type str indicating where the summary file is.
| https://stackoverflow.com/questions/64372169/ |
How can I repartition RDD by key and then pack it to shards? | I have many files containing millions of rows in format:
id, created_date, some_value_a, some_value_b, some_value_c
This way of repartitioning was super slow and created for me over million of small ~500b files:
rdd_df = rdd.toDF(["id", "created_time", "a", "b", "c"])
rdd_df.write.partitionBy("id").csv("output")
I would like to achieve output files, where each file contains like 10000 unique IDs and all their rows.
How could I achieve something like this?
| You can repartition by adding a Random Salt key.
val totRows = rdd_df.count
val maxRowsForAnId = rdd_df.groupBy("id").count().agg(max("count"))
val numParts1 = totRows/maxRowsForAnId
val totalUniqueIds = rdd_df.select("id").distinct.count
val numParts2 = totRows/(10000*totalUniqueIds)
val numPart = numParts1.min(numParts2)
rdd_df
.repartition(numPart,col("id"),rand)
.csv("output")
The main concept is that each partition will be written as one file, so you have to bring your required rows into one partition via repartition(numPart,col("id"),rand).
The first 4-5 operations just calculate how many partitions we need to achieve roughly 10000 ids per file:
Calculate assuming 10000 ids per partition
Corner case: a single id may have too many rows to fit in the partition size calculated above
Hence we also calculate the number of partitions according to the largest row count for a single ID
Take the min of the two partition counts
rand is necessary so that multiple IDs can be brought into a single partition
NOTE: although this will give you larger files, and each file will contain a set of unique ids for sure, it involves shuffling, due to which your operation might actually be slower than the code you mentioned in the question.
| https://stackoverflow.com/questions/64372813/ |
predict the position of an image in another image | If one image is a part of another image, then how can I compute its accurate location in a deep learning way?
Now I could compute this by extracting and matching key points using OpenCV, but I hope to solve it with neural networks.
Any ideas to design the networks and loss functions?
Thanks very much.
| This is a detection problem. The simplest approach is to create a network with two heads, one for classification and the other for the bounding box (regression).
You feed your network the image and its respective labels, sum the losses, and do a backward pass. Train for some epochs and you'll get yourself a detection model that you can use to detect what you need. But this is just a simple approach, and it can get much more complex (see the sketch below).
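A minimal sketch of such a two-head network (all sizes and loss choices here are illustrative assumptions):
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(16, num_classes)  # classification logits
        self.box_head = nn.Linear(16, 4)            # (x, y, w, h) regression

    def forward(self, x):
        feat = self.backbone(x)
        return self.cls_head(feat), self.box_head(feat)

net = TwoHeadNet()
imgs = torch.randn(8, 3, 64, 64)
labels, boxes = torch.randint(0, 2, (8,)), torch.rand(8, 4)
logits, pred_boxes = net(imgs)
loss = nn.CrossEntropyLoss()(logits, labels) + nn.SmoothL1Loss()(pred_boxes, boxes)
loss.backward()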
You may as well skip this and use an existing detection architecture or, better, a framework which simplifies your life.
For TensorFlow I believe you can use the Object Detection API, and for PyTorch you can use Detectron, Detectron2, or mmdetection, among others.
| https://stackoverflow.com/questions/64382601/ |
AttributeError: module 'torch' has no attribute 'hstack' | I am following this doc for hstack.
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
torch.hstack((a,b))
But I keep getting the error:
AttributeError: module 'torch' has no attribute 'hstack'
Here is the torch version that results in this error:
torch.__version__
'1.6.0+cpu'
What am I doing wrong?
| Apparently you are calling a function that does not exist yet in your PyTorch version; this is what the error message is about.
Your link points to the help page for the developer preview: note the 1.8.0a0+342069f version number in the top left corner. When clicking the 'Click here to view docs for latest stable release' link, an error message comes up.
This function becomes available in torch version 1.8.0. Until then, consider using torch.cat: for the 1-D tensors in your example, torch.cat([a, b], dim=0) gives the same result as hstack; for 2-D tensors you would concatenate along dim=1.
torch.cat([a,b], dim=1) # a, b - 2d torch.Tensors
| https://stackoverflow.com/questions/64405165/ |
Size mismatch in fully connected layers | I built the following simple model in PyTorch as a first run, and I am getting a size mismatch error that does not make sense, as out_features always equals in_features for the subsequent layer...
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
#first linear block
self.fc1=nn.Linear(32,1024)
self.b1=nn.BatchNorm1d(1024)
#Two Linear 1
self.fc2=nn.Linear(1024,1024)
self.b2=nn.BatchNorm1d(1024)
self.fc3=nn.Linear(1024,1024)
self.b3=nn.BatchNorm1d(1024)
#Two Linear 2
self.fc4=nn.Linear(1024,1024)
self.b4=nn.BatchNorm1d(1024)
self.fc5=nn.Linear(1024,1024)
self.b5=nn.BatchNorm1d(1024)
#Final Linear Layer
self.fc6=nn.Linear(1024,48)
def forward(self,x):
x1=self.fc1(x)
x1=self.b1(x1)
x1=nn.ReLU(x1)
x2=self.fc2(x1)
x2=self.b2(x2)
x2=nn.ReLU(x2)
x2=self.fc3(x2)
x2=self.b3(x2)
x2=nn.ReLU(x2)
x3=x1+x2
x4=self.fc4(x3)
x4=self.b4(x4)
x4=nn.ReLU(x4)
x4=self.fc5(x4)
x4=self.b5(x4)
x4=nn.ReLU(x4)
x5=x3+x4
x6=self.fc6(x5)
return x6
model=Network()
zeros=np.zeros((1,32))
outputs=model(torch.FloatTensor(zeros))
RuntimeError: size mismatch, m1: [1 x 32], m2: [1024 x 32] at ..\aten\src\TH/generic/THTensorMath.cpp:41
I do not understand how I am getting this error when all the dimensions match. Does anyone see the issue?
=================================================================
Layer (type:depth-idx) Param #
=================================================================
├─Linear: 1-1 33,792
├─BatchNorm1d: 1-2 4,096
├─Linear: 1-3 1,049,600
├─BatchNorm1d: 1-4 4,096
├─Linear: 1-5 1,049,600
├─BatchNorm1d: 1-6 4,096
├─Linear: 1-7 1,049,600
├─BatchNorm1d: 1-8 4,096
├─Linear: 1-9 1,049,600
├─BatchNorm1d: 1-10 4,096
├─Linear: 1-11 49,200
=================================================================
Total params: 4,301,872
Trainable params: 4,301,872
Non-trainable params: 0
Here is the model summary.
| Batch normalization works when batch size is greater than 1, so an input of shape (1, 32) won't work. Try a larger batch size, like 2.
Moreover, you're trying to use ReLU in the form x = nn.ReLU(x). This is wrong, as nn.ReLU is a layer. This line of code returns you the ReLU layer itself rather than a tensor. Either define nn.ReLU() layers in your init method, or use F.relu(x) or nn.ReLU()(x). Like so:
import torch
from torch import nn
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
#first linear block
self.fc1=nn.Linear(32,1024)
self.b1=nn.BatchNorm1d(1024)
#Two Linear 1
self.fc2=nn.Linear(1024,1024)
self.b2=nn.BatchNorm1d(1024)
self.fc3=nn.Linear(1024,1024)
self.b3=nn.BatchNorm1d(1024)
#Two Linear 2
self.fc4=nn.Linear(1024,1024)
self.b4=nn.BatchNorm1d(1024)
self.fc5=nn.Linear(1024,1024)
self.b5=nn.BatchNorm1d(1024)
#Final Linear Layer
self.fc6=nn.Linear(1024,48)
def forward(self,x):
x1=self.fc1(x)
x1=self.b1(x1)
x1=F.relu(x1)
x2=self.fc2(x1)
x2=self.b2(x2)
x2=F.relu(x2)
x2=self.fc3(x2)
x2=self.b3(x2)
x2=F.relu(x2)
x3=x1+x2
x4=self.fc4(x3)
x4=self.b4(x4)
x4=F.relu(x4)
x4=self.fc5(x4)
x4=self.b5(x4)
x4=F.relu(x4)
x5=x3+x4
x6=self.fc6(x5)
return x6
model=Network()
zeros=torch.zeros((10, 32))
outputs=model(zeros)
print(outputs.shape)
# torch.Size([10, 48])
| https://stackoverflow.com/questions/64417660/ |
Is it possible to add own function in transform.compose in pytorch | I am using a pre-trained AlexNet model. I am running this model on some random image dataset. I want to convert RGB images to YCbCr images before training.
I am wondering whether it is possible to add a function of my own to transforms.Compose. For example:
transform = transforms.Compose([
ycbcr(), #something like this
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
where,
def ycbcr(img):
img = cv2.imread(img)
img = cv2.cvtColor(img, cv2.COLOR_BGR2ycbcr)
t = torch.from_numpy(img)
return t
training_dataset = datasets.ImageFolder(link_train ,transform = transform_train)
training_loader = torch.utils.data.DataLoader(training_dataset, batch_size=96, shuffle=True)
Is this process correct? Please help me with how to proceed.
| You can pass a custom transformation to torchvision.transforms by defining a class.
To understand this better, I suggest that you read the documentation.
In your case it will be something like the following:
import cv2
import numpy as np
from PIL import Image

class ycbcr(object):
    def __call__(self, img):
        """
        :param img: (PIL) image
        :return: YCbCr color space image (PIL)
        """
        img = np.array(img)  # the transform receives a PIL image (RGB), not a file path
        # OpenCV exposes this conversion as YCrCb (note the channel order vs. YCbCr)
        img = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
        return Image.fromarray(img)
    def __repr__(self):
        return self.__class__.__name__ + '()'
Notice that it gets a PIL image and returns a PIL image, so you might need to adjust your code accordingly. But this is the general way to define a custom transformation.
| https://stackoverflow.com/questions/64420379/ |
What is the difference between a .ckpt and a .pth file in Pytorch? | I am following a code from GitHub that uses Pytorch.
The model is saved using :
model.save(ARGS.working_dir + '/model_%d.ckpt' % (epoch+1)).
What is the difference between using .pth and .ckpt in Pytorch?
| There is no difference; the extension you see on PyTorch model files is arbitrary. You can choose anything.
People usually use pth to indicate a PyTorcH model (and hence .pth), but then again it is completely up to you how you want to save your model.
| https://stackoverflow.com/questions/64456843/ |
OSError: libcurand.so.10: cannot open shared object file: No such file or directory | I am working on an Nvidia Jetson TX2 (with JetPack 4.2) and installed PyTorch following this link.
When I import torch in Python, it gives me the error OSError: libcurand.so.10: cannot open shared object file: No such file or directory. I have tried all the options below, but nothing worked.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.0/lib64
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-10.0/lib64
export PATH=$PATH:/usr/local/cuda-10.0/lib64
Any guidance to debug the issue is requested. Thanks
| emm
I found this answer as followed...
https://forums.developer.nvidia.com/t/mounting-cuda-onto-l4t-docker-image-issues-libcurand-so-10-cannot-open-no-such-file-or-directory/121545
The key is : "You can use JetPack4.4 for CUDA 10.2 and JetPack4.3 for CUDA 10.0."
Maybe downloading Pytorch v1.4.0 and Jetpack 4.2/4.3 would solve this question...
Anyway, it helped for me... good luck
| https://stackoverflow.com/questions/64482976/ |
Use CrossEntropyLoss with LogSoftmax | From the Pytorch documentation, CrossEntropyLoss combines LogSoftMax and NLLLoss together in one single class
But I am curious; what happens if we use both CrossEntropyLoss for criterion and LogSoftMax in my classifier:
model_x.fc = nn.Sequential (nn.Linear(num_ftrs, 2048, bias=True), nn.ReLU(),
nn.Linear(2048, 1024 ), nn.ReLU(),
nn.Linear(1024 ,256), nn.ReLU(),
nn.Linear(256 ,128), nn.ReLU(),
nn.Linear(128, num_labels),nn.LogSoftmax(dim = 1))
criterion = nn.CrossEntropyLoss()
Then if i have saved a trained model using the code above, how can I check the criterion used by the saved model?
| TL;DR: You will decrease the expressivity of the model because it can only produce relatively flat distributions.
What you suggest in the snippet actually means applying the softmax normalization twice. This will give you a distribution with the same ranking of probabilities, but it will be much flatter, and it will prevent the model from using a low-entropy output distribution. The output of the linear layer can in theory be any number. In practice, the logits are both positive and negative numbers, which allows producing spiky distributions. After softmax, you have probabilities between 0 and 1, so log-softmax will give you negative numbers.
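A quick sketch of the flattening effect:
import torch

logits = torch.tensor([[8.0, -4.0]])
print(torch.softmax(logits, dim=1))                        # tensor([[1.0000, 0.0000]])
print(torch.softmax(torch.softmax(logits, dim=1), dim=1))  # tensor([[0.7311, 0.2689]])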
Typically, models are saved without the loss function. Unless you explicitly saved the loss as well, there is no way to find it out.
| https://stackoverflow.com/questions/64494819/ |
Is there a way to convert the quint8 pytorch format to np.uint8 format? | I'm using the code below to get the quantized unsigned int8 format in PyTorch. However, I'm not able to convert the quant variable to np.uint8. Is it possible to do that?
import torch
quant = torch.quantize_per_tensor(torch.tensor([-1.0, 0.352, 1.321, 2.0]), 0.1, 10, torch.quint8)
| This can be done using torch.int_repr()
import torch
import numpy as np
# generate a test float32 tensor
float32_tensor = torch.tensor([-1.0, 0.352, 1.321, 2.0])
print(f'{float32_tensor.dtype}\n{float32_tensor}\n')
# convert to a quantized uint8 tensor. This format keeps the values in the range of
# the float32 format, with the resolution of a uint8 format (256 possible values)
quint8_tensor = torch.quantize_per_tensor(float32_tensor, 0.1, 10, torch.quint8)
print(f'{quint8_tensor.dtype}\n{quint8_tensor}\n')
# map the quantized data to the actual uint8 values (and then to an np array)
uint8_np_ndarray = torch.int_repr(quint8_tensor).numpy()
print(f'{uint8_np_ndarray.dtype}\n{uint8_np_ndarray}')
Output
torch.float32
tensor([-1.0000, 0.3520, 1.3210, 2.0000])
torch.quint8
tensor([-1.0000, 0.4000, 1.3000, 2.0000], size=(4,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
uint8
[ 0 14 23 30]
| https://stackoverflow.com/questions/64503533/ |
how to convert outlogits to tokens? | I have a forward function in AllenNLP given by:
def forward(self, input_tokens, output_tokens):
'''
This is the main process of the Model where the actual computation happens.
Each Instance is fed to the forward method.
It takes dicts of tensors as input, with same keys as the fields in your Instance (input_tokens, output_tokens)
It outputs the results of predicted tokens and the evaluation metrics as a dictionary.
'''
mask = get_text_field_mask(input_tokens)
embeddings = self.embedder(input_tokens)
rnn_hidden = self.rnn(embeddings, mask)
out_logits = self.hidden2out(rnn_hidden)
loss = sequence_cross_entropy_with_logits(out_logits, output_tokens['tokens'], mask)
return {'loss': loss}
The out_logits variable contains the probabilities of tokens; how can I display these tokens?
out_logits gives:
array([[ 0.02416356, 0.0195566 , -0.03279119, 0.057118 , 0.05091334,
-0.01906729, -0.05311333, 0.04695245, 0.06872341, 0.05173637,
-0.03523348, -0.00537474, -0.03946163, -0.05817827, -0.04316377,
-0.06042208, 0.01190596, 0.00574979, 0.01183304, 0.02330608,
0.04587644, 0.02319966, 0.0020873 , 0.03781978, -0.03975108,
-0.0131919 , 0.00393738, 0.04785313, 0.00159995, 0.05751844,
0.05420169, -0.01404533, -0.02716331, -0.03871592, 0.00949999,
-0.02924301, 0.03504215, 0.00397302, -0.0305252 , -0.00228448,
0.04034173, 0.01458408],
[ 0.02050283, 0.0204745 , -0.03081856, 0.06295916, 0.04601778,
-0.0167818 , -0.05653084, 0.05017883, 0.07212739, 0.06197165,
-0.03590995, -0.01142827, -0.03807197, -0.05942211, -0.0375165 ,
-0.06769539, 0.01200251, 0.01012686, 0.01514241, 0.01875677,
0.04499928, 0.02748671, 0.0012517 , 0.04062563, -0.04049949,
-0.01986902, 0.00630998, 0.05092276, 0.00276728, 0.05341531,
0.05047017, -0.01111878, -0.03038253, -0.04320357, 0.01768938,
-0.03470382, 0.03567442, 0.00776757, -0.02703476, -0.00392571,
0.04700187, 0.01671317]] dtype=float32)}
I want to convert the last array to tokens.
| In AllenNLP you have access to the self.vocab attribute, with Vocabulary.get_token_from_index.
Usually, to select a token from the logits, one would apply a softmax (so that the probabilities sum to 1) and then pick the most probable one.
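A minimal sketch (using a toy vocabulary just for illustration):
import torch
from allennlp.data import Vocabulary

vocab = Vocabulary()
for w in ["the", "cat", "sat"]:
    vocab.add_token_to_namespace(w)
out_logits = torch.randn(1, 4, vocab.get_vocab_size())  # [batch, seq_len, vocab_size]
predicted_ids = out_logits.argmax(dim=-1)  # argmax of logits equals argmax of softmax
tokens = [vocab.get_token_from_index(int(i)) for i in predicted_ids[0]]
print(tokens)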
If you want to decode sequences from a model, maybe you should look into [BeamSearch](https://docs.allennlp.org/master/api/nn/beam_search/#beamsearch).
| https://stackoverflow.com/questions/64513991/ |
What is the correct way to use a PyTorch Module inside a PyTorch Function? | We have a custom torch.autograd.Function z(x, t) which computes an output y in a way not amenable to direct automatic differentiation, and have computed the Jacobian of the operation with respect to its inputs x and t, so we can implement the backward method.
However, the operation involves making several internal calls to a neural network, which we have implemented for now as a stack of torch.nn.Linear objects, wrapped in net, a torch.nn.Module. Mathematically, these are parameterized by t.
Is there any way that we can have net itself be an input to the forward method of z? Then, we would return from our backward the list of products of the upstream gradient Dy and parameter Jacobia dydt_i, one for each of the parameters ti that are children of net (in addition to Dy*dydx, although x is data and does not need gradient accumulation).
Or do we really instead need to take t (actually a list of individual t_i), and reconstruct internally in z.forward the actions of all the Linear layers in net?
| I guess you could create a custom functor that inherits from torch.autograd.Function and make the forward and backward methods non-static (i.e. remove the @staticmethod decorators in this example) so that net can be an attribute of your functor. That would look like:
class MyFunctor(torch.autograd.Function):
    def __init__(self, net):
        self.net = net
    def forward(self, x, t):
        # store x and t (e.g. with self.save_for_backward) in the way you find useful
        # not sure how t is involved here
        return self.net(x)
    def backward(self, grad):
        # do your backward stuff
net = nn.Sequential(nn.Linear(...), ...)
z = MyFunctor(net)
y = z(x, t)
This will yield a warning that you are using a deprecated legacy way of creating autograd functions (because of the non-static methods), and you need to be extra careful about zeroing the gradients in net after having backpropagated. So it is not really convenient, but I am not aware of any better way to have a stateful autograd function.
| https://stackoverflow.com/questions/64516138/ |
do I have to add softmax in def forward when I use torch.nn.CrossEntropyLoss | https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html
When I read the contents above, I understood that torch.nn.CrossEntropyLoss already computes the exp of the last layer's scores, so I thought the forward function doesn't have to include a softmax. For example,
return self.fc(x) rather than return nn.softmax(self.fc(x)). However, I'm confused, for I've seen several implementations of ConvNet Classifiers that use both ways (they return with or without softmax while both use cross entropy loss).
Does this issue affect the performance of the classifier? Which way is correct?
| JHPark,
You are correct: with torch.nn.CrossEntropyLoss there is no need to include a softmax layer. If one does include softmax, it will still lead to a proper classification result, since softmax does not change which element has the max score. However, if applied twice, it may distort the relative levels of the outputs, making the gradients weaker and potentially slowing training a bit.
Can I use BERT as a feature extractor without any finetuning on my specific data set? | I'm trying to solve a multilabel classification task of 10 classes with a relatively balanced training set consists of ~25K samples and an evaluation set consists of ~5K samples.
I'm using the huggingface:
model = transformers.BertForSequenceClassification.from_pretrained(...
and obtain quite nice results (ROC AUC = 0.98).
However, I'm witnessing some odd behavior which I don't seem to make sense of -
I add the following lines of code:
for param in model.bert.parameters():
param.requires_grad = False
while making sure that the other layers of the model are learned, that is:
[param[0] for param in model.named_parameters() if param[1].requires_grad == True]
gives
['classifier.weight', 'classifier.bias']
Training the model when configured like so, yields some embarrassingly poor results (ROC AUC = 0.59).
I was working under the assumption that an out-of-the-box pre-trained BERT model (without any fine-tuning) should serve as a relatively good feature extractor for the classification layers. So, where did I go wrong?
| From my experience, you are going wrong in your assumption
an out-of-the-box pre-trained BERT model (without any fine-tuning) should serve as a relatively good feature extractor for the classification layers.
I have noticed similar experiences when trying to use BERT's output layer as a word embedding value with little-to-no fine-tuning, which also gave very poor results; and this also makes sense, since you effectively have 768*num_classes connections in the simplest form of output layer. Compared to the millions of parameters of BERT, this gives you an almost negligible amount of control over intense model complexity. However, I also want to cautiously point to overfitted results when training your full model, although I'm sure you are aware of that.
The entire idea of BERT is that it is very cheap to fine-tune your model, so to get ideal results, I would advise against freezing any of the layers. The one instance in which it can be helpful to disable at least partial layers would be the embedding component, depending on the model's vocabulary size (~30k for BERT-base).
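For instance, a sketch of freezing only the embedding component while fine-tuning everything else:
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=10)
for param in model.bert.embeddings.parameters():
    param.requires_grad = False  # only the embeddings are frozen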
| https://stackoverflow.com/questions/64526841/ |
Pytorch datatype/dimension confusion TypeError: 'Tensor' object is not callable | This piece of code was originally written in numpy, and I'm trying to utilise GPU computation by rewriting it in PyTorch, but as I'm new to PyTorch a lot of problems have occurred. Firstly, I'm confused by the dimensions of the tensors. Sometimes after operating on the tensors, only transposing the tensor would fix the problem; is there any way I can stop doing .t()? The major problem here is that in the line ar = torch.stack ... the error "TypeError: 'Tensor' object is not callable" occurs. Any suggestion/correction would be appreciated. Thanks!
def vec_datastr(vector):
vector = vector.float()
# Find the indices corresponding to non-zero entries
index = torch.nonzero(vector)
index = index.t()
# Compute probability
prob = vector ** 2
if torch.sum(prob) == 0:
prob = 0
else:
prob = prob / torch.sum(prob)
d = depth(vector)
CumProb = torch.ones((2**d-len(prob.t()),1), device ='cuda')
cp = torch.cumsum(prob, dim=0)
cp = cp.reshape((len(cp.t()),1))
CumProb = torch.cat((cp, CumProb),0)
vector = vector.t()
prob = prob.t()
ar = torch.stack((index, vector([index,1]), prob([index, 1]), CumProb([index, 1]))) # Problems occur here
ar = ar.reshape((len(index), 4))
# Store the data as a 4-dimensional array
output = dict()
output = {'index':ar[:,0], 'value': ar[:,1], 'prob':ar[:,2], 'CumProb': ar[:,3]}
return output
| ar = torch.stack(
(index, vector([index, 1]), prob([index, 1]), CumProb([index, 1]))
) # Problems occur here
vector is of type torch.Tensor. It has no __call__ defined. You are calling vector(...) (vector([index,1])) when you should slice the data directly, like this: vector[index, 1]. Same goes for prob and CumProb.
Somehow, you do it correctly for ar with ar[:,0], so it might be a typo.
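For reference, a sketch of that line with the calls replaced by slicing (assuming the intent was to gather column 1 at the non-zero row indices):
ar = torch.stack((index, vector[index, 1], prob[index, 1], CumProb[index, 1]))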
| https://stackoverflow.com/questions/64533134/ |
What makes BertGeneration and/or RobertaForCausalLM causal models? Where does the causal attention masking happen? | I am trying to use RobertaForCausalLM and/or BertGeneration for causal language modelling / next-word-prediction / left-to-right prediction. I can't seem to figure out where the causal masking is happening? I want to train teacher forcing with the ground-truth labels, but no information from future tokens to be included in the attention mechanism. For that I thought the model would need causal attention masking, but I don't see it being applied anywhere...
If anyone could point me to where this might be happening or why it is unnecessary, that would be helpful.
Thanks!
| I have found it. It happens in get_extended_attention_mask in modeling_utils. Consider this question solved.
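For anyone landing here: the causal mask is only built when the model is configured as a decoder, so a sketch of the setup (assuming roberta-base weights) looks like:
from transformers import RobertaConfig, RobertaForCausalLM

config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True  # get_extended_attention_mask only applies the causal mask in decoder mode
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)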
| https://stackoverflow.com/questions/64537339/ |
How can I calculate FLOPs and Params without 0 weights neurons affected? | My Prune code is shown below, after running this, I will get a file named 'pruned_model.pth'.
import torch
from torch import nn
import torch.nn.utils.prune as prune
import torch.nn.functional as F
from cnn import net
ori_model = '/content/drive/My Drive/ECG_weight_prune/checkpoint_dir/model.pth'
save_path = '/content/drive/My Drive/ECG_weight_prune/checkpoint_dir/pruned_model.pth'
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = net().to(device)
model.load_state_dict(torch.load(ori_model))
module = model.conv1
print(list(module.named_parameters()))
print(list(module.named_buffers()))
prune.l1_unstructured(module, name="weight", amount=0.3)
prune.l1_unstructured(module, name="bias", amount=3)
print(list(module.named_parameters()))
print(list(module.named_buffers()))
print(module.bias)
print(module.weight)
print(module._forward_pre_hooks)
prune.remove(module, 'weight')
prune.remove(module, 'bias')
print(list(module.named_parameters()))
print(model.state_dict())
torch.save(model.state_dict(), save_path)
and results is :
[('weight', Parameter containing:
tensor([[[-0.0000, -0.3137, -0.3221, ..., 0.5055, 0.3614, -0.0000]],
[[ 0.8889, 0.2697, -0.3400, ..., 0.8546, 0.2311, -0.0000]],
[[-0.2649, -0.1566, -0.0000, ..., 0.0000, 0.0000, 0.3855]],
...,
[[-0.2836, -0.0000, 0.2155, ..., -0.8894, -0.7676, -0.6271]],
[[-0.7908, -0.6732, -0.5024, ..., 0.2011, 0.4627, 1.0227]],
[[ 0.4433, 0.5048, 0.7685, ..., -1.0530, -0.8908, -0.4799]]],
device='cuda:0', requires_grad=True)), ('bias', Parameter containing:
tensor([-0.7497, -1.3594, -1.7613, -2.0137, -1.1763, 0.4150, -1.6996, -1.5354,
0.4330, -0.9259, 0.4156, -2.3099, -0.4282, -0.5199, 0.1188, -1.1725,
-0.9064, -1.6639, -1.5834, -0.3655, -2.0727, -2.1078, -1.6431, -0.0694,
-0.5435, -1.9623, 0.5481, -0.8255, -1.5108, -0.4029, -1.9759, 0.0522,
0.0599, -2.2469, -0.5599, 0.1039, -0.4472, -1.1706, -0.0398, -1.9441,
-1.5310, -0.0837, -1.3250, -0.2098, -0.1919, 0.4600, -0.8268, -1.0041,
-0.8168, -0.8701, 0.3869, 0.1706, -0.0226, -1.2711, -0.9302, -2.0696,
-1.1838, 0.4497, -1.1426, 0.0772, -2.4356, -0.3138, 0.6297, 0.2022,
-0.4024, 0.0000, -1.2337, 0.2840, 0.4515, 0.2999, 0.0273, 0.0374,
0.1325, -0.4890, -2.3845, -1.9663, 0.2108, -0.1144, 0.0544, -0.2629,
0.0393, -0.6728, -0.9645, 0.3118, -0.5142, -0.4097, -0.0000, -1.5142,
-1.2798, 0.2871, -2.0122, -0.9346, -0.4931, -1.4895, -1.1401, -0.8823,
0.2210, 0.4282, 0.1685, -1.8876, -0.7459, 0.2505, -0.6315, 0.3827,
-0.3348, 0.1862, 0.0806, -2.0277, 0.2068, 0.3281, -1.8045, -0.0000,
-2.2377, -1.9742, -0.5164, -0.0660, 0.8392, 0.5863, -0.7301, 0.0778,
0.1611, 0.0260, 0.3183, -0.9097, -1.6152, 0.4712, -0.2378, -0.4972],
device='cuda:0', requires_grad=True))]
There are many zero weights existing. How can I calculate FLOPs and Params without counting calculations associated with these zero values?
I use the following code to calculate FLOPs and Params.
import torch
from cnn import net
from ptflops import get_model_complexity_info
ori_model = '/content/drive/My Drive/ECG_weight_prune/checkpoint_dir/model.pth'
pthfile = '/content/drive/My Drive/ECG_weight_prune/checkpoint_dir/pruned_model.pth'
model = net()
# model.load_state_dict(torch.load(ori_model))
model.load_state_dict(torch.load(pthfile))
# print(model.state_dict())
macs, params = get_model_complexity_info(model, (1, 260), as_strings=False,
print_per_layer_stat=True, verbose=True)
print('{:<30} {:<8}'.format('Computational complexity: ', macs))
print('{:<30} {:<8}'.format('Number of parameters: ', params))
The output for both ori_model and pthfile is the same, as follows.
Warning: module Dropout2d is treated as a zero-op.
Warning: module Flatten is treated as a zero-op.
Warning: module net is treated as a zero-op.
net(
0.05 M, 100.000% Params, 0.001 GMac, 100.000% MACs,
(conv1): Conv1d(0.007 M, 13.143% Params, 0.0 GMac, 45.733% MACs, 1, 128, kernel_size=(50,), stride=(3,))
(conv2): Conv1d(0.029 M, 57.791% Params, 0.001 GMac, 50.980% MACs, 128, 32, kernel_size=(7,), stride=(1,))
(conv3): Conv1d(0.009 M, 18.619% Params, 0.0 GMac, 0.913% MACs, 32, 32, kernel_size=(9,), stride=(1,))
(fc1): Linear(0.004 M, 8.504% Params, 0.0 GMac, 0.404% MACs, in_features=32, out_features=128, bias=True)
(fc2): Linear(0.001 M, 1.299% Params, 0.0 GMac, 0.063% MACs, in_features=128, out_features=5, bias=True)
(bn1): BatchNorm1d(0.0 M, 0.515% Params, 0.0 GMac, 1.793% MACs, 128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(bn2): BatchNorm1d(0.0 M, 0.129% Params, 0.0 GMac, 0.114% MACs, 32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout): Dropout2d(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, p=0.5, inplace=False)
(faltten): Flatten(0.0 M, 0.000% Params, 0.0 GMac, 0.000% MACs, )
)
Computational complexity: 1013472.0
Number of parameters: 49669
| One thing you could do is to exclude the weights below a certain threshold from the FLOPs computation. To do so you would have to modify the flop counter functions.
I'll provide examples for the modification for fc and conv layers below.
import numpy as np
import torch

def linear_flops_counter_hook(module, input, output):
input = input[0]
output_last_dim = output.shape[-1] # pytorch checks dimensions, so here we don't care much
# MODIFICATION HAPPENS HERE
num_zero_weights = (module.weight.data.abs() < 1e-9).sum()
zero_weights_factor = 1 - torch.true_divide(num_zero_weights, module.weight.data.numel())
module.__flops__ += int(np.prod(input.shape) * output_last_dim) * zero_weights_factor.numpy()
# MODIFICATION HAPPENS HERE
def conv_flops_counter_hook(conv_module, input, output):
# Can have multiple inputs, getting the first one
input = input[0]
batch_size = input.shape[0]
output_dims = list(output.shape[2:])
kernel_dims = list(conv_module.kernel_size)
in_channels = conv_module.in_channels
out_channels = conv_module.out_channels
groups = conv_module.groups
filters_per_channel = out_channels // groups
conv_per_position_flops = int(np.prod(kernel_dims)) * in_channels * filters_per_channel
active_elements_count = batch_size * int(np.prod(output_dims))
# MODIFICATION HAPPENS HERE
num_zero_weights = (conv_module.weight.data.abs() < 1e-9).sum()
zero_weights_factor = 1 - torch.true_divide(num_zero_weights, conv_module.weight.data.numel())
overall_conv_flops = conv_per_position_flops * active_elements_count * zero_weights_factor.numpy()
# MODIFICATION HAPPENS HERE
bias_flops = 0
if conv_module.bias is not None:
bias_flops = out_channels * active_elements_count
overall_flops = overall_conv_flops + bias_flops
conv_module.__flops__ += int(overall_flops)
Note that I'm using 1e-9 as a threshold for a weight counting as zero.
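If you'd rather not patch ptflops' internals, newer versions of ptflops accept a custom_modules_hooks argument (hedging: check that your installed version supports it), so you can pass the modified hooks in directly:
import torch.nn as nn
from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(
    model, (1, 260), as_strings=False,
    custom_modules_hooks={nn.Linear: linear_flops_counter_hook,
                          nn.Conv1d: conv_flops_counter_hook})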
| https://stackoverflow.com/questions/64551002/ |
Calculate covariance of torch tensor (2d feature map) | I have a torch tensor with shape (batch_size, number_maps, x_val, y_val). The tensor is normalized with a sigmoid function, so within range [0, 1]. I want to find the covariance for each map, so I want to have a tensor with shape (batch_size, number_maps, 2, 2). As far as I know, there is no torch.cov() function as in numpy. How can I efficiently calculate the covariance without converting it to numpy?
Edit:
def get_covariance(tensor):
bn, nk, w, h = tensor.shape
tensor_reshape = tensor.reshape(bn, nk, 2, -1)
x = tensor_reshape[:, :, 0, :]
y = tensor_reshape[:, :, 1, :]
mean_x = torch.mean(x, dim=2).unsqueeze(-1)
mean_y = torch.mean(y, dim=2).unsqueeze(-1)
xx = torch.sum((x - mean_x) * (x - mean_x), dim=2).unsqueeze(-1) / (h*w - 1)
xy = torch.sum((x - mean_x) * (y - mean_y), dim=2).unsqueeze(-1) / (h*w - 1)
yx = xy
yy = torch.sum((y - mean_y) * (y - mean_y), dim=2).unsqueeze(-1) / (h*w - 1)
cov = torch.cat((xx, xy, yx, yy), dim=2)
cov = cov.reshape(bn, nk, 2, 2)
return cov
I tried the above, but I'm pretty sure it's not correct.
| You could try the function suggested on Github:
def cov(x, rowvar=False, bias=False, ddof=None, aweights=None):
"""Estimates covariance matrix like numpy.cov"""
# ensure at least 2D
if x.dim() == 1:
x = x.view(-1, 1)
# treat each column as a data point, each row as a variable
if rowvar and x.shape[0] != 1:
x = x.t()
if ddof is None:
if bias == 0:
ddof = 1
else:
ddof = 0
w = aweights
if w is not None:
if not torch.is_tensor(w):
w = torch.tensor(w, dtype=torch.float)
w_sum = torch.sum(w)
avg = torch.sum(x * (w/w_sum)[:,None], 0)
else:
avg = torch.mean(x, 0)
# Determine the normalization
if w is None:
fact = x.shape[0] - ddof
elif ddof == 0:
fact = w_sum
elif aweights is None:
fact = w_sum - ddof
else:
fact = w_sum - ddof * torch.sum(w * w) / w_sum
xm = x.sub(avg.expand_as(x))
if w is None:
X_T = xm.t()
else:
X_T = torch.mm(torch.diag(w), xm).t()
c = torch.mm(X_T, xm)
c = c / fact
return c.squeeze()
https://github.com/pytorch/pytorch/issues/19037
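A quick sanity check of this helper against numpy (my own example, not from the linked issue):
import numpy as np
import torch

x = torch.randn(100, 2)
print(cov(x))                           # 2x2 covariance with columns as variables
print(np.cov(x.numpy(), rowvar=False))  # should match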
| https://stackoverflow.com/questions/64554658/ |
Why multiplication on GPU is slower than on CPU? | Here is my code (simulating a feed-forward neural network):
import torch
import time
print(torch.cuda.is_available()) # True
device = torch.device('cuda:0' )
a = torch.tensor([1,2,3,4,5,6]).float().reshape(-1,1)
w1 = torch.rand(120,6)
w2 = torch.rand(1,120)
b1 = torch.rand(120,1)
b2 = torch.rand(1,1).reshape(1,1)
start = time.time()
for _ in range(100000):
ans = torch.mm(w2, torch.mm(w1,a)+b1)+b2
end = time.time()
print(end-start) # 1.2725720405578613 seconds
a = a.to(device)
w1 = w1.to(device)
w2 = w2.to(device)
b1 = b1.to(device)
b2 = b2.to(device)
start = time.time()
for _ in range(100000):
ans = torch.mm(w2, torch.mm(w1,a)+b1)+b2
end = time.time()
print(end-start) # 5.6569812297821045 seconds
I wonder if I did it the wrong way or what, and how can I change my code to show that the GPU IS faster than the CPU on matrix multiplication?
| The reason can be a lot of things:
Your model is simple.
For GPU calculation there is the cost of memory transfer to and from the GPU's memory
Your calculation is on a small data batch; with a bigger data sample you should see better performance on the GPU than on the CPU
We should not forget caching: you calculate the same operations over and over again, so it might be better to generate new random tensors for every run
Here is a thread on the pytorch forum: https://discuss.pytorch.org/t/cpu-faster-than-gpu/25343
Also you should use a better profiler, like explained in this thread: https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964
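On top of that, CUDA calls are asynchronous, so time.time() around GPU ops mostly measures kernel launch overhead. A sketch of a fairer comparison (bigger matrices, torch.cuda.synchronize() before reading the clock):
import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

start = time.time()
for _ in range(10):
    c = a @ b
print('cpu:', time.time() - start)

a, b = a.cuda(), b.cuda()
torch.cuda.synchronize()  # make sure the host-to-device transfers have finished
start = time.time()
for _ in range(10):
    c = a @ b
torch.cuda.synchronize()  # wait for all kernels to finish before stopping the clock
print('gpu:', time.time() - start)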
| https://stackoverflow.com/questions/64556682/ |
What do you use to access CSV data on S3 and other object storage providers as a PyTorch Dataset? | My dataset is stored as a collection of CSV files in an Amazon Web Services (AWS) Simple Storage Service (S3) bucket. I'd like to train a PyTorch model based on this data but the built-in Dataset classes do not provide native support for object storage services like S3 or Google Cloud Storage (GCS), Azure Blob storage, and such. I checked the PyTorch documentation here https://pytorch.org/docs/stable/data.html# about the available Dataset classes and it comes up short when it comes to public cloud object storage support.
It looks like I have to create my own custom Dataset according to the following instructions: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class but the effort seems overwhelming: I need to figure out how to download data from the object storage to local node, parse the CSV files to read them into PyTorch tensors, and then deal with the possibility of running out of disk space since my dataset is 100s of GBs.
Since PyTorch models are trained using gradient descent and I only need to store just a small batch of data (less than 1GB) in memory at once, is there a custom dataset implementation that can help?
| Check out ObjectStorage Dataset, which has support for object storage services like S3 and GCS: osds.readthedocs.io/en/latest/gcs.html
You can run
pip install osds
to install it and then point it at your S3 bucket to instantiate the PyTorch Dataset and DataLoader using something like
from osds.utils import ObjectStorageDataset
from torch.utils.data import DataLoader
ds = ObjectStorageDataset("gs://cloud-training-demos/taxifare/large/taxi-train*.csv",
storage_options = {'anon' : False },
batch_size = 32768,
worker = 4,
eager_load_batches = False)
dl = DataLoader(ds, batch_size=None)
where you use your S3 location path instead of gs://cloud-training-demos/taxifare/large/taxi-train*.csv. So your glob for S3 would be something like s3://<bucket name>/<object path>/*.csv depending on the bucket and the bucket directory where you store your CSV objects for the dataset.
| https://stackoverflow.com/questions/64580099/ |
I have an error on Pytorch and in particular with nllloss | I want to apply the criterion, where
criterion = nn.NLLLoss()
I apply it on output and labels
loss = criterion(output.view(-1,1), labels.long())
where:
the shape of the labels
labels
tensor([ 1, 4, 1, 1, 4, 1, 2, 3, 2, 4, 2, 3, 3, 4,
0, 4])
output
tensor([ 0.1829, 0.1959, 0.1909, 0.1895, 0.1914, 0.1883, 0.1895,
0.1884, 0.1865, 0.1931, 0.1883, 0.1917, 0.1942, 0.1937,
0.1897, 0.1934])
the shape of the output
torch.Size([16])
On the following line:
loss = criterion(output.view(-1,1), labels.long())
I get this error:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THNN/generic/ClassNLLCriterion.c:97
Any ideas?
| Your label and output shapes must be [batch_size] and [batch_size, n_classes] respectively.
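A minimal sketch of the shapes nn.NLLLoss expects, assuming 5 classes and your batch of 16:
import torch
import torch.nn as nn

criterion = nn.NLLLoss()
output = torch.log_softmax(torch.randn(16, 5), dim=1)  # [batch_size, n_classes] log-probabilities
labels = torch.randint(0, 5, (16,))                    # [batch_size] class indices
loss = criterion(output, labels)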
| https://stackoverflow.com/questions/64581993/ |
How to index intermediate dimension with an index tensor in pytorch? | How can I index a tensor t with n dimensions with an index tensor of m < n dimensions, such that the last dimensions of t are preserved? The index tensor is shaped equal to tensor t for all dimensions before dimension m. Or in other terms, I want to index intermediate dimensions of a tensor, while keeping all the following dimensions of the selected indices preserved.
For example, lets say we have the two tensors:
t = torch.randn([3, 5, 2]) * 10
index = torch.tensor([[1, 3],[0,4],[3,2]]).long()
with t:
tensor([[[ 15.2165, -7.9702],
[ 0.6646, 5.2844],
[-22.0657, -5.9876],
[ -9.7319, 11.7384],
[ 4.3985, -6.7058]],
[[-15.6854, -11.9362],
[ 11.3054, 3.3068],
[ -4.7756, -7.4524],
[ 5.0977, -17.3831],
[ 3.9152, -11.5047]],
[[ -5.4265, -22.6456],
[ 1.6639, 10.1483],
[ 13.2129, 3.7850],
[ 3.8543, -4.3496],
[ -8.7577, -12.9722]]])
Then the output I would like to have would have shape (3, 2, 2) and be:
tensor([[[ 0.6646, 5.2844],
[ -9.7319, 11.7384]],
[[-15.6854, -11.9362],
[ 3.9152, -11.5047]],
[[ 3.8543, -4.3496],
[ 13.2129, 3.7850]]])
Another example would be that I have a tensor t of shape (40, 10, 6, 2) and an index tensor of shape (40, 10, 3). This should query dimension 3 of tensor t and the expected output shape would be (40, 10, 3, 2).
How can I achieve this in a generic way, without using loops?
| In this case, you can do something like this:
t[torch.arange(t.shape[0]).unsqueeze(1), index, ...]
Full code:
import torch
t = torch.tensor([[[ 15.2165, -7.9702],
[ 0.6646, 5.2844],
[-22.0657, -5.9876],
[ -9.7319, 11.7384],
[ 4.3985, -6.7058]],
[[-15.6854, -11.9362],
[ 11.3054, 3.3068],
[ -4.7756, -7.4524],
[ 5.0977, -17.3831],
[ 3.9152, -11.5047]],
[[ -5.4265, -22.6456],
[ 1.6639, 10.1483],
[ 13.2129, 3.7850],
[ 3.8543, -4.3496],
[ -8.7577, -12.9722]]])
index = torch.tensor([[1, 3],[0,4],[3,2]]).long()
output = t[torch.arange(t.shape[0]).unsqueeze(1), index, ...]
# tensor([[[ 0.6646, 5.2844],
# [ -9.7319, 11.7384]],
#
# [[-15.6854, -11.9362],
# [ 3.9152, -11.5047]],
#
# [[ 3.8543, -4.3496],
# [ 13.2129, 3.7850]]])
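The same idea extends to the generic n-dimensional case from the question, e.g. t of shape (40, 10, 6, 2) with an index of shape (40, 10, 3): use one arange per leading batch dimension, each shaped to broadcast (my own generalization of the answer above):
import torch

t = torch.randn(40, 10, 6, 2)
index = torch.randint(0, 6, (40, 10, 3))

out = t[torch.arange(40)[:, None, None],  # broadcasts over dims 1 and 2
        torch.arange(10)[None, :, None],  # broadcasts over dims 0 and 2
        index]                            # selects along dim 2
print(out.shape)  # torch.Size([40, 10, 3, 2])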
| https://stackoverflow.com/questions/64590830/ |
How to make Intel GPU available for processing through pytorch? | I'm using a laptop which has Intel Corporation HD Graphics 520.
Does anyone know how to set it up for Deep Learning, specifically Pytorch? I have seen that if you have Nvidia graphics you can install CUDA, but what do you do when you have an Intel GPU?
| PyTorch doesn't support anything other than NVIDIA CUDA and, lately, AMD ROCm.
Intel's support for PyTorch given in the other answers is exclusive to the Xeon line of processors, and it's not that scalable either with regard to GPUs.
Intel's oneAPI, formerly known as oneDNN, does have support for a wide range of hardware, including Intel's integrated graphics, but at the moment the full support is not yet implemented in PyTorch (as of 10/29/2020, i.e. PyTorch 1.7).
But you still have other options. For inference you have a couple of options.
DirectML is one of them. Basically, you convert your model into ONNX and then use the DirectML provider to run your model on the GPU (which in our case will use DirectX 12 and works only on Windows for now!).
Your other options are OpenVINO and TVM, both of which support multiple platforms including Linux, Windows, Mac, etc.
All of them use ONNX models, so you need to first convert your model to ONNX format and then use them.
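All three routes start from an ONNX export, which (model and input shape here are placeholders for your own) looks roughly like:
import torch

model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # must match your model's expected input shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)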
| https://stackoverflow.com/questions/64593792/ |
Accessing functions in the class modules of nn.Sequential | When running nn.Sequential, I include a list of class modules (which would be layers of a neural network). When running nn.Sequential, it calls forward functions of the modules. However each of the class modules also has a function which I would like to access when the nn.Sequential runs. How can I access and run this function when running nn.Sequential?
| You can use a hook for that. Let's consider the following example demonstrated on VGG16:
This is the network architecture:
Say we want to monitor the input and output for layer (2) in the features Sequential (that Conv2d layer you see above).
For this matter we register a forward hook, named my_hook which will be called on any forward pass:
import torch
from torchvision.models import vgg16
def my_hook(self, input, output):
print('my_hook\'s output')
print('input: ', input)
print('output: ', output)
# Sample net:
net = vgg16()
#Register forward hook:
net.features[2].register_forward_hook(my_hook)
# Test:
img = torch.randn(1,3,512,512)
out = net(img) # Will trigger my_hook and the data you are looking for will be printed
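If, instead of just observing activations, you want to call an arbitrary extra method on each layer as data flows through, one option (a sketch; my_function stands in for whatever method your class modules define) is to iterate the Sequential manually:
def forward_with_calls(seq, x):
    for layer in seq:
        x = layer(x)             # the usual forward of this sub-module
        if hasattr(layer, 'my_function'):
            layer.my_function()  # the extra method defined on the class module
    return x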
| https://stackoverflow.com/questions/64606524/ |
RuntimeError: Expected hidden[0] size (1, 1, 512), got (1, 128, 512) for LSTM pytorch | I trained the LSTM with a batch size of 128 and during testing my batch size is 1. Why do I get this error? Am I supposed to re-initialize the hidden state when testing?
Here is the code that I'm using. I initialize the hidden state in the init_hidden function as (number_of_layers, batch_size, hidden_size) since batch_first=True
class ImageLSTM(nn.Module):
def __init__(self, n_inputs:int=49,
n_outputs:int=4096,
n_hidden:int=256,
n_layers:int=1,
bidirectional:bool=False):
"""
Takes a 1D flatten images.
"""
super(ImageLSTM, self).__init__()
self.n_inputs = n_inputs
self.n_hidden = n_hidden
self.n_outputs = n_outputs
self.n_layers = n_layers
self.bidirectional = bidirectional
self.lstm = nn.LSTM( input_size=self.n_inputs,
hidden_size=self.n_hidden,
num_layers=self.n_layers,
dropout = 0.5 if self.n_layers>1 else 0,
bidirectional=self.bidirectional,
batch_first=True)
if (self.bidirectional):
self.FC = nn.Sequential(
nn.Linear(self.n_hidden*2, self.n_outputs),
nn.Dropout(p=0.5),
nn.Sigmoid()
)
else:
self.FC = nn.Sequential(
nn.Linear(self.n_hidden, self.n_outputs),
# nn.Dropout(p=0.5),
nn.Sigmoid()
)
def init_hidden(self, batch_size, device=None): # input 4D tensor: (batch size, channels, width, height)
# initialize the hidden and cell state to zero
# vectors:(number of layer, batch size, number of hidden nodes)
if (self.bidirectional):
h0 = torch.zeros(2*self.n_layers, batch_size, self.n_hidden)
c0 = torch.zeros(2*self.n_layers, batch_size, self.n_hidden)
else:
h0 = torch.zeros(self.n_layers, batch_size, self.n_hidden)
c0 = torch.zeros(self.n_layers, batch_size, self.n_hidden)
if device is not None:
h0 = h0.to(device)
c0 = c0.to(device)
self.hidden = (h0,c0)
def forward(self, X): # X: tensor of shape (batch_size, channels, width, height)
# forward propagate LSTM
lstm_out, self.hidden = self.lstm(X, self.hidden) # lstm_out: tensor of shape (batch_size, seq_length, hidden_size)
# Decode the hidden state of the last time step
out = self.FC(lstm_out[:, -1, :])
return out
| Please edit your post and add code. How did you initialize the hidden state? What does your model look like?
hidden[0] is not your hidden size, it's the hidden state of the LSTM. The shape of the hidden state has to be initialized like this:
hidden = (torch.zeros((layers, batch_size, hidden_size)), torch.zeros((layers, batch_size, hidden_size)))
You seem to have done this correctly. But the error tells you that you gave a batch of size 1 (because as you said you want to test with only one sample) but the hidden-state is initialized with batch-size=128.
So I guess (please add code) that you hard-coded batch_size = 128. Don't do that. Since you have to reinitialize the hidden state on every forward pass, you can do this:
...
def forward(self, x):
    batch_size = x.shape[0]
    # build a fresh (h0, c0) that matches the batch size of this forward pass
    hidden = (torch.zeros(self.n_layers, batch_size, self.n_hidden).to(device=x.device),
              torch.zeros(self.n_layers, batch_size, self.n_hidden).to(device=x.device))
    output, hidden = self.lstm(x, hidden)
    # then do whatever you want with the output
I guess that this is what causes this error but please post your code, too!
| https://stackoverflow.com/questions/64629583/ |
Can't install Pytorch on PyCharm: No matching distribution found for torch==1.7.0+cpu | I tried multiple times installing Pytorch on PyCharm. I used the command that the pytorch web site gives you for my specific configuration:
Then I copied this information on Pycharm Terminal and I get this message:
(venv) D:\Usuarios\AuCap\Documents\mnist>pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.7.0+cpu
I also tried installing it using the Python interpreter in the PyCharm Settings, and that also didn't work.
Thanks for your help
| Downgrade your Python version as python3.9 is not supported by PyTorch right now (python3.8 is fine though).
See this issue, it will be supported in subsequent releases.
You can build your own PyTorch from source if you wish though.
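A minimal sketch of that route with conda (assuming you use Anaconda; otherwise install 3.8.x from python.org and recreate the PyCharm venv with it):
conda create -n torch-env python=3.8
conda activate torch-env
pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html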
| https://stackoverflow.com/questions/64636103/ |
object detection: is object in the photo, python | I am trying to detect plants in the photos, i've already labeled photos with plants (with labelImg), but i don't understand how to train model with only background photos, so that when there is no plant here model can tell me so.
Do I need to set labeled box as the size of image?
p.s. new to ml so don't be rude, please)
| I recently had a problem where all my training images were zoomed in on the object. This meant that the training images all had very little background information. Since object detection models use space outside bounding boxes as negative examples of these objects, this meant that the model had no background knowledge. So the model knew what objects were, but didn't know what they were not.
So I disagree with @Rika, since sometimes background images are useful. With my example, it worked to introduce background images.
As I already said, object detection models use non-labeled space in an image as negative examples of a certain object. So you have to save annotation files without bounding boxes for background images. In the software you use here (labelImg), you can use Verify Image to make it save an annotation file for the image without boxes. That file says the image should be included in training but has no bounding box information, and the model uses it as negative examples.
| https://stackoverflow.com/questions/64641364/ |
PyTorch dll issues for Caffe2 | I am using a Windows 10 Machine and after re-installing Anaconda and all of the packages I had previously, including torchvision, torch and necessary dependencies, I am still getting this error:
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\XXX\Anaconda3\envs\XXX\lib\site-packages\torch\lib\caffe2.dll" or one of its dependencies.
I am using python 3.7.9 and:
torchaudio=0.6.0=py37
torchvision=0.7.0=py37_cpu
tornado=6.0.4=py37he774522_1
traitlets=5.0.5=py_0
I've looked into it quite a bit but feel like this should be an easy solve...
I do not have CUDA and have used this:
conda install pytorch torchvision torchaudio cpuonly -c pytorch
as per instructed on the official website of pytorch
| After a long time trying many things with Anaconda, I decided to use bare Python instead. I installed Python 3.8.6, installed PyTorch from the link you provided, and it finally worked, even with CUDA support. Make sure to completely remove all Anaconda/other Python version scripts from your path to ensure only the 3.8.6 version is used by your prompt.
| https://stackoverflow.com/questions/64653750/ |
Saving and reloading a huggingface fine-tuned transformer | I am trying to reload a fine-tuned DistilBertForTokenClassification model. I am using transformers 3.4.0 and pytorch version 1.6.0+cu101. After using the Trainer to train the downloaded model, I save the model with trainer.save_model() and in my troubleshooting I save in a different directory via model.save_pretrained(). I am using Google Colab and saving the model to my Google drive. After testing the model, I also evaluated it on my test set, getting great results. However, when I return to the notebook (or factory-restart the Colab notebook) and try to reload the model, the predictions are terrible. Upon checking the directories, the config.json file is there, as is the pytorch_model.bin. Below is the full code.
from transformers import DistilBertForTokenClassification
# load the pretrained model from huggingface
#model = DistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(uniq_labels))
model = DistilBertForTokenClassification.from_pretrained('distilbert-base-uncased', num_labels=len(uniq_labels))
model.to('cuda');
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir = model_dir + 'mitmovie_pt_distilbert_uncased/results', # output directory
#overwrite_output_dir = True,
evaluation_strategy='epoch',
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir = model_dir + 'mitmovie_pt_distilbert_uncased/logs', # directory for storing logs
logging_steps=10,
load_best_model_at_end = True
)
trainer = Trainer(
model = model, # the instantiated Transformers model to be trained
args = training_args, # training arguments, defined above
train_dataset = train_dataset, # training dataset
eval_dataset = test_dataset # evaluation dataset
)
trainer.train()
trainer.evaluate()
model_dir = '/content/drive/My Drive/Colab Notebooks/models/'
trainer.save_model(model_dir + 'mitmovie_pt_distilbert_uncased/model')
# alternative saving method and folder
model.save_pretrained(model_dir + 'distilbert_testing')
Coming back to the notebook after restarting...
from transformers import DistilBertForTokenClassification, DistilBertConfig, AutoModelForTokenClassification
# retreive the saved model
model = DistilBertForTokenClassification.from_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model',
local_files_only=True)
model.to('cuda')
Model predictions are terrible now from either directory; however, the model does run and outputs the number of classes I would expect. It appears that the actual trained weights have not been saved or are somehow not getting loaded.
| Did you try loading the model the trainer saved in the folder:
mitmovie_pt_distilbert_uncased/results
The Huggingface trainer saves the model directly to the defined output_dir.
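For what it's worth, a sketch of reloading from that directory (path taken from the question; you may need to point at a specific checkpoint-<step> subfolder, and since the tokenizer was never saved, load it separately):
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

ckpt = model_dir + 'mitmovie_pt_distilbert_uncased/results'  # or a checkpoint-<step> subfolder inside it
model = DistilBertForTokenClassification.from_pretrained(ckpt)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')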
| https://stackoverflow.com/questions/64663385/ |
complex functions that already support autograd - Pytorch | I am using this customized function to reshape my tensors in the customized loss function.
def reshape_fortran(x, shape):
if len(x.shape) > 0:
x = x.permute(*reversed(range(len(x.shape))))
return x.reshape(*reversed(shape)).permute(*reversed(range(len(shape))))
Though, I receive this error:
RuntimeError: _unsafe_view does not support automatic differentiation for outputs with complex dtype.
for reshape_fortran output.
Do you know what might be the problem? which function is not supported in Pytorch autograd for complex numbers?
| Complex Autograd was in Beta as of version 1.8, but is now stable and should fully support such operations as of 1.9.
| https://stackoverflow.com/questions/64689253/ |
Unable to install PyTorch on Windows 10 (x86_64) with Cuda 11.0 using pip | I tried following the instructions on pytorch.org and ran the command provided by them for my configuration, but I get the following error
ERROR: Could not find a version that satisfies the requirement torch===1.7.0+cu110 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch===1.7.0+cu110
A similar error is being thrown for any other installation methods that I try, e.g. earlier cuda versions, CPU only etc. My Python version is 3.9 and was installed using the .exe installer provided at python.org, if that's relevant.
| Just go to
https://pytorch.org/get-started/locally/
and get PyTorch for any platform using conda or pip.
| https://stackoverflow.com/questions/64691517/ |
IndexError: index_select(): Index is supposed to be a vector | for batch_id, (data, target) in enumerate(tqdm(train_loader)):
print(target)
print('Entered for loop')
target = torch.sparse.torch.eye(10).index_select(dim=0, index=target)
data, target = Variable(data), Variable(target)
The line which contains the index_select function gives this error and I am not able to find a solution to it anywhere. The target variable on printing looks like this:
tensor([[4],
[1],
[8],
[5],
[9],
[5],
[5],
[8],
[4],
[6]])
How do I convert the target variable into a vector? Isn’t it already a vector?
| If you would look at the shape of your target variable, you would find that it is a 2D tensor of shape:
target.shape # torch.Size([10, 1])
Error message is a bit confusing, but in essence index should be a 1D tensor (vector). So using .squeeze method would make:
target.squeeze().shape # torch.Size([10])
and index_select method would not complain.
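As an aside, newer PyTorch versions ship a built-in one-hot helper that avoids the index_select trick altogether (a sketch on the target from the question):
import torch
import torch.nn.functional as F

target = torch.tensor([[4], [1], [8], [5], [9], [5], [5], [8], [4], [6]])
one_hot = F.one_hot(target.squeeze(), num_classes=10).float()  # shape [10, 10]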
| https://stackoverflow.com/questions/64693739/ |
PyTorch is tiling images when loaded with Dataloader | I am trying to load an Images Dataset using the PyTorch dataloader, but the resulting transformations are tiled, and don't have the original images cropped to the center as I am expecting them.
transform = transforms.Compose([transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor()])
dataset = datasets.ImageFolder('ml-models/downloads/', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
images, labels = next(iter(dataloader))
import matplotlib.pyplot as plt
plt.imshow(images[6].reshape(224, 224, 3))
The resulting image is tiled, and not center cropped (screenshot: https://i.stack.imgur.com/HtrIa.png).
Is there something wrong in the provided transformation?
| Pytorch stores tensors in channel-first format, so a 3 channel image is a tensor of shape (3, H, W). Matplotlib expects data to be in channel-last format i.e. (H, W, 3). Reshaping does not rearrange the dimensions, for that you need Tensor.permute.
plt.imshow(images[6].permute(1, 2, 0))
| https://stackoverflow.com/questions/64705364/ |
clever image augmentation - random zoom out | I'm building a CNN to identify facial keypoints. I want to make the net more robust, so I thought about applying some zoom-out transforms, because most pictures have about the same location of keypoints, so the net doesn't learn much.
my approach:
I want augmented images to keep the original image size, so I apply MaxPool2d and then random (unequal) padding until the original size is reached.
first question
Is it going to work with simple average padding or zero padding? I'm sure it would be even better if I made the padding look more like the background, but is there a simple way to do that?
second question
The keypoints are the target vector; they come as a row vector of 30. I'm getting confused with the logic needed to transform them to the smaller space.
Generally, if an original point was at (x=5, y=7), it transforms to (x=2, y=3). I'm not sure about this, but what I've checked manually so far is correct. But what do I do if two keypoints land in the same new pixel? I can't feed the network fewer target values.
That's it. Would be happy to hear your thoughts.
| I suggest to use torchvision.transforms.RandomResizedCrop as a part of your Compose statement. which will give you random zooms AND resize the resulting the images to some standard size. This avoids issues in both your questions.
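A sketch of how that might slot into a pipeline (output size and scale range are my own guesses; note that Compose alone won't transform the keypoint coordinates, so those need the same crop parameters applied to them):
import torchvision.transforms as T

transform = T.Compose([
    T.RandomResizedCrop(96, scale=(0.5, 1.0)),  # random crop covering 50-100% of the area, resized back to 96x96
    T.ToTensor(),
])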
| https://stackoverflow.com/questions/64727718/ |
Defining Loss function in pytorch | I have to define a Huber loss function, which (with threshold 1) is: 0.5*(a-b)^2 where |a-b| < 1, and |a-b| - 0.5 otherwise, averaged over all elements.
This is my code
def huber(a, b):
res = (((a-b)[abs(a-b) < 1]) ** 2 / 2).sum()
res += ((abs(a-b)[abs(a-b) >= 1]) - 0.5).sum()
res = res / torch.numel(a)
return res
Yet, it is not working properly. Do you have any idea what is wrong?
| Huber loss function already exists in PyTorch under the name of torch.nn.SmoothL1Loss.
Follow this link https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html for more!
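Usage is a drop-in replacement for your function (and since PyTorch 1.7 there is also a beta argument that plays the role of the 1.0 threshold):
import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss()
a, b = torch.randn(8, 3), torch.randn(8, 3)
loss = criterion(a, b)  # mean-reduced Huber loss with threshold 1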
| https://stackoverflow.com/questions/64735517/ |
How to Use Class Weights with Focal Loss in PyTorch for Imbalanced dataset for MultiClass Classification | I am working on Multiclass Classification (4 classes) for a Language Task and I am using the BERT model for the classification task. I am following this blog as reference. My fine-tuned BERT model returns nn.LogSoftmax(dim=1).
My data is pretty imbalanced so I used sklearn.utils.class_weight.compute_class_weight to compute weights of the classes and used the weights inside the Loss.
class_weights = compute_class_weight('balanced', np.unique(train_labels), train_labels)
weights= torch.tensor(class_weights,dtype=torch.float)
cross_entropy = nn.NLLLoss(weight=weights)
My results were not so good so I thought of experimenting with Focal Loss, and I have the following code for it.
class FocalLoss(nn.Module):
def __init__(self, alpha=1, gamma=2, logits=False, reduce=True):
super(FocalLoss, self).__init__()
self.alpha = alpha
self.gamma = gamma
self.logits = logits
self.reduce = reduce
def forward(self, inputs, targets):
BCE_loss = nn.CrossEntropyLoss()(inputs, targets)
pt = torch.exp(-BCE_loss)
F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
if self.reduce:
return torch.mean(F_loss)
else:
return F_loss
I have 3 questions now, the first being the most important:
Should I use Class Weight with Focal Loss?
If I have to Implement weights inside this Focal Loss, can I use weights parameters inside nn.CrossEntropyLoss()
If this implement is incorrect, what should be the proper code for this one including the weights (if possible)
| You may find answers to your questions as follows:
Focal loss automatically handles the class imbalance, hence weights are not required for the focal loss. The alpha and gamma factors handle the class imbalance in the focal loss equation.
No need for extra weights, because focal loss handles them using the alpha and gamma modulating factors
The implementation you mentioned is correct according to the focal loss formula, but I had trouble getting my model to converge with this version, hence I used the following implementation from the mmdetection framework:
import torch.nn.functional as F

# mmdetection's sigmoid focal loss, lightly wrapped so the fragment is self-contained;
# weight_reduce_loss is an mmdetection helper that applies element-wise weights and the reduction.
def sigmoid_focal_loss(pred, target, weight=None, gamma=2.0, alpha=0.25,
                       reduction='mean', avg_factor=None):
    pred_sigmoid = pred.sigmoid()
    target = target.type_as(pred)
    pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
    focal_weight = (alpha * target + (1 - alpha) *
                    (1 - target)) * pt.pow(gamma)
    loss = F.binary_cross_entropy_with_logits(
        pred, target, reduction='none') * focal_weight
    loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
    return loss
You can also experiment with other focal loss versions available online.
| https://stackoverflow.com/questions/64751157/ |
Pytorch installation could not find a version that satisfies the requirement | When i tried to install Pytorch in the way they suggest on their website:
pip install torch===1.7.0 torchvision===0.8.1 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
this is the error that appear:
ERROR: Could not find a version that satisfies the requirement torch===1.7.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch===1.7.0
How can i solve it?
| Just wanted to start out by letting all the mac, linux, and python 3.8.x- users here know that adding "https://" to the command does not solve the problem.
Here's why: OP, you probably have python 3.9 installed on your machine. Unfortunately, Python 3.9 is not yet supported by Pytorch. Install python 3.8.6 instead, that's the latest version that's currently supported. You'll find similar problems trying to install packages like sklearn or tensorflow as well.
So here's the answer: either wait, or uninstall python and roll back to python 3.8.6
Sorry it couldn't be a better one.
| https://stackoverflow.com/questions/64756531/ |
Pytorch : W ParallelNative.cpp:206 | I'm trying to use a pre-trained template on my image set by following the tutorial right here :
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
Only I always get this "error" when I run my code and the console locks up :
[W ParallelNative.cpp:206] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
Thank you in advance for your help,
| I have the same problem.
Mac. Python 3.6 (also reproduces on 3.8). Pytorch 1.7.
It seems that with this error dataloaders don't (or can't) use parallel computing.
You can remove the error (this will not fix the problem) in two ways.
If you can access your dataloaders, set num_workers=0 when creating a dataloader
Set environment variable export OMP_NUM_THREADS=1
Again, both solutions kill parallel computing and may slow down data loading (and therefore training). I look forward to efficient solutions or a patch in Pytorch 1.7
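A sketch of the first workaround (the dataset and batch size are placeholders):
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, num_workers=0)  # no worker processes, silences the warning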
| https://stackoverflow.com/questions/64772335/ |
Indexing list of tensors | I have two identical lists of tensors (with different sizes) except that for the first one all of the tensors are assigned to the cuda device. For example:
list1=[torch.tensor([0,1,2]).cuda(),torch.tensor([3,4,5,6]).cuda(),torch.tensor([7,8]).cuda()]
>>> list1
[tensor([0, 1, 2], device='cuda:0'), tensor([3, 4, 5, 6], device='cuda:0'), tensor([7, 8], device='cuda:0')]
list2=[torch.tensor([0,1,2]),torch.tensor([3,4,5,6]),torch.tensor([7,8])]
>>> list2
[tensor([0, 1, 2]), tensor([3, 4, 5, 6]), tensor([7, 8])]
I want to extract some tensors from the lists according to an array of indices such as:
ind=torch.tensor([0,2])
>>> ind
tensor([0, 2])
So my solution was to do something like that:
np.array(list1)[ind]
np.array(list2)[ind]
My question is why it works with the first list with the tensors defined on the cuda device and gives an error with the second list as shown below:
>>> np.array(list1)[ind]
array([tensor([0, 1, 2], device='cuda:0'),
tensor([7, 8], device='cuda:0')], dtype=object)
>>> np.array(list2)[ind]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: only one element tensors can be converted to Python scalars
EDIT:
Just to clarify, the error isn't raised because the tensors have different shapes. The following examples illustrate this point:
list3=[torch.tensor([1,2,3]).cuda()]
list4=[torch.tensor([1,2,3]).cuda(),torch.tensor([4,5,6]).cuda()]
list5=[torch.tensor([1,2,3])]
list6=[torch.tensor([1,2,3]),torch.tensor([4,5,6])]
And the results are:
>>> np.array(list3)
array([tensor([1, 2, 3], device='cuda:0')], dtype=object)
>>> np.array(list4)
array([tensor([1, 2, 3], device='cuda:0'),
tensor([4, 5, 6], device='cuda:0')], dtype=object)
>>> np.array(list5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: only one element tensors can be converted to Python scalars
>>> np.array(list6)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: only one element tensors can be converted to Python scalars
| np.array trys to convert each of the elements of a list into a numpy array. This is only supported for CPU tensors. The short answer is you can explicitly instruct numpy to create an array with dtype=object to make the CPU case works. To understand what exactly is happening lets take a closer look at both cases.
Case 1 (CUDA tensors)
First note that if you attempt to use np.array on a CUDA tensor you get the following error
np.array(torch.zeros(2).cuda())
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
In your example, numpy tries to convert each element of list1 to a numpy array, however an exception is raised so it just settles on creating an array with dtype=object.
You end up with
np.array([torch.tensor([0,1,2]).cuda(), torch.tensor([3,4,5,6]).cuda(), torch.tensor([7,8]).cuda()])
being just a container pointing to different objects
array([tensor([0, 1, 2], device='cuda:0'),
tensor([3, 4, 5, 6], device='cuda:0'),
tensor([7, 8], device='cuda:0')], dtype=object)
Case 2 (CPU tensors)
For CPU tensors, PyTorch knows how to convert to numpy arrays. So when you run
np.array(torch.zeros(2))
you get a numpy array with dtype float32
array([0., 0.], dtype=float32)
The problem comes in your code when numpy successfully converts each element in list2 into a numpy array and then tries to stack them into a single multi-dimensional array. Numpy expects that each list entry represents one row of a multi-dimensional array, but in your case it finds that not all rows have the same shape, so doesn't know how to proceed and raises an exception.
One way to get around this is to explicitly specify that dtype should remain object. This basically tells numpy "don't try to convert the entries to numpy arrays first".
np.array([torch.tensor([0,1,2]), torch.tensor([3,4,5,6]), torch.tensor([7,8])], dtype=object)
which now gives a similar result to case 1
array([tensor([0, 1, 2]),
tensor([3, 4, 5, 6]),
tensor([7, 8])], dtype=object)
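As a side note, if you just want to pick elements out of a Python list of tensors, you can skip numpy entirely:
selected = [list2[i] for i in ind.tolist()]
# [tensor([0, 1, 2]), tensor([7, 8])]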
| https://stackoverflow.com/questions/64775560/ |
Whats the equivalent of tf.keras.Input() in pytorch? | can someone tell me what the equivalent of tf.keras.Input() in pytorch is?
The documentation says it "Initiates a Keras Tensor", so does it just create a new empty tensor?
Thanks
| There's no equivalent in PyTorch to the Keras' Input. All you have to do is pass on the inputs as a tensor to the PyTorch model.
For eg: If you're working with a Conv net:
# Keras Code
input_image = Input(shape=(32,32,3)) # An input image of 32x32x3 (HxWxC)
feature = Conv2D(16, activation='relu', kernel_size=(3, 3))(input_image)
# PyTorch code
input_tensor = torch.randn(1, 3, 32, 32) # An input tensor of shape BatchSizexChannelsxHeightxWidth
feature = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=(3, 3))(input_tensor)
If it is a normal Dense layer:
# Keras
dense_input = Input(shape=(1,))
features = Dense(100)(dense_input)
# PyTorch
input_tensor = torch.tensor([10.0]) # A float tensor with value 10 (Linear layers expect floating-point input)
features = nn.Linear(1, 100)(input_tensor)
| https://stackoverflow.com/questions/64780641/ |
Optuna Pytorch: returned value from the objective function cannot be cast to float | def autotune(trial):
cfg= { 'device' : "cuda" if torch.cuda.is_available() else "cpu",
# 'train_batch_size' : 64,
# 'test_batch_size' : 1000,
# 'n_epochs' : 1,
# 'seed' : 0,
# 'log_interval' : 100,
# 'save_model' : False,
# 'dropout_rate' : trial.suggest_uniform('dropout_rate',0,1.0),
'lr' : trial.suggest_loguniform('lr', 1e-3, 1e-2),
'momentum' : trial.suggest_uniform('momentum', 0.4, 0.99),
'optimizer': trial.suggest_categorical('optimizer',[torch.optim.Adam,torch.optim.SGD, torch.optim.RMSprop, torch.optim.$
'activation': F.tanh}
optimizer = cfg['optimizer'](model.parameters(), lr=cfg['lr'])
#optimizer = torch.optim.Adam(model.parameters(),lr=0.001
As you can see above, I am trying to run Optuna trials to search for the optimal hyperparameters for my CNN model.
# Train the model
# use small epoch for large dataset
# An epoch is 1 run through all the training data
# losses = [] # use this array for plotting losses
for _ in range(epochs):
# using data_loader
for i, (data, labels) in enumerate(trainloader):
# Forward and get a prediction
# x is the training data which is X_train
if name.lower() == "rnn":
model.hidden = (torch.zeros(1,1,model.hidden_sz),
torch.zeros(1,1,model.hidden_sz))
y_pred = model.forward(data)
# compute loss/error by comparing predicted out vs actual labels
loss = criterion(y_pred, labels)
#losses.append(loss)
if i%10==0: # print out loss at every 10 epoch
print(f'epoch {i} and loss is: {loss}')
#Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
study = optuna.create_study(sampler=optuna.samplers.TPESampler(), direction='minimize',pruner=optuna.pruners.SuccessiveHalvingPrune$
study.optimize(autotune, n_trials=1)
BUT, when I run the above code to tune and find my optimal parameters, the following error occurred; it seems the trial failed even though I still get epoch losses and values. Please advise, thanks!
[W 2020-11-11 13:59:48,000] Trial 0 failed, because the returned value from the objective function cannot be cast to float. Returned value is: None
Traceback (most recent call last):
File "autotune2", line 481, in <module>
n_instances, n_features, scores = run_analysis()
File "autotune2", line 350, in run_analysis
print(study.best_params)
File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 67, in best_params
return self.best_trial.params
File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/study.py", line 92, in best_trial
return copy.deepcopy(self._storage.get_best_trial(self._study_id))
File "/home/shar/anaconda3/lib/python3.7/site-packages/optuna/storages/_in_memory.py", line 287, in get_best_trial
raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.
| This exception is raised because the objetive function from your study must return a float.
In your case, the problem is in this line:
study.optimize(autotune, n_trials=1)
The autotune function you defined before does not return a value and cannot be used for optimization.
How to fix?
For hyperparameter search, the autotune function must return some metric you can get after some training, like the loss or the cross-entropy.
A quick fix on your code could be something like this:
def autotune(trial):
cfg= { 'device' : "cuda" if torch.cuda.is_available() else "cpu"
...etc...
}
best_loss = 1e100  # or larger
# Train the model
for _ in range(epochs):
for i, (data, labels) in enumerate(trainloader):
... (train the model) ...
# compute loss/error by comparing predicted out vs actual labels
loss = criterion(y_pred, labels)
best_loss = min(loss.item(), best_loss)
return best_loss
There is a good example with Pythorch in the Optuna repo that uses a pythoch callback to retrieve the accuracy (but can be changed easily to use the RMSE if needed). It also uses more than one experiment and takes the median for hyperparameters.
| https://stackoverflow.com/questions/64781266/ |
How to implement Softmax regression with pytorch? | I am working on a uni assignment where I need to implement Softmax Regression with Pytorch. The assignment says:
Implement Softmax Regression as an nn.Module and pipe its output with torch.nn.Softmax.
As I am new to pytorch, I am not sure how to do it exactly. So far I have tried:
class SoftmaxRegression(nn.Module): # inheriting from nn.Module!
def __init__(self, num_labels, num_features):
super(SoftmaxRegression, self).__init__()
self.linear = torch.nn.Linear(num_labels, num_features)
def forward(self, x):
# should return the probabilities for the classes, e.g.
# tensor([[ 0.1757, 0.3948, 0.4295],
# [ 0.0777, 0.3502, 0.5721],
# ...
# not sure what to do here
does anybody have any idea how I could go about it? I am not sure what should be written in the forward method. I appreciate any help !
| As far as I understand, the assignment wants you to implement your own version of the Softmax function. But I didn't get what you mean by "pipe its output with torch.nn.Softmax". Are they asking you to return the output of your custom Softmax along with torch.nn.Softmax from your custom nn.Module? You could do this:
class SoftmaxRegression(nn.Module):
def __init__(self, dim=0):
super(SoftmaxRegression, self).__init__()
self.dim = dim
def forward(self, x):
maxes = torch.max(x, self.dim, keepdim=True)[0]  # subtract the max for numerical stability
exp_x = torch.exp(x - maxes)
sum_exp_x = torch.sum(exp_x, self.dim, keepdim=True)
value = exp_x/sum_exp_x
return value
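A quick check of the module against PyTorch's built-in softmax (using dim=1 as the class dimension for a batch of row vectors):
import torch

x = torch.randn(4, 3)
sm = SoftmaxRegression(dim=1)
print(torch.allclose(sm(x), torch.softmax(x, dim=1)))  # True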
| https://stackoverflow.com/questions/64783744/ |
Is NN just Bad at Solving this Simple Linear Problem, or is it because of Bad Training? | I was trying to train a very straightforward (I thought) NN model with PyTorch and skorch, but the bad performance really baffles me, so it would be great if you have any insight into this.
The problem is something like this: there are five objects, A, B, C, D, E (labeled by their fingerprint, e.g. (0, 0) is A, (0.2, 0.5) is B, etc.), each corresponding to a number, and the problem is to find which number each corresponds to. The training data is a list of "collections" and the corresponding sums, for example: [A, A, A, B, B] == [(0,0), (0,0), (0,0), (0.2,0.5), (0.2,0.5)] --> 15, [B, C, D, E] == [(0.2,0.5), (0.5,0.8), (0.3,0.9), (1,1)] --> 30 .... Note that the number of objects in one collection is not constant.
There is no noise or anything, so it's just a linear system that can be solved directly. So I would have thought this would be very easy for a NN to figure out. I'm actually using this example as a sanity check for a more complicated problem, but was surprised that the NN couldn't even solve this.
Now I'm just trying to pinpoint exactly where it went wrong. The model definition seems to be right, the data input is right; is the bad performance due to bad training, or are NNs just bad at these things?
here is the model definition:
class NN(nn.Module):
def __init__(
self,
input_dim,
num_nodes,
num_layers,
batchnorm=False,
activation=Tanh,
):
super(SingleNN, self).__init__()
self.get_forces = get_forces
self.activation_fn = activation
self.model = MLP(
n_input_nodes=input_dim,
n_layers=num_layers,
n_hidden_size=num_nodes,
activation=activation,
batchnorm=batchnorm,
)
def forward(self, batch):
if isinstance(batch, list):
batch = batch[0]
with torch.enable_grad():
fingerprints = batch.fingerprint.float()
fingerprints.requires_grad = True
#index of the current "collection" in the training list
idx = batch.idx
sorted_idx = torch.unique_consecutive(idx)
o = self.model(fingerprints)
total = scatter(o, idx, dim=0)[sorted_idx]
return total
@property
def num_params(self):
return sum(p.numel() for p in self.parameters())
class MLP(nn.Module):
def __init__(
self,
n_input_nodes,
n_layers,
n_hidden_size,
activation,
batchnorm,
n_output_nodes=1,
):
super(MLP, self).__init__()
if isinstance(n_hidden_size, int):
n_hidden_size = [n_hidden_size] * (n_layers)
self.n_neurons = [n_input_nodes] + n_hidden_size + [n_output_nodes]
self.activation = activation
layers = []
for _ in range(n_layers - 1):
layers.append(nn.Linear(self.n_neurons[_], self.n_neurons[_ + 1]))
layers.append(activation())
if batchnorm:
layers.append(nn.BatchNorm1d(self.n_neurons[_ + 1]))
layers.append(nn.Linear(self.n_neurons[-2], self.n_neurons[-1]))
self.model_net = nn.Sequential(*layers)
def forward(self, inputs):
return self.model_net(inputs)
and the skorch part is straightforward
model = NN(2, 100, 2)
net = NeuralNetRegressor(
module=model,
...
)
net.fit(train_dataset, None)
For a test run, the dataset looks like the following (16 collections in total):
[[0.7484336 0.5656401]
[0. 0. ]
[0. 0. ]
[0. 0. ]]
[[1. 1.]
[0. 0.]
[0. 0.]]
[[0.51311415 0.67012525]
[0.51311415 0.67012525]
[0. 0. ]
[0. 0. ]]
[[0.51311415 0.67012525]
[0.7484336 0.5656401 ]
[0. 0. ]]
[[0.51311415 0.67012525]
[1. 1. ]
[0. 0. ]
[0. 0. ]]
[[0.51311415 0.67012525]
[0.51311415 0.67012525]
[0. 0. ]
[0. 0. ]
[0. 0. ]
[0. 0. ]
[0. 0. ]
[0. 0. ]]
[[0.51311415 0.67012525]
[1. 1. ]
[0. 0. ]
[0. 0. ]
[0. 0. ]
[0. 0. ]]
....
with corresponding total:
[10, 11, 14, 14, 17, 18, ...]
It's easy to tell which objects, and how many of them, are in each collection just by eyeballing it
and the training process looks like:
epoch train_energy_mae train_loss cp dur
------- ------------------ ------------ ---- ------
1 4.9852 0.5425 + 0.1486
2 16.3659 4.2273 0.0382
3 6.6945 0.7403 0.0025
4 7.9199 1.2694 0.0024
5 12.0389 2.4982 0.0024
6 9.9942 1.8391 0.0024
7 5.6733 0.7528 0.0024
8 5.7007 0.5166 0.0024
9 7.8929 1.0641 0.0024
10 9.2560 1.4663 0.0024
11 8.5545 1.2562 0.0024
12 6.7690 0.7589 0.0024
13 5.3769 0.4806 0.0024
14 5.1117 0.6009 0.0024
15 6.2685 0.8831 0.0024
....
290 5.1899 0.4750 0.0024
291 5.1899 0.4750 0.0024
292 5.1899 0.4750 0.0024
293 5.1899 0.4750 0.0024
294 5.1899 0.4750 0.0025
295 5.1899 0.4750 0.0025
296 5.1899 0.4750 0.0025
297 5.1899 0.4750 0.0025
298 5.1899 0.4750 0.0025
299 5.1899 0.4750 0.0025
300 5.1899 0.4750 0.0025
301 5.1899 0.4750 0.0024
302 5.1899 0.4750 0.0025
303 5.1899 0.4750 0.0024
304 5.1899 0.4750 0.0024
305 5.1899 0.4750 0.0025
306 5.1899 0.4750 0.0024
307 5.1899 0.4750 0.0025
You can see that it just stops improving after a while.
I can confirm that the NN does give different results for different fingerprints, but somehow the final predicted value is just never good enough.
I have tried different NN sizes, learning rates, batch sizes, activation functions (tanh, relu, etc.) and none of them seem to help. Do you have any insight into this? Is there anything I did wrong or could try, or are NNs just bad at this kind of task?
| First thing I've noticed: super(SingleNN, self).__init__() should be super(NN, self).__init__() instead. Change that and let me know if you still get any errors.
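Side note: on Python 3 the zero-argument form of super avoids this class of copy-paste bug entirely:
import torch.nn as nn

class NN(nn.Module):
    def __init__(self, input_dim, num_nodes, num_layers):
        super().__init__()  # no class name needed, so it can't go stale when the class is renamed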
| https://stackoverflow.com/questions/64795826/ |
Extracting feature vector for grey images via ResNet18: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224] | I have 600x800 images that have only 1 channel. I am trying to use a pre-trained ResNet18 to extract their features; however, the code expects 3 channels:
import torch
import torchvision
import torchvision.models as models
from PIL import Image
img = Image.open("labeled-data/train_moth/moth/frame163.png")
# Load the pretrained model
model = models.resnet18(pretrained=True)
# Use the model object to select the desired layer
layer = model._modules.get('avgpool')
# Set model to evaluation mode
model.eval()
transforms = torchvision.transforms.Compose([
    torchvision.transforms.Resize(256),
    torchvision.transforms.CenterCrop(224),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def get_vector(image):
    # Create a PyTorch tensor with the transformed image
    t_img = transforms(image)
    t_img = torch.cat((t_img, t_img, t_img), 0)
    # Create a vector of zeros that will hold our feature vector
    # The 'avgpool' layer has an output size of 512
    my_embedding = torch.zeros(512)
    # Define a function that will copy the output of a layer
    def copy_data(m, i, o):
        my_embedding.copy_(o.flatten())  # <-- flatten
    # Attach that function to our selected layer
    h = layer.register_forward_hook(copy_data)
    # Run the model on our transformed image
    with torch.no_grad():  # <-- no_grad context
        model(t_img.unsqueeze(0))  # <-- unsqueeze
    # Detach our copy function from the layer
    h.remove()
    # Return the feature vector
    return my_embedding
Here's the error I am getting:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-59ab45f8c1e6> in <module>
42
43
---> 44 pic_vector = get_vector(img)
<ipython-input-5-59ab45f8c1e6> in get_vector(image)
21 def get_vector(image):
22 # Create a PyTorch tensor with the transformed image
---> 23 t_img = transforms(image)
24 t_img = torch.cat((t_img, t_img, t_img), 0)
25 # Create a vector of zeros that will hold our feature vector
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
59 def __call__(self, img):
60 for t in self.transforms:
---> 61 img = t(img)
62 return img
63
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, tensor)
210 Tensor: Normalized Tensor image.
211 """
--> 212 return F.normalize(tensor, self.mean, self.std, self.inplace)
213
214 def __repr__(self):
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py in normalize(tensor, mean, std, inplace)
296 if std.ndim == 1:
297 std = std[:, None, None]
--> 298 tensor.sub_(mean).div_(std)
299 return tensor
300
RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]
pic_vector = get_vector(img)
Code is from: https://stackoverflow.com/a/63552285/2414957
I thought using
t_img = torch.cat((t_img, t_img, t_img), 0)
would be helpful but I was wrong.
Here's a bit about the image:
$ identify frame163.png
frame163.png PNG 800x600 800x600+0+0 8-bit Gray 256c 175297B 0.000u 0:00.000
| Most models (almost all of them) in the torchvision module expect their input to have 3 channels.
So whenever you use a pretrained model, just convert your image to RGB.
Looking at your code,
just change this
img = Image.open("labeled-data/train_moth/moth/frame163.png")
to this
img = Image.open("labeled-data/train_moth/moth/frame163.png").convert('RGB')
The line above simply stacks your grayscale image into 3 identical channels.
The second option you have is to redefine the model's first convolution so it takes a single input channel:
model = models.resnet18(pretrained=True)
model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
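Optionally, you can keep most of the pretrained signal by collapsing the original RGB filters into a single channel (a common trick, not part of the original answer; note that the weights are captured before the layer is replaced):
pretrained_w = models.resnet18(pretrained=True).conv1.weight.data  # shape (64, 3, 7, 7)
model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
model.conv1.weight.data = pretrained_w.sum(dim=1, keepdim=True)    # shape (64, 1, 7, 7)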
Please vote if you find this useful.
| https://stackoverflow.com/questions/64796538/ |
Build KNN graph over some subset of Node features | I have a point cloud that I want to use a graph neural network on. Each point in the point cloud is characterised by its positional coordinates as well as its color, so a single node is (X, Y, Z, C).
Now I want to apply an edge convolution on this (as described in the DGL EdgeConv example). To do that, I should build a nearest-neighbors graph on (X, Y, Z) (and not on C), then use all 4 properties as features for my neural network.
What would be a clean and efficient way to do this? (I have a lot of data so I want to batch and collate well)
| Supposing you have a tensor pc of shape (NUM_POINTS, 4) where each row is (X, Y, Z, C), then you could use sklearn as follows:
from sklearn.neighbors import NearestNeighbors
import dgl
k = 3 # number of neighbours you want
neigh = NearestNeighbors(n_neighbors=k)
neigh.fit(pc[:, :3].numpy()) # selects only (X, Y, Z)
knn = neigh.kneighbors_graph()
graph = dgl.from_scipy(knn)
graph.ndata['x'] = pc
I would recommend saving these graphs to disk, so they are not computed each time you train etc.
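A minimal sketch of that save/load step (assuming a recent DGL; older releases expose the same helpers under dgl.data.utils, and the file name here is just a placeholder):
dgl.save_graphs('point_cloud_graphs.bin', [graph])     # build once, save to disk
graphs, _ = dgl.load_graphs('point_cloud_graphs.bin')  # reload at training time
graph = graphs[0]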
| https://stackoverflow.com/questions/64800266/ |
trying to build pytorch 1.0.0 cuda 10.2 with support for old gpu (3.0) | I'm playing with a couple of projects that explicitly require pytorch == 1.0.0, but I have an old graphics card that only supports CUDA compute capability 3.0, so I'm using the CPU, which is very slow. Since the graphics card is a dual GPU, I decided to give it a try and build PyTorch from source with support for 3.0 (I plan to update the PC, but that's not going to happen anytime soon).
I am using Docker to do the build; in particular, I tried to modify an existing Dockerfile from build-pytorch. On the host system I am using debian/sid with CUDA 10.2 and cuDNN 7.6 installed. I'm not sure if I can downgrade CUDA, and I don't know whether the versions in the container must exactly match the host (as they must for the NVIDIA drivers).
Gist of the modified Dockerfile
The first thing I noticed when updating the versions is that the package cuda-cublas-dev-10-2 was not found; the latest version available was 10-0:
CUBLAS packaging changed in CUDA 10.1 to be outside of the toolkit installation path
If I install cublas version 10-0, or if I don't install it at all, no header files are found (error below). If I install the recommended libcublas-dev version, the build continues for a while with some warnings (below), but then stops with the error below.
I searched for the error online but did not find anything specific. If I understand correctly, a function is declared more than once, and the choice is ambiguous when it is called, but I have not yet investigated the sources.
I would like to know if anyone has run into this error before and knows how to fix it.
libcublas-dev installed error:
[ 67%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/caffe2_gpu_generated_SparseCUDABlas.cu.o
/pytorch/aten/src/ATen/native/sparse/cuda/SparseCUDABlas.cu(58): error: more than one instance of function "at::native::sparse::cuda::cusparseGetErrorString" matches the argument list:
function "cusparseGetErrorString(cusparseStatus_t)"
function "at::native::sparse::cuda::cusparseGetErrorString(cusparseStatus_t)"
argument types are: (cusparseStatus_t)
1 error detected in the compilation of "/tmp/tmpxft_00004ccc_00000000-6_SparseCUDABlas.cpp1.ii".
CMake Error at caffe2_gpu_generated_SparseCUDABlas.cu.o.Release.cmake:279 (message):
Error generating file
/pytorch/build/caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/./caffe2_gpu_generated_SparseCUDABlas.cu.o
caffe2/CMakeFiles/caffe2_gpu.dir/build.make:1260: recipe for target 'caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/sparse/cuda/caffe2_gpu_generated_SparseCUDABlas.cu.o' failed
warnings:
ptxas warning : Too big maxrregcount value specified 96, will be ignored
missing header error:
Scanning dependencies of target caffe2_pybind11_state
[ 59%] Building CXX object caffe2/CMakeFiles/caffe2_pybind11_state.dir/python/pybind_state.cc.o
In file included from /pytorch/aten/src/THC/THC.h:4:0,
from /pytorch/torch/lib/THD/../THD/base/TensorDescriptor.h:6,
from /pytorch/torch/lib/THD/../THD/base/TensorDescriptor.hpp:6,
from /pytorch/torch/lib/THD/../THD/THD.h:14,
from /pytorch/torch/lib/THD/base/DataChannelRequest.h:3,
from /pytorch/torch/lib/THD/base/DataChannelRequest.hpp:6,
from /pytorch/torch/lib/THD/base/DataChannelRequest.cpp:1:
/pytorch/build/caffe2/aten/src/THC/THCGeneral.h:17:23: fatal error: cublas_v2.h: No such file or directory
compilation terminated.
make[2]: *** [caffe2/torch/lib/THD/CMakeFiles/THD.dir/base/DataChannelRequest.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
| Apparently the problem was that both libcusparse and aten/src/ATen/native/sparse/cuda/SparseCUDABlas.cu implement cusparseGetErrorString(), and for CUDA versions >= 10.2 the one in the library should be used.
--- aten/src/ATen/native/sparse/cuda/SparseCUDABlas.cu.orig 2020-11-16 12:13:17.680023134 +0000
+++ aten/src/ATen/native/sparse/cuda/SparseCUDABlas.cu 2020-11-16 12:13:45.158407583 +0000
@@ -9,7 +9,7 @@
namespace at { namespace native { namespace sparse { namespace cuda {
-
+#if 0
std::string cusparseGetErrorString(cusparseStatus_t status) {
switch(status)
{
@@ -51,6 +51,7 @@
}
}
}
+#endif
inline void CUSPARSE_CHECK(cusparseStatus_t status)
{
I haven't yet checked whether it works at runtime, but the build is successful.
| https://stackoverflow.com/questions/64853878/ |
How to convert pip install to Poetry file? | With hours of research, I still can't figure out a way to convert this pip install cmd to pyproject.toml
file. I'm trying to install PyTorch.
pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
This is what I got at the moment (Completely Wrong!)
[tool.poetry]
name = "poetry-test"
version = "0.1.0"
description = ""
authors = ["Your Name <[email protected]>"]
[tool.poetry.dependencies]
python = "^3.8"
torch = "^1.7.0"
torchvision = "^0.8.1"
torchaudio = "^0.7.0"
[tool.poetry.dependencies.torch]
url = "https://download.pytorch.org/whl/torch_stable.html"
[tool.poetry.dev-dependencies]
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
| The previous solution didn't work for me, as mentioned in poetry issue #2543. What worked for me in the meantime was upgrading to version 1.2 (preview), which addresses that issue.
Install poetry 1.2
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python - --preview
Add the repo below to your pyproject.toml as suggested in issue #4124, and you shouldn't get a version mismatch.
[[tool.poetry.source]]
name = "torch_rep"
url = "https://eternalphane.github.io/pytorch-pypi"
After that, you can just proceed to install them
poetry add torch=1.7.0+cpu
poetry add torchvision=0.8.1+cpu
Alternatively, adding dependencies to pyproject.toml and doing poetry install should also work.
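For the pyproject.toml route, the dependency entries would look roughly like this (a sketch; the version strings and source name simply mirror the commands above):
[tool.poetry.dependencies]
python = "^3.8"
torch = { version = "1.7.0+cpu", source = "torch_rep" }
torchvision = { version = "0.8.1+cpu", source = "torch_rep" }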
| https://stackoverflow.com/questions/64871630/ |
LU is not same as A when I used torch.solver | let A=[[3,2],[1,-3]] and B=[[3],[-10]]
and I solve the equation AX=B using torch.solve:
X, LU = torch.solve(B,A)
Then I got X=[[-1],[3]] and LU=[[3,2],[0.333,-3.666]].
According to the definition of LU decomposition, shouldn't LU be the same as A? However, they aren't the same.
Can anyone explain this?
Thank you
| The representation you got is a compact way of representing the lower triangular matrix L and the upper triangular matrix U. You can use torch.tril and torch.triu to get these matrices explicitly:
L = torch.tril(LU, -1) + torch.eye(LU.shape[-1])
U = torch.triu(LU)
verify:
In [*]: L
Out[*]:
tensor([[1.0000, 0.0000],
[0.3333, 1.0000]])
In [*]: U
Out[*]:
tensor([[ 3.0000, 2.0000],
[ 0.0000, -3.6667]])
And the product is indeed equal to A:
In [*]: torch.dist(L @ U , A)
Out[*]: tensor(0.)
| https://stackoverflow.com/questions/64874981/ |
What is the correct input to LSTM? | I have tensors of varying length. These tensors hold data for different time periods. My aim is to get the final output of the LSTM.
torch.randn(4)     # Time 1
torch.randn(3, 4)  # Time 2
torch.randn(4, 4)  # Time 3
This is my data. What is the input to the LSTM from here? My aim is to get the final output from the LSTM.
For example, this is what I did
out_position = self.linear_out_position(features)
_, (hn, _) = self.lstm(out_position)
output = self.ffn(hn)
However, I am getting a result for each tensor. How can I get only the final result? Please, I need your help.
| You can access the hidden state of the last layer as hn[-1]:
output = self.ffn(hn[-1])
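To see why that index is the right one (a standalone sketch with made-up sizes): hn is shaped (num_layers, batch, hidden_size), so hn[-1] is the last layer's hidden state at the final time step.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=8, num_layers=2, batch_first=True)
x = torch.randn(1, 5, 4)   # (batch, seq_len, features)
_, (hn, _) = lstm(x)
print(hn.shape)            # torch.Size([2, 1, 8]) -> (num_layers, batch, hidden_size)
final = hn[-1]             # (1, 8): this is what goes into self.ffn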
| https://stackoverflow.com/questions/64875981/ |
How to run a pytorch project with CPU? | A PyTorch project is supposed to run on a GPU. I want to run it on my laptop with only a CPU. There are a lot of places calling .cuda() on models, tensors, etc., which fail to execute when CUDA is not available. Is it possible to do this without changing the code everywhere?
| Here's the simplest fix I can think of:
1. Put the following line near the top of your code:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
2. Do a global replace. Change .cuda() to .to(device), where device is the variable set in step 1.
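Put together, the pattern looks like this (a sketch; MyModel and batch are placeholders for your own objects):
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = MyModel().to(device)   # instead of MyModel().cuda()
batch = batch.to(device)       # instead of batch.cuda()
On a machine without a GPU, device resolves to 'cpu' and every .to(device) call keeps the data where it already is.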
| https://stackoverflow.com/questions/64897730/ |
How to write a contextmanager to throw and catch errors | I want to catch the runtime error CUDA out of memory on multiple occasions in my code, so that I can rerun the whole training workflow with a lower batch size. What is the best way to do that?
I am currently doing this:
try:
    result = model(input)
# if the GPU runs out of memory, start the experiment again with a smaller batch size
except RuntimeError as e:
    if str(e).startswith('CUDA out of memory.') and batch_size > 10:
        raise CudaOutOfMemory(e)
    else:
        raise e
I then catch the error CudaOutOfMemory outside my main function.
However, this is a pretty long piece of code that I need to repeat many times. Is there any way to make a context manager for this, such that I can instead run:
with catch_cuda_out_of_mem_error:
    result = model(input)
Edit:
I want to create a context manager instead of a function because the functions I want to wrap the "try, except" around are not always the same. In my workflow, I have many functions that use a lot of GPU memory and I would like to catch this error in any of them.
| Using a context manager is about properly acquiring and releasing a resource. Here you don't really have any resource that you are acquiring and releasing, so I don't think a context manager is appropriate. How about just using a function?
def try_compute_model(input):
    try:
        return model(input)
    # if the GPU runs out of memory, start the experiment again with a smaller batch size
    except RuntimeError as e:
        if str(e).startswith('CUDA out of memory.') and batch_size > 10:
            raise CudaOutOfMemory(e)
        else:
            raise e
Then use it like
result = try_compute_model(input)
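Since your edit says the wrapped function varies, the same idea generalizes to any callable (a sketch; CudaOutOfMemory and batch_size still come from your own code):
def run_with_oom_check(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except RuntimeError as e:
        if str(e).startswith('CUDA out of memory.') and batch_size > 10:
            raise CudaOutOfMemory(e)
        raise

result = run_with_oom_check(model, input)
This keeps the try/except in one place while letting you wrap each memory-hungry function at its call site.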
| https://stackoverflow.com/questions/64900712/ |
How to use fairseq interactive.py non-interactively? | I am trying to translate from English to Arabic using Fairseq, but the interactive.py script translates text fragments on the fly. Instead, I need it to read an input text file and write an output text file. I referred to this GitHub issue - https://github.com/pytorch/fairseq/issues/858 - but it doesn't clearly explain how to do this in general.
Any suggestions?
| fairseq-interactive can read lines from a file with the --input parameter, and it outputs translations to standard output.
So let's say I have this input text file source.txt (where every sentence to translate is on a separate line):
Hello world!
My name is John
You can run:
fairseq-interactive --input=source.txt [all-your-fairseq-parameters] > target.txt
Where > target.txt means "put in the target.txt file all (standard) output generated by fairseq-interactive". The file will be created if it doesn't exist yet.
With an English to French model it would generate a file target.txt that looks something like this (actual output may vary depending on your model, configuration and Fairseq version):
S-0 Hello world!
W-0 0.080 seconds
H-0 -0.43813419342041016 Bonj@@ our le monde !
D-0 -0.43813419342041016 Bonjour le monde !
P-0 -0.1532 -1.7157 -0.0805 -0.0838 -0.1575
S-1 My name is John
W-1 0.080 seconds
H-1 -0.3272092938423157 Je m' appelle John .
D-1 -0.3272092938423157 Je m'appelle John.
P-1 -0.3580 -0.2207 -0.0398 -0.1649 -1.0216 -0.1583
To keep only the translations (lines starting with D-), you would have to filter the content of this file. You could use this command for example:
grep -P "D-[0-9]+" target.txt | cut -f3 > only_translations.txt
but you can merge all commands in one line:
fairseq-interactive --input=source.txt [all-your-fairseq-parameters] | grep -P "D-[0-9]+" | cut -f3 > target.txt
(Actual command will depend on the actual structure of target.txt.)
Finally, know that you can use --input=- to read input from standard input.
| https://stackoverflow.com/questions/64902144/ |
PyTorch: Vectorizing patch selection from a batch of images | Suppose I have a batch of images as a tensor, for example:
images = torch.zeros(64, 3, 1024, 1024)
Now, I want to select a patch from each of those images. All the patches are of the same size, but have different starting positions for each image in the batch.
size_x = 100
size_y = 100
start_x = torch.zeros(64, dtype=torch.long)
start_y = torch.zeros(64, dtype=torch.long)
I can achieve the desired result like this:
result = []
for i in range(images.shape[0]):
    result.append(images[i, :, start_x[i]:start_x[i]+size_x, start_y[i]:start_y[i]+size_y])
result = torch.stack(result, dim=0)
The question is -- is it possible to do the same thing faster, without a loop? Perhaps there is some form of advanced indexing, or a PyTorch function that can do this?
| You can use torch.take to get rid of the for loop. But first, an array of indices should be created with this function:
def convert_inds(img_a, img_b, patch_a, patch_b, start_x, start_y):
    all_patches = np.zeros((len(start_x), 3, patch_a, patch_b))
    patch_src = np.zeros((patch_a, patch_b))
    inds_src = np.arange(patch_b)
    patch_src[:] = inds_src
    for ind, info in enumerate(zip(start_x, start_y)):
        x, y = info
        if x + patch_a + 1 > img_a: return False
        if y + patch_b + 1 > img_b: return False
        start_ind = img_b * x + y
        end_ind = img_b * (x + patch_a - 1) + y
        col_src = np.linspace(start_ind, end_ind, patch_b)[:, None]
        all_patches[ind, :] = patch_src + col_src
    return all_patches.astype(np.int)
As you can see, this function essentially creates the indices for each patch you want to slice. With this function, the problem can be easily solved by
size_x = 100
size_y = 100
start_x = torch.zeros(64)
start_y = torch.zeros(64)
images = torch.zeros(64, 3, 1024, 1024)
selected_inds = convert_inds(1024,1024,100,100,start_x,start_y)
selected_inds = torch.tensor(selected_inds)
res = torch.take(images,selected_inds)
UPDATE
OP's observation is correct: the approach above is not faster than the naive one. To avoid building the indices every time, here is another solution based on unfold.
First, build a tensor of all the possible patches
# create all possible patches
all_patches = images.unfold(2,size_x,1).unfold(3,size_y,1)
Then, slice the desired patches from all_patches
img_ind = torch.arange(images.shape[0])
selected_patches = all_patches[img_ind,:,start_x,start_y,:,:]
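A quick sanity check against the loop-based reference (start_x and start_y must hold integer indices, as in the dtype=torch.long setup above, since float tensors can't be used as slice bounds):
ref = torch.stack([images[i, :, start_x[i]:start_x[i] + size_x,
                          start_y[i]:start_y[i] + size_y]
                   for i in range(images.shape[0])], dim=0)
assert torch.equal(selected_patches, ref)
Note that unfold returns a strided view, so building all_patches allocates no new memory; only the final advanced indexing copies the selected patches.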
| https://stackoverflow.com/questions/64903931/ |
Why we need a decoder_start_token_id during generation in HuggingFace BART? | During the generation phase in HuggingFace's code:
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L88-L100
They pass in a decoder_start_token_id, and I'm not sure why they need it. And in the BART config, the decoder_start_token_id is actually 2 (https://huggingface.co/facebook/bart-base/blob/main/config.json), which is the end-of-sentence token </s>.
And I tried a simple example:
from transformers import *
import torch
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
input_ids = torch.LongTensor([[0, 894, 213, 7, 334, 479, 2]])
res = model.generate(input_ids, num_beams=1, max_length=100)
print(res)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True).strip() for g in res]
print(preds)
The results I obtained:
tensor([[ 2, 0, 894, 213, 7, 334, 479, 2]])
['He go to school.']
This does not affect the final decoded text, but it seems weird to me that the first token we generate is actually 2 (</s>).
| You can see in the code for encoder-decoder models that the input tokens for the decoder are right-shifted from the original (see function shift_tokens_right). This means that the first token to guess is always BOS (beginning of sentence). You can check that this is the case in your example.
For the decoder to understand this, we must choose a first token that is always followed by BOS, so which could it be? BOS? Obviously not because it must be followed by regular tokens. The padding token? Also not a good choice because it is followed by another padding token or by EOS (end of sentence). So, what about EOS then? Well, that makes sense because it is never followed by anything in the training set so there is no next token coming in conflict. And besides, isn't it natural that the beginning of sentence follows the end of another?
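Concretely, using the ids from your own example (this is an illustration of the shift, not the library helper itself, whose location and signature vary across transformers versions):
labels     = torch.LongTensor([[0, 894, 213, 7, 334, 479, 2]])            # <s> ... </s>
decoder_in = torch.cat([torch.LongTensor([[2]]), labels[:, :-1]], dim=1)  # right shift
# decoder_in: [[2, 0, 894, 213, 7, 334, 479]] -> starts with </s> (id 2),
# and the first token the decoder has to predict is 0, i.e. BOS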
| https://stackoverflow.com/questions/64904840/ |
how to add transformation in pytorch object detection | I'm new to PyTorch and going through the PyTorch object detection tutorial in the docs.
In their Colab version, I made the changes below to add some transformation techniques.
First, a change to the __getitem__ method of class PennFudanDataset(torch.utils.data.Dataset):
if self.transforms is not None:
    img = self.transforms(img)
    target = T.ToTensor()(target)
return img, target
In the actual documentation it is:
if self.transforms is not None:
    img, target = self.transforms(img, target)
Second, a change to the get_transform(train) function:
def get_transform(train):
    if train:
        transformed = T.Compose([
            T.ToTensor(),
            T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
            T.ColorJitter(brightness=[0.1, 0.2], contrast=[0.1, 0.2], saturation=[0, 0.2], hue=[0, 0.5])
        ])
        return transformed
    else:
        return T.ToTensor()
In the documentation it is:
def get_transform(train):
    transforms = []
    transforms.append(T.ToTensor())
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)
While running the code, I get the error below. I'm not able to figure out what I'm doing wrong.
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 272, in __getitem__
return self.dataset[self.indices[idx]]
File "<ipython-input-41-94e93ff7a132>", line 72, in __getitem__
target = T.ToTensor()(target)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 104, in __call__
return F.to_tensor(pic)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 64, in to_tensor
raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))
TypeError: pic should be PIL Image or ndarray. Got <class 'dict'>
| I believe the Pytorch transforms only work on images (PIL images or np arrays in this case) and not labels (which are dicts according to the trace). As such, I don't think you need to "tensorify" the labels as in this line target = T.ToTensor()(target) in the __getitem__ function.
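A minimal sketch of the corrected tail of __getitem__ (keeping the rest of your method unchanged): transform the image only and leave the target dict alone.
if self.transforms is not None:
    img = self.transforms(img)   # image-only transforms (ToTensor, ColorJitter, ...)
return img, target               # target stays a plain dict of tensors
Keep in mind that the tutorial's own transforms (from its helper transforms.py) take (img, target) pairs precisely so that geometric augmentations like RandomHorizontalFlip can update the boxes too; plain torchvision transforms can't do that.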
| https://stackoverflow.com/questions/64905441/ |
PyTorch: RuntimeError: Input, output and indices must be on the current device | I am running a BERT model on torch. It's a multi-class sentiment classification task with about 30,000 rows. I have already put everything on CUDA, but I'm not sure why I'm getting the following runtime error. Here is my code:
for epoch in tqdm(range(1, epochs+1)):
    model.train()
    loss_train_total = 0
    progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
    for batch in progress_bar:
        model.zero_grad()
        batch = tuple(b.to(device) for b in batch)
        inputs = {'input_ids': batch[0],
                  'attention_mask': batch[1],
                  'labels': batch[2],
                  }
        outputs = model(**inputs)
        loss = outputs[0]
        loss_train_total += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
        progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item()/len(batch))})
    torch.save(model.state_dict(), f'finetuned_BERT_epoch_{epoch}.model')
    tqdm.write(f'\nEpoch {epoch}')
    loss_train_avg = loss_train_total/len(dataloader_train)
    tqdm.write(f'Training loss: {loss_train_avg}')
    val_loss, predictions, true_vals = evaluate(dataloader_validation)
    val_f1 = f1_score_func(predictions, true_vals)
    tqdm.write(f'Validation loss: {val_loss}')
    tqdm.write(f'F1 Score (Weighted): {val_f1}')
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-67-9306225bb55a> in <module>()
17 }
18
---> 19 outputs = model(**inputs)
20
21 loss = outputs[0]
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
RuntimeError: Input, output and indices must be on the current device
Any suggestions would be appreciated. Thanks!
| You should put your model on the device, which is probably cuda:
device = "cuda:0"
model = model.to(device)
Then make sure the inputs of the model(input) are on the same device as well:
input = input.to(device)
It should work!
| https://stackoverflow.com/questions/64914598/ |
cuda is not available on my pytorch, but I can't find anything wrong with the version | For some reason I have to use CUDA version 10.0 instead of upgrading it.
The driver API version is higher than the runtime API version, but somebody told me that's OK.
Others who asked the same question eventually found that their versions didn't match; that doesn't seem to be my case.
Details:
OS:Windows10
Python 3.7&3.8 both tried
result of 'nvcc -V':
nvcc: release 10.0, V10.0.130
nvidia-smi:
NVIDIA-SMI 451.67 Driver Version: 451.67 CUDA Version: 11.0
conda list: cudatoolkit 10.0.130 0
import torch
print(torch.version.cuda)    # None
torch.cuda.is_available()    # False
| How did you install it? I assume with pip. For PyTorch I would recommend manually downloading the wheel from https://download.pytorch.org/whl/torch_stable.html and installing it with:
pip install torch-1.4.0+cu100-cp38-cp38-linux_x86_64.whl
This assumes Python 3.8 and Linux. If you use something different, make sure to select the wheel appropriate for your OS, CUDA version, and Python interpreter.
| https://stackoverflow.com/questions/64917109/ |
Pytorch: use pretrained vectors to initialize nn.Embedding, but this embedding layer is not updated during the training | I initialized nn.Embedding with some pretrained parameters (they are 128-dim vectors); the following code demonstrates how I do this:
self.myvectors = gensim.models.KeyedVectors.load_word2vec_format(cfg.vec_dir)
self.vec_weights = torch.FloatTensor(self.myvectors.vectors)
self.embeds = torch.nn.Embedding.from_pretrained(self.vec_weights)
cfg.vec_dir comes from a JSON config file; vec_dir indicates the path of the pretrained 128-dim vectors I used to initialize this layer.
After the model is trained, I print out this embedding layer and find that the parameters are exactly the same as when I initialized them, so clearly the parameters are not updated during training. Why is this happening? What should I do in order to update these vectors?
| The torch.nn.Embedding.from_pretrained classmethod by default freezes the parameters. If you want to train the parameters, you need to set the freeze keyword argument to False. See the documentation.
So you might try this instead:
self.embeds = torch.nn.Embedding.from_pretrained(self.vec_weights, freeze=False)
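You can verify the change with a quick check right after constructing the layer (a one-line sanity test, not from the original answer):
assert self.embeds.weight.requires_grad  # True once freeze=False is passed
Also make sure the optimizer was built after this layer exists (e.g. from model.parameters()), so the embedding weights are actually registered for updates.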
| https://stackoverflow.com/questions/64919743/ |
Training Data Split across GPUs in DDP Pytorch Lightning | Goal: Train a model in a Distributed Data Parallel (DDP) setting using the PyTorch Lightning framework.
Questions:
Training data partition: How is data partitioning across separate GPUs handled with PyTorch Lightning? Am I supposed to manually partition the data, or will PyTorch Lightning take care of that?
Loss averaging: Do I have to aggregate the losses myself, or is PyTorch Lightning going to do that automatically?
I have been spending time with the PyTorch Lightning code base, looking for how the DDP sync is handled, but I am unable to find the exact code. I would appreciate a clarification on this.
| Lightning handles both of these scenarios for you out of the box, but the behavior can be overridden. The code for this can be found in the official GitHub repo here.
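Conceptually, what Lightning does for data partitioning is wrap your dataloader's sampler in a DistributedSampler, so each GPU process sees a distinct shard of the dataset (a sketch of the equivalent manual setup, not Lightning's actual source):
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(train_dataset)  # splits indices across ranks
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
For loss averaging, DDP itself all-reduces the gradients during backward, so each process can compute the loss on its own shard without manual aggregation.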
| https://stackoverflow.com/questions/64920829/ |
The result is empty when prediction of Faster RCNN model (Pytorch) |
I'm trying to train a Faster RCNN model. After training, I try to predict the result for an image, but the result is empty.
My data: w: 1600, h: 800, c: 3, classes: 7, bounding boxes: (x1, y1, x2, y2)
My model is below.
My model:
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
def get_instance_segmentation_model(num_classes):
    backbone = torchvision.models.vgg16(pretrained=True).features
    backbone.out_channels = 512
    anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                       aspect_ratios=((0.5, 1.0, 2.0),))
    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                    output_size=7,
                                                    sampling_ratio=2)
    model = FasterRCNN(backbone,
                       num_classes=2,
                       rpn_anchor_generator=anchor_generator,
                       box_roi_pool=roi_pooler)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
num_classes = 2
model = get_instance_segmentation_model(num_classes)
model.to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005,
                            momentum=0.9, weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                               step_size=3,
                                               gamma=0.1)
Training:
# let's train it for 10 epochs
num_epochs = 10
for epoch in range(num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, train_data_loader, device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    evaluate(model, valid_data_loader, device=device)
prediction:
prediction
[{'boxes': tensor([], device='cuda:0', size=(0, 4)),
'labels': tensor([], device='cuda:0', dtype=torch.int64),
'scores': tensor([], device='cuda:0')}]
| You should change the number of classes to
model = FasterRCNN(backbone,
num_classes=YOUR_CLASSES+1, # +1 is for the background
rpn_anchor_generator=anchor_generator,
box_roi_pool=roi_pooler)
Remember that class 0 is reserved for the background, so your classes should start from 1.
Also, please make sure your network has converged during the training.
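When you test, also make sure the model is in eval mode and filter out low-confidence detections (a sketch; the 0.5 threshold is arbitrary):
model.eval()
with torch.no_grad():
    preds = model([img.to(device)])  # one dict per image with boxes/labels/scores
keep = preds[0]['scores'] > 0.5      # keep only confident boxes
An undertrained network tends to produce only very low-score boxes, which torchvision's internal score threshold can suppress, leaving an empty-looking result like yours.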
| https://stackoverflow.com/questions/64931112/ |
What is the most efficient way to broadcast an operation on slices of PyTorch Tensors? | I have a tensor T of shape (b, r)
I want to do an operation for each row of length r, in a way that gets parallelized by the GPU.
The naive implementation, in numpy for simplicity, would look something like:
T_dash = np.array([T[i] - np.max(T[i]) for i in range(T.shape[0])])
What would be the best way to do this?
| There's a new vmap function available (in the master branch at the time of writing, experimental) that will help do batch operations, where you define the operation to be performed for each element.
vmap can be helpful in hiding batch dimensions. In your case, it goes something like
def each_elem_fn(tensor):
    return tensor - tensor.max()  # use the tensor's own max rather than np.max

torch.vmap(each_elem_fn)(batch_of_elems)
Note: as @jodag mentioned in the comments, for simply subtracting the max you can use
T = T - T.max(dim=1, keepdim=True)[0]
However, if you want to generalize to an arbitrary custom function, you should use vmap.
| https://stackoverflow.com/questions/64934952/ |