instruction | input | output | source
---|---|---|---|
How to sample from normal distribution? | If I have an n-sized mean vector and an n-sized variance vector, how do I do this?
z ∼ N (μ, σ)
import torch
x = torch.randn(3, 3)
mu = x.mean()
sigma = x.var()
What do I do to get z?
| If you want to sample from a normal distribution with mean mu and std sigma then you can simply
z = torch.randn_like(mu) * sigma + mu
If you sample many such z, their mean and std will converge to mu and sigma respectively:
mu = torch.arange(10.)
Out[]: tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
sigma = 5. - 0.5 * torch.arange(10.)
Out[]: tensor([5.0000, 4.5000, 4.0000, 3.5000, 3.0000, 2.5000, 2.0000, 1.5000, 1.0000, 0.5000])
z = torch.randn(10, 1000000) * sigma[:, None] + mu[:, None]
z.mean(dim=1)
Out[]:
tensor([-5.4823e-03, 1.0011e+00, 1.9982e+00, 2.9985e+00, 4.0017e+00,
4.9972e+00, 6.0010e+00, 7.0004e+00, 7.9996e+00, 9.0006e+00])
z.std(dim=1)
Out[]:
tensor([4.9930, 4.4945, 4.0021, 3.5013, 3.0005, 2.4986, 1.9997, 1.4998, 0.9990,
0.5001])
As you can see when you sample 1,000,000 elements from the distribution the sample mean and std are close to the original mu and sigma you started with.
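For completeness (not part of the original answer), torch.distributions provides the same sampling as a one-liner; Normal(loc, scale) broadcasts over the parameter vectors:
import torch
mu = torch.arange(10.)
sigma = 5. - 0.5 * torch.arange(10.)
dist = torch.distributions.Normal(mu, sigma)
z = dist.sample((1000000,))  # shape: (1000000, 10)
print(z.mean(dim=0))  # close to mu
print(z.std(dim=0))   # close to sigma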
| https://stackoverflow.com/questions/59044629/ |
what is the default weight initialization used in Pytorch embedding layer? | When we create an embedding layer using the class torch.nn.Embedding, how are the weights initialized ? Is uniform, normal or initialization techniques like He or Xavier used by default?
| In nn.Embedding the weights are, by default, initialized from the standard normal distribution N(0, 1). You can check it in the reset_parameters() method:
def reset_parameters(self):
init.normal_(self.weight)
...
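A quick empirical check of this (a minimal sketch): the freshly initialized weights should have roughly zero mean and unit std:
import torch.nn as nn
emb = nn.Embedding(1000, 100)
# default init is N(0, 1), so these should print values close to 0.0 and 1.0
print(emb.weight.mean().item(), emb.weight.std().item())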
| https://stackoverflow.com/questions/59045488/ |
Use of torchvision.utils.save_image twice on the same image tensor makes the second save not work. What's going on? | (Fast Gradient Sign Attack method detailed here: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html)
I have a trained classifier with >90% accuracy which I am using to create these adversarial examples, then I am using torchvision.utils.save_image to save the images to different folders.
The folder hierarchy is as follows:
FOLDER_1
original_image.jpg (1)
perturbed_image.jpg (2)
FOLDER_2
perturbed_image.jpg (3)
Here (2) and (3) are the same image tensor, which is the sum of the original image tensor and a perturbation image tensor---I just want to save images that fooled the classifier twice. What I'm finding is that (1) and (2) print O.K., but (3) only prints the perturbation image tensor (it subtracted the original image tensor!). So when I open up (2) I see my original picture with all the noise on top (random RGB pixel switches from the FGSM attack), but when I open up the (3) I see a blank canvas with those random RGB pixel switches ONLY.
Since I am printing the same variable (perturbed_data) twice, I don't understand why torchvision.utils.save_image is choosing to subtract the perturbation image tensor the second time I call it. The code for what I'm describing is below, and data is the original image tensor.
epsilon = 0.5
# Collect datagrad
data_grad = data.grad.data
# Call FGSM Attack
perturbed_data = fgsm_attack(data, epsilon, data_grad)
# Re-classify the perturbed image
perturbed_output = model(perturbed_data)
perturbed_output = torch.sigmoid(perturbed_output)
perturbed_output = perturbed_output.max(1, keepdim=True)[1]
max_pred = perturbed_output.item()
final_pred = torch.tensor([0, 0]).to(device)
final_pred[max_pred] = 1
# Store all original and perturbed images, regardless of model prediction
torchvision.utils.save_image(data, "./FOLDER_1/original.jpg")
torchvision.utils.save_image(perturbed_data, "./FOLDER_1/perturbed_image.jpg")
# If the perturbed image fools our classifier, put a copy of it in FOLDER_2
if not torch.all(torch.eq(final_pred, target)):
torchvision.utils.save_image(perturbed_data, "./FOLDER_2/perturbed_image.jpg")
I'm almost sure that this is a torchvision bug, but I thought I would ask here before submitting a bug report. Maybe someone sees something I don't. I've also attached an example of (2) and (3) for visualization. The first image is in the correct format, but the second one prints without the original image tensor.
| It turns out torchvision.utils.save_image modifies the input tensor. A workaround is to make a real copy of the tensor somewhere before calling torchvision.utils.save_image, along the lines of:
perturbed_data_copy = perturbed_data.clone()
Note that a plain assignment (perturbed_data_copy = perturbed_data) only creates a second reference to the same tensor, not a copy. With the clone, you can safely save the perturbed image twice if on the second call you use perturbed_data_copy instead of perturbed_data (which was modified by torchvision.utils.save_image). I will be submitting a bug report and tagging this post. Thanks @Mat for pointing this out!
| https://stackoverflow.com/questions/59054886/ |
Transform flair language model tensors for viewing in TensorBoard Projector | I want to convert "vectors,"
vectors = [token.embedding for token in sentence]
print(type(vectors))
<class 'list'>
print(vectors)
[tensor([ 0.0077, -0.0227, -0.0004, ..., 0.1377, -0.0003, 0.0028]),
...
tensor([ 0.0003, -0.0461, 0.0043, ..., -0.0126, -0.0004, 0.0142])]
to
0.0077 -0.0227 -0.0004 ... 0.1377 -0.0003 0.0028
...
0.0003 -0.0461 0.0043 ... -0.0126 -0.0004 0.0142
and write that to a TSV.
Aside: those embeddings are from flair (https://github.com/zalandoresearch/flair): how can I get the full output, not the -0.0004 ... 0.1377 abbreviated output?
| OK, I dug around ...
It turns out those are PyTorch tensors (Flair uses PyTorch). For a simple conversion to plain Python lists, use tolist(), a PyTorch tensor method (per the PyTorch docs at https://pytorch.org/docs/stable/tensors.html#torch.Tensor.tolist and this StackOverflow answer):
>>> import torch
>>> a = torch.randn(2, 2)
>>> print(a)
tensor([[-2.1693, 0.7698],
[ 0.0497, 0.8462]])
>>> a.tolist()
[[-2.1692984104156494, 0.7698001265525818],
[0.049718063324689865, 0.8462421298027039]]
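Regarding the aside about the abbreviated "-0.0004 ... 0.1377" output: that is just PyTorch's print truncation, which can be turned off (this is standard PyTorch, not Flair-specific):
import torch
torch.set_printoptions(profile="full")     # print whole tensors
# torch.set_printoptions(profile="default")  # restore truncated printing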
Per my original question, here's how to convert those data to plain text and write them to TSV files.
from flair.embeddings import FlairEmbeddings, Sentence
from flair.models import SequenceTagger
from flair.embeddings import StackedEmbeddings
embeddings_f = FlairEmbeddings('pubmed-forward')
embeddings_b = FlairEmbeddings('pubmed-backward')
sentence = Sentence('The RAS-MAPK signalling cascade serves as a central node in transducing signals from membrane receptors to the nucleus.')
tagger = SequenceTagger.load('ner')
tagger.predict(sentence)
embeddings_f.embed(sentence)
stacked_embeddings = StackedEmbeddings([
embeddings_f,
embeddings_b,
])
stacked_embeddings.embed(sentence)
# for token in sentence:
# print(token)
# print(token.embedding)
# print(token.embedding.shape)
tokens = [token for token in sentence]
print(tokens)
'''
[Token: 1 The, Token: 2 RAS-MAPK, Token: 3 signalling, Token: 4 cascade, Token: 5 serves, Token: 6 as, Token: 7 a, Token: 8 central, Token: 9 node, Token: 10 in, Token: 11 transducing, Token: 12 signals, Token: 13 from, Token: 14 membrane, Token: 15 receptors, Token: 16 to, Token: 17 the, Token: 18 nucleus.]
'''
## https://www.geeksforgeeks.org/python-string-split/
tokens = [str(token).split()[2] for token in sentence]
print(tokens)
'''
['The', 'RAS-MAPK', 'signalling', 'cascade', 'serves', 'as', 'a', 'central', 'node', 'in', 'transducing', 'signals', 'from', 'membrane', 'receptors', 'to', 'the', 'nucleus.']
'''
tensors = [token.embedding for token in sentence]
print(tensors)
'''
[tensor([ 0.0077, -0.0227, -0.0004, ..., 0.1377, -0.0003, 0.0028]),
tensor([-0.0007, -0.1601, -0.0274, ..., 0.1982, 0.0013, 0.0042]),
tensor([ 4.2534e-03, -3.1018e-01, -3.9660e-01, ..., 5.9336e-02, -9.4445e-05, 1.0025e-02]),
tensor([ 0.0026, -0.0087, -0.1398, ..., -0.0037, 0.0012, 0.0274]),
tensor([-0.0005, -0.0164, -0.0233, ..., -0.0013, 0.0039, 0.0004]),
tensor([ 3.8261e-03, -7.6409e-02, -1.8632e-02, ..., -2.8906e-03, -4.4556e-04, 5.6909e-05]),
tensor([ 0.0035, -0.0207, 0.1700, ..., -0.0193, 0.0017, 0.0006]),
tensor([ 0.0159, -0.4097, -0.0489, ..., 0.0743, 0.0005, 0.0012]),
tensor([ 9.7725e-03, -3.3817e-01, -2.2848e-02, ..., -6.6284e-02, 2.3646e-04, 1.0505e-02]),
tensor([ 0.0219, -0.0677, -0.0154, ..., 0.0102, 0.0066, 0.0016]),
tensor([ 0.0092, -0.0431, -0.0450, ..., 0.0060, 0.0002, 0.0005]),
tensor([ 0.0047, -0.2732, -0.0408, ..., 0.0136, 0.0005, 0.0072]),
tensor([ 0.0072, -0.0173, -0.0149, ..., -0.0013, -0.0004, 0.0056]),
tensor([ 0.0086, -0.1151, -0.0629, ..., 0.0043, 0.0050, 0.0016]),
tensor([ 7.6452e-03, -2.3825e-01, -1.5683e-02, ..., -5.4974e-04, -1.4646e-04, 6.6120e-03]),
tensor([ 0.0038, -0.0354, -0.1337, ..., 0.0060, -0.0004, 0.0102]),
tensor([ 0.0186, -0.0151, -0.0641, ..., 0.0188, 0.0391, 0.0069]),
tensor([ 0.0003, -0.0461, 0.0043, ..., -0.0126, -0.0004, 0.0142])]
'''
# ----------------------------------------
## Write those data to TSV files.
## https://stackoverflow.com/a/29896136/1904943
import csv
metadata_f = 'metadata.tsv'
tensors_f = 'tensors.tsv'
with open(metadata_f, 'w', encoding='utf8', newline='') as tsv_file:
tsv_writer = csv.writer(tsv_file, delimiter='\t', lineterminator='\n')
for token in tokens:
## Assign to a dummy variable ( _ ) to suppress character counts;
## if I use (token), rather than ([token]), I get spaces between all characters:
_ = tsv_writer.writerow([token])
## metadata.tsv :
'''
The
RAS-MAPK
signalling
cascade
serves
as
a
central
node
in
transducing
signals
from
membrane
receptors
to
the
nucleus.
'''
with open(metadata_f, 'w', encoding='utf8', newline='') as tsv_file:
tsv_writer = csv.writer(tsv_file, delimiter='\t', lineterminator='\n')
_ = tsv_writer.writerow(tokens)
## metadata.tsv :
'''
The RAS-MAPK signalling cascade serves as a central node in transducing signals from membrane receptors to the nucleus.
'''
with open(tensors_f, 'w', encoding='utf8', newline='') as tsv_file:
tsv_writer = csv.writer(tsv_file, delimiter='\t', lineterminator='\n')
for token in sentence:
embedding = token.embedding
_ = tsv_writer.writerow(embedding.tolist())
## tensors.tsv (18 lines: one embedding per token in metadata.tsv):
## note: enormous output, even for this simple sentence.
'''
0.007691788021475077 -0.02268664352595806 -0.0004340760060586035 ...
'''
Last, my intention for all of that was to load contextual language embeddings (Flair, etc.) into TensorFlow's Embedding Projector. It turns out all I needed to do was to convert the data (here, Flair tensors) to NumPy arrays and load them into a TensorFlow TensorBoard instance (no need for TSV files!).
I describe that in detail in my blog post, here: Visualizing Language Model Tensors (Embeddings) in TensorFlow's TensorBoard [TensorBoard Projector: PCA; t-SNE; ...].
| https://stackoverflow.com/questions/59063417/ |
Is this the way to create a PyTorch scalar? | I'm new to PyTorch, and just wanted to kindly confirm if the following creates scalars with values, 1, 2, and 3, respectively?
import torch
a = torch.tensor(1)
b = torch.tensor(2)
c = torch.tensor(3)
Thanks.
| From the example in the documentation for torch.tensor:
>>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
tensor(3.1416)
So yes: passing a plain Python number to torch.tensor creates the corresponding zero-dimensional (scalar) tensor.
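You can verify the zero-dimensionality directly (a small sketch):
import torch
a = torch.tensor(1)
print(a.dim())   # 0 -> zero-dimensional, i.e. a scalar
print(a.shape)   # torch.Size([])
print(a.item())  # 1, back as a plain Python number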
| https://stackoverflow.com/questions/59072659/ |
How to get rid of every column that are filled with zero from a Pytorch tensor? | I have a pytorch tensor A like below:
A =
tensor([[ 4, 3, 3, ..., 0, 0, 0],
[ 13, 4, 13, ..., 0, 0, 0],
[707, 707, 4, ..., 0, 0, 0],
...,
[ 7, 7, 7, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[195, 195, 195, ..., 0, 0, 0]], dtype=torch.int32)
I would like to:
identify all the columns whose all of its entries are equal to 0
delete only those columns that has all of their entries equal to 0
I can imagine doing:
zero_list = []
for j in range(A.size()[1]):
if torch.sum(A[:,j]) == 0:
zero_list.append(j)  # append() mutates the list in place and returns None
to identify the columns that only has 0 for its elements
but I am not sure how to delete such columns filled with 0 from the original tensor.
How can I delete the columns with zero from a pytorch tensor based on the index number?
Thank you,
| It makes more sense to index the columns you want to keep instead of what you want to delete.
valid_cols = []
for col_idx in range(A.size(1)):
if not torch.all(A[:, col_idx] == 0):
valid_cols.append(col_idx)
A = A[:, valid_cols]
Or a little more cryptically
valid_cols = [col_idx for col_idx, col in enumerate(torch.split(A, 1, dim=1)) if not torch.all(col == 0)]
A = A[:, valid_cols]
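For larger tensors, a fully vectorized version of the same logic (my addition), with no Python loop:
# keep only the columns where not every entry is zero
mask = ~(A == 0).all(dim=0)
A = A[:, mask]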
| https://stackoverflow.com/questions/59076228/ |
Fail to load a .pth file (pre-trained neural network) using torch.load() on google colab | My google drive is linked to my google colab notebook. Using the pytorch function torch.load($PATH) fails to load this 219 MB file (pre-trained neural network) (https://drive.google.com/drive/folders/1-9m4aVg8Hze0IsZRyxvm5gLybuRLJHv-) which is in my google drive. However, it works fine when I do it locally on my computer. The error I get on Google Colab is (settings: Python 3.6, pytorch 1.3.1):
state_dict = torch.load(model_path)['state_dict']
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 303, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 454, in _load
return _legacy_load(f)
File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 380, in _legacy_load
with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar,
File "/usr/lib/python3.6/tarfile.py", line 1589, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/lib/python3.6/tarfile.py", line 1619, in taropen
return cls(name, mode, fileobj, **kwargs)
File "/usr/lib/python3.6/tarfile.py", line 1482, in init
self.firstmember = self.next()
File "/usr/lib/python3.6/tarfile.py", line 2297, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "/usr/lib/python3.6/tarfile.py", line 1092, in fromtarfile
buf = tarfile.fileobj.read(BLOCKSIZE)
OSError: [Errno 5] Input/output error
Any help would be much appreciated!
| It worked by uploading the file directly to Google Colab instead of loading it from Google Drive, using:
from google.colab import files
uploaded= files.upload()
I guess this solution is similar to the one proposed by @Yuri
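Another workaround that often helps with Errno 5 on a mounted Drive (an assumption on my part, not from the original answer) is to copy the file to the Colab VM's local disk first and load it from there; the path below is hypothetical:
from google.colab import drive
drive.mount('/content/drive')
# copy to the local VM disk, then load locally
!cp "/content/drive/My Drive/model.pth" /content/model.pth
state_dict = torch.load('/content/model.pth')['state_dict']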
| https://stackoverflow.com/questions/59077284/ |
Compare Number of Equal Elements in Tensors | I have two tensors of dimension 1000 * 1. I want to check how many of the 1000 elements are equal in the two tensors. I think I should be able to do this in one line like Numpy but couldn't find a similar function.
| You can just use the == operator to check for equality and then sum the resulting tensor:
# Import torch and create dummy tensors
>>> import torch
>>> A = torch.randint(2, (10,))
>>> A
tensor([0, 0, 0, 1, 0, 1, 0, 0, 1, 1])
>>> B = torch.randint(2, (10,))
>>> B
tensor([0, 1, 1, 0, 1, 0, 0, 1, 1, 0])
# Checking for number of equal values
>>> (A == B).sum()
tensor(3)
Edit:
torch.eq yields the same result. So if you for some reason prefer that:
>>> torch.eq(A, B).sum()
tensor(3)
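One caveat (my addition, not in the original answer): for floating-point tensors, exact equality is fragile, so torch.isclose is usually the safer check:
>>> X = torch.rand(10)
>>> Y = X + 1e-8
>>> torch.isclose(X, Y).sum()  # tolerant comparison instead of ==
tensor(10)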
| https://stackoverflow.com/questions/59078318/ |
Faster method of computing confusion matrix? | I am computing my confusion matrix as shown below for image semantic segmentation which is a pretty verbose approach:
def confusion_matrix(preds, labels, conf_m, sample_size):
preds = normalize(preds,0.9) # returns [0,1] tensor
preds = preds.flatten()
labels = labels.flatten()
for i in range(len(preds)):
if preds[i]==1 and labels[i]==1:
conf_m[0,0] += 1/(len(preds)*sample_size) # TP
elif preds[i]==1 and labels[i]==0:
conf_m[0,1] += 1/(len(preds)*sample_size) # FP
elif preds[i]==0 and labels[i]==0:
conf_m[1,0] += 1/(len(preds)*sample_size) # TN
elif preds[i]==0 and labels[i]==1:
conf_m[1,1] += 1/(len(preds)*sample_size) # FN
return conf_m
In the prediction loop:
conf_m = torch.zeros(2,2) # two classes (object or no-object)
for img, label in data:
...
out = Net(img)
conf_m = confusion_matrix(out, label, conf_m, len(data))
...
Is there a faster approach (in PyTorch) to efficiently calculate the confusion matrix for a sample of inputs for image semantic segmentation?
| I use these two functions to calculate the confusion matrix (as it is defined in sklearn):
# rewrite sklearn method to torch
def confusion_matrix_1(y_true, y_pred):
N = max(max(y_true), max(y_pred)) + 1
y_true = torch.tensor(y_true, dtype=torch.long)
y_pred = torch.tensor(y_pred, dtype=torch.long)
return torch.sparse.LongTensor(
torch.stack([y_true, y_pred]),
torch.ones_like(y_true, dtype=torch.long),
torch.Size([N, N])).to_dense()
# weird trick with bincount
def confusion_matrix_2(y_true, y_pred):
N = max(max(y_true), max(y_pred)) + 1
y_true = torch.tensor(y_true, dtype=torch.long)
y_pred = torch.tensor(y_pred, dtype=torch.long)
y = N * y_true + y_pred
y = torch.bincount(y)
if len(y) < N * N:
y = torch.cat((y, torch.zeros(N * N - len(y), dtype=torch.long)))
y = y.reshape(N, N)
return y
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix_1(y_true, y_pred)
# tensor([[2, 0, 0],
# [0, 0, 1],
# [1, 0, 2]])
Second function is faster in case of small number of classes.
%%timeit
confusion_matrix_1(y_true, y_pred)
# 102 µs ± 30.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
confusion_matrix_2(y_true, y_pred)
# 25 µs ± 149 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
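For the original segmentation use case you would just flatten the prediction and label tensors first (a sketch, assuming integer class maps):
preds = torch.randint(2, (4, 64, 64))   # e.g. binary segmentation output
labels = torch.randint(2, (4, 64, 64))
cm = confusion_matrix_2(preds.flatten().tolist(), labels.flatten().tolist())
print(cm)  # 2x2 confusion matrix in raw counts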
| https://stackoverflow.com/questions/59080843/ |
What is the best way to handle the background pixel classes (ignore_label), when training deep learning models for semantic segmentation? | I am trying to train a UNET model on the cityscapes dataset which has 20 'useful' semantic classes and a bunch of background classes that can be ignored (ex. sky, ego vehicle, mountains, street lights). To train the model to ignore these background pixels I am using the following popular solution on the internet :
I assign a common ignore_label (ex: ignore_label=255) for all
the pixels belonging to the ignore classes
Train the model using the cross_entropy loss for each pixel prediction
Provide the ignore_label parameter in the cross_entropy loss, therefore the
loss computed ignores the pixels with the unnecessary classes.
But this approach has a problem. Once trained, the model ends up classifying these background pixels as belonging to one of the 20 classes instead. This is expected as in the loss we do not penalize the model for whatever classification it makes for the background pixels.
The second obvious solution is therefore to use an extra class for all the background pixels, making it the 21st class in Cityscapes. However, here I am worried that I will 'waste' my model's capacity by teaching it to classify this additional unnecessary class.
What is the most accurate way of handling the background pixel classes ?
| The second solution is definitely the better one. The background class is an additional class, but not an unnecessary one: with it there is a clear differentiation between the classes you want to detect and the background.
In fact, assigning a dedicated class to the background is a standard, recommended procedure in segmentation, where the background simply represents everything apart from your specific classes.
| https://stackoverflow.com/questions/59083222/ |
what is the inputs to a torch.nn.gru function in pytorch? | I am using a GRU to implement an RNN. This RNN (GRU) is used after some CNN layers. Can someone please tell me what the input to the GRU is here? In particular, is the hidden size fixed?
self.gru = torch.nn.GRU(
input_size=input_size,
hidden_size=128,
num_layers=1,
batch_first=True,
bidirectional=True)
According to my understanding the input size will be the number of features; and is the hidden size for a GRU always fixed at 128? Can someone please correct me or give their feedback?
| First, GRU is not a function but a class and you are calling its constructor. You are creating an instance of class GRU here, which is a layer (or Module in pytorch).
The input_size must match the out_channels of the previous CNN layer.
None of the parameters you see is fixed. Just put another value there and it will be something else, i.e. replace the 128 with anything you like.
Even though it is called hidden_size, for a GRU this parameter also determines the output features. In other words, if you have another layer after GRU, this layer's input_size (or in_features or in_channels or whatever it is called) must match the GRU's hidden_size.
Also, have a look at the documentation. This tells you exactly what the parameters you pass to the constructor are good for. Also, it tells you what will be the expected input once you actually use your layer (via self.gru(...)) and what will be the output of that call.
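A quick shape check (a sketch): with batch_first=True the input is (batch, seq_len, input_size), and because the GRU is bidirectional the output feature size is 2 * hidden_size:
import torch
import torch.nn as nn
gru = nn.GRU(input_size=64, hidden_size=128, num_layers=1, batch_first=True, bidirectional=True)
x = torch.randn(8, 10, 64)  # (batch, seq_len, features)
out, h = gru(x)
print(out.shape)  # torch.Size([8, 10, 256]) -> 2 * hidden_size
print(h.shape)    # torch.Size([2, 8, 128])  -> (num_layers * num_directions, batch, hidden_size)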
| https://stackoverflow.com/questions/59085745/ |
PyTorch DataLoader adding extra dimension for TorchVision MNIST | I am fairly new to PyTorch and have been experimenting with the DataLoader class.
When I attempt to load the MNIST dataset, the DataLoader appears to add an additional dimension after the batch dimension. I am not sure what is causing this to occur.
import torch
from torchvision.datasets import MNIST
from torchvision import transforms
if __name__ == '__main__':
mnist_train = MNIST(root='./data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor()]))
first_x = mnist_train.data[0]
print(first_x.shape) # expect to see [28, 28], actual [28, 28]
train_loader = torch.utils.data.DataLoader(mnist_train, batch_size=200)
batch_x, batch_y = next(iter(train_loader)) # get first batch
print(batch_x.shape) # expect to see [200, 28, 28], actual [200, 1, 28, 28]
# Where is the extra dimension of 1 from?
Can anyone shed some light on the issue?
| That is the number of channels of the input image. So basically it is
batch_x.shape = (batch size, number of channels, image height, image width)
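To spell out where it comes from (my explanation, consistent with the code in the question): mnist_train.data[0] bypasses the transform, so first_x is the raw [28, 28] image, while the DataLoader goes through __getitem__, where ToTensor() converts each PIL image to a [1, 28, 28] tensor, hence the extra dimension. If you want [200, 28, 28] you can simply drop it:
batch_x, batch_y = next(iter(train_loader))
print(batch_x.shape)             # torch.Size([200, 1, 28, 28])
print(batch_x.squeeze(1).shape)  # torch.Size([200, 28, 28])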
| https://stackoverflow.com/questions/59088898/ |
How do I add some Gaussian noise to a tensor in PyTorch? | I have a tensor I created using
temp = torch.zeros(5, 10, 20, dtype=torch.float64)
## some values I set in temp
Now I want to add Gaussian noise (sampled from a normal distribution with mean 0 and variance 0.1) to each temp[i,j,k]. How do I do it? I would expect there to be a function to add noise to a tensor, but I couldn't find anything. I did find this:
How to add Poisson noise and Gaussian noise?
but it seems to be related to images.
| The function torch.randn produces a tensor with elements drawn from a Gaussian distribution of zero mean and unit variance. Multiply by sqrt(0.1) to have the desired variance.
x = torch.zeros(5, 10, 20, dtype=torch.float64)
x = x + (0.1**0.5) * torch.randn(5, 10, 20, dtype=torch.float64)  # match x's dtype
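Equivalently, torch.randn_like keeps the shape, dtype and device of the source tensor, which avoids any mismatch by construction:
x = x + (0.1**0.5) * torch.randn_like(x)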
| https://stackoverflow.com/questions/59090533/ |
In python3: strange behaviour of list(iterables) | I have a specific question regarding the behaviour of iterables in python. My iterable is a custom built Dataset class in pytorch:
import torch
from torch.utils.data import Dataset
class datasetTest(Dataset):
def __init__(self, X):
self.X = X
def __len__(self):
return len(self.X)
def __getitem__(self, x):
print('***********')
print('getitem x = ', x)
print('###########')
y = self.X[x]
print('getitem y = ', y)
return y
The weird behaviour now comes about when I initialize a specific instance of that datasetTest class. Depending on what data structure I pass as the argument X, it behaves differently when I call list(datasetTestInstance). In particular, when passing a torch.tensor as the argument there is no problem; however, when passing a dict it will throw a KeyError. The reason for this is that list(iterable) does not just call i=0, ..., len(iterable)-1; it calls i=0, ..., len(iterable). That is, it iterates up to and including the index equal to the length of the iterable. Obviously, this index is not defined in any Python data structure, as the last element always has index len(datastructure)-1 and not len(datastructure). If X is a torch.tensor or a list, no error is raised, even though I think there should be one. It will still call __getitem__ even for the (non-existent) element with index len(datasetTestInstance), but it will not compute y = self.X[len(datasetTestInstance)]. Does anyone know if PyTorch somehow handles this gracefully internally?
When passing a dict as data it will throw an error in the last iteration, when x=len(datasetTestInstance). This is actually the expected behaviour I guess. But why does this only happen for a dict and not for a list or torch.tensor?
if __name__ == "__main__":
a = datasetTest(torch.randn(5,2))
print(len(a))
print('++++++++++++')
for i in range(len(a)):
print(i)
print(a[i])
print('++++++++++++')
print(list(a))
print('++++++++++++')
b = datasetTest({0: 12, 1:35, 2:99, 3:27, 4:33})
print(len(b))
print('++++++++++++')
for i in range(len(b)):
print(i)
print(b[i])
print('++++++++++++')
print(list(b))
You could try out that snippet of code if you want to understand better what I have observed.
My questions are:
1.) Why does list(iterable) iterate until (including) len(iterable)? A for loop doesn't do that.
2.) In the case of a torch.tensor or a list passed as data X: why does it not throw an error even when calling the __getitem__ method for the index len(datasetTestInstance), which should actually be out of range since it is not defined as an index in the tensor/list? Or, in other words, when the index len(datasetTestInstance) has been reached and the __getitem__ method is entered, what happens exactly? It apparently doesn't make the call 'y = self.X[x]' anymore (otherwise there would be an IndexError), but it DOES enter the __getitem__ method, which I can see because it prints the index x from within __getitem__. So what happens in that method? And why does it behave differently depending on whether a torch.tensor/list or a dict was passed?
| This isn't really a pytorch specific issue, it's a general python question.
You're constructing a list using list(iterable) where an iterable class is one which implements sequence semantics.
Take a look here at the expected behavior of __getitem__ for sequence types (most relevant parts are in bold)
object.__getitem__(self, key)
Called to implement evaluation of
self[key]. For sequence types, the accepted keys should be integers
and slice objects. Note that the special interpretation of negative
indexes (if the class wishes to emulate a sequence type) is up to the
__getitem__() method. If key is of an inappropriate type, TypeError may be raised; if of a value outside the set of indexes for the
sequence (after any special interpretation of negative values),
IndexError should be raised. For mapping types, if key is missing (not
in the container), KeyError should be raised.
Note: for loops expect that an IndexError will be raised for illegal
indexes to allow proper detection of the end of the sequence.
The problem here is that for sequence types Python expects an IndexError when __getitem__ is invoked with an invalid index, and the list constructor relies on this behavior to detect the end of the sequence. In your example, when X is a torch.tensor or a list, self.X[x] does raise an IndexError for x == len(self); list() silently catches it as the end-of-sequence signal, which is why you see the print from __getitem__ but no crash. When X is a dict, attempting to access an invalid key causes __getitem__ to raise a KeyError instead, which isn't expected, so it isn't caught, and the construction of the list fails.
Based on this information you could do something like the following
class datasetTest:
def __init__(self):
self.X = {0: 12, 1:35, 2:99, 3:27, 4:33}
def __len__(self):
return len(self.X)
def __getitem__(self, index):
if index < 0 or index >= len(self):
raise IndexError
return self.X[index]
d = datasetTest()
print(list(d))
I can't recommend doing this in practice since it relies on your dictionary X containing only the integer keys 0, 1, ..., len(X)-1, which means it ends up behaving just like a list in most cases, so you're probably better off just using a list.
| https://stackoverflow.com/questions/59091544/ |
What is the difference between .pt, .pth and .pwf extensions in PyTorch? | I have seen in some code examples that people use .pwf as the model file saving format. But in the PyTorch documentation .pt and .pth are recommended. I used .pwf and it worked fine for a small 1->16->16 convolutional network.
My question is what is the difference between these formats? Why is .pwf extension not even recommended in PyTorch documentation and why do people still use it?
| There are no differences between the extensions that were listed: .pt, .pth, .pwf. One can use whatever extension (s)he wants. So, if you're using torch.save() for saving models, then it by default uses python pickle (pickle_module=pickle) to save the objects and some metadata. Thus, you have the liberty to choose the extension you want, as long as it doesn't cause collisions with any other standardized extensions.
Having said that, it is however not recommended to use .pth extension when checkpointing models because it collides with Python path (.pth) configuration files. Because of this, I myself use .pth.tar or .pt but not .pth, or any other extensions.
The standard way of checkpointing models in PyTorch is not finalized yet. Here is an open issue, as of this writing: Recommend a different file extension for models (.PTH is a special extension for Python) - issues/14864
It's been suggested by @soumith to use:
.pt for checkpointing models in pickle format
.ptc for checkpointing models in pytorch compiled (for JIT)
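For reference, a minimal save/load round-trip with the .pt extension (assuming a model instance named model):
torch.save(model.state_dict(), "checkpoint.pt")
model.load_state_dict(torch.load("checkpoint.pt"))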
| https://stackoverflow.com/questions/59095824/ |
In Pytorch, how to test simple image with my loaded model? | I made an alphabet classification CNN model using Pytorch, and then used that model to test it with a single image that it has never seen before. I extracted a bounding box in my handwriting image with opencv, but I don't know how to apply it to the model.
[image: my handwriting image with bounding boxes]
this is custom dataset
class CustomDatasetFromCSV(Dataset):
def __init__(self, csv_path, height, width, transforms=None):
"""
Args:
csv_path (string): path to csv file
height (int): image height
width (int): image width
transform: pytorch transforms for transforms and tensor conversion
"""
self.data = pd.read_csv(csv_path)
self.labels = np.asarray(self.data.iloc[:, 0])
self.height = height
self.width = width
self.transforms = transforms
def __getitem__(self, index):
single_image_label = self.labels[index]
# Read each 784 pixels and reshape the 1D array ([784]) to 2D array ([28,28])
img_as_np = np.asarray(self.data.iloc[index][1:]).reshape(28,28).astype('uint8')
# Convert image from numpy array to PIL image, mode 'L' is for grayscale
img_as_img = Image.fromarray(img_as_np)
img_as_img = img_as_img.convert('L')
# Transform image to tensor
if self.transforms is not None:
img_as_tensor = self.transforms(img_as_img)
# Return image and the label
return (img_as_tensor, single_image_label)
def __len__(self):
return len(self.data.index)
transformations = transforms.Compose([
transforms.ToTensor()
])
alphabet_from_csv = CustomDatasetFromCSV("/content/drive/My Drive/A_Z Handwritten Data.csv",
28, 28, transformations)
random_seed = 50
data_size = len(alphabet_from_csv)
indices = list(range(data_size))
split = int(np.floor(0.2 * data_size))
if True:
np.random.seed(random_seed)
np.random.shuffle(indices)
train_indices, test_indices = indices[split:], indices[:split]
train_dataset = SubsetRandomSampler(train_indices)
test_dataset = SubsetRandomSampler(test_indices)
train_loader = torch.utils.data.DataLoader(dataset = alphabet_from_csv,
batch_size = batch_size,
sampler = train_dataset)
test_loader = torch.utils.data.DataLoader(dataset = alphabet_from_csv,
batch_size = batch_size,
sampler = test_dataset)
this is my model
class ConvNet3(nn.Module):
def __init__(self, num_classes=26):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 28, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(28),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.layer2 = nn.Sequential(
nn.Conv2d(28, 56, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(56),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.fc = nn.Sequential(
nn.Dropout(p = 0.5),
nn.Linear(56 * 7 * 7, 512),
nn.BatchNorm1d(512),
nn.ReLU(),
nn.Dropout(p = 0.5),
nn.Linear(512, 26),
)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
model = ConvNet3(num_classes).to(device)
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
def train():
# train phase
model.train()
# create a progress bar
batch_loss_list = []
progress = ProgressMonitor(length=len(train_dataset))
for batch, target in train_loader:
# Move the training data to the GPU
batch, target = batch.to(device), target.to(device)
# forward propagation
output = model( batch )
# calculate the loss
loss = loss_func( output, target )
# clear previous gradient computation
optimizer.zero_grad()
# backpropagate to compute gradients
loss.backward()
# update model weights
optimizer.step()
# update progress bar
batch_loss_list.append(loss.item())
progress.update(batch.shape[0], sum(batch_loss_list)/len(batch_loss_list) )
def test():
# test phase
model.eval()
correct = 0
# We don't need gradients for test, so wrap in
# no_grad to save memory
with torch.no_grad():
for batch, target in test_loader:
# Move the training batch to the GPU
batch, target = batch.to(device), target.to(device)
# forward propagation
output = model( batch )
# get prediction
output = torch.argmax(output, 1)
# accumulate correct number
correct += (output == target).sum().item()
# Calculate test accuracy
acc = 100 * float(correct) / len(test_dataset)
print( 'Test accuracy: {}/{} ({:.2f}%)'.format( correct, len(test_dataset), acc ) )
for epoch in range(num_epochs):
print("{}'s try".format(int(epoch)+1))
train()
test()
print("-----------------------------------------------------------------------------")
this is my image to bound
import cv2
import matplotlib.image as mpimg
im = cv2.imread('/content/drive/My Drive/my_handwritten.jpg')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[1]
rects=[]
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
if h < 20: continue
red = (0, 0, 255)
cv2.rectangle(im, (x, y), (x+w, y+h), red, 2)
rects.append((x,y,w,h))
cv2.imwrite('my_handwritten_bounding.png', im)
img_result = []
img_for_class = im.copy()
margin_pixel = 60
for rect in rects:
#[y:y+h, x:x+w]
img_result.append(
img_for_class[rect[1]-margin_pixel : rect[1]+rect[3]+margin_pixel,
rect[0]-margin_pixel : rect[0]+rect[2]+margin_pixel])
# Draw the rectangles
cv2.rectangle(im, (rect[0], rect[1]),
(rect[0] + rect[2], rect[1] + rect[3]), (0, 0, 255), 2)
count = 0
nrows = 4
ncols = 7
plt.figure(figsize=(12,8))
for n in img_result:
count += 1
plt.subplot(nrows, ncols, count)
plt.imshow(cv2.resize(n,(28,28)), cmap='Greys', interpolation='nearest')
plt.tight_layout()
plt.show()
| You have already written the function test to test your net. The only thing you need to do is create a batch with a single image, using the same preprocessing as the images in your dataset.
def test_one_image(I, model):
'''
I - 28x28 uint8 numpy array
'''
# test phase
model.eval()
    # convert image to a float tensor, scale to [0, 1], and add batch and channel dims
    batch = torch.tensor(I / 255).float().unsqueeze(0).unsqueeze(0)
# We don't need gradients for test, so wrap in
# no_grad to save memory
with torch.no_grad():
batch = batch.to(device)
# forward propagation
output = model( batch )
# get prediction
output = torch.argmax(output, 1)
return output
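A hypothetical call, reusing one of the bounding-box crops from the question (the crop handling is an assumption on my part):
# take one crop, convert to grayscale and resize to the training resolution
crop = cv2.cvtColor(img_result[0], cv2.COLOR_BGR2GRAY)
crop = cv2.resize(crop, (28, 28))
pred = test_one_image(crop, model)
print(pred)  # predicted class index, 0..25 for 'A'..'Z'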
| https://stackoverflow.com/questions/59097657/ |
Facing error in pytorch v-1.1: "RuntimeError: all tensors must be on devices[0]" | I am using nn.DataParallel() for my model but facing an error.
I am doing something like
self.model = self.model.to(device)
self.model = nn.DataParallel(self.model)
If the device is cuda:1 then I get RuntimeError: all tensors must be on devices[0].
But if I change the device to cuda:0 the parallel training on multiple GPUs works with no error. I am wondering what the problem is.
| I changed this: self.model = DataParallel(self.model) to this: self.model = DataParallel(self.model, device_ids=[1,0])
Now it is working fine. But if I write: self.model = DataParallel(self.model, device_ids=[0,1]) the error pops up. If there are more GPUs, say 4, then write: self.model = DataParallel(self.model, device_ids=[1,2,3,0])
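The likely explanation (my addition, not spelled out in the original answer): nn.DataParallel scatters inputs from, and gathers outputs to, device_ids[0], so the model's parameters must live on that first device. Putting the model on cuda:1 therefore only works if 1 comes first in device_ids:
device = torch.device('cuda:1')
model = model.to(device)
# device_ids[0] must match the device that holds the parameters
model = nn.DataParallel(model, device_ids=[1, 0])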
| https://stackoverflow.com/questions/59098200/ |
How to get rid of checkerboard artifacts | Hello fellow coders,
I am using a fully convolutional autoencoder to color black and white images; however, the output has a checkerboard pattern and I want to get rid of it. The checkerboard artifacts I have seen so far have always been far smaller than mine, and the usual way to get rid of them is replacing all unpooling operations with bilinear upsampling (I have been told that).
But I can not simply replace the unpooling operation because I work with different sized images, thus the unpooling operation is needed, else the output tensor could have a different size than the original.
TLDR:
How can I get rid of these checkerboard-artifacts without replacing the unpooling operations?
class AE(nn.Module):
def __init__(self):
super(AE, self).__init__()
self.leaky_reLU = nn.LeakyReLU(0.2)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=1, return_indices=True)
self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2, padding=1)
self.softmax = nn.Softmax2d()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1)
self.conv5 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1)
self.conv6 = nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=3, stride=1, padding=1)
self.conv7 = nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=3, stride=1, padding=1)
self.conv8 = nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=3, stride=1, padding=1)
self.conv9 = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1)
self.conv10 = nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=3, stride=1, padding=1)
def forward(self, x):
# encoder
x = self.conv1(x)
x = self.leaky_reLU(x)
size1 = x.size()
x, indices1 = self.pool(x)
x = self.conv2(x)
x = self.leaky_reLU(x)
size2 = x.size()
x, indices2 = self.pool(x)
x = self.conv3(x)
x = self.leaky_reLU(x)
size3 = x.size()
x, indices3 = self.pool(x)
x = self.conv4(x)
x = self.leaky_reLU(x)
size4 = x.size()
x, indices4 = self.pool(x)
######################
x = self.conv5(x)
x = self.leaky_reLU(x)
x = self.conv6(x)
x = self.leaky_reLU(x)
######################
# decoder
x = self.unpool(x, indices4, output_size=size4)
x = self.conv7(x)
x = self.leaky_reLU(x)
x = self.unpool(x, indices3, output_size=size3)
x = self.conv8(x)
x = self.leaky_reLU(x)
x = self.unpool(x, indices2, output_size=size2)
x = self.conv9(x)
x = self.leaky_reLU(x)
x = self.unpool(x, indices1, output_size=size1)
x = self.conv10(x)
x = self.softmax(x)
return x
EDIT - Solution:
Skip-Connections are the way to go!
class AE(nn.Module):
def __init__(self):
super(AE, self).__init__()
self.leaky_reLU = nn.LeakyReLU(0.2)
self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=1, return_indices=True)
self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2, padding=1)
self.softmax = nn.Softmax2d()
self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1)
self.conv5 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1)
self.conv6 = nn.Conv2d(in_channels=1024, out_channels=512, kernel_size=3, stride=1, padding=1)
self.conv7 = nn.Conv2d(in_channels=1024, out_channels=256, kernel_size=3, stride=1, padding=1)
self.conv8 = nn.Conv2d(in_channels=512, out_channels=128, kernel_size=3, stride=1, padding=1)
self.conv9 = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=3, stride=1, padding=1)
self.conv10 = nn.Conv2d(in_channels=128, out_channels=2, kernel_size=3, stride=1, padding=1)
def forward(self, x):
# encoder
x = self.conv1(x)
out1 = self.leaky_reLU(x)
x = out1
size1 = x.size()
x, indices1 = self.pool(x)
x = self.conv2(x)
out2 = self.leaky_reLU(x)
x = out2
size2 = x.size()
x, indices2 = self.pool(x)
x = self.conv3(x)
out3 = self.leaky_reLU(x)
x = out3
size3 = x.size()
x, indices3 = self.pool(x)
x = self.conv4(x)
out4 = self.leaky_reLU(x)
x = out4
size4 = x.size()
x, indices4 = self.pool(x)
######################
x = self.conv5(x)
x = self.leaky_reLU(x)
x = self.conv6(x)
x = self.leaky_reLU(x)
######################
# decoder
x = self.unpool(x, indices4, output_size=size4)
x = self.conv7(torch.cat((x, out4), 1))
x = self.leaky_reLU(x)
x = self.unpool(x, indices3, output_size=size3)
x = self.conv8(torch.cat((x, out3), 1))
x = self.leaky_reLU(x)
x = self.unpool(x, indices2, output_size=size2)
x = self.conv9(torch.cat((x, out2), 1))
x = self.leaky_reLU(x)
x = self.unpool(x, indices1, output_size=size1)
x = self.conv10(torch.cat((x, out1), 1))
x = self.softmax(x)
return x
| Skip connections are commonly used in encoder-decoder architectures; they help produce accurate results by passing appearance information from shallow layers of the encoder to the corresponding deeper layers of the decoder. UNet is the most widely used encoder-decoder architecture of this kind. LinkNet is also very popular; it differs from UNet in how it fuses the encoder's appearance information with the decoder layers. In UNet, incoming features (from the encoder) are concatenated in the corresponding decoder layer. LinkNet, on the other hand, performs addition, which is why it requires fewer operations in a single forward pass and is significantly faster than UNet.
Each convolution block in your decoder might look like the following: [figure not included].
Additionally, I'm attaching a figure depicting the architectures of UNet and LinkNet: [figure not included]. Hope using skip connections will help.
| https://stackoverflow.com/questions/59101333/ |
PyTorch: why is the loss unchanging, in this simple example? | I'm writing a code example to do a simple linear projection (like PCA) in PyTorch. Everything appears to be OK except that the loss does not change as training progresses. Changing the learning rate doesn't affect this, and it's a simple one-dimensional problem so the loss should certainly be changing. What am I missing here?
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as nnF
class PCArot2D(nn.Module):
"2D PCA rotation, expressed as a gradient-descent problem"
def __init__(self):
super(PCArot2D, self).__init__()
self.theta = nn.Parameter(torch.tensor(np.random.random() * 2 * np.pi))
def getrotation(self):
sintheta = torch.sin(self.theta)
costheta = torch.cos(self.theta)
return torch.tensor([[costheta, -sintheta], [sintheta, costheta]], requires_grad=True, dtype=torch.double)
def forward(self, x):
xmeans = torch.mean(x, dim=1, keepdim=True)
rot = self.getrotation()
return torch.mm(rot, x - xmeans)
def covariance(y):
"Calculates the covariance matrix of its input (as torch variables)"
ymeans = torch.mean(y, dim=1, keepdim=True)
ycentred = y - ymeans
return torch.mm(ycentred, ycentred.T) / ycentred.shape[1]
net = PCArot2D()
example2 = torch.tensor(np.random.randn(2, 33))
# define a loss function and an optimiser
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.1)
# train the network
num_epochs = 1000
for epoch in range(num_epochs):
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(torch.DoubleTensor(example2))
# the covariance between output channels is the measure that we wish to minimise
covariance = (outputs[0, :] * outputs[1, :]).mean()
loss = criterion(covariance, torch.tensor(0, dtype=torch.double))
loss.backward()
optimizer.step()
running_loss = loss.item()
if ((epoch & (epoch - 1)) == 0) or epoch==(num_epochs-1): # don't print on all epochs
# print statistics
print('[%d] loss: %.8f' %
(epoch, running_loss))
print('Finished Training')
Output:
[0] loss: 0.00629047
[1] loss: 0.00629047
[2] loss: 0.00629047
[4] loss: 0.00629047
[8] loss: 0.00629047
etc
It seems the problem is in your getrotation function. When you build a new tensor from other tensors via torch.tensor(...), the result is detached from the computation graph, so gradients can no longer flow back through it:
def getrotation(self):
sintheta = torch.sin(self.theta)
costheta = torch.cos(self.theta)
return torch.tensor([[costheta, -sintheta], [sintheta, costheta]], requires_grad=True, dtype=torch.double)
So you need to find some other way to construct your return tensor.
Here is one suggestion that seems to work using torch.cat:
def getrotation(self):
sintheta = torch.sin(self.theta)
costheta = torch.cos(self.theta)
#return torch.tensor([[costheta, -sintheta], [sintheta, costheta]], requires_grad=True, dtype=torch.double)
A = torch.cat([costheta.unsqueeze(0), -sintheta.unsqueeze(0)], dim=0)
B = torch.cat([sintheta.unsqueeze(0), costheta.unsqueeze(0)], dim=0)
return torch.cat([A.unsqueeze(0), B.unsqueeze(0)], dim=0).double()
After implementing this change the loss changes:
[0] loss: 0.00765365
[1] loss: 0.00764726
[2] loss: 0.00764023
[4] loss: 0.00762607
[8] loss: 0.00759777
[16] loss: 0.00754148
[32] loss: 0.00742997
[64] loss: 0.00721117
[128] loss: 0.00679025
[256] loss: 0.00601233
[512] loss: 0.00469085
[999] loss: 0.00288501
Finished Training
I hope this helps!
Edit:
A simpler and prettier version by @DanStowell:
def getrotation(self):
    sintheta = torch.sin(self.theta).double().unsqueeze(0)
    costheta = torch.cos(self.theta).double().unsqueeze(0)
return torch.cat([costheta, -sintheta, sintheta, costheta]).reshape((2,2))
| https://stackoverflow.com/questions/59103555/ |
Convolution Neural Network for regression using pytorch | I am trying to create a CNN for regression purposes. The input is image data.
For learning purposes, I have 10 images of shape (10, 3, 448, 448), where 10 is the number of images, 3 is the number of channels, and 448 is both the height and the width.
Output labels are (10, 245).
Here is my architecture
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
self.conv2 = nn.Conv2d(32, 32, kernel_size=5)
self.conv3 = nn.Conv2d(32,64, kernel_size=5)
self.fc1 = nn.Linear(3*3*64, 256)
self.fc2 = nn.Linear(256, 245)
def forward(self, x):
x = F.relu(self.conv1(x))
#x = F.dropout(x, p=0.5, training=self.training)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = F.dropout(x, p=0.5, training=self.training)
x = F.relu(F.max_pool2d(self.conv3(x),2))
x = F.dropout(x, p=0.5, training=self.training)
x = x.view(-1,3*3*64 )
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return x
cnn = CNN()
print(cnn)
it = iter(train_loader)
X_batch, y_batch = next(it)
print(cnn.forward(X_batch).shape)
Using batch size 2, I am expecting the data shape produced by the model to be (2, 245). But it is producing data of shape (2592, 245).
| After self.conv3 you have tensors of shape [2, 64, 108, 108], which the reshape to (-1, 576) turns into [2592, 576]. So this is where 2592 comes from.
Change the lines:
"self.fc1 = nn.Linear(3*3*64, 256)"
and
"x = x.view(-1,3*3*64)"
so that they use proper image size after the layers.
below is the fixed code:
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
self.conv2 = nn.Conv2d(32, 32, kernel_size=5)
self.conv3 = nn.Conv2d(32,64, kernel_size=5)
self.fc1 = nn.Linear(108*108*64, 256)
self.fc2 = nn.Linear(256, 245)
def forward(self, x):
print (x.shape)
x = F.relu(self.conv1(x))
print (x.shape)
#x = F.dropout(x, p=0.5, training=self.training)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
print (x.shape)
x = F.dropout(x, p=0.5, training=self.training)
print (x.shape)
x = F.relu(F.max_pool2d(self.conv3(x),2))
print (x.shape)
x = F.dropout(x, p=0.5, training=self.training)
print (x.shape)
x = x.view(-1,108*108*64 )
print (x.shape)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return x
cnn = CNN()
print(cnn)
# X_batch, y_batch = next(it)
print(cnn.forward(X_batch).shape)
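An alternative sketch (my suggestion, not from the original answer): nn.AdaptiveAvgPool2d squeezes whatever spatial size remains after the convolutions down to a fixed one, so the flattened size never has to be hand-computed:
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNAdaptive(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 32, kernel_size=5)
        self.conv3 = nn.Conv2d(32, 64, kernel_size=5)
        self.adapt = nn.AdaptiveAvgPool2d((3, 3))  # collapse any spatial size to 3x3
        self.fc1 = nn.Linear(3 * 3 * 64, 256)
        self.fc2 = nn.Linear(256, 245)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = F.relu(F.max_pool2d(self.conv3(x), 2))
        x = self.adapt(x)
        x = x.view(x.size(0), -1)  # flatten per sample, batch size stays intact
        x = F.relu(self.fc1(x))
        return self.fc2(x)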
| https://stackoverflow.com/questions/59110844/ |
Constructing Key:Value pair from list comprehension in Python | I am trying to extend some code.
What works:
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
However, if I extend it to:
pretrained_dict = {k: v if k in model_dict else k1:v1 for k, v, k1, v1 in zip(pretrained_dict.items(), model_dict.items()) }
The code fails, If I put the else at the end it still fails:
pretrained_dict = {k: v if k in model_dict for k, v, k1, v1 in zip(pretrained_dict.items(), model_dict.items()) else k1:v1}
^
SyntaxError: invalid syntax
How can I construct the key value pair using an if else condition over two lists?
| You can use a ChainMap to achieve what you want without having to use comprehensions at all
from collections import ChainMap
pretrained_dict = ChainMap(pretrained_dict, model_dict)
This returns a dictionary-like object that looks up keys in pretrained_dict first and, if a key is not present there, falls back to model_dict.
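A tiny demonstration of the lookup order (a minimal sketch with dummy dicts):
from collections import ChainMap
pretrained_dict = {'a': 1}
model_dict = {'a': 10, 'b': 2}
merged = ChainMap(pretrained_dict, model_dict)
print(merged['a'], merged['b'])  # 1 2 -> pretrained wins, model_dict fills the rest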
| https://stackoverflow.com/questions/59112905/ |
transforms.Normalize() between 0 and 1 when using Lab | which mean, std should I use when I want to normalize a tensor to a range of 0 to 1? But I work with images with 2 channels (a, b channel -> -128 to 127) only instead of 3 channels. Thus, the usual mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] will not do the job.
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
This leads to this error message:
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: The size of tensor a (2) must match the size of tensor b
(3) at non-singleton dimension 0
| As you can see, PyTorch complains about the Tensor size, since you lack a channel.
Additionally, the "usual" mean and std values are computed on ImageNet dataset, and are useful if the statistics of your data match the ones of that dataset.
As you work with two channels only, I assume that your domain might be fairly different from 3-channels natural images. In that case I would simply use 0.5 for both mean and std, such that the minimum value 0 will be converted to (0 - 0.5) / 0.5 = -1 and the maximum value of 1 to (1 - 0.5) / 0.5 = 1.
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.5, 0.5],
std=[0.5, 0.5])
])
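A quick sanity check of that mapping (a minimal sketch with a synthetic 2-channel tensor):
import torch
from torchvision import transforms
norm = transforms.Normalize(mean=[0.5, 0.5], std=[0.5, 0.5])
t = torch.stack([torch.zeros(4, 4), torch.ones(4, 4)])  # 2-channel input in [0, 1]
out = norm(t)
print(out.min().item(), out.max().item())  # -1.0 1.0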
Edit: I would recommend zero-centering of the input.
However, if for some reason you must have it in range [0, 1], calling only ToTensor() would suffice.
In this case, a word of caution. I think ToTensor() assumes your input to lie in range [0, 255] prior to the transform, so it basically divides it by 255. If that is not the case in your domain (e.g. your input is always in range [1, 50] for some reason) I would simply create a custom transform to divide for the actual upper bound for your data.
| https://stackoverflow.com/questions/59115888/ |
How to create properly shaped input using Dataloader? | I am trying to input an image and a vector as input to the model. The image has the correct shape of 4d, but the vector that I input doesn't have such shape. The image size is 424x512 while the vector is of shape (18,). After using dataloader, I get batches of shape (50x1x424x512) and (50x18). Model gives error as it needs the vector shape to be 4d too. How do I do that?
Here is my code :
def loadTrainingData_B(args):
fdm = []
tdm = []
parameters = []
for i in image_files[:4]:
try:
false_dm = np.fromfile(join(ref, i), dtype=np.int32)
false_dm = Image.fromarray(false_dm.reshape((424, 512, 9)).astype(np.uint8)[:,:,1])
fdm.append(false_dm)
true_dm = np.fromfile(join(ref, i), dtype=np.int32)
true_dm = Image.fromarray(true_dm.reshape((424, 512, 9)).astype(np.uint8)[:,:,1])
tdm.append(true_dm)
pos = param_filenames.index(i)
param = np.array(params[pos, 1:])
param = np.where(param == '-point-light-source', 1, param).astype(np.float64)
parameters.append(param)
except:
print('[!] File {} not found'.format(i))
return (fdm, parameters, tdm)
class Flat_ModelB(Dataset):
def __init__(self, args, train=True, transform=None):
self.args = args
if train == True:
self.fdm, self.parameters, self.tdm = loadTrainingData_B(self.args)
else:
self.fdm, self.parameters, self.tdm = loadTestData_B(self.args)
self.data_size = len(self.parameters)
self.transform = transforms.Compose([transforms.ToTensor()])
def __getitem__(self, index):
return (self.transform(self.fdm[index]).double(), torch.from_numpy(self.parameters[index]).double(), self.transform(self.tdm[index]).double())
def __len__(self):
return self.data_size
The error I get is :
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1 5 5, but got 2-dimensional input of size [50, 18] instead
Here is the model :
class Model_B(nn.Module):
def __init__(self, config):
super(Model_B, self).__init__()
self.config = config
# CNN layers for fdm
self.layer1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=2, padding=2),
nn.ReLU(),
nn.BatchNorm2d(16))
self.layer2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=2, padding=2),
nn.ReLU(),
nn.BatchNorm2d(32))
self.layer3 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5, stride=2, padding=2),
nn.ReLU(),
nn.BatchNorm2d(32))
self.layer4 = nn.Sequential(
nn.ConvTranspose2d(in_channels=32, out_channels=32, kernel_size=5, stride=2, padding=2, output_padding=1),
nn.ReLU(),
nn.BatchNorm2d(32))
self.layer5 = nn.Sequential(
nn.ConvTranspose2d(in_channels=32, out_channels=16, kernel_size=5, stride=2, padding=2,output_padding=1),
nn.ReLU(),
nn.BatchNorm2d(16))
self.layer6 = nn.Sequential(
nn.ConvTranspose2d(in_channels=16, out_channels=1, kernel_size=5, stride=2, padding=2, output_padding=1),
nn.ReLU(),
nn.BatchNorm2d(1))
# CNN layer for parameters
self.param_layer1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=2, padding=2),
nn.ReLU(),
nn.BatchNorm2d(32))
def forward(self, x, y):
out = self.layer1(x)
out_param = self.param_layer1(y)
print("LayerParam 1 Output Shape : {}".format(out_param.shape))
print("Layer 1 Output Shape : {}".format(out.shape))
out = self.layer2(out)
print("Layer 2 Output Shape : {}".format(out.shape))
out = self.layer3(out)
# out = torch.cat((out, out_param), dim=2)
print("Layer 3 Output Shape : {}".format(out.shape))
out = self.layer4(out)
print("Layer 4 Output Shape : {}".format(out.shape))
out = self.layer5(out)
print("Layer 5 Output Shape : {}".format(out.shape))
out = self.layer6(out)
print("Layer 6 Output Shape : {}".format(out.shape))
return out
and the method by which I access the data :
for batch_idx, (fdm, parameters) in enumerate(self.data):
    if self.config.gpu:
        fdm = fdm.to(device)
        parameters = parameters.to(device)
    print('shape of parameters for model a : {}'.format(parameters.shape))
    output = self.model(fdm)
    loss = self.criterion(output, parameters)
Edit :
I think my code is incorrect as I am trying to apply convolutions over a vector of length 18. I tried to copy the vector and make it (18x64) and then input it. It still doesn't work and gives this output:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 1 5 5, but got 3-dimensional input of size [4, 18, 64] instead
I am not sure how to concatenate an 18-length vector to the output of layer 3 if I can't do any of these things.
| Looks like you are training an autoencoder model and want to parameterize it with some additional vector input in the bottleneck layer. If you want to perform some transformations on it, then you have to decide whether you need any spatial dependencies. Given the constant input size (N, 1, 424, 512), the output of layer3 will have a shape (N, 32, 53, 64). You have a lot of options, depending on your desired model performance:
Use a nn.Linear with activations to transform the parameter vector. Then you might add extra spatial dimensions and repeat this vector in all spatial locations:
img = torch.rand((1, 1, 424, 512))
vec = torch.rand(1, 18)  # the 18-element parameter vector
layer3_out = model(img)  # illustrative: the activations after layer3
N, C, H, W = layer3_out.shape

param_encoder = nn.Sequential(nn.Linear(18, 30), nn.ReLU(), nn.Linear(30, 10))
param = param_encoder(vec)
param = param.unsqueeze(-1).unsqueeze(-1).expand(N, -1, H, W)
encoding = torch.cat([param, layer3_out], dim=1)
Use transposed convolutions to upsample your parameter vector to the size of layer3's output. But that would be harder to implement, as you have to calculate the exact output shape to fit (N, 32, 53, 64).
Transform the input vector with an MLP (nn.Linear) to 2x the number of channels in layer3's output, then use so-called feature-wise transformations (FiLM) to scale and shift the feature maps from layer3 — see the sketch below.
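A minimal sketch of that feature-wise option, reusing the names from the snippet above (the encoder sizes here are illustrative):
film_encoder = nn.Sequential(nn.Linear(18, 64), nn.ReLU(), nn.Linear(64, 2 * C))
gamma_beta = film_encoder(vec)             # (N, 2*C)
gamma, beta = gamma_beta.chunk(2, dim=1)   # (N, C) each
gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
beta = beta.unsqueeze(-1).unsqueeze(-1)
encoding = gamma * layer3_out + beta       # scale and shift the layer3 feature maps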
I would recommend starting with the first option since it is the simplest one to implement, and then trying the others.
| https://stackoverflow.com/questions/59119247/ |
Why is the variable `input` often marked special in Python syntax markers? | In many syntax highlighters (e.g. jupyter notebooks) the variable input is highlighted in python. Is there a reason for this? I've avoided using input as a variablename because I thought it did something special internally.
I'm asking because pytorch often uses "input" as a variable in their tutorials.
| input is a built-in function, which is why highlighters mark it specially.
The same will happen with, for example, list.
It is recommended not to overwrite the built-in functions with your own variables.
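A quick illustration of why shadowing it bites:
input = "some data"   # shadows the builtin
input("Your name: ")  # TypeError: 'str' object is not callable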
| https://stackoverflow.com/questions/59123783/ |
How to properly implement 1D CNN for numerical data in PyTorch? | I have a 500x2000 matrix, where each row represents an individual and each column is a measurement of some particular quality about that individual. I'm using a batch size of 64, so the input for each cycle of the network is actually a 64x2000 matrix. I'm trying to build a CNN in PyTorch to classify individuals given a set of these measurements. However, I've stumbled on the parameters for the convolutional layer.
Below is my current definition for a simple convolutional neural network.
class CNNnet(nn.Module):
    def __init__(self):
        self.conv1 = nn.Conv1d(2000, 200, (1,2), stride=10)
        self.pool = nn.MaxPool1d(kernel_size = (1, 2), stride = 2)
        self.fc1 = nn.Linear(64, 30)
        self.fc2 = nn.Linear(30, 7)

    def forward(self, x):
        x = x.view(64, 2000, 1)
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
Attempting to train this model produces the following error:
"RuntimeError: Expected 4-dimensional input for 4-dimensional weight
200 2000 1 2, but got 3-dimensional input of size [64, 2000, 1]
instead".
I'm confused on why it's expecting a 4D 200x2000x1x2 matrix (shouldn't the number of output channels be irrelevant to the input? And why is there a 2 at the end?).
My question is what would be the proper syntax or approach for writing a CNN (specifically the convolutional layer) when dealing with 1D data. Any help is greatly appreciated.
| The kernel size in the 1-dimensional case is a single number: if you want a kernel of size '1x2', you only need to specify the '2'.
In the 2-dimensional case, 2 would mean a '2x2' kernel.
Because you gave a tuple of 2 values, the layer built a 2D ('1x2') kernel, which is why it expects 4-dimensional input.
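For this data, a minimal sketch of the 1D setup (treating each row of 2000 measurements as a single channel of length 2000 — the layer sizes here are illustrative, not the exact fix for your network):
x = torch.randn(64, 2000)  # one batch: 64 individuals, 2000 measurements each
x = x.unsqueeze(1)         # -> (64, 1, 2000): (batch, channels, length)
conv = nn.Conv1d(in_channels=1, out_channels=200, kernel_size=2, stride=10)
out = conv(x)              # -> (64, 200, 200)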
| https://stackoverflow.com/questions/59125208/ |
Where is the source code of pytorch conv2d? | Where do I find the source code of the pytorch function conv2d?
It should be in torch.nn.functional but I only find _add_docstr lines,
if I search for conv2d. I looked here:
https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py
Update:
It is not my typing, I do mean the function.
The Conv2d class uses the conv2d function from nn.functional
Here:
https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py
On line 338:
return F.conv2d(F.pad(input, expanded_padding, mode='circular')
F is how they import functional
So then I went there, but I don't find the code.
| The functional code is all implemented in C++. The entry point into the C++ pytorch code for conv2d is here.
| https://stackoverflow.com/questions/59127007/ |
What's different between convolution layer in `Torch`(i.e `nn.SpatialConvolution`) and convolution layer in `Pytorch`(i.e `torch.nn.Conv2d`) | I would like to know the difference between convolution layer in Torch(i.e nn.SpatialConvolution) and convolution layer in Pytorch(i.e torch.nn.Conv2d)
In Torch's Docs, I found the output shape of SpatialConvolution
It says "If the input image is a 3D tensor nInputPlane x height x width, the output image size will be nOutputPlane x oheight x owidth where
owidth = floor((width + 2*padW - kW) / dW + 1)
oheight = floor((height + 2*padH - kH) / dH + 1)
"
which is different from torch.nn.Conv2d's in Pytorch Docs.
Does it mean they are different operation?
| Yes, they are different, as torch does not have a dilation parameter (for an explanation of dilation see here — basically the kernel has "spaces" between each kernel element, width- and height-wise, and this is what slides over the image).
Except for dilation, both equations are the same (set dilation to one in pytorch's version and they are equal).
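For reference, PyTorch's Conv2d output size with dilation is

owidth = floor((width + 2*padW - dilW*(kW-1) - 1) / dW + 1)
oheight = floor((height + 2*padH - dilH*(kH-1) - 1) / dH + 1)

which reduces to Torch's formula above when dilW = dilH = 1.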
If you want to use dilation in torch there is a separate class for that called nn.SpatialDilatedConvolution.
| https://stackoverflow.com/questions/59127541/ |
How to avoid "CUDA out of memory" in PyTorch | I think it's a pretty common message for PyTorch users with low GPU memory:
RuntimeError: CUDA out of memory. Tried to allocate MiB (GPU ; GiB total capacity; GiB already allocated; MiB free; cached)
I tried to process an image by loading each layer to GPU and then loading it back:
for m in self.children():
    m.cuda()
    x = m(x)
    m.cpu()
    torch.cuda.empty_cache()
But it doesn't seem to be very effective. I'm wondering is there any tips and tricks to train large deep learning models while using little GPU memory.
| Although
import torch
torch.cuda.empty_cache()
provides a good alternative for clearing the occupied cuda memory and we can also manually clear the not in use variables by using,
import gc
del variables
gc.collect()
But still, after using these commands, the error might appear again because pytorch doesn't actually clear the memory; it clears the reference to the memory occupied by the variables.
So reducing the batch_size after restarting the kernel and finding the optimum batch_size is the best possible option (but sometimes not a very feasible one).
Another way to get a deeper insight into the allocation of memory in gpu is to use:
torch.cuda.memory_summary(device=None, abbreviated=False)
wherein both the arguments are optional. This gives a readable summary of memory allocation and allows you to figure out the reason CUDA is running out of memory and restart the kernel to avoid the error from happening again (just like I did in my case).
Passing the data iteratively might help, but changing the size of your network's layers or breaking them down would also prove effective (as sometimes the model also occupies significant memory, for example while doing transfer learning).
| https://stackoverflow.com/questions/59129812/ |
Simple way to evaluate Pytorch torchvision on a single image | I have a pre-trained model on Pytorch v1.3, torchvision v0.4.2 as the following:
import PIL, torch, torchvision
# Load and normalize the image
img_file = "./robot_image.jpg"
img = PIL.Image.open(img_file)
img = torchvision.transforms.ToTensor()((img))
img = 0.5 + 0.5 * (img - img.mean()) / img.std()
# Load a pre-trained network and compute its prediction
alexnet = torchvision.models.alexnet(pretrained=True)
I want to test this single image, but I get an error:
alexnet(img)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 11 11, but got 3-dimensional input of size [3, 741, 435] instead
what is the most simple and idiomatic way of getting the model to evaluate a single data point?
| AlexNet is expecting a 4-dimensional tensor of size (batch_size x channels x height x width). You are providing a 3-dimensional tensor.
To change your tensor to size (1, 3, 741, 435) simply add the line:
img = img.unsqueeze(0)
You will also need to downsample your image as AlexNet expects inputs of height and width 224x224.
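Putting both changes together, a minimal sketch (resizing the PIL image before converting it):
img = PIL.Image.open(img_file)
img = torchvision.transforms.Resize((224, 224))(img)  # AlexNet expects 224x224 inputs
img = torchvision.transforms.ToTensor()(img)
img = 0.5 + 0.5 * (img - img.mean()) / img.std()
img = img.unsqueeze(0)  # add the batch dimension -> (1, 3, 224, 224)
out = alexnet(img)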
| https://stackoverflow.com/questions/59129975/ |
tf.cast equivalent in pytorch? | I am new to PyTorch. TensorFlow has an API tf.cast() and tf.shape(). the tf.cast has specific purpose in TensorFlow, is there anything equivalent in torch?
I have a tensor x of shape (128, 64, 32, 32). In TensorFlow, tf.shape(x) creates a 1-dimensional tensor, whereas x.shape gives the true dimensions; I need the equivalent of tf.shape(x) in torch.
tf.cast also has a different role than just changing the tensor dtype in torch.
Does anyone know of an equivalent API in torch/PyTorch?
| Check out the PyTorch Documentation
As they mentioned:
print(x.dtype) # Prints "torch.int64", currently 64-bit integer type
x = x.type(torch.FloatTensor)
print(x.dtype) # Prints "torch.float32", now 32-bit float
print(x.float()) # Still "torch.float32"
print(x.type(torch.DoubleTensor)) # Prints "tensor([0., 1., 2., 3.], dtype=torch.float64)"
print(x.type(torch.LongTensor)) # Cast back to int-64, prints "tensor([0, 1, 2, 3])"
| https://stackoverflow.com/questions/59132647/ |
How to modify path where Torch Hub models are downloaded | When I download models through Torch Hub, models are automatically downloaded in /home/me/.cache/torch.
How can I modify this behavior ?
| From the official documentation, there are several ways to modify this path.
In priority order:
Calling hub.set_dir()
$TORCH_HOME/hub, if environment variable TORCH_HOME is set.
$XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.
~/.cache/torch/hub
So I just had to do :
export TORCH_HUB=/my/path/
Edit
TORCH_HUB appears to be deprecated; use TORCH_HOME instead
| https://stackoverflow.com/questions/59134499/ |
How do you invert a tensor of boolean values in Pytorch? | With NumPy, you can do it with np.invert(array), but there's no invert function in Pytorch. Let's say I have a 2D tensor of boolean values:
import torch
ts = torch.rand((10, 4)) < .5
tensor([[ True, True, False, True],
[ True, True, True, True],
[ True, False, True, True],
[False, True, True, False],
[False, True, True, True],
[ True, True, True, True],
[ True, False, True, True],
[False, True, False, True],
[ True, True, False, True],
[False, False, True, False]])
How do I transform the False into True and vice versa?
| Literally just use the tilde to transform all True into False and vice versa.
ts = ~ts
| https://stackoverflow.com/questions/59149138/ |
Custom conv2d operation Pytorch | I have tried a custom Conv2d function which has to work similar to nn.Conv2d but the multiplication and addition used inside nn.Conv2d are replaced with mymult(num1,num2) and myadd(num1,num2).
As per insight from very helpful forums 1,2 what I can do is try unfolding it and then do matrix multiplication. That @ part given in the code below can be done using loops with mymult() and myadd(), as I believe this @ is doing matmul.
def convcheck():
    torch.manual_seed(123)
    batch_size = 2
    channels = 2
    h, w = 2, 2
    image = torch.randn(batch_size, channels, h, w)  # input image
    out_channels = 3
    kh, kw = 1, 1  # kernel size
    dh, dw = 1, 1  # stride
    size = int((h-kh+2*0)/dh+1)  # include padding in place of zero
    conv = nn.Conv2d(in_channels=channels, out_channels=out_channels, kernel_size=kw, padding=0, stride=dh, bias=False)
    out = conv(image)
    #print('out', out)
    #print('out.size()', out.size())
    #print('')
    filt = conv.weight.data
    imageunfold = F.unfold(image, kernel_size=kh, padding=0, stride=dh)
    print("Unfolded image","\n",imageunfold,"\n",imageunfold.shape)
    kernels_flat = filt.view(out_channels, -1)
    print("Kernel Flat=","\n",kernels_flat,"\n",kernels_flat.shape)
    res = kernels_flat @ imageunfold  # I have to replace this operation with mymult() and myadd()
    print(res,"\n",res.shape)
    #print(res.size(2),"\n",res.shape)
    res = res.view(-1, out_channels, size, size)
    #print("Same answer as builtin function", res)
res = kernels_flat @ imageunfold can be replaced with the loops below, although there may be some other, more efficient implementation, which I am looking for help with.
for m_batch in range(len(imageunfold)):
    # iterate through rows of X
    for i in range(kernels_flat.size(0)):
        # iterate through columns of Y
        for j in range(imageunfold.size(2)):
            # iterate through rows of Y
            for k in range(imageunfold.size(1)):
                #print(result[m_batch][i][j]," +=", kernels_flat[i][k], "*", imageunfold[m_batch][k][j])
                result[m_batch][i][j] += kernels_flat[i][k] * imageunfold[m_batch][k][j]
Can someone please help me vectorize these three loops for faster execution.
| The problem was with the dimensions: kernels_flat is [dim0_1, dim1_1] and imageunfold is [batch, dim0_2, dim1_2], so the result should have shape [batch, dim0_1, dim1_2].
res = kernels_flat @ imageunfold can be replaced with the loops below, although there may be some other, more efficient implementation.
for m_batch in range(len(imageunfold)):
    # iterate through rows of X
    for i in range(kernels_flat.size(0)):
        # iterate through columns of Y
        for j in range(imageunfold.size(2)):
            # iterate through rows of Y
            for k in range(imageunfold.size(1)):
                #print(result[m_batch][i][j]," +=", kernels_flat[i][k], "*", imageunfold[m_batch][k][j])
                result[m_batch][i][j] += kernels_flat[i][k] * imageunfold[m_batch][k][j]
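If you want to drop the Python loops while keeping the multiplication elementwise (so it could still be swapped for mymult before the reduction), a broadcasting-based sketch:
# kernels_flat: (O, K), imageunfold: (B, K, J)
prod = kernels_flat[None, :, :, None] * imageunfold[:, None, :, :]  # (B, O, K, J)
result = prod.sum(dim=2)  # (B, O, J), same as kernels_flat @ imageunfold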
| https://stackoverflow.com/questions/59149785/ |
CNN to first make feature key-points prediction then classify image on the base of these keypoints using Pytorch or TensorFlow | I have already trained an Image Classifier using MobileNet in Pytorch to classify between closed-eye and open-eye images, and also deployed it to mobile using Tensorflow.
But the problem is that the dataset is not big enough, and it also doesn't work when the face is far away or zoomed out.
I want to classify faces with predefined key points like the following
I want to make a CNN to first make feature key-points prediction and then classify image on the base of these keypoints.
Please guide me to any research paper or guide on predicting feature keypoints using a CNN and classifying the keypoints into two classes using Deep Learning techniques. The more Deep Learning used, the better.
I have already read about unsupervised machine learning but it is not working for me. I want to use deep learning and pytorch or tensorflow.
| It seems to me you have enough data. The key is preprocessing. I'd suggest using MTCNN (implementations: one, two, three) for lightweight face and eye detection; crop the eyes and pass them through your net. Of course, you should train on cropped eyes (not whole images). You can get more precise eye keypoints from libs like OpenPose, FAN or Seeta.
| https://stackoverflow.com/questions/59151504/ |
Why "conv1d" is different in C code, python and pytorch | I want to reproduce "Conv1D" results of pytorch in C code.
I tried to implement "Conv1D" using three methods (C code, Python, Pytorch), but the results are different — only the first seven fractional digits agree. Assuming there are multiple conv1d layers in the structure, the accuracy in the fractional digits will gradually decrease.
Following everyone's recommendations, I tried changing the type of the C code's input data to double, but the result is still incorrect.
Have I done something wrong?
For example:
The output of Pytorch: 0.2380688339471817017
The output of Python: 0.2380688637495040894
The output of C code (float): 0.2380688637
The output of C code (double): 0.238068885344539680
Here is my current implementation
Input:
input dim. = 80, output dim. = 128, kernel size = 5
Pytorch: Conv1D_input.npy, Conv1D_weight.npy
Python: Conv1D_input.npy, Conv1D_weight.npy (same as Pytorch)
C code: Conv1D_input.txt, Conv1D_weight.txt (convert from Pytorch, IEEE 754 single precision)
Pytorch
import torch
import numpy as np
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F
import argparse
import sys
import io
import time
import os

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.c1 = nn.Conv1d(input_size, hidden_size, kernel_size = 5, bias=False)
        self.c1.weight = torch.nn.Parameter(torch.Tensor(np.load("CONV1D_WEIGHT.npy")))

    def forward(self, inputs):
        c = self.c1(inputs)
        return c

input_size = 80
hidden_size = 128
kernel_size = 5
rnn = RNN(input_size, hidden_size)
inputs = torch.nn.Parameter(torch.Tensor(np.load("CONV1D_IN.npy")))
print("inputs", inputs)
outputs = rnn(inputs)
sub_np456 = outputs[0].cpu().detach().numpy()
np.savetxt("Pytorch_CONV1D_OUTPUT.txt", sub_np456)
print('outputs', outputs)
Python
import struct
import numpy as np

if __name__ == "__main__":
    row = 80
    col = 327
    count = 0
    res_out_dim = 128
    in_dim = 80
    kernel_size = 5
    filter = np.zeros((80, 5), dtype = np.float32)
    featureMaps = np.zeros((128, 323), dtype = np.float32)
    spectrum = np.load("CONV1D_INPUT.npy")
    weight = np.load("CONV1D_WEIGHT.npy")
    spectrum_2d = spectrum.reshape(80, 327)

    for i in range(res_out_dim):
        for j in range(in_dim):
            for k in range(kernel_size):
                filter[j][k] = weight[i][j][k]
        while count < (col-kernel_size+1):
            for j in range(in_dim):
                for k in range(count, kernel_size+count):
                    featureMaps[i][count] = featureMaps[i][count] + spectrum_2d[j][k]*filter[j][k-count]
            count = count + 1
        count = 0

    np.savetxt("Python_CONV1D_OUTPUT.txt", featureMaps)
C code (float)
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<time.h>

const char CONV1D_WEIGHT[] = "CONV1D_WEIGHT.txt";
const char CONV1D_INPUT[] = "CONV1D_INPUT.txt";

void parameterFree(float **matrix, int row)
{
    int i = 0;
    for(i=0; i<row; i++)
        free(matrix[i]);
    free(matrix);
}

float** createMatrix_2D(int row, int col)
{
    int i = 0;
    float **matrix = NULL;
    matrix = (float**)malloc(sizeof(float*) * row);
    if(matrix == NULL)
        printf("Matrix2D malloc failed\n");
    for(i=0; i<row; i++)
    {
        matrix[i] = (float*)malloc(sizeof(float) * col);
        if(matrix[i] == NULL)
            printf("Matrix2D malloc failed\n");
    }
    return matrix;
}

float** conv_1D(const char weightFile[], float **source, int *row, int *col, int in_dim, int res_out_dim, int kernel_size)
{
    float **filter = createMatrix_2D(in_dim, kernel_size);
    float **featureMaps = createMatrix_2D(res_out_dim, *col-kernel_size+1);
    int i = 0, j = 0, k = 0, count = 0;
    char str[10];
    float data = 0.0;

    FILE *fp = fopen(weightFile, "r");
    if(fp == NULL)
        printf("Resnet file open failed\n");
    else
    {
        /* initial featureMaps */
        for(i=0; i<res_out_dim; i++)
        {
            for(j=0; j<*col-kernel_size+1; j++)
            {
                featureMaps[i][j] = 0.0;
            }
        }
        /* next filter */
        for(i=0; i<res_out_dim; i++)
        {
            /* read filter */
            for(j=0; j<in_dim; j++)
            {
                for(k=0; k<kernel_size; k++)
                {
                    fscanf(fp, "%s", str);
                    sscanf(str, "%x", &data);
                    filter[j][k] = data;
                }
            }
            /* (part of source * filter) */
            while(count < *col-kernel_size+1)
            {
                for(j=0; j<in_dim; j++)
                {
                    for(k=count; k<kernel_size+count; k++)
                    {
                        featureMaps[i][count] += source[j][k]*filter[j][k-count];
                    }
                }
                count++;
            }
            count = 0;
        }
        fclose(fp);
    }

    parameterFree(source, *row);
    parameterFree(filter, in_dim);
    *row = res_out_dim;
    *col = *col-kernel_size+1;
    return featureMaps;
}

int main()
{
    int row = 80;
    int col = 327;
    int in_dim = 80;
    int res_out_dim = 128;
    int kernel_size = 5;
    int i, j;
    float data;
    char str[10];
    float **input = createMatrix_2D(row, col);

    FILE *fp = fopen(CONV1D_INPUT, "r");
    FILE *fp2 = fopen("C code_CONV1D_OUTPUT.txt", "w");
    if(fp == NULL)
        printf("File open failed\n");
    else
    {
        for(i=0; i<row; i++)
        {
            for(j=0; j<col; j++)
            {
                fscanf(fp, "%s", str);
                sscanf(str, "%x", &data);
                input[i][j] = data;
            }
        }
    }

    float **CONV1D_ANS = conv_1D(CONV1D_WEIGHT, input, &row, &col, in_dim, res_out_dim, kernel_size);

    for(i=0; i<row; i++)
    {
        for(j=0; j<col; j++)
        {
            fprintf(fp2, "[%.12f] ", CONV1D_ANS[i][j]);
        }
        fprintf(fp2, "\n");
    }
    return 0;
}
C code (double)
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<time.h>

const char CONV1D_WEIGHT[] = "CONV1D_WEIGHT.txt";
const char CONV1D_INPUT[] = "CONV1D_INPUT.txt";

void parameterFree(double **matrix, int row)
{
    int i = 0;
    for(i=0; i<row; i++)
        free(matrix[i]);
    free(matrix);
}

double** createMatrix_2D(int row, int col)
{
    int i = 0;
    double **matrix = NULL;
    matrix = (double**)malloc(sizeof(double*) * row);
    if(matrix == NULL)
        printf("Matrix2D malloc failed\n");
    for(i=0; i<row; i++)
    {
        matrix[i] = (double*)malloc(sizeof(double) * col);
        if(matrix[i] == NULL)
            printf("Matrix2D malloc failed\n");
    }
    return matrix;
}

double** conv_1D(const char weightFile[], double **source, int *row, int *col, int in_dim, int res_out_dim, int kernel_size)
{
    double **filter = createMatrix_2D(in_dim, kernel_size);
    double **featureMaps = createMatrix_2D(res_out_dim, *col-kernel_size+1);
    int i = 0, j = 0, k = 0, count = 0;
    char str[10];
    float data = 0.0;

    FILE *fp = fopen(weightFile, "r");
    if(fp == NULL)
        printf("Resnet file open failed\n");
    else
    {
        /* initial featureMaps */
        for(i=0; i<res_out_dim; i++)
        {
            for(j=0; j<*col-kernel_size+1; j++)
            {
                featureMaps[i][j] = 0.0;
            }
        }
        /* next filter */
        for(i=0; i<res_out_dim; i++)
        {
            /* read filter */
            for(j=0; j<in_dim; j++)
            {
                for(k=0; k<kernel_size; k++)
                {
                    fscanf(fp, "%s", str);
                    sscanf(str, "%x", &data);
                    filter[j][k] = (double)data;
                }
            }
            /* (part of source * filter) */
            while(count < *col-kernel_size+1)
            {
                for(j=0; j<in_dim; j++)
                {
                    for(k=count; k<kernel_size+count; k++)
                    {
                        featureMaps[i][count] += source[j][k]*filter[j][k-count];
                    }
                }
                count++;
            }
            count = 0;
        }
        fclose(fp);
    }

    parameterFree(source, *row);
    parameterFree(filter, in_dim);
    *row = res_out_dim;
    *col = *col-kernel_size+1;
    return featureMaps;
}

int main()
{
    int row = 80;
    int col = 327;
    int in_dim = 80;
    int res_out_dim = 128;
    int kernel_size = 5;
    int i, j;
    float data;
    char str[10];
    double **input = createMatrix_2D(row, col);

    FILE *fp = fopen(CONV1D_INPUT, "r");
    FILE *fp2 = fopen("C code_CONV1D_OUTPUT.txt", "w");
    if(fp == NULL)
        printf("File open failed\n");
    else
    {
        for(i=0; i<row; i++)
        {
            for(j=0; j<col; j++)
            {
                fscanf(fp, "%s", str);
                sscanf(str, "%x", &data);
                input[i][j] = (double)data;
            }
        }
    }

    double **CONV1D_ANS = conv_1D(CONV1D_WEIGHT, input, &row, &col, in_dim, res_out_dim, kernel_size);

    for(i=0; i<row; i++)
    {
        for(j=0; j<col; j++)
        {
            fprintf(fp2, "[%.18f] ", CONV1D_ANS[i][j]);
        }
        fprintf(fp2, "\n");
    }
    return 0;
}
| Floating point numbers are not precise (by design). Depending on the order in which operations are performed, the results may vary. Even worse, some formulas are outright numerically unstable, whereas another formula for the same analytical expression can be stable.
Compilers often rearrange statements as an optimization measure. Convolution is an operation which notoriously contains many operations and loops. So unless you directly compare the executed instructions, this speculation is kind of pointless.
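As a small illustration that grouping alone changes floating point results (Python, double precision):
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- same values, different evaluation order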
| https://stackoverflow.com/questions/59152826/ |
torch.cuda.is_available() returns false in colab | I am trying to use GPU in google colab. Below are the details of the versions of pytorch and cuda installed in my colab.
Torch 1.3.1 CUDA 10.1.243
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
I am pretty new to using a GPU for transfer learning on pytorch models. My torch.cuda.is_available() returns false and I am unable to use a GPU. torch.backends.cudnn.enabled returns true. What might be going wrong here?
| Worked with all the versions mentioned above and I did not have to downgrade my CUDA to 10.0. I had restarted my colab after the updates, which set my runtime back to CPU, and I just had to change it back to GPU.
| https://stackoverflow.com/questions/59154623/ |
Indexing Pytorch tensor | I have Pytorch code which generates a Pytorch tensor in each iteration of a for loop, all of the same size. I want to assign each of those tensors to a row of a new tensor, which will include all the tensors at the end. In other words, something like this
for i=1:N:
    X = torch.Tensor([[1,2,3], [3,2,5]])
    # Y is a pytorch tensor
    Y[i] = X
I wonder how I can implement this with Pytorch.
| You can concatenate the tensors using torch.cat:
tensors = []
for i in range(N):
    X = torch.tensor([[1,2,3], [3,2,5]])
    tensors.append(X)

Y = torch.cat(tensors, dim=0)  # dim 0 is the rows of the tensor
| https://stackoverflow.com/questions/59154920/ |
Can't init the weights of my neural network PyTorch | I can't initialize the weights with the function MyNet.apply(init_weights).
These are my functions:
def init_weights(net):
    if type(net) == torch.nn.Module:
        torch.nn.init.kaiming_uniform_(net.weight)
        net.bias.data.fill_(0.01)  # all biases to 0.01
My neural net is the following:
class NeuralNet(torch.nn.Module):
    def __init__(self):
        super().__init__()  # Necessary for torch to detect this class as trainable
        # Here define network architecture
        self.layer1 = torch.nn.Linear(28**2, 32).to(device)  # Linear layer with 32 neurons
        self.layer2 = torch.nn.Linear(32, 64).to(device)     # Linear layer with 64 neurons
        self.layer3 = torch.nn.Linear(64, 128).to(device)    # Linear layer with 128 neurons
        self.output = torch.nn.Linear(128, 1).to(device)     # Linear layer with 1 output neuron (binary output)

    def forward(self, x):
        # Here define architecture behavior
        x = torch.sigmoid(self.layer1(x)).to(device)  # x = torch.nn.functional.relu(self.layer1(x))
        x = torch.sigmoid(self.layer2(x)).to(device)
        x = torch.sigmoid(self.layer3(x)).to(device)
        return torch.sigmoid(self.output(x)).to(device)  # Binary output
The type(net) prints as Linear, so it never gets inside the if statement, and if I remove it, it produces the following error:
AttributeError: 'NeuralNet' object has no attribute 'weight'
| You should init only the weight of the linear layers:
def init_weights(net):
    if type(net) == torch.nn.Linear:
        torch.nn.init.kaiming_uniform_(net.weight)
        net.bias.data.fill_(0.01)  # all biases to 0.01
| https://stackoverflow.com/questions/59156629/ |
Using autograd to compute Jacobian matrix of outputs with respect to inputs | I apologize if this question is obvious or trivial. I am very new to pytorch and I am trying to understand the autograd.grad function in pytorch. I have a neural network G that takes in inputs (x,t) and outputs (u,v). Here is the code for G:
class GeneratorNet(torch.nn.Module):
    """
    A three hidden-layer generative neural network
    """
    def __init__(self):
        super(GeneratorNet, self).__init__()
        self.hidden0 = nn.Sequential(
            nn.Linear(2, 100),
            nn.LeakyReLU(0.2)
        )
        self.hidden1 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )
        self.hidden2 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )
        self.out = nn.Sequential(
            nn.Linear(100, 2),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = self.hidden2(x)
        x = self.out(x)
        return x
Or simply G(x,t) = (u(x,t), v(x,t)) where u(x,t) and v(x,t) are scalar valued. Goal: Compute $\frac{\partial u(x,t)}{\partial x}$ and $\frac{\partial u(x,t)}{\partial t}$. At every training step, I have a minibatch of size $100$ so u(x,t) is a [100,1] tensor. Here is my attempt to compute the partial derivatives, where coords is the input (x,t) and just like below I added the requires_grad_(True) flag to the coords as well:
tensor = GeneratorNet(coords)
tensor.requires_grad_(True)
u, v = torch.split(tensor, 1, dim=1)
du = autograd.grad(u, coords, grad_outputs=torch.ones_like(u), create_graph=True,
                   retain_graph=True, only_inputs=True, allow_unused=True)[0]
du is now a [100,2] tensor.
Question: Is this the tensor of the partials for the 100 input points of the minibatch?
There are similar questions like computing derivatives of the output with respect to inputs but I could not really figure out what's going on. I apologize once again if this is already answered or trivial. Thank you very much.
| The code you posted should give you the partial derivative of your first output w.r.t. the input. However, you also have to set requires_grad_(True) on the inputs, as otherwise PyTorch does not build up the computation graph starting at the input and thus it cannot compute the gradient for them.
This version of your code example computes du and dv:
net = GeneratorNet()
coords = torch.randn(10, 2)
coords.requires_grad = True
tensor = net(coords)
u, v = torch.split(tensor, 1, dim=1)
du = torch.autograd.grad(u, coords, grad_outputs=torch.ones_like(u))[0]
dv = torch.autograd.grad(v, coords, grad_outputs=torch.ones_like(v))[0]
You can also compute the partial derivative for a single output:
net = GeneratorNet()
coords = torch.randn(10, 2)
coords.requires_grad = True
tensor = net(coords)
u, v = torch.split(tensor, 1, dim=1)
du_0 = torch.autograd.grad(u[0], coords)[0]
where du_0 == du[0].
| https://stackoverflow.com/questions/59161001/ |
PyTorch equivalent of a Tensorflow linear layer | I was trying to reimplement a Tensorflow code using PyTorch framework. Below I have included the TF sample code and my PyT interpretation.
TensorFlow implementation:
W1 = tf.Variable(xavier_init([135, 128]))
b1 = tf.Variable(tf.zeros(shape=[128]))

def fcn(x):
    z = tf.reshape(x, (-1, 135))
    out1 = leaky_relu( tf.matmul(z, W1) + b1 )
    return out1
PyTorch implementation:
class decoder(nn.Module):
    def __init__(self):
        super(decoder, self).__init__()
        self.layer_10 = nn.Linear(135, 128, bias=True)
        self.leaky = nn.LeakyReLU(0.2, inplace=False)
        init.xavier_uniform(self.layer_10.weight)

    def forward(self, x):
        z = x.view(-1, 135)
        h30 = self.leaky(self.layer_10(z))
        return h30
I was wondering what is the proper way to implement the matmul part, given that the weights in pytorch are not explicitly defined as they are in the TF (or correct me if I'm wrong).
| You do not need to explicitly call torch.matmul: it is in the implementation of the forward method of the nn.Linear layer. By calling self.layer_10(z) you are actually calling (behind the scene) the forward method that does the matrix multiplication and adds the bias for you.
If you want your code to be exactly the same, you might want to explicitly initialize the weights using the same method. For that you have nn.init that implements a variety of weight initializations. Specifically, you might find nn.init.xavier_uniform_ relevant.
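To see that equivalence explicitly, a quick sanity check (shapes follow the code above):
layer = nn.Linear(135, 128)
z = torch.randn(4, 135)
out1 = layer(z)
out2 = z @ layer.weight.t() + layer.bias  # what forward does behind the scenes
print(torch.allclose(out1, out2))         # True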
| https://stackoverflow.com/questions/59167849/ |
How to have two optimizers such that one optimizer trains the whole parameter and the other trains partial of the parameter? | I have a model:
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(128, 128, (3,3))
        self.conv2 = nn.Conv2d(128, 256, (3,3))
        self.conv3 = nn.Conv2d(256, 256, (3,3))

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        return x

model = MyModel()
model = MyModel()
I want to train the model in such a way that in every training step DATA_X1 should train
the ['conv1', 'conv2', 'conv3'] layers and DATA_X2 should train only the ['conv3'] layer.
I tried making two optimizer:
# Full parameters train
all_params = model.parameters()
all_optimizer = optim.Adam(all_params, lr=0.01)

# Partial parameters train
partial_params = model.parameters()
for p, (name, param) in zip(list(partial_params), model.named_parameters()):
    if name in ['conv3']:
        p.requires_grad = True
    else:
        p.requires_grad = False
partial_optimizer = optim.Adam(partial_params, lr=0.01)
But this affects both optimizers, since requires_grad = False is set on the shared parameters.
Is there any way I can do this?
| Why not build this functionality into the model?
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(128, 128, (3,3))
        self.conv2 = nn.Conv2d(128, 256, (3,3))
        self.conv3 = nn.Conv2d(256, 256, (3,3))
        self.partial_grad = False  # a flag

    def forward(self, x):
        if self.partial_grad:
            with torch.no_grad():
                x = F.relu(self.conv1(x))
                x = F.relu(self.conv2(x))
        else:
            x = F.relu(self.conv1(x))
            x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        return x
Now you can have a single optimizer with all the parameters, and you can switch model.partial_grad on and off according to your training data:
optimizer.zero_grad()
model.partial_grad = False # prep for DATA_X1 training
x1, y1 = DATA_X1.item() # this is not really a code, but you get the point
out = model(x1)
loss = criterion(out, y1)
loss.backward()
optimizer.step()
# do a partial opt for DATA_X2
optimizer.zero_grad()
model.partial_grad = True # prep for DATA_X2 training
x2, y2 = DATA_X2.item() # this is not really a code, but you get the point
out = model(x2)
loss = criterion(out, y2)
loss.backward()
optimizer.step()
Having a single optimizer should be more beneficial since you can track the momentum and the change of parameters across both datasets.
| https://stackoverflow.com/questions/59170388/ |
How to properly implement data reorganization using PyTorch? | It's going to be a long post, sorry in advance...
I'm working on a denoising algorithm and my goal is to:
Use PyTorch to design / train the model
Convert the PyTorch model into a CoreML model
The denoising algorithm consists in the following 3 parts:
A "down-sampling" + noise level map
A regular convnet
An "up-sampling"
The first part is quite simple in its idea, but not so easy to explain. Given for instance an input color image and a input value "sigma" that represents the standard deviation of the image noise.
The "down-sampling" part is in fact a space-to-depth. In short, for a given channel and for a subset of 2x2 pixels, the space-to-depth creates a single pixel composed of 4 channels. The number of channels is multiplied by 4 while the height and width are divided by 2. The data is simply reorganized.
The noise level map consists in creating 3 channels containing the standard deviation value so that the convnet knows how to properly denoise the input image.
This will be maybe more clear with some code:
def downsample_and_noise_map(input, sigma):
    # Input tensor size (batch, channels, height, width)
    in_n, in_c, in_h, in_w = input.size()

    # Output tensor size
    out_h = in_h // 2
    out_w = in_w // 2
    sigma_c = in_c      # nb of channels of the standard deviation tensor
    image_c = in_c * 4  # nb of channels of the image tensor

    # Standard deviation tensor
    output_sigma = sigma.view(1, 1, 1, 1).repeat(in_n, sigma_c, out_h, out_w)

    # Image tensor
    output_image = torch.zeros((in_n, image_c, out_h, out_w))
    output_image[:, 0::4, :, :] = input[:, :, 0::2, 0::2]
    output_image[:, 1::4, :, :] = input[:, :, 0::2, 1::2]
    output_image[:, 2::4, :, :] = input[:, :, 1::2, 0::2]
    output_image[:, 3::4, :, :] = input[:, :, 1::2, 1::2]

    # Concatenate standard deviation and image tensors
    return torch.cat((output_sigma, output_image), dim=1)
This function is then called as the first step in the model's forward function:
def forward(self, x, sigma):
    x = downsample_and_noise_map(x, sigma)
    x = self.convnet(x)
    x = upsample(x)
    return x
Let's consider an input tensor of size 1x3x100x100 (PyTorch standard: batch, channels, height, width) and a sigma value of 0.1. The output tensor has the following properties:
Tensor's shape is 1x15x50x50
Tensor's values for channels 0, 1 and 2 are all equal to sigma = 0.1
Tensor's values for channels 3, 4, 5, 6 are composed of the input image values of channel 0
Tensor's values for channels 7, 8, 9, 10 are composed of the input image values of channel 1
Tensor's values for channels 11, 12, 13, 14 are composed of the input image values of channel 2
If this code is not clear enough, I can post an even more naive version.
The up-sampling part is the reciprocal function of the downsampling one.
I was able to use this function for training and testing in PyTorch.
Then, I tried to convert the model to CoreML with ONNX as an intermediate step.
The conversion to ONNX generated "TracerWarning". Conversion from ONNX to CoreML failed (TypeError: 1.0 has type numpy.float64, but expected one of: int, long). The problem came from the down-sampling + noise level map (and from up-sampling too).
When I removed the down-sampling + noise level map and up-sampling layers, I was able to convert to ONNX and to CoreML very easily since only a simple convnet remained. This means I have a solution to my problem: implement these 2 layers using 2 shaders on the mobile side. But I'm not satisfied with this solution as I want my model to contain all layers ^^
Before considering writing a post here, I crawled Internet to find an answer and I was able to write a better version of the previous function using reshape and permute. This version removed all ONNX warning, but the CoreML conversion still failed...
def downsample_and_noise_map(input, sigma):
    # Input image size
    in_n, in_c, in_h, in_w = input.size()

    # Output tensor size
    out_n = in_n
    out_h = in_h // 2
    out_w = in_w // 2

    # Create standard deviation tensor
    output_sigma = sigma.view(out_n, 1, 1, 1).repeat(out_n, in_c, out_h, out_w)

    # Split RGB channels
    channels_rgb = torch.split(input, 1, dim=1)

    # Reshape (space-to-depth) each image channel
    channels_reshaped = []
    for channel in channels_rgb:
        channel = channel.reshape(1, out_h, 2, out_w, 2)
        channel = channel.permute(2, 4, 0, 1, 3)
        channel = channel.reshape(1, 4, out_h, out_w)
        channels_reshaped.append(channel)

    # Concatenate all reshaped image channels together
    output_image = torch.cat(channels_reshaped, dim=1)

    # Concatenate standard deviation and image tensors
    output = torch.cat([output_sigma, output_image], dim=1)
    return output
So here are (some of) my questions:
What is the preferred PyTorch way to implement a function such as downsample_and_noise_map function within a model?
Same question but when the conversion to ONNX and then to CoreML is part of the equation?
Is the PyTorch -> ONNX -> CoreML still best path to deploy the model for iOS production?
Thanks for your help (and your patience) ^^
| Disclaimer I'm not familiar with CoreML or deploying to iOS but I do have experience deploying PyTorch models in TensorRT and OpenVINO via ONNX.
The main issues I've faced when deploying to other frameworks is that operations like slicing and repeating tensors tend to have limited support in other frameworks. Often we can construct equivalent conv or transpose-conv operations which achieve the desired behavior.
In order to ensure we don't export the logic used to construct the conv weights I've separated the weight initialization from the application of the weights. This makes the ONNX export much more straightforward since all it sees is some constant tensors being applied.
class DownsampleAndNoiseMap():
    def __init__(self):
        self.initialized = False
        self.weight = None
        self.zeros = None

    def init_weights(self, input):
        with torch.no_grad():
            in_n, in_c, in_h, in_w = input.size()
            out_h = int(in_h // 2)
            out_w = int(in_w // 2)
            sigma_c = in_c
            image_c = in_c * 4

            # conv weights used for downsampling
            self.weight = torch.zeros(image_c, in_c, 2, 2).to(input)
            for c in range(in_c):
                self.weight[4 * c, c, 0, 0] = 1
                self.weight[4 * c + 1, c, 0, 1] = 1
                self.weight[4 * c + 2, c, 1, 0] = 1
                self.weight[4 * c + 3, c, 1, 1] = 1

            # zeros used to replace repeat
            self.zeros = torch.zeros(in_n, sigma_c, out_h, out_w).to(input)
        self.initialized = True

    def __call__(self, input, sigma):
        assert self.initialized
        output_sigma = self.zeros + sigma
        output_image = torch.nn.functional.conv2d(input, self.weight, stride=2)
        return torch.cat((output_sigma, output_image), dim=1)

class Upsample():
    def __init__(self):
        self.initialized = False
        self.weight = None

    def init_weights(self, input):
        with torch.no_grad():
            in_n, in_c, in_h, in_w = input.size()
            image_c = in_c * 4
            self.weight = torch.zeros(in_c + image_c, in_c, 2, 2).to(input)
            for c in range(in_c):
                self.weight[in_c + 4 * c, c, 0, 0] = 1
                self.weight[in_c + 4 * c + 1, c, 0, 1] = 1
                self.weight[in_c + 4 * c + 2, c, 1, 0] = 1
                self.weight[in_c + 4 * c + 3, c, 1, 1] = 1
        self.initialized = True

    def __call__(self, input):
        assert self.initialized
        return torch.nn.functional.conv_transpose2d(input, self.weight, stride=2)
I made the assumption that upsample was the reciprocal of downsample in the sense that x == upsample(downsample_and_noise_map(x, sigma)) (correct me if I'm wrong in this assumption). I also verified that my version of downsample agrees with yours.
# consistency checking code
x = torch.randn(1, 3, 100, 100)
sigma = torch.randn(1)
# OP downsampling
y1 = downsample_and_noise_map(x, sigma)
ds = DownsampleAndNoiseMap()
ds.init_weights(x)
y2 = ds(x, sigma)
print('downsample diff:', torch.sum(torch.abs(y1 - y2)).item())
us = Upsample()
us.init_weights(x)
x_recov = us(ds(x, sigma))
print('recovery error:', torch.sum(torch.abs(x - x_recov)).item())
which results in
downsample diff: 0.0
recovery error: 0.0
Exporting to ONNX
When exporting we need to invoke init_weights for the new classes before using torch.onnx.export. For example
class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.downsample = DownsampleAndNoiseMap()
        self.upsample = Upsample()
        self.convnet = lambda x: x  # placeholder

    def init_weights(self, x):
        self.downsample.init_weights(x)
        self.upsample.init_weights(x)

    def forward(self, x, sigma):
        x = self.downsample(x, sigma)
        x = self.convnet(x)
        x = self.upsample(x)
        return x

x = torch.randn(1, 3, 100, 100)
sigma = torch.randn(1)
model = Model()
# ... load state dict here
model.init_weights(x)
torch.onnx.export(model, (x, sigma), 'deploy.onnx', verbose=True, input_names=["input", "sigma"], output_names=["output"])
which gives the ONNX graph
graph(%input : Float(1, 3, 100, 100)
%sigma : Float(1)) {
%2 : Float(1, 3, 50, 50) = onnx::Constant[value=<Tensor>](), scope: Model
%3 : Float(1, 3, 50, 50) = onnx::Add(%2, %sigma), scope: Model
%4 : Float(12, 3, 2, 2) = onnx::Constant[value=<Tensor>](), scope: Model
%5 : Float(1, 12, 50, 50) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%input, %4), scope: Model
%6 : Float(1, 15, 50, 50) = onnx::Concat[axis=1](%3, %5), scope: Model
%7 : Float(15, 3, 2, 2) = onnx::Constant[value=<Tensor>](), scope: Model
%output : Float(1, 3, 100, 100) = onnx::ConvTranspose[dilations=[1, 1], group=1, kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%6, %7), scope: Model
return (%output);
}
As for the last question about the recommended way to deploy on iOS I can't answer that since I don't have experience in that area.
| https://stackoverflow.com/questions/59177052/ |
Multi-class for sentence classification with pytorch (Using nn.LSTM) | I have this network, that I took from this tutorial, and I want to have sentences as input (Which is already done) and just a one line tensor as a result.
From the tutorial, this sentence “John’s dog likes food”, gets a 1 column tensor returned:
tensor([[-3.0462, -4.0106, -0.6096],
[-4.8205, -0.0286, -3.9045],
[-3.7876, -4.1355, -0.0394],
[-0.0185, -4.7874, -4.6013]])
...and class list:
tag_list[ “name”, “verb”, “noun”]
Each line has the probability of a tag being associated with the word. (The first word has [-3.0462, -4.0106, -0.6096] vector where the last element corresponds to the maximum scoring tag, "noun")
The tutorial’s dataset looks like this:
training_data = [
("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
And I want mine to be of this format:
training_data = [
("Hello world".split(), ["ONE"]),
("I am dog".split(), ["TWO"]),
("It's Britney glitch".split(), ["THREE"])
]
The parameters are defined as:
class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores
As of now, the sizes of the input and output do not match and I get:
ValueError: Expected input batch_size (2) to match target batch_size (1).
The criterion function doesn't accept the input due to size missmatch it seems:
loss = criterion(tag_scores, targets)
I've read that the last layer could be defined as nn.Linear in order to squash the outputs, but I can't seem to get any results. I tried other loss functions.
How can I change it so that the model classifies the sentence, and not each word as in the original tutorial?
| I solved this issue by simply taking the hidden state of the last time step:
tag_space = self.hidden2tag(lstm_out[-1])
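With the forward method from the question, the change looks roughly like this (only the tag_space line differs):
def forward(self, sentence):
    embeds = self.word_embeddings(sentence)
    lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
    tag_space = self.hidden2tag(lstm_out[-1])     # last time step only
    tag_scores = F.log_softmax(tag_space, dim=1)  # one row of scores per sentence
    return tag_scores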
| https://stackoverflow.com/questions/59177430/ |
Hidden size vs input size in RNN | Premise 1:
Regarding neurons in a RNN layer - it is my understanding that at "each time step, every neuron receives both the input vector x (t) and the output vector from the previous time step y (t –1)" [1]:
Premise 2:
It is also my understanding that in Pytorch's GRU layer, input_size and hidden_size mean the following:
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
So naturally, hidden_size should represent the number of neurons in a GRU layer.
My question:
Given the following GRU layer:
# assume that hidden_size = 3
class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, hidden_size):
        super(Encoder, self).__init__()
        self.embedding = nn.Embedding(src_dictionary_size, hidden_size)
        self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size)
Assuming a hidden_size of 3, my understanding is that the GRU layer above would have 3 neurons, each which accepts an input vector of size 3 simultaneously for every timestep.
My question is: why do the arguments to hidden_size and input_size have to be equal? I.e. why can't each of the 3 neurons accept say, an input vector of size 5?
Case in point: both of the following produce size mismatch:
self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size-1)
self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size+1)
[1] Géron, Aurélien. Hands-On Machine Learning with Scikit-Learn and TensorFlow (p. 388). O'Reilly Media. Kindle Edition.
[3] https://pytorch.org/docs/stable/nn.html#torch.nn.GRU
Adding full code for reproducibility:
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, hidden_size):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(src_dictionary_size, hidden_size)
        self.gru = nn.GRU(input_size = hidden_size, hidden_size = hidden_size-1)

    def forward(self, pad_seqs, seq_lengths, hidden):
        """
        Args:
          pad_seqs of shape (max_seq_length, batch_size, 1): Padded source sequences.
          seq_lengths: List of sequence lengths.
          hidden of shape (1, batch_size, hidden_size): Initial states of the GRU.

        Returns:
          outputs of shape (max_seq_length, batch_size, hidden_size): Padded outputs of GRU at every step.
          hidden of shape (1, batch_size, hidden_size): Updated states of the GRU.
        """
        embedded_sqs = self.embedding(pad_seqs).squeeze(2)
        packed_sqs = pack_padded_sequence(embedded_sqs, seq_lengths)
        packed_output, h_n = self.gru(packed_sqs, hidden)
        output, input_sizes = pad_packed_sequence(packed_output)
        return output, h_n

    def init_hidden(self, batch_size=1):
        return torch.zeros(1, batch_size, self.hidden_size)

def test_Encoder_shapes():
    hidden_size = 5
    encoder = Encoder(src_dictionary_size=5, hidden_size=hidden_size)

    # maximum word count
    max_seq_length = 4
    # num sentences
    batch_size = 2
    hidden = encoder.init_hidden(batch_size=batch_size)

    # these are padded sequences (sentences of words). There are 2 sentences (i.e. 2 batches) with a maximum of 4 words.
    pad_seqs = torch.tensor([
        [1, 2],
        [2, 3],
        [3, 0],
        [4, 0]
    ]).view(max_seq_length, batch_size, 1)

    outputs, new_hidden = encoder.forward(pad_seqs=pad_seqs, seq_lengths=[4, 2], hidden=hidden)
    assert outputs.shape == torch.Size([4, batch_size, hidden_size]), f"Bad outputs.shape: {outputs.shape}"
    assert new_hidden.shape == torch.Size([1, batch_size, hidden_size]), f"Bad new_hidden.shape: {new_hidden.shape}"
    print('Success')

test_Encoder_shapes()
| I just resolved this and the mistake was self-inflicted.
Conclusion: input_size and hidden_size can differ in size and there is no inherent problem with this. The premises in the question are correctly stated.
The problem with the (full) code above was that the initial hidden state of the GRU did not have the correct dimensions. The initial hidden state must have the same dimensions as subsequent hidden states. In my case, the initial hidden state had the shape of (1,2,5) instead of (1,2,4). In the former, 5 represents the dimensionality of the embedding vector. 4 represents the hidden_size (num neurons) in the GRU. The correct code is below:
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, src_dictionary_size, input_size, hidden_size):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(src_dictionary_size, input_size)
        self.gru = nn.GRU(input_size = input_size, hidden_size = hidden_size)

    def forward(self, pad_seqs, seq_lengths, hidden):
        """
        Args:
          pad_seqs of shape (max_seq_length, batch_size, 1): Padded source sequences.
          seq_lengths: List of sequence lengths.
          hidden of shape (1, batch_size, hidden_size): Initial states of the GRU.

        Returns:
          outputs of shape (max_seq_length, batch_size, hidden_size): Padded outputs of GRU at every step.
          hidden of shape (1, batch_size, hidden_size): Updated states of the GRU.
        """
        embedded_sqs = self.embedding(pad_seqs).squeeze(2)
        packed_sqs = pack_padded_sequence(embedded_sqs, seq_lengths)
        packed_output, h_n = self.gru(packed_sqs, hidden)
        output, input_sizes = pad_packed_sequence(packed_output)
        return output, h_n

    def init_hidden(self, batch_size=1):
        return torch.zeros(1, batch_size, self.hidden_size)

def test_Encoder_shapes():
    hidden_size = 4
    embedding_size = 5
    encoder = Encoder(src_dictionary_size=5, input_size = embedding_size, hidden_size = hidden_size)
    print(encoder)

    max_seq_length = 4
    batch_size = 2
    hidden = encoder.init_hidden(batch_size=batch_size)
    pad_seqs = torch.tensor([
        [1, 2],
        [2, 3],
        [3, 0],
        [4, 0]
    ]).view(max_seq_length, batch_size, 1)

    outputs, new_hidden = encoder.forward(pad_seqs=pad_seqs, seq_lengths=[4, 2], hidden=hidden)
    assert outputs.shape == torch.Size([4, batch_size, hidden_size]), f"Bad outputs.shape: {outputs.shape}"
    assert new_hidden.shape == torch.Size([1, batch_size, hidden_size]), f"Bad new_hidden.shape: {new_hidden.shape}"
    print('Success')

test_Encoder_shapes()
| https://stackoverflow.com/questions/59182518/ |
In Pytorch, what is the most efficient way to copy the learned params of a model as the initialization for a second model of the same architecture? | I have a CNN Model which has the following architecture:
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 32, (8, 8), 4)
        self.conv2 = nn.Conv2d(32, 64, (4, 4), 2)
        self.conv3 = nn.Conv2d(64, 64, (3, 3), 1)
        self.dense = nn.Linear(4*4*64, 512)
        self.out = nn.Linear(512, 18)
I am training it using a certain optimizer. I then want to use these learned parameters from the first model as the initialization scheme for a second model of the exact same architecture (as opposed to using, say, Xavier). I am aware that I need to use model_object.apply(initalization_function), but what would be the most efficient way to do this vis-a-vis the initialization scheme I described where I am using the learned parameters from another model as initialization for a new model?
| If you want to load model1 parameters in model2, I believe this would work:
model2.load_state_dict(model1.state_dict())
See an example of something similar in the official PyTorch transfer learning tutorial
| https://stackoverflow.com/questions/59183865/ |
How to train only front part of a neural network? | I'm using pytorch to train part of the network. For example, I have a model structure
hidden1 = Layer1(x)
hidden2 = Layer2(hidden1)
out = Layer3(hidden2)
If I want to train Layer3 only, I can use
hidden1 = Layer1(x)
hidden2 = Layer2(hidden1).detach()
out = Layer3(hidden2)
However, this time I want to train Layer1 only. How can I achieve this? Thanks.
| detach will not really "freeze" your layer.
If you don't want to train a layer, you should use requires_grad=False instead.
For example:
hidden2.weight.requires_grad = False
hidden2.bias.requires_grad = False
Then to unfreeze, you do the same with requires_grad=True.
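For the question's case (train Layer1 only), a minimal sketch — assuming the layers are registered as attributes like self.layer2 and self.layer3 — would be:
for param in model.layer2.parameters():
    param.requires_grad = False
for param in model.layer3.parameters():
    param.requires_grad = False

# only hand trainable parameters to the optimizer
optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=0.01)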
| https://stackoverflow.com/questions/59186114/ |
PyTorch equivalent of a Tensorflow loss function | I was trying to reimplement a TensorFlow code using PyTorch framework. Below I have included the TF sample code and my PyT interpretation, for a target of size (Batch, 9, 9, 4) and a network output of size (Batch, 9, 9, 4)
TensorFlow implementation:
loss = tf.nn.softmax_cross_entropy_with_logits(labels=target, logits=output)
loss = tf.matrix_band_part(loss, 0, -1) - tf.matrix_band_part(loss, 0, 0)
PyTorch implementation:
output = torch.tensor(output, requires_grad=True).view(-1, 4)
target = torch.tensor(target).view(-1, 4).argmax(1)
loss = torch.nn.CrossEntropyLoss(reduction='none')
my_loss = loss(output, target).view(-1,9,9)
For the PyTorch implementation, I'm not sure how to implement tf.matrix_band_part. I was thinking about defining a mask, but I was not sure if that would hurt the backpropagation or not. I am aware of torch.triu, but this function does not work for tensors with more than 2 dimensions.
| Since (at least) version 1.2.0 torch.triu works with batches well (as per docs).
You can get diagonal elements via einsum: torch.einsum('...ii->...i', A).
Applying a mask doesn't hurt backprop. You can think about it as a projection (which obviously works well with backprop).
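For the (-1, 9, 9) loss above, a sketch of the band-part replacement (band_part(loss, 0, -1) - band_part(loss, 0, 0) keeps the strictly upper triangle):
mask = torch.triu(torch.ones(9, 9), diagonal=1)  # strictly-upper-triangular mask
my_loss = my_loss * mask  # broadcasts over the batch dimension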
| https://stackoverflow.com/questions/59187874/ |
Expected object of scalar type Short but got scalar type Float for argument #2 'mat2' in call to _th_mm | I have data type problem implementing LSTM with Pytorch. Based on similar problems I tried to change the format of Input, h and c to ShortTensor as you can see but I still get the same error:
RuntimeError: Expected object of scalar type Short but got scalar type
Float for argument #2 'mat2' in call to _th_mm
class data(Dataset):
    def __init__(self, samples=10000, number=30):
        self.x = torch.from_numpy(np.matrix(np.random.random_integers(0,9,samples*number).reshape(samples, number)))
        self.y = torch.from_numpy(np.zeros((samples))).type(torch.ShortTensor)
        for index, row in enumerate(self.x):
            self.y[index] = 1 if torch.sum(row) >= 130 else 0

class LSTM(nn.Module):
    def __init__(self, i_size, h_size, n_layer, batch_size=30):
        super().__init__()
        self.lstm = nn.LSTM(input_size=i_size, hidden_size=h_size, num_layers=n_layer)
        self.h = torch.randn(n_layer, batch_size, h_size).type(torch.ShortTensor)
        self.c = torch.randn(n_layer, batch_size, h_size).type(torch.ShortTensor)
        self.hidden = (self.h, self.c)
        self.linear = nn.Linear(n_layer, 1)

    def forward(self, x):
        out, hidden = self.lstm(x.type(torch.ShortTensor), self.hidden)
        out = nn.Softmax(self.linear(out.short()))
        return out
data_set = data()
train_data = data_set.x[0:8000, :, None]
train_label = data_set.y[0:8000]
test_data = data_set.x[8000:, : , None]
test_label = data_set.y[8000:]
input_size = 1
hidden_size = 30
layer_num = 200
model_LSTM = LSTM(input_size, hidden_size, layer_num)
#model_LSTM.cuda()
y_ = model_LSTM(train_data)
| I was producing the input data in int16, and apparently nn.LSTM takes only float32; the error wasn't clear about that.
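A minimal sketch of the fix — keep everything in float32 (the ShortTensor casts inside the model and for the hidden state should be dropped as well):
train_data = data_set.x[0:8000, :, None].float()  # inputs as float32
# and in forward: out, hidden = self.lstm(x, self.hidden)  # no .type(torch.ShortTensor)
y_ = model_LSTM(train_data)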
| https://stackoverflow.com/questions/59194333/ |
Python recursive function failing | The issue that I am having is a really strange issue.
What I am trying to accomplish is the following: I am training a neural network using pytorch, and I want to restart my training function if the training loss doesn't decrease, so as to re-initialize the neural network with a different set of weights. The training function is presented below:
def __train__(dp, i, j, net, restarts, epoch=0):
    if net == '2CH': model = TwoChannelCNN().cuda()
    elif net == 'Siam' : model = SiameseCNN().cuda()
    elif net == 'Trad' : model = TraditionalCNN().cuda()
    ls_fn = torch.nn.MSELoss(reduce=True)
    optim = torch.optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
    epochs = np.arange(100)
    eloss = []
    for epoch in epochs:
        model.train()
        train_loss = []
        tr_batches = np.array_split(dp.train_set, int(len(dp.train_set)/8))
        for tr_batch in tr_batches:
            if net == '2CH': loaded_batch = dp.__load2CH__(tr_batch)
            elif net == 'Siam': loaded_batch = dp.__loadSiam__(tr_batch)
            elif net == 'Trad' : loaded_batch = dp.__load__(tr_batch, i)
            for x_batch, y_batch in loaded_batch:
                x_var, y_var = Variable(x_batch.cuda()), Variable(y_batch.cuda())
                y_pred = torch.clamp(model(x_var), 0, 1)
                loss = ls_fn(y_pred, y_var)
                train_loss.append(abs(loss.item()))
                optim.zero_grad()
                loss.backward()
                optim.step()
        eloss.append(np.mean(train_loss))
        print(epoch, np.mean(train_loss))
        if epoch == 10 and np.mean(train_loss) > 0.2:
            restarts += 1
            print('Number of restarts for client {} and fold {}: {}'.format(i,j,restarts))
            __train__(dp, i, j, net, restarts, epoch=0)
    __plotLoss__(epochs, eloss, 'train', str(i), str(j))
    torch.save(model.state_dict(), "Output/client_{}_fold_{}.pt".format(i, j))
So the restarting based on if epoch == 10 and np.mean(train_loss) > 0.2: works, but only sometimes, which is beyond my comprehension. Here is an example of the output:
0 0.5000133737921715
1 0.4999906486272812
2 0.464298670232296
3 0.2727506290078163
4 0.2628978116512299
5 0.2588871221542358
6 0.25728522151708605
7 0.25630473804473874
8 0.2556223524808884
9 0.25522999209165576
10 0.25467908215522767
Number of restarts for client 5 and fold 1: 3
0 0.10957609283713009
1 0.02840371729924134
2 0.021477583368030594
3 0.017759160268232682
4 0.015173796122947827
5 0.013349939693290782
6 0.011949078906879265
7 0.010810676779671655
8 0.00987362345259362
9 0.009110640348696108
10 0.008239036202623808
11 0.007680381585537574
12 0.007171026876221333
13 0.006765962297888837
14 0.006428168776848068
15 0.006133011780953467
16 0.005819878347673745
17 0.005572605537395361
18 0.00535818950227004
19 0.005159409143814457
20 0.0049763926251294235
21 0.004738794513338235
22 0.004578812885309958
23 0.004428663117960554
24 0.004282198464788351
25 0.004145324644400691
26 0.004018862769889626
27 0.0039044404603504573
28 0.0037960831121495744
29 0.0036947361258523586
30 0.0035982220717533267
31 0.0035018146670104723
32 0.0034150678806059887
33 0.0033372560733512698
34 0.003261332974241583
35 0.00318166259540763
36 0.003108531899014735
37 0.0030385089141125848
38 0.002977990984523103
39 0.0029195284016142937
40 0.002870084639441188
41 0.0028180573325994373
42 0.0027717544270049643
43 0.002719321814503495
44 0.0026704726860933194
45 0.0026204266263459316
46 0.002570544072460258
47 0.0025225681523167224
48 0.0024814611543610746
49 0.0024358948737413116
50 0.002398673941639636
51 0.0023606415423654587
52 0.002330436484101057
53 0.0022891738560574027
54 0.002260655496376241
55 0.002227568955708719
56 0.002191826719741698
57 0.0021609061182290058
58 0.0021279943092100666
59 0.0020966088490456513
60 0.002066195117003474
61 0.0020381672924407895
62 0.002009863329306995
63 0.001986304977759602
64 0.0019564831849032487
65 0.0019351609173580756
66 0.0019077356409993626
67 0.0018875047204855945
68 0.0018617453310780547
69 0.001839518720600381
70 0.001815563331498197
71 0.0017149778925132932
72 0.0016894878409248121
73 0.0016652211918212743
74 0.0016422999463582074
75 0.0016183732903472788
76 0.0015962369183098418
77 0.0015757764620279887
78 0.0015542267022799728
79 0.0015323152910759318
80 0.0014337954093957706
81 0.001410489170542867
82 0.0013871921329466962
83 0.0013641994057461773
84 0.001345829172682187
85 0.001322142209181493
86 0.00130379223035348
87 0.001282231878045458
88 0.001263879886683956
89 0.001243419097817167
90 0.0012279346547037929
91 0.001206978429649382
92 0.0011871445969959496
93 0.001172510546330841
94 0.0011529557384797045
95 0.0011350733004023273
96 0.001118382818282214
97 0.001103347793609089
98 0.0010848538354748599
99 0.0010698940242660911
11 0.2542190085053444
12 0.2538975296020508
So here you can see that the restarting is correct from the 3rd restart, but then, since the network converges, the training should be complete, but the function restarts AGAIN after the 99th epoch (for an unknown reason), and somehow starts at the 11th epoch, which also makes no sense as I am explicitly specifying epoch = 0 whenever the function starts or restarts. I should also add that, SOMETIMES, the function completes correctly after the epoch 99, when convergence has been achieved, and does not restart.
So my question is, why does this piece of code produce inconsistent results and outcomes? What am I missing here? Thanks in advance for any suggestions.
| You are restarting the training by calling __train__ a second time in the if epoch == 10 and np.mean(train_loss) > 0.2: branch, but you never terminate the first loop.
So, after the second training has converged, the outer loop continues at epoch 11.
What you need is a break statement after the inner call to __train__.
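A minimal sketch of that fix, using the names from the question (only the restart branch changes):
if epoch == 10 and np.mean(train_loss) > 0.2:
    restarts += 1
    print('Number of restarts for client {} and fold {}: {}'.format(i, j, restarts))
    __train__(dp, i, j, net, restarts, epoch=0)
    # terminate the abandoned outer loop; a plain `return` here also works and
    # additionally skips the duplicate plot/save that follows the loop
    break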
| https://stackoverflow.com/questions/59209048/ |
How to convert Onnx model (.onnx) to Tensorflow (.pb) model | I am trying to convert a .onnx model to a .pb model. I have written the code but I am getting this error:
@tf_func(tf.ceil)AttributeError: module 'tensorflow' has no attribute 'ceil'
Code:
import onnx
from tensorflow.python.tools.import_pb_to_tensorboard import import_to_tensorboard
from onnx_tf.backend import prepare
onnx_model = onnx.load("original_3dlm.onnx")
tf_rep = prepare(onnx_model)
tf_rep.export_graph("model_var.pb")
import_to_tensorboard("model_var.pb", "tb_log")
How to resolve this issue? Is there any other way to convert Onnx to Tensorflow?
| I solved this issue with the Tensorflow Backend for ONNX (onnx-tensorflow).
Changing from TensorFlow 2.0 to 1.14 (e.g. pip install tensorflow==1.14) may also solve the problem, since TensorFlow 2.0 removed the top-level tf.ceil alias (it became tf.math.ceil), which is what the traceback complains about.
Let me know if you have any issue.
| https://stackoverflow.com/questions/59209061/ |
calculate perplexity in pytorch | I've just trained an LSTM language model using pytorch. The main body of the class is this:
class LM(nn.Module):
def __init__(self, n_vocab,
seq_size,
embedding_size,
lstm_size,
pretrained_embed):
super(LM, self).__init__()
self.seq_size = seq_size
self.lstm_size = lstm_size
self.embedding = nn.Embedding.from_pretrained(pretrained_embed, freeze = True)
self.lstm = nn.LSTM(embedding_size,
lstm_size,
batch_first=True)
self.fc = nn.Linear(lstm_size, n_vocab)
def forward(self, x, prev_state):
embed = self.embedding(x)
output, state = self.lstm(embed, prev_state)
logits = self.fc(output)
return logits, state
Now I want to write a function which calculates how good a sentence is, based on the trained language model (some score like perplexity, etc.).
I'm a bit confused and I don't know how I should calculate this. A similar sample would be of great use.
| When using cross-entropy loss you can just apply the exponential function torch.exp() to your loss to calculate perplexity.
(PyTorch's cross-entropy operates on natural logarithms internally, so torch.exp is the matching inverse.)
So here is just some dummy example:
import torch
import torch.nn.functional as F
num_classes = 10
batch_size = 1
# your model outputs / logits
output = torch.rand(batch_size, num_classes)
# your targets
target = torch.randint(num_classes, (batch_size,))
# getting loss using cross entropy
loss = F.cross_entropy(output, target)
# calculating perplexity
perplexity = torch.exp(loss)
print('Loss:', loss, 'PP:', perplexity)
In my case the output is:
Loss: tensor(2.7935) PP: tensor(16.3376)
Just be aware that if you want the per-word perplexity you need per-word losses as well.
Here is a neat example for a language model that might be interesting to look at that also computes the perplexity from the output:
https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/language_model/main.py#L30-L50
| https://stackoverflow.com/questions/59209086/ |
Proper dataloader setup to train fasterrcnn-resnet50 for object detection with pytorch | I am trying to train pytorches torchvision.models.detection.fasterrcnn_resnet50_fpn to detect objects in my own images.
According to the documentation, this model expects a list of images and a list of dictionaries with
'boxes' and 'labels' as keys. So my dataset's __getitem__() looks like this:
def __getitem__(self, idx):
# load images
_, img = self.images[idx].getImage()
img = Image.fromarray(img, mode='RGB')
objects = self.images[idx].objects
boxes = []
labels = []
for o in objects:
# append bbox to boxes
boxes.append([o.x, o.y, o.x+o.width, o.y+o.height])
# append the 4th char of class_id, the number of lights (1-4)
labels.append(int(str(o.class_id)[3]))
# convert everything into a torch.Tensor
boxes = torch.as_tensor(boxes, dtype=torch.float32)
labels = torch.as_tensor(labels, dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
# transforms consists only of transforms.Compose([transforms.ToTensor()]) for the time being
if self.transforms is not None:
img = self.transforms(img)
return img, target
To my best knowledge, it returns exactly what's asked. My dataloader looks like this
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=4, shuffle=False, num_workers=2)
however, when it get's to this stage:
for images, targets in dataloaders[phase]:
it raises
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 12 and 7 in dimension 1 at C:\w\1\s\windows\pytorch\aten\src\TH/generic/THTensor.cpp:689
Can someone point me in the right direction?
| @jodag was right, I had to write a separate collate function in order for the net to receive the data like it was supposed to. In my case I only needed to bypass the default function.
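A minimal sketch of such a collate function (this is also the approach the torchvision detection references take): keep the variable-sized images and targets as tuples instead of trying to stack them into one tensor.
def collate_fn(batch):
    return tuple(zip(*batch))

data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=4, shuffle=False, num_workers=2,
    collate_fn=collate_fn)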
| https://stackoverflow.com/questions/59215086/ |
Does data = batch['data'].cuda().function().cpu() make sense? | I have a dataset, which I call with batch['data'] and get my image output MxM. After I get my image I want to process it with some numpy operations. In this process I want my dataset to give me the image with GPU and changing the outputs device to CPU after that.
My question is, is concetanation of functions in Python being executed in an order? And can I make this process with
base = batch['data'].cuda().function().cpu()
And is this the same as:
base = batch['data'].cuda().function()
base.cpu()
Thanks in advance!
| Yes, chained method calls are evaluated left to right, and the hardware will do the same work in both snippets, but the result you end up with is not the same.
base = batch['data'].cuda().cpu()
After that line, you have the output of cpu() stored in the variable called base.
base = batch['data'].cuda()
base.cpu()
After these two lines, you have the output of cuda() stored in the variable called base and you have forgotten the result of cpu().
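A minimal sketch of the second variant rewritten so it keeps the result (function() is the question's placeholder): tensor.cpu() returns a new tensor rather than moving anything in place, so the result must be assigned.
base = batch['data'].cuda().function()
base = base.cpu()  # now base holds the CPU copy, equivalent to the one-liner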
| https://stackoverflow.com/questions/59215256/ |
RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224] | This is the error i get when I try to train my network.
The class we used to store images from the Caltech 101 dataset was provided to us by our teachers.
from torchvision.datasets import VisionDataset
from PIL import Image
import os
import os.path
import sys
def pil_loader(path):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
class Caltech(VisionDataset):
def __init__(self, root, split='train', transform=None, target_transform=None):
super(Caltech, self).__init__(root, transform=transform, target_transform=target_transform)
self.split = split # This defines the split you are going to use
# (split files are called 'train.txt' and 'test.txt')
'''
- Here you should implement the logic for reading the splits files and accessing elements
- If the RAM size allows it, it is faster to store all data in memory
- PyTorch Dataset classes use indexes to read elements
- You should provide a way for the __getitem__ method to access the image-label pair
through the index
- Labels should start from 0, so for Caltech you will have lables 0...100 (excluding the background class)
'''
# Open file in read only mode and read all lines
file = open(self.split, "r")
lines = file.readlines()
# Filter out the lines which start with 'BACKGROUND_Google' as asked in the homework
self.elements = [i for i in lines if not i.startswith('BACKGROUND_Google')]
# Delete BACKGROUND_Google class from dataset labels
self.classes = sorted(os.listdir(os.path.join(self.root, "")))
self.classes.remove("BACKGROUND_Google")
def __getitem__(self, index):
'''
__getitem__ should access an element through its index
Args:
index (int): Index
Returns:
tuple: (sample, target) where target is class_index of the target class.
'''
img = Image.open(os.path.join(self.root, self.elements[index].rstrip()))
target = self.classes.index(self.elements[index].rstrip().split('/')[0])
image, label = img, target # Provide a way to access image and label via index
# Image should be a PIL Image
# label can be int
# Applies preprocessing when accessing the image
if self.transform is not None:
image = self.transform(image)
return image, label
def __len__(self):
'''
The __len__ method returns the length of the dataset
It is mandatory, as this is used by several other components
'''
# Provides a way to get the length (number of elements) of the dataset
length = len(self.elements)
return length
Whereas the preprocessing phase is done by this code:
# Define transforms for training phase
train_transform = transforms.Compose([transforms.Resize(256), # Resizes short size of the PIL image to 256
transforms.CenterCrop(224), # Crops a central square patch of the image
# 224 because torchvision's AlexNet needs a 224x224 input!
# Remember this when applying different transformations, otherwise you get an error
transforms.ToTensor(), # Turn PIL Image to torch.Tensor
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # Normalizes tensor with mean and standard deviation
])
# Define transforms for the evaluation phase
eval_transform = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
In the end this is the preparation of datasets and dataloader:
# Clone github repository with data
if not os.path.isdir('./Homework2-Caltech101'):
!git clone https://github.com/MachineLearning2020/Homework2-Caltech101.git
# Commands to execute when there is an error saying no file or directory related to ./Homework2-Caltech101/
# !rm -r ./Homework2-Caltech101/
# !git clone https://github.com/MachineLearning2020/Homework2-Caltech101.git
DATA_DIR = 'Homework2-Caltech101/101_ObjectCategories'
SPLIT_TRAIN = 'Homework2-Caltech101/train.txt'
SPLIT_TEST = 'Homework2-Caltech101/test.txt'
# 1 - Data preparation
myTrainDS = Caltech(DATA_DIR, split = SPLIT_TRAIN, transform=train_transform)
myTestDS = Caltech(DATA_DIR, split = SPLIT_TEST, transform=eval_transform)
print('My Train DS: {}'.format(len(myTrainDS)))
print('My Test DS: {}'.format(len(myTestDS)))
# 1 - Data preparation
myTrain_dataloader = DataLoader(myTrainDS, batch_size=BATCH_SIZE, shuffle=True, num_workers=4, drop_last=True)
myTest_dataloader = DataLoader(myTestDS, batch_size=BATCH_SIZE, shuffle=False, num_workers=4)
Okay now the two .txt files contain the lists of images we want to have in the train and test splits, so we have to get them from there, but that should have been done correctly. The thing is that when I approach my training phase (see code later) I am presented the error in the title. I already tried to add the following line in the transform function:
[...]
transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
after the centercrop, but it says that Image has no attribute repeat, so I'm kinda stuck.
The training code line which gives me the error is the following:
# Iterate over the dataset
for images, labels in myTrain_dataloader:
If needed, full error is:
RuntimeError Traceback (most recent call last)
<ipython-input-197-0e4710a9855d> in <module>()
47
48 # Iterate over the dataset
---> 49 for images, labels in myTrain_dataloader:
50
51 # Bring data over the device of choice
2 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self)
817 else:
818 del self._task_info[idx]
--> 819 return self._process_data(data)
820
821 next = __next__ # Python 2 compatibility
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
844 self._try_put_index()
845 if isinstance(data, ExceptionWrapper):
--> 846 data.reraise()
847 return data
848
/usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self)
383 # (https://bugs.python.org/issue2651), so we work around it.
384 msg = KeyErrorMessage(msg)
--> 385 raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-180-0b00b175e18c>", line 72, in __getitem__
image = self.transform(image)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 70, in __call__
img = t(img)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/transforms.py", line 175, in __call__
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 217, in normalize
tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: output with shape [1, 224, 224] doesn't match the broadcast shape [3, 224, 224]
I'm using Alexnet and the code I was provided is the following:
net = alexnet() # Loading AlexNet model
# AlexNet has 1000 output neurons, corresponding to the 1000 ImageNet's classes
# We need 101 outputs for Caltech-101
net.classifier[6] = nn.Linear(4096, NUM_CLASSES) # nn.Linear in pytorch is a fully connected layer
# The convolutional layer is nn.Conv2d
# We just changed the last layer of AlexNet with a new fully connected layer with 101 outputs
# It is mandatory to study torchvision.models.alexnet source code
| The first dimension of the tensor is the channel (color) dimension, so what your error means is that you are giving a grayscale picture (1 channel) while the data loader expects an RGB image (3 channels). You defined a pil_loader function that returns an image in RGB, but you are never using it.
So you have two options:
Work with the image in grayscale instead of RGB, which is cheaper computationally speaking.
Solution: in both the train and test transforms, change transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) to transforms.Normalize((0.5,), (0.5,)).
Make sure your image is in RGB. I don't know how your images are stored, but I guess you downloaded the dataset in grayscale. One thing you could try is using the pil_loader function you defined: change img = Image.open(os.path.join(self.root, self.elements[index].rstrip())) to img = pil_loader(os.path.join(self.root, self.elements[index].rstrip())) in your __getitem__ function.
Let me know how it goes!
| https://stackoverflow.com/questions/59218671/ |
Repeat specific columns of a tensor in Pytorch | I have a pytorch tensor X of size m x n and a list of nonnegative integers num_repeats of length n (assume sum(num_repeats)>0). Inside a forward() method, I want to create a tensor X_dup of size m x sum(num_repeats) where column i of X is repeated num_repeats[i] times. The tensor X_dup is to be used downstream in the forward() method so the gradient needs to be backpropogated correctly.
All solutions I could come up with require inplace operations (creating a new tensor and populating it by iterating over num_repeats), but if I understand correctly this won't preserve the gradient (please correct me if I'm wrong, I'm new to the whole Pytorch thing).
| Provided you're using PyTorch >= 1.1.0 you can use torch.repeat_interleave.
repeat_tensor = torch.tensor(num_repeats).to(X.device, torch.int64)
X_dup = torch.repeat_interleave(X, repeat_tensor, dim=1)
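A small worked example (assuming m=2, n=3 and num_repeats=[1, 0, 2]); gradients flow back to X through the repeat:
import torch

X = torch.arange(6.0).reshape(2, 3).requires_grad_()
X_dup = torch.repeat_interleave(X, torch.tensor([1, 0, 2]), dim=1)
print(X_dup)
# tensor([[0., 2., 2.],
#         [3., 5., 5.]], grad_fn=...)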
| https://stackoverflow.com/questions/59226963/ |
Implementing LeNet in Pytorch | Sorry if this question is incredibly basic. I feel like there is a wealth of resources online, but most of them are half-complete or skip over the details that I want to know.
I am trying to implement LeNet with Pytorch for practice.
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html
How come in this examples and many examples online, they define the convolutional layers and the fc layers in init, but the subsampling and activation functions in forward?
What is the purpose of using torch.nn.functional for some functions, and torch.nn for others? For example, you have convolution with torch.nn (https://pytorch.org/docs/stable/nn.html#conv1d) and convolution with torch.nn.functional (https://pytorch.org/docs/stable/nn.functional.html#conv1d). Why choose one or the other?
Let's say I want to try different image sizes, like 28x28 (MNIST). The tutorial recommends I resize MNIST. Is there a way to instead change the values of LeNet? What happens if I don't change them?
What is the purpose of num_flat_features? If you wanted to flatten the features, couldn't you just do x = x.view(-1, 16*5*5)?
|
How come in this examples and many examples online, they define the
convolutional layers and the fc layers in init, but the subsampling
and activation functions in forward?
Any layer with trainable parameters should be defined in __init__. Subsampling, certain activations, dropout, etc.. don't have any trainable parameters so can be defined either in __init__ or used directly via the torch.nn.functional interface during forward.
What is the purpose of using torch.nn.functional for some functions, and torch.nn for others?
The torch.nn.functional functions are the actual functions that are used at the heart of the majority of torch.nn layers, they call into C++ compiled code. For example nn.Conv2d subclasses nn.Module, as should any custom layer or model which contains trainable parameters. The class handles registering parameters and encapsulates some other necessary functionality required for training and testing. During forward it actually uses nn.functional.conv2d to apply the convolution operation. As mentioned in the first question, when performing a parameterless operation like ReLU there's effectively no difference between using the nn.ReLU class and the nn.functional.relu function.
The reason they are provided is they give some freedom to do unconventional things. For example in this answer which I wrote the other day, providing a solution without nn.functional.conv2d would have been difficult.
Let's say I want to try different image sizes, like 28x28 (MNIST). The
tutorial recommends I resize MNIST. Is there a way to instead change
the values of LeNet? What happens if I don't change them?
There's no obvious way to change an existing, trained model to support different image sizes. The size of the input to the linear layer is necessarily fixed and the number of features at that point in the model is generally determined by the size of the input to the network. If the size of the input differs from the size that the model was designed for then when the data progresses to the linear layers it will have the wrong number of elements and cause the program to crash. Some models can handle a range of input sizes, usually by using something like an nn.AdaptiveAvgPool2d layer before the linear layer to ensure the input shape to the linear layer is always the same. Even so, if the input image size is too small then the downsampling and/or pooling operations in the network will cause the feature maps to vanish at some point, causing the program to crash.
What is the purpose of num_flat_features? If you wanted to flatten the
features, couldn't you just do x = x.view(-1, 16*5*5)?
When you define the linear layer you need to tell it how large the weight matrix is. A linear layer's weights are simply an unconstrained matrix (and bias vector). The shape of the weight matrix therefore is determined by the input shape, but you don't know the input shape before you run forward so it needs to be provided as an additional parameter (or hard coded) when you initialize the model.
To get to the actual question. Yes, during forward you could simply use
x = x.view(-1, 16*5*5)
Better yet, use
x = torch.flatten(x, start_dim=1)
This tutorial was written before the .flatten function was added to the library. The authors effectively just wrote their own flatten functionality which could be used regardless of the shape of x. This was probably so you had some portable code that could be used in your model without hard coding sizes. From a programming perspective it's nice to generalize such things since it means you wouldn't have to worry about changing those magic numbers if you decide to change part of the model (though this concern didn't appear to extend to the initialization).
| https://stackoverflow.com/questions/59231709/ |
Pytorch: LSTM input and output dimentions | I am a bit confused about LSTM input and output dimensions:
Here is my network:
Intent_LSTM(
(embedding): Embedding(41438, 400)
(lstm): LSTM(400, 512, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.5, inplace=False)
(fc): Linear(in_features=512, out_features=3, bias=True)
).
Here the shape of my embeddings is [50, 150, 400], 50 being the batch size, 150 the sequence length of my input, and 400 my embedding dimension. I am feeding this into my LSTM. But when I was going through the pytorch documentation, it states that the input has to be in the form:
input of shape (seq_len, batch, input_size)
So should the input be converted to this format ([150, 50, 400])?
If yes how do I do that?
Here is my forward pass:
def forward(self, x):
"""
Perform a forward pass
"""
batch_size = x.size(0)
x = x.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
# reshape to be batch_size first
out = out.view(batch_size, -1,3)
#print("sig_out",sig_out.shape)
out = out[:, -1,:] # get last batch of labels
# return last sigmoid output and hidden state
return out
| You can set the constructor parameter batch_first=True to have the batch dimension first. Note that your model above already does this, so with batch_first=True the LSTM expects input of shape (batch, seq_len, input_size), and your [50, 150, 400] tensor is already in the right layout.
See the docs for reference.
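A minimal sketch with the shapes from the question:
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=400, hidden_size=512, num_layers=2, batch_first=True)
embeds = torch.randn(50, 150, 400)  # (batch, seq_len, emb_dim)
out, (h, c) = lstm(embeds)
print(out.shape)                    # torch.Size([50, 150, 512])
# without batch_first=True the input would need to be (seq_len, batch, emb_dim),
# obtainable via embeds.permute(1, 0, 2)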
| https://stackoverflow.com/questions/59233832/ |
How to add parameters in module class in pytorch custom model? | I tried to find the answer but I can't.
I make a custom deep learning model using pytorch. For example,
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.nn_layers = nn.ModuleList()
self.layer = nn.Linear(2,3).double()
torch.nn.init.xavier_normal_(self.layer.weight)
self.bias = torch.nn.Parameter(torch.randn(3))
self.nn_layers.append(self.layer)
def forward(self, x):
activation = torch.tanh
output = activation(self.layer(x)) + self.bias
return output
If I print
model = Net()
print(list(model.parameters()))
it does not contain model.bias, so
optimizer = torch.optim.Adam(model.parameters()) does not update model.bias.
How can I go through this?
Thanks!
| You need to register your parameters:
self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3)))
Update:
In more recent versions of PyTorch, you no longer need to explicitly register_parameter, it's enough to set a member of your nn.Module with nn.Parameter to "notify" pytorch that this variable should be treated as a trainable parameter:
self.bias = torch.nn.Parameter(torch.randn(3))
Please note that if you want to have more complex data structures of parameters (e.g., lists, etc.) you should use dedicated containers like torch.nn.ParameterList or torch.nn.ParameterDict.
| https://stackoverflow.com/questions/59234238/ |
pysyft torch.jit.script RuntimeError: undefined value _Reduction | I was trying to reproduce PySyft's Asynchronous-federated-learning-on-MNIST from its advanced examples, where @torch.jit.script is used before the loss function. I am getting this error and have no clue what it is about:
RuntimeError: undefined value _Reduction: at
/home/ab/.virtualenvs/aic/lib/python3.6/site-packages/syft/generic/frameworks/hook/hook.py:1829:20
reduction = _Reduction.legacy_get_string(size_average, reduce)
It is actually caused by these lines
@torch.jit.script
def loss_fn(pred, target):
return F.nll_loss(input=pred, target=target)
train_config = sy.TrainConfig(
model=traced_model,
loss_fn=loss_fn,
batch_size=batch_size,
shuffle=True,
max_nr_batches=max_nr_batches,
epochs=1,
optimizer="SGD",
optimizer_args={"lr": lr},
)
| Writing this answer so it might help others. It turns out that @torch.jit.script needs to be at the top of the file (right after the imports), and I had it after two function definitions.
Moving it to the top worked.
| https://stackoverflow.com/questions/59239818/ |
pytorch cnn model stop at loss.backward() without any prompt? | My aim is to build a five-category text classifier.
I am running BERT fine-tuning with a CNN-based model, but my project stops at loss.backward() without any prompt in cmd.
My program runs successfully with RNN-based models such as LSTM and RCNN.
But when I run a CNN-based model, a strange bug appears.
My cnn model code:
import torch
import torch.nn as nn
import torch.nn.functional as F
# from ..Models.Conv import Conv1d
from transformers.modeling_bert import BertPreTrainedModel, BertModel
n_filters = 200
filter_sizes = [2,3,4]
class BertCNN(BertPreTrainedModel):
def __init__(self, config):
super(BertPreTrainedModel, self).__init__(config)
self.num_filters = n_filters
self.filter_sizes = filter_sizes
self.bert = BertModel(config)
for param in self.bert.parameters():
param.requires_grad = True
self.convs = nn.ModuleList(
[nn.Conv2d(1, self.num_filters, (k, config.hidden_size))
for k in self.filter_sizes])
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.fc_cnn = nn.Linear(self.num_filters *
len(self.filter_sizes), config.num_labels)
def conv_and_pool(self, x, conv):
x = F.relu(conv(x)).squeeze(3)
x = F.max_pool1d(x, x.size(2)).squeeze(2)
return x
def forward(self, input_ids,
attention_mask=None, token_type_ids=None, head_mask=None):
outputs = self.bert(input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
head_mask=head_mask)
encoder_out, text_cls = outputs
out = encoder_out.unsqueeze(1)
out = torch.cat([self.conv_and_pool(out, conv)
for conv in self.convs], 1)
out = self.dropout(out)
out = self.fc_cnn(out)
return out
My train code:
for step, batch in enumerate(data):
self.model.train()
batch = tuple(t.to(self.device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
print("input_ids, input_mask, segment_ids, label_ids SIZE: \n")
print(input_ids.size(), input_mask.size(),segment_ids.size(), label_ids.size())
# torch.Size([2, 80]) torch.Size([2, 80]) torch.Size([2, 80]) torch.Size([2])
logits = self.model(input_ids, segment_ids, input_mask)
print("logits and label ids size: ",logits.size(), label_ids.size())
# torch.Size([2, 5]) torch.Size([2])
loss = self.criterion(output=logits, target=label_ids)
if len(self.n_gpu) >= 2:
loss = loss.mean()
if self.gradient_accumulation_steps > 1:
loss = loss / self.gradient_accumulation_steps
if self.fp16:
with amp.scale_loss(loss, self.optimizer) as scaled_loss:
scaled_loss.backward()
clip_grad_norm_(amp.master_params(self.optimizer), self.grad_clip)
else:
loss.backward() # debugging shows the program stops at this line without any error prompt
I changed the batch size to 1;
the bug still occurred.
the step1 logits :
logits tensor([[ 0.8831, -0.0368, -0.2206, -2.3484, -1.3595]], device='cuda:1',
grad_fn=)
the step1 loss:
tensor(1.5489, device='cuda:1', grad_fn=<NllLossBackward>)
but why can't loss.backward()?
| I tried to run my program on a Linux platform, and it ran successfully.
Therefore, it is very likely that the hang is caused by the operating system.
Previous OS: Windows 10.
| https://stackoverflow.com/questions/59245714/ |
The output of numerical calculation of BCEWithLogitsLoss for PyTorch example | When I looked the example code of BCEWithLogitsLoss in PyTorch Docs. I am confusing about the output result of loss function and formula.
PyTorch Example:
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
input : tensor([0.4764, -2.4063, 0.1563], requires_grad=True)
target: tensor([0., 1., 1.])
output: tensor(1.3567, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
But according to the formula by showing:
The output of the loss function should have shape (3,) instead of a single value, since the shapes of input and target are both (3,). I was thinking that the output might be the sum of the Ln terms or something similar, but I still have no idea. Could someone help me explain that?
As @Dishin H Goyani reminded me, the default reduction is 'mean'. I did a simple test.
>>> target_n = target.numpy()
>>> input_n = input.detach().numpy()
>>> def sigmoid(array):return 1/(1+np.exp(-array))
>>> output_n = -1*(target_n*np.log(sigmoid(input_n))+(1-target_n)*np.log(1-sigmoid(input_n)))
output_n : array([0.95947516, 2.4926252 , 0.61806685], dtype=float32)
>>> np.mean(output_n)
1.3567224
The results match.
As you can see, the default weight wn is 1.
| The reduction parameter's default value is 'mean' in BCEWithLogitsLoss.
The output is mean - the sum of the output will be divided by the number of elements in the output.
Read Doc here for more detail:
Parameters
...
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'.
'none': no reduction will be applied,
'mean': the sum of the output will be divided by the number of elements in the output,
'sum': the output will be summed.
Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
...
| https://stackoverflow.com/questions/59246416/ |
RuntimeError: module must have its parameters and buffers on device cuda:1 (device_ids[0]) but found one of them on device: cuda:2 | I have 4 GPUs (0,1,2,3) and I want to run one Jupyter notebook on GPU 2 and another one on GPU 0. Thus, after executing,
export CUDA_VISIBLE_DEVICES=0,1,2,3
for the GPU 2 notebook I do,
device = torch.device( f'cuda:{2}' if torch.cuda.is_available() else 'cpu')
device, torch.cuda.device_count(), torch.cuda.is_available(), torch.cuda.current_device(), torch.cuda.get_device_properties(1)
and after creating a new model or loading one,
model = nn.DataParallel( model, device_ids = [ 0, 1, 2, 3])
model = model.to( device)
Then, when I start training the model, I get,
RuntimeError Traceback (most recent call last)
<ipython-input-18-849ffcb53e16> in <module>
46 with torch.set_grad_enabled( phase == 'train'):
47 # [N, Nclass, H, W]
---> 48 prediction = model(X)
49 # print( prediction.shape, y.shape)
50 loss_matrix = criterion( prediction, y)
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
~/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
144 raise RuntimeError("module must have its parameters and buffers "
145 "on device {} (device_ids[0]) but found one of "
--> 146 "them on device: {}".format(self.src_device_obj, t.device))
147
148 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:2
| DataParallel requires every input tensor be provided on the first device in its device_ids list.
It basically uses that device as a staging area before scattering to the other GPUs and it's the device where final outputs are gathered before returning from forward. If you want device 2 to be the primary device then you just need to put it at the front of the list as follows
model = nn.DataParallel(model, device_ids = [2, 0, 1, 3])
model.to(f'cuda:{model.device_ids[0]}')
After which all tensors provided to model should be on the first device as well.
x = ... # input tensor
x = x.to(f'cuda:{model.device_ids[0]}')
y = model(x)
| https://stackoverflow.com/questions/59249563/ |
What does required_grad do in PyTorch? (Not requires_grad) | I have been trying to carry out transfer learning on a multiclass classification task using resnet as my backbone.
In many tutorials, it was stated that it would be wise to try and train only the last layer (usually a fully connected layer) again, while freezing the other layers. The freezing would be done as so:
for param in model.parameters():
param.requires_grad = False
However, I just realized that all of my layers were actually not frozen, and while checking my code I realized I had made a typo:
for param in model.parameters():
param.required_grad = False
That is, I wrote required_grad instead of requires_grad.
I can't seem to find information on required_grad - what it is, nor what it does. The only thing I found out was that it did not change the requires_grad flag, and that there is a separate required_grad flag which is set to False instead.
Can anyone explain what required_grad does? Have I been 'not freezing' my other layers all this time?
| Ok, this was really silly.
for param in model.parameters():
param.required_grad = False
In this case, a new required_grad attribute is created due to the typo I made.
For example, even the following wouldn't invoke an error:
for param in model.parameters():
param.what_in_the_world = False
And all the parameters of the model would now have a what_in_the_world attribute.
I hope no one else wastes their time due to this.
| https://stackoverflow.com/questions/59260854/ |
Memory leak in Pytorch: object detection | I am working on the object detection tutorial on PyTorch. The original tutorial works fine with the few epochs given. I expanded it to more epochs and encountered an out of memory error.
I tried to debug it and found something interesting. This is the tool I am using:
def debug_gpu():
# Debug out of memory bugs.
tensor_list = []
for obj in gc.get_objects():
try:
if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
tensor_list.append(obj)
except:
pass
print(f'Count of tensors = {len(tensor_list)}.')
And I used it to monitor the memory of training one epoch:
def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq):
...
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
# inference + backward + optimization
debug_gpu()
The output is something like this:
Count of tensors = 414.
Count of tensors = 419.
Count of tensors = 424.
Count of tensors = 429.
Count of tensors = 434.
Count of tensors = 439.
Count of tensors = 439.
Count of tensors = 444.
Count of tensors = 449.
Count of tensors = 449.
Count of tensors = 454.
As you can see, the count of tensors tracked by the garbage collector increases constantly.
Relevant files to execute can be found here.
I have two questions:
1. What is keeping the garbage collector from releasing these tensors?
2. What should I do with the out of memory error?
|
How did I identify the error?
With the help of tracemalloc: I took two snapshots several hundred iterations apart. The tracemalloc tutorial makes this easy to follow.
What causes the error?
rpn.anchor_generator._cache in torchvision is a Python dict that caches the anchor grids. It is an attribute of the detection model, and its size keeps growing as training proceeds.
How to solve it?
An easy workaround is to put model.rpn.anchor_generator._cache.clear() at the end of the training iterations.
I have submitted a fix to PyTorch, so you may not see the OOM error from torchvision 0.5 onward.
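A sketch of the workaround in a typical training loop (assuming the tutorial's train_one_epoch helper from the question; num_epochs is a placeholder):
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    # clear the cached anchor grids so the dict cannot grow unboundedly
    model.rpn.anchor_generator._cache.clear()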
| https://stackoverflow.com/questions/59265818/ |
Does calling forward() on a model in pytorch require extra gpu memory after already having loaded the model and data in gpu memory? | I can load the model and a data sample in gpu memory, but when I call forward on the model with the sample, it gives a CUDA out of memory error.
I'm sure the model and data have been loaded, as my code is structured as follows (pseudocode):
model = Model()
sample = load_sample()
sleep(5) # to check memory usage with nvidia-smi
print('before forward')
model(sample)
print('after forward')
"before forward" gets printed, but "after forward" does not.
I assumed all the necessary memory for a forward pass gets allocated during construction of the model, but I don't know how else this error can happen. I also cannot find it on Google.
Python: 3.6.9
PyTorch: 1.2.0
| It is not possible to determine the amount of space required to store the activations before runtime and hence GPU memory increases. Pytorch maintains a dynamic computation graph and hence the order of computations is not at all known before runtime. When you declare/initialize the model, only __init__ is called and model parameters are initialized. To figure out the graph one would need to look at the forward call and maybe also loss function (if it is not within forward call).
Let's say we can look at the forward call before running the model but still the batch size is unknown and hence memory can't be pre-allocated for activations.
Even if the batch size is known, there could be other unknowns like the sequence length (for an RNN) or the episode length (in RL) that make it hard to pre-allocate memory for activations. And even if we accounted for all this at declaration time, pytorch naturally allows for-loops, which makes it almost impossible to pre-allocate space for activations; hence GPU memory can increase during runtime depending on the use case.
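As a practical consequence: if only inference is needed, disabling autograd avoids keeping the activations around for a backward pass and noticeably reduces the memory the forward pass needs (a sketch using the names from the question's pseudocode):
model.eval()
with torch.no_grad():
    out = model(sample)  # intermediate activations are freed as soon as consumed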
| https://stackoverflow.com/questions/59268952/ |
Type mismatch in pytorch | I am trying my hands on PyTorch.
I am getting this error:
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight' in call to _thnn_conv2d_forward
This is my code(shamelessly copied from a online tutorial):
class Net(Module):
def __init__(self):
super(Net,self).__init__()
self.cnn_layers = Sequential(
Conv2d(1,4,kernel_size=3,stride=1,padding=1),
BatchNorm2d(4),
ReLU(inplace=True),
MaxPool2d(kernel_size=2,stride=2),
Conv2d(4,4,kernel_size=3,stride=1,padding=1),
BatchNorm2d(4),
ReLU(inplace=True),
MaxPool2d(kernel_size=2,stride=2)
)
self.linear_layers = Sequential(
Linear(900,10)
)
def forward(self,x):
# self.weights = self.weights.double()
x = self.cnn_layers(x)
x = x.view(x.size(0),-1)
x = self.linear_layers(x)
return x
tdata = dt.Data("train")
train_x = torch.from_numpy(tdata.get_train()[0].reshape(925,1,300,300))
train_y = torch.from_numpy(tdata.get_train()[1].astype(int))
val_x = torch.from_numpy(tdata.get_test()[0].reshape(102,1,300,300))
val_y = torch.from_numpy(tdata.get_test()[1].astype(int))
print(val_y.shape)
plt.imshow(tdata.get_train()[0][100],cmap='gray')
plt.show()
model = Net()
# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()
# checking if GPU is available
if torch.cuda.is_available():
model = model.cuda()
criterion = criterion.cuda()
print(model)
def train(epoch):
model.train()
tr_loss = 0
# getting the training set
x_train, y_train = Variable(train_x.double()), Variable(train_y.double())
# getting the validation set
x_val, y_val = Variable(val_x), Variable(val_y)
# converting the data into GPU format
if torch.cuda.is_available():
x_train = x_train.cuda()
y_train = y_train.cuda()
x_val = x_val.cuda()
y_val = y_val.cuda()
# clearing the Gradients of the model parameters
optimizer.zero_grad()
# prediction for training and validation set
output_train = model(x_train.double())
output_val = model(x_val)
# computing the training and validation loss
loss_train = criterion(output_train, y_train)
loss_val = criterion(output_val, y_val)
train_losses.append(loss_train)
val_losses.append(loss_val)
# computing the updated weights of all the model parameters
loss_train.backward()
optimizer.step()
tr_loss = loss_train.item()
if epoch%2 == 0:
# printing the validation loss
print('Epoch : ',epoch+1, '\t', 'loss :', loss_val)
n_epochs = 25
# empty list to store training losses
train_losses = []
# empty list to store validation losses
val_losses = []
# training the model
for epoch in range(n_epochs):
train(epoch)
tdata.get_train() and tdata.get_test() return a tuple (numpy(dtype='double'), numpy(dtype='int')).
I think the weights are an internal data structure, so their type should be adjusted by PyTorch itself. What is the problem here?
| You may just add .to(torch.float32) to your train_x and val_x tensors
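For example, with the tensors from the question (a sketch; .float() is equivalent):
train_x = torch.from_numpy(tdata.get_train()[0].reshape(925, 1, 300, 300)).to(torch.float32)
val_x = torch.from_numpy(tdata.get_test()[0].reshape(102, 1, 300, 300)).to(torch.float32)
# the .double() casts inside train() can then be dropped, so the data matches
# the model's float32 weights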
| https://stackoverflow.com/questions/59272784/ |
How to create a list of modulelists | Is it ok to create a python-list of PyTorch modulelists?
For example, I want to have a few Conv1d modules in one layer and then another layer with different Conv1d modules; in each layer I need to apply a different manipulation to the output depending on the layer number.
This way:
class test(nn.Module):
def __init__(...):
self.modulelists = []
for i in range(4):
self.modulelists.append(nn.ModuleList([nn.Conv1d(10, 10, kernel_size=5) for _ in range(5)]))
or this way:
class test(nn.Module):
def __init__(...):
self.modulelists = nn.ModuleList()
for i in range(4):
self.modulelists.append(nn.ModuleList([nn.Conv1d(10, 10, kernel_size=5) for _ in range(5)]))
Thanks
| You need to register all sub-modules of your net properly so that pytorch can have access to their parameters, buffers etc.
This can be done only if you use proper containers.
If you store sub-modules in a simple pythonic list pytorch will have no idea there are sub modules there and they will be ignored.
So, if you use a simple pythonic list to store the sub-modules, then when you call, for instance, model.cuda(), the parameters of the sub-modules in the list will not be transferred to the GPU but will remain on the CPU; and if you call model.parameters() to pass all trainable parameters to an optimizer, the sub-modules' parameters will not be detected by pytorch and thus the optimizer will not "see" them. Therefore the second version in your question, which nests everything in nn.ModuleList, is the correct one.
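A minimal sketch verifying that all sub-modules get registered with the nested nn.ModuleList version:
import torch.nn as nn

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        self.modulelists = nn.ModuleList(
            nn.ModuleList(nn.Conv1d(10, 10, kernel_size=5) for _ in range(5))
            for _ in range(4))

model = Test()
print(len(list(model.parameters())))  # 40: weight + bias for each of the 20 convs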
| https://stackoverflow.com/questions/59277388/ |
Using Torchvision ImageFolder with Test Set | I am trying to solve the Dogs-vs-Cats challenge on Kaggle using the Sample Notebook that has been provided in the Udacity course. I have rearranged the files into two folder dogs/ and cats/ in the train/ directory so that the ImageFolder class can pick up the categories, but I don't know what to do in the test folder? I don't have the labels ready.
Do I just not use the ImageFolder API (seems that the course used it, so it should be usable, and obviously very convenient), or is there some option to use it when the classes are not already known. I could not find anything in this vein on the official documentation, but it should be possible seeing the course solution does it that way. Thanks for any help.
| Usually, in the context of training networks, the "test" set is actually a "validation" set: it's a subset of the labeled examples that the model does not train on but is only evaluated on. This validation set is used for tuning meta parameters (e.g., number of epochs, learning rate, batch size etc.).
Therefore, despite the fact that the validation ("test") set is not used for actual SGD training, you do have its labels and they are used to estimate the generalization error of the trained model.
Since you usually do have the labels for this set, you can read it using ImageFolder class same as the training set.
However, if you have a test set that has no labels at all, you can still use the ImageFolder class to handle the set. All you need is to create a dummy subfolder to represent a "label" for the set: ImageFolder assumes the images are stored in subfolders based on their labels.
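A sketch, assuming the unlabeled test images are all moved under a single dummy class folder (e.g. test/unknown/):
from torchvision import datasets, transforms

# directory layout: test/unknown/xxx.jpg, test/unknown/yyy.jpg, ...
test_set = datasets.ImageFolder('test', transform=transforms.ToTensor())
image, _ = test_set[0]  # every sample carries the same dummy label; ignore it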
| https://stackoverflow.com/questions/59280880/ |
Saving PyTorch model with no access to model class code | How can I save a PyTorch model without a need for the model class to be defined somewhere?
Disclaimer:
In Best way to save a trained model in PyTorch?, there are no solutions (or a working solution) for saving the model without access to the model class code.
| If you plan to do inference with the Pytorch library available (i.e. Pytorch in Python, C++, or other platforms it supports) then the best way to do this is via TorchScript.
I think the simplest thing is to use trace = torch.jit.trace(model, typical_input) and then torch.jit.save(trace, path). You can then load the traced model with torch.jit.load(path).
Here's a really simple example. We make two files:
train.py :
import torch
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(4, 4)
def forward(self, x):
x = torch.relu(self.linear(x))
return x
model = Model()
x = torch.FloatTensor([[0.2, 0.3, 0.2, 0.7], [0.4, 0.2, 0.8, 0.9]])
with torch.no_grad():
print(model(x))
traced_cell = torch.jit.trace(model, (x))
torch.jit.save(traced_cell, "model.pth")
infer.py :
import torch
x = torch.FloatTensor([[0.2, 0.3, 0.2, 0.7], [0.4, 0.2, 0.8, 0.9]])
loaded_trace = torch.jit.load("model.pth")
with torch.no_grad():
print(loaded_trace(x))
Running these sequentially gives results:
python train.py
tensor([[0.0000, 0.1845, 0.2910, 0.2497],
[0.0000, 0.5272, 0.3481, 0.1743]])
python infer.py
tensor([[0.0000, 0.1845, 0.2910, 0.2497],
[0.0000, 0.5272, 0.3481, 0.1743]])
The results are the same, so we are good. (Note that the result will be different each time here due to randomness of the initialisation of the nn.Linear layer).
TorchScript provides for much more complex architectures and graph definitions (including if statements, while loops, and more) to be saved in a single file, without needing to redefine the graph at inference time. See the docs (linked above) for more advanced possibilities.
| https://stackoverflow.com/questions/59287728/ |
Neural network versus random forest performance discrepancy | I want to run some experiments with neural networks using PyTorch, so I tried a simple one as a warm-up exercise, and I cannot quite make sense of the results.
The exercise attempts to predict the rating of 1000 TPTP problems from various statistics about the problems, such as number of variables, maximum clause length, etc. The data file https://github.com/russellw/ml/blob/master/test.csv is quite straightforward: 1000 rows, with the final column being the rating. It started off with some tens of input columns, all the numbers scaled to the range 0-1; I progressively deleted features to see if the result still held, and it does, all the way down to one input column (the others are in previous versions in the Git history).
I started off using separate training and test sets, but have set aside the test set for the moment, because the question about whether training performance generalizes to testing, doesn't arise until training performance has been obtained in the first place.
Simple linear regression on this data set has a mean squared error of about 0.14.
I implemented a simple feedforward neural network, code in https://github.com/russellw/ml/blob/master/test_nn.py and copied below, that after a couple hundred training epochs, also has an mean squared error of 0.14.
So I tried changing the number of hidden layers from 1 to 2 to 3, using a few different optimizers, tweaking the learning rate, switching the activation functions from relu to tanh to a mixture of both, increasing the number of epochs to 5000, increasing the number of hidden units to 1000. At this point, it should easily have had the ability to just memorize the entire data set. (At this point I'm not concerned about overfitting. I'm just trying to get the mean squared error on training data to be something other than 0.14.) Nothing made any difference. Still 0.14. I would say it must be stuck in a local optimum, but that's not supposed to happen when you've got a couple million weights; it's supposed to be practically impossible to be in a local optimum for all parameters simultaneously. And I do get slightly different sequences of numbers on each run. But it always converges to 0.14.
Now the obvious conclusion would be that 0.14 is as good as it gets for this problem, except that it stays the same even when the network has enough memory to just memorize all the data. But the clincher is that I also tried a random forest, https://github.com/russellw/ml/blob/master/test_rf.py
... and the random forest has a mean squared error of 0.01 on the original data set, degrading gracefully as features are deleted, still 0.05 on the data with just one feature.
Nowhere in the lore of machine learning is it said 'random forests vastly outperform neural nets', so I'm presumably doing something wrong, but I can't see what it is. Maybe it's something as simple as just missing a flag or something you need to set in PyTorch. I would appreciate it if someone could take a look.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
# data
df = pd.read_csv("test.csv")
print(df)
print()
# separate the output column
y_name = df.columns[-1]
y_df = df[y_name]
X_df = df.drop(y_name, axis=1)
# numpy arrays
X_ar = np.array(X_df, dtype=np.float32)
y_ar = np.array(y_df, dtype=np.float32)
# torch tensors
X_tensor = torch.from_numpy(X_ar)
y_tensor = torch.from_numpy(y_ar)
# hyperparameters
in_features = X_ar.shape[1]
hidden_size = 100
out_features = 1
epochs = 500
# model
class Net(nn.Module):
def __init__(self, hidden_size):
super(Net, self).__init__()
self.L0 = nn.Linear(in_features, hidden_size)
self.N0 = nn.ReLU()
self.L1 = nn.Linear(hidden_size, hidden_size)
self.N1 = nn.Tanh()
self.L2 = nn.Linear(hidden_size, hidden_size)
self.N2 = nn.ReLU()
self.L3 = nn.Linear(hidden_size, 1)
def forward(self, x):
x = self.L0(x)
x = self.N0(x)
x = self.L1(x)
x = self.N1(x)
x = self.L2(x)
x = self.N2(x)
x = self.L3(x)
return x
model = Net(hidden_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
# train
print("training")
for epoch in range(1, epochs + 1):
# forward
output = model(X_tensor)
cost = criterion(output, y_tensor)
# backward
optimizer.zero_grad()
cost.backward()
optimizer.step()
# print progress
if epoch % (epochs // 10) == 0:
print(f"{epoch:6d} {cost.item():10f}")
print()
output = model(X_tensor)
cost = criterion(output, y_tensor)
print("mean squared error:", cost.item())
| Can you please print the shape of your input?
I would say check these things first:
that your target y has the shape (-1, 1); I don't know if pytorch throws an error in this case, but you can use y.reshape(-1, 1) if it isn't 2-dim (see the sketch after this list)
your learning rate is high. Usually when using Adam the default value is good enough, or simply try to lower your learning rate; 0.1 is a high value for a learning rate to start with
place the optimizer.zero_grad at the first line inside the for loop
normalize/standardize your data ( this is usually good for NNs )
remove outliers in your data (my opinion: I think this can't affect Random forest so much but it can affect NNs badly)
use cross validation (maybe skorch can help you here. It's a scikit learn wrapper for pytorch and easy to use if you know keras)
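A sketch of the first two points applied to the question's code (reusing its variables): the (N,) target silently broadcasts against the (N, 1) network output inside MSELoss, which distorts the loss.
y_ar = np.array(y_df, dtype=np.float32).reshape(-1, 1)  # make the target 2-dim
y_tensor = torch.from_numpy(y_ar)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lower learning rate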
Notice that a random forest regressor or any other regressor can outperform neural nets in some cases. There are some fields where neural nets are the heroes, like image classification or NLP, but you need to be aware that a simple regression algorithm can outperform them, usually when your data is not big enough.
| https://stackoverflow.com/questions/59288733/ |
Masking and computing loss for a padded batch sent through an RNN with a linear output layer in pytorch | Although a typical use case, I can't find one simple and clear guide on what is the canonical way to compute loss on a padded minibatch in pytorch, when sent through an RNN.
I think a canonical pipeline could be:
1) The pytorch RNN expects a padded batch tensor of shape: (max_seq_len, batch_size, emb_size)
2) So we give an Embedding layer for example this tensor:
tensor([[1, 1],
[2, 2],
[3, 9]])
9 is the padding index. Batch size is 2. The Embedding layer will make it to be of shape (max_seq_len, batch_size, emb_size). The sequences in the batch are in descending order, so we can pack it.
3) We apply pack_padded_sequence, we apply the RNN, finally we apply pad_packed_sequence. We have at this point (max_seq_len, batch_size, hidden_size)
4) Now we apply the linear output layer on the result and let's say the log_softmax. So at the end we have a tensor for a batch of scores of shape: (max_seq_len, batch_size, linear_out_size)
How should I compute the loss from here, masking out the padded part (with an arbitrary target)? Thanks!
| I think the PyTorch Chatbot Tutorial might be instructive for you.
Basically, you calculate the mask of valid output values (paddings are not valid), and use that to calculate the loss for only those values.
See the outputVar and maskNLLLoss methods on the tutorial page. For your convenience I copied the code here, but you really need to see it in context of all the code.
# Returns padded target sequence tensor, padding mask, and max target length
def outputVar(l, voc):
indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l]
max_target_len = max([len(indexes) for indexes in indexes_batch])
padList = zeroPadding(indexes_batch)
mask = binaryMatrix(padList)
mask = torch.BoolTensor(mask)
padVar = torch.LongTensor(padList)
return padVar, mask, max_target_len
def maskNLLLoss(inp, target, mask):
nTotal = mask.sum()
crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1)).squeeze(1))
loss = crossEntropy.masked_select(mask).mean()
loss = loss.to(device)
return loss, nTotal.item()
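Alternatively, if your targets use a dedicated padding index, PyTorch's built-in losses can do the masking for you via ignore_index. A minimal sketch, assuming scores of shape (max_seq_len, batch_size, vocab_size) and targets of shape (max_seq_len, batch_size) from the pipeline described in the question, with 9 as the padding index:
import torch.nn as nn

criterion = nn.NLLLoss(ignore_index=9)  # padded positions contribute nothing to the loss
loss = criterion(scores.view(-1, scores.size(-1)), targets.view(-1))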
| https://stackoverflow.com/questions/59292708/ |
Can I have multiple labels for a single image in an image classification scenario | If I am building a model where I need to predict the vehicle type, its color, and its make, can I use all the labels for a single image and build my model around it?
Like for a single image of a vehicle which is a car (car1.jpg) will have labels like - Sedan(Make), Blue(Color) and Car(Type of vehicle). Can I make a single model for this or I will have to make 3 separate models for this problem.
| You can have a single model with multiple outputs. As you seem interested in image processing, detection models for example (like SSD, R-FCN, etc.) have multiple outputs: one for classes, one for box coordinates. Take a look at page 3 of this article for the "feature extractor"/"classifier" split.
In fact, you will have a first common part of your model (with mostly the convolution layers to extract your features).
Deeper in your model, you will have separate parts, one for each kind of prediction.
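A rough sketch of such a model with three heads for the vehicle example (the layer sizes and class counts are illustrative assumptions):
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self, n_types=5, n_colors=10, n_makes=20):
        super().__init__()
        # shared feature extractor (the common part)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # one head per prediction
        self.type_head = nn.Linear(16, n_types)
        self.color_head = nn.Linear(16, n_colors)
        self.make_head = nn.Linear(16, n_makes)
    def forward(self, x):
        h = self.features(x)
        return self.type_head(h), self.color_head(h), self.make_head(h)

At training time you would compute one cross-entropy loss per head and backpropagate their sum.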
| https://stackoverflow.com/questions/59307823/ |
difference between class name vs self calling in super() | Don't the class name NeuralNet and the self keyword passed in the super() call mean the same thing? -
super(NeuralNet,self).__init__() # init super
here is the code snippet from example:
class NeuralNet(nn.Module):
def __init__(self, use_batch_norm, input_size=784, hidden_dim=256, output_size=10):
"""
Creates a PyTorch net using the given parameters.
"""
super(NeuralNet, self).__init__() # init super
# continues code
| Given your question, I kindly but very strongly suggest you do the full official Python tutorial.
And no, NeuralNet and self are NOT the same thing. The first is the NeuralNet class, the second is the current NeuralNet instance ("current": the one on which the method has been called).
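A minimal illustration of the difference:
class NeuralNet:
    def __init__(self):
        # self is the instance being built; NeuralNet is the class itself
        print(self is NeuralNet)        # False
        print(type(self) is NeuralNet)  # True

net = NeuralNet()

Note that in Python 3 you can simply write super().__init__() with no arguments; the explicit super(NeuralNet, self) form is mainly kept for Python 2 compatibility.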
| https://stackoverflow.com/questions/59308972/ |
Attribute error: 'ReduceLROnPlateau' object has no attribute _cmp | I'm trying to load an existing pytorch model with -
from mymodels import model
m = torch.load("model.pth", map_location=torch.device("cpu"))
I'm getting the error -
line 613, in _load
result = unpickler.load()
Attribute error: 'ReduceLROnPlateau' object has no attribute _cmp
my pytorch version is 1.3.1. How do I fix this?
| This was caused by a version difference between the PyTorch version the network was trained with and the one used to load it. It was resolved by using the same versions.
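A common way to avoid this version coupling altogether (a suggestion beyond the original fix) is to save and load only the state dict instead of the pickled model object; MyModel here is a placeholder for your architecture class:
torch.save(m.state_dict(), "model_state.pth")  # save with the training-time version

m = MyModel()  # rebuild the architecture first
m.load_state_dict(torch.load("model_state.pth", map_location="cpu"))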
| https://stackoverflow.com/questions/59319363/ |
Pytorch: Batch size is missing in data after torch.utils.data.random_split() is used on dataloader.dataset | I used random_split() to divide my data into train and test, and I observed that if the random split is done after the dataloader is created, the batch size is missing when getting a batch of data from the dataloader.
import torch
from torchvision import transforms, datasets
from torch.utils.data import random_split
# Normalize the data
transform_image = transforms.Compose([
transforms.Resize((240, 320)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
data = '/data/imgs/train'
def load_dataset():
data_path = data
main_dataset = datasets.ImageFolder(
root = data_path,
transform = transform_image
)
loader = torch.utils.data.DataLoader(
dataset = main_dataset,
batch_size= 64,
num_workers = 0,
shuffle= True
)
# Dataset has 22424 data points
trainloader, testloader = random_split(loader.dataset, [21000, 1424])
return trainloader, testloader
trainloader, testloader = load_dataset()
Now to get a single batch of images from the train and test loaders:
images, labels = next(iter(trainloader))
images.shape
# %%
len(trainloader)
# %%
images_test, labels_test = next(iter(testloader))
images_test.shape
# %%
len(testloader)
The output that I get does not have the batch size for the train or test batches. The output dims should be [batch x channel x H x W], but I get [channel x H x W].
Output:
But if I create the split from the dataset and then make two data loaders using the splits, I get the batchsize in the output.
def load_dataset():
data_path = data
main_dataset = datasets.ImageFolder(
root = data_path,
transform = transform_image
)
# Dataset has 22424 data points
train_data, test_data = random_split(main_dataset, [21000, 1424])
trainloader = torch.utils.data.DataLoader(
dataset = train_data,
batch_size= 64,
num_workers = 0,
shuffle= True
)
testloader = torch.utils.data.DataLoader(
dataset = test_data,
batch_size= 64,
num_workers= 0,
shuffle= True
)
return trainloader, testloader
trainloader, testloader = load_dataset()
On running the same 4 commands to get a single train and test batch:
Is the first approach wrong? The lengths show that the data has been split, so why do I not see the batch size?
| The first approach is wrong.
Only DataLoader instances return batches of items. The Dataset like instances don't.
When you call random_split you pass it loader.dataset, which is just a reference to main_dataset (not a DataLoader). The result is that trainloader and testloader are Datasets, not DataLoaders. In fact you discard loader, which is your only DataLoader, when you return from load_dataset.
The second version is what you should do to get two separate DataLoaders.
| https://stackoverflow.com/questions/59322580/ |
Using Torch for numerical optimization, getting optimizer got an empty parameter list, | I am doing some computations and would like to optimise the parameters of this by using Pytorch. I am NOT defining a neural network, so no layers and stuff like that. Just a simple sequence of computations. I use the torch.nn.Module to be able to use Pytorch's optimisers.
My class looks something like this:
class XTransformer(torch.nn.Module):
def __init__(self, x):
super(XTransformer, self).__init__()
self.x = x
def funky_function(self, m, c):
# do some computations
m = self.x * 2 - m + c
return m, c
def forward(self, m, c):
m, c = self.funky_function(m, c)
return m, c
Later on I define and try to optimise this parameter x like so:
x = torch.autograd.Variable(x, requires_grad=True)
model = XTransformer(x)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
m, c = smt
loss = loss_func()
for t in range(100):
m , c = model(m, c)
l = loss(m, true)
optimizer.zero_grad()
l.backward()
optimizer.step()
I don't know what to do. I get the "ValueError: optimizer got an empty parameter list" error. When I just give [x] as an argument to the optimizer, it doesn't update and change x for me. What should I do?
| You need to register x as a parameter to let PyTorch know it should be a trainable parameter. This can be done by defining it as an nn.Parameter during __init__:
def __init__(self, x):
    super(XTransformer, self).__init__()
    self.x = torch.nn.Parameter(x)  # registered, so model.parameters() finds it
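With that change, a minimal sketch of the setup (note that torch.autograd.Variable is deprecated; a plain tensor works, and nn.Parameter sets requires_grad=True for you):
x = torch.rand(4)
model = XTransformer(x)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
print(list(model.parameters()))  # no longer an empty parameter list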
| https://stackoverflow.com/questions/59326769/ |
Python list.append(another_list) is just repeating another_list again and again | I am writing a program which goes through a loop.
In the loop body it makes a Python list named value and appends it to another global list values.
But I am having an issue: after using
values.append(value)
every element of values ends up holding the same value.
# values
[['closed_eye_0003.jpg_face_2.jpg', 0]]
# value
['closed_eye_0007.jpg_face_1.jpg', 0]
# after appending the value to values the output is
[['closed_eye_0007.jpg_face_1.jpg', 0], ['closed_eye_0007.jpg_face_1.jpg', 0]]
The Code is
import face_recognition
values = list()
value = list()
root_dir = '/content/dataset_facialImages_300/test/CloseFace'
isOpen = 0
for img_name in imgs_names:
img_file = root_dir + '/' + img_name
# Load the jpg file into a numpy array
image = face_recognition.load_image_file(img_file)
# Find all facial features in all the faces in the image
face_landmarks_list = face_recognition.face_landmarks(image)
if len(face_landmarks_list):
# print(len(face_landmarks_list))
# print(img_name)
first_face = face_landmarks_list[0]
left_eye = first_face['left_eye']
right_eye = first_face['right_eye']
value.clear()
value.append(img_name)
value.append(isOpen)
# for i in right_eye:
# value.append(i[0])
# value.append(i[1])
print(value)
# values.insert( len(values), value)
values.append(value)
print(values)
# print( (len(values) / len(imgs_names)) * 100 )
The Output is
['closed_eye_0003.jpg_face_2.jpg', 0]
[['closed_eye_0003.jpg_face_2.jpg', 0]]
['closed_eye_0007.jpg_face_1.jpg', 0]
[['closed_eye_0007.jpg_face_1.jpg', 0], ['closed_eye_0007.jpg_face_1.jpg', 0]]
['closed_eye_0009.jpg_face_1.jpg', 0]
[['closed_eye_0009.jpg_face_1.jpg', 0], ['closed_eye_0009.jpg_face_1.jpg', 0], ['closed_eye_0009.jpg_face_1.jpg', 0]]
['closed_eye_0012.jpg_face_1.jpg', 0]
[['closed_eye_0012.jpg_face_1.jpg', 0], ['closed_eye_0012.jpg_face_1.jpg', 0], ['closed_eye_0012.jpg_face_1.jpg', 0], ['closed_eye_0012.jpg_face_1.jpg', 0]]
If I use
values.extend(value)
it does not repeat.
If I use
values.insert( len(values), value)
the output is the same.
The output should be
['closed_eye_0003.jpg_face_2.jpg', 0]
[['closed_eye_0003.jpg_face_2.jpg', 0]]
['closed_eye_0007.jpg_face_1.jpg', 0]
[['closed_eye_0003.jpg_face_1.jpg', 0], ['closed_eye_0007.jpg_face_1.jpg', 0]]
| Create value = list() inside the loop instead of clearing it.
If you print value and values right after clearing value, you will see what's going on: every entry of values is a reference to the same list object, so mutating value changes them all.
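A minimal, self-contained sketch of the problem and the fix:
values = []
shared = []
for name in ["a.png", "b.png"]:
    shared.clear()           # mutates the one shared list
    shared.append(name)
    values.append(shared)    # every entry points at the same object
print(values)                # [['b.png'], ['b.png']]

values = []
for name in ["a.png", "b.png"]:
    value = [name]           # a fresh list object each iteration
    values.append(value)
print(values)                # [['a.png'], ['b.png']]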
| https://stackoverflow.com/questions/59332543/ |
replacement of nditer for numpy array for pytorch tensor | I am looking for similar functionality for pytorch tensor as of nditer of numpy array, see this link with a small example.
https://discuss.pytorch.org/t/replacement-of-np-nditer-for-torch/64024?u=songqsh
| I agree with the comments that iterating element by element is rarely a good idea for interacting with arrays or tensors. nditer has lots of functionality; here is just an iteration through all elements in the tensor, which yields the coordinates of each element together with the element itself:
def deep_iter(data, ix=tuple()):
    try:
        for i, element in enumerate(data):
            # recurse, extending the coordinate tuple with this index
            yield from deep_iter(element, ix + (i,))
    except TypeError:  # raised when data is a 0-dim tensor, i.e. a scalar element
        yield ix, data
So for example at pytorch forum it can be used as following:
new_values = {}
for i, value in deep_iter(a):
if all(map(lambda x: 0 < x < (a.shape[1] - 1), i)):
new_values[i] = calc_average(i, a) #write func to calc average
for i, new_value in new_values.items():
a[i] = new_value
| https://stackoverflow.com/questions/59332694/ |
How to convert list of str into Pytorch tensor | How do I convert a list of strings into a string/character tensor in Pytorch?
Related example with numpy:
import numpy as np
mylist = ["this","is","my","list"]
np.array([mylist])
Returns:
array([['this', 'is', 'my', 'list']], dtype='<U4')
However, in pytorch:
torch.tensor(mylist)
Returns:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-156-36722d81da09> in <module>
----> 1 torch.tensor(mylist)
ValueError: too many dimensions 'str'
A tensor is a multi-dimensional array, so I'm assuming this is possible pytorch.
Note: this post does not answer my question
| There is no string tensor, so you cannot directly convert a list of strings to a PyTorch tensor.
Alternatively, you can convert the strings to ASCII char values and save those as a tensor.
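A rough sketch of that conversion (zero-padding the rows to equal length is an assumption for illustration):
import torch

mylist = ["this", "is", "my", "list"]
max_len = max(len(s) for s in mylist)
# one row per string: ASCII codes, zero-padded to equal length
t = torch.tensor([[ord(c) for c in s] + [0] * (max_len - len(s)) for s in mylist])
print(t.shape)  # torch.Size([4, 4])
# converting back
print(["".join(chr(v) for v in row if v) for row in t.tolist()])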
| https://stackoverflow.com/questions/59340566/ |
ModuleNotFoundError: No module named 'torch_scope' | Using the macOS terminal, I'm trying to run ./autoner_train.sh by following this guide on GitHub.
I have activated my Conda environment and check my PyTorch version
(pytorch_env) myname (master) AutoNER $ python -c "import torch; print(torch.__version__)"
1.3.1
After that, when running, I get the following error
ModuleNotFoundError: No module named 'torch_scope'
I don't know where's the problem. I have installed everything and I tried googling the problem, all I found is that I need PyTorch installed, which I already have.
| In the documentation, in the Dependencies section, you can read:
Dependencies
This project is based on python>=3.6. The dependent package for this
project is listed as below:
numpy==1.13.1
tqdm
torch-scope>=0.5.0
pytorch==0.4.1
So you need to install torch-scope>=0.5.0 too:
pip install torch-scope
| https://stackoverflow.com/questions/59343283/ |
Neural Network for bit operation AND | This is probably not the best place to ask this question but I am really struggling.
I want to create a "trivial" neural network which has 2 inputs, 3 hidden neurons and 1 output. The idea is that we feed it two booleans and it outputs the AND result.
I wanted to do it in a somewhat "conventional" way (I am still very new to pytorch). I'm using the Adam optimiser with a learning rate of 0.001 and the nll_loss function (although I need to change it as I need to have only 1 output and this requires the number of outputs to match the number of classes - in this case 1 and 0)
I know this should be a really trivial problem but I am really struggling and can't find anything useful on google.
| Make a .csv file with your training data.
x1 x2 label
TRUE FALSE FALSE
TRUE TRUE TRUE
FALSE FALSE FALSE
FALSE TRUE FALSE
Create a dataset class.
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torch.utils.data import Dataset, DataLoader

class boolData(Dataset):
def __init__(self, csv_path):
self.label = pd.read_csv(csv_path)
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
sample = torch.tensor(self.label.iloc[idx,0:2]).int()
label = torch.tensor(self.label.iloc[idx,2]).int()
return sample, label
tensor_dataset = boolData(csv_path='sample_bool_stack.csv')
boolDL = DataLoader(tensor_dataset, batch_size=4, shuffle=True)
batch, labels = next(iter(boolDL))
batch, labels
(tensor([[1, 0],
[1, 1],
[0, 1],
[0, 0]], dtype=torch.int32), tensor([0, 1, 0, 0], dtype=torch.int32))
Initialize embeddings.
def _emb_init(x):
x = x.weight.data
sc = 2/(x.size(1)+1)
x.uniform_(-sc,sc)
The model converts True, False to integers and takes the sum as the input.
EDIT: output with just single neuron.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.emb = nn.Embedding(3, 2)
_emb_init(self.emb)
self.fc1 = nn.Linear(2, 10)
self.fc2 = nn.Linear(10, 1)
def forward(self, x):
x = torch.sum(x, dim=1)
x = self.emb(x)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
Initialize model, optimizer, and loss function. EDIT: change loss to match single neuron output.
model = Net()
opt = optim.SGD(model.parameters(), lr=1e-3)
loss_func = nn.BCEWithLogitsLoss()
num_epochs = 10000
device = "cpu"
Train model.
for epoch in range(num_epochs):
for i,(inputs, labels) in enumerate(boolDL):
inputs = inputs.to(device).long()
labels = labels.to(device).float()
opt.zero_grad()
output = model(inputs)
loss = loss_func(output.view(-1), labels)
loss.backward()
opt.step()
with torch.no_grad():
if epoch % 2000 == 0: print(loss.item())
0.004416308831423521
0.002891995944082737
0.004371378570795059
0.0017852336168289185
Test model.
inputs = torch.tensor([[0,0],[0,1],[0,0]])
def check_model(inputs):
out = model(inputs)
preds = torch.sigmoid(out).round()
return preds.detach().numpy()
check_model(inputs)
array([[0.],
[0.],
[0.]], dtype=float32)
| https://stackoverflow.com/questions/59346002/ |
How to extract value from multi dimensional tensor without losing backward info - PyTorch | I am trying to extract some specific columns of a multi-dimensional tensor. I can take these values in the concept of multi-dimensional access; however I am losing the "grad_fn" option of the tensor when I create new values.
My actual multi-dimensional tensor, x, is:
tensor([[[-8.5780, -7.1091, -8.9204, ..., -8.0616, -8.4115, -7.6345],
[-7.9776, -7.3767, -8.2914, ..., -7.9634, -9.1003, -7.5687],
[-7.7192, -7.4307, -8.4294, ..., -7.8605, -8.1345, -7.0781],
...,
[-8.3652, -7.5910, -8.6671, ..., -8.0487, -8.5826, -7.8624],
[-8.1572, -7.1679, -8.8977, ..., -8.1059, -8.1500, -7.7310],
[-8.1821, -7.5455, -9.2328, ..., -8.7435, -8.4430, -7.2503]]],
grad_fn=<LogSoftmaxBackward>)
with shape (1,10,2000).
For example, I want to extract its specific 10 columns and finally create an array with a shape (1,10,10) or (10,10).
Currently, of course, I can extract these values into an array:
for i in range(x.size(0)):
values = np.array([x[i][1:, 1].detach().numpy()])
idx_to_get = [1,5,7,25,37,44,720,11,25,46]
for idx in idx_to_get:
values = np.append(values, np.array([np.array(x[0][1:,idx].detach().numpy())]), axis=0)
values = np.delete(values, 0, axis=0)
print(torch.from_numpy(values))
Running the code above gave me this output for the values:
tensor([[-7.5589, -6.7990, -7.2068, -7.4451, -7.6688, -7.2991, -7.1398, -7.4362,
-7.4959, -8.0101, -7.5106, -8.0425, -7.6203, -7.7266, -7.9249, -7.6479,
-7.6684],
[-7.2831, -7.7666, -7.8302, -7.3651, -7.2184, -6.7932, -7.1968, -7.6590,
-7.4033, -6.9504, -7.0767, -7.5366, -7.8364, -7.5935, -8.1235, -7.3222,
-7.8096],
[-7.5589, -6.7990, -7.2068, -7.4451, -7.6688, -7.2991, -7.1398, -7.4362,
-7.4959, -8.0101, -7.5106, -8.0425, -7.6203, -7.7266, -7.9249, -7.6479,
-7.6684],
[-7.5650, -7.6627, -7.4230, -7.4726, -7.5621, -7.4489, -7.8344, -7.6130,
-7.9440, -7.6158, -7.1895, -7.8070, -7.2306, -7.6364, -7.7390, -7.6832,
-7.5931],
[-7.5589, -6.7990, -7.2068, -7.4451, -7.6688, -7.2991, -7.1398, -7.4362,
-7.4959, -8.0101, -7.5106, -8.0425, -7.6203, -7.7266, -7.9249, -7.6479,
-7.6684],
[-7.5589, -6.7990, -7.2068, -7.4451, -7.6688, -7.2991, -7.1398, -7.4362,
-7.4959, -8.0101, -7.5106, -8.0425, -7.6203, -7.7266, -7.9249, -7.6479,
-7.6684],
[-7.2831, -7.7666, -7.8302, -7.3651, -7.2184, -6.7932, -7.1968, -7.6590,
-7.4033, -6.9504, -7.0767, -7.5366, -7.8364, -7.5935, -8.1235, -7.3222,
-7.8096],
[-8.3559, -8.3751, -8.2082, -8.6825, -8.4860, -8.4156, -8.4683, -8.8760,
-8.7354, -8.6155, -8.7544, -8.4527, -8.3690, -8.5795, -8.6023, -8.2267,
-8.4736],
[-7.4392, -7.4694, -7.4094, -7.5062, -7.7691, -7.9009, -7.7664, -7.1755,
-8.0641, -7.6327, -7.6413, -7.9604, -7.9520, -7.8893, -7.8119, -7.8718,
-8.0961],
[-8.2182, -8.0280, -8.1398, -8.0258, -7.9951, -8.0664, -8.1976, -7.6182,
-8.0356, -8.0293, -7.7228, -7.7847, -7.4966, -7.6925, -7.5268, -7.0476,
-7.2920]])
But the values also should have grad_fn.
What should I do?
I know that using values.requires_grad_(True) worked, but I believe that with that function I lose the LogSoftmaxBackward on x.
| The problem is that you can not use numpy functions to get this done AND retain the graph. You must use PyTorch functions only.
x = torch.rand((1,10,2000), requires_grad=True)
idx_to_get = [1,5,7,25,37,44,720,11,25,46]
values = x[0,1:,idx_to_get]
values
tensor([[0.6669, 0.1121, 0.1323, 0.7351, 0.0252, 0.2551, 0.3044, 0.3986, 0.7351,
0.1060],
[0.6169, 0.7715, 0.2829, 0.2860, 0.6810, 0.2485, 0.8585, 0.5284, 0.2860,
0.8363],
[0.6877, 0.0899, 0.6346, 0.7018, 0.7357, 0.1477, 0.2073, 0.3877, 0.7018,
0.0226],
[0.9241, 0.7883, 0.8442, 0.1831, 0.0551, 0.0209, 0.5300, 0.6909, 0.1831,
0.2950],
[0.5141, 0.5072, 0.4354, 0.3998, 0.5152, 0.9183, 0.2200, 0.5955, 0.3998,
0.8631],
[0.9630, 0.3542, 0.8435, 0.8299, 0.8614, 0.5029, 0.8672, 0.4985, 0.8299,
0.2207],
[0.6399, 0.5128, 0.2131, 0.4255, 0.9318, 0.6598, 0.8478, 0.7902, 0.4255,
0.9080],
[0.8920, 0.0357, 0.8957, 0.7379, 0.0191, 0.6750, 0.8326, 0.8535, 0.7379,
0.9087],
[0.5243, 0.7446, 0.4278, 0.3542, 0.1104, 0.2999, 0.0132, 0.8218, 0.3542,
0.6761]], grad_fn=<IndexBackward>)
| https://stackoverflow.com/questions/59349058/ |
Cifar10 dataset: read certain number of images from a class | I am currently learning deep learning with Pytorch and doing some experiment with Cifar 10 dataset. Which is having 10 classes each class is having 5000 test images. I want to use only 60% of dog and deer classes data and 100% data of other classes.
As per my understanding I need to use custom dataset. But I am not actually able to figure it out. Any Idea, sample code or link if you can share will be helpful for me.
| You can use Subset like this:
from torchvision.datasets import CIFAR10
from torch.utils.data import Subset
ds = CIFAR10('~/.torch/data/', train=True, download=True)
dog_indices, deer_indices, other_indices = [], [], []
dog_idx, deer_idx = ds.class_to_idx['dog'], ds.class_to_idx['deer']
for i in range(len(ds)):
current_class = ds[i][1]  # note: indexing ds decodes the image too; ds.targets[i] would be faster
if current_class == dog_idx:
dog_indices.append(i)
elif current_class == deer_idx:
deer_indices.append(i)
else:
other_indices.append(i)
dog_indices = dog_indices[:int(0.6 * len(dog_indices))]
deer_indices = deer_indices[:int(0.6 * len(deer_indices))]
new_dataset = Subset(ds, dog_indices+deer_indices+other_indices)
| https://stackoverflow.com/questions/59351910/ |
Constraint on dimensions of activation/feature map in convolutional network | Let's say input to intermediate CNN layer is of size 512×512×128 and that in the convolutional layer we apply 48 7×7 filters at stride 2 with no padding. I want to know what is the size of the resulting activation map?
I checked some previous posts (e.g., here or here) to point to this Stanford course page. And the formula given there is (W − F + 2P)/S + 1 = (512 - 7)/2 + 1, which would imply that this set up is not possible, as the value we get is not an integer.
However if I run the following snippet in Python 2.7, the code seems to suggest that the size of activation map was computed via (512 - 6)/2, which makes sense but does not match the formula above:
>>> import torch
>>> conv = torch.nn.Conv2d(in_channels=128, out_channels=48, kernel_size=7, stride=2, padding=0)
>>> conv
Conv2d(128, 48, kernel_size=(7, 7), stride=(2, 2))
>>> img = torch.rand((1, 128, 512, 512))
>>> out = conv(img)
>>> out.shape
(1, 48, 253, 253)
Any help in understanding this conundrum is appreciated.
| Here is the formula being used in PyTorch: conv2d (go to the shape section)
Also, as far as I know, this is the best tutorial on this subject.
Bonus: here is a neat visualizer for conv calculations.
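To resolve the conundrum in the question: the formula uses a floor when the stride does not divide evenly, so out = floor((W - F + 2P) / S) + 1 = floor((512 - 7) / 2) + 1 = 252 + 1 = 253, which matches the [1, 48, 253, 253] shape PyTorch returns. A non-integer value simply means the last window does not fit and is dropped.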
| https://stackoverflow.com/questions/59352314/ |
Train part of the network | I am new to Deep Learning and Pytorch. I have the following problem:
My overall architecture consists of a network that I define (NN1), and another pretrained network (NN2), so that the output of NN1 is the input of NN2. I want to define the loss of NN1 using the difference (RMSE) between the output of NN2 and a know ground-truth.
I need to back-propagate through NN2 and NN1 (to train NN1), without changing NN2.
I can use requires_grad=False on NN2, but will it disable the back-propagation through NN2? How do I specify this requirement in Pytorch?
Thanks.
| You will use two different modules. The first will be your module/model/NN (the NN1) and the second the pretrained one.
Then, you will use model_nn1.train() and model_nn2.eval(). Then, you can do:
optimizer = torch.optim.Adam(model_nn1.parameters()) # You can use your own preferred optimizer
model_nn1.train()
model_nn2.eval()
for epoch in range(epochs):
for x, y in dataloader: # the dataloader is your dataloader according to the torch.utils.data.DataLoader
optimizer.zero_grad()
h_nn1 = model_nn1(x) # x is the input
y_hat = model_nn2(h_nn1) # y_hat is the output
loss = torch.sqrt(torch.mean((y_hat - y)**2))  # RMSE; note the variable is y_hat, as defined above
loss.backward()
optimizer.step()
You can check the requirement about gradient doing:
>>> import torch
>>> x = torch.nn.Linear(2, 3)
>>> x2 = torch.nn.Linear(3, 2)
>>> z = torch.rand(2, 2)
>>> y = torch.rand(2, 2)
>>> x.train()
Linear(in_features=2, out_features=3, bias=True)
>>> x2.eval()
Linear(in_features=3, out_features=2, bias=True)
>>> h = x(z)
>>> h.requires_grad
True
>>> y_hat = x2(h)
>>> y_hat.requires_grad
True
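If you also want to skip computing gradients for NN2's weights, a sketch in addition to the above (since only NN1's parameters are given to the optimizer, NN2 is not updated either way):
for p in model_nn2.parameters():
    p.requires_grad_(False)  # gradients still flow *through* NN2 back to NN1's output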
| https://stackoverflow.com/questions/59355518/ |
Reading vector information instead when trying to read filename to create custom dataset | Using python v 3.7.3, pytorch v 0.4.1, windows 10, running code on Jupyter
I am pretty new to programming and deep learning, and I am creating a new dataset by combining 2 existing datasets. I am trying to read the filenames of the images, but instead it is returning the pixel data. Here is a snippet of the output (imgname is the path; names should just be the filename, e.g. vinegar_41):
imgname: C:\Users\User\Documents\Dataset\train\vinegar\vinegar_41.png
names: [[[128 128 128]
[128 128 128]
[128 128 128]
...
[128 128 128]
[128 128 128]
[128 128 128]]
[[128 128 128]
[128 128 128]
[128 128 128]
...
[128 128 128]
[128 128 128]
[128 128 128]]
Here is my code thus far:
__all__ = ['MyDataset']
class MyDataset(Dataset):
def __init__(self, root, transform=None, target_transform=None):
root = train_set
print(root)
root_list = os.listdir(root)
print(root_list)
for f in root_list:
print('checkpoint')
imgs = []
for img in os.listdir(f):
#print('\n image: ', img)
imgname = (root + '\\' + f + '\\' + img)
print('\n imgname:', imgname)
open(imgname, 'r')
if img.endswith('.png'):
names = cv2.imread(imgname)
imgs.append(names)
#print('\n imgs: ', imgs)
print('\n names: ', names)
#Image.close()
self.imgs = imgs
self.transform = transform
self.target_transform = target_transform
def __getitem__(self, index):
fn, label = self.imgs[index]
img = PIL.Image.open(fn).convert('RGB')
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
label = self.transform(label)
return img, label
def __len__(self):
return len(self.imgs)
I'm in over my head, but my project is due in a couple days, so any advice on how to make this work would be greatly appreciated.
| import os
import pandas as pd
from skimage import io
from torch.utils.data import Dataset
class MyDataset(Dataset):
"""My dataset."""
def __init__(self, csv_file, root_dir, transform=None,
target_transform=None):
"""
Args:
csv_file (string): Path to the csv file with labels.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
target_transform (callable, optional): Optional transform to
be applied on a label.
"""
self.label = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
    img_name = os.path.join(self.root_dir, self.label.iloc[idx, 0])
    sample = io.imread(img_name)
    lab = self.label.iloc[idx, 1]
    if self.transform is not None:
        sample = self.transform(sample)
    if self.target_transform is not None:
        lab = self.target_transform(lab)
    return sample, lab
The csv file should look like:
fname,label
image1.png,1
image2.png,0
image3.png,1
...
imagen.png,0
| https://stackoverflow.com/questions/59363296/ |
Cannot deploy trained model to Google Cloud Ai-Platform with custom prediction routine: Model requires more memory than allowed | I am trying to deploy a pretrained pytorch model to AI Platform with a custom prediction routine. After following the instructions described here the deployment fails with the following error:
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to have error, please contact Cloud ML.
The contents of the model folder are 83.89 MB large and are below the 250 MB limit that's described in the documentation. The only files in the folder are the checkpoint file (.pth) for the model and the tarball required for the custom prediction routine.
Command to create the model:
gcloud beta ai-platform versions create pose_pytorch --model pose --runtime-version 1.15 --python-version 3.5 --origin gs://rcg-models/pytorch_pose_estimation --package-uris gs://rcg-models/pytorch_pose_estimation/my_custom_code-0.1.tar.gz --prediction-class predictor.MyPredictor
Changing the runtime version to 1.14 leads to the same error.
I have tried changing the --machine-type argument to mls1-c4-m2 like Parth suggested but I still get the same error.
The setup.py file that generates my_custom_code-0.1.tar.gz looks like this:
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py'],
install_requires=["opencv-python", "torch"]
)
Relevant code snippet from the predictor:
def __init__(self, model):
"""Stores artifacts for prediction. Only initialized via `from_path`.
"""
self._model = model
self._client = storage.Client()
@classmethod
def from_path(cls, model_dir):
"""Creates an instance of MyPredictor using the given path.
This loads artifacts that have been copied from your model directory in
Cloud Storage. MyPredictor uses them during prediction.
Args:
model_dir: The local directory that contains the trained Keras
model and the pickled preprocessor instance. These are copied
from the Cloud Storage model directory you provide when you
deploy a version resource.
Returns:
An instance of `MyPredictor`.
"""
net = PoseEstimationWithMobileNet()
checkpoint_path = os.path.join(model_dir, "checkpoint_iter_370000.pth")
checkpoint = torch.load(checkpoint_path, map_location='cpu')
load_state(net, checkpoint)
return cls(net)
Additionally I have enabled logging for the model in AI Platform and I get the following outputs:
2019-12-17T09:28:06.208537Z OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
2019-12-17T09:28:13.474653Z WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/google/cloud/ml/prediction/frameworks/tf_prediction_lib.py:48: The name tf.saved_model.tag_constants.SERVING is deprecated. Please use tf.saved_model.SERVING instead.
2019-12-17T09:28:13.474680Z {"textPayload":"","insertId":"5df89fad00073e383ced472a","resource":{"type":"cloudml_model_version","labels":{"project_id":"rcg-shopper","region":"","version_id":"lightweight_pose_pytorch","model_id":"pose"}},"timestamp":"2019-12-17T09:28:13.474680Z","logName":"projects/rcg-shopper/logs/ml.googleapis…
2019-12-17T09:28:13.474807Z WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/google/cloud/ml/prediction/frameworks/tf_prediction_lib.py:50: The name tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY is deprecated. Please use tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY instead.
2019-12-17T09:28:13.474829Z {"textPayload":"","insertId":"5df89fad00073ecd4836d6aa","resource":{"type":"cloudml_model_version","labels":{"project_id":"rcg-shopper","region":"","version_id":"lightweight_pose_pytorch","model_id":"pose"}},"timestamp":"2019-12-17T09:28:13.474829Z","logName":"projects/rcg-shopper/logs/ml.googleapis…
2019-12-17T09:28:13.474918Z WARNING:tensorflow:
2019-12-17T09:28:13.474927Z The TensorFlow contrib module will not be included in TensorFlow 2.0.
2019-12-17T09:28:13.474934Z For more information, please see:
2019-12-17T09:28:13.474941Z * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
2019-12-17T09:28:13.474951Z * https://github.com/tensorflow/addons
2019-12-17T09:28:13.474958Z * https://github.com/tensorflow/io (for I/O related ops)
2019-12-17T09:28:13.474964Z If you depend on functionality not listed there, please file an issue.
2019-12-17T09:28:13.474999Z {"textPayload":"","insertId":"5df89fad00073f778735d7c3","resource":{"type":"cloudml_model_version","labels":{"version_id":"lightweight_pose_pytorch","model_id":"pose","project_id":"rcg-shopper","region":""}},"timestamp":"2019-12-17T09:28:13.474999Z","logName":"projects/rcg-shopper/logs/ml.googleapis…
2019-12-17T09:28:15.283483Z ERROR:root:Failed to import GA GRPC module. This is OK if the runtime version is 1.x
2019-12-17T09:28:16.890923Z Copying gs://cml-489210249453-1560169483791188/models/pose/lightweight_pose_pytorch/15316451609316207868/user_code/my_custom_code-0.1.tar.gz...
2019-12-17T09:28:16.891150Z / [0 files][ 0.0 B/ 8.4 KiB]
2019-12-17T09:28:17.007684Z / [1 files][ 8.4 KiB/ 8.4 KiB]
2019-12-17T09:28:17.009154Z Operation completed over 1 objects/8.4 KiB.
2019-12-17T09:28:18.953923Z Processing /tmp/custom_code/my_custom_code-0.1.tar.gz
2019-12-17T09:28:19.808897Z Collecting opencv-python
2019-12-17T09:28:19.868579Z Downloading https://files.pythonhosted.org/packages/d8/38/60de02a4c9013b14478a3f681a62e003c7489d207160a4d7df8705a682e7/opencv_python-4.1.2.30-cp37-cp37m-manylinux1_x86_64.whl (28.3MB)
2019-12-17T09:28:21.537989Z Collecting torch
2019-12-17T09:28:21.552871Z Downloading https://files.pythonhosted.org/packages/f9/34/2107f342d4493b7107a600ee16005b2870b5a0a5a165bdf5c5e7168a16a6/torch-1.3.1-cp37-cp37m-manylinux1_x86_64.whl (734.6MB)
2019-12-17T09:28:52.401619Z Collecting numpy>=1.14.5
2019-12-17T09:28:52.412714Z Downloading https://files.pythonhosted.org/packages/9b/af/4fc72f9d38e43b092e91e5b8cb9956d25b2e3ff8c75aed95df5569e4734e/numpy-1.17.4-cp37-cp37m-manylinux1_x86_64.whl (20.0MB)
2019-12-17T09:28:53.550662Z Building wheels for collected packages: my-custom-code
2019-12-17T09:28:53.550689Z Building wheel for my-custom-code (setup.py): started
2019-12-17T09:28:54.212558Z Building wheel for my-custom-code (setup.py): finished with status 'done'
2019-12-17T09:28:54.215365Z Created wheel for my-custom-code: filename=my_custom_code-0.1-cp37-none-any.whl size=7791 sha256=fd9ecd472a6a24335fd24abe930a4e7d909e04bdc4cf770989143d92e7023f77
2019-12-17T09:28:54.215482Z Stored in directory: /tmp/pip-ephem-wheel-cache-i7sb0bmb/wheels/0d/6e/ba/bbee16521304fc5b017fa014665b9cae28da7943275a3e4b89
2019-12-17T09:28:54.222017Z Successfully built my-custom-code
2019-12-17T09:28:54.650218Z Installing collected packages: numpy, opencv-python, torch, my-custom-code
| This is a common problem and we understand this is a pain point. Please do the following:
torchvision has torch as a dependency and, by default, it pulls torch from PyPI.
When deploying the model, even if you point it at custom AI Platform torchvision packages, it will still pull torch from PyPI, because torchvision, as built by the PyTorch team, is configured to use torch as a dependency. This torch dependency from PyPI gives a ~720 MB file, because it includes the GPU units.
To solve #1, you need to build torchvision from source and tell it where to get torch from; set it to the PyTorch download site, where the package is smaller. Rebuild the torchvision binary using the Python PEP 440 direct references feature. In torchvision's setup.py we have:
pytorch_dep = 'torch'
if os.getenv('PYTORCH_VERSION'):
pytorch_dep += "==" + os.getenv('PYTORCH_VERSION')
Update setup.py in torchvision to use direct references feature:
requirements = [
#'numpy',
#'six',
#pytorch_dep,
'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl'
]
I already did this for you, so I built 3 wheel files you can use:
gs://dpe-sandbox/torchvision-0.4.0-cp37-cp37m-linux_x86_64.whl (torch 1.2.0, vision 0.4.0)
gs://dpe-sandbox/torchvision-0.4.2-cp37-cp37m-linux_x86_64.whl (torch 1.2.0, vision 0.4.2)
gs://dpe-sandbox/torchvision-0.5.0-cp37-cp37m-linux_x86_64.whl (torch 1.4.0 vision 0.5.0)
These torchvision packages will get torch from the torch site instead of pypi: (Example: https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl)
Update your model's setup.py when deploying the model to AI Platform so that it includes neither torch nor torchvision.
Redeploy the model as follows:
PYTORCH_VISION_PACKAGE=gs://dpe-sandbox/torchvision-0.5.0-cp37-cp37m-linux_x86_64.whl
gcloud beta ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--python-version=3.7 \
--runtime-version={RUNTIME_VERSION} \
--machine-type=mls1-c4-m4 \
--package-uris=gs://{BUCKET}/{GCS_PACKAGE_URI},{PYTORCH_VISION_PACKAGE}\
--prediction-class={MODEL_CLASS}
You can change the PYTORCH_VISION_PACKAGE to any of the options I mentioned in #2
| https://stackoverflow.com/questions/59372655/ |
Pytorch does not backpropagate through a iterative tensor construction | I am currently trying to build a tensor iteratively in Pytorch.
Sadly the backprop does not work with the inplace operation in the loop. I already tried equivalent programs with stack for example. Does somebody know how I could build the tensor with a working backprop?
This is a minimal example which produces the error:
import torch
k=2
a =torch.Tensor([10,20])
a.requires_grad_(True)
b = torch.Tensor([10,20])
b.requires_grad_(True)
batch_size = a.size()[0]
uniform_samples = Uniform(torch.tensor([0.0]), torch.tensor([1.0])).rsample(torch.tensor([batch_size,k])).view(-1,k)
exp_a = 1/a
exp_b = 1/b
km = (1- uniform_samples.pow(exp_b)).pow(exp_a)
sticks = torch.zeros(batch_size,k)
remaining_sticks = torch.ones_like(km[:,0])
for i in range(0,k-1):
sticks[:,i] = remaining_sticks * km[:,i]
remaining_sticks *= (1-km[:,i])
sticks[:,k-1] = remaining_sticks
latent_variables = sticks
latent_variables.sum().backward()
The stack trace:
/opt/conda/conda-bld/pytorch_1570910687230/work/torch/csrc/autograd/python_anomaly_mode.cpp:57: UserWarning: Traceback of forward call that caused the error:
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 664, in launch_instance
app.start()
File "/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 563, in start
self.io_loop.start()
File "/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "/opt/conda/lib/python3.6/asyncio/base_events.py", line 438, in run_forever
self._run_once()
File "/opt/conda/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once
handle._run()
File "/opt/conda/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/opt/conda/lib/python3.6/site-packages/tornado/ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/opt/conda/lib/python3.6/site-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 787, in inner
self.run()
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 748, in run
yielded = self.gen.send(value)
File "/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 361, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 541, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 300, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "/opt/conda/lib/python3.6/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-124-2bbdbc3af797>", line 16, in <module>
sticks[:,i] = remaining_sticks * km[:,i]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-124-2bbdbc3af797> in <module>
19 latent_variables = sticks
20
---> 21 latent_variables.sum().backward()
/opt/conda/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
148 products. Defaults to ``False``.
149 """
--> 150 torch.autograd.backward(self, gradient, retain_graph, create_graph)
151
152 def register_hook(self, hook):
/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2]] is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
| You cannot use in-place operations on tensors that are needed for gradient computation, so you may not use *= in your algorithm; use an out-of-place multiplication instead.
import numpy as np
import torch
from torch.distributions import Uniform

k = 2
a = torch.tensor(np.array([10.,20]), requires_grad=True).float()
b = torch.tensor(np.array([10.,20]), requires_grad=True).float()
batch_size = a.size()[0]
uniform_samples = Uniform(torch.tensor([0.]), torch.tensor([1.])).rsample(torch.tensor([batch_size,k])).view(-1,k)
exp_a = 1/a
exp_b = 1/b
km = (1 - uniform_samples**exp_b)**exp_a
sticks = torch.zeros(batch_size,k)
remaining_sticks = torch.ones_like(km[:,0])
for i in range(0,k-1):
sticks[:,i] = remaining_sticks * km[:,i]
remaining_sticks = remaining_sticks * (1-km[:,i])
sticks[:,k-1] = remaining_sticks
latent_variables = sticks
latent_variables = torch.sum(latent_variables)
latent_variables.backward()
| https://stackoverflow.com/questions/59378742/ |
LSTM in Pytorch: how to add/change sequence length dimension? | I am running LSTM in pytorch but as I understand, it is only taking sequence length = 1. When I reshape to have sequence length to 4 or other number, then I get an error of mismatching length in input and target. If I reshape both input and target, then the model complains that it does not accept multi-target labels.
My train dataset has 66512 rows and 16839 columns, 3 categories/classes in the target. I would like to use a batch size 200 and a sequence length of 4, i.e. use 4 rows of data in a sequence.
Please advise how to adjust my model and/or data to be able to run model for various sequence lengths (e.g., 4).
batch_size=200
import torch
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
train_target = torch.tensor(train_data[['Label1','Label2','Label3']].values.astype(np.float32))
train_target = np.argmax(train_target, axis=1)
train = torch.tensor(train_data.drop(['Label1','Label2','Label3'], axis = 1).values.astype(np.float32))
train_tensor = TensorDataset(train.unsqueeze(1), train_target)
train_loader = DataLoader(dataset = train_tensor, batch_size = batch_size, shuffle = True)
print(train.shape)
print(train_target.shape)
torch.Size([66512, 16839])
torch.Size([66512])
import torch.nn as nn
class LSTMModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(LSTMModel, self).__init__()
# Hidden dimensions
self.hidden_dim = hidden_dim
# Number of hidden layers
self.layer_dim = layer_dim
# Building LSTM
self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
# Readout layer
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
# Initialize hidden state with zeros
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_().to(device)
# Initialize cell state
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_().to(device)
out, (hn, cn) = self.lstm(x, (h0,c0))
# Index hidden state of last time step
out = self.fc(out[:, -1, :])
return out
input_dim = 16839
hidden_dim = 100
output_dim = 3
layer_dim = 1
batch_size = batch_size
num_epochs = 1
model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.1
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
print(len(list(model.parameters())))
for i in range(len(list(model.parameters()))):
print(list(model.parameters())[i].size())
6
torch.Size([400, 16839])
torch.Size([400, 100])
torch.Size([400])
torch.Size([400])
torch.Size([3, 100])
torch.Size([3])
for epoch in range(num_epochs):
for i, (train, train_target) in enumerate(train_loader):
# Load data as a torch tensor with gradient accumulation abilities
train = train.requires_grad_().to(device)
train_target = train_target.to(device)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(train)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, train_target)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
print('Epoch: {}. Loss: {}. Accuracy: {}'.format(epoch, np.around(loss.item(), 4), np.around(accuracy,4)))
| This is what worked eventually: reshaping the input data into sequences of 4 and having one target value per sequence, for which I picked the last value in the target sequence per my problem's logic. It seems very easy now, but it was very tricky back then. The rest of the posted code is the same.
train_target = torch.tensor(train_data[['Label1','Label2','Label3']].iloc[3::4].values.astype(np.float32))
train_target = np.argmax(train_target, axis=1)
train = torch.tensor(train_data.drop(['Label1','Label2','Label3'], axis = 1).values.reshape(-1, 4, 16839).astype(np.float32))
train_tensor = TensorDataset(train, train_target)
train_loader = DataLoader(dataset = train_tensor, batch_size = batch_size, shuffle = True)
print(train.shape)
print(train_target.shape)
torch.Size([16628, 4, 16839])
torch.Size([16628])
| https://stackoverflow.com/questions/59381695/ |
Pytorch: How to access CrossEntropyLoss() gradient? | I want to modify the tensor that stores the CrossEntropyLoss() gradient, that is, P(i)-T(i). Where is it stored and how do I access it?
code:
input = torch.randn(3, 5, requires_grad=True)
input.register_hook(lambda x: print(" \n input hook: ",x))
print(input)
target = torch.empty(3, dtype=torch.long).random_(5)
print(target)
criterion = nn.CrossEntropyLoss()
criterion.requires_grad = True
loss0 = criterion(input,target)
loss0.register_hook(lambda x: print(" \n loss0 hook: ",x))
print("before backward loss0.grad :",loss0.grad)
print("loss0 :",loss0)
loss0.backward()
print("after backward loss0.grad :",loss0.grad)
output:
tensor([[-0.6149, -0.8179, 0.6084, -0.2837, -0.5316],
[ 1.7246, 0.5348, 1.3646, -0.7148, -0.3421],
[-0.3478, -0.6732, -0.7610, -1.0381, -0.5570]], requires_grad=True)
tensor([4, 1, 0])
before backward loss0.grad : None
loss0 : tensor(1.7500, grad_fn=<NllLossBackward>)
loss0 hook: tensor(1.)
input hook: tensor([[ 0.0433, 0.0354, 0.1472, 0.0603, -0.2862],
[ 0.1504, -0.2876, 0.1050, 0.0131, 0.0190],
[-0.2432, 0.0651, 0.0597, 0.0452, 0.0732]])
after backward loss0.grad : None
| Given your specification in the comments, you want the gradient with respect to the input (the output of the model). In your code you look at the gradient of the loss, which does not exist. So you could do something like:
import torch
input = torch.tensor([1,0,1,0], dtype=float, requires_grad=True)
target = torch.tensor([1,2,3,4], dtype=float)
loss = (input - target).abs().mean()
loss.backward()
Here loss.grad gives you None, but input.grad returns:
tensor([ 0.0000, -0.2500, -0.2500, -0.2500], dtype=torch.float64)
Which should be the gradient you are interested in.
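For the cross-entropy case in the question, a minimal sketch: the gradient with respect to the logits is (softmax(input) - one_hot(target)) / batch_size, i.e. exactly the P(i) - T(i) you are after, and it lands in input.grad:
import torch
import torch.nn as nn

input = torch.randn(3, 5, requires_grad=True)
target = torch.empty(3, dtype=torch.long).random_(5)

loss = nn.CrossEntropyLoss()(input, target)
loss.backward()

manual = torch.softmax(input.detach(), dim=1)
manual[torch.arange(3), target] -= 1           # subtract the one-hot target
print(torch.allclose(input.grad, manual / 3))  # True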
| https://stackoverflow.com/questions/59394244/ |
How do i save images for each class using Pytorch? (no grid) | I am using acgan to do image augmentation. At present, sample images are generated in grid format. But I want to save images for each class separately. (e.g. 1.png; 2.png ...) How should I modify this code? Or is there an answer that I would like to refer to?
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.label_emb = nn.Embedding(opt.n_classes, opt.latent_dim)
self.init_size = opt.img_size // 4 # Initial size before upsampling
self.l1 = nn.Sequential(nn.Linear(opt.latent_dim, 128 * self.init_size ** 2))
self.conv_blocks = nn.Sequential(
nn.BatchNorm2d(128),
nn.Upsample(scale_factor=2),
nn.Conv2d(128, 128, 3, stride=1, padding=1),
nn.BatchNorm2d(128, 0.8),
nn.LeakyReLU(0.2, inplace=True),
nn.Upsample(scale_factor=2),
nn.Conv2d(128, 64, 3, stride=1, padding=1),
nn.BatchNorm2d(64, 0.8),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(64, opt.channels, 3, stride=1, padding=1),
nn.Tanh(),
)
..
generator = Generator()
..
def sample_image(n_row, batches_done):
"""Saves a grid of generated digits ranging from 0 to n_classes"""
# Sample noise
z = Variable(FloatTensor(np.random.normal(0, 1, (n_row ** 2, opt.latent_dim))))
# Get labels ranging from 0 to n_classes for n rows
labels = np.array([num for _ in range(n_row) for num in range(n_row)])
labels = Variable(LongTensor(labels))
gen_imgs = generator(z, labels)
save_image(gen_imgs.data, "images/%d.png" % batches_done, nrow=n_row, normalize=True)
| In sample_image, you have a line that defines the target labels for the generator:
labels = np.array([num for _ in range(n_row) for num in range(n_row)]).
Instead of using num, which changes because it is sampled from range, use a constant number that you pass as an argument (class_id below):
def sample_image(n_row, batches_done, class_id):
"""Saves a grid of generated digits ranging from 0 to n_classes"""
# Sample noise
z = Variable(FloatTensor(np.random.normal(0, 1, (n_row ** 2, opt.latent_dim))))
# Get labels ranging from 0 to n_classes for n rows
labels = np.array([class_id for _ in range(n_row) for __ in range(n_row)])
labels = Variable(LongTensor(labels))
gen_imgs = generator(z, labels)
save_image(gen_imgs.data, "images/%d_%d.png" % (class_id, batches_done), nrow=n_row, normalize=True)  # filename now includes the class id, so each class is saved separately
This way you will get a rectangular grid full of images of the class you have requested.
Furthermore, to have just one image you can set n_row to 1. Note that you didn't provide code for the save_image function; there may be some tricks to it.
| https://stackoverflow.com/questions/59405182/ |
nvprof warning on CUDA_VISIBLE_DEVICES | When I use os.environ['CUDA_VISIBLE_DEVICES'] in pytorch, I get the following message
Warning: Device on which events/metrics are configured are different than the device on which it is being profiled. One of the possible reason is setting CUDA_VISIBLE_DEVICES inside the application.
What does this actually mean? How can I avoid this by using 'CUDA_VISIBLE_DEVICES' (not torch.cuda.set_device())?
Here is the code in pytorch test.py
import torch
import torch.nn as nn
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
g = 1
c1 = 512
c2 = 512
input = torch.randn(64, c1, 28, 28).cuda()
model = nn.Sequential(
nn.Conv2d(c1,c2,1,groups=g),
nn.ReLU(),
nn.Conv2d(c1,c2,1,groups=g),
nn.ReLU(),
nn.Conv2d(c1,c2,1,groups=g),
nn.ReLU(),
nn.Conv2d(c1,c2,1,groups=g),
nn.ReLU(),
nn.Conv2d(c1,c2,1,groups=g),
nn.ReLU(),
).cuda()
out = model(input)
and the command:
nvprof --analysis-metrics -o metrics python test.py
|
What does this actually mean?
It means that nvprof started profiling your code on a GPU context which you made unavailable by setting CUDA_VISIBLE_DEVICES.
How can I avoid this by using CUDA_VISIBLE_DEVICES (not torch.cuda.set_device())?
Probably like this:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import torch
....
I know nothing about pytorch, but I would guess that importing the library triggers a lot of CUDA activity you don't see. If you import the library after you set CUDA_VISIBLE_DEVICES, I suspect the whole problem will disappear.
If that doesn't work then you would have no choice but to not set CUDA_VISIBLE_DEVICES within the python code at all, and instead do this:
CUDA_VISIBLE_DEVICES=1 nvprof --analysis-metrics -o metrics python test.py
| https://stackoverflow.com/questions/59419516/ |
Resolution preserving Fully Convolutional Network | I am new to ML and Pytorch and I have the following problem:
I am looking for a Fully Convolutional Network architecture in Pytorch, so that the input would be an RGB image (HxWxC or 480x640x3) and the output would be a single channel image (HxW or 480x640). In other words, I am looking for a network that will preserve the resolution of the input (HxW), and will loose the channel dimension. All of the networks that I've came across (ResNet, Densenet, ...) end with a fully connected layer (without any upsampling or deconvolution). This is problematic for two reasons:
I am restricted with the choice of the input size (HxWxC).
It has nothing to do with the output that I expect to get (a single channel image HxW).
What am I missing? Why is there even a FC layer? Why is there no up-sampling, or some deconvolution layers after feature extraction? Is there any build-in torchvision.model that might suit my requirements? Where can I find such pytorch architecture? As I said, I am new in this field so I don't really like the idea of building such a network from scratch.
Thanks.
| You probably came across the networks that are used in classification. So they end up with a pooling and a fully connected layer to produce a fixed number of categorical output.
Have a look at Unet
https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
Note: the original unet implementation use a lot of tricks.
You can simply downsample and then upsample symmetrically to do the work.
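A minimal sketch of such a resolution-preserving net (the layer widths are illustrative): 3x3 convolutions with padding=1 keep HxW, and a final 1-channel convolution removes the channel dimension, with no FC layer restricting the input size:
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),   # H x W preserved
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),                         # collapse to 1 channel
)

x = torch.randn(1, 3, 480, 640)    # works for any H x W
print(net(x).shape)                # torch.Size([1, 1, 480, 640])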
| https://stackoverflow.com/questions/59425733/ |
Matrix Multiplication in 3,4 axes pytorch | I've two tensors of shape a(16,8,8,64) and b(64,64). Suppose, I extract last dimension of ainto another column vector c, I want to compute matmul(matmul(c.T, b), c). I want this to be done in each of the first 3 dimensions of a. That is the final product should be of shape (16,8,8,1). How can I achieve this in pytorch?
| Can be done as follows:
row_vec = a[:, :, :, None, :].float()          # (16, 8, 8, 1, 64)
col_vec = a[:, :, :, :, None].float()          # (16, 8, 8, 64, 1)
b = (b[None, None, None, :, :]).float()        # (1, 1, 1, 64, 64), broadcast over the first 3 dims
prod = torch.matmul(torch.matmul(row_vec, b), col_vec)  # (16, 8, 8, 1, 1)
prod = prod.squeeze(-1)                        # (16, 8, 8, 1), the requested shape
| https://stackoverflow.com/questions/59427249/ |
Get probability of multi-token word in MASK position | It is relatively easy to get a token's probability according to a language model, as the snippet below shows. You can get the output of a model, restrict yourself to the output of the masked token, and then find the probability of your requested token in the output vector. However, this only works with single-token words, e.g. words that are themselves in the tokenizer's vocabulary. When a word does not exist in the vocabulary, the tokenizer will chunk it up into pieces that it does know (see the bottom of the example). But since the input sentence consists of only one masked position, and the requested token has more tokens than that, how can we get its probability? Ultimately I am looking for a solution that works regardless of the number of subword units a word has.
In the code below I have added many comments explaining what is going on, as well as printing out the given output of print statements. You'll see that predicting tokens such as 'love' and 'hate' is straightforward because they are in the tokenizer's vocabulary. 'reprimand' is not, though, so it cannot be predicted in a single masked position - it consists of three subword units. So how can we predict 'reprimand' in the masked position?
from transformers import BertTokenizer, BertForMaskedLM
import torch
# init model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()
# init softmax to get probabilities later on
sm = torch.nn.Softmax(dim=0)
torch.set_grad_enabled(False)
# set sentence with MASK token, convert to token_ids
sentence = f"I {tokenizer.mask_token} you"
token_ids = tokenizer.encode(sentence, return_tensors='pt')
print(token_ids)
# tensor([[ 101, 1045, 103, 2017, 102]])
# get the position of the masked token
masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero().item()
# forward
output = model(token_ids)
last_hidden_state = output[0].squeeze(0)
# only get output for masked token
# output is the size of the vocabulary
mask_hidden_state = last_hidden_state[masked_position]
# convert to probabilities (softmax)
# giving a probability for each item in the vocabulary
probs = sm(mask_hidden_state)
# get probability of token 'hate'
hate_id = tokenizer.convert_tokens_to_ids('hate')
print('hate probability', probs[hate_id].item())
# hate probability 0.008057191967964172
# get probability of token 'love'
love_id = tokenizer.convert_tokens_to_ids('love')
print('love probability', probs[love_id].item())
# love probability 0.6704086065292358
# get probability of token 'reprimand' (?)
reprimand_id = tokenizer.convert_tokens_to_ids('reprimand')
# reprimand is not in the vocabulary, so it needs to be split into subword units
print(tokenizer.convert_ids_to_tokens(reprimand_id))
# [UNK]
reprimand_id = tokenizer.encode('reprimand', add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(reprimand_id))
# ['rep', '##rim', '##and']
# but how do we now get the probability of a multi-token word in a single-token position?
| Since the split word is not present in the vocabulary, BERT is simply unaware of its probability, so there is no use in masking it before tokenization.
And you can't get its probability by exploiting the chain rule; see the response by J. Devlin. To illustrate, let's take a more generic example and try to estimate the probability of some bigram at position i. While you can estimate the probability of each word given the sentence and its position,
P(w_i | w_0, w_1, ..., w_i-1, w_i+1, ..., w_N),
P(w_i+1 | w_0, w_1, ..., w_i, w_i+2, ..., w_N),
there is no way to get the probability of the bigram
P(w_i, w_i+1 | w_0, w_1, ..., w_i-1, w_i+2, ..., w_N)
because BERT does not store such information.
Having said all that, you can get a very rough estimate of the probability of your OOV word by multiplying the probabilities of seeing its parts. So you will get
P("reprimand"|...) ~= P("rep"|...)*P("##rim"|...)*P("##and"|...)
Since your subwords are not regular words but a special kind of token, this is not entirely wrong: the dependency between them is implicit.
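A rough sketch of that estimate, reusing the model, tokenizer and sm from the question; note that using one [MASK] per subword is itself an approximation of the joint probability:
subword_ids = tokenizer.encode('reprimand', add_special_tokens=False)
# one [MASK] per subword unit: "I [MASK] [MASK] [MASK] you"
sentence = f"I {' '.join([tokenizer.mask_token] * len(subword_ids))} you"
token_ids = tokenizer.encode(sentence, return_tensors='pt')
masked_positions = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero().squeeze(1)
last_hidden_state = model(token_ids)[0].squeeze(0)
prob = 1.0
for position, subword_id in zip(masked_positions.tolist(), subword_ids):
    prob *= sm(last_hidden_state[position])[subword_id].item()
print('rough reprimand probability', prob)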
| https://stackoverflow.com/questions/59435020/ |
Calculate mean across one specific dimension of a 4D tensor in Pytorch | I have a PyTorch video feature tensor of shape [66,7,7,1024] and I need to convert it to [1024,66,7,7]. How do I rearrange a tensor's shape? Also, how do I take the mean across dimension=1? I.e., after taking the mean over the dimension of size 66, I need the tensor to be [1024,1,7,7].
I have tried to calculate the mean of dimension=1 but I failed to replace it with the mean value. And I could not imagine a 4D tensor in which one dimension is replaced by its mean.
Edit:
I tried torch.mean(my_tensor, dim=1). But this returns me a tensor of shape [1024,7,7]. The 4D tensor is being converted to 3D. But I want it to remain 4D with shape [1024,1,7,7].
Thank you very much.
| The first part of the question has been answered in the comments section: use tensor.permute(3, 0, 1, 2) to convert the tensor to the shape [1024,66,7,7] (torch.transpose only swaps two dimensions, so it cannot do this reordering in one call).
Now the mean over the temporal dimension can be taken by
torch.mean(my_tensor, dim=1)
This will give a 3D tensor of shape [1024,7,7].
To obtain a tensor of shape [1024,1,7,7], I had to unsqueeze in dimension=1:
tensor = tensor.unsqueeze(1)
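Alternatively, keepdim=True does the reduction and keeps the singleton dimension in one call:
torch.mean(my_tensor, dim=1, keepdim=True)  # shape [1024, 1, 7, 7]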
| https://stackoverflow.com/questions/59435653/ |
converting 1 x 1 x 33 tensor to 1 x 1 x 34 while maintaining values and leaving the 34th index 0 | I have a 1 by 1 by 33 tensor of 0s and a single 1, and I want all the indexes that are 1 to stay 1, but in a 1 by 1 by 34 tensor. What's the standard practice for this? Apparently using "reshape is bad" for this.
| You can use pad to pad either the beginning or end of a tensor with zeros.
For example
import torch.nn.functional as F
x = F.pad(x, (0, 1))
will pad the end of the last dimension of a 3d tensor x with 1 zero.
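For instance, with a hypothetical x matching the question's shape:
import torch
import torch.nn.functional as F

x = torch.zeros(1, 1, 33)
x[0, 0, 7] = 1
y = F.pad(x, (0, 1))  # y.shape == (1, 1, 34)
# the existing 1 keeps its index 7 and the new index 33 holds 0
pad takes (before, after) pairs starting from the last dimension, so F.pad(x, (1, 0)) would instead prepend the zero and shift the 1 to index 8.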
| https://stackoverflow.com/questions/59440800/ |
Python multiprocessing.pool's interaction with a class objective function and neuro-evolution | Warning, this is gonna be long since I want to be as specific as I can be.
Exact problem: This is a multi-processing problem. I have ensured that my classes all behave as built/expected in previous experiments.
edit: said threading beforehand.
When I run a toy example of my problem in a threaded environment, everything behaves; however, when I transition into my real problem, the code breaks. Specifically, I get a TypeError: can't pickle _thread.lock objects error. The full stack trace is at the bottom.
My threading needs here are a bit different from the example I adapted my code from -- https://github.com/CMA-ES/pycma/issues/31. In that example, we have one fitness function that can be independently called by each evaluation, and none of the function calls interact with each other. However, in my real problem we are trying to optimize neural network weights using a genetic algorithm. The GA suggests potential weights, and we need to evaluate these NN controller weights in our environment. In the single-threaded case, we can have just one environment where we evaluate the weights with a simple for-loop: [nn.evaluate(weights) for weights in potential_candidates], find the best-performing individual, and use those weights in the next mutation round. However, we cannot simply have one simulation in a threaded environment.
So, instead of passing in a single function to evaluate, I am passing in a list of functions (one for each individual, where the environment is the same, but we have forked the processes so that the communication streams don't interact between individuals).
One further thing of immediate note:
I am using a built-for-parallel evaluation data structure from neat
from neat.parallel import ParallelEvaluator # uses multiprocessing.Pool
Toy example code:
NPARAMS = nn.flat_init_weights.shape[0] # make this a 1000-dimensional problem.
NPOPULATION = 5 # use population size of 5.
MAX_ITERATION = 100 # run each solver for 100 function calls.
import time
from neat.parallel import ParallelEvaluator # uses multiprocessing.Pool
import cma
def fitness(x):
time.sleep(0.1)
return sum(x**2)
# # serial evaluation of all solutions
# def serial_evals(X, f=fitness, args=()):
# return [f(x, *args) for x in X]
# parallel evaluation of all solutions
def _evaluate2(self, weights, *args):
"""redefine evaluate without the dependencies on neat-internal data structures
"""
jobs = []
for i, w in enumerate(weights):
jobs.append(self.pool.apply_async(self.eval_function[i], (w, ) + args))
return [job.get() for job in jobs]
ParallelEvaluator.evaluate2 = _evaluate2
parallel_eval = ParallelEvaluator(12, [fitness]*NPOPULATION)
# time both
for eval_all in [parallel_eval.evaluate2]:
es = cma.CMAEvolutionStrategy(NPARAMS * [1], 1, {'maxiter': MAX_ITERATION,
'popsize': NPOPULATION})
es.disp_annotation()
while not es.stop():
X = es.ask()
es.tell(X, eval_all(X))
es.disp()
Necessary background:
When I switch from the toy example to my real code, the above fails.
My classes are:
LevelGenerator (simple GA class that implements mutate, etc)
GridGame (OpenAI wrapper; launches a Java server in which to run the simulation;
handles all communication between the Agent and the environment)
Agent (neural-network class, has an evaluate fn which uses the NN to play a single rollout)
Objective (handles serializing/de-serializing weights: numpy <--> torch; launching the evaluate function)
# The classes get composed to get the necessary behavior:
env = GridGame(Generator)
agent = NNAgent(env) # NNAgent is a subclass of (Random) Agent)
obj = PyTorchObjective(agent)
# My code normally all interacts like this in the single-threaded case:
def test_solver(solver): # Solver: CMA-ES, Differential Evolution, EvolutionStrategy, etc
history = []
for j in range(MAX_ITERATION):
solutions = solver.ask() #2d-numpy array. (POPSIZE x NPARAMS)
fitness_list = np.zeros(solver.popsize)
for i in range(solver.popsize):
fitness_list[i] = obj.function(solutions[i], len(solutions[i]))
solver.tell(fitness_list)
result = solver.result() # first element is the best solution, second element is the best fitness
history.append(result[1])
scores[j] = fitness_list
return history, result
So, when I attempt to run:
NPARAMS = nn.flat_init_weights.shape[0]
NPOPULATION = 5
MAX_ITERATION = 100
_x = NNAgent(GridGame(Generator))
gyms = [_x.mutate(0.0) for _ in range(NPOPULATION)]
objs = [PyTorchObjective(a) for a in gyms]
def evaluate(objective, weights):
return objective.fun(weights, len(weights))
import time
from neat.parallel import ParallelEvaluator # uses multiprocessing.Pool
import cma
def fitness(agent):
    return agent.evaluate()
# # serial evaluation of all solutions
# def serial_evals(X, f=fitness, args=()):
# return [f(x, *args) for x in X]
# parallel evaluation of all solutions
def _evaluate2(self, X, *args):
"""redefine evaluate without the dependencies on neat-internal data structures
"""
jobs = []
for i, x in enumerate(X):
jobs.append(self.pool.apply_async(self.eval_function[i], (x, ) + args))
return [job.get() for job in jobs]
ParallelEvaluator.evaluate2 = _evaluate2
parallel_eval = ParallelEvaluator(12, [obj.fun for obj in objs])
# obj.fun takes in the candidate weights, loads them into the NN, and then evaluates the NN in the environment.
# time both
for eval_all in [parallel_eval.evaluate2]:
es = cma.CMAEvolutionStrategy(NPARAMS * [1], 1, {'maxiter': MAX_ITERATION,
'popsize': NPOPULATION})
es.disp_annotation()
while not es.stop():
X = es.ask()
es.tell(X, eval_all(X, NPARAMS))
es.disp()
I get the following error:
TypeError Traceback (most recent call last)
<ipython-input-57-3e6b7bf6f83a> in <module>
6 while not es.stop():
7 X = es.ask()
----> 8 es.tell(X, eval_all(X, NPARAMS))
9 es.disp()
<ipython-input-55-2182743d6306> in _evaluate2(self, X, *args)
14 jobs.append(self.pool.apply_async(self.eval_function[i], (x, ) + args))
15
---> 16 return [job.get() for job in jobs]
<ipython-input-55-2182743d6306> in <listcomp>(.0)
14 jobs.append(self.pool.apply_async(self.eval_function[i], (x, ) + args))
15
---> 16 return [job.get() for job in jobs]
~/miniconda3/envs/thesis/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
~/miniconda3/envs/thesis/lib/python3.7/multiprocessing/pool.py in _handle_tasks(taskqueue, put, outqueue, pool, cache)
429 break
430 try:
--> 431 put(task)
432 except Exception as e:
433 job, idx = task[:2]
~/miniconda3/envs/thesis/lib/python3.7/multiprocessing/connection.py in send(self, obj)
204 self._check_closed()
205 self._check_writable()
--> 206 self._send_bytes(_ForkingPickler.dumps(obj))
207
208 def recv_bytes(self, maxlength=None):
~/miniconda3/envs/thesis/lib/python3.7/multiprocessing/reduction.py in dumps(cls, obj, protocol)
49 def dumps(cls, obj, protocol=None):
50 buf = io.BytesIO()
---> 51 cls(buf, protocol).dump(obj)
52 return buf.getbuffer()
53
TypeError: can't pickle _thread.lock objects
I also read here that this might be caused by the fact that this is a class function -- TypeError: can't pickle _thread.lock objects -- so I created the globally scoped fitness function def fitness(agent): return agent.evaluate(), but that didn't work either.
I thought this error might be coming from the fact that originally, I had the evaluate function in the PyTorchObjective class as a lambda function, but when I changed that it still broke.
Any insight would be greatly appreciated, and thanks for reading this giant wall of text.
| You are not using multiple threads. You are using multiple processes.
All arguments that you pass to apply_async, including the function itself, are serialized (pickled) under the hood and passed to a worker process via an IPC channel (read the multiprocessing documentation for details). So you cannot pass anything that is tied to resources which are by nature process-local. This includes most synchronization primitives, since they have to use locks to do atomic operations.
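A two-line demonstration of that limitation:
import pickle, threading
pickle.dumps(threading.Lock())  # TypeError: can't pickle _thread.lock objects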
Whenever this happens (as many other questions on this error message show), you are likely trying to be too smart and passing to a parallelization framework an object that already has parallelization logic built in.
If you want to create "multiple levels of parallelization" with such a "parallelized object", you'll be better off either:
using the parallelization mechanism of that object proper and not bothering with multiple levels: you can't do more work at a time than you have cores anyway; or
create and use these "parallelized objects" inside worker processes
but you are likely to hit multiprocessing limitations here since its worker processes are deliberately prohibited from spawning their own pools.
You can let workers add extra items to the work queue but may hit Queue limitations as well.
so for such a scenario, a more advanced 3rd-party distributed work queue solution may be preferable.
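A minimal sketch of the second option, assuming your GridGame/NNAgent/PyTorchObjective stack can be constructed inside a worker process (names taken from the question; untested against your environment):
import multiprocessing as mp

_obj = None  # one objective per worker process

def _init_worker():
    # build the unpicklable machinery inside each worker,
    # so it never has to cross the IPC boundary
    global _obj
    _obj = PyTorchObjective(NNAgent(GridGame(Generator)))

def _eval_weights(weights):
    # only the picklable weight vector is sent to the worker
    return _obj.fun(weights, len(weights))

with mp.Pool(12, initializer=_init_worker) as pool:
    fitness_list = pool.map(_eval_weights, list(X))  # X from es.ask()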
| https://stackoverflow.com/questions/59441355/ |
is loaded model does not need eval? | trying to run the test after I load the model
net = net.load_state_dict(torch.load(PATH))
net.eval()
but this spits the error
net.eval() AttributeError: '_IncompatibleKeys' object has no attribute 'eval'
| Here you don't need to assign the result of net.load_state_dict back to net; it returns an _IncompatibleKeys report, not the model.
net = net.load_state_dict(torch.load(PATH))
just use
net.load_state_dict(torch.load(PATH))
net.eval()
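If you do want that return value, it is only a report of mismatched keys, which can be handy as a sanity check:
incompatible = net.load_state_dict(torch.load(PATH))
print(incompatible.missing_keys, incompatible.unexpected_keys)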
For more, see Recommended approach for saving a model
| https://stackoverflow.com/questions/59442655/ |
Interpolate one range of numbers to another in a tensor in PyTorch | I am trying to transform a set of numbers, say {0, 4, 6}, to a consecutive one, {0, 1, 2}.
Currently I am using Scipy's interp1d function. My problem is that I want to change each element of a tensor. Is there a better way than broadcasting the interp1d function to each element in a tensor? If not, how can I achieve this broadcasting of a custom function?
Currently I am doing the following:
>>> a = torch.tensor([[0, 6, 4],
[4, 0, 4]])
>>> unique_values = np.unique(a)
>>> interpolator = interp1d(unique_values, range(len(unique_values)))
>>> interpolator(a[0, 1])
>>> 2
| If you just want to replace, why do you want to interpolate?
a[a==6]=2
a[a==4]=1
Should work. If you don't want to hardcode the values, you can find the unique values like you did, sort them, and then replace them in a loop, as sketched below.
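A minimal sketch of that generic version, reusing the tensor a from the question (torch.unique returns sorted values by default; writing into a copy avoids clobbering when old and new values overlap):
out = a.clone()
for new_value, old_value in enumerate(torch.unique(a)):
    out[a == old_value] = new_value
# out is tensor([[0, 2, 1],
#                [1, 0, 1]])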
| https://stackoverflow.com/questions/59449453/ |