instruction | input | output | source |
---|---|---|---|
resnet50.to() function on a non-NVIDIA GPU | I am trying to convert the pretrained ResNet50 model to be used on GPU using the Pytorch function resnet50.to().
The problem is that I am using an Intel Iris Plus Graphics 655 1536 MB GPU on Mac, and I don't know what argument to pass to the function as I only found the one for NVIDIA GPUs (resnet50.to('cuda:0')).
| PyTorch uses Nvidia's CUDA API for all GPU interactions. Other GPUs that don't utilise the CUDA API (such as AMD or Intel GPUs) are therefore not supported.
If you don't have an Nvidia GPU, you cannot run PyTorch on the GPU.
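For completeness, here is a minimal sketch (not part of the original answer) of the usual fallback pattern, where the model simply stays on the CPU when CUDA is unavailable:
import torch
from torchvision import models
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
resnet50 = models.resnet50(pretrained=True).to(device)  # falls back to the CPU on non-NVIDIA hardware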
| https://stackoverflow.com/questions/62262241/ |
Replacing nn.Upsample with alternative upsample operation | I have a UNet++ (view in private; code for the model is at the bottom of the article) which I'm trying to reconfigure. I'm getting some artifacts in some images, so I'm following this article, which suggests doing upsampling then a convolution operation.
I'm replacing the up-sample layers with the sequential operation shown below, but my model isn't learning. I suspect it's to do with how I've configured the channels, so I'd like another opinion.
Old up-sample operation:
self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
New operations:
class upConv(nn.Module):
"""
Up sampling/ deconv block by factor of 2
"""
def __init__(self, in_ch, out_ch):
super().__init__()
self.upc = nn.Sequential(
nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
nn.Conv2d(in_ch, out_ch*2, 3, stride=1, padding=1),
nn.BatchNorm2d(out_ch*2),
nn.ReLU(inplace=True)
)
def forward(self, x):
out = self.upc(x)
return out
My question is do these two operations have the same output/function within my model?
|
Do these two operations have the same output/function within my model?
If out_ch*2 == in_ch, then: yes, they have the same output shape.
If the input x is the output of a BatchNorm+ReLU op, then they could be even more similar.
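A quick added shape check (the sizes are assumptions for illustration): with out_ch*2 == in_ch, the replacement block produces the same output shape as the plain nn.Upsample:
import torch
import torch.nn as nn
x = torch.randn(1, 64, 16, 16)  # in_ch = 64
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
new_up = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True),
    nn.Conv2d(64, 64, 3, stride=1, padding=1),  # out_ch*2 == in_ch == 64
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
print(up(x).shape, new_up(x).shape)  # both torch.Size([1, 64, 32, 32])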
| https://stackoverflow.com/questions/62262473/ |
Measuring F1 score for multiclass classification natively in PyTorch | I am trying to implement the macro F1 score (F-measure) natively in PyTorch instead of using the already-widely-used sklearn.metrics.f1_score in order to calculate the measure directly on the GPU.
From what I understand, in order to compute the macro F1 score, I need to compute the F1 score with the sensitivity and precision for all labels, then take the average of all these.
My attempt
My current implementation looks like this:
def confusion_matrix(y_pred: torch.Tensor, y_true: torch.Tensor, n_classes: int):
conf_matrix = torch.zeros([n_classes, n_classes], dtype=torch.int)
y_pred = torch.argmax(y_pred, 1)
for t, p in zip(y_true.view(-1), y_pred.view(-1)):
conf_matrix[t.long(), p.long()] += 1
return conf_matrix
def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
conf_matrix = confusion_matrix(y_pred, y_true, self.classes)
TP = conf_matrix.diag()
f1_scores = torch.zeros(self.classes, dtype=torch.float)
for c in range(self.classes):
idx = torch.ones(self.classes, dtype=torch.long)
idx[c] = 0
FP = conf_matrix[c, idx].sum()
FN = conf_matrix[idx, c].sum()
sensitivity = TP[c] / (TP[c] + FN + self.epsilon)
precision = TP[c] / (TP[c] + FP + self.epsilon)
f1_scores[c] += 2.0 * ((precision * sensitivity) / (precision + sensitivity + self.epsilon))
return f1_scores.mean()
self.classes is the number of labels and self.epsilon is a very small value set to 1e-12, which prevents a division-by-zero error.
When training, I compute the measure for every batch and take the average of all measures as the final score.
Problem
The problem is that when I compare my custom F1 score with sklearn's macro F1 score, they are rarely equal.
# example 1
eval_cce 0.5203, eval_f1 0.8068, eval_acc 81.5455, eval_f1_sci 0.8023,
test_cce 0.4784, test_f1 0.7975, test_acc 82.6732, test_f1_sci 0.8097
# example 2
eval_cce 0.3304, eval_f1 0.8211, eval_acc 87.4955, eval_f1_sci 0.8626,
test_cce 0.3734, test_f1 0.8183, test_acc 85.4996, test_f1_sci 0.8424
# example 3
eval_cce 0.4792, eval_f1 0.7982, eval_acc 81.8482, eval_f1_sci 0.8001,
test_cce 0.4722, test_f1 0.7905, test_acc 82.6533, test_f1_sci 0.8139
While I have tried to scan the internet, most of the cases I found cover binary classification. I have not yet been able to find an example that attempts to do what I am trying to do.
My Question
Is there any obvious issue with my attempt?
Update (10.06.2020)
I have yet to figure out my mistake. Due to time constraints, I decided to just use the macro F1 score provided by sklearn. While it cannot work directly with GPU tensors, it is fast enough for my case anyway.
However, it would be awesome if anybody can figure this out, so that anybody else that might stumble upon this issue can get their problem resolved.
| I wrote my own implementation in Pytorch some time ago:
from typing import Tuple
import torch
class F1Score:
"""
Class for f1 calculation in Pytorch.
"""
def __init__(self, average: str = 'weighted'):
"""
Init.
Args:
average: averaging method
"""
self.average = average
if average not in [None, 'micro', 'macro', 'weighted']:
raise ValueError('Wrong value of average parameter')
@staticmethod
def calc_f1_micro(predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
"""
Calculate f1 micro.
Args:
predictions: tensor with predictions
labels: tensor with original labels
Returns:
f1 score
"""
true_positive = torch.eq(labels, predictions).sum().float()
f1_score = torch.div(true_positive, len(labels))
return f1_score
@staticmethod
def calc_f1_count_for_label(predictions: torch.Tensor,
labels: torch.Tensor, label_id: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Calculate f1 and true count for the label
Args:
predictions: tensor with predictions
labels: tensor with original labels
label_id: id of current label
Returns:
f1 score and true count for label
"""
# label count
true_count = torch.eq(labels, label_id).sum()
# true positives: labels equal to prediction and to label_id
true_positive = torch.logical_and(torch.eq(labels, predictions),
torch.eq(labels, label_id)).sum().float()
# precision for label
precision = torch.div(true_positive, torch.eq(predictions, label_id).sum().float())
# replace nan values with 0
precision = torch.where(torch.isnan(precision),
torch.zeros_like(precision).type_as(true_positive),
precision)
# recall for label
recall = torch.div(true_positive, true_count)
# f1
f1 = 2 * precision * recall / (precision + recall)
# replace nan values with 0
f1 = torch.where(torch.isnan(f1), torch.zeros_like(f1).type_as(true_positive), f1)
return f1, true_count
def __call__(self, predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
"""
Calculate f1 score based on averaging method defined in init.
Args:
predictions: tensor with predictions
labels: tensor with original labels
Returns:
f1 score
"""
# simpler calculation for micro
if self.average == 'micro':
return self.calc_f1_micro(predictions, labels)
f1_score = 0
for label_id in range(1, len(labels.unique()) + 1):
f1, true_count = self.calc_f1_count_for_label(predictions, labels, label_id)
if self.average == 'weighted':
f1_score += f1 * true_count
elif self.average == 'macro':
f1_score += f1
if self.average == 'weighted':
f1_score = torch.div(f1_score, len(labels))
elif self.average == 'macro':
f1_score = torch.div(f1_score, len(labels.unique()))
return f1_score
You can test it in the following way:
from sklearn.metrics import f1_score
import numpy as np
errors = 0
for _ in range(10):
labels = torch.randint(1, 10, (4096, 100)).flatten()
predictions = torch.randint(1, 10, (4096, 100)).flatten()
labels1 = labels.numpy()
predictions1 = predictions.numpy()
for av in ['micro', 'macro', 'weighted']:
f1_metric = F1Score(av)
my_pred = f1_metric(predictions, labels)
f1_pred = f1_score(labels1, predictions1, average=av)
if not np.isclose(my_pred.item(), f1_pred.item()):
print('!' * 50)
print(f1_pred, my_pred, av)
errors += 1
if errors == 0:
print('No errors!')
| https://stackoverflow.com/questions/62265351/ |
Am I understanding PyTorch's add_ and mul_ correctly? | In this notebook the author writes the following nesterov update:
def nesterov_update(w, dw, v, lr, weight_decay, momentum):
dw.add_(weight_decay, w).mul_(-lr)
v.mul_(momentum).add_(dw)
w.add_(dw.add_(momentum, v))
As I understand it, a.add(b) in PyTorch implements a+b and a.add(b,c) implements a+(b*c), because b is in the slot of the alpha parameter. And lastly, add_ does the in-place version of add.
Q: Am I right so far?
Then, if I were to sketch the above nesterov update in an expanded form that illustrates the logic, I would write:
dw = -lr*(dw + weight_decay*w)
v = v*momentum + dw
w = w + dw + momentum*v
Q: is this correct?
I'm not planning to use the above expanded "code," I'm just writing it this way to try communicate what I'm understanding that it's doing, to check.
| It is important to note the PyTorch version (1.1.0) the tutorial is using. According to 1.1.0, function prototype for torch.add is torch.add(input, value=1, other, out=None). So, your interpretation of the following line:
dw.add_(weight_decay, w)
as: dw = dw + weight_decay * w is correct. So, the answer to your first question is, yes, you are right.
However, with the latest versions of PyTorch, you would get an error if torch.add is used in the same fashion.
a = torch.FloatTensor([0, 1.0, 2.0, 3.0])
b = torch.FloatTensor([0, 4.0, 5.0, 6.0])
c = 1.0
z = a.add(b, c)
The above code gives: (In PyTorch 1.5.0)
TypeError: add() takes 1 positional argument but 2 were given
However, if you perform the following, then it works fine.
z = a.add(b, alpha=c)
Note that, the prototype of torch.add is now: torch.add(input, other, *, alpha=1, out=None)
The answer to your second question is, yes, you are right.
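As a small added check (not from the original answer), the in-place calls can be compared against the expanded first line of your sketch, using the newer alpha= keyword:
import torch
w = torch.tensor([1.0, 2.0])
dw = torch.tensor([0.5, 0.5])
lr, weight_decay = 0.1, 0.01
expected = -lr * (dw + weight_decay * w)
dw.add_(w, alpha=weight_decay).mul_(-lr)  # in-place: dw = -lr * (dw + weight_decay * w)
print(torch.allclose(dw, expected))  # True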
| https://stackoverflow.com/questions/62269014/ |
In PyTorch, what's the difference between training an RNN to predict the last word given a sequence, vs predicting the entire sequence shifted? | Let's say I'm trying to train an RNN language model in PyTorch. Suppose I iterate over batches of word sequences, and that each training batch tensor has the following shape:
data.shape = [batch_size, sequence_length, vocab_dim]
My question is, what's the difference between using only the last word in each sequence as the target label:
X = data[:,:-1]
y = data[:,-1]
and training to minimize loss using a softmax prediction of the last word,
vs setting the target to be the entire sequence shifted right:
X = data[:,:-1]
y = data[:,1:]
and training to minimize the sum of losses of each predicted word in the shifted sequence?
What's the correct approach here? I feel like I've seen both examples online. Does this also have to do with loop unrolling vs BPTT?
| Consider the sequence prediction problem a b c d
where you want to train an RNN via teacher forcing.
If you only use the last word in the sentence, you are doing the following classification problem (on the left is the input; on the right is the output you're supposed to predict):
a b c -> d
For your second approach, where y is set to be the entire sequence shifted right, you are doing three classification problems:
a -> b
a b -> c
a b c -> d
The task of predicting the intermediate words in a sequence is crucial for training a useful RNN (otherwise, you would know how to continue after a b c, but you wouldn't know how to proceed after just a).
An equivalent thing to do would be to define your training data as both the complete sequence a b c d and all incomplete sequences (a b, a b c). Then if you were to do just the "last word" prediction as mentioned previously, you would end up with the same supervision as the formulation where y is the entire sequence shifted right. But this is computationally wasteful - you don't want to rerun the RNN on both a b and a b c (the state you get from a b can be reused to obtain the state after consuming a b c).
In other words, the point of doing the "shift y right" is to split a single sequence (a b c d) of length N into N - 1 independent classification problems of the form: "given words up to time t, predict word t + 1", while needing just one RNN forward pass.
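Below is a rough sketch (the shapes and the loss call are my assumptions, not from the original answer) of how the per-timestep loss is computed in the "shift y right" setup:
import torch
import torch.nn as nn
batch_size, seq_len, vocab_size = 8, 20, 100
logits = torch.randn(batch_size, seq_len - 1, vocab_size)          # model output for X = data[:, :-1]
targets = torch.randint(0, vocab_size, (batch_size, seq_len - 1))  # y = data[:, 1:] as token indices
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), targets.reshape(-1))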
| https://stackoverflow.com/questions/62270525/ |
Pytorch dataloader from csv of file paths and labels | I have a csv file for train and test datasets that contains the file location and the label. The head of this data frame is:
df.head()
Out[46]:
file_path label
0 \\images\\29771.png 0
1 \\images\\55201.png 0
2 \\images\\00715.png 1
3 \\images\\33214.png 0
4 \\images\\99841.png 1
I have multiple locations for the file paths, and limited space, so I can't copy them into \0 and \1 folder locations. How can I use this data frame to create a pytorch dataloader and/or dataset object?
| Just write a custom __getitem__ method for your dataset.
class MyData(Dataset):
def __init__(self, df):
self.df = df
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index):
image = load_image(self.df.file_path[index])
label = self.df.label[index]
return image, label
Where load_image is a function that reads the filename into whatever format you need.
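For completeness, a minimal usage sketch (the batch size here is an arbitrary assumption): wrap the dataset in a DataLoader to iterate over batches:
from torch.utils.data import DataLoader
dataset = MyData(df)  # df is the DataFrame from the question
loader = DataLoader(dataset, batch_size=32, shuffle=True)
for images, labels in loader:
    pass  # training step goes here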
| https://stackoverflow.com/questions/62271194/ |
I modified a few layers to an example of a neural network just to see if I could. What's wrong with it? | A simple neural network I found had the layers w1, Relu, and w2. I tried to add a new weight layer in the middle and a second Relu after it. So, the layers are as follows: w1, Relu, w_mid, Relu, and w2.
It is much, much slower than the original 3-layer network, if it works at all. I'm not sure if everything is getting a forward pass and if backprop is working across every part it is supposed to.
The neural network is from this link. It is the third block of code down the page.
This is the code I changed.
Below it is the original.
import torch
dtype = torch.float
device = torch.device("cpu")
#device = torch.device("cuda:0") # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 250, 250, 10
# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w_mid = torch.randn(H, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
learning_rate = 1e-5
for t in range(5000):
# Forward pass: compute predicted y
h = x.mm(w1)
h_relu = h.clamp(min=0)
k = h_relu.mm(w_mid)
k_relu = k.clamp(min=0)
y_pred = k_relu.mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum().item()
if t % 1000 == 0:
print(t, loss)
# Backprop to compute gradients of w1, mid, and w2 with respect to loss
grad_y_pred = (y_pred - y) * 2
grad_w2 = k_relu.t().mm(grad_y_pred)
grad_k_relu = grad_y_pred.mm(w2.t())
grad_k = grad_k_relu.clone()
grad_k[k < 0] = 0
grad_mid = h_relu.t().mm(grad_k)
grad_h_relu = grad_k.mm(w1.t())
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0
grad_w1 = x.t().mm(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w_mid -= learning_rate * grad_mid
w2 -= learning_rate * grad_w2
The loss is:
0 1904074240.0
1000 639.4848022460938
2000 639.4848022460938
3000 639.4848022460938
4000 639.4848022460938
This is the original code from the Pytorch website.
import torch
dtype = torch.float
#device = torch.device("cpu")
device = torch.device("cuda:0") # Uncomment this to run on GPU
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(500):
# Forward pass: compute predicted y
h = x.mm(w1)
h_relu = h.clamp(min=0)
y_pred = h_relu.mm(w2)
# Compute and print loss
loss = (y_pred - y).pow(2).sum().item()
if t % 100 == 99:
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.t().mm(grad_y_pred)
grad_h_relu = grad_y_pred.mm(w2.t())
grad_h = grad_h_relu.clone()
grad_h[h < 0] = 0
grad_w1 = x.t().mm(grad_h)
# Update weights using gradient descent
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
| The calculation of the gradient of h_relu is not correct.
grad_h_relu = grad_k.mm(w1.t())
That should be a w_mid not w1:
grad_h_relu = grad_k.mm(w_mid.t())
Other than that, the calculations are correct, but you should lower the learning rate, as the gradients are very big at the beginning, making the weights very large and that leads to overflowing values (infinity), which in turn produce NaN losses and gradients. This is known as exploding gradients.
In your example a learning rate of 1e-8 seems to work.
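As an additional, optional tweak (my addition, not part of the original answer): scaling down the random initialisation also keeps the first gradients small, which avoids the overflow described above:
w1 = torch.randn(D_in, H, device=device, dtype=dtype) * 0.01
w_mid = torch.randn(H, H, device=device, dtype=dtype) * 0.01
w2 = torch.randn(H, D_out, device=device, dtype=dtype) * 0.01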
| https://stackoverflow.com/questions/62272830/ |
Differences in encoder - decoder models between Keras and Pytorch | There seem to be significant, fundamental differences in construction of encoder-decoder models between keras and pytorch. Here is keras' enc-dec blog and here is pytorch's enc-dec blog.
Some differences I noticed are the following:
Keras' model feeds the input directly to the LSTM layer, whereas Pytorch uses an embedding layer for both the encoder and decoder.
Pytorch uses an embedding layer with no activation in the encoder, but uses a relu activation for the embedding layer in the decoder.
Given these observations, my questions are the following:
My understanding is the following, is it correct? The embedding layer is not strictly required but it helps in finding a better and denser representation of the input. It is optional and you can still build a good model without the embedding layer (dependent on the problem). This is why Keras chose not to use it in this particular example. Is this a sound reason or is there more to the story?
Why use an activation for the embedding layer in the decoder but not the encoder?
Why use 'relu' as the activation instead of 'tanh', etc for the embedding layer? What's the intuition here? I've only seen 'relu' applied to data that has spatial relation, not temporal relation.
| You have a wrong understanding of encoder-decoder models. First of all, please note Keras and Pytorch are two deep learning frameworks, while encoder-decoder is a type of neural network architecture. So, you need to understand how encoder-decoder works in the first place and then revise their architecture as per your need. Now, let me come back to your questions.
An embedding layer converts one-hot encoded representations into low-dimensional vector representations. For example, we have a sentence I love programming. We want to translate this sentence into German using an encoder-decoder network. So, the first step is to convert the words in the input sentence into a sequence of vector representations, and this can be done using an embedding layer. Please note, the use of Keras or Pytorch doesn't matter. You can think, how would you give a natural language sentence as input to an LSTM? Obviously, you first need to convert them into vectors.
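A minimal illustration of this point (sizes and indices below are made up):
import torch
import torch.nn as nn
vocab_size, embed_dim = 10000, 128
embedding = nn.Embedding(vocab_size, embed_dim)
tokens = torch.tensor([[4, 17, 93]])  # "I love programming" as (made-up) token indices
vectors = embedding(tokens)           # shape [1, 3, 128], ready to feed to an LSTM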
There is no such rule that you should use an activation layer in the embedding layer for the decoder, but not in the encoder. Remember, activation functions are non-linear functions. So, applying a non-linearity has different consequences but it has nothing to do with the encoder-decoder framework.
Again, the choice of activation function depends on other factors, not on the encoder or decoder or a specific type of neural network architecture. I suggest you read the characteristics of the popular activation functions that are used in neural networks. Also, do not jump to conclusions after observing a few use cases. Such conclusions are dangerous.
| https://stackoverflow.com/questions/62273985/ |
Convert dictionary to tensors, and back | I started with a dictionary object:
{"train": [{"input": [[3, 1, 2], [3, 1, 2], [3, 1, 2]], "output": [[4, 5, 6], [4, 5, 6], [4, 5, 6]]}, {"input": [[2, 3, 8], [2, 3, 8], [2, 3, 8]], "output": [[6, 4, 9], [6, 4, 9], [6, 4, 9]]}]}
I then want to add padding to each input and output object in a loop. So I converted each input and output to a tensor so I could then use F.pad to add padding. Result of the first input:
tensor([[0, 0, 0, 0, 0],
[0, 3, 1, 2, 0],
[0, 3, 1, 2, 0],
[0, 3, 1, 2, 0],
[0, 0, 0, 0, 0]]).
result of the first output:
tensor([[0, 0, 0, 0, 0],
[0, 4, 5, 6, 0],
[0, 4, 5, 6, 0],
[0, 4, 5, 6, 0],
[0, 0, 0, 0, 0]])
So that works fine. Now, I want to reconstruct the generated tensors into the same form as the original dictionary, so that it will look like this:
{"train": [{"input": [[0, 0, 0, 0, 0], [0, 3, 1, 2, 0], [0, 3, 1, 2, 0], [0, 3, 1, 2, 0], [0, 0, 0, 0, 0]], "output": [[0, 0, 0, 0, 0], [0, 4, 5, 6, 0], [0, 4, 5, 6, 0], [0, 4, 5, 6, 0], [0, 0, 0, 0, 0]]}, {"input": [[0, 0, 0, 0, 0],
[0, 2, 3, 8, 0],
[0, 2, 3, 8, 0],
[0, 2, 3, 8, 0],
[0, 0, 0, 0, 0]], "output": [[0, 0, 0, 0, 0],
[0, 6, 4, 9, 0],
[0, 6, 4, 9, 0],
[0, 6, 4, 9, 0],
[0, 0, 0, 0, 0]]}]}
I can see a string concatenation way that might work:
composedString = '{"train": [{"input": ' + tensor1 + tensor2
or something like that. But given that there are different numbers of elements in the various arrays, it seems like a loop nightmare. I'm thinking there's got to be a better way. Anyone know what it is?
Thanks.
| Does the following serve your purpose?
import torch
in_dict = {"train": [{"input": [[3, 1, 2], [3, 1, 2], [3, 1, 2]], "output": [[4, 5, 6], [4, 5, 6], [4, 5, 6]]}, {"input": [[2, 3, 8], [2, 3, 8], [2, 3, 8]], "output": [[6, 4, 9], [6, 4, 9], [6, 4, 9]]}]}
train_examples = []
for item in in_dict['train']:
in_tensor = torch.Tensor(item['input'])
out_tensor = torch.Tensor(item['output'])
train_examples.append([in_tensor, out_tensor])
out_dict = {'train': []}
for item in train_examples:
out_dict['train'].append({
'input': item[0].tolist(),
'output': item[1].tolist()
})
print(out_dict)
| https://stackoverflow.com/questions/62276020/ |
Does deleting intermediate tensors affect the computation graph in PyTorch? | To free up memory, I was wondering if it was possible to remove intermediate tensors in the forward method of my model. Here's a minimalized example scenario:
def forward(self, input):
x1, x2 = input
x1 = some_layers(x1)
x2 = some_layers(x2)
x_conc = torch.cat((x1,x2),dim=1)
x_conc = some_layers(x_conc)
return x_conc
Basically, the model passes two tensors through two separate blocks, and then concatenates the results. Further operations are applied on that concatenated tensor. Will it affect the computation graph if I run del x1 and del x2 after creating x_conc?
| PyTorch will store the x1, x2 tensors in the computation graph if you want to perform automatic differentiation later on. Also, note that deleting tensors using del operator works but you won't see a decrease in the GPU memory. Why? Because the memory is freed but not returned to the device. It is an optimization technique and from the user's perspective, the memory has been "freed". That is, the memory is available for making new tensors now.
Hence, deleting tensors is not recommended to free up GPU memory.
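If you still want to inspect or reclaim the cached memory, here is a small sketch (my addition; it assumes a CUDA device is present):
import torch
print(torch.cuda.memory_allocated())  # bytes held by live tensors
print(torch.cuda.memory_reserved())   # bytes held by the caching allocator
torch.cuda.empty_cache()              # hands unused cached memory back to the device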
| https://stackoverflow.com/questions/62277715/ |
how to use custom python object in torchscript | I am ready to convert a pytorch module to a ScriptModule and then load it in c++, but I am blocked by this error: This attribute exists on the Python module, but we failed to convert Python type: 'Vocab' to a TorchScript type. Vocab is a python object I defined.
the demo code is here:
import torch
class Vocab(object):
def __init__(self, name):
self.name = name
def show(self):
print("dict:" + self.name)
class Model(torch.nn.Module):
def __init__(self, ):
super(Model, self).__init__()
self.layers = torch.nn.Linear(2, 3)
self.encoder = 4
self.vocab = Vocab("vocab")
def forward(self, x):
name = self.vocab.name
print("forward show encoder:" + str(self.encoder))
print("vocab:" + name)
enc_hidden = []
step = len(x) // 2
for i in range(step):
enc_hidden.append((x[2*i] + x[2*i + 1])/2)
enc_hidden = torch.stack(enc_hidden, 0)
enc_hidden = self.__show(enc_hidden)
return self.layers(enc_hidden)
@torch.jit.export
def __show(self, x):
return x + 1
model = Model()
data = torch.randn(10, 2)
script_model = torch.jit.script(model)
print(script_model)
r1 = model(data)
print(r1)
the error msg:
Traceback (most recent call last):
File "/mnt/d/python_projects/pytorch_deploy/model4.py", line 47, in <module>
script_model = torch.jit.script(model)
File "/mnt/d/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py", line 1261, in script
return torch.jit._recursive.create_script_module(obj, torch.jit._recursive.infer_methods_to_compile)
File "/mnt/d/anaconda3/lib/python3.6/site-packages/torch/jit/_recursive.py", line 305, in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
File "/mnt/d/anaconda3/lib/python3.6/site-packages/torch/jit/_recursive.py", line 361, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File "/mnt/d/anaconda3/lib/python3.6/site-packages/torch/jit/_recursive.py", line 279, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
Module 'Model' has no attribute 'vocab' (This attribute exists on the Python module, but we failed to convert Python type: 'Vocab' to a TorchScript type.):
File "/mnt/d/python_projects/pytorch_deploy/model4.py", line 26
def forward(self, x):
name = self.vocab.name
~~~~~~~~~~ <--- HERE
print("forward show encoder:" + str(self.encoder))
print("vocab:" + name)
so how can I use my own python object in torchscript?
| You have to annotate your Vocab with torchscript.jit like this:
@torch.jit.script
class Vocab(object):
def __init__(self, name: str):
self.name = name
def show(self):
print("dict:" + self.name)
Also note the specification name: str, as it's needed for torchscript to infer its type (PyTorch supports >=Python3.6 type annotations; you could use a comment as well, but it's way less clear).
Please see Torchscript classes and Default Types and other related torchscript info over there.
| https://stackoverflow.com/questions/62279080/ |
Pytorch BatchNorm2d RuntimeError: running_mean should contain 64 elements not 0 | I'm using Octave Convolutions and have set up a BatchNorm2d adaptation that for some reasen is giving me
RuntimeError: running_mean should contain 64 elements not 0
I've set up some debugging prints to check what was wrong with my Tensors' dimensions, but was unable to find it.
Here is my class:
class _BatchNorm2d(nn.Module):
def __init__(self, num_features, alpha_in=0, alpha_out=0, eps=1e-5, momentum=0.1, affine=True,
track_running_stats=True):
super(_BatchNorm2d, self).__init__()
hf_ch = int(num_features * (1 - alpha_out))
lf_ch = num_features - hf_ch
self.bnh = nn.BatchNorm2d(hf_ch)
self.bnl = nn.BatchNorm2d(lf_ch)
def forward(self, x):
if isinstance(x, tuple):
hf, lf = x
print("IN ON BN: ",lf.shape if lf is not None else None) #DEBUGGING PRINT
print(self.bnl) #DEBUGGING PRINT
hf = self.bnh(hf) if type(hf) == torch.Tensor else hf
lf = self.bnh(lf) if type(lf) == torch.Tensor else lf #THIS IS THE LINE ACCUSING THE ERROR
print("ENDED BN")
return hf, lf
else:
return self.bnh(x)
Here is the printing error:
IN ON BN: torch.Size([32, 64, 3, 3])
BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
It seems to me the function should have worked, since x has 64 channels and bn expects 64 channels.
EDIT:
It may also be important to mention that the error only happens with an alpha value of 1. However, I do not understand it, since the volumes are still the same.
| Solved. It was a typo on the call for the low frequency BN.
hf = self.bnh(hf) if type(hf) == torch.Tensor else hf
lf = self.bnh(lf) if type(lf) == torch.Tensor else lf
Should have been
hf = self.bnh(hf) if type(hf) == torch.Tensor else hf
lf = self.bnl(lf) if type(lf) == torch.Tensor else lf
| https://stackoverflow.com/questions/62284354/ |
How to use scripting to convert pytorch transformer? | I am trying to compile pytorch transformer to run it in C++:
from torch.nn import TransformerEncoder, TransformerEncoderLayer
encoder_layers = TransformerEncoderLayer(1000, 8, 512, 0.1)
transf = TransformerEncoder(encoder_layers, 6)
sm = torch.jit.script(transf)
But I am getting an error:
RuntimeError: Expected a default value of type Tensor on parameter
"src_mask": File "C:\Program Files (x86)\Microsoft Visual
Studio\Shared\Python36_64\lib\site-packages\torch\nn\modules\transformer.py",
line 271
def forward(self, src, src_mask=None, src_key_padding_mask=None):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~... <--- HERE
r"""Pass the input through the encoder layer.
It looks like something wrong with pytorch transformer module.
Is there any way to run pytorch transformer in C++ ?
| You need to upgrade to PyTorch 1.5.0, older versions did not support converting Transformers to TorchScript (JIT) modules.
pip install torch===1.5.0 -f https://download.pytorch.org/whl/torch_stable.html
In 1.5.0 you will see some warnings about the parameters being declared as constants, such as:
UserWarning: 'q_proj_weight' was found in ScriptModule constants, but it is a non-constant parameter. Consider removing it.
These can be safely ignored.
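Once scripting succeeds, the module can be serialized for use from C++ (the filename below is just an example):
sm = torch.jit.script(transf)
sm.save("transformer_encoder.pt")
# In C++: torch::jit::load("transformer_encoder.pt")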
| https://stackoverflow.com/questions/62286188/ |
Combine Pytorch ImageFolder dataset with custom Pytorch dataset | I am loading data from multiple datasets using Pytorch. I have some images stored in properly labeled folders (e.g., \0 and \1), and in those cases I can use torch.utils.data.ConcatDataset after loading the lists, for example (where trans is a set of pre-defined Pytorch transformations):
l = []
l.append(datasets.ImageFolder(file_path, trans))
l.append(datasets.ImageFolder(file_path2, trans))
image_datasets = torch.utils.data.ConcatDataset(l)
img_datasets = dict()
img_datasets['train'], img_datasets['val'] = torch.utils.data.random_split(image_datasets, (round(0.8*len(image_datasets)), round(0.2*len(image_datasets)) ))
However, I am also loading images from other disparate locations using a csv file. The process here looks like this:
class MyData(Dataset):
def __init__(self, df):
self.df = df
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index):
image = trans(PIL.Image.open(self.df.file_path[index]))
label = self.df.label[index]
return image, label
df = pd.read_csv(image_file_paths, names=["file_path", "label"])
mydata = MyData(df)
my_datasets = dict()
my_datasets['train'], my_datasets['val'] = torch.utils.data.random_split(mydata, (round(0.8*len(mydata)), round(0.2*len(mydata))))
So I'd like to be able to combine these datasets into a single dataloader. Any ideas for how I should go about doing this? Thanks!
| Found the solution; just need to use multiple passes of ConcatDataset:
l = []
l.append(datasets.ImageFolder(file_path, trans))
l.append(datasets.ImageFolder(file_path2, trans))
image_datasets = torch.utils.data.ConcatDataset(l)
df = pd.read_csv(image_file_paths, names=["file_path", "label"])
mydata = MyData(df)
image_datasets = torch.utils.data.ConcatDataset([image_datasets, mydata])
img_datasets = dict()
img_datasets['train'], img_datasets['val'] = torch.utils.data.random_split(image_datasets, (round(0.8*len(image_datasets)), round(0.2*len(image_datasets))))
Good to go from there.
| https://stackoverflow.com/questions/62288855/ |
PyTorch: Loading word vectors into Field vocabulary vs. Embedding layer | I'm coming from Keras to PyTorch. I would like to create a PyTorch Embedding layer (a matrix of size V x D, where V is over vocabulary word indices and D is the embedding vector dimension) with GloVe vectors but am confused by the needed steps.
In Keras, you can load the GloVe vectors by having the Embedding layer constructor take a weights argument:
# Keras code.
embedding_layer = Embedding(..., weights=[embedding_matrix])
When looking at PyTorch and the TorchText library, I see that the embeddings should be loaded twice, once in a Field and then again in an Embedding layer. Here is sample code that I found:
# PyTorch code.
# Create a field for text and build a vocabulary with 'glove.6B.100d'
# pretrained embeddings.
TEXT = data.Field(tokenize = 'spacy', include_lengths = True)
TEXT.build_vocab(train_data, vectors='glove.6B.100d')
# Build an RNN model with an Embedding layer.
class RNN(nn.Module):
def __init__(self, ...):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
...
# Initialize the embedding layer with the Glove embeddings from the
# vocabulary. Why are two steps needed???
model = RNN(...)
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
Specifically:
Why are the GloVe embeddings loaded in a Field in addition to the Embedding?
I thought the Field function build_vocab() just builds its vocabulary from the training data. How are the GloVe embeddings involved here during this step?
Here are other StackOverflow questions that did not answer my questions:
PyTorch / Gensim - How to load pre-trained word embeddings
Embedding in pytorch
PyTorch LSTM - using word embeddings instead of nn.Embedding()
Thanks for any help.
| When torchtext builds the vocabulary, it aligns the token indices with the embedding. If your vocabulary doesn't have the same size and ordering as the pre-trained embeddings, the indices wouldn't be guaranteed to match, therefore you might look up incorrect embeddings. build_vocab() creates the vocabulary for your dataset with the corresponding embeddings and discards the rest of the embeddings, because those are unused.
The GloVe-6B embeddings include a vocabulary of size 400K. For example, the IMDB dataset only uses about 120K of these; the other 280K are unused.
import torch
from torchtext import data, datasets, vocab
TEXT = data.Field(tokenize='spacy', include_lengths=True)
LABEL = data.LabelField()
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
TEXT.build_vocab(train_data, vectors='glove.6B.100d')
TEXT.vocab.vectors.size() # => torch.Size([121417, 100])
# For comparison the full GloVe
glove = vocab.GloVe(name="6B", dim=100)
glove.vectors.size() # => torch.Size([400000, 100])
# Embedding of the first token is not the same
torch.equal(TEXT.vocab.vectors[0], glove.vectors[0]) # => False
# Index of the word "the"
TEXT.vocab.stoi["the"] # => 2
glove.stoi["the"] # => 0
# Same embedding when using the respective index of the same word
torch.equal(TEXT.vocab.vectors[2], glove.vectors[0]) # => True
After having built the vocabulary with its embeddings, the input sequences will be given in the tokenised version where each token is represented by its index. In the model you want to use the embedding of these, so you need to create the embedding layer, but with the embeddings of your vocabulary. The easiest and recommended way is nn.Embedding.from_pretrained, which is essentially the same as the Keras version.
embedding_layer = nn.Embedding.from_pretrained(TEXT.vocab.vectors)
# Or if you want to make it trainable
trainable_embedding_layer = nn.Embedding.from_pretrained(TEXT.vocab.vectors, freeze=False)
You didn't mention how the embedding_matrix is created in the Keras version, nor how the vocabulary is built such that it can be used with the embedding_matrix. If you do that by hand (or with any other utility), you don't need torchtext at all, and you can initialise the embeddings just like in Keras. torchtext is purely for convenience for common data related tasks.
| https://stackoverflow.com/questions/62291303/ |
Resnet18 first layer output dimensions | I am looking at the model implementation in PyTorch. The 1st layer is a convolutional layer with filter size = 7, stride = 2, pad = 3. The standard input size to the network is 224x224x3. Based on these numbers, the output dimensions are (224 + 3*2 - 7)/2 + 1, which is not an integer. Does the original implementation contain non-integer dimensions? I see that the network has adaptive pooling before the FC layer, so the variable input dimensions aren't a problem (I tested this by varying the input size). Am I doing something wrong, or why would the authors choose a non-integer dimension while designing the ResNet?
| The dimensions always have to be integers. From nn.Conv2d - Shape:
H_out = ⌊(H_in + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1⌋
The brackets that are only closed towards the bottom denote the floor operation (round down). The calculation becomes:
import math
math.floor((224 + 3*2 - 7)/2 + 1) # => 112
# Or using the integer division (two slashes //)
(224 + 3*2 - 7) // 2 + 1 # => 112
Using an integer division has the same effect, since that always rounds it down to the nearest integer.
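The same number can be confirmed empirically with a quick check (my addition):
import torch
import torch.nn as nn
conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)  # ResNet's first layer
out = conv1(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 64, 112, 112])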
| https://stackoverflow.com/questions/62292854/ |
Error in Calculating neural network Test Accuracy | I tried to train my neural network, and then evaluate its testing accuracy. I am using the code at the bottom of this post to train. The fact is that for other neural networks, I can evaluate the testing accuracy with my code without issue. However, for this neural network (which I constructed correctly according to the description in the neural network paper), I can't evaluate the testing accuracy properly and it's giving me the traceback below. So maybe something's wrong in my forward pass?
Here is the training and testing code:
# imports, including deepnet
cudnn.benchmark = True
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32')
X_train = np.transpose(X_train, axes=(0, 3, 1, 2))
X_test = X_test.astype('float32')
X_test = np.transpose(X_test, axes=(0, 3, 1, 2))
X_train /= 255
X_test /= 255
device = torch.device('cuda:0')
# This is where you can load any model of your choice.
# I stole PyTorch Vision's VGG network and modified it to work on CIFAR-10.
# You can take this line out and add any other network and the code
# should run just fine.
model = deepnet.cifar10_deep()
#model.to(device)
# Forward pass
opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))
# Forward pass through the network given the input
predsfun = lambda op: np.argmax(op.data.numpy(), 1)
# Do the forward pass, then compute the accuracy
accfun = lambda op, y: np.mean(np.equal(predsfun(op), y.squeeze()))*100
# Initial point
x0 = deepcopy(model.state_dict())
# Number of epochs to train for
# Choose a large value since LB training needs higher values
# Changed from 150 to 30
nb_epochs = 30
batch_range = [25, 40, 50, 64, 80, 128, 256, 512, 625, 1024, 1250, 1750, 2048, 2500, 3125, 4096, 4500, 5000]
# parametric plot (i.e., don't train the network if set to True)
hotstart = False
if not hotstart:
for batch_size in batch_range:
optimizer = torch.optim.Adam(model.parameters())
model.load_state_dict(x0)
#model.to(device)
average_loss_over_epoch = '-'
print('Optimizing the network with batch size %d' % batch_size)
np.random.seed(1337) #So that both networks see same sequence of batches
for e in range(nb_epochs):
model.eval()
print('Epoch:', e, ' of ', nb_epochs, 'Average loss:', average_loss_over_epoch)
average_loss_over_epoch = 0
# Checkpoint the model every epoch
torch.save(model.state_dict(), "./models/DeepNetC2BatchSize" + str(batch_size) + ".pth")
array = np.random.permutation(range(X_train.shape[0]))
slices = X_train.shape[0] // batch_size
beginning = 0
end = 1
# Training loop!
for _ in range(slices):
start_index = batch_size * beginning
end_index = batch_size * end
smpl = array[start_index:end_index]
model.train()
optimizer.zero_grad()
ops = opfun(X_train[smpl])
tgts = Variable(torch.from_numpy(y_train[smpl]).long().squeeze())
loss_fn = F.nll_loss(ops, tgts)
average_loss_over_epoch += loss_fn.data.numpy() / (X_train.shape[0] // batch_size)
loss_fn.backward()
optimizer.step()
beginning += 1
end += 1
grid_size = 18 #How many points of interpolation between [0, 5000]
data_for_plotting = np.zeros((grid_size, 3)) #Uncomment this line if running entire code from scratch
sharpnesses1eNeg3 = []
sharpnesses5eNeg4 = []
#data_for_plotting = np.load("DeepNetCIFAR10-intermediate-values.npy") #Uncomment this line to use an existing NumPy array
print(data_for_plotting)
i = 0
# Fill in test accuracy values for `grid_size' points in the interpolation
for batch_size in batch_range:
mydict = {}
batchmodel = torch.load("./models/DeepNetC2BatchSize" + str(batch_size) + ".pth")
for key, value in batchmodel.items():
mydict[key] = value
model.load_state_dict(mydict)
j = 0
for datatype in [(X_train, y_train), (X_test, y_test)]:
dataX = datatype[0]
datay = datatype[1]
for smpl in np.split(np.random.permutation(range(dataX.shape[0])), 10):
ops = opfun(dataX[smpl])
tgts = Variable(torch.from_numpy(datay[smpl]).long().squeeze())
var = F.nll_loss(ops, tgts).data.numpy() / 10
if j == 1:
data_for_plotting[i, j-1] += accfun(ops, datay[smpl]) / 10.
j += 1
print(data_for_plotting[i])
np.save('DeepNetCIFAR10-intermediate-values', data_for_plotting)
i += 1
And the model code is here and includes the forward pass
import torch
import torch.nn as nn
F = nn.functional
__all__ = ['cifar10_deepnet', 'cifar100_deepnet']
class VGG(nn.Module):
def __init__(self, num_classes=10):
super(VGG, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Dropout(0.3),
nn.Conv2d(64, 64, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(64, 128, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(128, 128, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(128, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(256, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(256, 256, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(256, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.4),
nn.Conv2d(512, 512, kernel_size=3, padding = 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=2, stride=2),
)
self.classifier = nn.Sequential(
nn.Linear(512, 512, bias=False),
nn.Dropout(0.5),
nn.BatchNorm1d(512),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(512, num_classes)
)
def forward(self, x):
x = self.features(x)
x = x.view(-1, 512)
x = self.classifier(x)
return F.log_softmax(x)
def cifar10_deep(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 10)
return VGG(num_classes)
def cifar100_deep(**kwargs):
num_classes = getattr(kwargs, 'num_classes', 100)
return VGG(num_classes)
| You are trying to load a state dict that belongs to another model.
The error shows that your model is the class AlexNet.
RuntimeError: Error(s) in loading state_dict for AlexNet:
But the state dict you are trying to load is from the VGG you posted, which doesn't have the same modules as AlexNet.
You need to use the same model whose state dict you saved before.
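For example, a hedged sketch (the checkpoint path just follows the naming used in your training loop):
model = deepnet.cifar10_deep()  # the same VGG-style architecture that produced the checkpoint
state_dict = torch.load("./models/DeepNetC2BatchSize25.pth")
model.load_state_dict(state_dict)
model.eval()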
| https://stackoverflow.com/questions/62295662/ |
how to solve the values changed when converting a pytorch Variable to numpy? | I am trying to convert a parameter of resnet34 to numpy, but I found that the values would change after converting, as shown in the figure.
Why would this happen? What can I do to get precise values in numpy format?
(figure omitted)
(I am trying to get the parameters from torch pretrained models and put them into a tensorflow 1.x model, since I can't find a pretrained resnet34 model for tensorflow 1 after searching for several days. I am afraid this change of values would affect the accuracy of the model.)
(BTW, is there some way to download a tensorflow 1.x resnet34 pretrained model with basic blocks instead of bottleneck blocks?
I have searched on github for several days but failed to find one. I hate tensorflow.)
| The values are not changed, they are identical, but PyTorch limits the default output to 4 decimal places (which gets rounded) to make it easier to inspect.
You can change that behaviour with torch.set_printoptions to show more decimal places.
value = torch.tensor(0.0052872747)
print(value) # => tensor(0.0053)
# Show 10 decimal places
torch.set_printoptions(precision=10)
print(value) # => tensor(0.0052872747)
| https://stackoverflow.com/questions/62296631/ |
How can I zero out duplicate values in each row of a PyTorch tensor? | I would like to write a function that achieves the behavior described in this question.
That is, I want to zero out duplicate values in each row of a matrix in PyTorch. For example, given a matrix
torch.Tensor(([1, 2, 3, 4, 3, 3, 4],
[1, 6, 3, 5, 3, 5, 4]])
I would like to get
torch.Tensor(([1, 2, 3, 4, 0, 0, 0],
[1, 6, 3, 5, 0, 0, 4]])
or
torch.Tensor(([1, 2, 3, 4, 0, 0, 0],
[1, 6, 3, 5, 4, 0, 0]])
According to the linked question, torch.unique() alone is not sufficient. I want to know how to implement this function without a loop.
| x = torch.tensor([
[1, 2, 3, 4, 3, 3, 4],
[1, 6, 3, 5, 3, 5, 4]
], dtype=torch.long)
# sorting the rows so that duplicate values appear together
# e.g., first row: [1, 2, 3, 3, 3, 4, 4]
y, indices = x.sort(dim=-1)
# subtracting, so duplicate values will become 0
# e.g., first row: [1, 2, 3, 0, 0, 4, 0]
y[:, 1:] *= ((y[:, 1:] - y[:, :-1]) !=0).long()
# retrieving the original indices of elements
indices = indices.sort(dim=-1)[1]
# re-organizing the rows following original order
# e.g., first row: [1, 2, 3, 4, 0, 0, 0]
result = torch.gather(y, 1, indices)
print(result) # => output
Output
tensor([[1, 2, 3, 4, 0, 0, 0],
[1, 6, 3, 5, 0, 0, 4]])
| https://stackoverflow.com/questions/62300404/ |
Extracting labels after applying softmax | I have a multi class classification neural network. I apply softmax at the end to get probabilities for my classes. However, now I want to pick the maximum probability and get the corresponding label for it. I am able to extract the maximum probability but I'm confused how to get the label based on that. This is what I have:
labels = {'id1':0,'id2':2,'id3':1,'id4':3} ### labels
x_t = F.softmax(z,dim=-1)
#print(x_t)
y = torch.argmax(x_t, dim=1)
print(y.detach())
Can someone tell me how to get labels now. For example
y = tensor([3, 1, 3])
final_label = [id4,id3,id4]
| You can try storing the map for index to label like this:
labels = {'id1':0,'id2':2,'id3':1,'id4':3} ### labels
idx_to_label = {v:k for k,v in labels.items()}
x_t = F.softmax(z,dim=-1)
#print(x_t)
y = torch.argmax(x_t, dim=1)
print(y.detach())
final_label = [idx_to_label[i] for i in y.detach().cpu().numpy()]
print(final_label)
Let me know if it helps.
| https://stackoverflow.com/questions/62301674/ |
PyTorch path generation with RNN - confusion with input, output, hidden and batch sizes | I'm new to pytorch, I followed a tutorial on sentence generation with RNN and I'm trying to modify it to generate sequences of positions, however I'm having trouble with defining the correct model parameters such as input_size, output_size, hidden_dim, batch_size.
Background:
I have 596 sequences of x,y positions, each looking like [[x1,y1],[x2,y2],...,[xn,yn]]. Each sequence represents the 2D path of a vehicle. I would like to to train a model that, given a starting point (or a partial sequence), could generate one of these sequences.
-I have padded/truncated the sequences so that they all have length 50, meaning each sequence is an array of shape [50,2]
-I then divided this data into input_seq and target_seq:
input_seq: tensor of torch.Size([596, 49, 2]). contains all the 596 sequences, each without its last position.
target_seq: tensor of torch.Size([596, 49, 2]). contains all the 596 sequences, each without its first position.
The model class:
class Model(nn.Module):
def __init__(self, input_size, output_size, hidden_dim, n_layers):
super(Model, self).__init__()
# Defining some parameters
self.hidden_dim = hidden_dim
self.n_layers = n_layers
#Defining the layers
# RNN Layer
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
# Fully connected layer
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, x):
batch_size = x.size(0)
# Initializing hidden state for first input using method defined below
hidden = self.init_hidden(batch_size)
# Passing in the input and hidden state into the model and obtaining outputs
out, hidden = self.rnn(x, hidden)
# Reshaping the outputs such that it can be fit into the fully connected layer
out = out.contiguous().view(-1, self.hidden_dim)
out = self.fc(out)
return out, hidden
def init_hidden(self, batch_size):
# This method generates the first hidden state of zeros which we'll use in the forward pass
# We'll send the tensor holding the hidden state to the device we specified earlier as well
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_dim)
return hidden
I instantiate the model with the following parameters:
input_size of 2 (an [x,y] position)
output_size of 2 (an [x,y] position)
hidden_dim of 2 (an [x,y] position) (or should this be 50 as in the length of a full sequence?)
model = Model(input_size=2, output_size=2, hidden_dim=2, n_layers=1)
n_epochs = 100
lr=0.01
# Define Loss, Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# Training Run
for epoch in range(1, n_epochs + 1):
optimizer.zero_grad() # Clears existing gradients from previous epoch
output, hidden = model(input_seq)
loss = criterion(output, target_seq.view(-1).long())
loss.backward() # Does backpropagation and calculates gradients
optimizer.step() # Updates the weights accordingly
if epoch%10 == 0:
print('Epoch: {}/{}.............'.format(epoch, n_epochs), end=' ')
print("Loss: {:.4f}".format(loss.item()))
When I run the training loop, it fails with this error:
ValueError Traceback (most recent call last)
<ipython-input-9-ad1575e0914b> in <module>
3 optimizer.zero_grad() # Clears existing gradients from previous epoch
4 output, hidden = model(input_seq)
----> 5 loss = criterion(output, target_seq.view(-1).long())
6 loss.backward() # Does backpropagation and calculates gradients
7 optimizer.step() # Updates the weights accordingly
...
ValueError: Expected input batch_size (29204) to match target batch_size (58408).
I tried modifying input_size, output_size, hidden_dim and batch_size and reshaping the tensors, but the more I try the more confused I get. Could someone point out what I am doing wrong?
Furthermore, since batch size is defined as x.size(0) in Model.forward(self,x), this means I only have a single batch of size 596 right? What would be the correct way to have multiple smaller batches?
| The output has size [batch_size * seq_len, 2] = [29204, 2], and you flatten the target_seq, which has size [batch_size * seq_len * 2] = [58408]. They don't have the same number of dimensions, while having the same number of total elements, therefore the first dimensions are not identical.
Regardless of the dimension mismatch, nn.CrossEntropyLoss is a categorical loss function, which means it would only predict a class from the output. You don't have any classes, but you are trying to predict coordinates, which are continuous values. For this you need to use a regression loss function, such as nn.MSELoss, which calculates the squared error/distance between the predicted and target coordinates.
criterion = nn.MSELoss()
# .flatten() does the same thing as .view(-1) but is more descriptive
loss = criterion(output.flatten(), target_seq.flatten())
The flattening can be avoided as the loss functions as well as the linear layer can operate on multidimensional inputs, which removes the potential risk of getting lost with the flattening and restoring of the dimensions, and the output is more comprehensible to inspect or use later outside of the training. For the linear layer, only the last dimension of the input needs to match the in_features of nn.Linear, which is hidden_dim in your case.
def forward(self, x):
batch_size = x.size(0)
# Initializing hidden state for first input using method defined below
hidden = self.init_hidden(batch_size)
# Passing in the input and hidden state into the model and obtaining outputs
# out size: [batch_size, seq_len, hidden_dim]
out, hidden = self.rnn(x, hidden)
# out size: [batch_size, seq_len, output_size]
out = self.fc(out)
return out, hidden
Now the output of the model has the same size as the target_seq and you can directly call the loss function without flattening:
loss = criterion(output, target_seq)
hidden_dim of 2 (an [x,y] position) (or should this be 50 as in the length of a full sequence?)
The hidden_dim is not a pair of [x, y] and is completely unrelated to both the input_size and output_size. It defines the number of hidden features of the RNN, which is kind of its complexity, and bigger sizes potentially have more room to retain essential information, but also require more computations. There is no perfect hidden size and it largely depends on the use case. You can experiment with different sizes, e.g. 100, 256, etc. and see whether that improves your results.
Furthermore, since batch size is defined as x.size(0) in Model.forward(self,x), this means I only have a single batch of size 596 right? What would be the correct way to have multiple smaller batches?
Yes, you only have a single batch of size 596. If you want to use smaller batches, for example if you cannot fit all of them into a more complex model, you could easily use slices of them, but it would be better to use PyTorch's data utilities: torch.utils.data.TensorDataset to get a dataset, where each sequence of the input has a corresponding target, in combination with torch.utils.data.DataLoader to create batches for you.
from torch.utils.data import DataLoader, TensorDataset
# Match each sequence of the input_seq to the corresponding target_seq.
# e.g. dataset[0] == (input_seq[0], target_seq[0])
dataset = TensorDataset(input_seq, target_seq)
# Randomly shuffle the data and load it in batches of 16
data_loader = DataLoader(dataset, batch_size=16, shuffle=True)
# Process one batch at a time
for input, target in data_loader:
output, hidden = model(input)
loss = criterion(output, target)
| https://stackoverflow.com/questions/62305941/ |
Pytorch, cant' run CNN on GPU. Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same | I'm building a simple image recognition convolutional neural network and trying to run this on my GPU but I haven't done something important apparently.
I checked if the GPU is available at the beginning and, in train, set batches to the device (cuda:0).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Checks if GPU is available otherwise uses CPU
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU!")
else:
device = torch.device("cpu")
print("Running on the CPU!")
REBUILD_DATA = False
# Data clean up and format
class DogsVsCats():
IMG_SIZE = 50
CATS = "PetImages/Cat"
DOGS = "PetImages/Dog"
LABELS = {CATS: 0, DOGS: 1}
training_data = []
catcount = 0
dogcount = 0
def make_training_data(self):
for label in self.LABELS:
print(label)
for f in tqdm(os.listdir(label)):
try:
path = os.path.join(label, f)
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (self.IMG_SIZE, self.IMG_SIZE))
self.training_data.append([np.array(img), np.eye(2)[self.LABELS[label]] ])
if label == self.CATS:
self.catcount += 1
elif label == self.DOGS:
self.dogcount += 1
except Exception as e:
pass
np.random.shuffle(self.training_data)
np.save("training_data.npy", self.training_data)
print("Cats: ", self.catcount)
print("Dogs: ", self.dogcount)
if REBUILD_DATA:
dogsvcats = DogsVsCats()
dogsvcats.make_training_data()
training_data = np.load("training_data.npy", allow_pickle=True)
# print(len("training_data.npy"))
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 5)
self.conv2 = nn.Conv2d(32, 64, 5)
self.conv3 = nn.Conv2d(64, 128, 5)
x = torch.randn(50,50).view(-1,1,50,50)
self._to_linear = None
self.convs(x)
self.fc1 = nn.Linear(self._to_linear, 512)
self.fc2 = nn.Linear(512, 2)
def convs(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv2(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv3(x)), (2,2))
print(x[0].shape)
if self._to_linear is None:
self._to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
return x
def forward(self, x):
x = self.convs(x)
x = x.view(-1, self._to_linear)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim = 1)
net = Net().to(device)
optimizer = optim.Adam(net.parameters(), lr = 0.001)
loss_function = nn.MSELoss()
X = torch.Tensor([i[0] for i in training_data]).view(-1, 50, 50)
X = X/255.0
y = torch.Tensor([i[1] for i in training_data])
VAL_PCT = 0.1
val_size = int(len(X)*VAL_PCT)
print(val_size)
train_X = X[:-val_size]
train_y = y[:-val_size]
test_X = X[-val_size:]
test_y = y[-val_size:]
BATCH_SIZE = 100
EPOCHS = 1
def train(net):
for epoch in range(EPOCHS):
for i in tqdm(range(0, len(train_X), BATCH_SIZE)):
# print(i, i+BATCH_SIZE)
batch_X = train_X[i:i+BATCH_SIZE].view(-1,1,50,50).to(device)
batch_y = train_y[i:i+BATCH_SIZE].to(device)
net.zero_grad()
outputs = net(batch_X)
loss = loss_function(outputs, batch_y)
loss.backward()
optimizer.step()
print(loss)
correct = 0
total = 0
with torch.no_grad():
for i in tqdm(range(len(test_X))):
real_class = torch.argmax(test_y[i])
net_out = net(test_X[i].view(-1, 1, 50, 50))[0]
predicted_class = torch.argmax(net_out)
if predicted_class == real_class:
correct += 1
total += 1
print("Accuracy: ", round(correct/total,3))
train(net)
Sorry if it's too simple of a question. Thank you in advance!
| You should post the line number of the error, but I'm thinking it's from this snippet:
with torch.no_grad():
for i in tqdm(range(len(test_X))):
real_class = torch.argmax(test_y[i])
net_out = net(test_X[i].view(-1, 1, 50, 50))[0]
predicted_class = torch.argmax(net_out)
if predicted_class == real_class:
correct += 1
total += 1
The input to your net must also be put on the device, so change the line
net_out = net(test_X[i].view(-1, 1, 50, 50))[0]
to
net_out = net(test_X[i].view(-1, 1, 50, 50).to(device))[0]
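For completeness, a sketch of the full evaluation loop with that change applied (note: it also moves real_class onto the same device before the comparison, which is an extra fix of mine, since comparing a CUDA tensor with a CPU tensor raises another error):
correct = 0
total = 0
with torch.no_grad():
    for i in tqdm(range(len(test_X))):
        real_class = torch.argmax(test_y[i]).to(device)
        # move the input to the same device as the model before the forward pass
        net_out = net(test_X[i].view(-1, 1, 50, 50).to(device))[0]
        predicted_class = torch.argmax(net_out)
        if predicted_class == real_class:
            correct += 1
        total += 1
print("Accuracy: ", round(correct/total, 3))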
| https://stackoverflow.com/questions/62309497/ |
Compute matrix multiplication only for given coordinates | In PyTorch, I would like to compute
E * A.mm(B)
where E can be a very sparse matrix consisting of 0's and 1's. In other words, I want to compute A.mm(B) and then leave only the certain coordinates. Is there a way to compute such a sparse matrix efficiently? I have full control over matrix representations.
Also, in most cases, E consists only of 1's, so I would like this case also to be handled efficiently.
| You don't need an element-wise multiplication for that, as E is essentially a boolean matrix that is used as a mask to select the values where E is 1 and discard the values where E is 0.
C = A.mm(B)
# Ensure that E is a boolean matrix to only keep values where E is True,
# otherwise the 0s and 1s would be treated as indices to select the values.
C = C[E.to(torch.bool)]
If you want to avoid the entire matrix multiplication and only compute the values you would be masking afterwards, you need to manually select the values for A and B that produce the desired values in C.
The matrix multiplication C = AB, where A is an m x n matrix and B an n x p matrix, produces an m x p matrix C, whose values are obtained by multiplying the i-th row of A with the j-th column of B element-wise and taking the sum of these n products. Formally: C[i][j] = A[i][1]*B[1][j] + A[i][2]*B[2][j] + ... + A[i][n]*B[n][j].
Given E, an m x p matrix, that determines which elements of C are required, the index pairs of the required elements are given as follows:
# Indices of required elements (i.e. indices of non-zero elements of E)
# Separate the tensor of (i, j) pairs, into a pair of tensors,
# containing the indices i and j respectively.
indices_i, indices_j = E.nonzero().unbind(dim=1)
# Select all needed rows of A and the needed columns of B
A = A[indices_i]
B = B[:, indices_j]
# Calculate the values
# B is transposed to change the column vectors to row vectors
# such that the two can be multiplied element-wise.
C = torch.sum(A * B.transpose(0, 1), dim=1)
Is it more efficient to selectively calculate the values you want compared to performing the entire matrix multiplication and then only keep the values you want?
The answer is a resounding No. The matrix multiplication is highly optimised, much more optimised than doing the steps manually with operations that themselves are well optimised. Especially, when E contains mostly 1s, then you're basically re-implementing the matrix multiplication, which is guaranteed to be less efficient. Even for the case where E contains mostly 0s, the matrix multiplication is just faster.
To support my claims, I've timed them. For convenience I did it in IPython, which has the built-in %timeit command.
In [1]: import torch
...:
...:
...: def masked(A, B, E):
...: C = A.mm(B)
...: return C[E]
...:
...:
...: def selective(A, B, E):
...: indices_i, indices_j = E.nonzero().unbind(dim=1)
...: return torch.sum(A[indices_i] * B[:, indices_j].transpose(0, 1), dim=1)
...:
...:
...: A = torch.rand(1200, 1000)
...: B = torch.rand(1000, 1100)
...: # Only 10% of the elements are 1
...: E_mostly_zeros = torch.rand(1200, 1100) < 0.1
...: # 90% of the elements are 1
...: E_mostly_ones = torch.rand(1200, 1100) < 0.9
In [2]: # All close instead of equal to account for floating point errors
...: torch.allclose(masked(A, B, E_mostly_ones), selective(A, B, E_mostly_ones))
Out[2]: True
In [3]: # All close instead of equal to account for floating point errors
...: torch.allclose(masked(A, B, E_mostly_zeros), selective(A, B, E_mostly_zeros))
Out[3]: True
In [4]: %timeit masked(A, B, E_mostly_ones)
8.16 ms ± 20.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [5]: %timeit selective(A, B, E_mostly_ones)
2.09 s ± 11.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit masked(A, B, E_mostly_zeros)
5.73 ms ± 24.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [7]: %timeit selective(A, B, E_mostly_zeros)
266 ms ± 3.36 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The matrix multiplication is staggeringly fast, being over 256x faster when E contains 90% ones (8.16ms vs 2090ms), and over 46x faster when E contains only 10% ones (5.73ms vs 266ms).
| https://stackoverflow.com/questions/62311058/ |
Bert + Resnet joint learning, pytorch model is empty after instantiation | I'm writing a simple joint model, which has two branches, one branch is a resnet50 another one is a bert. I concatenate the two outputs and pass that to a simple linear layer with 2 output neurons.
I implemented the following model :
import torch
from torch import nn
import torchvision.models as models
import torch.nn as nn
from collections import OrderedDict
from transformers import BertModel
class BertResNet(nn.Module):
def __init__(self):
super(BertResNet, self).__init__()
# resnet
resnet50 = models.resnet50(pretrained=True)
n_inputs = resnet50.fc.in_features
# compressed embedding space
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(n_inputs, 512))
]))
resnet50.fc = classifier # 512 out resnet
bert = BertModel.from_pretrained('bert-base-uncased')
# final classification layer
classification = nn.Linear(512 + 768, 2)
#print(resnet50)
#print(bert)
def forward(self, img, text):
res_emb = self.resnet50(img)
bert_emb = self.bert(text)
combined = torch.cat(res_emb,
bet_emb, dim=1)
out = self.classification(combined)
return out
But when I instantiate, I get an empty model:
bert_resnet = BertResNet()
print(bert_resnet)
Out:
BertResNet()
list(bert_resnet.parameters()) also returns []
| You never assigned the models to any attribute of the object of the BertResNet class. They are just temporary variables in the __init__ method, and once that finishes, these variables are discarded. They should be assigned to self:
def __init__(self):
super(BertResNet, self).__init__()
# resnet
self.resnet50 = models.resnet50(pretrained=True)
n_inputs = self.resnet50.fc.in_features
# compressed embedding space
self.classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(n_inputs, 512))
]))
self.resnet50.fc = self.classifier # 512 out resnet
self.bert = BertModel.from_pretrained('bert-base-uncased')
# final classification layer
self.classification = nn.Linear(512 + 768, 2)
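With the submodules assigned to self, nn.Module registers them, so their parameters become visible again. A quick sanity check (assuming the rest of the class is unchanged):
bert_resnet = BertResNet()
print(len(list(bert_resnet.parameters())) > 0)  # True: the submodules' parameters are now registered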
| https://stackoverflow.com/questions/62311593/ |
Implement x=T if abs(x)>T as an activation function in pytorch | I would like to implement the following activation function in pytorch:
x = T if abs(x)>T else x
I could do something close with torch.clamp(min=-T, max=T) but it's not exactly the behavior I want (this would behave the same as above for x>-T but would return -T for x<-T). Is there any torch function that could help me achieve this?
| torch.where does exactly that:
x = torch.where(torch.abs(x) > T, T, x)
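If you need it as a layer inside a model, a minimal sketch of wrapping it in an nn.Module (the class name is made up here; torch.full_like keeps the threshold on the same dtype/device as x and also works on older PyTorch versions where torch.where expects two tensors):
import torch
from torch import nn

class SaturateAbove(nn.Module):
    def __init__(self, T):
        super().__init__()
        self.T = T

    def forward(self, x):
        # return T wherever |x| > T, otherwise pass x through unchanged
        return torch.where(torch.abs(x) > self.T, torch.full_like(x, self.T), x)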
| https://stackoverflow.com/questions/62312611/ |
Tokens to Words mapping in the tokenizer decode step huggingface? | Is there a way to know the mapping from the tokens back to the original words in the tokenizer.decode() function?
For example:
from transformers.tokenization_roberta import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True)
str = "This is a tokenization example"
tokenized = tokenizer.tokenize(str)
## ['this', 'Ġis', 'Ġa', 'Ġtoken', 'ization', 'Ġexample']
encoded = tokenizer.encode_plus(str)
## encoded['input_ids']=[0, 42, 16, 10, 19233, 1938, 1246, 2]
decoded = tokenizer.decode(encoded['input_ids'])
## '<s> this is a tokenization example</s>'
And the objective is to have a function that maps each token in the decode process to the correct input word, for here it will be:
desired_output = [[1],[2],[3],[4,5],[6]], as "this" corresponds to id 42, while "token" and "ization" correspond to ids [19233, 1938], which are at indexes 4, 5 of the input_ids array.
| If you use the fast tokenizers, i.e. the rust-backed versions from the tokenizers library, the encoding contains a word_ids method that can be used to map sub-words back to their original word. What constitutes a word vs a subword depends on the tokenizer: a word is something generated by the pre-tokenization stage, i.e. split by whitespace, while a subword is generated by the actual model (BPE or Unigram for example).
The code below should work in general, even if the pre-tokenization performs additional splitting. For example I created my own custom step that splits based on PascalCase - the words here are Pascal and Case; the accepted answer won't work in this case since it assumes words are whitespace delimited.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('roberta-large', do_lower_case=True)
example = "This is a tokenization example"
encoded = tokenizer(example)
desired_output = []
for word_id in encoded.word_ids():
if word_id is not None:
start, end = encoded.word_to_tokens(word_id)
if start == end - 1:
tokens = [start]
else:
tokens = [start, end-1]
if len(desired_output) == 0 or desired_output[-1] != tokens:
desired_output.append(tokens)
desired_output
| https://stackoverflow.com/questions/62317723/ |
Number of instances per class in pytorch dataset | I'm trying to make a simple image classifier using PyTorch.
This is how I load the data into a dataset and dataLoader:
batch_size = 64
validation_split = 0.2
data_dir = PROJECT_PATH+"/categorized_products"
transform = transforms.Compose([transforms.Grayscale(), CustomToTensor()])
dataset = ImageFolder(data_dir, transform=transform)
indices = list(range(len(dataset)))
train_indices = indices[:int(len(indices)*0.8)]
test_indices = indices[int(len(indices)*0.8):]
train_sampler = SubsetRandomSampler(train_indices)
test_sampler = SubsetRandomSampler(test_indices)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler, num_workers=16)
test_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=test_sampler, num_workers=16)
I want to print out the number of images in each class in training and test data separately, something like this:
In train data:
shoes: 20
shirts: 14
In test data:
shoes: 4
shirts: 3
I tried this:
from collections import Counter
print(dict(Counter(sample_tup[1] for sample_tup in dataset.imgs)))
but I got this error:
AttributeError: 'MyDataset' object has no attribute 'img'
| You need to use .targets to access the labels of data i.e.
print(dict(Counter(dataset.targets)))
It'll print something like this (e.g. in MNIST dataset):
{5: 5421, 0: 5923, 4: 5842, 1: 6742, 9: 5949, 2: 5958, 3: 6131, 6: 5918, 7: 6265, 8: 5851}
Also, you can use .classes or .class_to_idx to get mapping of label id to classes:
print(dataset.class_to_idx)
{'0 - zero': 0,
'1 - one': 1,
'2 - two': 2,
'3 - three': 3,
'4 - four': 4,
'5 - five': 5,
'6 - six': 6,
'7 - seven': 7,
'8 - eight': 8,
'9 - nine': 9}
Edit: Method 1
From the comments, in order to get class distribution of training and testing set separately, you can simply iterate over subset as below:
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
# labels in training set
train_classes = [label for _, label in train_dataset]
Counter(train_classes)
Counter({0: 4757,
1: 5363,
2: 4782,
3: 4874,
4: 4678,
5: 4321,
6: 4747,
7: 5024,
8: 4684,
9: 4770})
Edit (2): Method 2
Since you've a large dataset, and as you said it takes considerable time to iterate over all training set, there is another way:
You can use .indices of subset, which referes to indices in the original dataset selected for subset.
i.e.
train_classes = [dataset.targets[i] for i in train_dataset.indices]
Counter(train_classes) # if doesn' work: Counter(i.item() for i in train_classes)
| https://stackoverflow.com/questions/62319228/ |
Pytorch Dataloader not splitting data into batches | I have a dataset class like this:
class LoadDataset(Dataset):
def __init__(self, data, label):
self.data = data
self.label = label
def __len__(self):
dlen = len(self.data)
return dlen
def __getitem__(self, index):
return self.data, self.label
Then I load my image dataset, which has shape [485, 1, 32, 32]:
train_dataset = LoadDataset(xtrain, ytrain)
print(len(train_dataset))
# output 485
Then I load the data with DataLoader:
train_loader = DataLoader(train_dataset, batch_size=32)
And then I iterate over the data:
for epoch in range(num_epoch):
for inputs, labels in train_loader:
print(inputs.shape)
The output prints torch.Size([32, 485, 1, 32, 32]), but it should be torch.Size([32, 1, 32, 32]).
Can anyone help me?
| The __getitem__ method should return a single data item; you returned all of them.
Try this:
class LoadDataset(Dataset):
def __init__(self, data, label):
self.data = data
self.label = label
def __len__(self):
dlen = len(self.data)
llen = len(self.label) # different here
return min(dlen, llen) # different here
def __getitem__(self, index):
return self.data[index], self.label[index] # different here
| https://stackoverflow.com/questions/62319760/ |
Can anyone explain the "non-sliding window" statement in the Feature Pyramid Networks for Object Detection paper? | Feature Pyramid Networks for Object Detection adopts the RPN technique to create the detector, and it uses a sliding-window technique to classify. How come there is a statement about "non-sliding window" in section 5.2?
The extended statement in the paper :
5.2. Object Detection with Fast/Faster R-CNN
Next we investigate FPN for region-based (non-sliding window) detectors.
In my understanding, FPN uses a sliding window in the detection task. This is also mentioned in
https://medium.com/@jonathan_hui/understanding-feature-pyramid-networks-for-object-detection-fpn-45b227b9106c the statement is
"FPN extracts feature maps and later feeds into a detector, says RPN, for object detection. RPN applies a sliding window over the feature maps to make predictions on the objectness (has an object or not) and the object boundary box at each location."
Thank you in advance.
| Feature Pyramid Networks(FPN) for Object Detection is not an RPN.
FPN is just a better way to do feature extraction. It incorporates features from several stages together which gives better features for the rest of the object detection pipeline (specifically because it incorporates features from the first stages which gives better features for detection of small/medium size objects).
As the original paper states: "Our goal is to leverage a ConvNet’s pyramidal feature
hierarchy, which has semantics from low to high levels, and
build a feature pyramid with high-level semantics throughout. The resulting Feature Pyramid Network is general purpose and in this paper we focus on sliding window proposers (Region Proposal Network, RPN for short) [29] and
region-based detectors (Fast R-CNN)"
So they use it to check "Two stage" object detection pipeline. The first stage is the RPN and this is what they check in section 5.1 and then they check it for the classification stage in section 5.2.
Fast R-CNN, Faster R-CNN, etc. are region-based object detectors and not sliding-window detectors. They get a fixed set of regions from the RPN to classify, and that's it.
A good explanation of the differences can be found at https://medium.com/@jonathan_hui/what-do-we-learn-from-region-based-object-detectors-faster-r-cnn-r-fcn-fpn-7e354377a7c9.
| https://stackoverflow.com/questions/62320642/ |
version `GLIBC_2.28' not found | I'm trying to install PyTorch on the ARMv7 (32-bit) architecture, but PyTorch doesn't have official ARMv7 builds, so I tried this unofficial build.
It installed successfully but when I import torch I get the following error
>>import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.28' not found (required by /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
I tried the following
sudo apt-get update
sudo apt-get install libc6
but it seems that I have the newest version of libc6
Reading package lists... Done
Building dependency tree
Reading state information... Done
libc6 is already the newest version (2.23-0ubuntu11).
The following packages were automatically installed and are no longer required:
busybox-initramfs cpio initramfs-tools initramfs-tools-bin initramfs-tools-core klibc-utils libdbusmenu-gtk4 libklibc
libllvm3.8 libmircommon5 linux-base
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
Here are the GLIBCXX and GLIBC versions that I have:
strings /usr/lib/arm-linux-gnueabihf/libstdc++.so.6 | grep GLIBC
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_3.4.22
GLIBCXX_3.4.23
GLIBCXX_3.4.24
GLIBCXX_3.4.25
GLIBCXX_3.4.26
GLIBCXX_3.4.27
GLIBCXX_3.4.28
GLIBC_2.4
GLIBC_2.6
GLIBC_2.18
GLIBC_2.16
GLIBC_2.17
Ldd version:
ldd --version
ldd (Ubuntu GLIBC 2.23-0ubuntu11) 2.23
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
My OS:
cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
So is it possible to install GLIBC_2.28 on my machine?
|
So is it possible to install GLIBC_2.28 on my machine?
It is possible, but the chances of you making a mistake and rendering your system un-bootable are quite high. It is also very likely that doing so will break something else on your system (this is the reason distributions do not usually update the version of GLIBC from the one they originally shipped with).
A much better solution is to built PyTorch targeting your system (i.e. using your "normal" toolchain).
P.S. GLIBCXX has nothing to do with your problem, and just adds noise to your question.
| https://stackoverflow.com/questions/62324422/ |
New to PyTorch, having trouble making predictions once data is loaded using Data Loader | I'm completely new to PyTorch (have previously used tensor flow) and I'm stuck on something I'm working on. I've been tasked with using a pretrained model to extract the features from application documents and then compute similarity scores to identify duplicates. I have all of the pdf's converted to .jpg's, and I've loaded the pretrained model and modified the last layer to extract features. The folder structure is like this:
root
|- Application 1
| |- image 1
| |- image 2...
|- Application 2
| |- image 1
| |- image 2...
What I'm trying to do is extract features from the images in every sub-directory and calculate the euclidean distance between them and output a similarity matrix. Where I'm having an issue, and this may seem really basic, is actually making the predictions once the data is loaded. Below is the code I have so far, any help would be greatly appreciated.
def get_pretrained_model_notop(model_name): #pull the model and change last layer
pretrained_model = model_name(pretrained=True) #downloads pretrained model weights
for param in pretrained_model.parameters():
param.requires_grad = False #freezes layers
pretrained_model = nn.Sequential(*list(pretrained_model.children())[:-1]) #drops final layer, because we aren't classifying 1000 imagenet classes
pretrained_model.fc = nn.Sequential(
nn.Flatten() #adds flatten layer at end of model
)
if torch.cuda.is_available(): #uses GPU if available
pretrained_model = pretrained_model.cuda()
return pretrained_model
def get_similarity(pretrained_model,train_imgs): #function to extract features from the model and compute similarity scores
bottleneck_feature_example = pretrained_model(train_imgs)
similarity = euclidean_distances(bottleneck_feature_example)
similarity=similarity/similarity.max()
similarity_df = pd.DataFrame(similarity)
similarity_df=1-similarity_df
return np.round(similarity_df,4)
transforms = transforms.Compose([transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
img_dir='path'
images = datasets.ImageFolder(img_dir,transform=transforms)
data_loader = torch.utils.data.DataLoader(images,
batch_size=32,
shuffle=True,
num_workers=4)
model_list=[models.densenet201]
model_name=['densenet201']
pretrained_model=[get_pretrained_model_notop(selected_model) for selected_model in model_list]
for data in data_loader:
pred=[get_similarity(pretrained,data) for pretrained in pretrained_model]
pred_label_ensemble=sum(pred) / len(pred)
pred_label_ensemble.columns=page_numbers
prob_output_folder = unzipped.replace('MF_loan_document', 'MF_loan_document_results')
pred_label_ensemble.to_csv(prob_output_folder+'/'+'results.csv',index=False)
| You didn't specify the problem, but I would assume that it is because you're expecting images but passing data to the get_similarity. So, you'd need to change:
pred=[get_similarity(pretrained,data) for pretrained in pretrained_model]
to:
pred = [get_similarity(pretrained, data[0]) for pretrained in pretrained_model]
because data is a tuple with (images, labels).
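Equivalently, you could unpack the batch directly in the loop, which makes the intent clearer (a sketch based on the original loop; the labels are simply ignored here):
for images, _labels in data_loader:
    pred = [get_similarity(pretrained, images) for pretrained in pretrained_model]
    pred_label_ensemble = sum(pred) / len(pred)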
| https://stackoverflow.com/questions/62328184/ |
How did Tensorflow process images of ImageNet in Tfrecord format when training resnet? | emm, converting imagenet to Tfrecord format is quite complicated, so I downloaded the processed Tfrecord imagenet from somewhere.
I applied it to resnet34 in tensorflow with parameters from Pytorch, but found that the accuracy is just 55%, which is too low. I guess the reason may be the different ImageNet preprocessing used by Pytorch.models and by this Tfrecord. A nice bro has told me how Pytorch processes the data, but I still need to know how tensorflow processes it.
I found that the values of Tfrecord pictures range from -1 to 1, can you tell me the processing method of this Tfrecord so that I can try to improve the accuracy?
Thanks a lot! I am quite new to this, so your kind help is really important.
| OK, let me give my first answer on stackoverflow to myself.
The different ways of processing images affect the accuracy, that's right.
Pytorch scales pictures to [0,1] and then normalizes them using mean and std.
And the Tfrecord file I found scales pictures to [0,1], and then scales them to [-1, 1] by:
image = tf.subtract(image, 0.5)
image = tf.multiply(image, 2.0)
(which seems a strange way to me).
So I commented out these two lines and got 66% accuracy. After fine-tuning, I got 72% (but I still don't understand the accuracy decline when transferring parameters from Pytorch to Tensorflow).
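For reference, reproducing Pytorch's preprocessing on the Tensorflow side could look roughly like this (my own sketch, assuming the image is already a float tensor in [0, 1] with HWC layout; the mean/std values are torchvision's usual ImageNet statistics):
imagenet_mean = tf.constant([0.485, 0.456, 0.406])
imagenet_std = tf.constant([0.229, 0.224, 0.225])
image = (image - imagenet_mean) / imagenet_std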
PS: Along the way, I found that parameters set in an operator are saved when saving models in tensorflow, so you don't need to worry about setting the operator parameters to be the same.
| https://stackoverflow.com/questions/62331426/ |
What's the difference between dim in Pytorch and axis in Tensorflow? | I have two lines and I want to understand whether they will produce the same output or not?
In tensorflow: tf.norm(my_tensor, ord=2, axis=1)
In pytorch: torch.norm(my_tensor, p=2, dim=1)
Say the shape of my_tensor is [100,2]
Will the above two lines give the same result? Or is the axis attribute different from dim?
| Yes, they are the same!
import tensorflow as tf
tensor = [[1., 2.], [4., 5.], [3., 6.], [7., 8.], [5., 2.]]
tensor = tf.convert_to_tensor(tensor, dtype=tf.float32)
t_norm = tf.norm(tensor, ord=2, axis=1)
print(t_norm)
Output
tf.Tensor([ 2.236068 6.4031243 6.708204
10.630146 5.3851647], shape=(5,), dtype=float32)
import torch
tensor = [[1., 2.], [4., 5.], [3., 6.], [7., 8.], [5., 2.]]
tensor = torch.tensor(tensor, dtype=torch.float32)
t_norm = torch.norm(tensor, p=2, dim=1)
print(t_norm)
Output
tensor([ 2.2361, 6.4031, 6.7082, 10.6301, 5.3852])
| https://stackoverflow.com/questions/62333053/ |
Why does this tensor.unfold method in this example add a dimension? | I am familiarizing myself with the Pytorch unfold method from https://pytorch.org/docs/stable/tensors.html#torch.Tensor.unfold
I looked at their example which is
>>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
[ 6., 7.]])
I understand above that when we unfold in dimension 0, we take chunks of size 2 at a time with stride 1 and therefore, the result is an arrangement of different chunks, which are [1., 2.], [2., 3.] and so on. As we have 6 chunks at the end, the chunks will be put together and the final shape is (6,2).
However, I have another example I ran as shown below.
In [115]: s = torch.arange(20).view(1,10,2)
In [116]: s
Out[116]:
tensor([[[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7],
[ 8, 9],
[10, 11],
[12, 13],
[14, 15],
[16, 17],
[18, 19]]])
In [117]: s.unfold(0,1,1)
Out[117]:
tensor([[[[ 0],
[ 1]],
[[ 2],
[ 3]],
[[ 4],
[ 5]],
[[ 6],
[ 7]],
[[ 8],
[ 9]],
[[10],
[11]],
[[12],
[13]],
[[14],
[15]],
[[16],
[17]],
[[18],
[19]]]])
In [119]: s.unfold(0,1,1).shape
Out[119]: torch.Size([1, 10, 2, 1])
So you see my original tensor was of shape (1,10,2) and I asked for an unfolding operation with parameters s.unfold(0, 1, 1).
Going by my original understanding from the previous example, I assumed this means that in dimension 0, we take 1 chunk at a time with stride 1. Thus, as we go into dimension 0, we see that we have only one chunk of size (10, 2). So the output should have just taken this chunk, and maybe it should have just added a dimension to wrap this chunk and given me an output of size (1, 10, 2).
However, it gives me an output of size (1, 10, 2, 1). Why does it have an extra dimension at the last? Can someone elaborate intuitively please?
| The documentation states:
An additional dimension of size size is appended in the returned tensor.
where size is the size of the chunks you specified (second argument). By definition, it always adds an additional dimension, which makes it consistent no matter what size you choose. Just because a dimension has size 1, doesn't mean it should be omitted automatically.
Regarding the intuition behind this, let's consider that instead of returning a tensor where the last dimension represents the chunks, we create a list of all chunks. For simplicity, we'll limit it to the first dimension with a step of 1.
import torch
from typing import List
def list_chunks(tensor: torch.Tensor, size: int) -> List[torch.Tensor]:
chunks = []
for i in range(tensor.size(0) - size + 1):
chunks.append(tensor[i : i + size])
return chunks
x = torch.arange(1.0, 8)
s = torch.arange(20).view(1, 10, 2)
# As expected, a list with 6 elements, as there are 6 chunks.
list_chunks(x, 2)
# => [tensor([1., 2.]),
# tensor([2., 3.]),
# tensor([3., 4.]),
# tensor([4., 5.]),
# tensor([5., 6.]),
# tensor([6., 7.])]
# The list has only a single element, as there is only a single chunk.
# But it's still a list.
list_chunks(s, 1)
# => [tensor([[[ 0, 1],
# [ 2, 3],
# [ 4, 5],
# [ 6, 7],
# [ 8, 9],
# [10, 11],
# [12, 13],
# [14, 15],
# [16, 17],
# [18, 19]]])]
I've deliberately included type annotations to make it clearer what we are expecting from the function. If there is only a single chunk, it will be a list with one element, as it is always a list of chunks.
You were expecting a different behaviour, namely when there is a single chunk, you want the single chunk instead of a list. Which would change the implementation as follows.
from typing import List, Union
def list_chunks(tensor: torch.Tensor, size: int) -> Union[List[torch.Tensor], torch.Tensor]:
chunks = []
for i in range(tensor.size(0) - size + 1):
chunks.append(tensor[i : i + size])
# If it's a single chunk, return just the chunk itself
if len(chunks) == 1:
return chunks[0]
else:
return chunks
With that change, anyone that uses this function, now needs to take two cases into consideration. If you don't distinguish between a list and a single chunk (tensor), you will get unexpected results, e.g. looping over the chunks would instead loop over the first dimension of the tensor.
The programmatically intuitive approach is to always return a list of chunks and torch.unfold does the same, but instead of a list of chunks, it's a tensor where the last dimension can be seen as the listing of the chunks.
| https://stackoverflow.com/questions/62334729/ |
Best way to output prediction result for a test set from a model with 1 output (binary classification) | This seems simple, but I'm looking for the best way to output the prediction results for a model with 1 output (binary classification model). My labels are 0 and 1 in this example. Now I can just say if model output > 0.5 label is 1. But I'm sort of guessing this is correct. I am hence wondering if there is a better approach, like
prediction = np.argmax(y_hat.detach().numpy, axis=0) for multiple classes.
Currently I'm doing this to test:
test = [[0,0],[0,1],[1,1],[1,0]]
for trial in test:
Xtest = torch.Tensor(trial)
y_hat = logreg_clf(Xtest)
if y_hat > 0.5:
prediction = 1
else:
prediction = 0
# prediction = np.argmax(y_hat.detach().numpy, axis=0)
print("{0} xor {1} = {2}".format(int(Xtest[0]), int(Xtest[1]), prediction))
Was also wondering if there is a function to automatically get the accuracy or confusion matrix for test set 'test'.
FYI my model:
import torch.nn as nn
import torch.nn.functional as F
class LogisticRegression(nn.Module):
# input_size: Dimensionality of input feature vector.
# num_classes: The number of classes in the classification problem.
def __init__(self, input_size, num_classes):
# Always call the superclass (nn.Module) constructor first!
super(LogisticRegression, self).__init__()
# Set up the linear transform
self.linear = nn.Linear(input_size, num_classes)
# I do not yet include the sigmoid activation after the linear
# layer because our loss function will include this as you will see later
# Forward's sole argument is the input.
# input is of shape (batch_size, input_size)
def forward(self, x):
# Apply the linear transform.
# out is of shape (batch_size, num_classes).
out = self.linear(x)
out = F.sigmoid(out)
# Softmax the out tensor to get a log-probability distribution
# over classes for each example.
return out
# Binary classifiation
num_outputs = 1
num_input_features = 2
# Create the logistic regression model
logreg_clf = LogisticRegression(num_input_features, num_outputs)
print(logreg_clf)
import torch
lr_rate = 0.001 # alpha
# training set of input X and labels Y
X = torch.Tensor([[0,0],[0,1], [1,0], [1,1]])
Y = torch.Tensor([0,1,1,0]).view(-1,1) #view is similar to numpy.reshape() here it makes it into a column
# Run the forward pass of the logistic regression model
sample_output = logreg_clf(X) #completely random at the moment
print(X)
loss_function = nn.BCELoss()
# SGD: stochastic gradient descent is used to train/fit the model
optimizer = torch.optim.SGD(logreg_clf.parameters(), lr=lr_rate)
import numpy as np
# from torch.autograd import Variable
#training loop:
epochs = 2001 #how many times we go through the training set
steps = X.size(0) #steps = 4; we have 4 training examples (I know, tiny training set :)
for i in range(epochs):
for j in range(steps):
# randomly sample from the training set:
data_point = np.random.randint(X.size(0))
# store the retrieved datapoint into 2 separate variables of the right shape
x_var = torch.Tensor(X[data_point]).unsqueeze(0)
y_var = torch.Tensor(Y[data_point])
optimizer.zero_grad() # empty (zero) the gradient buffers
y_hat = logreg_clf(x_var) #get the output from the model
print(y_hat)
print(y_var)
loss = loss_function(y_hat, y_var) #calculate the loss
loss.backward() #backprop
optimizer.step() #does the update
if i % 500 == 0:
print ("Epoch: {0}, Loss: {1}, ".format(i, loss.data.numpy()))
| test_data = torch.Tensor(test)
raw_preds = logreg_clf(test_data)
preds = (raw_preds > 0.5).long()
To get the predictions for all test data at once, we first convert the test data to a tensor, then we can make a forward pass with that tensor.
The raw predictions are something like tensor([[0.4795], [0.4749], [0.4960], [0.5006]]).
Then we apply the "risk neutral strategy" i.e. 0.5 as the threshold to get the results (which will be of type torch.bool so we convert it to long).
Equivalently in one line:
preds = (logreg_clf(torch.Tensor(test)) > 0.5).long()
So the predictions are:
tensor([[0], [0], [0], [1]])
wondering if there is a function to automatically get the accuracy or confusion matrix
You can use respective functions from sklearn.metrics:
from sklearn.metrics import accuracy_score, confusion_matrix
# the ground truth for the given test data
truth = torch.Tensor([0, 1, 0, 1]).view(-1, 1)
conf_mat = confusion_matrix(truth, preds)
acc = accuracy_score(truth, preds)
which gives the confusion matrix as
array([[2, 0],
[1, 1]], dtype=int64)
and the accuracy as
0.75
So you have one false negative i.e. the second sample in the test data.
| https://stackoverflow.com/questions/62339013/ |
Cuda version issue while using Detectron2 in Google Colab | I am trying to run the Detectron2 module on Colab using CUDA version 10.0 but since today there have been some issues regarding the versions of Cuda Compiler.
The output I get after running !nvidia-smi is :
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 36C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
And what I get after running !nvcc --version is :
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
I am not able to understand the reason for the mismatch. Also the output from detectron after running !python -m detectron2.utils.collect_env is :
---------------------- ----------------------------------------------------------------------------
sys.platform linux
Python 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
numpy 1.18.5
detectron2 0.1.3 @/content/gdrive/My Drive/Data/Table_Struct/detectron2_repo/detectron2
Compiler GCC 7.5
CUDA compiler CUDA 10.1
detectron2 arch flags sm_60
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.4.0+cu100 @/usr/local/lib/python3.6/dist-packages/torch
PyTorch debug build False
GPU available True
GPU 0 Tesla K80
CUDA_HOME /usr/local/cuda
Pillow 7.0.0
torchvision 0.5.0+cu100 @/usr/local/lib/python3.6/dist-packages/torchvision
torchvision arch flags sm_35, sm_50, sm_60, sm_70, sm_75
fvcore 0.1.1
cv2 4.1.2
---------------------- ----------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CUDA Runtime 10.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.1
- Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
My guess is that the version of CUDA on Colab doesn't match the Detectron2 I am using. If so, how can I change something to make this work on Google Colab?
| The problem was with the compiled Detectron2 Cuda runtime version and once I recompiled Detectron2 the error was solved.
Here is the result from !python -m detectron2.utils.collect_env command:
---------------------- ----------------------------------------------------------------------------
sys.platform linux
Python 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]
numpy 1.18.5
detectron2 0.1.3 @/content/gdrive/My Drive/Data/Table_Struct/detectron2_repo/detectron2
Compiler GCC 7.5
CUDA compiler CUDA 10.0
detectron2 arch flags sm_75
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.4.0+cu100 @/usr/local/lib/python3.6/dist-packages/torch
PyTorch debug build False
GPU available True
GPU 0 Tesla T4
CUDA_HOME /usr/local/cuda
Pillow 7.0.0
torchvision 0.5.0+cu100 @/usr/local/lib/python3.6/dist-packages/torchvision
torchvision arch flags sm_35, sm_50, sm_60, sm_70, sm_75
fvcore 0.1.1
cv2 4.1.2
---------------------- ----------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CUDA Runtime 10.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.3
- Magma 2.5.1
- Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
| https://stackoverflow.com/questions/62341457/ |
Pytorch DataLoader doesn't work with remote interpreter | I have the following error.
Expected: /home/ubuntu/.pycharm_helpers/pydev/pydevd_attach_to_process/attach_linux_amd64.so to exist.
And Here is the code:
import torch_geometric.transforms as T
category = 'Airplane'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'ShapeNet')
transform = T.Compose([
T.RandomTranslate(0.01),
T.RandomRotate(15, axis=0),
T.RandomRotate(15, axis=1),
T.RandomRotate(15, axis=2)
])
pre_transform = T.NormalizeScale()
train_dataset = ShapeNet(path, category, split='trainval', transform=transform,
pre_transform=pre_transform)
test_dataset = ShapeNet(path, category, split='test',
pre_transform=pre_transform)
train_loader = DataLoader(train_dataset, batch_size=12, shuffle=True, num_workers=6)
test_loader = DataLoader(test_dataset, batch_size=12, shuffle=False,
num_workers=6)
When I'm trying to sample from the dataset using the dataloader, the debugger crashes and returns this error.
I have tried deleting the remote helpers, but it didn't solve my problem.
My local machine is running on Windows 10 and the remote one is running on Ubuntu 18.04.
| Answering to myself here,
I've tried to follow this : https://intellij-support.jetbrains.com/hc/en-us/community/posts/360006791159--pycharm-helpers-pydev-pydevd-attach-to-process-attach-linux-amd64-so-undefined-symbol-AttachDebuggerTracing
But it has proven ineffective.
On the contrary, deleting the parameter 'num_workers' from the Dataloader solved this issue. So this is an easy workaround.
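In other words, creating the loaders without the num_workers argument (it defaults to 0, i.e. no worker subprocesses):
train_loader = DataLoader(train_dataset, batch_size=12, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=12, shuffle=False)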
However, I haven't wrapped my head around the problem yet. It's probably that multiprocessing is buggy. But if anyone has any insight into what's going on, I'll gladly take it.
| https://stackoverflow.com/questions/62341906/ |
tensorboard --logdir=runs not working: Abort trap: 6 | I am trying to run tensorboard: tensorboard --logdir=runs.
I have also tried: tensorboard --logdir=runs --host=127.0.0.1.
I am running the command from the terminal from within the directory which contains the runs folder.
I get the following error:
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1367]
CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
libc++abi.dylib: terminating with uncaught exception of type google::protobuf::FatalException:
CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
Abort trap: 6
My Python code contains the following lines:
tb_path = './runs/SimpleLSTM_MNIST'
if os.path.isdir(tb_path):
shutil.rmtree(tb_path)
writer = tb.SummaryWriter(log_dir=tb_path)
My runs folder contains the folder SimpleLSTM_MNIST, which contains events.out.tfevents.1591953948.computername.local.29440.0.
Operating System: MacOS Catalina
How can I resolve the problem?
| Apparently this is a specific issue that occurs when running macOS Catalina, and can be solved by switching to protobuf version 3.8.0 and tensorflow version 2.0.0.
So basically uninstalling tensorflow and protobuf and re-installing with pip3 install protobuf==3.8.0 and pip3 install tensorflow==2.0.0.
| https://stackoverflow.com/questions/62342221/ |
Missing/unexpected keys in resnet50 with pytorch | I get the following errors and no idea why:
Missing key(s) in state_dict: "layer2.0.layer.0.inplace.0.weight", "layer2.0.layer.0.inplace.0.bias",...
Unexpected key(s) in state_dict: "layer2.0.layer.0.0.weight", "layer2.0.layer.0.0.bias",...
The channel sizes I set seem to be what I wanted; however, I don't see where the mistake is.
import torch.nn as nn
#1x1 convolution
def conv1x1(in_channels: object, out_channels: object, stride: object, padding: object) -> object:
model = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, padding=padding),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
return model
# 3x3 convolution
def conv3x3(in_channels: object, out_channels: object, stride: object, padding: object) -> object:
model = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=padding),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
return model
class ResidualBlock(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, downsample=False):
super(ResidualBlock, self).__init__()
self.downsample = downsample
if self.downsample:
self.layer = nn.Sequential(
nn.ReLU(conv1x1(in_channels, middle_channels,1,0)),
nn.ReLU(conv3x3(middle_channels, middle_channels,1,0)),
nn.ReLU(conv1x1(middle_channels,out_channels,1,0))
)
self.downsize = conv1x1(in_channels, out_channels, 2, 0)
else:
self.layer = nn.Sequential(
nn.ReLU(conv1x1(in_channels,middle_channels,2,0)),
nn.ReLU(conv3x3(middle_channels,middle_channels,2,0)),
nn.ReLU(conv1x1(middle_channels,out_channels,2,0))
)
self.make_equal_channel = conv1x1(in_channels, out_channels, 1, 0)
def forward(self, x):
if self.downsample:
out = self.layer(x)
x = self.downsize(x)
return out + x
else:
out = self.layer(x)
if x.size() is not out.size():
x = self.make_equal_channel(x)
return out + x
class ResNet50_layer4(nn.Module):
def __init__(self, num_classes= 10 ): # Hint : How many classes in Cifar-10 dataset?
super(ResNet50_layer4, self).__init__()
self.layer1 = nn.Sequential(
#in_channels, out_channels, kernel_size, stride, padding
nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3),
# Hint : Through this conv-layer, the input image size is halved.
# Consider stride, kernel size, padding and input & output channel sizes.
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2, padding=0)
)
self.layer2 = nn.Sequential(
#in_channels, middle_channels, out_channels, downsample=False
ResidualBlock(in_channels=64, middle_channels=64, out_channels=256, downsample=False),
ResidualBlock(in_channels=256, middle_channels=64, out_channels=256, downsample=False),
ResidualBlock(in_channels=256, middle_channels=64,out_channels=256, downsample=True)
)
self.layer3 = nn.Sequential(
ResidualBlock(in_channels=256, middle_channels=128, out_channels=512, downsample=False),
ResidualBlock(in_channels=512, middle_channels=128, out_channels=512, downsample=False),
ResidualBlock(in_channels=512, middle_channels=128, out_channels=512, downsample=False),
ResidualBlock(in_channels=512, middle_channels=128, out_channels=512, downsample=True)
)
self.layer4 = nn.Sequential(
ResidualBlock(in_channels=512, middle_channels=256, out_channels=1024, downsample=False),
ResidualBlock(in_channels=1024, middle_channels=256, out_channels=1024, downsample=False),
ResidualBlock(in_channels=1024, middle_channels=256, out_channels=1024, downsample=False),
ResidualBlock(in_channels=1024, middle_channels=256, out_channels=1024, downsample=False),
ResidualBlock(in_channels=1024, middle_channels=256, out_channels=1024, downsample=False),
ResidualBlock(in_channels=1024, middle_channels=256, out_channels=1024, downsample=False)
)
self.fc = nn.Linear(1024, 10)
self.avgpool = nn.AvgPool2d(7, stride=1)
for m in self.modules():
if isinstance(m, nn.Linear):
nn.init.xavier_uniform_(m.weight.data)
elif isinstance(m, nn.Conv2d):
nn.init.xavier_uniform_(m.weight.data)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = self.avgpool(out)
out = out.view(out.size()[0], -1)
out = self.fc(out)
return out
| You have changed your model, and as a result, the keys have changed. So, you are getting a mismatch error. I think you have added nn.ReLU() in the sequential wrappers in ResidualBlock.
In your ResidualBlock, you have:
self.layer = nn.Sequential(
nn.ReLU(conv1x1(in_channels, middle_channels, 2, 0)),
nn.ReLU(conv3x3(middle_channels, middle_channels, 2, 0)),
nn.ReLU(conv1x1(middle_channels, out_channels, 2, 0))
)
However, in your conv1x1 and conv3x3, you already have nn.ReLU(inplace=True) as the last layer in nn.Sequential. Hence, having another nn.ReLU() in
nn.ReLU(conv1x1(in_channels, middle_channels, 2, 0))
seems unnecessary. If you remove the nn.ReLU(), then the keys will match.
I revise ResidualBlock as follows.
class ResidualBlock(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, downsample=False):
super(ResidualBlock, self).__init__()
self.downsample = downsample
if self.downsample:
self.layer = nn.Sequential(
conv1x1(in_channels, middle_channels, 1, 0),
conv3x3(middle_channels, middle_channels, 1, 0),
conv1x1(middle_channels, out_channels, 1, 0)
)
self.downsize = conv1x1(in_channels, out_channels, 2, 0)
else:
self.layer = nn.Sequential(
conv1x1(in_channels, middle_channels, 2, 0),
conv3x3(middle_channels, middle_channels, 2, 0),
conv1x1(middle_channels, out_channels, 2, 0)
)
self.make_equal_channel = conv1x1(in_channels, out_channels, 1, 0)
def forward(self, x):
'''Your forward method description'''
Now, let's test.
model = ResNet50_layer4()
for k, v in model.named_parameters():
print(k)
Output:
layer1.0.weight
layer1.0.bias
layer1.1.weight
layer1.1.bias
layer2.0.layer.0.0.weight
layer2.0.layer.0.0.bias
layer2.0.layer.0.1.weight
layer2.0.layer.0.1.bias
...
...
If you want to still use an additional nn.ReLU(), you can train the revised model and save weights, and then try to load the weights back, it will work.
| https://stackoverflow.com/questions/62343301/ |
"AttributeError: 'classificadorFinal' object has no attribute 'log_softmax'" when trying to train a neural network using pytorch | I'm learning to use pytorch and I got an error that won't let me continue programming.
My code:
import torch.nn as nn
from skorch import NeuralNetClassifier #integracao com sklearn
from sklearn.model_selection import cross_val_score,GridSearchCV
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
import torch
import torch.nn.functional as F
from torch import nn,optim
class classificadorFinal(nn.Module):
def __init__(self, activation=F.tanh, neurons=16, initializer=torch.nn.init.uniform_, dropout=0.3):
##from melhores_parametros
super().__init__()
self.dense0 = nn.Linear(4, neurons)
initializer(self.dense0.weight)
self.activation0 = activation
self.dense1 = nn.Linear(neurons, neurons)
initializer(self.dense1.weight)
self.activation1 = activation
self.dense2 = nn.Linear(neurons, 3)
self.dropout = nn.Dropout(dropout)
def forward(self, X):
X = self.dense0(X)
X = self.activation0(X)
X = self.dropout(X)
X = self.dense1(X)
X = self.activation1(X)
X = self.dropout(X)
X = self.dense2(X)
return X
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(classificador.parameters(), lr = 0.001, weight_decay = 0.0001)
#treino
for epoch in range(200):##from melhores_parametros
running_loss = 0.
running_accuracy = 0.
for data in train_loader:
inputs, labels = data
optimizer.zero_grad()
outputs = classificadorFinal(inputs)
loss = criterion(outputs, labels)###erro
loss.backward()
optimizer.step()
running_loss += loss.item()
ps = F.softmax(outputs)
top_p, top_class = ps.topk(k = 1, dim = 1)
equals = top_class == labels.view(*top_class.shape)
running_accuracy += torch.mean(equals.type(torch.float))
print('Época {:3d}: perda {:3.5f} - precisão {:3.5f}'.format(epoch + 1, running_loss/len(train_loader), running_accuracy/len(train_loader)))
The error occurs exactly on loss = criterion(outputs, labels):
AttributeError: 'classificadorFinal' object has no attribute 'log_softmax'
I found out this error is well known, but I did not understand the proposed solution:
disable aux_logits when the model is created aux_logits=False.
A little help, please!
| The outputs are not actually the output of the model, but rather the model itself. classificadorFinal is the class, calling it creates an object/instance of that class, and inputs will be the first argument to the __init__ method, namely activation.
# Creates an instance of the model
outputs = classificadorFinal(inputs)
You first have to create the model (an instance), which should be done once, then call that model with the inputs. It looks like you have already created the model before, as you are using classificador.parameters() for the optimiser, hence classificador is presumably the instance of the model. You need to call classificador (instance) not classificadorFinal (class) to create the outputs.
# Call the instance of the model, not the class
outputs = classificador(inputs)
| https://stackoverflow.com/questions/62343793/ |
Data format and actual shape | I'm trying to migrate TensorFlow checkpoint weights to PyTorch.
When I extract some weights with cp.load_variable(<CKPT>, <FIELD_NAME>), I get a 4D list ordered as HWCN, for example [1, 1, 512, 1024] which is clearly HWCN.
However, all convolution blocks data_format are set to NHWC.
So, the question is, why is there a mismatch?
What should I believe? Is the 4D list from cp.load_variable correct, and is all that's left to do to permute the dimensions?
Thanks!
| The weights are not given as HWCN, as the weights do not have any batch dimension (N), otherwise that would apply a different weight for each sample in the batch. The shape is [kernel_height, kernel_width, in_channels, out_channels]. There is no mismatch, because data_format specifies which format the input and output use.
In PyTorch the weight of convolutions is given as [out_channels, in_channels, kernel_height, kernel_width], therefore you only need to permute the dimensions.
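For example, a quick sketch using the shape from the question (dummy values, just to show the permutation):
import torch

tf_weight = torch.randn(1, 1, 512, 1024)                # [kernel_h, kernel_w, in_channels, out_channels]
pt_weight = tf_weight.permute(3, 2, 0, 1).contiguous()  # [out_channels, in_channels, kernel_h, kernel_w]
print(pt_weight.shape)                                  # torch.Size([1024, 512, 1, 1])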
| https://stackoverflow.com/questions/62344931/ |
Scatter tensor in pytorch along the rows | I want to scatter tensors in granularities of rows.
For example consider,
Input = torch.tensor([[2, 3], [3, 4], [4, 5]])
I want to scatter
S = torch.tensor([[1,2],[1,2]])
to indices
I = torch.tensor([0,2])
I expect the output to be torch.tensor([[1, 2], [3, 4], [1, 2]]).
Here S[0] is scattered to Input[I[0]], similarly S[1] is scattered to Input[I[1]]
How can I achieve this? Instead of looping over the row in S, I am looking for a more efficient way.
| Do input[I] = S
Example:
input = torch.tensor([[2, 3], [3, 4], [4, 5]])
S = torch.tensor([[1,2],[1,2]])
I = torch.tensor([0,2])
input[I] = S
input
tensor([[1, 2],
[3, 4],
[1, 2]])
| https://stackoverflow.com/questions/62350436/ |
Convert weight and bias to sparse tensor pytorch | I'm trying to convert torch.nn.Parameters to sparse tensor. Pytorch documents say that Parameters is a Tensor's subclass. Tensor support to_sparse method but if I convert a Parameters to sparse, it will give me:
TypeError: cannot assign 'torch.cuda.sparse.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
Is there a way to bypass this and use sparse tensor for Parameters?
Here is example code to produce the problem:
for name, module in net.named_modules():
if isinstance(module, torch.nn.Conv2d):
module.weight = module.weight.data.to_sparse()
module.bias = module.bias.data.to_sparse()
| torch.Tensor.to_sparse() returns a sparse copy of the tensor which cannot be assigned to module.weight since this is an instance of torch.nn.Parameter. So, you should rather do:
module.weight = torch.nn.Parameter(module.weight.data.to_sparse())
module.bias = torch.nn.Parameter(module.bias.data.to_sparse())
Please note that Parameters are a specific type of Tensor that is marked as being a parameter from an nn.Module, so they are different from ordinary Tensors.
| https://stackoverflow.com/questions/62355252/ |
Instance Normalization with batch size 1 | I am really confused with the meaning of Instance Norm and whether I can use it with a batch size of 1. I am using PyTorch and nothing in the documentation says that batch size should be greater than 1.
I know that for BatchNorm the performance is adversely affected when batch size is less than 8 and hence it puts a sort of soft bound on the batch size. However, I did not see any such analysis on Instance Norm and am a bit confused now. Should I remove the norm layer if my batch size is 1 then?
| A good overview of the different norms is shown in the Group Normalization paper.
Instance normalisation is summarised as:
[...] IN computes µ and σ along the (H, W) axes for each sample and each channel.
The mean and standard deviation are computed on the spatial dimensions (H, W) only and are independent of the batch size and channels (there are N x C different norms). Hence, you can use it with a batch size of 1.
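A quick sanity check (a minimal made-up example, independent of your model):
import torch
import torch.nn as nn

norm = nn.InstanceNorm2d(num_features=64)
x = torch.randn(1, 64, 32, 32)   # batch size 1 is fine
out = norm(x)
print(out.shape)                 # torch.Size([1, 64, 32, 32])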
| https://stackoverflow.com/questions/62356985/ |
What is the appropriate way to use BCE loss with ResNet outputs? | The title explains the overall problem, but for some elaboration:
I'm using torchvision.models.resnet18() to run an anomaly detection scheme. I initialize the model by doing:
net = torchvision.models.resnet18(num_classes=2)
since in my particular setting 0 equals normal samples and 1 equals anomalous samples.
The output from my model is of shape (16, 2) (batch size is 16) and labels are of size (16, 1). This gives me the error that the two input tensors are of inappropriate shape.
In order to solve this, I tried something like:
>>> new_output = torch.argmax(output, dim=1)
Which gives me the appropriate shape, but running loss = nn.BCELoss(new_output, labels) gives me the error:
RuntimeError: bool value of Tensor with more than one value is ambiguous
What is the appropriate way for me to approach this issue? Thanks.
Edit
I've also tried using nn.CrossEntropyLoss as well, but am getting the same RuntimeError.
More specifically, I tried nn.CrossEntropyLoss(output, label) and nn.CrossEntropyLoss(output, label.flatten()).
| If you want to use BCELoss, the output shape should be (16, 1) instead of (16, 2) even though you have two classes. You may consider reading this excellent writing to understand binary cross-entropy loss.
Since you are getting output from resnet18 with shape (16, 2), you should rather use CrossEntropyLoss where you can give (16, 2) output and label of shape (16).
You should use CrossEntropyLoss as follows.
loss_crit = nn.CrossEntropyLoss()
loss = loss_crit(output, label)
where output = (16, 2) and label = (16). Please note, the label should contain either 0 or 1.
Please see the example provided (copied below) in the official documentation.
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
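For your particular shapes, where output is (16, 2) and the labels are (16, 1), a sketch of the adaptation (assuming the labels hold 0/1 values) would be:
loss_crit = nn.CrossEntropyLoss()
loss = loss_crit(output, label.squeeze(1).long())   # squeeze (16, 1) -> (16) and cast to long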
| https://stackoverflow.com/questions/62357260/ |
.eq() method is not giving same result as [ == ] | I am having a hard time understanding why the results are not the same for the following code.
I am trying to find the accuracy of a model, but the first item gives a result of tensor(66.) and the second item gives a result of tensor(105).
(y_test[y_test==y_predicted_cls].sum(), y_predicted_cls.eq(y_test).sum())
Both of the tensors (y_test and y_predicted_cls) have the same data type, torch.float32.
Output:
(tensor(66.), tensor(105))
I thought that first one would be equal to the second one.
| The two expressions don't compute the same quantity. y_test[y_test==y_predicted_cls] selects the elements of y_test at the positions where the prediction matches, and .sum() then adds up those selected values - matched zeros contribute nothing, so you effectively count only the positions where both tensors are 1, which is why you get tensor(66.). y_predicted_cls.eq(y_test) instead returns a boolean mask, and .sum() counts every position where the two tensors agree, matched zeros included, giving tensor(105). For accuracy you want the second form (or equivalently (y_test == y_predicted_cls).sum()).
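A quick toy example (made-up values) showing the difference:
import torch

y_test = torch.tensor([1., 0., 1., 0.])
y_pred = torch.tensor([1., 0., 0., 0.])

y_test[y_test == y_pred].sum()   # tensor(1.) -> sums the selected y_test values
y_pred.eq(y_test).sum()          # tensor(3)  -> counts all matching positions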
| https://stackoverflow.com/questions/62359037/ |
Pytorch says that CUDA is not available (on Ubuntu) | I'm trying to run Pytorch on a laptop that I have. It's an older model but it does have an Nvidia graphics card. I realize it is probably not going to be sufficient for real machine learning but I am trying to do it so I can learn the process of getting CUDA installed.
I have followed the steps on the installation guide for Ubuntu 18.04 (my specific distribution is Xubuntu).
My graphics card is a GeForce 845M, verified by lspci | grep nvidia:
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce 845M] (rev a2)
01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)
I also have gcc 7.5 installed, verified by gcc --version
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
And I have the correct headers installed, verified by trying to install them with sudo apt-get install linux-headers-$(uname -r):
Reading package lists... Done
Building dependency tree
Reading state information... Done
linux-headers-4.15.0-106-generic is already the newest version (4.15.0-106.107).
I then followed the installation instructions using a local .deb for version 10.1.
Now, when I run nvidia-smi, I get:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.00 Driver Version: 418.87.00 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 845M On | 00000000:01:00.0 Off | N/A |
| N/A 40C P0 N/A / N/A | 88MiB / 2004MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 982 G /usr/lib/xorg/Xorg 87MiB |
+-----------------------------------------------------------------------------+
and when I run nvcc -V I get:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
I then performed the post-installation instructions from section 6.1, and so as a result, echo $PATH looks like this:
/home/isaek/anaconda3/envs/stylegan2_pytorch/bin:/home/isaek/anaconda3/bin:/home/isaek/anaconda3/condabin:/usr/local/cuda-10.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
echo $LD_LIBRARY_PATH looks like this:
/usr/local/cuda-10.1/lib64
and my /etc/udev/rules.d/40-vm-hotadd.rules file looks like this:
# On Hyper-V and Xen Virtual Machines we want to add memory and cpus as soon as they appear
ATTR{[dmi/id]sys_vendor}=="Microsoft Corporation", ATTR{[dmi/id]product_name}=="Virtual Machine", GOTO="vm_hotadd_apply"
ATTR{[dmi/id]sys_vendor}=="Xen", GOTO="vm_hotadd_apply"
GOTO="vm_hotadd_end"
LABEL="vm_hotadd_apply"
# Memory hotadd request
# CPU hotadd request
SUBSYSTEM=="cpu", ACTION=="add", DEVPATH=="/devices/system/cpu/cpu[0-9]*", TEST=="online", ATTR{online}="1"
LABEL="vm_hotadd_end"
After all of this, I even compiled and ran the samples. ./deviceQuery returns:
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce 845M"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 2004 MBytes (2101870592 bytes)
( 4) Multiprocessors, (128) CUDA Cores/MP: 512 CUDA Cores
GPU Max Clock rate: 863 MHz (0.86 GHz)
Memory Clock rate: 1001 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 1048576 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS
and ./bandwidthTest returns:
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: GeForce 845M
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 11.7
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 11.8
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 14.5
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
But after all of this, this Python snippet (in a conda environment with all dependencies installed):
import torch
torch.cuda.is_available()
returns False
Does anybody have any idea about how to resolve this? I've tried to add /usr/local/cuda-10.1/bin to etc/environment like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
PATH=$PATH:/usr/local/cuda-10.1/bin
And restarting the terminal, but that didn't fix it. I really don't know what else to try.
EDIT - Results of collect_env for @kHarshit
Collecting environment information...
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce 845M
Nvidia driver version: 418.87.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.5
[pip] pytorch-ranger==0.1.1
[pip] stylegan2-pytorch==0.12.0
[pip] torch==1.5.0
[pip] torch-optimizer==0.0.1a12
[pip] torchvision==0.6.0
[pip] vector-quantize-pytorch==0.0.2
[conda] numpy 1.18.5 pypi_0 pypi
[conda] pytorch-ranger 0.1.1 pypi_0 pypi
[conda] stylegan2-pytorch 0.12.0 pypi_0 pypi
[conda] torch 1.5.0 pypi_0 pypi
[conda] torch-optimizer 0.0.1a12 pypi_0 pypi
[conda] torchvision 0.6.0 pypi_0 pypi
[conda] vector-quantize-pytorch 0.0.2 pypi_0 pypi
| PyTorch doesn't use the system's CUDA library. When you install PyTorch using the precompiled binaries using either pip or conda it is shipped with a copy of the specified version of the CUDA library which is installed locally. In fact, you don't even need to install CUDA on your system to use PyTorch with CUDA support.
There are two scenarios which could have caused your issue.
You installed the CPU only version of PyTorch. In this case PyTorch wasn't compiled with CUDA support so it didn't support CUDA.
You installed the CUDA 10.2 version of PyTorch. In this case the problem is that your graphics card currently uses the 418.87 drivers, which only support up to CUDA 10.1. The two potential fixes in this case would be to either install updated drivers (version >= 440.33 according to Table 2) or to install a version of PyTorch compiled against CUDA 10.1.
To determine the appropriate command to use when installing PyTorch you can use the handy widget in the "Install PyTorch" section at pytorch.org. Just select the appropriate operating system, package manager, and CUDA version then run the recommended command.
In your case one solution was to use
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
which explicitly specifies to conda that you want to install the version of PyTorch compiled against CUDA 10.1.
For more information about PyTorch CUDA compatibility with respect drivers and hardware see this answer.
Edit After you added the output of collect_env we can see that the problem was that you had the CUDA 10.2 version of PyTorch installed. Based on that an alternative solution would have been to update the graphics driver as elaborated in item 2 and the linked answer.
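A quick way to check which CUDA version your installed PyTorch build expects, and whether it can see the GPU:
import torch
print(torch.__version__)          # e.g. 1.5.0
print(torch.version.cuda)         # CUDA version the binary was built with, e.g. '10.2' (None for CPU-only builds)
print(torch.cuda.is_available())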
| https://stackoverflow.com/questions/62359175/ |
Why do 'loss.backward()' and 'weight.grad' return a tensor containing all zeros? | When I run 'loss.backward()' and 'weight.grad' I get a tensor containing all zeros. Also, 'weight.grad_fn' retruns NONE.
However, it all seems to return the correct result for the second layer 'w2'.
If I play with simple operations such as x*2 or x**2, 'backward()' and '.grad' return correct results.
Here's my code:
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
# Getting MNIST data
num_workers = 0
batch_size = 64
transform = transforms.ToTensor()
train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
dataiter = iter(train_loader)
images, labels = dataiter.next()
#####################################
#####################################
#### NN Part
def activation(x):
return 1/(1+torch.exp(-x))
inputs = torch.from_numpy(images.view())
# Flatten the inputs format from (64,1,28,28) into (64,784)
inputs = inputs.reshape(images.shape[0], int(images.shape[1]*images.shape[2]*images.shape[3]))
w1 = torch.randn(784, 256, requires_grad=True)# n_input, n_hidden
b1 = torch.randn(256)# n_hidden
w2 = torch.randn(256, 10, requires_grad=True)# n_hidden, n_output
b2 = torch.randn(10)# n_output
h = activation(torch.mm(inputs, w1) + b1)
y = torch.mm(h, w2) + b2
#print(h)
#print(y)
y.sum().backward()
print(w1.grad)
print(w1.grad_fn)
#print(w2.grad)
#print(w2.grad_fn)
By the way it gives me the same problem if I try to run it this way also:
images = images.reshape(images.shape[0], -1)
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
logits = model(images)
criterion = nn.NLLLoss()
loss = criterion(logits, labels)
print(loss)
print(loss.grad_fn)
print('Before backward pass: ', model[0].weight.grad)
loss.backward()
print('After: ', model[0].weight.grad)
#print('After: ', model[2].weight.grad)
#print('After: ', model[4].weight.grad)
| The gradients of w1 are not all zero, there are simply a lot of zeros, especially around the border, because the MNIST images have a lot of black pixels (zeros). When multiplying with zero, the resulting gradients are also zero.
By printing w1.grad you only see a very small part of the values (borders), and you just can't see the non-zero values.
w1.grad
# => tensor([[0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# ...,
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.],
# [0., 0., 0., ..., 0., 0., 0.]])
# Indices of non-zero elements
w1.grad.nonzero()
# => tensor([[ 71, 0],
# [ 71, 1],
# [ 71, 2],
# ...,
# [746, 253],
# [746, 254],
# [746, 255]])
| https://stackoverflow.com/questions/62361956/ |
Get probability from predicted class pytorch | I have a network like this for image classification; for now I have 2 classes:
class ActionNet(Module):
def __init__(self, num_class=4):
super(ActionNet, self).__init__()
self.cnn_layer = Sequential(
#conv1
Conv2d(in_channels=1, out_channels=32, kernel_size=1, bias=False),
BatchNorm2d(32),
PReLU(num_parameters=32),
MaxPool2d(kernel_size=3),
#conv2
Conv2d(in_channels=32, out_channels=64, kernel_size=1, bias=False),
BatchNorm2d(64),
PReLU(num_parameters=64),
MaxPool2d(kernel_size=3),
#flatten
Flatten(),
Linear(576, 128),
BatchNorm1d(128),
ReLU(inplace=True),
Dropout(0.5),
Linear(128, num_class)
)
def forward(self, x):
x = self.cnn_layer(x)
return x
then after i training my network, i predict the image using this code:
def predict_image(image):
input = torch.from_numpy(image)
input = input.unsqueeze(1)
input = input.to(device)
output = model(input)
index = output.data.cpu().numpy().argmax()
return index
How do I get the probabilities of all classes for the predicted image, so that the result would be an array of indices with probabilities like 0=0.1, 1=0.7?
| To get probabilities from the model output, you can use the softmax function.
Try this
import torch.nn.functional as F
...
prob = F.softmax(output, dim=1)
...
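Plugged into your predict function, a sketch could look like this:
import torch.nn.functional as F

def predict_image(image):
    input = torch.from_numpy(image)
    input = input.unsqueeze(1)
    input = input.to(device)
    with torch.no_grad():
        output = model(input)
        prob = F.softmax(output, dim=1)   # per-class probabilities, each row sums to 1
    index = prob.argmax(dim=1)
    return index.cpu().numpy(), prob.cpu().numpy()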
| https://stackoverflow.com/questions/62364328/ |
PyTorch demands image-like dimensionality in the DataLoader even though no images are used | I cannot get a simple neural network to run with a custom dummy dataset. You can find the error message at the very bottom of this question.
I would like to deeply understand how PyTorch processes data inputs and therefore built a simple dataset that gets two series of boolean values (encoded as 0s and 1s) as inputs, with OR, AND, and XOR series as targets. Everything should work with a custom Dataset and a DataLoader (for learning purposes). The input data looks as follows (naturally, only one target column is used at a time):
column_1 column_2 or and xor
0 1 1 1 1 0
1 0 1 1 0 1
2 0 1 1 0 1
3 0 1 1 0 1
4 1 0 1 0 1
... ... ... .. ... ...
9995 1 1 1 1 0
9996 1 0 1 0 1
9997 1 0 1 0 1
9998 0 1 1 0 1
9999 0 1 1 0 1
So I would like to build a neural network that represents either an OR gate, an AND gate, or an XOR gate.
Can someone shed some light on why one-dimensional data does not seem to be accepted in the iterator? It seems like image-like data is assumed? Is it even possible to solve this problem with a custom Dataset and with a DataLoader or will I have to compromise to not use a DataLoader (like in this intro to logic gates in PyTorch)?
Minimum working example with XOR as the target column:
# Imports
from pandas import read_csv, DataFrame
import torch
from torch.utils.data import Dataset, DataLoader
import torch.nn as torch_nn
import torch.nn.functional as torch_functional
import torchvision
import random
# Classes
class CustomDataset(Dataset):
def __init__(self, data, target_column_name, transform=None):
self.dataframe = data
self.x = data[['column_1', 'column_2']].values
self.y = data[[target_column_name]].values
self.n_samples = len(data)
self.transform = transform
def __len__(self):
return self.n_samples
def __getitem__(self, index):
x = self.x[index]
y = self.y[index]
if not self.transform == None:
return (self.transform(x), self.transform(y))
return (x, y)
class CustomNet(torch_nn.Module):
def __init__(self):
super().__init__()
self.fully_connected_input_layer = torch_nn.Linear(2, 5)
self.fully_connected_hidden_layer_1 = torch_nn.Linear(5, 5)
self.fully_connected_hidden_layer_2 = torch_nn.Linear(5, 5)
self.fully_connected_output_layer = torch_nn.Linear(5, 2)
def forward(self, data):
data = torch_functional.relu(self.fully_connected_input_layer(data))
data = torch_functional.relu(self.fully_connected_hidden_layer_1(data))
data = torch_functional.relu(self.fully_connected_hidden_layer_2(data))
data = torch_functional.log_softmax(
self.fully_connected_output_layer(data),
dim=1
)
# data = data.squeeze(1)
return data
# Global variables
NUMBER_OF_OBSERVATIONS = 10000
N_TRAIN_OBSERVATIONS = 7000
BATCH_SIZE = 4
N_EPOCHS = 3
random.seed(42)
# Generating logical gate data
## Generating two columns with random 0s and 1s.
df_data = DataFrame({
'column_1': random.choices([0, 1], k=NUMBER_OF_OBSERVATIONS),
'column_2': random.choices([0, 1], k=NUMBER_OF_OBSERVATIONS)
})
## Adding the logic gate results of the previously generated two columns.
df_data.loc[:,'or'] = (df_data['column_1'] == 1) | (df_data['column_2'] == 1)
df_data.loc[:,'or'] = df_data.loc[:,'or'].astype(int)
df_data.loc[:,'and'] = (df_data['column_1'] == 1) & (df_data['column_2'] == 1).astype(int)
df_data.loc[:,'and'] = df_data.loc[:,'and'].astype(int)
df_data.loc[:,'xor'] = ((df_data['column_1'] == 1) & (df_data['column_2'] != 1)) | ((df_data['column_1'] != 1) & (df_data['column_2'] == 1)).astype(int)
df_data.loc[:,'xor'] = df_data.loc[:,'xor'].astype(int)
print(df_data.info())
print(df_data)
# Instantiating a CustomDataSet object and a DataLoader object.
dataset_data = CustomDataset(df_data, target_column_name='xor', transform=torchvision.transforms.ToTensor())
dataset_data_train, dataset_data_test = torch.utils.data.random_split(
dataset_data,
[N_TRAIN_OBSERVATIONS, len(dataset_data)-N_TRAIN_OBSERVATIONS]
)
dataloader_data_train = DataLoader(
dataset=dataset_data_train,
batch_size=BATCH_SIZE,
shuffle=True
)
dataloader_data_test = DataLoader(
dataset=dataset_data_test,
batch_size=BATCH_SIZE,
shuffle=True
)
# Instantiating the neural network
custom_net = CustomNet()
# Running one epoch
for epoch in range(N_EPOCHS):
for data in dataloader_data_train:
X, y = data[0].float(), data[1].float()
net.zero_grad()
output = net(X)
print(output)
print(y)
loss = torch_functional.nll_loss(output, y)
loss.backward()
optimizer.step()
break
Error message:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-f2833deb4b11> in <module>
95 # Running one epoch
96 for epoch in range(N_EPOCHS):
---> 97 for data in dataloader_data_train:
98 X, y = data[0].float(), data[1].float()
99 net.zero_grad()
~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
343
344 def __next__(self):
--> 345 data = self._next_data()
346 self._num_yielded += 1
347 if self._dataset_kind == _DatasetKind.Iterable and \
~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
383 def _next_data(self):
384 index = self._next_index() # may raise StopIteration
--> 385 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
386 if self._pin_memory:
387 data = _utils.pin_memory.pin_memory(data)
~/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataset.py in __getitem__(self, idx)
255
256 def __getitem__(self, idx):
--> 257 return self.dataset[self.indices[idx]]
258
259 def __len__(self):
<ipython-input-2-f2833deb4b11> in __getitem__(self, index)
26
27 if not self.transform == None:
---> 28 return (self.transform(x), self.transform(y))
29 return (x, y)
30
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py in __call__(self, pic)
90 Tensor: Converted image.
91 """
---> 92 return F.to_tensor(pic)
93
94 def __repr__(self):
~/anaconda3/lib/python3.7/site-packages/torchvision/transforms/functional.py in to_tensor(pic)
43
44 if _is_numpy(pic) and not _is_numpy_image(pic):
---> 45 raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))
46
47 if isinstance(pic, np.ndarray):
ValueError: pic should be 2/3 dimensional. Got 1 dimensions.
| The problem is that you're using a specialized conversion routine from the Torchvision library, torchvision.transforms.ToTensor. You should just use torch.from_numpy instead.
Also note that .values on Pandas objects is deprecated. You should use .to_numpy instead:
import pandas as pd
import torch
x_pandas = pd.Series([0.0, 0.5, 1.0])
x_numpy = x_pandas.to_numpy()
x_torch = torch.from_numpy(x_numpy)
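Applied to your CustomDataset, a sketch of the change is simply to pass a small wrapper around torch.from_numpy as the transform instead of torchvision.transforms.ToTensor:
dataset_data = CustomDataset(
    df_data,
    target_column_name='xor',
    transform=lambda arr: torch.from_numpy(arr).float()
)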
| https://stackoverflow.com/questions/62372055/ |
Delete an element from torch.Tensor | I'm trying to delete an item from a tensor.
In the example below, How can I remove the third item from the tensor ?
tensor([[-5.1949, -6.2621, -6.2051, -5.8983, -6.3586, -6.2434, -5.8923, -6.1901,
-6.5713, -6.2396, -6.1227, -6.4196, -3.4311, -6.8903, -6.1248, -6.3813,
-6.0152, -6.7449, -6.0523, -6.4341, -6.8579, -6.1961, -6.5564, -6.6520,
-5.9976, -6.3637, -5.7560, -6.7946, -5.4101, -6.1310, -3.3249, -6.4584,
-6.2202, -6.3663, -6.9293, -6.9262]], grad_fn=<SqueezeBackward1>)
| You can slice the tensor around the index you want to remove and then concatenate the two parts.
t.shape
torch.Size([1, 36])
t = torch.cat((t[:,:2], t[:,3:]), axis = 1)  # drop index 2, i.e. the third item
t.shape
torch.Size([1, 35])
| https://stackoverflow.com/questions/62372762/ |
Understanding input shape to PyTorch conv1D? | This seems to be one of the common questions on here (1, 2, 3), but I am still struggling to define the right shape for input to PyTorch conv1D.
I have text sequences of length 512 (number of tokens per sequence) with each token being represented by a vector of length 768 (embedding). The batch size I am using is 6.
So my input tensor to conv1D is of shape [6, 512, 768].
input = torch.randn(6, 512, 768)
Now, I want to convolve over the length of my sequence (512) with a kernel size of 2 using the conv1D layer from PyTorch.
Understanding 1:
I assumed that "in_channels" are the embedding dimension of the conv1D layer. If so, then a conv1D layer will be defined in this way where
in_channels = embedding dimension (768)
out_channels = 100 (arbitrary number)
kernel = 2
convolution_layer = nn.conv1D(768, 100, 2)
feature_map = convolution_layer(input)
But with this assumption, I get the following error:
RuntimeError: Given groups=1, weight of size 100 768 2, expected input `[4, 512, 768]` to have 768 channels, but got 512 channels instead
Understanding 2:
Then I assumed that "in_channels" is the sequence length of the input sequence. If so, then a conv1D layer will be defined in this way where
in_channels = sequence length (512)
out_channels = 100 (arbitrary number)
kernel = 2
convolution_layer = nn.conv1D(512, 100, 2)
feature_map = convolution_layer(input)
This works fine and I get an output feature map of dimension [batch_size, 100, 767]. However, I am confused. Shouldn't the convolutional layer convolve over the sequence length of 512 and output a feature map of dimension [batch_size, 100, 511]?
I will be really grateful for your help.
| In pytorch your input shape of [6, 512, 768] should actually be [6, 768, 512] where the feature length is represented by the channel dimension and sequence length is the length dimension. Then you can define your conv1d with in/out channels of 768 and 100 respectively to get an output of [6, 100, 511].
Given an input of shape [6, 512, 768] you can convert it to the correct shape with Tensor.transpose.
input = input.transpose(1, 2).contiguous()
The .contiguous() ensures the memory of the tensor is stored contiguously which helps avoid potential issues during processing.
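Putting it together with your numbers (a small self-contained sketch):
import torch
import torch.nn as nn

x = torch.randn(6, 512, 768)          # (batch, seq_len, embedding_dim)
x = x.transpose(1, 2).contiguous()    # -> (6, 768, 512): channels = embedding dim
conv = nn.Conv1d(in_channels=768, out_channels=100, kernel_size=2)
out = conv(x)
print(out.shape)                      # torch.Size([6, 100, 511])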
| https://stackoverflow.com/questions/62372938/ |
Why does the output from VQ-Wav2Vec from FairSeq missing frames? | I am using the fairseq library to run an example code for feature extraction with the VQ-Wav2Vec code as written below:
In [6]: import torch
...: from fairseq.models.wav2vec import Wav2VecModel
In [7]: cp = torch.load('wav2vec_models/checkpoint_best.pt')
...: model = Wav2VecModel.build_model(cp['args'], task=None)
...: model.load_state_dict(cp['model'])
...: model.eval()
In [9]: wav_input_16khz = torch.randn(1,10000)
...: z = model.feature_extractor(wav_input_16khz)
...: f, idxs = model.vector_quantizer.forward_idx(z)
...: print(idxs.shape, f.shape)
>>>> torch.Size([1, 60, 4]) torch.Size([1, 512, 60])
My understanding is that the vq-wav2vec processes every 10ms of input speech (assumed to be sampled at 16K samples / sec) samples and outputs a feature vector of size [512] samples for each of these 10ms of speech. So given that the input speech is 10000 samples, we are supposed to get 62 frames ( 62 * 160 = 9920 samples).
Why do I see only 60 frames?
| From the article (arxiv.org/pdf/1904.05862.pdf): "The output of the encoder is a low frequency feature representation zi ∈Z which encodes about 30 ms of 16 kHz of audio and the striding results in representations zi every 10ms." => The windows are overlapping and this explains why you are getting 2 frames fewer.
Indeed we are moving a 30 ms window by 10ms steps. In your example, the 30 ms window takes 60 different positions.
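As a rough sanity check, assuming a 30 ms (480-sample) receptive field and a 10 ms (160-sample) hop at 16 kHz:
(10000 - 480) // 160 + 1   # = 60 frames, matching the output you see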
| https://stackoverflow.com/questions/62376687/ |
Add values to a tensor element based on its position in pytorch | In pytorch, I want to add values to elements in the tensor based on their position. For example consider,
Input = torch.tensor([1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0])
Between several offsets of the Input array, Offsets = [0,5,10,15,20], I want to add different values, ValuesToAdd = [10,100,1000,10000]
I expect the output to be
Output = torch.tensor([11,12,13,14,15,106,107,108,109,100,1001,1002,1003,1004,1005,10006,10007,10008,10009,10000])
Here, between indices Offsets[i] and Offsets[i+1] in Input array, ValuesToAdd[i] is added. For example, for indices 10,11,12,13 and 14 (Offsets[2] = 10 to Offsets[3]=15) in Input array, 1000 (ValuesToAdd[2]) is added.
How can I achieve this? Instead of looping over Offsets array, I am looking for a more efficient way.
| You can use torch.repeat_interleave
Offsets = torch.tensor(Offsets)
shifts = Offsets[1:] - Offsets[:-1]
output = Input.clone()
output[Offsets[0]:Offsets[-1]] += torch.tensor(ValuesToAdd).repeat_interleave(shifts)
print(torch.all(output == Output))
# True
| https://stackoverflow.com/questions/62379332/ |
How to obtain sequence of submodules from a pytorch module? | For a pytorch module, I suppose I could use .named_children, .named_modules, etc. to obtain a list of the submodules. However, I suppose the list is not given in order, right? An example:
In [19]: import transformers
In [20]: model = transformers.DistilBertForSequenceClassification.from_pretrained('distilb
...: ert-base-cased')
In [21]: [name for name, _ in model.named_children()]
Out[21]: ['distilbert', 'pre_classifier', 'classifier', 'dropout']
The order of .named_children() in the above model is given as distilbert, pre_classifier, classifier, and dropout. However, if you examine the code, it is evident that dropout happens before classifier. So how do I get the order of these submodules?
| In PyTorch, the results of print(model), .named_children(), etc. are listed based on the order in which they are declared in the __init__ of the model's class, e.g.
Case 1
class Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
self.conv2_drop = nn.Dropout2d()
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, p=0.6)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Model()
print(model)
[name for name, _ in model.named_children()]
# output
['conv1', 'conv2', 'fc1', 'fc2', 'conv2_drop']
Case 2
Changed order of fc1 and fc2 layers in constructor.
class Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.fc2 = nn.Linear(50, 10)
self.fc1 = nn.Linear(320, 50)
self.conv2_drop = nn.Dropout2d()
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, p=0.6)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
model = Model()
print(model)
[name for name, _ in model.named_children()]
# output
['conv1', 'conv2', 'fc2', 'fc1', 'conv2_drop']
That's why classifier is printed before dropout as it's declared so in constructor:
class DistilBertForSequenceClassification(DistilBertPreTrainedModel):
...
self.distilbert = DistilBertModel(config)
self.pre_classifier = nn.Linear(config.dim, config.dim)
self.classifier = nn.Linear(config.dim, config.num_labels)
self.dropout = nn.Dropout(config.seq_classif_dropout)
Nevertheless, you can play with model's submodules using .modules(), etc. but they'll be listed only in the order they are declared in __init__. If you only want to print structure based on forward method, you may try using pytorch-summary.
| https://stackoverflow.com/questions/62381286/ |
How to improve code to speed up word embedding with transformer models? | I need to compute words embeddings for a bunch of documents with different language models.
No problem with that, the script is doing fine, except I'm working on a notebook, without GPU and each text needs around 1.5s to be processed which is by far too long (I have thousands of texts to process).
Here is how I'm doing it with pytorch and transformers lib:
import torch
from transformers import CamembertModel, CamembertTokenizer
docs = [text1, text2, ..., text20000]
tok = CamembertTokenizer.from_pretrained('camembert-base')
model = CamembertModel.from_pretrained('camembert-base', output_hidden_states=True)
# let try with a batch size of 64 documents
docids = [tok.encode(
doc, max_length=512, return_tensors='pt', pad_to_max_length=True) for doc in docs[:64]]
ids=torch.cat(tuple(docids))
device = 'cuda' if torch.cuda.is_available() else 'cpu' # cpu in my case...
model = model.to(device)
ids = ids.to(device)
model.eval()
with torch.no_grad():
out = model(input_ids=ids)
# 103s later...
Do someone has any idea or suggestions to improve speed?
| I don't think that there is a trivial way to significantly improve the speed, without using a GPU.
Some of the ways I could think of include smart batching, which is used by Sentence-Transformers, where you basically sort inputs of similar length together to avoid padding to the full 512-token limit. I'm not sure how much of a speedup this is going to get you, but it is the only way I can see to improve it significantly in a short period of time.
Otherwise, if you have access to Google colab, you can also utilize their GPU environment, if the processing can be completed in reasonable time.
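A minimal sketch of the smart-batching idea (tokenise once, sort by length, and pad each batch only to its own longest document instead of 512); variable names follow your snippet, and for best results you should also build and pass an attention mask for the padded positions:
encoded = [tok.encode(doc, max_length=512) for doc in docs]
order = sorted(range(len(encoded)), key=lambda i: len(encoded[i]))

batch_size = 64
pad_id = tok.pad_token_id
model.eval()
with torch.no_grad():
    for start in range(0, len(order), batch_size):
        batch = [encoded[i] for i in order[start:start + batch_size]]
        max_len = max(len(ids) for ids in batch)
        padded = [ids + [pad_id] * (max_len - len(ids)) for ids in batch]
        out = model(input_ids=torch.tensor(padded))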
| https://stackoverflow.com/questions/62385092/ |
Cannot import BertModel from transformers | I am trying to import BertModel from transformers, but it fails. This is the code I am using:
from transformers import BertModel, BertForMaskedLM
This is the error I get
ImportError: cannot import name 'BertModel' from 'transformers'
Can anyone help me fix this?
| Fixed the error. This is the code
from transformers.modeling_bert import BertModel, BertForMaskedLM
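Depending on which version of the library is installed, upgrading it (pip install -U transformers) may also make the original top-level import from transformers import BertModel work again - this is an assumption about the environment, since the error often comes from an outdated or partially installed package.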
| https://stackoverflow.com/questions/62386631/ |
Why not accumulate query loss and then take derivative in MAML with Pytorch and Higher? | When doing MAML (Model-Agnostic Meta-Learning) there are two ways to do the inner loop:
def inner_loop1():
n_inner_iter = 5
inner_opt = torch.optim.SGD(net.parameters(), lr=1e-1)
qry_losses = []
qry_accs = []
meta_opt.zero_grad()
for i in range(task_num):
with higher.innerloop_ctx(
net, inner_opt, copy_initial_weights=False
) as (fnet, diffopt):
# Optimize the likelihood of the support set by taking
# gradient steps w.r.t. the model's parameters.
# This adapts the model's meta-parameters to the task.
# higher is able to automatically keep copies of
# your network's parameters as they are being updated.
for _ in range(n_inner_iter):
spt_logits = fnet(x_spt[i])
spt_loss = F.cross_entropy(spt_logits, y_spt[i])
diffopt.step(spt_loss)
# The final set of adapted parameters will induce some
# final loss and accuracy on the query dataset.
# These will be used to update the model's meta-parameters.
qry_logits = fnet(x_qry[i])
qry_loss = F.cross_entropy(qry_logits, y_qry[i])
qry_losses.append(qry_loss.detach())
qry_acc = (qry_logits.argmax(
dim=1) == y_qry[i]).sum().item() / querysz
qry_accs.append(qry_acc)
# Update the model's meta-parameters to optimize the query
# losses across all of the tasks sampled in this batch.
# This unrolls through the gradient steps.
qry_loss.backward()
meta_opt.step()
qry_losses = sum(qry_losses) / task_num
qry_accs = 100. * sum(qry_accs) / task_num
i = epoch + float(batch_idx) / n_train_iter
iter_time = time.time() - start_time
def inner_loop2():
n_inner_iter = 5
inner_opt = torch.optim.SGD(net.parameters(), lr=1e-1)
qry_losses = []
qry_accs = []
meta_opt.zero_grad()
meta_loss = 0
for i in range(task_num):
with higher.innerloop_ctx(
net, inner_opt, copy_initial_weights=False
) as (fnet, diffopt):
# Optimize the likelihood of the support set by taking
# gradient steps w.r.t. the model's parameters.
# This adapts the model's meta-parameters to the task.
# higher is able to automatically keep copies of
# your network's parameters as they are being updated.
for _ in range(n_inner_iter):
spt_logits = fnet(x_spt[i])
spt_loss = F.cross_entropy(spt_logits, y_spt[i])
diffopt.step(spt_loss)
# The final set of adapted parameters will induce some
# final loss and accuracy on the query dataset.
# These will be used to update the model's meta-parameters.
qry_logits = fnet(x_qry[i])
qry_loss = F.cross_entropy(qry_logits, y_qry[i])
qry_losses.append(qry_loss.detach())
qry_acc = (qry_logits.argmax(
dim=1) == y_qry[i]).sum().item() / querysz
qry_accs.append(qry_acc)
# Update the model's meta-parameters to optimize the query
# losses across all of the tasks sampled in this batch.
# This unrolls through the gradient steps.
#qry_loss.backward()
meta_loss += qry_loss
meta_loss.backward()
meta_opt.step()
qry_accs = 100. * sum(qry_accs) / task_num
i = epoch + float(batch_idx) / n_train_iter
iter_time = time.time() - start_time
are they truly equivalent?
cross-posted:
git issue: https://github.com/facebookresearch/higher/issues/60
| The only difference is that in the second approach you'll have to keep much more in memory - until you call backward, you'll have all the unrolled parameters fnet.parameters(time=T) (along with the intermediate computation tensors) for each of the task_num iterations as part of the graph for the aggregated meta_loss. If you call backward on every task, then you only need to keep the full set of unrolled parameters (and other pieces of the graph) for one task.
So to answer your question's title: because in this case the memory footprint is task_num times bigger.
In a nutshell what you're doing is similar to comparing loopA(N) and loopB(N) in the following code. Here loopA will get as much memory as it can and OOM with sufficiently large N, while loopB will use about same amount of memory for any large N:
import torch
import numpy as np
a = 0
np.random.seed(1)
v = torch.tensor(np.random.randn(1000000))
y = torch.tensor(np.random.randn(1000000))
x = torch.zeros(1000000, requires_grad=True)
def loopA(N=1000):
a = 0
for i in range(N):
a += ((x * v - y)**2).sum()
a.backward()
def loopB(N=1000):
for i in range(N):
a = ((x * v - y)**2).sum()
a.backward()
Regarding the normalization - the two approaches are equivalent (up to numerical precision, maybe): if you first sum up the individual losses, divide by task_num, and finally call backward, then you'll effectively compute d((Loss_1 + ... + Loss_{task_num})/task_num) / dw (where w is one of the weights the meta-optimizer is fitting). On the other hand, if you call backward for each loss divided by task_num, you'll get d(Loss_1/task_num)/dw + ... + d(Loss_{task_num}/task_num)/dw, which is the same because taking the gradient is a linear operation. So in both cases your meta-optimizer step will start from pretty much the same gradients.
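The linearity argument is easy to verify on a small standalone toy example (unrelated to the MAML code itself):
import torch

w = torch.randn(3, requires_grad=True)

# backward on the averaged sum of losses
loss = sum((w * i).sum() for i in range(1, 4)) / 3
loss.backward()
g_summed = w.grad.clone()

# backward per loss, each divided by the number of losses (gradients accumulate)
w.grad = None
for i in range(1, 4):
    ((w * i).sum() / 3).backward()

print(torch.allclose(g_summed, w.grad))   # True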
| https://stackoverflow.com/questions/62394411/ |
Why does 'dimension' mean several different things in the machine-learning world? | I've noticed that the AI community refers to various tensors as 512-d, meaning a 512-dimensional tensor, where the term 'dimension' seems to mean 512 different float values in the representation for a single datapoint, e.g. a 512-d word embedding means a 512-length vector of floats used to represent one English word, e.g. https://medium.com/@jonathan_hui/nlp-word-embedding-glove-5e7f523999f6
But it isn't 512 different dimensions, it's only a 1-dimensional vector? Why is the term dimension used in such a different manner than usual?
When we use the term conv1d or conv2d which are convolutions over 1-dimension and 2-dimensions, a dimension is used in the typical way it's used in math/sciences but in the word-embedding context, a 1-d vector is said to be a 512-d vector, or am I missing something?
Why is this overloaded use of the term dimension? What context determines what dimension means in machine-learning as the term seems overloaded?
| In the context of word embeddings in neural networks, dimensionality reduction, and many other machine learning areas, it is indeed correct to call the vector (which is, typically, a 1D array or tensor) n-dimensional, where n is usually greater than 2. This is because we usually work in Euclidean space, where a (data) point in a certain dimensional (Euclidean) space is represented as an n-tuple of real numbers (i.e. real n-space ℝ^n).
As an example (see the ref below), consider a (data) point in a 3D (Euclidean) space: to represent any point in this space, say d1, we need a tuple of three real numbers (x1, y1, z1).
Now, your confusion arises over why this point d1 is called 3-dimensional instead of a 1-dimensional array. The reason is that it lies (or lives) in this 3D space. The same argument can be extended to all points in any n-dimensional real space, as is done in the case of embeddings with 300d, 512d, 1024d vectors, etc.
However, in all nD array compute frameworks such as NumPy, PyTorch, TensorFlow, etc., these are still 1D arrays because the length of the above-said vectors can be represented using a single number.
But, what if you have more than 1 data point? Then, you have to stack them in some (unique) way. And this is where the need for a second dimension arises. So, let's say you stack 4 of these 512d vectors vertically, then you'd end up with a 2D array/tensor of shape (4, 512). Note that here we call the array as 2D because two integer numbers are required to represent the extent/length along each axis.
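In code the distinction looks like this (a small illustrative snippet):
import torch

v = torch.randn(512)             # one "512-d" embedding: a 1D tensor of length 512
print(v.ndim, v.shape)           # 1 torch.Size([512])

batch = torch.stack([torch.randn(512) for _ in range(4)])
print(batch.ndim, batch.shape)   # 2 torch.Size([4, 512])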
To understand this better, please refer to my other answer on axis parameter visualization for nD arrays.
ref: Euclidean space wiki
| https://stackoverflow.com/questions/62395315/ |
Size mismatch error in converting TensorFlow model to Pytorch | I'm trying to convert a TensorFlow model to PyTorch but got stuck on this error. Can anyone help me?
#getting weights and biases from tensorflow model
weights, biases = model.layers[0].get_weights()
#[1] is the dropout layer
weights2, biases2 = model.layers[2].get_weights()
#initializing pytorch
class TwoLayerNet(torch.nn.Module):
def __init__(self, weights, biases, weights2, biases2):
super(TwoLayerNet, self).__init__()
#created the model in the same dimensions as tensorflow's model
self.linear1 = torch.nn.Linear(9, 2048)
self.hidden1 = nn.Dropout(0.2)
self.linear2 = torch.nn.Linear(2048,5)
weights = torch.from_numpy(weights)
biases = torch.from_numpy(biases)
weights2 = torch.from_numpy(weights2)
biases2 = torch.from_numpy(biases2)
self.linear1.weight = torch.nn.Parameter(weights)
self.linear1.bias = torch.nn.Parameter(biases)
self.linear2.weight.data = weights2
self.linear2.bias.data = biases2
#in this print the dimensions are ok (Linear(in_features=9, out_features=2048, bias=True))
print(self.linear1)
def forward(self, x):
print(self.linear1)
x = self.linear1(x)
x = self.hidden1(x)
x = self.linear2(x)
return x
model_pytorch = TwoLayerNet(weights, biases, weights2, biases2)
model_pytorch.eval()
exemplo_input_torch = torch.from_numpy(exemplo_input)
exemplo_input_torch = exemplo_input_torch.float()
print(exemplo_input_torch)
result = model_pytorch(exemplo_input_torch)
The error is:
RuntimeError: size mismatch, m1: [1 x 9], m2: [2048 x 9] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41
| You need to transpose the weights and biases:
weights = torch.from_numpy(weights).T
biases = torch.from_numpy(biases).T
weights2 = torch.from_numpy(weights2).T
biases2 = torch.from_numpy(biases2).T
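Note that for the 1-D bias vectors the transpose is effectively a no-op; it is the weight matrices that actually change shape. As a quick check (using the sizes from your model):
print(weights.shape)    # torch.Size([2048, 9])  -> matches nn.Linear(9, 2048).weight
print(weights2.shape)   # torch.Size([5, 2048])  -> matches nn.Linear(2048, 5).weight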
| https://stackoverflow.com/questions/62396359/ |
How to remove showing renderings in the Cartpole game from OpenAI in Pytorch | My current code is based on PyTorch's example on their website, where they use env.render() to produce the next state. That makes the game run very slowly, and I would like it to run much quicker without the renderings. Here is the class that uses the render function, and at the bottom is the full code. Any ideas how I can get this to work?
class CartPoleEnvManager(): #manage cartpole env
def __init__(self, device):
self.device = device
self.env = gym.make('CartPole-v0').unwrapped #unwrapped so we can see other dynamics that there is no access to otherwise
self.env.reset() #reset to starting state
self.current_screen = None #screen initialization
self.done = False #if episode is finished
def reset(self):
self.env.reset()
self.current_screen = None
def close(self): #close env
self.env.close()
def render(self, mode='human'): #render the current state to screen
return self.env.render(mode)
def num_actions_available(self): #returns # actions available to agent (2)
return self.env.action_space.n
def take_action(self, action):# step returns tuple containing env observation, reward and diagnostic info -- all from taking a certain action
_, reward, self.done, _ = self.env.step(action.item()) # only reward and done status are of importance
return torch.tensor([reward], device=self.device)
#####action is a tensor, action.item() gives a number, what step wants
def just_starting(self):
return self.current_screen is None
def get_state(self): #return to the current state of env in the form of a processed image of the screen
if self.just_starting() or self.done:
self.current_screen = self.get_processed_screen() #state = processed image of diff of 2 separate screens
black_screen = torch.zeros_like(self.current_screen)
return black_screen
else:
s1 = self.current_screen
s2 = self.get_processed_screen() ####what is get_processed_screen?
self.current_screen = s2
return s2 - s1 # this represents a single state
def get_screen_height(self):
screen = self.get_processed_screen()
return screen.shape[2]
def get_screen_width(self):
screen = self.get_processed_screen()
return screen.shape[3]
def get_processed_screen(self):
screen = self.env.render(mode='rgb_array').transpose((2, 0, 1)) # PyTorch expects CHW
screen = self.crop_screen(screen)
return self.transform_screen_data(screen)
def crop_screen(self, screen):
screen_height = screen.shape[1]
# Strip off top and bottom
top = int(screen_height * 0.4)
bottom = int(screen_height * 0.8)
screen = screen[:, top:bottom, :]
return screen
def transform_screen_data(self, screen):
# Convert to float, rescale, convert to tensor
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Use torchvision package to compose image transforms
resize = T.Compose([
T.ToPILImage()
,T.Resize((40,90))
,T.ToTensor()
])
return resize(screen).unsqueeze(0).to(self.device) # add a batch dimension (BCHW)
And the full code:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
import torch
import torch.nn as nnfr
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
import pennylane as qml
import numpy as np
import torch
import torch.nn as nn
from torch.nn.functional import relu, sigmoid
class DQN(nn.Module):
def __init__(self, img_height, img_width):
super().__init__()
self.fc1 = nn.Linear(in_features=img_height*img_width*3, out_features=64)
self.fc2 = nn.Linear(in_features=64, out_features=48)
self.fc3 = nn.Linear(in_features=48, out_features=32)
self.out = nn.Linear(in_features=32, out_features=2)
def forward(self, b):
b = b.flatten(start_dim=1)
#t = F.relu(clayer_out)
b = F.relu(self.fc1(b))
b = F.relu(self.fc2(b))
b = F.relu(self.fc3(b))
b = self.out(b)
return b
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython: from IPython import display
Experience = namedtuple(
'Experience',
('state', 'action', 'next_state', 'reward')
) # use experiences to train network
class ReplayMemory():
def __init__(self, capacity): # replay memory has set capacity, only need to
self.capacity = capacity # initialize capacity for a ReplayMemory object
self.memory = [] #memory will hold the experiences
self.push_count = 0 #to show how many experiences we've added to memory
def push(self, experience): # to add experiences in memory
if len(self.memory) < self.capacity: # ensuring that the length of memory doesn't exceed set capacity
self.memory.append(experience)
else:
self.memory[self.push_count % self.capacity] = experience # if memory greater than capacity,
# then the new experiences will be put to the front of memory, erasing the
# oldest experiences in the memory array
self.push_count += 1
def sample(self, batch_size): # returns a randome sample of experiences, to later use for
return random.sample(self.memory, batch_size) #random.sample(sequence, k), of the sequence, gives k randomly chosen experiences
def can_provide_sample(self, batch_size): #to sample, batch size needs to be bigger than memory -- this is important at the beginning
return len(self.memory) >= batch_size
class EpsilonGreedyStrategy(): #explor vs. exploitation
def __init__(self, start, end, decay):
self.start = start
self.end = end
self.decay = decay
def get_exploration_rate(self, current_step): ####this was not explained in the video, woher kommt
return self.end + self.start*(1/(1+self.decay*current_step))#self.end + (self.start - self.end) * \
#math.exp(-1. * current_step * self.decay)
class LearningRate():
def __init__(self, start, end, decay, current_step):
self.start = start
self.end = end
self.decay = decay
self.current_step = current_step
def get_learning_rate(self, current_step):
self.current_step += 1
return self.end + self.start*(1/(1+self.decay*current_step))
class lr(): # learning rate class needed. Left for possible future use, need to update things beforehand
def __init__(self, learning_rate):
self.current_step = 0
self.learning_rate = learning_rate
def update_lr(self):
lrrate = learning_rate.get_learning_rate(self.current_step)
self.current_step +=1
class Agent():
def __init__(self, strategy, num_actions, device): # when we later create an agent object, need to get strategy from epsilon, num_actions = how many actions from a given state (2 for this game), device is the device in pytorch for tensor calculations CPU or GPU
self.current_step = 0 # current step number in the environment
self.strategy = strategy
self.num_actions = num_actions
self.device = device
def select_action(self, state, policy_net): #policy_net is the policy trained by DQN
rate = strategy.get_exploration_rate(self.current_step)
self.current_step += 1
if rate > random.random():
action = random.randrange(self.num_actions)
return torch.tensor([action]).to(self.device) # explore
else:
with torch.no_grad(): #turn off gradient tracking
return policy_net(state).argmax(dim=1).to(self.device) # exploit
class CartPoleEnvManager(): #manage cartpole env
def __init__(self, device):
self.device = device
self.env = gym.make('CartPole-v0').unwrapped #unwrapped so we can see other dynamics that there is no access to otherwise
self.env.reset() #reset to starting state
self.current_screen = None #screen initialization
self.done = False #if episode is finished
def reset(self):
self.env.reset()
self.current_screen = None
def close(self): #close env
self.env.close()
def render(self, mode='human'): #render the current state to screen
return self.env.render(mode)
def num_actions_available(self): #returns # actions available to agent (2)
return self.env.action_space.n
def take_action(self, action):# step returns tuple containing env observation, reward and diagnostic info -- all from taking a certain action
_, reward, self.done, _ = self.env.step(action.item()) # only reward and done status are of importance
return torch.tensor([reward], device=self.device)
#####action is a tensor, action.item() gives a number, what step wants
def just_starting(self):
return self.current_screen is None
def get_state(self): #return to the current state of env in the form of a processed image of the screen
if self.just_starting() or self.done:
self.current_screen = self.get_processed_screen() #state = processed image of diff of 2 separate screens
black_screen = torch.zeros_like(self.current_screen)
return black_screen
else:
s1 = self.current_screen
s2 = self.get_processed_screen() ####what is get_processed_screen?
self.current_screen = s2
return s2 - s1 # this represents a single state
def get_screen_height(self):
screen = self.get_processed_screen()
return screen.shape[2]
def get_screen_width(self):
screen = self.get_processed_screen()
return screen.shape[3]
def get_processed_screen(self):
screen = self.env.render(mode='rgb_array').transpose((2, 0, 1)) # PyTorch expects CHW
screen = self.crop_screen(screen)
return self.transform_screen_data(screen)
def crop_screen(self, screen):
screen_height = screen.shape[1]
# Strip off top and bottom
top = int(screen_height * 0.4)
bottom = int(screen_height * 0.8)
screen = screen[:, top:bottom, :]
return screen
def transform_screen_data(self, screen):
# Convert to float, rescale, convert to tensor
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Use torchvision package to compose image transforms
resize = T.Compose([
T.ToPILImage()
,T.Resize((40,90))
,T.ToTensor()
])
return resize(screen).unsqueeze(0).to(self.device) # add a batch dimension (BCHW)
def plot(values, moving_avg_period):
plt.figure(2)
plt.clf()
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(values)
moving_avg = get_moving_average(moving_avg_period, values)
plt.plot(moving_avg)
plt.pause(0.001)
print("Episode", len(values), "\n", \
moving_avg_period, "episode moving avg:", moving_avg[-1])
if is_ipython: display.clear_output(wait=True)
def get_moving_average(period, values):
values = torch.tensor(values, dtype=torch.float)
if len(values) >= period:
moving_avg = values.unfold(dimension=0, size=period, step=1) \
.mean(dim=1).flatten(start_dim=0)
moving_avg = torch.cat((torch.zeros(period-1), moving_avg))
return moving_avg.numpy()
else:
moving_avg = torch.zeros(len(values))
return moving_avg.numpy()
def extract_tensors(experiences):
# Convert batch of Experiences to Experience of batches
batch = Experience(*zip(*experiences))
t1 = torch.cat(batch.state)
t2 = torch.cat(batch.action)
t3 = torch.cat(batch.reward)
t4 = torch.cat(batch.next_state)
return (t1,t2,t3,t4)
class QValues():
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
@staticmethod
def get_current(policy_net, states, actions):
return policy_net(states).gather(dim=1, index=actions.unsqueeze(-1))
@staticmethod
def get_next(target_net, next_states):
final_state_locations = next_states.flatten(start_dim=1) \
.max(dim=1)[0].eq(0).type(torch.bool)
non_final_state_locations = (final_state_locations == False)
non_final_states = next_states[non_final_state_locations]
batch_size = next_states.shape[0]
values = torch.zeros(batch_size).to(QValues.device)
values[non_final_state_locations] = target_net(non_final_states).max(dim=1)[0].detach()
return values
batch_size = 128
gamma = 0.999
eps_start = 1
eps_end = 0.01
eps_decay = 0.0005
target_update = 10
memory_size = 500000
lr_start = 0.01
lr_end = 0.00001
lr_decay = 0.00009
num_episodes = 1000 # run for more episodes for better results
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
em = CartPoleEnvManager(device)
strategy = EpsilonGreedyStrategy(eps_start, eps_end, eps_decay)
agent = Agent(strategy, em.num_actions_available(), device)
memory = ReplayMemory(memory_size)
policy_net = DQN(em.get_screen_height(), em.get_screen_width()).to(device)
target_net = DQN(em.get_screen_height(), em.get_screen_width()).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval() #tells pytorch that target_net is only used for inference, not training
optimizer = optim.Adam(params=policy_net.parameters(), lr=0.01)
i = 0
episode_durations = []
for episode in range(num_episodes): #iterate over each episode
em.reset()
state = em.get_state()
for timestep in count():
action = agent.select_action(state, policy_net)
reward = em.take_action(action)
next_state = em.get_state()
memory.push(Experience(state, action, next_state, reward))
state = next_state
i = 0
if memory.can_provide_sample(batch_size):
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.9)
experiences = memory.sample(batch_size)
states, actions, rewards, next_states = extract_tensors(experiences)
current_q_values = QValues.get_current(policy_net, states, actions)
next_q_values = QValues.get_next(target_net, next_states) #will get the max qvalues of the next state, q values of next state are used via next state
target_q_values = (next_q_values * gamma) + rewards
loss = F.mse_loss(current_q_values, target_q_values.unsqueeze(1))
      optimizer.zero_grad() # sets the gradients of all weights and biases in policy_net to zero
      loss.backward() # computes gradient of loss with respect to all weights and biases in the policy net
      optimizer.step() # updates the weights and biases with the gradients that were computed from loss.backward()
scheduler.step()
if em.done:
episode_durations.append(timestep)
plot(episode_durations, 100)
break
if episode % target_update == 0:
target_net.load_state_dict(policy_net.state_dict())
em.close()
| You're using a "hacked" (or patched, if you will) version of the CartPole environment which, in effect, replaces the real state CartPole-v0 returns with a rendered image. So your code is trying to train a policy that takes an image as input rather than the 4-value feature array the original CartPole-v0 returns.
If you look closer, when you call
_, reward, self.done, _ = self.env.step(action.item())
the first element _ is the actual state of the original CartPole-v0 env.
Then, instead of using that state, the class you have renders the environment and returns an image as the input for training.
So for the existing task (where the state is effectively an image) you can't really skip rendering, since it is part of preparing the inputs for the policy.
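If, on the other hand, you redesigned the policy to take the raw 4-value observation instead of screen images (which is not what the notebook does), a minimal sketch of keeping that state rather than rendering could look like this; the environment name and the hard-coded action are just illustrative:
import gym
import torch

env = gym.make('CartPole-v0')
state = torch.tensor(env.reset(), dtype=torch.float32)   # the 4-value feature array
obs, reward, done, _ = env.step(0)                       # keep the observation instead of discarding it
next_state = torch.tensor(obs, dtype=torch.float32)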
| https://stackoverflow.com/questions/62397819/ |
TransformerEncoder with a padding mask | I'm trying to implement torch.nn.TransformerEncoder with a src_key_padding_mask not equal to None. Imagine the input is of the shape src = [20, 95] and the binary padding mask has the shape src_mask = [20, 95], with 1 in the positions of padded tokens and 0 for other positions. I make a transformer encoder with 8 layers, each of which contains an attention with 8 heads and hidden dimension 256:
layer=torch.nn.TransformerEncoderLayer(256, 8, 256, 0.1)
encoder=torch.nn.TransformerEncoder(layer, 6)
embed=torch.nn.Embedding(80000, 256)
src=torch.randint(0, 1000, (20, 95))
src = embed(src)
src_mask = torch.randint(0,2,(20, 95))
output = encoder(src, src_mask)
But I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-107-31bf7ab8384b> in <module>
----> 1 output = encoder(src, src_mask)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/transformer.py in forward(self, src, mask, src_key_padding_mask)
165 for i in range(self.num_layers):
166 output = self.layers[i](output, src_mask=mask,
--> 167 src_key_padding_mask=src_key_padding_mask)
168
169 if self.norm:
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/transformer.py in forward(self, src, src_mask, src_key_padding_mask)
264 """
265 src2 = self.self_attn(src, src, src, attn_mask=src_mask,
--> 266 key_padding_mask=src_key_padding_mask)[0]
267 src = src + self.dropout1(src2)
268 src = self.norm1(src)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/activation.py in forward(self, query, key, value, key_padding_mask, need_weights, attn_mask)
781 training=self.training,
782 key_padding_mask=key_padding_mask, need_weights=need_weights,
--> 783 attn_mask=attn_mask)
784
785
~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in multi_head_attention_forward(query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training, key_padding_mask, need_weights, attn_mask, use_separate_proj_weight, q_proj_weight, k_proj_weight, v_proj_weight, static_k, static_v)
3250 if attn_mask is not None:
3251 attn_mask = attn_mask.unsqueeze(0)
-> 3252 attn_output_weights += attn_mask
3253
3254 if key_padding_mask is not None:
RuntimeError: The size of tensor a (20) must match the size of tensor b (95) at non-singleton dimension 2
I was wondering if somebody could help me figure out this problem.
Thanks
| The required shapes are shown in nn.Transformer.forward - Shape (all building blocks of the transformer refer to it). The relevant ones for the encoder are:
src: (S, N, E)
src_mask: (S, S)
src_key_padding_mask: (N, S)
where S is the sequence length, N the batch size and E the embedding dimension (number of features).
The padding mask should have shape [95, 20], not [20, 95]. This assumes that your batch size is 95 and the sequence length is 20, but if that is the other way around, you would have to transpose the src instead.
Furthermore, when calling the encoder, you are not specifying the src_key_padding_mask, but rather the src_mask, as the signature of torch.nn.TransformerEncoder.forward is:
forward(src, mask=None, src_key_padding_mask=None)
The padding mask must be specified as the keyword argument src_key_padding_mask not as the second positional argument. And to avoid confusion, your src_mask should be renamed to src_key_padding_mask.
src_key_padding_mask = torch.randint(0,2,(95, 20))
output = encoder(src, src_key_padding_mask=src_key_padding_mask)
| https://stackoverflow.com/questions/62399243/ |
tf.test.is_gpu_available() is False in subprocess but True in main process | I'm currently running a pytorch model which periodically calls out to a tensorflow model for benchmarking purposes. I'd like both of these models to be GPU-enabled and to run in the same script. Since tensorflow benchmarking code claims GPU memory til the end of the process, I've elected to run the benchmarking code in a multiprocessing.Process so that my pytorch model can use the full GPU's memory after the benchmarking script has run.
During this, I've stumbled across an unusual bug (?) in tensorflow's gpu utilization. It seems that tensorflow run in a subprocess doesn't want to use a GPU which has been used ~at all~ by a parent process. I can have tensorflow models and pytorch models in the same GPU and process with no problems, but when I introduce subprocesses tensorflow is ill-behaved.
I'm running
tensorflow-gpu==1.14.0
torch==1.1.0
cudatoolkit=10.0 on an NVIDIA 2080-Ti.
Below is a minimal code snipped to reproduce:
import torch
import tensorflow as tf
from multiprocessing import Process
def f():
print(tf.test.is_gpu_available())
pa = Process(target=f, args=())
pa.start()
pa.join()
torch.ones(1).cuda()
pb = Process(target=f, args=())
pb.start()
pb.join()
>>> True
>>> False
| To anyone running into this problem, you need to call multiprocessing.set_start_method('spawn'). Tensorflow is not fork-safe and some weirdness can happen with global variables/modules that is probably very hard to reason about. Remember to call it only once, inside a if __name__ == '__main__': check.
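A minimal sketch of how that fits the snippet above (the __main__ guard and the start-method call are the only additions):
import torch
import tensorflow as tf
import multiprocessing
from multiprocessing import Process

def f():
    print(tf.test.is_gpu_available())

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')  # call once, before any Process is created
    torch.ones(1).cuda()
    p = Process(target=f, args=())
    p.start()
    p.join()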
| https://stackoverflow.com/questions/62399799/ |
Runtime error while running PyTorch model on local machine | I'm running this notebook locally
https://github.com/udacity/deep-learning-v2-pytorch/blob/master/sentiment-rnn/Sentiment_RNN_Solution.ipynb
everything was working just until I started training the model
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
the error that occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-31-9f7dea11cb7b> in <module>
32
33 # get the output from the model
---> 34 output, h = net(inputs, h)
35
36 # calculate the loss and perform backprop
c:\users\asus\.conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-16-b99cefc1dc61> in forward(self, x, hidden)
36
37 # embeddings and lstm_out
---> 38 embeds = self.embedding(x)
39 lstm_out, hidden = self.lstm(embeds, hidden)
40
c:\users\asus\.conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
c:\users\asus\.conda\envs\pytorch\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
110
111 def forward(self, input):
--> 112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
114 self.norm_type, self.scale_grad_by_freq, self.sparse)
c:\users\asus\.conda\envs\pytorch\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
I don't understand why it's happening. I tried to find a solution online, which said I need to transfer my model and data to the GPU. I did, but the problem still persists.
| You are trying to embed the inputs, which are given as ints (torch.int). Only torch.long values can be embedded, since they need to be indices, and indices cannot be float (or 32-bit int here).
inputs need to be converted to torch.long:
inputs = inputs.to(torch.long)
It seems like you removed the conversion to long, because in the notebook it's done within the model:
# embeddings and lstm_out
x = x.long()
embeds = self.embedding(x)
Whereas, in your stack trace, the line x = x.long() (same as using .to(torch.long)) is missing.
| https://stackoverflow.com/questions/62400718/ |
ValueError: too many values to unpack while using torch tensors | For a project on neural networks, I am using Pytorch and am working with the EMNIST dataset.
The code that is already given loads in the dataset:
train_dataset = dsets.MNIST(root='./data',
train=True,
transform=transforms.ToTensor(),
download=True)
And prepares it:
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
Then, when all the configurations of the network are defined, there is a for loop to train the model per epoch:
for i, (images, labels) in enumerate(train_loader):
In the example code this works fine.
For my task, I am given a dataset that I load as follows:
emnist = scipy.io.loadmat("DIRECTORY/emnist-letters.mat")
data = emnist ['dataset']
X_train = data ['train'][0, 0]['images'][0, 0]
y_train = data ['train'][0, 0]['labels'][0, 0]
Then, I create the train_dataset as follows:
train_dataset = np.concatenate((X_train, y_train), axis = 1)
train_dataset = torch.from_numpy(train_dataset)
And use the same step to prepare it:
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
However, when I try to use the same loop as before:
for i, (images, labels) in enumerate(train_loader):
I get the following error:
ValueError: too many values to unpack (expected 2)
Does anyone know what I can do so that I can train my dataset with this loop?
| The dataset you created from the EMNIST data is a single tensor, and therefore, the data loader will also produce a single tensor, where the first dimension is the batch dimensions. This results in trying to unpack that tensor across the batch dimension, which doesn't work because your batch size is greater than two, but is also not what you want to happen.
You can use torch.utils.data.TensorDataset to easily create a dataset, which produces a tuple of images and their respective labels, just like the MNIST dataset does.
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
| https://stackoverflow.com/questions/62406283/ |
Cannot obtain same output tensor values in C++ side | I am trying to make predictions for a 4-class classification project on the C++ side using libtorch 1.4. However, I cannot obtain the same predictions as on the Python side. Firstly, I do obtain the same input tensor values just before prediction, but when I compare the output tensor values, I notice that they are different. You can find those values in this picture:
Left side includes Python output tensor values and the prediction results for each input picture.
Right side includes C++ output tensor values and the prediction results for each input picture.
Could you offer a solution to obtain same output tensor values and prediction results?
| I noticed that I used opencv functions to apply normalization using this code:
subtract(image, Scalar(0.485, 0.456, 0.406), temp);
divide(temp, Scalar(0.229, 0.224, 0.225), image);
This operation changes only the first channel and does not change the other channels. For this reason, the input tensor values were in fact different. I applied normalization directly on the tensor values by writing this code:
tensor_image = tensor_image.permute({ 2,0,1 });//chw
tensor_image = tensor_image.toType(torch::kFloat);
tensor_image = tensor_image.div(255.0);
//normalize
tensor_image[0] = tensor_image[0].sub_(0.485).div_(0.229);
tensor_image[1] = tensor_image[1].sub_(0.456).div_(0.224);
tensor_image[2] = tensor_image[2].sub_(0.406).div_(0.225);
Thus, I obtained the same input tensor values as on the Python side. After prediction, I obtained the same output tensor values. My problem was solved.
| https://stackoverflow.com/questions/62408941/ |
Train inactive class in classification AI problem | I'm facing an issue. I want to classify data into n classes, but I also want to be able to say that the answer is none of these n classes.
For example, my classes are: HORSE, CAT, DOG. So I will train with data related to HORSE, CAT and DOG.
But if I give my model something else, like a car, I would like my model to tell me that it is not HORSE or DOG or CAT.
So maybe I have to train a model with these classes: HORSE, CAT, DOG, OTHER?
But if I train the OTHER class with data, how can I be sure that if I give it something new, like a space rocket, the prediction will be active for the OTHER class?
In other words, I'm in trouble with the inactive class. I don't want my model to give "the best prediction" between my desired classes; I also want it to tell me if it is none of them.
Thank you !
| I got some answers, and it seems that there are 2 main solutions to this problem:
As I thought, I can train my classes plus 1 more for the other cases. To be robust, I have to create data for this class that covers as many of the inputs my model may receive as possible!
Train n classifiers: 1 binary classifier per class. Each one has two outputs: Active / Not Active.
For each solution, I think it is necessary to create a dataset for the "other cases", even in the second one, to avoid uncontrolled predictions. In solution 1, you can choose whether you want 1 active class (softmax) or several active classes (sigmoid); in solution 2, it seems harder to control this behaviour, as each classifier is independent and several classes could be active.
source
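For instance, a rough sketch of the second solution in PyTorch (the feature size, threshold and class names are made-up assumptions) could use one sigmoid head per class, trained with BCEWithLogitsLoss, and report OTHER when no head is active:
import torch
import torch.nn as nn

classes = ['HORSE', 'CAT', 'DOG']
heads = nn.Linear(128, len(classes))   # one logit per class, independent of the others

def predict(features, threshold=0.5):
    probs = torch.sigmoid(heads(features))   # independent probabilities, not a softmax
    if probs.max() < threshold:
        return 'OTHER'
    return classes[probs.argmax().item()]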
| https://stackoverflow.com/questions/62409555/ |
PyTorch: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True | This is an error message I get as I work with some synthetic data. I am a bit puzzled because the error persists although I do what I am advised to do. Could it somehow be related to the fact that I do not specify a batch? Would the use of PyTorch Dataset relieve the problem?
Here is my code (I am new to PyTorch, just learning it now) -- it should be reproducible:
Data Creation:
x, y = np.meshgrid(np.random.randn(100) , np.random.randn(100))
z = 2 * x + 3 * y + 1.5 * x * y - x ** 2 - y**2
X = x.ravel().reshape(-1, 1)
Y = y.ravel().reshape(-1, 1)
Z = z.ravel().reshape(-1, 1)
U = np.concatenate([X, Y], axis = 1)
U = torch.tensor(U, requires_grad=True)
Z = torch.tensor(Z, requires_grad=True)
V = []
for i in range(U.shape[0]):
u = U[i, :]
u1 = u.view(-1, 1) @ u.view(1, -1)
u1 = u1.triu()
ones = torch.ones_like(u1)
mask = ones.triu()
mask = (mask == 1)
u2 = torch.masked_select(u1, mask)
u3 = torch.cat([u, u2])
u3 = u3.view(1, -1)
V.append(u3)
V = torch.cat(V, dim = 0)
Training a Model
from torch import nn
from torch import optim
net = nn.Sequential(nn.Linear(V.shape[1], 1))
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(50): # loop over the dataset multiple times
running_loss = 0.0
i = 0
for inputs , labels in zip(V, Z):
# get the inputs; data is a list of [inputs, labels]
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward(retain_graph = True)
optimizer.step()
# print statistics
running_loss += loss.item()
i += 1
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
Error Message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-143-2454f4bb70a5> in <module>
25
26
---> 27 loss.backward(retain_graph = True)
28
29 optimizer.step()
~\Anaconda3\envs\torch\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
193 products. Defaults to ``False``.
194 """
--> 195 torch.autograd.backward(self, gradient, retain_graph, create_graph)
196
197 def register_hook(self, hook):
~\Anaconda3\envs\torch\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
97 Variable._execution_engine.run_backward(
98 tensors, grad_tensors, retain_graph, create_graph,
---> 99 allow_unreachable=True) # allow_unreachable flag
100
101
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Could you explain the error and fix the code?
| Presumably, you haven't re-run the data creation code again after setting retain_graph=True, as it is in a IPython REPL. It would work with that, but in nearly all cases, setting retain_graph=True is not the appropriate solution.
The problem in your case is that you have set requires_grad=True for U, which means that everything that involved U in the data creation will be recorded in the computational graph, and when loss.backward() is called, the gradients will propagate through all of these up to U. After the first time, all buffers for those gradients will have been freed, and a second backward will fail.
Neither U nor Z should have requires_grad=True, as they are not being optimised/learned. Only the learned parameters (parameters given to the optimiser) should have requires_grad=True and generally, you don't have to set it manually either, since nn.Parameter handles that automatically.
You should also ensure that the tensors you create from NumPy data have type torch.float (float32) as NumPy's float array will usually be float64, which are mostly unnecessary and just slower compared to float32, especially on the GPU.
U = torch.tensor(U, dtype=torch.float)
Z = torch.tensor(Z, dtype=torch.float)
And remove the retain_graph=True from the backward call:
loss.backward()
| https://stackoverflow.com/questions/62412155/ |
Writing tensor to a file in a visually readable manner | In pytorch, I want to write a tensor to a file and visually read the file contents. For example, consider T = torch.tensor([3,4,5,6]). I want to write the tensor T to a file, say file_T.txt, and want to visually read the contents of the file_T.txt, which will be 3,4,5 and 6. How can I achieve this?
| You can use numpy:
import numpy as np
np.savetxt('my_file.txt', torch.Tensor([3,4,5,6]).numpy())
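The file is then human-readable (one value per line for a 1-D tensor), and you can load it back later, for example:
loaded = torch.from_numpy(np.loadtxt('my_file.txt'))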
| https://stackoverflow.com/questions/62412976/ |
Why is a simple Binary classification failing in a feedforward neural network? | I am new to Pytorch. I was trying to model a binary classifier on the Kepler dataset. The following was my dataset class.
class KeplerDataset(Dataset):
def __init__(self, test=False):
self.dataframe_orig = pd.read_csv(koi_cumm_path)
if (test == False):
self.data = df_numeric[( df_numeric.koi_disposition == 1 ) | ( df_numeric.koi_disposition == 0 )].values
else:
self.data = df_numeric[~(( df_numeric.koi_disposition == 1 ) | ( df_numeric.koi_disposition == 0 ))].values
self.X_data = torch.FloatTensor(self.data[:, 1:])
self.y_data = torch.FloatTensor(self.data[:, 0])
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
Here, I created a custom classifier class with one hidden layer and a single output unit that produces sigmoidal probability of being in class 1 (planet).
class KOIClassifier(nn.Module):
def __init__(self, input_dim, out_dim):
super(KOIClassifier, self).__init__()
self.linear1 = nn.Linear(input_dim, 32)
self.linear2 = nn.Linear(32, 32)
self.linear3 = nn.Linear(32, out_dim)
def forward(self, xb):
out = self.linear1(xb)
out = F.relu(out)
out = self.linear2(out)
out = F.relu(out)
out = self.linear3(out)
out = torch.sigmoid(out)
return out
I then created a train_model function to optimize the loss using SGD.
def train_model(X, y):
criterion = nn.BCELoss()
optim = torch.optim.SGD(model.parameters(), lr=0.001)
n_epochs = 100
losses = []
for epoch in range(n_epochs):
y_pred = model.forward(X)
loss = criterion(y_pred, y)
losses.append(loss.item())
optim.zero_grad()
loss.backward()
optim.step()
losses = []
for X, y in train_loader:
losses.append(train_model(X, y))
But after performing the optimization over the train_loader, when I try predicting on the train_loader itself, the prediction values are much worse.
for features, y in train_loader:
y_pred = model.predict(features)
break
y_pred
> tensor([[4.5436e-02],
[1.5024e-02],
[2.2579e-01],
[4.2279e-01],
[6.0811e-02],
.....
Why is my model not working properly? Is it the problem with the dataset or am I doing something wrong with implementing the Neural net? I will link my Kaggle notebook because more context might be helpful. Please help.
| You are optimizing many times (100 steps) on the first batch (first samples), then moving to the next samples. This means that your model will overfit those few samples before going to the next batch. Your training will then be very non-smooth; it will diverge and move far from the global optimum.
Usually, in a training loop you should:
go over all samples (this is one epoch)
shuffle your dataset in order to visit your samples in a different order (set your pytorch training loader accordingly)
go back to 1. until you reach the max number of epochs
Also you should not define your optimizer each time (nor your criterion).
Your training loop should look like this:
criterion = nn.BCELoss()
optim = torch.optim.SGD(model.parameters(), lr=0.001)
n_epochs = 100
def train_model():
for X, y in train_loader:
optim.zero_grad()
y_pred = model.forward(X)
loss = criterion(y_pred, y)
loss.backward()
optim.step()
for epoch in range(n_epochs):
train_model()
| https://stackoverflow.com/questions/62413462/ |
pytorch 4d numpy array applying transfroms inside custom dataset | Inside my custom dataset, I want to apply transforms.Compose() to a NumPy array.
My images are in a NumPy array format with shape (num_samples, width, height, channels).
How can I apply the following transforms to the full numpy array?
img_transform = transforms.Compose([
transforms.Scale((224,224)),
transforms.ToTensor(),
transforms.Normalize([0.46, 0.48, 0.51], [0.32, 0.32, 0.32])
])
My attempts end in multiple errors, as the transforms accept a PIL image, not a 4-d NumPy array.
from torchvision import transforms
import numpy as np
import torch
img_transform = transforms.Compose([
transforms.Scale((224,224)),
transforms.ToTensor(),
transforms.Normalize([0.46, 0.48, 0.51], [0.32, 0.32, 0.32])
])
a = np.random.randint(0,256, (299,299,3))
print(a.shape)
img_transform(a)
| All torchvision transforms operate on single images, not batches of images, hence a 4D array cannot be used.
Single images given as NumPy arrays, like in your code example, can be used by converting them to a PIL image. You can simply add transforms.ToPILImage to the beginning of the transformation pipeline, as it converts either a tensor or a NumPy array to a PIL image.
img_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize((224,224)),
transforms.ToTensor(),
transforms.Normalize([0.46, 0.48, 0.51], [0.32, 0.32, 0.32])
])
Note: transforms.Scale is deprecated in favour of transforms.Resize.
In your example you used np.random.randint, which by default uses type int64, but images have to be uint8. Libraries such as OpenCV return uint8 arrays when loading an image.
a = np.random.randint(0,256, (299,299,3), dtype=np.uint8)
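If the 4D array is your whole dataset, a sketch of applying the img_transform above per sample inside a custom Dataset (class and variable names here are just illustrative):
from torch.utils.data import Dataset

class ArrayDataset(Dataset):
    def __init__(self, array, transform):   # array: (num_samples, height, width, channels), uint8
        self.array = array
        self.transform = transform
    def __len__(self):
        return len(self.array)
    def __getitem__(self, index):
        # transform a single HWC uint8 image into a normalized tensor
        return self.transform(self.array[index])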
| https://stackoverflow.com/questions/62413906/ |
How do I clamp the magnitude of a PyTorch tensor? | I know I can use torch.clamp to clamp a tensor's values within some min / max, but how can I do this if I want to clamp by the magnitude (absolute value)? Example:
import torch
t = torch.tensor([-5.0, -250, -1, 0.003, 7, 1238])
min_mag = 1 / 10
max_mag = 100
# desired output:
tensor([ -5.0000, -100.0000, -1.0000, 0.1000, 7.0000, 100.0000])
| Here's one method:
sign = t.sign()
t = t.abs_().clamp_(min_mag, max_mag)
t *= sign
(note: this is using in-place operations)
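An out-of-place variant of the same idea (a sketch), leaving t untouched:
clamped = t.sign() * t.abs().clamp(min_mag, max_mag)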
| https://stackoverflow.com/questions/62415215/ |
Why am I able to change the value of a tensor without the computation graph knowing about it in Pytorch with detach? | I can change the value of a tensor that requires grad without autograd knowing about it:
def error_unexpected_way_to_by_pass_safety():
import torch
a = torch.tensor([1,2,3.], requires_grad=True)
# are detached tensor's leafs? yes they are
a_detached = a.detach()
#a.fill_(2) # illegal, warns you that a tensor which requires grads is used in an inplace op (so it won't be recorded in computation graph so it wont take the right derivative of the forward path as this op won't be in it)
a_detached.fill_(2) # weird that this one is allowed, seems to allow me to bypass the error check from the previous comment...?!
print(f'a = {a}')
print(f'a_detached = {a_detached}')
a.sum().backward()
This throws no errors. Still, I am able to change the contents of a, which is a tensor that requires grad, without autograd knowing about it. This means the computation graph does not know about this op (filling with 2). This seems wrong. Can anyone shed light on what is going on?
| .detach gives you a view on the same data, so modifying the data of the detached tensor modifies the data of the original. You can check this like so:
a.data_ptr() == a_detached.data_ptr() # True
As for why this is how .detach is implemented (as opposed to doing a defensive copy), that's a design question that only the PyTorch authors know the answer to. I assume that it's to save unnecessary copies, but users then need to be aware that they have to copy the tensors themselves if they want to modify the detached ones in-place.
Note that you can also alter the non-detached tensor if you really want to:
a.data.fill_(2)
PyTorch isn't trying to stop you from "hacking" autograd; users still have to be aware of how to use tensors properly so that gradients will be tracked correctly.
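If you want an independent tensor that you can modify freely without touching a, make a detached copy instead (a sketch):
a_copy = a.detach().clone()
a_copy.fill_(2)                               # does not affect a
print(a.data_ptr() == a_copy.data_ptr())      # False, separate storage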
| https://stackoverflow.com/questions/62415251/ |
Updating learning rate with Libtorch 1.5 and optimiser options in C++ | With the release of the 1.5 stable version of the C++ API for PyTorch, there are some changes in some of the object interfaces. For instance, now
optimizer.options.learning_rate();
won't work (here the optimiser being used is Adam) since learning_rate has changed to lr (see https://github.com/pytorch/pytorch/releases) but moreover the optimiser no longer has options (no member named 'options' in 'torch::optim::Adam'). So my question is: how would one run
optimizer.options.learning_rate();
or update the learning rate
optimizer.options.learning_rate(updatedlearningrate);
with the new release? Any help will be appreciated! Thank you
| The optimisers now behave like their Python counterparts and the learning rates need to be set per parameter group.
for (auto& param_group : optimizer.param_groups()) {
    // Static cast needed as options() returns OptimizerOptions (base class)
    static_cast<torch::optim::AdamOptions &>(param_group.options()).lr(new_lr);
}
If you didn't specify separate parameter groups, there will be only a single group and you could directly set its learning rate as suggested in Issue #35640 - How do you change Adam learning rate since the latest commits?:
static_cast<torch::optim::AdamOptions &>(optimizer.param_groups()[0].options()).lr(new_lr)
| https://stackoverflow.com/questions/62415285/ |
RuntimeError: Given groups=1, weight of size [32, 3, 16, 16, 16], expected input[100, 16, 16, 16, 3] to have 3 channels, but got 16 channels instead | RuntimeError: Given groups=1, weight of size [32, 3, 16, 16, 16], expected input[100, 16, 16, 16, 3] to have 3 channels, but got 16 channels instead
This is the portion of code I think where the problem is.
def __init__(self):
super(Lightning_CNNModel, self).__init__()
self.conv_layer1 = self._conv_layer_set(3, 32)
self.conv_layer2 = self._conv_layer_set(32, 64)
self.fc1 = nn.Linear(2**3*64, 128)
self.fc2 = nn.Linear(128, 10) # num_classes = 10
self.relu = nn.LeakyReLU()
self.batch=nn.BatchNorm1d(128)
self.drop=nn.Dropout(p=0.15)
def _conv_layer_set(self, in_c, out_c):
conv_layer = nn.Sequential(
nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
nn.LeakyReLU(),
nn.MaxPool3d((2, 2, 2)),
)
return conv_layer
def forward(self, x):
out = self.conv_layer1(x)
out = self.conv_layer2(out)
out = out.view(out.size(0), -1)
out = self.fc1(out)
out = self.relu(out)
out = self.batch(out)
out = self.drop(out)
out = self.fc2(out)
return out
This is the code I am working on
| nn.Conv3d expects the input to have size [batch_size, channels, depth, height, width]. The first convolution expects 3 channels, but with your input having size [100, 16, 16, 16, 3], that would be 16 channels.
Assuming that your data is given as [batch_size, depth, height, width, channels], you need to swap the dimensions around, which can be done with torch.Tensor.permute:
# From: [batch_size, depth, height, width, channels]
# To: [batch_size, channels, depth, height, width]
input = input.permute(0, 4, 1, 2, 3)
| https://stackoverflow.com/questions/62416819/ |
Custom data loader is returning list in pytorch | I want to get 3 batches of images from 3 different folders. I have written custom data loader in pytorch. but it is returning list that has all the batches instead of single batch at a time.(running in google colab)
#custom data loader
class set(Dataset):
def __init__(self, dataset_input, dataset_expertA, dataset_expertB):
self.dataset1 = dataset_input
self.dataset2 = dataset_expertA
self.dataset3 = dataset_expertB
def __getitem__(self, index):
x1 = self.dataset1[index]
x2 = self.dataset2[index]
x3 = self.dataset3[index]
return x1, x2, x3
def __len__(self):
return len(self.dataset1)
input_path = "/content/gdrive/My Drive/project/input/"
dataset = datasets.ImageFolder(root= input_path, transform=transforms.Compose([
transforms.Resize([64,64]),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]))
expertA_path = "/content/gdrive/My Drive/project/expertA/"
datasetA = datasets.ImageFolder(root= expertA_path, transform=transforms.Compose([
transforms.Resize([64,64]),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]))
expertB_path = "/content/gdrive/My Drive/project/expertB/"
datasetB = datasets.ImageFolder(root= expertB_path, transform=transforms.Compose([
transforms.Resize([64,64]),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]))
data = set(dataset, datasetA, datasetB)
dataloader = torch.utils.data.DataLoader(data, batch_size=64,
shuffle=True, num_workers=2)
for i, (inp, expA, expB) in enumerate(dataloader):
print(inp.shape)
break
This prints an error that inp is a list, and when I print(inp[0].shape) I get the proper shape. I think inp contains all batches, i.e. inp[0], inp[1]...
What mistake am I making in the data loader code?
| datasets.ImageFolder returns a tuple of (image, label), hence inp is also a tuple, where inp[0] are the images and inp[1] their corresponding labels. The same applies to expA and expB.
If you only want the images without the labels, you can ignore the labels and just return the images when accessing the data in your custom dataset:
def __getitem__(self, index):
image1, label1 = self.dataset1[index]
image2, label2 = self.dataset2[index]
image3, label3 = self.dataset3[index]
return image1, image2, image3
| https://stackoverflow.com/questions/62418244/ |
Pytorch: Dimensions for cross entropy is correct but somehow wrong for MSE? | I was creating a program that would take in as input the Fashion MNIST set and I was tweaking around with my model to see how different parameters would change the accuracy.
One of the tweaks I made to my model was to change my model's loss function from cross entropy to MSE.
# The code above is miscellaneous training data import code
trainloader = torch.utils.data.DataLoader(trainset, batch_size = 64, shuffle = True, num_workers=4)
testloader = torch.utils.data.DataLoader(testset, batch_size = 64, shuffle = True, num_workers=4)
dataiter = iter(trainloader)
images, labels = dataiter.next()
from torch import nn, optim
import torch.nn.functional as F
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 10),
nn.LogSoftmax(dim = 1)
)
model.to(device)
# Define the loss
criterion = nn.MSELoss()
# Define the optimizer
optimizer = optim.Adam(model.parameters(), lr = 0.001)
# Define the epochs
epochs = 5
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten Fashion-MNIST images into a 784 long vector
images = images.to(device)
labels = labels.to(device)
images = images.view(images.shape[0], -1)
# Training pass
optimizer.zero_grad()
output = model.forward(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
My model worked without any problems when using cross entropy loss, but when I changed to MSE loss, the interpreter complained and said that my tensors were different sizes and thus could not be computed.
<class 'torch.Tensor'>
torch.Size([64, 1, 28, 28])
torch.Size([64])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-62-ec6942122f02> in <module>
44 output = model.forward(images)
45
---> 46 loss = criterion(output, labels)
47 loss.backward()
48 optimizer.step()
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
429
430 def forward(self, input, target):
--> 431 return F.mse_loss(input, target, reduction=self.reduction)
432
433
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in mse_loss(input, target, size_average, reduce, reduction)
2213 ret = torch.mean(ret) if reduction == 'mean' else torch.sum(ret)
2214 else:
-> 2215 expanded_input, expanded_target = torch.broadcast_tensors(input, target)
2216 ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
2217 return ret
/opt/conda/lib/python3.7/site-packages/torch/functional.py in broadcast_tensors(*tensors)
50 [0, 1, 2]])
51 """
---> 52 return torch._C._VariableFunctions.broadcast_tensors(tensors)
53
54
RuntimeError: The size of tensor a (10) must match the size of tensor b (64) at non-singleton dimension 1
I tried reshaping my tensors and creating new arrays as placeholders for my output array, yet seem to be getting nowhere.
Why does cross entropy loss work without any errors, yet MSE does not?
| nn.CrossEntropyLoss and nn.MSELoss are completely different loss functions with fundamentally different rationale behind them.
nn.CrossEntropyLoss is a loss function for discrete labeling tasks. Therefore it expects as inputs a prediction of label probabilities and targets as ground-truth discrete labels: x shape is nxc (where c is the number of labels) and y is of shape n of type integer, each target takes values in the range {0,...,c-1}.
In contrast, nn.MSELoss is a loss function for regression tasks. Therefore it expects both predictions and targets to be of the same shape and data type. That is, if your prediction is of shape nxc the target should also be of shape nxc (and not just n as in the cross-entropy case).
If you are insisting on using MSE loss instead of cross entropy, you will need to convert the target integer labels you currently have (of shape n) into 1-hot vectors of shape nxc and only then compute the MSE loss between your predictions and the generated one-hot targets.
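A minimal sketch of that conversion, assuming 10 classes and the integer labels from your training loop (note this only fixes the shape mismatch; MSE against one-hot targets with a LogSoftmax output remains an unusual choice):
import torch.nn.functional as F

targets_one_hot = F.one_hot(labels, num_classes=10).float()   # shape: [batch_size, 10]
loss = criterion(output, targets_one_hot)                      # criterion = nn.MSELoss()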
| https://stackoverflow.com/questions/62422644/ |
Why does output shape in a simple Elman RNN depend on the sequence length(while hidden state shape doesn't)? | I am learning about RNNs, and am trying to code one up using PyTorch.
I have some trouble understanding the output dimensions
Here is some code for a simple RNN architecture
class RNN(nn.Module):
def __init__(self, input_size, hidden_dim, n_layers):
super(RNN, self).__init__()
self.hidden_dim=hidden_dim
self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
def forward(self, x, hidden):
r_out, hidden = self.rnn(x, hidden)
return r_out, hidden
So, what I understand is the hidden_dim is the number blocks I will have in my hidden layer, and essentially the number of features in the output and in the hidden state.
I create some dummy data, to test it
test_rnn = RNN(input_size=1, hidden_dim=4, n_layers=1)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, 6, 3)
data = np.sin(time_steps)
data.resize((3, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Hidden state size: ', test_h.size())
print('Output size: ', test_out.size())
What I get is
Input size: torch.Size([1, 3, 1])
Hidden state size: torch.Size([1, 1, 4])
Output size: torch.Size([1, 3, 4])
So I understand that the shape of x is determined like so
x = (batch_size, seq_length, input_size).. so 1 bath size, and input of 1 feature and 3 time steps(sequence length).
For hidden state, like so hidden = (n_layers, batch_size, hidden_dim).. so I had 1 layer, batch size 1, and 4 blocks in my hidden layer.
What I don't get is the RNN output. From the documentation, r_out = (batch_size, time_step, hidden_size). Wasn't the output supposed to be the same as the hidden state that was output from the hidden units? That is, if I have 4 units in my hidden layer, I would expect it to output 4 numbers for the hidden state, and 4 numbers for the output. Why is the time step a dimension of the output? After all, each hidden unit takes in some numbers, outputs a state S and output Y, and both of these are equal, yes? I tried a diagram, this is what I came up with. Help me understand what part of it I'm doing wrong.
So TL;DR
Why does the output shape in a simple Elman RNN depend on the sequence length (while the hidden state shape doesn't)? In the diagram I drew, I see both of them being the same.
| In the PyTorch API, the output is a sequence of hidden states during the RNN computation, i.e., there is one hidden state vector per input vector. The hidden state is the last hidden state, the state the RNN ends with after processing the input, so test_out[:, -1, :] = test_h.
Vector y in your diagrams is the same as the hidden state Ht; it indeed has 4 numbers, but the state is different for every time step, so you have 4 numbers for every time step.
The reason why PyTorch separates the sequence of outputs = hidden states (it's not the same in LSTMs, though) is that you can have a batch of sequences of different lengths. In that case, the final state is not simply test_out[:, -1, :], because you need to select final states based on the lengths of individual sequences.
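For the fixed-length toy example above, though, you can check the equality directly (the layer dimension of the hidden state just needs to be squeezed away):
print(torch.allclose(test_out[:, -1, :], test_h.squeeze(0)))   # True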
| https://stackoverflow.com/questions/62424364/ |
RuntimeError: Expected 4-dimensional input for 4-dimensional weight X, but got 3-dimensional input of size Y instead | I am building a CNN to do image classification on the EMNIST dataset.
To do so, I have the following datasets:
import scipy .io
emnist = scipy.io.loadmat(DIRECTORY + '/emnist-letters.mat')
data = emnist ['dataset']
X_train = data ['train'][0, 0]['images'][0, 0]
X_train = X_train.reshape((-1,28,28), order='F')
y_train = data ['train'][0, 0]['labels'][0, 0]
X_test = data ['test'][0, 0]['images'][0, 0]
X_test = X_test.reshape((-1,28,28), order = 'F')
y_test = data ['test'][0, 0]['labels'][0, 0]
With shape:
X_train = (124800, 28, 28)
y_train = (124800, 1)
X_test = (20800, 28, 28)
y_test = (20800, 1)
Note that the pictures are grayscale, so the colors are represented with only one number.
Which I prepare further as follows:
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
test_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
My model looks as follows:
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
self.cnn_layers = Sequential(
# Defining a 2D convolution layer
Conv2d(1, 4, kernel_size=3, stride=1, padding=1),
BatchNorm2d(4),
ReLU(inplace=True),
MaxPool2d(kernel_size=2, stride=2),
# Defining another 2D convolution layer
Conv2d(4, 4, kernel_size=3, stride=1, padding=1),
BatchNorm2d(4),
ReLU(inplace=True),
MaxPool2d(kernel_size=2, stride=2),
)
self.linear_layers = Sequential(
Linear(4 * 7 * 7, 10)
)
# Defining the forward pass
def forward(self, x):
x = self.cnn_layers(x)
x = x.view(x.size(0), -1)
x = self.linear_layers(x)
return x
model = CNNModel()
The code below is part of the code that I use to train my model:
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = Variable(images)
labels = Variable(labels)
# Forward pass to get output/logits
outputs = model(images)
However, when executing my code, I get the following error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [4, 1, 3, 3], but got 3-dimensional input of size [100, 28, 28] instead
So a 4D input is expected while my input is 3D. What can I do so a 3D model is expected rather than a 4D model?
Here a similar question is asked; however, I don't see how I can translate this to my code.
| The convolution expects the input to have size [batch_size, channels, height, width], but your images have size [batch_size, height, width], the channel dimension is missing. Greyscale is represented with a single channel and you have correctly set the in_channels of the first convolutions to 1, but your images don't have the matching dimension.
You can easily add the singular dimension with torch.unsqueeze.
Also, please don't use Variable, it was deprecated with PyTorch 0.4.0, which was released over 2 years ago, and all of its functionality has been merged into the tensors.
for i, (images, labels) in enumerate(train_loader):
# Add a single channel dimension
# From: [batch_size, height, width]
# To: [batch_size, 1, height, width]
images = images.unsqueeze(1)
# Forward pass to get output/logits
outputs = model(images)
| https://stackoverflow.com/questions/62427615/ |
RuntimeError: value cannot be converted to type uint8_t without overflow: -0.192746 | I am new to Pytorch and am aiming to do an image classification task using a CNN based on the EMNIST dataset.
I read my data in as follows:
emnist = scipy.io.loadmat(DATA_DIR + '/emnist-letters.mat')
data = emnist ['dataset']
X_train = data ['train'][0, 0]['images'][0, 0]
X_train = X_train.reshape((-1,28,28), order='F')
y_train = data ['train'][0, 0]['labels'][0, 0]
X_test = data ['test'][0, 0]['images'][0, 0]
X_test = X_test.reshape((-1,28,28), order = 'F')
y_test = data ['test'][0, 0]['labels'][0, 0]
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
test_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))
batch_size = 128
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
Then, I found the following configurations (that I still have to adjust to fit to my data):
class CNNModel(nn.Module):
def __init__(self):
super(CNNModel, self).__init__()
# Convolution 1
self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=0)
self.relu1 = nn.ReLU()
# Max pool 1
self.maxpool1 = nn.MaxPool2d(kernel_size=2)
# Convolution 2
self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=0)
self.relu2 = nn.ReLU()
# Max pool 2
self.maxpool2 = nn.MaxPool2d(kernel_size=2)
# Fully connected 1 (readout)
self.fc1 = nn.Linear(32 * 4 * 4, 10)
def forward(self, x):
# Convolution 1
out = self.cnn1(x)
out = self.relu1(out)
# Max pool 1
out = self.maxpool1(out)
# Convolution 2
out = self.cnn2(out)
out = self.relu2(out)
# Max pool 2
out = self.maxpool2(out)
# Resize
# Original size: (100, 32, 7, 7)
# out.size(0): 100
# New out size: (100, 32*7*7)
out = out.view(out.size(0), -1)
# Linear function (readout)
out = self.fc1(out)
return out
model = CNNModel()
criterion = nn.CrossEntropyLoss()
To train the model, I use the following code:
iter = 0
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Add a single channel dimension
# From: [batch_size, height, width]
# To: [batch_size, 1, height, width]
images = images.unsqueeze(1)
# Forward pass to get output/logits
outputs = model(images)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = model(images)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
if iter % 500 == 0:
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
images = images.unsqueeze(1)
# Forward pass only to get logits/output
outputs = model(images)
# Get predictions from the maximum value
_, predicted = torch.max(outputs.data, 1)
# Total number of labels
total += labels.size(0)
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
# Print Loss
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.data[0], accuracy))
However, when I run this, I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-27-1fbdd53d1194> in <module>()
12
13 # Forward pass to get output/logits
---> 14 outputs = model(images)
15
16 # Clear gradients w.r.t. parameters
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
348 _pair(0), self.dilation, self.groups)
349 return F.conv2d(input, weight, self.bias, self.stride,
--> 350 self.padding, self.dilation, self.groups)
351
352 def forward(self, input):
RuntimeError: value cannot be converted to type uint8_t without overflow: -0.0510302
I found this question already and think that the solution might work for me as well. However, I don't understand where in my code I can implement this.
What can I do to overcome this problem?
P.S.
I have used the following import statements:
import scipy .io
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
from torch.autograd import Variable
import cv2
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import os
from PIL import Image
from PIL import ImageOps
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor
from torch.nn import Sequential
from torch.nn import Conv2d
from torch.nn import BatchNorm2d
from torch.nn import MaxPool2d
from torch.nn import ReLU
from torch.nn import Linear
| What fixed my problem was replacing out = self.cnn1(x) with out = self.cnn1(x.float())
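An alternative (a sketch) is to convert the arrays once, when building the datasets, instead of casting inside forward; the .squeeze().long() on the labels is an extra assumption so they match the 1-D int64 targets that nn.CrossEntropyLoss expects:
train_dataset = torch.utils.data.TensorDataset(
    torch.from_numpy(X_train).float(),
    torch.from_numpy(y_train).squeeze().long())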
| https://stackoverflow.com/questions/62432572/ |
Python, class dataset, how to concatenate images with their respective labels in pytorch | I am new to PyTorch, and in the last couple of days I have been struggling with the class Dataset that lets you build your custom dataset.
I am working with this dataset (https://www.kaggle.com/ianmoone0617/flower-goggle-tpu-classification/kernels); the problem is that it has the images and their labels in separate folders, and I can't figure out how to concatenate them.
This is the code I am using:
class MyDataset(Dataset):
def __init__(self, csv_file, root_dir, transform=None):
self.labels = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.labels)
def __getitem__(self, index):
if torch.is_tensor(index):
index = index.tolist()
image_name = os.path.join(self.root_dir, self.labels.iloc[index, 0])
image = io.imread(image_name)
if self.transform:
image = self.transform(image)
return (image, labels)
While the structure of the folders is the following one:
I really want to understand this so thank you in advance guys!!
| Seems like you're nearly there. There are many ways to deal with this. For example, you could read both csv files during initialization to build a dictionary which maps the label string in the flowers_idx.csv to the label index specified in flowers_label.csv.
import os
import pandas as pd
import torch
from torchvision.datasets.folder import default_loader
from torch.utils.data import Dataset
class MyDataset(Dataset):
def __init__(self, data_csv, label_csv, root_dir, transform=None):
self.data_entries = pd.read_csv(data_csv)
self.root_dir = root_dir
self.transform = transform
label_map = pd.read_csv(label_csv)
self.label_str_to_idx = {label_str: label_idx for label_idx, label_str in label_map.iloc}
def __len__(self):
        return len(self.data_entries)
def __getitem__(self, index):
if torch.is_tensor(index):
index = index.item()
label = self.label_str_to_idx[self.data_entries.iloc[index, 1]]
image_path = os.path.join(self.root_dir, f'{self.data_entries.iloc[index, 0]}.jpeg')
# torchvision datasets generally return PIL image rather than numpy ndarray
image = default_loader(image_path)
# alternative to load ndarray using skimage.io
# image = io.imread(image_path)
if self.transform:
image = self.transform(image)
return (image, label)
Note that this returns PIL images rather than ndarrays since that's generally what is returned by torchvision datasets. This is also nice since many of the torchvision transforms can only be appled to PIL images.
For now a simple use case could be:
import torchvision.transforms as tt
dataset_dir = '/home/jodag/datasets/527293_966816_bundle_archive'
# TODO add more transforms/data-augmentation etc...
transform = tt.Compose((
tt.ToTensor(),
))
dataset = MyDataset(
os.path.join(dataset_dir, 'flowers_idx.csv'),
os.path.join(dataset_dir, 'flowers_label.csv'),
os.path.join(dataset_dir, 'flower_tpu/flower_tpu/flowers_google/flowers_google'),
transform)
image, label = dataset[0]
During training or validation you would probably use a DataLoader to sample the dataset.
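For example (a sketch, the batch size is arbitrary):
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, shuffle=True)
for images, labels in loader:
    pass  # training / validation step goes here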
| https://stackoverflow.com/questions/62434037/ |
Getting started: Huggingface Model Cards | I just recently started looking into the huggingface transformer library.
When I tried to get started using the model card code at e.g. community model
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
However, I got the following error:
Traceback (most recent call last):
File "test.py", line 2, in <module>
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
File "/Users/Lukas/miniconda3/envs/nlp/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 124, in from_pretrained
"'xlm', 'roberta', 'ctrl'".format(pretrained_model_name_or_path))
ValueError: Unrecognized model identifier in emilyalsentzer/Bio_ClinicalBERT. Should contains one of 'bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm', 'roberta', 'ctrl'
If I try a different tokenizer such as "baykenney/bert-base-gpt2detector-topp92" I get the following error:
OSError: Model name 'baykenney/bert-base-gpt2detector-topp92' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'baykenney/bert-base-gpt2detector-topp92' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
Did I miss anything to get started? I feel like the model cards indicate that these three lines of code should be enough to get started.
I am using Python 3.7 and the transformer library version 2.1.1 and pytorch 1.5.
| Please update your transformers library to at least 2.4.0. You should create a new conda environment and install all your packages directly from pypi with pip to get the most recent version (currently 2.11.0).
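A quick way to check which version you currently have installed (the answer above suggests at least 2.4.0, since older releases cannot resolve community identifiers such as emilyalsentzer/Bio_ClinicalBERT):
import transformers
print(transformers.__version__)  # upgrade with e.g. `pip install --upgrade transformers` if this is older than 2.4.0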
| https://stackoverflow.com/questions/62434075/ |
Where in the code of pytorch or huggingface/transformer label gets "renamed" into labels? | My question concerns the example, available in the great huggingface/transformers library.
I am using a notebook, provided by library creators as a starting point for my pipeline. It presents a pipeline of finetuning a BERT for Sentence Classification on Glue dataset.
When getting into the code, I noticed a very weird thing, which I cannot explain.
In the example, input data is introduced to the model as the instances of the InputFeatures class from here:
This class has 4 attributes, including the label attribute:
class InputFeatures:
...
input_ids: List[int]
attention_mask: Optional[List[int]] = None
token_type_ids: Optional[List[int]] = None
label: Optional[Union[int, float]] = None
which are later passed as a dictionary of inputs to the forward() method of the model. This is done by the Trainer class, for example in the lines 573-576 here:
def _training_step(
self, model: nn.Module, inputs: Dict[str, torch.Tensor], optimizer: torch.optim.Optimizer
) -> float:
model.train()
for k, v in inputs.items():
inputs[k] = v.to(self.args.device)
outputs = model(**inputs)
However, the forward() method expects labels (note the plural form) input parameter (taken from here):
def forward(
self,
input_ids=None,
attention_mask=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
):
So my question is where does the label become labels in this pipeline?
To give some extra info on the issue, I created my own pipeline, which uses nothing, related, with Glue data and pipe, basically it relies only on the Trainer class of transformers. I even use another model (Flaubert). I replicated the InputFeature class and my code works for both cases below:
class InputFeature:
def __init__(self, text, label):
self.input_ids = text
self.label = label
class InputFeaturePlural:
def __init__(self, text, label):
self.input_ids = text
self.labels = label
But it does not work if I name the second attribute as self.labe or by any other names. Why is it possible to use both attribute names?
It's not like it is extremely important in my case, but I feel uncomfortable passing around the data in the variable, which "changes name" somewhere along the way.
| The rename happens in the collator. In the trainer init, when data_collator is None, a default one is used:
class Trainer:
# ...
def __init__(...):
# ...
self.data_collator = data_collator if data_collator is not None else default_data_collator
# ...
FYI, the self.data_collator is later used when you get the dataloader:
data_loader = DataLoader(
self.train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler,
collate_fn=self.data_collator, # <-- here
drop_last=self.args.dataloader_drop_last,
)
The default collator has a special handling for labels, which does this renaming, if needed:
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
if hasattr(first, "label") and first.label is not None:
if type(first.label) is int:
labels = torch.tensor([f.label for f in features], dtype=torch.long)
else:
labels = torch.tensor([f.label for f in features], dtype=torch.float)
batch = {"labels": labels} # <-- here is where it happens
elif hasattr(first, "label_ids") and first.label_ids is not None:
if type(first.label_ids[0]) is int:
labels = torch.tensor([f.label_ids for f in features], dtype=torch.long)
else:
labels = torch.tensor([f.label_ids for f in features], dtype=torch.float)
batch = {"labels": labels}
else:
batch = {}
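To see the renaming in action, a small check along these lines should work (just a sketch; depending on your transformers version, default_data_collator may need to be imported from transformers.data.data_collator instead of the top-level package):
from transformers import InputFeatures, default_data_collator
features = [InputFeatures(input_ids=[101, 2023, 102], label=1),
            InputFeatures(input_ids=[101, 4937, 102], label=0)]
batch = default_data_collator(features)
print(batch.keys())  # the singular `label` attribute of each feature shows up as the plural `labels` key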
| https://stackoverflow.com/questions/62435022/ |
PyTorch extracting tensor elements with boolean mask (retaining dimensions) | Say, I have a PyTorch 2x2 tensor, and I also generated a boolean tensor of the same dimension (2x2). I want to use this as a mask.
For example, if I have a tensor:
tensor([[1, 3],
[4, 7]])
And if my mask is:
tensor([[ True, False],
[False, True]])
I want to use that mask to get a tensor where elements corresponding to True from my original tensor are retained, whereas elements corresponding to False are set to zero in the output tensor.
Expected Output:
tensor([[1, 0],
[0, 7]])
Any help is appreciated. Thanks!
| Assume you have :
t = torch.Tensor([[1,2], [3,4]])
mask = torch.Tensor([[True,False], [False,True]])
You can use the mask by:
masked_t = t * mask
and the output will be:
tensor([[1., 0.],
[0., 4.]])
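If you need to keep the original dtype (multiplying by a float mask turns an integer tensor into a float one), an alternative is torch.where with a boolean mask:
masked_t = torch.where(mask.bool(), t, torch.zeros_like(t))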
| https://stackoverflow.com/questions/62436378/ |
Extending PyTorch nn.Sequential class | I'm pretty new to OOP in Python, and rusty in general. I'd like to extend PyTorch's 'nn.Sequential' object in such a way that passing it a tuple of containing the number of node in each layer automatically generates an OrderedDict according to those nodes. For a functional example:
layers = (784, 392, 196, 98, 10)
n_layers = len(layers)
modules = OrderedDict()
# Layer definitions for inner layers:
for i in range(n_layers - 2):
modules[f'fc{i}'] = nn.Linear(layers[i], layers[i+1])
modules[f'relu{i}'] = nn.ReLU()
# Definition for output layer:
modules['fc_out'] = nn.Linear(layers[-2], layers[-1])
modules['smax_out'] = nn.LogSoftmax(dim=1)
# Define model and check attributes:
model = nn.Sequential(modules)
So, rather than pass the 'OrderedDict' object when initializing nn.Sequential, I want my class to take the tuple instead.
class Network(nn.Sequential):
def __init__(self, n_nodes):
super().__init__()
**** INSERT LOGIC FROM LAST SNIPPET ***
So it seems like this won't work, because when my Network class calls super().__init__(), it's going to want to see the dictionary of layer modules. How might I go about writing my own network in such a way that it circumvents this problem, but still has all the functionality of PyTorch's Sequential object?
I was thinking along the lines of something like:
class Network(nn.Sequential):
def __init__(self, layers):
super().__init__(self.init_modules(layers))
def init_modules(self, layers):
n_layers = len(layers)
modules = OrderedDict()
# Layer definitions for inner layers:
for i in range(n_layers - 2):
modules[f'fc{i}'] = nn.Linear(layers[i], layers[i+1])
modules[f'relu{i}'] = nn.ReLU()
# Definition for output layer:
modules['fc_out'] = nn.Linear(layers[-2], layers[-1])
modules['smax_out'] = nn.LogSoftmax(dim=1)
return modules
I'm not sure if this sort of thing is allowed and/or good practice in Python.
| Your implementation is allowed and good.
Alternatively, you can call super().__init__() with no arguments, then use self.add_module(key, module) in a loop to attach Linear, ReLU, or whatever else subsequently. In this way __init__ can cover the job of init_modules on its own.
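For example, a sketch of that alternative:
class Network(nn.Sequential):
    def __init__(self, layers):
        super().__init__()   # start from an empty Sequential
        n_layers = len(layers)
        for i in range(n_layers - 2):
            self.add_module(f'fc{i}', nn.Linear(layers[i], layers[i + 1]))
            self.add_module(f'relu{i}', nn.ReLU())
        self.add_module('fc_out', nn.Linear(layers[-2], layers[-1]))
        self.add_module('smax_out', nn.LogSoftmax(dim=1))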
| https://stackoverflow.com/questions/62438892/ |
When extending this class, one of its methods now requires an extra argument. Why? | When I want to do a forward pass with an initialized model = nn.Sequential object, I simply use:
out = model(X)
# OR
out = model.forward(X)
However, I have tried extending the Sequential class, and now both of these methods suddenly require a second argument. For example, note in the following method my call to self(x):
def train(self, trainloader, epochs):
for e in range(epochs):
for x, y in trainloader:
x = x.view(x.shape[0], -1)
self.optimizer.zero_grad()
loss = self.criterion(self(x), y) # CALL OCCURS HERE
loss.backward()
self.optimizer.step()
This code now gives me TypeError: forward() missing 1 required positional argument: 'target'.
My Question: Since I have done nothing but extend the class, why is this?
Code for full class below:
class Network(nn.Sequential):
def __init__(self, layers):
super().__init__(self.init_modules(layers))
self.criterion = nn.NLLLoss()
self.optimizer = optim.Adam(self.parameters(), lr=0.003)
def init_modules(self, layers):
n_layers = len(layers)
modules = OrderedDict()
# Layer definitions for input and inner layers:
for i in range(n_layers - 2):
modules[f'fc{i}'] = nn.Linear(layers[i], layers[i+1])
modules[f'relu{i}'] = nn.ReLU()
# Definition for output layer:
modules['fc_out'] = nn.Linear(layers[-2], layers[-1])
modules['smax_out'] = nn.LogSoftmax(dim=1)
return modules
def train(self, trainloader, epochs):
for e in range(epochs):
for x, y in trainloader:
x = x.view(x.shape[0], -1)
self.optimizer.zero_grad()
loss = self.criterion(self(x), y)
loss.backward()
self.optimizer.step()
Full stack trace:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-63-490e0b9eef22> in <module>
----> 1 model2.train(trainloader, 5, plot_loss=True)
<ipython-input-61-e173e5672f18> in train(self, trainloader, epochs, plot_loss)
32 x = x.view(x.shape[0], -1)
33 self.optimizer.zero_grad()
---> 34 loss = self.criterion(self(x), y)
35 loss.backward()
36 self.optimizer.step()
c:\program files\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
c:\program files\python38\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
98 def forward(self, input):
99 for module in self:
--> 100 input = module(input)
101 return input
102
c:\program files\python38\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
TypeError: forward() missing 1 required positional argument: 'target'
| Q: Since I have done nothing but extend the class, why is this?
Actually, you have. You chose the "wrong" base class. The forward of the nn.Sequential simply goes through all modules, and when you defined:
self.criterion = nn.NLLLoss()
you registered the loss as a module. Therefore, when you call self(x), you're actually calling self.criterion(x) at some point, hence the TypeError.
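One way to fix it (just a sketch of the idea): create the loss where you use it, so it never becomes a child module of the Sequential. The optimizer is fine as it is, because torch.optim optimizers are not nn.Modules and are therefore never registered.
class Network(nn.Sequential):
    def __init__(self, layers):
        super().__init__(self.init_modules(layers))   # init_modules stays as in the question
        self.optimizer = optim.Adam(self.parameters(), lr=0.003)
    def train(self, trainloader, epochs):
        criterion = nn.NLLLoss()   # a local object, so self(x) now only runs the actual layers
        for e in range(epochs):
            for x, y in trainloader:
                x = x.view(x.shape[0], -1)
                self.optimizer.zero_grad()
                loss = criterion(self(x), y)
                loss.backward()
                self.optimizer.step()
Keep in mind that nn.Module already defines a train() method for switching to training mode, so you may also want to give this method a different name.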
| https://stackoverflow.com/questions/62439514/ |
Detectron2 - Extract region features at a threshold for object detection | I am trying to extract region features where class detection is higher than some threshold using the detectron2 framework. I will be using these features later in my pipeline (similar to: VilBert section 3.1 Training ViLBERT) So far I have trained a Mask R-CNN with this config and fine-tuned it on some custom data. It performs well. What I would like to do is extract the features from my trained model for the produced bounding box.
EDIT: I looked at what the users who closed my post wrote and tried to refine it. Although the reader needs context as to what I am doing. If you have any idea on how I can make the question better or if you have some insight as to how to do what I am trying to do your feedback is welcome!
I have a question:
Why am I only getting one prediction instance, but when I look
at the prediction CLS scores there are more than 1 which passes the
threshold?
I believe this is the correct way of producing the ROI features:
images = ImageList.from_tensors(lst[:1], size_divisibility=32).to("cuda") # preprocessed input tensor
#setup config
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.SOLVER.IMS_PER_BATCH = 1
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (pnumonia)
#Just run these lines if you have the trained model im memory
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set the testing threshold for this model
#build model
model = build_model(cfg)
DetectionCheckpointer(model).load("output/model_final.pth")
model.eval()#make sure its in eval mode
#run model
with torch.no_grad():
features = model.backbone(images.tensor.float())
proposals, _ = model.proposal_generator(images, features)
instances = model.roi_heads._forward_box(features, proposals)
Then
pred_boxes = [x.pred_boxes for x in instances]
rois = model.roi_heads.box_pooler([features[f] for f in model.roi_heads.in_features], pred_boxes)
This should be my ROI features.
What I am very confused about is whether, instead of using the bounding boxes produced at inference, I could use the proposals and the proposal_boxes with their class scores to get the top n features for this image. So I have tried the following:
proposal_boxes = [x.proposal_boxes for x in proposals]
proposal_rois = model.roi_heads.box_pooler([features[f] for f in model.roi_heads.in_features], proposal_boxes)
#found here: https://detectron2.readthedocs.io/_modules/detectron2/modeling/roi_heads/roi_heads.html
box_features = model.roi_heads.box_head(proposal_rois)
predictions = model.roi_heads.box_predictor(box_features)
pred_instances, losses = model.roi_heads.box_predictor.inference(predictions, proposals)
Where I should be getting my proposal box features and its cls in my predictions object. Inspecting this predictions object I see the scores for each box:
CLS Scores in Predictions object
(tensor([[ 0.6308, -0.4926],
[-1.6662, 1.5430],
[-0.2080, 0.4856],
...,
[-6.9698, 6.6695],
[-5.6361, 5.4046],
[-4.4918, 4.3899]], device='cuda:0', grad_fn=<AddmmBackward>),
After softmaxing and placing these cls scores in a dataframe and setting a threshold of 0.6 I get:
pred_df = pd.DataFrame(predictions[0].softmax(-1).tolist())
pred_df[pred_df[0] > 0.6]
0 1
0 0.754618 0.245382
6 0.686816 0.313184
38 0.722627 0.277373
and in my predictions object I get the same top score, but only 1 instance rather than 2 (I set cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7):
Prediction Instances:
[Instances(num_instances=1, image_height=800, image_width=800, fields=[pred_boxes: Boxes(tensor([[548.5992, 341.7193, 756.9728, 438.0507]], device='cuda:0',
grad_fn=<IndexBackward>)), scores: tensor([0.7546], device='cuda:0', grad_fn=<IndexBackward>), pred_classes: tensor([0], device='cuda:0')])]
The predictions also contain Tensor: Nx4 or Nx(Kx4) bounding box regression deltas. which I don't exactly know what they do and look like:
Bounding box regression deltas in Predictions object
tensor([[ 0.2502, 0.2461, -0.4559, -0.3304],
[-0.1359, -0.1563, -0.2821, 0.0557],
[ 0.7802, 0.5719, -1.0790, -1.3001],
...,
[-0.8594, 0.0632, 0.2024, -0.6000],
[-0.2020, -3.3195, 0.6745, 0.5456],
[-0.5542, 1.1727, 1.9679, -2.3912]], device='cuda:0',
grad_fn=<AddmmBackward>)
Something else strange is that my proposal boxes and my prediction boxes are different but similar:
Proposal bounding boxes
[Boxes(tensor([[532.9427, 335.8969, 761.2068, 438.8086],#this box vs the instance box
[102.7041, 352.5067, 329.4510, 440.7240],
[499.2719, 317.9529, 764.1958, 448.1386],
...,
[ 25.2890, 379.3329, 28.6030, 429.9694],
[127.1215, 392.6055, 328.6081, 489.0793],
[164.5633, 275.6021, 295.0134, 462.7395]], device='cuda:0'))]
| You are almost there. Looking at roi_heads.box_predictor.inference() you will see that it doesn't simply sort the scores of the box candidates. First, it applies box deltas to readjust the proposal boxes. Then it computes Non-Maximum Suppression to remove overlapping boxes (while also applying other hyper-parameters such as the score threshold). Finally, it ranks the top-k boxes according to their scores. That probably explains why your method produces the same box scores but a different number of output boxes with different coordinates.
Back to your original question, here is the way to extract the features of the proposed boxes in one inference pass:
image = cv2.imread('my_image.jpg')
height, width = image.shape[:2]
image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
inputs = [{"image": image, "height": height, "width": width}]
with torch.no_grad():
images = model.preprocess_image(inputs) # don't forget to preprocess
features = model.backbone(images.tensor) # set of cnn features
proposals, _ = model.proposal_generator(images, features, None) # RPN
features_ = [features[f] for f in model.roi_heads.box_in_features]
box_features = model.roi_heads.box_pooler(features_, [x.proposal_boxes for x in proposals])
box_features = model.roi_heads.box_head(box_features) # features of all 1k candidates
predictions = model.roi_heads.box_predictor(box_features)
pred_instances, pred_inds = model.roi_heads.box_predictor.inference(predictions, proposals)
pred_instances = model.roi_heads.forward_with_given_boxes(features, pred_instances)
# output boxes, masks, scores, etc
pred_instances = model._postprocess(pred_instances, inputs, images.image_sizes) # scale box to orig size
# features of the proposed boxes
feats = box_features[pred_inds]
| https://stackoverflow.com/questions/62442039/ |
Dequantize values to their original prior to quantization | The paper "Natural Language Processing with Small Feed-Forward Networks" https://arxiv.org/pdf/1708.00214.pdf states:
I've implemented quantization as per the above equations in python:
b = 128
embedding_matrix = [[20000,3000,1000],[1999999,20000,1999999], [20000,3000,1000]]
scaled = [ abs(round( (1 / (b - 1) * max(e)) , 3)) for e in embedding_matrix]
print(scaled)
i = 0
quantized = []
for e in embedding_matrix :
for v in e :
quantized.append((v , math.floor(.5 + ( (v / scaled[i]) + b) )))
i = i + 1
quantized
Running this code quantized is set to :
[(20000, 255),
(3000, 147),
(1000, 134),
(1999999, 255),
(20000, 129),
(1999999, 255),
(20000, 255),
(3000, 147),
(1000, 134)]
How to de-quantize back to the original values prior to quantization ?
Reading https://www.tensorflow.org/api_docs/python/tf/quantization/dequantize describes :
tf.quantization.dequantize(
input, min_range, max_range, mode='MIN_COMBINED', name=None, axis=None,
narrow_range=False, dtype=tf.dtypes.float32
)
[min_range, max_range] are scalar floats that specify the range for the output. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.
and the PyTorch docs: https://pytorch.org/docs/stable/quantization.html
Seems to implement quantize differently to above implementation ?
| What they are doing in the paper is roughly this:
import numpy as np
b = 128
embedding_matrix = np.array([[20000,3000,1000,1000],[1999999,20000,1999999,1999999], [20000,3000,1000,1000]])
scales = (np.abs(embedding_matrix).max(axis=1) / (b-1)).reshape(-1, 1)
quantized = (embedding_matrix / scales + b + 0.5).astype(np.uint8)
dequantized = (quantized - b) * scales
print(quantized)
print(dequantized)
Output:
[[255 147 134 134]
[255 129 255 255]
[255 147 134 134]]
[[2.00000000e+04 2.99212598e+03 9.44881890e+02 9.44881890e+02]
[1.99999900e+06 1.57480236e+04 1.99999900e+06 1.99999900e+06]
[2.00000000e+04 2.99212598e+03 9.44881890e+02 9.44881890e+02]]
In short, they just have q_ij = round(e_ij / s_i + b), so once you only have the quantized value q_ij, your best approximation is to say that q_ij ≈ dequantized_ij / s_i + b, hence dequantized_ij = (q_ij - b) * s_i
As to pytorch - similar functionality is available with torch.quantize_per_channel e.g the following code is doing pretty much the same:
import torch
t = torch.tensor(embedding_matrix, dtype=torch.float32)
zero_point = torch.tensor([b]).repeat(t.shape[0], 1).reshape(-1)
quantized_tensor = torch.quantize_per_channel(t, t.abs().max(axis=1)[0] / (b-1), zero_point, 0, torch.quint8)
print(quantized_tensor)
print(quantized_tensor.int_repr())
Output:
tensor([[2.0000e+04, 2.9921e+03, 9.4488e+02, 9.4488e+02],
[2.0000e+06, 1.5748e+04, 2.0000e+06, 2.0000e+06],
[2.0000e+04, 2.9921e+03, 9.4488e+02, 9.4488e+02]], size=(3, 4),
dtype=torch.quint8, quantization_scheme=torch.per_channel_affine,
scale=tensor([ 157.4803, 15748.0234, 157.4803], dtype=torch.float64),
zero_point=tensor([128, 128, 128]), axis=0)
tensor([[255, 147, 134, 134],
[255, 129, 255, 255],
[255, 147, 134, 134]], dtype=torch.uint8)
If quantized per channel like this in PyTorch, you can only apply .dequantize() on the full tensor rather than on slices, which wouldn't be a good thing for embeddings, but you can do it manually very easily using int_repr, q_per_channel_zero_points, and q_per_channel_scales.
Does this answer your question?
| https://stackoverflow.com/questions/62450062/ |
TypeError: forward() missing 1 required positional argument with Tensorboard PyTorch | I am trying to write my model to tensorboard with the following code:
model = SimpleLSTM(4, HIDDEN_DIM, HIDDEN_LAYERS, 1, BATCH_SIZE, device)
writer = tb.SummaryWriter(log_dir=tb_path)
sample_data = iter(trainloader).next()[0]
writer.add_graph(model, sample_data.to(device))
I get the error: TypeError: forward() missing 1 required positional argument: 'batch_size'
My model looks like this:
class SimpleLSTM(nn.Module):
def __init__(self, input_dims, hidden_units, hidden_layers, out, batch_size, device):
super(SimpleLSTM, self).__init__()
self.input_dims = input_dims
self.hidden_units = hidden_units
self.hidden_layers = hidden_layers
self.batch_size = batch_size
self.device = device
self.lstm = nn.LSTM(self.input_dims, self.hidden_units, self.hidden_layers,
batch_first=True, bidirectional=False)
self.output_layer = nn.Linear(self.hidden_units, out)
def init_hidden(self, batch_size):
hidden = torch.rand(self.hidden_layers, batch_size, self.hidden_units, device=self.device, dtype=torch.float32)
cell = torch.rand(self.hidden_layers, batch_size, self.hidden_units, device=self.device, dtype=torch.float32)
hidden = nn.init.xavier_normal_(hidden)
cell = nn.init.xavier_normal_(cell)
return (hidden, cell)
def forward(self, input, batch_size):
hidden = self.init_hidden(batch_size)  # incomplete batch
lstm_out, (h_n, c_n) = self.lstm(input, hidden)
raw_out = self.output_layer(h_n[-1])
return raw_out
How can I write this model to TensorBoard?
| Your model takes two arguments input and batch_size, but you only provide one argument for add_graph to call your model with.
The inputs (second argument to add_graph) should be a tuple with the input and the batch_size:
writer.add_graph(model, (sample_data.to(device), BATCH_SIZE))
You don't really need to provide the batch size to the forward method, because you can infer it from the input. As your LSTM uses batch_first=True, it means that the input is required to have size [batch_size, seq_len, num_features], therefore the size of the first dimension is the current batch size.
def forward(self, input):
batch_size = input.size(0)
# ...
| https://stackoverflow.com/questions/62453430/ |
Problems using pretrained ResNet50 in PyTorch to solve CIFAR10 Dataset | I got the following error using a pretrained ResNet50 in PyTorch:
RuntimeError
Traceback (most recent call last)
<ipython-input-14-8f0d0641ef12> in <module>()
28 # Update parameters
29 optimizer.zero_grad()
---> 30 loss.backward()
31 optimizer.step()
32
1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in
backward(tensors, grad_tensors, retain_graph, create_graph,
grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Notebook is in this link: https://colab.research.google.com/drive/1k40NNulSIS6ANagopSPBH4Xty_Cw39qC?usp=sharing
| The problem is that you're setting a new attribute model.classifier, while you actually want to replace the current "classifier", i.e., change the model.fc.
It is beyond the scope of your question, but you'll find another problem later on. Your new classifier has a LogSoftmax() module and you're using the nn.CrossEntropyLoss(). As you can see here, you should not do this.
| https://stackoverflow.com/questions/62453752/ |
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select | Can someone kindly help me trace the root of the following error? I don't understand where the switching between GPU and CPU is taking place as from the beginning I have instructed collab to use GPU.
Also following the error stack trace, it points to labels, what could be potentially wrong here?
Thanks in advance!
| I would recommend to take a look at this youtube series to understand how pytorch works.
For your issue specifically I think you'll find your answer in this video
The idea is that you need to specify that you want to place your data and your model on your GPU. Using the method .to(device), device being either cuda if your GPU is available otherwise your cpu, and you need to do the same for your data.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
You also need to do it your data, I assume you have a for loop to iterate over your batch so you could do it like:
for batch in train_loader:
***
x, y = batch[0].to(device), batch[1].to(device)
***
| https://stackoverflow.com/questions/62455254/ |
Is One-Hot Encoding required for using PyTorch's Cross Entropy Loss Function? | For example, if I want to solve the MNIST classification problem, we have 10 output classes. With PyTorch, I would like to use the torch.nn.CrossEntropyLoss function. Do I have to format the targets so that they are one-hot encoded or can I simply use their class labels that come with the dataset?
| nn.CrossEntropyLoss expects integer labels. What it does internally is that it doesn't end up one-hot encoding the class label at all, but uses the label to index into the output probability vector to calculate the loss should you decide to use this class as the final label. This small but important detail makes computing the loss easier and is the equivalent operation to performing one-hot encoding, measuring the output loss per output neuron as every value in the output layer would be zero with the exception of the neuron indexed at the target class. Therefore, there's no need to one-hot encode your data if you have the labels already provided.
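For example, a minimal illustration:
import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)            # raw, unnormalized scores for a batch of 4 over 10 classes
targets = torch.tensor([3, 0, 7, 1])   # plain integer class labels, no one-hot encoding needed
loss = criterion(logits, targets)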
The documentation has some more insight on this: https://pytorch.org/docs/master/generated/torch.nn.CrossEntropyLoss.html. In the documentation you'll see targets, which serve as part of the input parameters. These are your labels, and they are described as class indices of shape (N), where each value lies in the range [0, C-1] (C being the number of classes).
This clearly shows how the input should be shaped and what is expected. If you in fact wanted to one-hot encode your data, you would need to use torch.nn.functional.one_hot. To best replicate what the cross entropy loss is doing under the hood, you'd also need nn.functional.log_softmax as the final output and you'd have to additionally write your own loss layer since none of the PyTorch layers use log softmax inputs and one-hot encoded targets. However, nn.CrossEntropyLoss combines both of these operations together and is preferred if your outputs are simply class labels so there is no need to do the conversion.
| https://stackoverflow.com/questions/62456558/ |
BERT:Question-Answering - Total number of permissible words/tokens for training | Let's say I want to train BERT with 2 sentences (query-answer) pair against a certain binary label (1,0) for the correctness of the answer, will BERT let me use 512 words/tokens each for the query and the answer or together(query+answer combined) they should be 512? [510 upon ignoring the [start] and [sep] token]
Thanks in advance!
| Together. And actually, together they should be 509, since there are two [SEP] tokens: one after the question and another after the answer:
[CLS] q_word1 q_word2 ... [SEP] a_word1 a_word2 ... [SEP]
where q_word refers to words in the question and a_word refers to words in the answer
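For example, a quick check with the bert-base-uncased tokenizer (just for illustration):
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
ids = tokenizer.encode('is this the answer?', 'yes it is', add_special_tokens=True)
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', 'is', 'this', 'the', 'answer', '?', '[SEP]', 'yes', 'it', 'is', '[SEP]']
# 3 special tokens in total, which leaves 509 positions for the question and answer text combined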
| https://stackoverflow.com/questions/62458671/ |
How to improve my image classifier to recognize real world images | I would like to train a hand gesture classifier with pytorch.
The dataset images looks like this.
I tried to use resnet34 and some kind of data augmentation. I got a high accuracy on test set, but low accuracy when trying to recognize my own gesture in the real world. It works fine when the background is white, goes crazy when other things(my face, chair, bed etc.) appeared in the background. Maybe that's because test images have a pure background, so how can I improve my classifier?
Also I want to add a 'non-gesture' category in my claasifier as well. How can I do that?
This is my data augmentation transforms:
transform = torchvision.transforms.Compose([
torchvision.transforms.Grayscale(3),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.RandomRotation(20),
torchvision.transforms.RandomResizedCrop(64, (0.6, 1.2)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
Any advice is appreciated. Thanks!
| Aside from changes to your classifier, you should look at your training data:
Ask yourself:
Is my sample size big enough?
If you have too few images to train on, no data augmentation in the world will make up for that. Aim to acquire a large, heterogeneous dataset with an even label distribution.
Does your training data accurately reflect the circumstances you want to use your classifier in? The images you supplied seem to have a light background; maybe try to get images of hand gestures with different backgrounds.
After that you should take a look at your classifier and improve it. Since you didn't include your model I can't comment on that.
| https://stackoverflow.com/questions/62465193/ |
Shall we lower case input data for (pre) training a BERT uncased model using huggingface? | Shall we lower case input data for (pre) training a BERT uncased model using huggingface? I looked into this response from Thomas Wolf (https://github.com/huggingface/transformers/issues/92#issuecomment-444677920) but not entirely sure if he meant that.
What happens if we lowercase the text ?
| Tokenizer will take care of that.
A simple example:
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', max_length = 10, padding_side = 'right')
input_ids = torch.tensor(tokenizer.encode('this is a cat', add_special_tokens=True, max_length = 10, pad_to_max_length = True)).unsqueeze(0)
print(input_ids)
input_ids = torch.tensor(tokenizer.encode('This is a Cat', add_special_tokens=True, max_length = 10, pad_to_max_length = True)).unsqueeze(0)
print(input_ids)
Out:
tensor([[ 101, 2023, 2003, 1037, 4937, 102, 0, 0, 0, 0]])
tensor([[ 101, 2023, 2003, 1037, 4937, 102, 0, 0, 0, 0]])
But in case of cased,
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', max_length = 10, padding_side = 'right')
input_ids = torch.tensor(tokenizer.encode('this is a cat', add_special_tokens=True, max_length = 10, pad_to_max_length = True)).unsqueeze(0)
print(input_ids)
input_ids = torch.tensor(tokenizer.encode('This is a Cat', add_special_tokens=True, max_length = 10, pad_to_max_length = True)).unsqueeze(0)
print(input_ids)
tensor([[ 101, 1142, 1110, 170, 5855, 102, 0, 0, 0, 0]])
tensor([[ 101, 1188, 1110, 170, 8572, 102, 0, 0, 0, 0]])
| https://stackoverflow.com/questions/62466514/ |
model.parameters() does not produce an iterable of Tensors | I am trying to use torch.nn.utils.clip_grad_norm_() which requires an iterable of Tensors. See below
for epoch in progress_bar(range(num_epochs)):
lstm.train()
outputs = lstm(trainX.to(device))
optimizer.zero_grad()
torch.nn.utils.clip_grad_norm_(lstm.parameters(), 1)
My code errors with:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-168-4cd34e6fd44d> in <module>
28 lstm.train()
29 outputs = lstm(trainX.to(device))
---> 30 torch.nn.utils.clip_grad_norm_(lstm.parameters(), 1)
31
32
/opt/conda/lib/python3.6/site-packages/torch/nn/utils/clip_grad.py in clip_grad_norm_(parameters, max_norm, norm_type)
28 total_norm = max(p.grad.detach().abs().max() for p in parameters)
29 else:
---> 30 total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)
31 clip_coef = max_norm / (total_norm + 1e-6)
32 if clip_coef < 1:
RuntimeError: stack expects a non-empty TensorList
If I example lstm.parameters() I get a list of Parameters, instead of a list of Tensors:
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 1])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048, 512])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048])
<class 'torch.nn.parameter.Parameter'> torch.Size([2048])
<class 'torch.nn.parameter.Parameter'> torch.Size([1, 512])
<class 'torch.nn.parameter.Parameter'> torch.Size([1])
Looking at the first Parameter, it is a list of Tensors:
<class 'torch.Tensor'> torch.Size([1])
<class 'torch.Tensor'> torch.Size([1])
<class 'torch.Tensor'> torch.Size([1])
<class 'torch.Tensor'> torch.Size([1])
<class 'torch.Tensor'> torch.Size([1])
<class 'torch.Tensor'> torch.Size([1])
.
.
.
Does anyone know what is going on here?
| PyTorch's clip_grad_norm, as the name suggests, operates on gradients.
You have to calculate your loss from output, use loss.backward() and perform gradient clipping afterwards.
Also, you should use optimizer.step() after this operation.
Something like this:
for epoch in progress_bar(range(num_epochs)):
lstm.train()
for batch in dataloader:
optimizer.zero_grad()
outputs = lstm(trainX.to(device))
loss = my_loss(outputs, targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(lstm.parameters(), 1)
optimizer.step()
You don't have parameter.grad calculated (it's value is None) and that's the reason of your error.
| https://stackoverflow.com/questions/62466804/ |
Checking the contents of python dataloader | This is probably a simple question, but how see how the contents of this standard data loader looks like:
from torchtext import datasets
import random
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
I can't use .head() and:
print(test_data)
just gives me:
<torchtext.datasets.imdb.IMDB object at 0x7f0b42e8c240>
I'm probably just missing .values or [0] or something similar...
| Datasets are iterables, you can get the first element with next(iter(test_data)).
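Each element of a torchtext dataset is an Example whose attributes correspond to the field names ('text' and 'label' for IMDB), so something like this should let you peek at the contents (a sketch, assuming the standard field names):
example = next(iter(train_data))
print(vars(example).keys())               # dict_keys(['text', 'label'])
print(example.text[:20], example.label)   # first 20 tokens and the sentiment label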
| https://stackoverflow.com/questions/62467108/ |
Understanding nn.Sequential in convolutional layers | I am new to PyTorch/Deep learning and I am trying to understand the use of the following line to define a convolutional layer:
self.layer1 = nn.Sequential(nn.Conv1d(input_dim, n_conv_filters, kernel_size=7, padding=0), nn.ReLU(), nn.MaxPool1d(3))
I understand that that it is creating a 1d convolutional layer to the network with max pooling 3 wide. However, I don't understand the function of the sequential module or RelU. How do these function in creating a layer?
For reference, the rest of the code can be found here: https://github.com/ArdalanM/nlp-benchmarks/blob/master/src/cnn/net.py
| As per the description provided it seems you are in the process of developing a convolutional architecture for a problem (More likely a Computer Vision one as CNNs are usually targeted for solving CV problems).
Now, talking about the code: by using the Sequential module you are telling PyTorch that the contained modules should be applied one after another, in order, and by specifying ReLU you are bringing the concept of non-linearity into the picture (ReLU is one of the most widely used activation functions in deep learning). Non-linearity helps CNNs generalize to complex decision boundaries and ultimately helps them perform better.
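For example, a small sketch with made-up sizes (assuming input_dim=16 channels and n_conv_filters=32) shows that the modules inside nn.Sequential are simply applied one after another, with ReLU providing the non-linearity between the convolution and the pooling:
import torch
import torch.nn as nn
layer1 = nn.Sequential(nn.Conv1d(16, 32, kernel_size=7, padding=0), nn.ReLU(), nn.MaxPool1d(3))
x = torch.randn(8, 16, 100)   # (batch, channels, sequence length)
out = layer1(x)               # conv -> relu -> max-pool, applied in that order
print(out.shape)              # torch.Size([8, 32, 31])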
PS: I recommend reviewing https://towardsdatascience.com/convolutional-neural-network-for-image-classification-with-implementation-on-python-using-pytorch-7b88342c9ca9 to get a better idea from a coder's perspective.
| https://stackoverflow.com/questions/62468497/ |
AttributeError: module 'torch.optim' has no attribute 'RMSProp' | Getting the following error when trying to use the RMSProp Optimizer with PyTorch:
AttributeError: module 'torch.optim' has no attribute 'RMSProp'
Code:
import torch as T
import torch.nn as nn
import torch.optim as optim
class DeepQNetwork(nn.Module):
def __init__(self, alpha, ...):
super(DeepQNetwork, self).__init__()
...
self.optimizer = optim.RMSProp(self.parameters(), lr=alpha)
...
PyTorch version is 1.5.1 with Python version 3.6. There's a documentation for torch.optim and its optimizers including RMSProp, but PyCharm only suggests Adam and SGD and it really seems like all other optimizers are missing.
Does anyone have an idea? I did not find a single thing on the internet and it starts driving me crazy.
Suggesstion from PyCharm
| RMSprop (as seen in the documentation) instead of RMSProp. So, it's just a typo.
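So in your DeepQNetwork the line simply becomes:
self.optimizer = optim.RMSprop(self.parameters(), lr=alpha)  # note the lowercase "prop"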
| https://stackoverflow.com/questions/62471195/ |
With the HuggingFace transformer, how can I return multiple samples when generating text? | I'm going off of https://github.com/cortexlabs/cortex/blob/master/examples/pytorch/text-generator/predictor.py
But if I pass num_samples=5, I get:
generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 5 and 1 in dimension 0
the code is:
def sample_sequence(
model,
length,
context,
num_samples=1,
temperature=1,
top_k=0,
top_p=0.9,
repetition_penalty=1.0,
device="cpu",
):
context = torch.tensor(context, dtype=torch.long, device=device)
context = context.unsqueeze(0).repeat(num_samples, 1)
print('context_size', context.shape)
generated = context
print('context', context)
with torch.no_grad():
for _ in trange(length):
inputs = {"input_ids": generated}
outputs = model(
**inputs
) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states)
next_token_logits = outputs[0][0, -1, :] / (temperature if temperature > 0 else 1.0)
# reptition penalty from CTRL (https://arxiv.org/abs/1909.05858)
for _ in set(generated.view(-1).tolist()):
next_token_logits[_] /= repetition_penalty
filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
if temperature == 0: # greedy sampling:
next_token = torch.argmax(filtered_logits).unsqueeze(0)
else:
next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)
generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
return generated
| As far as I can see this code doesn't provide multiple samples, but you can achieve it with some adjustments.
This line already uses multinomial but returns only one sample:
next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)
change it to:
next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=num_samples)
Now you also need to change the result construction. This line concatenates the next_token with the sentence. You now get num_samples next_tokens and you need to unsqueeze all of them:
generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
change it to:
generated = torch.cat((generated, next_token.unsqueeze(1)), dim=1)
The whole function should look like this now:
def sample_sequence(
model,
length,
context,
num_samples=1,
temperature=1,
top_k=0,
top_p=0.9,
repetition_penalty=1.0,
device="cpu",
):
context = torch.tensor(context, dtype=torch.long, device=device)
context = context.unsqueeze(0).repeat(num_samples, 1)
generated = context
with torch.no_grad():
for _ in trange(length):
inputs = {"input_ids": generated}
outputs = model(
**inputs
) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states)
next_token_logits = outputs[0][0, -1, :] / (temperature if temperature > 0 else 1.0)
# reptition penalty from CTRL (https://arxiv.org/abs/1909.05858)
for _ in set(generated.view(-1).tolist()):
next_token_logits[_] /= repetition_penalty
filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
if temperature == 0: # greedy sampling:
next_token = torch.argmax(filtered_logits).unsqueeze(0)
else:
next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=num_samples)
generated = torch.cat((generated, next_token.unsqueeze(1)), dim=1)
return generated
Last but not least you have to change your tokenizer.decode call to tokenizer.batch_decode as the return value contains now multiple samples:
tokenizer.batch_decode(output.tolist(), clean_up_tokenization_spaces=True, skip_special_tokens=True)
Something you have to think about by yourself is what you want to do when there is no valid next_token. Currently you will receive an error message like:
RuntimeError: invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)
Another thing you have to think about is whether their code is even correct. During the few tests I have conducted, it felt like the quality of the created sentences decreased with an increasing num_samples (i.e., maybe the quality is better when you use a simple loop to call sample_sequence multiple times?). I haven't worked with GPT2 yet and can't help you here.
| https://stackoverflow.com/questions/62472438/ |