instruction (stringlengths 13–150) | input (stringlengths 36–29.1k) | output (stringlengths 31–29.5k) | source (stringlengths 45–45) |
---|---|---|---|
Copying model output to a torch.Tensor where requires_grad is True | I have a model that outputs a sequence of vectors for each element in the batch, e.g., [Batch size, Sequence Length, Hidden size]. Then, I want to select a variable number of vectors for each element in the batch, and copy these vectors to a tensor where requires_grad = True. A sample code is below:
import torch
from torch import nn
from typing import List
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.fc = nn.Linear(8,8)
def forward(self, x: torch.Tensor, indices: List[torch.Tensor]):
# Example indices: [torch.tensor([0,1]), torch.tensor([2,3,4])]
out = self.fc(x)
batch_size, _, hidden_size = out.size()
max_num_hidden_states = max([ind.size(0) for ind in indices])
selected_hidden_states = torch.zeros(batch_size, max_num_hidden_states, hidden_size, requires_grad=True)
for i in range(batch_size):
selected_hidden_states.data[i, :indices[i].size(0)] = out[i, indices[i]]
return selected_hidden_states
model = MyModel()
with torch.no_grad():
output = model(torch.rand(2, 5, 8), [torch.tensor([0,1]), torch.tensor([2,3,4])])
The questions I have w.r.t. this are:
If I train such model, would the gradients be backpropagated in the rest of the model parameters?
Why does output.requires_grad = True, when I explicitly state torch.no_grad()?
The way I'm doing this (which it doesn't seem to work as expected as of now) seems too hacky and wrong. What is the proper way to achieve what I want?
I'm aware of this answer, which seems to approve of my way of doing it (at least it looks that way), but it still seems hacky to me.
Cheers!
| That linked answer is a copy-paste from the PyTorch forum; it is from a time long gone and was answering a different question.
No. Between creating a new tensor that requires grad and using .data (which you should never do these days), you have created a new leaf tensor, so gradients will accumulate in its .grad rather than flow back into the rest of the model's parameters.
Because you requested it. no_grad signals that you do not need the grad to be computed; it does not include any guarantee about the requires_grad attribute of the result.
If the utility function does not work for you, dropping the requires_grad=True and the .data should do the trick.
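A minimal sketch of that fix (same MyModel as above): allocate the buffer from out so it stays on the autograd graph, and index-assign directly without .data:
def forward(self, x, indices):
    out = self.fc(x)
    batch_size, _, hidden_size = out.size()
    max_num = max(ind.size(0) for ind in indices)
    # new_zeros keeps dtype/device, and the in-place writes stay on the graph
    selected = out.new_zeros(batch_size, max_num, hidden_size)
    for i in range(batch_size):
        selected[i, :indices[i].size(0)] = out[i, indices[i]]
    return selected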
| https://stackoverflow.com/questions/68745819/ |
What is the mechanism of "torch.Tensor in torch.Tensor" in python and why is there such a confusing phenomenon? | environment:
google colab
Python 3.7.11
torch 1.9.0+cu102
code and output
import torch
b = torch.tensor([[1,1,1],[4,5,6]])
print(b.T)
print(torch.tensor([1,4]) in b.T) #
print(torch.tensor([2,1]) in b.T) #
print(torch.tensor([1,2]) in b.T) # Not as expected
print(torch.tensor([2,5]) in b.T) # Not as expected
----------------------------------------------------------
tensor([[1, 4],
[1, 5],
[1, 6]])
True
False
True
True
problem
I want to determine whether one tensor is contained in another, but the result above is confusing.
What is the mechanism of in? How should I use it to avoid the above unexpected output?
And is it a problem with torch.Tensor.T? (Even when .T is not used and b is initialised as torch.tensor([[1,4],[1,5],[1,6]]), the output is still not as expected.)
| The source code is as follows (edit: source code snippet found by @Rune):
def __contains__(self, element):
r"""Check if `element` is present in tensor
Args:
element (Tensor or scalar): element to be checked
for presence in current tensor
"""
if has_torch_function_unary(self):
return handle_torch_function(Tensor.__contains__, (self,), self, element)
if isinstance(element, (torch.Tensor, Number)):
# type hint doesn't understand the __contains__ result array
return (element == self).any().item() # type: ignore[union-attr]
raise RuntimeError(
"Tensor.__contains__ only supports Tensor or scalar, but you passed in a %s." %
type(element)
)
The __contains__ (used by the 'in' syntax: x in b) operator is equivalent to applying torch.any on the x == b boolean condition:
>>> b = tensor([[1, 1, 1],
[4, 5, 6]])
>>> check_in = lambda x: torch.any(x == b.T)
Then
>>> check_in(torch.tensor([1,4]))
tensor(True)
>>> check_in(torch.tensor([2,1]))
tensor(False)
>>> check_in(torch.tensor([1,2]))
tensor(True)
>>> check_in(torch.tensor([2,5]))
tensor(True)
What matters is whether any single element of x matches the element at the same position in some row — not an exact match of a whole row.
.T reverses the order of the dimensions: it is equivalent to b.permute(1, 0) and has no effect on the results. The only constraint when using in is that x's length must match b.shape[1] (or b.shape[0] if you're working with b.T).
>>> check_in = lambda x: torch.any(x == b)
>>> check_in(torch.tensor([1,1,1]))
tensor(True)
>>> check_in(torch.tensor([5,4,2]))
tensor(False)
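As a hedged sketch (not part of the original answer): if you actually want whole-row membership, require all elements of some row to match simultaneously:
row_in = lambda x, m: (x == m).all(dim=1).any().item()
row_in(torch.tensor([1, 4]), b.T)  # True: [1, 4] is a row of b.T
row_in(torch.tensor([1, 2]), b.T)  # False: no row equals [1, 2]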
| https://stackoverflow.com/questions/68752585/ |
Pytorch Problem: My Jupyter notebook gets stuck when num_workers > 0 | This is a snippet of my code in PyTorch; my Jupyter notebook gets stuck when I use num_workers > 0. I have spent a lot of time on this problem without finding an answer. I do not have a GPU and I work only with a CPU.
class IndexedDataset(Dataset):
def __init__(self,data,targets, test=False):
self.dataset = data
if not test:
self.labels = targets.numpy()
self.mask = np.concatenate((np.zeros(NUM_LABELED), np.ones(NUM_UNLABELED)))
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
image = self.dataset[idx]
return image, self.labels[idx]
def display(self, idx):
plt.imshow(self.dataset[idx], cmap='gray')
plt.show()
train_set = IndexedDataset(train_data, train_target, test = False)
test_set = IndexedDataset(test_data, test_target, test = True)
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, num_workers=2)
test_loader = DataLoader(test_set, batch_size=BATCH_SIZE, num_workers=2)
Any help, appreciated.
| When num_workers is greater than 0, PyTorch uses multiple processes for data loading.
Jupyter notebooks have known issues with multiprocessing.
One way to resolve this is not to use Jupyter notebooks - just write a normal .py file and run it via the command line.
Or try using what's suggested here: Jupyter notebook never finishes processing using multiprocessing (Python 3).
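A minimal sketch of the .py-file route, assuming the same dataset objects as above; on platforms that spawn workers, the entry point must be guarded:
if __name__ == "__main__":
    train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, num_workers=2)
    for images, labels in train_loader:
        pass  # training step goes here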
| https://stackoverflow.com/questions/68756034/ |
Is there a way to use a pre-trained transformers model without the configuration file? | I would like to fine-tune a pre-trained transformers model on Question Answering. The model was pre-trained on large engineering & science related corpora.
I have been provided a "checkpoint.pt" file containing the weights of the model. They have also provided me with a "bert_config.json" file but I am not sure if this is the correct configuration file.
from transformers import AutoModel, AutoTokenizer, AutoConfig
MODEL_PATH = "./checkpoint.pt"
config = AutoConfig.from_pretrained("./bert_config.json")
model = AutoModel.from_pretrained(MODEL_PATH, config=config)
The reason I believe that bert_config.json doesn't match "./checkpoint.pt" file is that, when I load the model with the code above, I get the error that goes as below.
Some weights of the model checkpoint at ./aerobert/phase2_ckpt_4302592.pt were not used when initializing BertModel: ['files', 'optimizer', 'model', 'master params']
This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertModel were not initialized from the model checkpoint at ./aerobert/phase2_ckpt_4302592.pt and are newly initialized: ['encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.21.attention.self.value.bias', 'encoder.layer.11.attention.self.value.bias', ............
If I am correct in assuming that "bert_config.json" is not the correct one, is there a way to load this model correctly without the config.json file?
Is there a way to see the model architecture from the saved weights of checkpoint.pt file?
| This is a warning message, not an error.
It means that the checkpoint was saved from a model trained on some task (such as Question Answering, MLM, etc.). If your fine-tuning task uses exactly the same architecture as that pretrained task, this warning IS NOT expected; otherwise it IS expected, because some parts of the pretrained model (e.g. a pooler or task head) will not be used during fine-tuning.
But this message doesn't mean that bert_config.json isn't the right one. You can test it on HuggingFace's official Colab notebook.
You can find more information in this issue.
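As a hedged sketch for the second question (an assumption on my part, not part of the original answer): the unused keys ['files', 'optimizer', 'model', 'master params'] suggest a training checkpoint that wraps the weights, so inspecting it directly reveals the layer names and hence the architecture:
import torch

ckpt = torch.load("./checkpoint.pt", map_location="cpu")
print(ckpt.keys())  # e.g. dict_keys(['model', 'optimizer', ...])
state_dict = ckpt.get("model", ckpt)  # unwrap if it is a training checkpoint
print(list(state_dict.keys())[:10])   # layer names hint at the architecture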
| https://stackoverflow.com/questions/68757944/ |
Print input / output / grad / loss at every step/epoch when training Transformers HuggingFace model | I'm working on HuggingFace Transformers and using toy example from here:
https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer
What I actually need: ability to print input, output, grad and loss at every step.
It is trivial using Pytorch training loop, but it is not obvious using HuggingFace Trainer.
At the current moment I have next idea: create a CustomCallback like this:
class MyCallback(TrainerCallback):
"A callback that prints a grad at every step"
def on_step_begin(self, args, state, control, **kwargs):
print("next step")
print(kwargs['model'].classifier.out_proj.weight.grad.norm())
args = TrainingArguments(
output_dir='test_dir',
overwrite_output_dir=True,
num_train_epochs=1,
logging_steps=100,
report_to="none",
fp16=True,
disable_tqdm=True,
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
callbacks=[MyCallback],
)
trainer.train()
This way I can print grad and weights for any model layer.
But I still can't figure out how to print input/output (for example, I want to check them on nan) and loss?
P.S. I also read something about forward_hook but still can't find good code examples for it.
| While using hooks and custom callbacks is the right way to solve the problem, I found a better solution - using the built-in utility for finding nan/Inf in losses / weights / inputs / outputs:
https://huggingface.co/transformers/internal/trainer_utils.html#transformers.debug_utils.DebugUnderflowOverflow
Since version 4.6.0, transformers has had this option.
You can use it manually in the forward function, or just pass an additional option to TrainingArguments like this:
args = TrainingArguments(
output_dir='test_dir',
overwrite_output_dir=True,
num_train_epochs=1,
logging_steps=100,
report_to="none",
fp16=True,
disable_tqdm=True,
debug="debug underflow_overflow"
)
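For the manual route mentioned above, a minimal sketch following the linked docs: wrap the model once and keep the reference alive; the utility then traces each forward call and reports the frames where inf/nan first appears:
from transformers.debug_utils import DebugUnderflowOverflow

debug_overflow = DebugUnderflowOverflow(model)
outputs = model(**batch)  # an inf/nan now triggers a detailed frame report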
| https://stackoverflow.com/questions/68759885/ |
Pytorch Scheduler: how to get decreasing LR epochs | I'm training a network in pytorch and using ReduceLROnPlateau as scheduler.
I set verbose=True in the parameters and my scheduler prints something like:
Epoch 159: reducing learning rate to 6.0000e-04.
Epoch 169: reducing learning rate to 3.0000e-04.
Epoch 178: reducing learning rate to 1.5000e-04.
Epoch 187: reducing learning rate to 7.5000e-05.
I would like to get the epochs in some way, in order to obtain a list with all the epochs in which the scheduler reduced the learning rate.
Something like: lr_decrease_epochs = ['159', '169', '178', '187']
Which is the simplest way to do that ?
| I think the scheduler doesn't keep track of this (at least I didn't see anything like it in the source code), but you can just keep track of it in your training loop.
Whenever the learning rate changes (e.g. read it from optimizer.param_groups[0]['lr'] each epoch; ReduceLROnPlateau has no get_lr()), you simply record the current epoch.
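A minimal sketch of that bookkeeping, assuming an existing optimizer and a ReduceLROnPlateau scheduler inside a standard training loop (train_one_epoch is a hypothetical helper):
lr_decrease_epochs = []
prev_lr = optimizer.param_groups[0]["lr"]
for epoch in range(num_epochs):
    val_loss = train_one_epoch()  # runs the epoch and returns validation loss
    scheduler.step(val_loss)      # ReduceLROnPlateau steps on a metric
    current_lr = optimizer.param_groups[0]["lr"]
    if current_lr < prev_lr:
        lr_decrease_epochs.append(epoch)
    prev_lr = current_lr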
| https://stackoverflow.com/questions/68761372/ |
Extremely low accuracy for IRIS dataset using CNN | Being a beginner, I am trying to implement my CNN on IRIS dataset with only 2 labels considered:
Iris-setosa: 0
Iris-versicolor: 1
I am using 90% of the data for training and 10% for testing, with a 1D CNN, Adam optimization, and a learning rate of 0.001. The accuracy achieved is around 40-50%, and it changes with every execution. Please suggest what should be done.
DATA LOADING TO DATALOADERS:
#Training data
class IrisDataset(T.utils.data.Dataset):
def __init__(self, Iris):
sc = StandardScaler()
X_tr = sc.fit_transform(trainX)
Y_tr = trainY
self.X_tr = torch.tensor(X_tr, dtype = torch.float32)
self.Y_tr = torch.tensor(Y_tr, dtype = torch.float32)
def __len__(self):
return len(self.Y_tr)
def __getitem__(self, idx):
return self.X_tr[idx], self.Y_tr[idx]
train_ds = IrisDataset(Iris)
bat_size = 1
# Leaving only labels 0 and 1
idx = np.append(np.where(train_ds.Y_tr == 0)[0],
np.where(train_ds.Y_tr == 1)[0])
train_ds.X_tr = train_ds.X_tr[idx]
train_ds.Y_tr = train_ds.Y_tr[idx]
#len(train_ds)
train_ldr = T.utils.data.DataLoader(train_ds,
batch_size=bat_size, shuffle=True)
batch = next(iter(train_ldr))
# and in the same way test data
#NETWORK CLASS
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv1d(1, 6, kernel_size=1)
self.conv2 = nn.Conv1d(6, 16, kernel_size=1)
self.dropout = nn.Dropout2d()
self.fc1 = nn.Linear(64, 16)
self.fc2 = nn.Linear(16, 1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = self.dropout(x)
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return T.cat((x, 1 - x), -1)
# MODEL TRAINING
model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()
epochs = 2
loss_list = []
model.train()
for epoch in range(epochs):
total_loss = []
for X_tr, Y_tr in train_ldr:
X_tr = X_tr.unsqueeze(0)
optimizer.zero_grad()
output = model(X_tr)
pred = output.argmax(dim=1, keepdim=True)
Y_tr = torch.tensor(Y_tr, dtype=torch.long)
loss = loss_func(output, Y_tr.squeeze(1))
# Backward pass
loss.backward()
# Optimize the weights
optimizer.step()
total_loss.append(loss.item())
loss_list.append(sum(total_loss)/len(total_loss))
print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
100. * (epoch + 1) / epochs, loss_list[-1]))
| The mistake is here
def forward(self, x):
x = F.relu(self.conv1(x))
.
.
x = self.fc2(x)
return T.cat((x, 1 - x), -1)
'''
The thing is that the output from the dense layer is not a probability, and hence
subtracting it from 1 also doesn't make any sense. It will become a probability if
you apply a sigmoid activation after it.
'''
def forward(self, x):
x = F.relu(self.conv1(x))
.
.
x = self.fc2(x)
x = torch.sigmoid(x)
return T.cat((x, 1 - x), -1)
But I would recommend that you change your loss function to BCE as below. This is less prone to error and gives the same results as above. Also, you can read the docs for NLL, which say: "Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer." So you can either add LogSoftmax or use CrossEntropy.
# LogSoftmax
def forward(self, x):
x = F.relu(self.conv1(x))
.
.
x = self.fc2(x)
x = torch.nn.functional.log_softmax(x, dim=1)
return x
loss_func = nn.NLLLoss()
# BCE
def forward(self, x):
x = F.relu(self.conv1(x))
.
.
x = self.fc2(x)
return x
loss_func = torch.nn.BCEWithLogitsLoss()
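A hedged usage note on top of that (my assumption, not from the original answer): with BCEWithLogitsLoss the target must be a float tensor shaped like the single-logit output, so the training loop changes accordingly:
output = model(X_tr)                          # shape (batch, 1), raw logits
loss = loss_func(output, Y_tr.float().view(-1, 1))
pred = (torch.sigmoid(output) > 0.5).long()   # threshold for predictions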
| https://stackoverflow.com/questions/68761686/ |
Issue with viewing data transformation with PyTorch | I am currently trying to process data for a simple network. This is the code I entered:
Screenshot here
I keep getting this error message but can't find any syntax problems, or anyone else with this issue. I'm guessing it's something to do with my venv, because I've seen tutorials of people with no issues running that exact code. It's possible I haven't imported a package into my IDE; I am using Anaconda and PyCharm, if that helps.
Anyway, this is the error message I keep getting.
Error Message
| You need to use transforms.ToTensor() instead of transforms.ToTensor when passing to transforms.Compose.
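A minimal sketch of the fix (note the parentheses: ToTensor is a class and must be instantiated before being passed to Compose):
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),                # correct: an instance, not the class
    transforms.Normalize((0.5,), (0.5,)),
])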
| https://stackoverflow.com/questions/68765270/ |
Usage of U2Net Model in android | I converted the original U2Net model weight file u2net.pth to TensorFlow Lite by following these instructions, and it converted successfully.
However, I'm having trouble using it on Android with TensorFlow Lite. I was not able to add the image-segmenter metadata to this model with the tflite-support script, so I changed the model to return only 1 output, d0 (which is a combination of all, i.e. d1, d2, ..., d7). Then the metadata was added successfully and I was able to use the model, but it's not giving any output and returns the same image.
So any help would be much appreciated in letting me know where I messed up, and how I can use this U2Net model properly in TensorFlow Lite on Android. Thanks in advance.
| I will write a long answer here. Looking at the U2Net GitHub repo, it leaves you with the effort of examining the pre- and post-processing steps yourself so you can apply the same ones inside the Android project.
First of all preprocessing:
In the u2net_test.py file you can see at this line that all the images are preprocessed with function ToTensorLab(flag=0). Navigating to this you see that with flag=0 the preprocessing is this:
else: # with rgb color (flag = 0)
tmpImg = np.zeros((image.shape[0],image.shape[1],3))
image = image/np.max(image)
if image.shape[2]==1:
tmpImg[:,:,0] = (image[:,:,0]-0.485)/0.229
tmpImg[:,:,1] = (image[:,:,0]-0.485)/0.229
tmpImg[:,:,2] = (image[:,:,0]-0.485)/0.229
else:
tmpImg[:,:,0] = (image[:,:,0]-0.485)/0.229
tmpImg[:,:,1] = (image[:,:,1]-0.456)/0.224
tmpImg[:,:,2] = (image[:,:,2]-0.406)/0.225
Pay attention to 2 steps.
First every color pixel value is divided by the maximum value of all color pixel values:
image = image/np.max(image)
and
Second at every color pixel value is applied mean and std:
tmpImg[:,:,0] = (image[:,:,0]-0.485)/0.229
tmpImg[:,:,1] = (image[:,:,1]-0.456)/0.224
tmpImg[:,:,2] = (image[:,:,2]-0.406)/0.225
So basically in Kotlin if you have a bitmap you have to do something like:
fun bitmapToFloatArray(bitmap: Bitmap):
Array<Array<Array<FloatArray>>> {
val width: Int = bitmap.width
val height: Int = bitmap.height
val intValues = IntArray(width * height)
bitmap.getPixels(intValues, 0, width, 0, 0, width, height)
// Create an array to find the maximum value
val fourDimensionalArray = Array(1) {
Array(320) {
Array(320) {
FloatArray(3)
}
}
}
// https://github.com/xuebinqin/U-2-Net/blob/f2b8e4ac1c4fbe90daba8707bca051a0ec830bf6/data_loader.py#L204
// note: "0 until width" covers the full range; "until width - 1" would skip the last index
for (i in 0 until width) {
for (j in 0 until height) {
val pixelValue: Int = intValues[i * width + j]
fourDimensionalArray[0][i][j][0] =
Color.red(pixelValue)
.toFloat()
fourDimensionalArray[0][i][j][1] =
Color.green(pixelValue)
.toFloat()
fourDimensionalArray[0][i][j][2] =
Color.blue(pixelValue).toFloat()
}
}
// Convert multidimensional array to 1D
val oneDFloatArray = ArrayList<Float>()
for (m in fourDimensionalArray[0].indices) {
for (x in fourDimensionalArray[0][0].indices) {
for (y in fourDimensionalArray[0][0][0].indices) {
oneDFloatArray.add(fourDimensionalArray[0][m][x][y])
}
}
}
val maxValue: Float = oneDFloatArray.maxOrNull() ?: 0f
//val minValue: Float = oneDFloatArray.minOrNull() ?: 0f
// Final array that is going to be used with interpreter
val finalFourDimensionalArray = Array(1) {
Array(320) {
Array(320) {
FloatArray(3)
}
}
}
for (i in 0 until width) {
for (j in 0 until height) {
val pixelValue: Int = intValues[i * width + j]
finalFourDimensionalArray[0][i][j][0] =
((Color.red(pixelValue).toFloat() / maxValue) - 0.485f) / 0.229f
finalFourDimensionalArray[0][i][j][1] =
((Color.green(pixelValue).toFloat() / maxValue) - 0.456f) / 0.224f
finalFourDimensionalArray[0][i][j][2] =
((Color.blue(pixelValue).toFloat() / maxValue) - 0.406f) / 0.225f
}
}
return finalFourDimensionalArray
}
Then this array is fed inside the interpreter and as your model has multiple outputs we are using runForMultipleInputsOutputs:
// Convert Bitmap to Float array
val inputStyle = ImageUtils.bitmapToFloatArray(loadedBitmap)
// Create arrays with size 1,320,320,1
val output1 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val output2 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val output3 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val output4 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val output5 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val output6 = Array(1) { Array(CONTENT_IMAGE_SIZE) { Array(CONTENT_IMAGE_SIZE) { FloatArray(1)}}}
val outputs: MutableMap<Int,
Any> = HashMap()
outputs[0] = output1
outputs[1] = output2
outputs[2] = output3
outputs[3] = output4
outputs[4] = output5
outputs[5] = output6
// Runs model inference and gets result.
val array = arrayOf(inputStyle)
interpreterDepth.runForMultipleInputsOutputs(array, outputs)
Then we use the first output of the interpreter, as you can see in the u2net_test.py file. (I have also printed the results of line 112, but it seems to have no effect. You are free to try that with the min and max values of the color pixels.)
So we have the post-processing, as you can see in the save_output function:
// Convert output array to Bitmap
val (finalBitmapGrey, finalBitmapBlack) = ImageUtils.convertArrayToBitmapTensorFlow(
output1, CONTENT_IMAGE_SIZE,
CONTENT_IMAGE_SIZE
)
where the above function will be like:
fun convertArrayToBitmapTensorFlow(
imageArray: Array<Array<Array<FloatArray>>>,
imageWidth: Int,
imageHeight: Int
): Bitmap {
val conf = Bitmap.Config.ARGB_8888 // see other conf types
val grayToneImage = Bitmap.createBitmap(imageWidth, imageHeight, conf)
for (x in imageArray[0].indices) {
for (y in imageArray[0][0].indices) {
val color = Color.rgb(
//
(((imageArray[0][x][y][0]) * 255f).toInt()),
(((imageArray[0][x][y][0]) * 255f).toInt()),
(((imageArray[0][x][y][0]) * 255f).toInt())
)
// this y, x is in the correct order!!!
grayToneImage.setPixel(y, x, color)
}
}
return grayToneImage
}
then you can use this grayscale image however you want.
Due to the multiple preprocessing steps, I used the interpreter directly with no additional libraries. I will try later in the week to see whether you can insert metadata with all these steps, but I doubt it.
If you need some clarification, please do not hesitate to ask me.
Colab notebook link
Happy coding
| https://stackoverflow.com/questions/68768237/ |
PyTorch to onnx and use with opencv-dnn? | I want to run yolov5 with opencv dnn in C++, for this, I have converted PyTorch model to onnx, from this link
But that ONNX model is not working with the OpenCV dnn module.
Any help would be appreciated.
| I have followed below links and have successfully run yolov5 with C++
Link-1
Link-2
| https://stackoverflow.com/questions/68771371/ |
PyTorch 1D CNN Problems | I am having quite a lot of trouble implementing a 1D CNN in PyTorch. The idea is that we are using xTrainVar to predict yTrainVar, and xTestVar to predict yTestVar. This is time series data.
This line:
pred = prod_outputs(train_loader, model)
Creates a problem here:
_, predictions = torch.max(scores, 1)
That results in this error:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Normally I would not dump a complete codebase, but I just cannot see my error so here it is:
#################################
# Load the libraries
#################################
import torch
import torchvision
import torch.nn.functional as F
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch import optim
from torch import nn
from torch.utils.data import DataLoader
from tqdm import tqdm
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
import numpy as np
import torch.utils.data as data_utils
#################################
# Prepare the data
#################################
xTrainVar = [[8.1,4.1,4.1,4.3], [3.9, 3.8, 3.9, 3.7], [3.8, 3.7, 3.8, 3.8], [3.9, 3.8, 4.1, 4.4]]
yTrainVar = [3.9, 3.8, 3.9, 4.4]
xTestVar = [[8.1,4.1,4.1,4.3], [3.9, 3.8, 3.9, 3.7], [3.8, 3.7, 3.8, 3.8], [3.9, 3.8, 4.1, 4.4],[3.9, 3.8, 4.1, 4.4]]
yTestVar = [3.9, 3.8, 3.9, 4.4, 4.4]
# Convert to Tensors
# Prepare training data
train_data = torch.tensor(np.array(xTrainVar, dtype=np.float32))
train_target = torch.tensor(np.array(yTrainVar, dtype=np.float32))
train_data = train_data.unsqueeze(1) # try to get the right shape [4,1,4]
# Reshape the data to be in line with the other shape
new_shape = (len(yTrainVar), 1, 1) # There used to be another ,1 here
train_target = train_target.view(new_shape)
train_tensor = data_utils.TensorDataset(train_data, train_target)
train_loader = data_utils.DataLoader(dataset=train_tensor, batch_size=32,shuffle=False)
# Prepare test data
test_data = torch.tensor(np.array(xTestVar, dtype=np.float32))
test_target = torch.tensor(np.array(yTestVar, dtype=np.float32))
test_data = test_data.unsqueeze(1) # try to get the right shape [5,1,4]
# Reshape the data to be in line with the other shape
new_shape = (len(yTestVar), 1,1)
test_target = test_target.view(new_shape)
test_tensor = data_utils.TensorDataset(test_data, test_target)
test_loader = data_utils.DataLoader(dataset=test_tensor, batch_size=32)
class NN(nn.Module):
def __init__(self, input_size, num_classes):
super(NN, self).__init__()
self.conv1d = nn.Conv1d(in_channels=1, out_channels=4, kernel_size=4, stride=1, padding=0)
self.relu = nn.ReLU(inplace=True)
self.fc1 = nn.Linear(16,50)
self.fc2 = nn.Linear(50,1)
def forward(self, x):
x = self.conv1d(x)
x = self.relu(x)
x = x.view(-1)
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
return x
# Set the hyperparameters
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
input_size = 1*4
num_classes = 1
learning_rate = 0.001
batch_size = 64
num_epochs = 1
model = NN(input_size=input_size, num_classes=num_classes).to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
#################################
# Train model using training data
#################################
for epoch in range(num_epochs):
# Fit the model to the training data
for batch_idx, (data, targets) in enumerate(tqdm(train_loader)):
# Get data to cuda if possible
data = data.to(device=device)
targets = targets.to(device=device)
# forward
scores = model(data)
loss = criterion(scores, targets)
# backward
optimizer.zero_grad()
loss.backward()
# gradient descent or adam step
optimizer.step()
def prod_outputs(loader, model):
model.eval() # Place the model into evaluation mode.
with torch.no_grad():
for x, y in loader:
x = x.to(device=device)
y = y.to(device=device)
scores = model(x)
_, predictions = scores.max(1)
print(predictions)
model.train() # Return the model to training mode once we are done.
return scores # This used to be predictions
#################################
# Evaluate the model
#################################
# This is where I am up to with the work.
pred = prod_outputs(train_loader, model) # Use train dependent variable (x) to generate predictions of independent variable (y)
pred = pred.numpy()
| You need to take a step back and assess the situation:
You have an input shaped (bs, c, w) as (4, 1, 4). So a batch size of 4 and a length of 4 with a single channel. Going through the convolutional layer, this becomes (bs, 4, 1) since the kernel size is (4,), there is essentially a single value per channel.
Right after, you apply a reshape which flattens the whole tensor: this is generally a bad idea, since you are actually flattening the batch elements together. Instead, you should be looking to keep them separated. You can use nn.Flatten for that. For instance, set self.flatten = nn.Flatten() in __init__, then use x = self.flatten(x) instead of x = x.view(-1) in the forward definition.
This being said, flattening except for the first axis (the batch axis) will result in a tensor of shape (4, 4). So, ultimately your first dense layer should have the corresponding number of neurons, i.e. 4. Something like self.fc1 = nn.Linear(4, 50).
Don't take this for granted: I'm explaining what is wrong in your code, not giving you an architecture to stick with. The modifications I'm providing you with will run; there is no guarantee it will train... That is a problem for another day!
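Putting the three points together, a minimal sketch of the corrected module (layer sizes follow the shapes worked out above):
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1d = nn.Conv1d(1, 4, kernel_size=4)
        self.relu = nn.ReLU(inplace=True)
        self.flatten = nn.Flatten()   # flattens everything except the batch axis
        self.fc1 = nn.Linear(4, 50)   # 4 channels x length 1 after the conv
        self.fc2 = nn.Linear(50, 1)

    def forward(self, x):
        x = self.relu(self.conv1d(x))  # (bs, 4, 1)
        x = self.flatten(x)            # (bs, 4)
        x = self.relu(self.fc1(x))
        return self.fc2(x)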
| https://stackoverflow.com/questions/68773826/ |
Meaning of grad_outputs in PyTorch's torch.autograd.grad | I am having trouble understanding the conceptual meaning of the grad_outputs option in torch.autograd.grad.
The documentation says:
grad_outputs should be a sequence of length matching output containing the “vector” in Jacobian-vector product, usually the pre-computed gradients w.r.t. each of the outputs. If an output doesn’t require_grad, then the gradient can be None).
I find this description quite cryptic. What exactly do they mean by Jacobian-vector product? I know what the Jacobian is, but not sure about what product they mean here: element-wise, matrix product, something else? I can't tell from my example below.
And why is "vector" in quotes? Indeed, in the example below I get an error when grad_outputs is a vector, but not when it is a matrix.
>>> x = torch.tensor([1.,2.,3.,4.], requires_grad=True)
>>> y = torch.outer(x, x)
Why do we observe the following output; how was it computed?
>>> y
tensor([[ 1., 2., 3., 4.],
[ 2., 4., 6., 8.],
[ 3., 6., 9., 12.],
[ 4., 8., 12., 16.]], grad_fn=<MulBackward0>)
>>> torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))
(tensor([20., 20., 20., 20.]),)
However, why this error?
>>> torch.autograd.grad(y, x, grad_outputs=torch.ones_like(x))
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([4]) and output[0] has a shape of torch.Size([4, 4]).
| If we take your example we have function f which takes as input x shaped (n,) and outputs y = f(x) shaped (n, n). The input is described as column vector [x_i]_i for i ∈ [1, n], and f(x) is defined as matrix [y_jk]_jk = [x_j*x_k]_jk for j, k ∈ [1, n]².
It is often useful to compute the gradient of the output with respect to the input (or sometimes w.r.t the parameters of f, there are none here). In the more general case though, we are looking to compute dL/dx and not just dy/dx, where dL/dx is the partial derivative of L, computed from y, w.r.t. x.
The computation graph looks like:
x.grad = dL/dx <------- dL/dy y.grad
dy/dx
x -------> y = x*xT
Then, if we look at dL/dx, which is, via the chain rule equal to dL/dy*dy/dx. We have, looking at the interface of torch.autograd.grad, the following correspondences:
outputs <-> y,
inputs <-> x, and
grad_outputs <-> dL/dy.
Looking at the shapes: dL/dx should have the same shape as x (dL/dx can be referred to as the 'gradient' of x), while dy/dx, the Jacobian matrix, would be 3-dimensional. On the other hand dL/dy, which is the incoming gradient, should have the same shape as the output, i.e., y's shape.
We want to compute dL/dx = dL/dy*dy/dx. If we look more closely, we have
dy/dx = [dy_jk/dx_i]_ijk for i, j, k ∈ [1, n]³
Therefore,
dL/dx = [dL/d_x_i]_i, i ∈ [1,n]
= [sum(dL/dy_jk * d(y_jk)/dx_i over j, k ∈ [1, n]²]_i, i ∈ [1,n]
Back to your example, it means that for a given i ∈ [1, n]: dL/dx_i = sum(dy_jk/dx_i) over j, k ∈ [1,n]². And dy_jk/dx_i = d(x_j*x_k)/dx_i will equal x_j if i = k, x_k if i = j, and 2*x_i if i = j = k (because of the squared x_i). This being said, matrix y is symmetric... so the result comes down to 2*sum(x_j) over j ∈ [1, n].
This means dL/dx is the column vector [2*sum(x)]_i for i ∈ [1, n].
>>> 2*x.sum()*torch.ones_like(x)
tensor([20., 20., 20., 20.])
Stepping back look at this other graph example, here adding an additional operation after y:
x -------> y = x*xT --------> z = y²
If you look at the backward pass on this graph, you have:
dL/dx <------- dL/dy <-------- dL/dz
dy/dx dz/dy
x -------> y = x*xT --------> z = y²
With dL/dx = dL/dy*dy/dx = dL/dz*dz/dy*dy/dx which is in practice computed in two sequential steps: dL/dy = dL/dz*dz/dy, then dL/dx = dL/dy*dy/dx.
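A small verification sketch of that two-step chain rule, using autograd itself (same x and y as in the question, with z = y**2):
x = torch.tensor([1., 2., 3., 4.], requires_grad=True)
y = torch.outer(x, x)
z = y**2
dL_dy, = torch.autograd.grad(z, y, torch.ones_like(z), retain_graph=True)
dL_dx, = torch.autograd.grad(y, x, grad_outputs=dL_dy)
# dL_dx matches torch.autograd.grad(z, x, torch.ones_like(z)) computed in one go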
| https://stackoverflow.com/questions/68778401/ |
Class label order in YoloV5 using Ultralytics code | I was trying to train a custom object detector using the Ultralytics open-source research. I encountered this problem at the step where we have to generate a .yaml file here. What should the ordering of those label names be? It is not alphabetical, as we do in TensorFlow. I don't want my model to mislabel during inference.
| The order is arbitrary. You can choose whatever you want. The relevant part is that, in the next step, you must provide a .txt per image, where:
Each row is class x_center y_center width height format.
In this case, class will be an integer between 0 and N-1, where N is the number of classes that you defined in the .yaml file.
So, if in the .yaml file you have:
nc: 3
names: ['cat', 'dog', 'car']
and in my_image.txt you have:
0 0.156 0.321 0.254 0.198
2 0.574 0.687 0.115 0.301
Then, it means that, in this image, you have one cat and one car.
| https://stackoverflow.com/questions/68782126/ |
PyTorch: Can I group batches by length? | I am working on an ASR project, where I use a model from HuggingFace (wav2vec2). My goal for now is to move the training process to PyTorch, so I am trying to recreate everything that HuggingFace’s Trainer() class offers.
One of these utilities is the ability to group batches by length and combine this with dynamic padding (via a data collator). To be honest however, I am not sure how to even begin this in PyTorch.
The inputs in my case are 1-D arrays that represent the raw waveform of a .wav file. So before training I need to ensure that arrays of similar size will be batched together. Do I need to create a custom DataLoader class and alter it, so that every time it gives me batches whose samples have lengths as close as possible?
An idea I had was to somehow sort the data from shortest to longest (or the opposite) and each time extract batch_size samples from them. This way, the first batch will consist of samples with the biggest lengths, the second batch will have the second-biggest lengths, etc.
Nevertheless, I am not sure how to approach this implementation. Any advice will be greatly appreciated.
Thanks in advance.
| One possible way of going about this is by using a batch sampler and implementing a collate_fn for your dataloader that will perform the dynamic padding on your batch elements.
Take this basic dataset:
class DS(Dataset):
def __init__(self, files):
super().__init__()
self.len = len(files)
self.files = files
def __getitem__(self, index):
return self.files[index]
def __len__(self):
return self.len
Initialized with some random data:
>>> file_len = np.random.randint(0, 100, (16*6))
>>> files = [np.random.rand(s) for s in file_len]
>>> ds = DS(files)
Start by defining your batch sampler, this is essentially an iterable returning batches of indices to be used by the data loader to retrieve the elements from the dataset. As you explained we can just sort the lengths and construct the different batches from this sort:
>>> batch_size = 16
>>> # np.split(arr, k) yields k equal-sized chunks: here 16 chunks of 6
>>> # indices each, which matches the batch sizes printed further below
>>> batches = np.split(file_len.argsort()[::-1], batch_size)
We should have elements that are close to each other in length.
We can implement a collate_fn function to assemble the batch elements and integrate dynamic padding. This is basically putting an additional user-defined layer right between the dataset and the dataloader. The goal is to find the longest element in the batch and pad all other elements with the correct number of 0s:
def collate_fn(batch):
longest = max([len(x) for x in batch])
s = np.stack([np.pad(x, (0, longest - len(x))) for x in batch])
return torch.from_numpy(s)
Then you can initialize a data loader:
>>> dl = DataLoader(dataset=ds, batch_sampler=batches, collate_fn=collate_fn)
And try iterating, as you can see we get batches of decreasing lengths:
>>> for x in dl:
... print(x.shape)
torch.Size([6, 99])
torch.Size([6, 93])
torch.Size([6, 83])
torch.Size([6, 76])
torch.Size([6, 71])
torch.Size([6, 66])
torch.Size([6, 57])
...
This method has some flaws though, for instance, the distribution of elements will always be the same. This means you will always get the same batches in the same order of appearance. This is because this method is based on the sorting of elements in the dataset based on their length, there is no variability in the creation of the batches. You can reduce this effect by shuffling the batches (e.g. by wrapping batches inside a RandomSampler). However, as I said, the batches' content will remain the same throughout the training which might lead to some problems.
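A hedged sketch of that shuffling idea (my own addition, not part of the original method): reshuffle the order of the pre-built batches each epoch, so their order of appearance varies even though each batch's contents stay fixed:
for epoch in range(num_epochs):
    np.random.shuffle(batches)  # the list of index arrays built earlier
    dl = DataLoader(dataset=ds, batch_sampler=batches, collate_fn=collate_fn)
    for x in dl:
        pass  # training step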
Do note that the use of batch_sampler in your data loader is mutually exclusive with the options batch_size, shuffle, and sampler!
| https://stackoverflow.com/questions/68782144/ |
Unable to understand unet architecture | I was trying to replicate the UNet architecture but was having problems understanding why there is a 16 written in the encoder part here:
Is this part of the architecture or should we choose it randomly? I understood it is the number of output channels of the conv layers, but why 16 and not any other number?
| That's a design decision, so you won't find a clear reason for your question.
Other than that, the numbers in architectures are usually kept as multiples of 2, here 2**4. This is for practical reasons. The usual go-to method when testing variable-sized networks is to vary the number of layers and the number of channels, as well as other network layouts, and figure out which one is best for the task by experimenting.
As for your first question,
should we choose it randomly [...]?
Why would you not stick to the network's specification if you're looking to replicate it in the first place? If you ever find an advantage to putting 8 instead of 16, or even 32 if you have the capacity to do so, then go right ahead!
| https://stackoverflow.com/questions/68788411/ |
Using torch log_prob to calculate the probability of selecting multiple values of the distribution itself | I'm trying to use log_prob to get the probability of selecting a value from a normal distribution.
I get dist from a neural network and action from dist.sample().
In the learning phase, I give 5 tensors to the neural network, and it gives me 5 dists, and from those dists I get 5 actions. The problem is that I want to evaluate each action under its own distribution, but this function gives me the probability of each action under all the distributions. The values on the diagonal of the output matrix are the ones I want, but I wonder if there is an easy way to implement this part?
I use this block of code:
states = T.tensor(state[b], dtype=T.float).to(agent.device)
old_probs = T.tensor(log_prob[b]).to(agent.device)
actions = T.tensor(action[b]).to(agent.device)
values = T.tensor(value[b]).to(agent.device)
dist = actor(states)
new_probs = dist.log_prob(actions)
and the output is
tensor([[-1.1823, -0.9680, -3.6280, -1.1112, -1.9610],
[-1.5279, -1.1463, -2.5806, -1.0561, -1.4768],
[-1.6258, -1.1618, -2.5027, -1.0100, -1.3882],
[-1.6125, -1.1576, -2.5169, -1.0133, -1.3989],
[-1.3384, -1.0965, -2.9404, -1.1370, -1.7129]], device='cuda:0',
dtype=torch.float64, grad_fn=<SubBackward0>)
but the output must be like:
tensor([-1.1823, -1.1463, -2.5027, -1.0133, -1.7129], device='cuda:0',
grad_fn=<SqueezeBackward1>)
| You can select the diagonal of your matrix with torch.diag:
>>> new_probs.diag()
tensor([-1.1823, -1.1463, -2.5027, -1.0133, -1.7129],
device='cuda:0', grad_fn=<DiagBackward>)
| https://stackoverflow.com/questions/68792047/ |
ValueError: x_max is less than or equal to x_min for bbox | I am using albumentations for a set of images and bboxes.
My bounding box is in "yolo" format, i.e., (x_mid, y_mid, width, height), all normalised.
While running albumentations over a set of bounding boxes, the above error (ValueError: x_max is less than or equal to x_min for bbox) occurs when a bounding box takes this value: [0.00017655367231635, 0.0002155172413793, 0.0003531073446327, 0.0004310344827586]. Here, x_min, y_min = [0, 0].
This error doesn't occur for other bounding boxes, e.g.: [0.3060659468984962, 0.4418239134174312, 0.2412257095864662, 0.5496854701834863].
This error is solved when I slightly increase all dimensions of the ERROR bounding boxes, i.e.:
Increase x_mid, y_mid, x_width and y_width by 0.01 for this ERROR BB [0.00017655367231635, 0.0002155172413793, 0.0003531073446327, 0.0004310344827586].
Resulting ERROR FREE BB: [0.01017655367231635, 0.0102155172413793, 0.0103531073446327, 0.010431034482758601].
Can anybody tell me a way to solve this error without changing the dimensions of the BB?
I am using below code for albumentations:
import albumentations as A
A.Compose(
[
A.LongestMaxSize(max_size=IMAGE_SIZE),
A.PadIfNeeded(
min_height=IMAGE_SIZE, min_width=IMAGE_SIZE, border_mode=cv2.BORDER_CONSTANT
),
A.Normalize(mean=[0, 0, 0], std=[1, 1, 1], max_pixel_value=255,),
ToTensorV2(),
],
bbox_params=A.BboxParams(format="yolo", min_visibility=0.4, label_fields=[]),
)
Detailed error:
---> 62 augmentations_norm = self.transform_norm(image=image, bboxes=bboxes)
63 #print('Aug done')
64 image = augmentations_norm["image"]
/opt/conda/lib/python3.7/site-packages/albumentations/core/composition.py in __call__(self, force_apply, *args, **data)
178 if dual_start_end is not None and idx == dual_start_end[0]:
179 for p in self.processors.values():
--> 180 p.preprocess(data)
181
182 data = t(force_apply=force_apply, **data)
/opt/conda/lib/python3.7/site-packages/albumentations/core/utils.py in preprocess(self, data)
60 rows, cols = data["image"].shape[:2]
61 for data_name in self.data_fields:
---> 62 data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")
63
64 def check_and_convert(self, data, rows, cols, direction="to"):
/opt/conda/lib/python3.7/site-packages/albumentations/core/utils.py in check_and_convert(self, data, rows, cols, direction)
68
69 if direction == "to":
---> 70 return self.convert_to_albumentations(data, rows, cols)
71
72 return self.convert_from_albumentations(data, rows, cols)
/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/bbox_utils.py in convert_to_albumentations(self, data, rows, cols)
49
50 def convert_to_albumentations(self, data, rows, cols):
---> 51 return convert_bboxes_to_albumentations(data, self.params.format, rows, cols, check_validity=True)
52
53
/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/bbox_utils.py in convert_bboxes_to_albumentations(bboxes, source_format, rows, cols, check_validity)
300 def convert_bboxes_to_albumentations(bboxes, source_format, rows, cols, check_validity=False):
301 """Convert a list bounding boxes from a format specified in `source_format` to the format used by albumentations"""
--> 302 return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
303
304
/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/bbox_utils.py in <listcomp>(.0)
300 def convert_bboxes_to_albumentations(bboxes, source_format, rows, cols, check_validity=False):
301 """Convert a list bounding boxes from a format specified in `source_format` to the format used by albumentations"""
--> 302 return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
303
304
/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/bbox_utils.py in convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity)
249 bbox = normalize_bbox(bbox, rows, cols)
250 if check_validity:
--> 251 check_bbox(bbox)
252 return bbox
253
/opt/conda/lib/python3.7/site-packages/albumentations/augmentations/bbox_utils.py in check_bbox(bbox)
331 x_min, y_min, x_max, y_max = bbox[:4]
332 if x_max <= x_min:
--> 333 raise ValueError("x_max is less than or equal to x_min for bbox {bbox}.".format(bbox=bbox))
334 if y_max <= y_min:
335 raise ValueError("y_max is less than or equal to y_min for bbox {bbox}.".format(bbox=bbox))
ValueError: x_max is less than or equal to x_min for bbox (0.00390625, 0.00390625, 0.00390625, 0.00390625, 0.0).
| Adapted from comments:
When you are using bounding boxes with the albumentations library, you need to ensure that each box spans at least one pixel in width and height at your image size. This might not be obvious in the yolo bounding box representation, but you can do a quick check like this:
pix_w = int(bb[2]*img_w)
pix_h = int(bb[3]*img_h)
np.testing.assert_equal(np.all([pix_w>0,pix_h>0]), True)
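A hedged sketch building on that check (an assumption, not from the original answer): instead of inflating degenerate boxes, filter them out before augmenting:
def keep_valid(bboxes, img_w, img_h):
    # keep only yolo-format boxes that span at least one pixel each way
    return [bb for bb in bboxes
            if int(bb[2] * img_w) > 0 and int(bb[3] * img_h) > 0]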
| https://stackoverflow.com/questions/68796654/ |
Error in loading ONNX model with ONNXRuntime | I'm converting a customized Pytorch model to ONNX. However, when loading it with ONNXRuntime, I've encountered an error as follows:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: ...onnxruntime/core/providers/cpu/tensor/transpose.h:46 onnxruntime::TransposeBase::TransposeBase(const onnxruntime::OpKernelInfo &) v >= 0 && static_cast<uint64_t>(v) <= std::numeric_limits<size_t>::max() was false.
I've checked with onnx.checker.check_model() and it's totally fine.
I've also tried to replace transpose() with permute() in the forward() function, but the error still remains.
Is anyone familiar with this error?
Environments:
Python 3.7
Pytorch 1.9.0
CUDA 10.2
ONNX 1.10.1
ONNXRuntime 1.8.1
OS Ubuntu 18.04
| The perm attribute of node Transpose_52 is [-1, 0, 1], although ONNX Runtime requires all of them to be non-negative: onnxruntime/core/providers/cpu/tensor/transpose.h#L46
| https://stackoverflow.com/questions/68797430/ |
Training an Transformer Encoder layer directly and the proper way to pad sequences | I am working on a problem in which I want to train a Transformer Encoder Layer directly (i.e. with no embedding layer). I already have the sequences of embeddings that I will treat as my dataset. I am confused about how I should handle the padding and the attention mask and would simply like to make sure that my understanding is correct.
My sequences have lengths varying between as little as 3 to as many as 130. Does this mean that I should pad all my sequences to have 130 parts? If so does it matter which value I pad with?
For the attention mask, I believe that I want each part to attend to all other parts in the sequence. In the docs I see that they have it set up such that each part is only allowed to attend to earlier parts in the sequence. Is this the most natural approach or is it just for the language modeling task? Also why (in the same link) do they use -Inf and 0 for the values of the attention mask as opposed to simply 1s and 0s?
As a little toy example, say that I have two samples in my dataset with sequence lengths of 2 and 3 respectively (AKA 3 is the max):
s_1 = torch.Tensor([0.001, 0.002, ..., 0.768], [0.001, 0.002, ..., 0.768]) # length 2
s_2 = torch.Tensor([0.001, 0.002, ..., 0.768], [0.001, 0.002, ..., 0.768], [0.001, 0.002, ..., 0.768]) # length 3
Does this mean that I should then pad s_1 to have length 3? And do something like:
s_1 = torch.Tensor([0.001, 0.002, ..., 0.768], [0.001, 0.002, ..., 0.768], [0, 0, ..., 0])
And then my attention masks would then look like:
attn_mask_s1 = [[0 -Inf 0],
[-Inf 0 0],
[0 0 0]]
attn_mask_s2 = [[0 -Inf -Inf],
[-Inf 0 -Inf],
[-Inf -Inf 0 ]]
Sorry to package so many questions into one but they all break down my doubts of how data should be passed to the TransformerEncoder block.
| My sequences have lengths varying between as little as 3 to as many as 130. Does this mean that I should pad all my sequences to have 130 parts?
No need... the main property of the transformer is that it handles variable sequence lengths (if you look at the dot-product or multi-head attention formulas you can see that), so there is no need for padding.
For the attention mask, I believe that I want each part to attend to all other parts in the sequence. In the docs I see that they have it set up such that each part is only allowed to attend to earlier parts in the sequence. Is this the most natural approach or is it just for the language modeling task?
The attention mask is for learning sequential generation. You can think of the transformer as being like an RNN that generates sequential data one token at a time. That's why the mask is used in the Transformer decoder. If that does not apply to your problem, you can skip it.
Whether the mask uses -inf or 0/1 depends on where you apply it in the dot-product attention: -inf values are added to the attention logits before the softmax, so the masked positions end up with zero weight.
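A minimal sketch of that additive-mask mechanics (illustrative values only):
import torch

logits = torch.randn(3, 3)                                        # raw scores
mask = torch.triu(torch.full((3, 3), float("-inf")), diagonal=1)  # causal mask
weights = torch.softmax(logits + mask, dim=-1)  # masked entries get weight 0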
| https://stackoverflow.com/questions/68797901/ |
Efficient pytorch broadcasting command not obtained | I have class-wise feature vectors for the 5 classes in my model. The feature vectors for each class are 20-dimensional. I want to multiply each class's feature vector by a scalar gain, and the resulting weighted feature vectors are summed to form a new feature matrix.
The gain_matrix holds a scalar value for each (i, j) pair of classes. The new feature vector (20-dimensional) of the i-th class is calculated as the sum, over all classes, of the scalar gain multiplied by each class's feature vector.
The exact implementation code is shown below.
nClass=5
feature_dim=20
gain_matrix=torch.rand(nClass,nClass)
feature_matrix=torch.rand(nClass,feature_dim) #in my implementation this is output from model
feature_matrix_new=torch.zeros(nClass,feature_dim)
for i in range(nClass):
for j in range(nClass):
feature_matrix_new[i,:]+=gain_matrix[i][j]*feature_matrix[j,:]
The nested for loop is slowing down the implementation a lot.
Is there any efficient PyTorch broadcasting solution to avoid the nested for loop in my implementation?
I have seen pytorch broadcasting web page but it did not help me much.
| This would be a good place to use torch.einsum:
>>> feature_matrix_new = torch.einsum('ij,jk->ik', gain_matrix, feature_matrix)
However in this case this just comes down to a matrix multiplication:
>>> feature_matrix_new = gain_matrix @ feature_matrix
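As a quick usage check (run the question's loop version first, so feature_matrix_new still holds the loop result):
assert torch.allclose(feature_matrix_new, gain_matrix @ feature_matrix)
assert torch.allclose(feature_matrix_new, torch.einsum('ij,jk->ik', gain_matrix, feature_matrix))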
| https://stackoverflow.com/questions/68799075/ |
How to load data in root directory separately in dataloader pytorch | So, I'm trying to load this dataset in PyTorch, and I'm facing a problem while loading it.
As you can make out by checking the dataset, the directory looks something like this:
root
monet_jpg
monet_tfrec
photo_jpg
photo_tfrec
So, I want to load the photo and monet images in separate dataloader variables. But this method doesn't seem to work.
EDIT: By that I mean the monet_ds and photo_ds return only monet images (while photo_ds should return images from photo_jpg)
I'm trying to load the data through this code:
import torchvision.datasets as dset
import torchvision.utils as vutils
from torch.utils.data import Subset
def load_data(dataroot , image_size, batch_size, workers,ngpu,shuffle=True):
#DataLoading
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
print(dataset.class_to_idx)
#print(dataset.imgs)
monet_ds = Subset(dataset, range(0,299))
photo_ds = Subset(dataset, range(300,))
# Create the dataloader
monet_ds = torch.utils.data.DataLoader(monet_ds, batch_size=batch_size,
num_workers=workers)
photo_ds = torch.utils.data.DataLoader(photo_ds, batch_size=batch_size,
num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu")
print("Data loaded...")
root = "../input/gan-getting-started"
monet_ds, photo_ds, device = load_data(root, image_size, batch_size, workers, ngpu)
Any help for loading this data perfectly in pytorch would be of good help.
Thank you.
| It seems that they are completely independent, so the following should work just fine:
import os
from torchvision.datasets.folder import default_loader
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
class MonetPhotoDataset(Dataset):
def __init__(self, root, transform=None):
self.transform = transform
self.img_paths = sorted(os.path.join(root, x) for x in os.listdir(root) if x.endswith('.jpg'))
def __len__(self):
return len(self.img_paths)
def __getitem__(self, idx):
img_path = self.img_paths[idx]
sample = default_loader(img_path)
if self.transform is not None:
sample = self.transform(sample)
return sample
def load_data(dataroot, image_size, batch_size, workers, ngpu, shuffle=True):
# set up transform
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# create datasets
monet_ds = MonetPhotoDataset(root=os.path.join(dataroot, 'monet_jpg'), transform=transform)
photo_ds = MonetPhotoDataset(root=os.path.join(dataroot, 'photo_jpg'), transform=transform)
# create dataloaders
monet_dl = DataLoader(monet_ds, batch_size=batch_size, num_workers=workers)
photo_dl = DataLoader(photo_ds, batch_size=batch_size, num_workers=workers)
# decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available()) else "cpu")
print("Data loaded...")
return monet_dl, photo_dl, device
root = "../input/gan-getting-started"
monet_dl, photo_dl, device = load_data(root, image_size, batch_size, workers, ngpu)
P.S.: I kept load_data because I assumed you rely on its signature in your code, but I wouldn't use it otherwise. Also, I didn't test the code above, so expect the odd typo, but the logic is correct.
Note that this dataset returns only the images.
| https://stackoverflow.com/questions/68800123/ |
Using `multiprocessing` in PyTorch on Windows got errors - `Couldn't open shared file mapping: , error code: ` | I am currently running PyTorch code on Windows 10 using PyCharm. This code first utilised the DataLoader function (num_workers=4) to load the training data:
train_loader = DataLoader(train_dset, batch_size, shuffle=True,
num_workers=4, collate_fn=trim_collate)
Then, in the training process, it utilised a for loop to load the training data and train the model:
for i, (v, norm_bb, q, target, _, _, bb, spa_adj_matrix,
sem_adj_matrix) in enumerate(train_loader):
Error: I got the following error messages when running above `for' loop:
0%| | 0/6934 [00:00<?, ?it/s]Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "E:\PyTorch_env\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "E:\PyTorch_env\lib\site-packages\torch\multiprocessing\reductions.py", line 286, in reduce_storage
metadata = storage._share_filename_()
RuntimeError: Couldn't open shared file mapping: <torch_13684_4004974554>, error code: <0>
python-BaseException
Traceback (most recent call last):
File "E:\PyTorch_env\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
python-BaseException
0%| | 0/6934 [00:07<?, ?it/s]
It seems that there are some issues with the multiprocessing functionality on Windows 10.
The environment settings:
Windows10, PyCharm
PyTorch v1.0.1, torchvision v0.2.2, Python 3.7.11
One GPU node
Could you please let me know if there are any possible solutions for this?
Many thanks!
| Set num_workers=0, as noted above.
And for the error "expected Long datatype but got Int instead", apply:
criterion(outputs_t.float(), target_t.flatten().type(torch.LongTensor))
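A minimal sketch of that workaround applied to the question's loader, plus the standard Windows entry-point guard (the guard is an extra assumption on my part, not from the original answer):
train_loader = DataLoader(train_dset, batch_size, shuffle=True,
                          num_workers=0, collate_fn=trim_collate)

# If you keep num_workers > 0 in a plain .py script, guard the entry point
# so worker processes can be spawned safely on Windows:
if __name__ == "__main__":
    main()  # hypothetical function holding the training loop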
| https://stackoverflow.com/questions/68802392/ |
how to know how many GPUs are used in pytorch? | The bash file I used to launch the training looks like this:
CUDA_VISIBLE_DEVICES=3,4 python -m torch.distributed.launch \
--nproc_per_node=2 train.py \
--batch_size 6 \
--other_args
I found that the batch size of the tensors on each GPU is actually batch_size / num_of_gpu = 6/2 = 3.
When I initialize my network, I need to know the batch size on each GPU.
(P.s. in this phase, I can't use input_tensor.shape to get the size of the batch dimension, since no data has been fed in yet.)
Somehow I could not find where does the pytorch store the parameter --nproc_per_node.
So how could I know how many GPUs are used, without passing it manually as --other_args?
| I think you are looking for torch.distributed.get_world_size() - this will tell you how many processes were created.
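A minimal sketch of how that looks inside each process spawned by torch.distributed.launch (assuming the usual env:// initialization that the launcher sets up; args.batch_size is the parsed command-line value):
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # reads env vars set by the launcher
world_size = dist.get_world_size()       # 2 in the example launch
per_gpu_batch_size = args.batch_size // world_size  # 6 // 2 = 3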
| https://stackoverflow.com/questions/68802932/ |
Optimize construct label function to make it suitable for parallel processing. For loop in my code is creating bottleneck | import torch
import torch.nn as nn
label_tensor = torch.tensor([[0,1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0],
[0,0,0,0,0,0,0,0,1,0],
[0,0,0,0,0,0,1,0,0,0]])
def construct_label(label_tensor):
batch_list = []
for i in range(len(label_tensor)):
empty_tensor_list = []
for j in label_tensor[i]:
empty_tensor_list.append(torch.full((28,28),j))
tensor_label = torch.stack(empty_tensor_list,0)
batch_list.append(tensor_label)
batch_tensor_label = torch.stack(batch_list,0)
print(batch_tensor_label.shape)
return batch_tensor_label
I want to optimize this function construct_label in a more torch-native way, avoiding the for loop. Is there an optimized way to do it?
| Yes, there is:
import torch
import torch.nn as nn
label_tensor = torch.tensor([[0,1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0],
[0,0,0,0,0,0,0,0,1,0],
[0,0,0,0,0,0,1,0,0,0]])
def construct_label(label_tensor):
return label_tensor[..., None, None].repeat((1, 1, 28, 28))
You can compare the output of your function with this one using torch.all and you'll see it returns True.
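For instance, a quick check, assuming the original looped version is kept around (renamed here to the hypothetical construct_label_loop):
out_loop = construct_label_loop(label_tensor)  # original looped version
out_fast = construct_label(label_tensor)       # vectorized version above
print(torch.all(out_loop == out_fast))         # tensor(True)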
| https://stackoverflow.com/questions/68807309/ |
How to disable logging from PyTorch-Lightning logger? | Logger in PyTorch-Lightning prints information about the model to be trained (or evaluated) and the progress during the training,
However, in my case I would like to hide all messages from the logger in order not to flood the output in Jupyter Notebook.
I've looked into the API of the Trainer class on the official docs page https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags and it seems like there is no option to turn off the messages from the logger.
There is a parameter log_every_n_steps which can be set to a big value, but nevertheless, the logging result after each epoch is still displayed.
How can one disable the logging?
| I am assuming that two things are particularly bothering you in terms of flooding output stream:
One, The "weight summary":
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 100 K
1 | l2 | Linear | 1.3 K
--------------------------------
...
Second, the progress bar:
Epoch 0: 74%|███████████ | 642/1874 [00:02<00:05, 233.59it/s, loss=0.85, v_num=wxln]
PyTorch Lightning provides very clear and elegant solutions for turning them off: Trainer(progress_bar_refresh_rate=0) for turning off the progress bar and Trainer(weights_summary=None) for turning off the weight summary.
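Putting both together, a minimal sketch:
from pytorch_lightning import Trainer

trainer = Trainer(progress_bar_refresh_rate=0,  # turn off the progress bar
                  weights_summary=None)         # turn off the weight summary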
| https://stackoverflow.com/questions/68807896/ |
PyTorch nn.CrossEntropyLoss IndexError: Target 2 is out of bounds | I'm creating a simple 2 class sentiment classifier using bert, but i'm getting an error related to output and label size. I cannot figure out what I'm doing wrong. Below are the required code snippets.
My custom dataset class:
class AmazonReviewsDataset(torch.utils.data.Dataset):
def __init__(self, df):
self.df = df
self.maxlen = 256
self.tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
def __len__(self):
return len(self.df)
def __getitem__(self, index):
review = self.df['reviews'].iloc[index].split()
review = ' '.join(review)
sentiment = int(self.df['sentiment'].iloc[index])
encodings = self.tokenizer.encode_plus(
review,
add_special_tokens=True,
max_length=self.maxlen,
padding='max_length',
truncation=True,
return_attention_mask=True,
return_tensors='pt'
)
return {
'input_ids': encodings.input_ids.flatten(),
'attention_mask': encodings.attention_mask.flatten(),
'labels': torch.tensor(sentiment, dtype=torch.long)
}
output of dataloader:
for batch in train_loader:
print(batch['input_ids'].shape)
print(batch['attention_mask'].shape)
print(batch['labels'])
print(batch['labels'].shape)
break
torch.Size([32, 256])
torch.Size([32, 256])
tensor([2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2,
2, 2, 2, 2, 2, 1, 1, 2])
torch.Size([32])
My nn:
criterion = nn.CrossEntropyLoss().to(device)
class SentimentClassifier(nn.Module):
def __init__(self):
super(SentimentClassifier, self).__init__()
self.distilbert = DistilBertModel.from_pretrained("distilbert-base-uncased")
self.drop0 = nn.Dropout(0.25)
self.linear1 = nn.Linear(3072, 512)
self.relu1 = nn.ReLU()
self.drop1 = nn.Dropout(0.25)
self.linear2 = nn.Linear(512, 2)
self.relu2 = nn.ReLU()
def forward(self, input_ids, attention_mask):
outputs = self.distilbert(input_ids, attention_mask)
last_hidden_state = outputs[0]
pooled_output = torch.cat(tuple([last_hidden_state[:, i] for i in [-4, -3, -2, -1]]), dim=-1)
x = self.drop0(pooled_output)
x = self.relu1(self.linear1(x))
x = self.drop1(x)
x = self.relu2(self.linear2(x))
return x
Train loop:
for batch in loop:
optimizer.zero_grad()
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
labels = batch['labels'].to(device)
output = model(input_ids, attention_mask)
print(output.size(), labels.size())
loss = criterion(output, labels) # ERROR
loss.backward()
optimizer.step()
Error:
torch.Size([32, 2]) torch.Size([32])
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-19-6268781f396e> in <module>()
12 print(output.size(), labels.size())
13 # output_class = torch.argmax(results, dim=1)
---> 14 loss = criterion(output, labels)
15 train_loss += loss
16 loss.backward()
2 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
1119 def forward(self, input: Tensor, target: Tensor) -> Tensor:
1120 return F.cross_entropy(input, target, weight=self.weight,
-> 1121 ignore_index=self.ignore_index, reduction=self.reduction)
1122
1123
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2822 if size_average is not None or reduce is not None:
2823 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2825
2826
IndexError: Target 2 is out of bounds.
I read a tutorial which says not to use softmax when applying nn.CrossEntropyLoss; I have 2 classes. What is wrong? Can anyone guide me? Thank you!
| You have two classes, which means the maximum target label is 1, not 2, because the classes are indexed from 0. You essentially have to subtract 1 from your labels tensor, such that class n°1 is assigned the value 0, and class n°2 the value 1.
In turn the labels of the batch you printed would look like:
tensor([1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1,
1, 1, 1, 1, 1, 0, 0, 1])
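For example, the shift can be done either in the training loop or in the dataset (a sketch):
labels = batch['labels'].to(device) - 1  # maps {1, 2} -> {0, 1}
# or, equivalently, in AmazonReviewsDataset.__getitem__:
# 'labels': torch.tensor(sentiment - 1, dtype=torch.long)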
| https://stackoverflow.com/questions/68807958/ |
TabNetRegressor not working with reshaped data | I am using the PyTorch implementation of tabnet and cannot figure out why I'm still getting this error. I import the data to a dataframe, I use this function to get my X, and y then my train-test split
def get_X_y(df):
''' This function takes in a dataframe and splits it into the X and y variables
'''
X = df.drop(['is_goal'], axis=1)
y = df.is_goal
return X,y
X,y = get_X_y(df)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)
Then I use this to reshape my y_train
y_train.values.reshape(-1,1)
Then create an instance of the model and try to fit it
reg = TabNetRegressor()
reg.fit(X_train, y_train)
and I get this error
ValueError: Targets should be 2D : (n_samples, n_regression) but y_train.shape=(639912,) given.
Use reshape(-1, 1) for single regression.
I understand why I need to reshape it, as this is pretty common, but I cannot understand why it's still giving me this error. I've restarted the kernel in my notebook, so I don't think it's a memory persistence issue either.
| You have to re-assign it:
y_train = y_train.values.reshape(-1,1)
Otherwise, it won't change.
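A minimal sketch, assuming X_train is a pandas DataFrame (TabNet expects numpy arrays):
from pytorch_tabnet.tab_model import TabNetRegressor

y_train = y_train.values.reshape(-1, 1)  # re-assigned: now shape (n_samples, 1)
reg = TabNetRegressor()
reg.fit(X_train.values, y_train)         # .values converts the DataFrame to numpy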
| https://stackoverflow.com/questions/68810698/ |
Importing pytorch geometric results in an error message | I'm suddenly unable to import pytorch geometric and I cannot figure out why. I've added a screenshot of the packages in my conda environment and the error message that I get when I try to import torch_geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
Error message:
OSError: dlopen(/Users/anstercharles/opt/anaconda3/lib/python3.8/site-packages/torch_sparse/_convert_cpu.so, 6): Symbol not found: __ZN2at8internal13_parallel_runExxxRKNSt3__18functionIFvxxmEEE
Referenced from: /Users/anstercharles/opt/anaconda3/lib/python3.8/site-packages/torch_sparse/_convert_cpu.so
Expected in: /Users/anstercharles/opt/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_cpu.dylib
in /Users/anstercharles/opt/anaconda3/lib/python3.8/site-packages/torch_sparse/_convert_cpu.so
Running:
conda list pytorch
Gives me:
Name
Version
Build
Channel
pytorch
1.9.0
cpu_py38h490fcb8_1
conda-forge
pytorch-cluster
1.5.9
py38_torch_1.9.0_cpu
rusty1s
pytorch-geometric
1.7.2
py38_torch_1.9.0_cpu
rusty1s
pytorch-scatter
2.0.8
py38_torch_1.9.0_cpu
rusty1s
pytorch-sparse
0.6.11
py38_torch_1.9.0_cpu
rusty1s
pytorch-spline-conv
1.2.1
py38_torch_1.9.0_cpu
rusty1s
Additional Details
OS: MacOS Mojave
Anaconda 3
Python 3.8
| I can replicate the error. The documentation in the README, to use
conda install pytorch-geometric -c rusty1s -c conda-forge
does not match the order that is actually used in the build, which has the channel order:
-c defaults -c pytorch -c conda-forge -c rusty1s
Workaround
I find it works using:
conda create -n foo -c defaults -c pytorch -c conda-forge -c rusty1s pytorch-geometric
| https://stackoverflow.com/questions/68811117/ |
How to efficiently (without looping) get data from tensor predicted by a torchscript in C++? | I am calling a torchscript (neural network serialized from Python) from a C++ program:
// define inputs
int batch = 3; // batch size
int n_inp = 2; // number of inputs
double I[batch][n_inp] = {{1.0, 1.0}, {2.0, 3.0}, {4.0, 5.0}}; // some random input
std::cout << "inputs" "\n"; // print inputs
for (int i = 0; i < batch; ++i)
{
std::cout << "\n";
for (int j = 0; j < n_inp; ++j)
{
std::cout << I[i][j] << "\n";
}
}
// prepare inputs for feeding to neural network
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::from_blob(I, {batch, n_inp}, at::kDouble));
// deserialize and load scriptmodule
torch::jit::script::Module module;
module = torch::jit::load("Net-0.pt");
// do forward pass
auto outputs = module.forward(inputs).toTensor();
Usually, to get data from the outputs, the following (element-wise) operation is performed:
// get data from outputs
std::cout << "outputs" << "\n";
int n_out = 1;
double outputs_data[batch][n_out];
for (int i = 0; i < batch; i++)
{
for (int j = 0; j < n_out; j++)
{
outputs_data[i][j] = outputs[i][j].item<double>();
std::cout << outputs_data[i][j] << "\n";
}
}
However, such looping using .item is highly inefficient (in the actual code I will have millions of points predicted at each time step). I want to get data from outputs directly (without looping over elements). I tried:
int n_out = 1;
double outputs_data[batch][n_out];
outputs_data = outputs.data_ptr<double>();
However, it is giving the error:
error: incompatible types in assignment of ‘double*’ to ‘double [batch][n_out]’
outputs_data = outputs.data_ptr<double>();
^
Note, that type of outputs_data is fixed to double and cannot be changed.
| It is necessary to make a deep copy as follows:
double outputs_data[batch];
std::memcpy(outputs_data, outputs.data_ptr<double>(), sizeof(double)*batch);
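A slightly more defensive sketch (assuming the output tensor really holds doubles): calling .contiguous() first guards against non-contiguous layouts, since data_ptr() points at raw storage, and with n_out > 1 the copy size becomes batch*n_out elements:
#include <cstring>
#include <vector>

auto outputs_c = outputs.contiguous();  // ensure contiguous memory before taking the raw pointer
std::vector<double> outputs_data(batch * n_out);
std::memcpy(outputs_data.data(), outputs_c.data_ptr<double>(),
            sizeof(double) * batch * n_out);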
| https://stackoverflow.com/questions/68811149/ |
The importance of combining a train and inference in Deep Learning | I thought that the processes of training and inference in Deep Learning were separate; however, I saw this snippet, and when I actually executed the code, I found that my understanding might not be correct.
network.train()
batch_losses = []
for step, (imgs, label_imgs) in enumerate(train_loader):
imgs = Variable(imgs).cuda()
label_imgs = Variable(label_imgs.type(torch.LongTensor)).cuda()
outputs = network(imgs)
loss = loss_fn(outputs, label_imgs)
loss_value = loss.data.cpu().numpy()
batch_losses.append(loss_value)
I point out outputs = network(imgs) to understand this procedure. Is this typically used for evaluating the model? I know that evaluating the model is quite useful to see our direction, but I don't understand why it's necessary. If I want to boost my training speed, could I remove it?
| The code snippet you shared is actually just the training code for a deep learning model. Here, outputs = network(imgs) takes in the imgs, i.e., the training data, and gives the predicted outputs after passing through the network (the model you want to train your data on).
And then, based on those outputs, the loss is calculated here: loss = loss_fn(outputs, label_imgs), taking the predicted outputs (outputs) and the actual labels (label_imgs).
Note: at inference time, you don't generally calculate the loss; you calculate the accuracy of the predicted labels. Loss is calculated only on the training data, to measure whether training is happening properly and the loss is decreasing.
I suggest you go through some basic deep learning tutorials to get better insight, in case you still have some doubts.
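For contrast, a minimal sketch of what a separate inference/evaluation loop could look like (val_loader is an assumed validation loader):
network.eval()         # switch off dropout / batch-norm updates
correct, total = 0, 0
with torch.no_grad():  # no gradients needed at inference time
    for imgs, label_imgs in val_loader:
        outputs = network(imgs.cuda())
        preds = outputs.argmax(dim=1)  # predicted label per pixel
        correct += (preds == label_imgs.cuda()).sum().item()
        total += label_imgs.numel()
print('accuracy:', correct / total)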
| https://stackoverflow.com/questions/68811814/ |
Masking few non-zero elements of certain rows of a matrix | I have a 3*3 matrix of 1s and 0s, A = [[1,0,1],[0,1,1],[1,0,0]], and an array indicating the limit on the row sum, B = [1,2,1]. I want to find rows of A whose sum exceeds the corresponding value in B, and set the non-zero elements of A to zero to ensure that the sum matches B. Finding the rows of A that exceed the sum is easy; however, masking the elements to adjust the sum is what I need help with. How can this be achieved (I want to scale it to larger matrices and tensors)?
| I would do something like this:
import numpy as np
A = np.array([[1,0,1],[0,1,1],[1,0,0]])
B = np.array([1,2,1])
# a cumulative sum of each row will tell you how many
# ones were in that row up to each point.
A_cs = np.cumsum(A, axis = 1)
# thresholding according to the sum vector will let
# you know where you should start omitting values since
# at that point the sum of the row exceeds its limit.
A_th = A_cs > B[:, None]
# then you can use the boolean array to create a new
# array where the appropriate values in the original
# array are set to zero to reduce the row sum.
A_nw = A * (1 - A_th)
output:
A_nw =
[[1 0 0]
[0 1 1]
[1 0 0]]
Unrelated note:
The following note is here to help OP better their dev-related search skills.
I can answer some questions instantaneously, but this was not one of them. I'm telling you this because I reached the answer through a simple google search for "python find the i th non zero element in each row", which led me to this post, which in turn led me very quickly to an answer. You don't have to try to be a better, more independent code writer. But if you want to, know that you can.
| https://stackoverflow.com/questions/68812259/ |
pytorch load model without torchvision | Is it possible to load a pytorch model (from a .pth file, containing architecture+state_dict) without torchvision as a dependency?
import os
import torch
assert os.path.exists(r'.\vgg.pth')
model = torch.load(r'.\vgg.pth')
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-e26863d95688> in <module>
2 import torch
3 assert os.path.exists(r'.\vgg.pth')
----> 4 model = torch.load(r'.\vgg.pth')
~\Anaconda3\envs\pytorch_save\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
590 opened_file.seek(orig_position)
591 return torch.jit.load(opened_file)
--> 592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
594
~\Anaconda3\envs\pytorch_save\lib\site-packages\torch\serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
849 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args)
850 unpickler.persistent_load = persistent_load
--> 851 result = unpickler.load()
852
853 torch._utils._validate_loaded_sparse_tensors()
ModuleNotFoundError: No module named 'torchvision'
I have looked into torch/serialization.py, but I see no reason why it would need torchvision. The imports in this file are as follows:
import difflib
import os
import io
import shutil
import struct
import sys
import torch
import tarfile
import tempfile
import warnings
from contextlib import closing, contextmanager
from ._utils import _import_dotted_name
from ._six import string_classes as _string_classes
from torch._sources import get_source_lines_and_file
from torch.types import Storage
from typing import Any, BinaryIO, cast, Dict, Optional, Type, Tuple, Union, IO
import copyreg
import pickle
import pathlib
| What caused my problem
The vgg.pth file in my question was generated as follows:
import torchvision
vgg = torchvision.models.vgg16(pretrained=True, init_weights=False)
torch.save(vgg, r'.\vgg.pth')
This way, the file vgg.pth contains not only the model parameters, but also the model architecture (see pytorch: save/load entire model). However, as @Kishore pointed out in the comments, it seems that this architecture also needs torchvision as a dependency.
How I solved it
In an environment with torchvision, I loaded the pretrained VGG model into memory and saved the state_dict
from torchvision.models.vgg import vgg16
import torch
model = vgg16(pretrained=True)
torch.save(model.state_dict(), r'.\state_dict.pth')
In an environment without torchvision, I rebuilt the model by inspecting the torchvision.models.vgg code.
Then I loaded this state_dict file into the state_dict of my model.
Lastly, I saved this model (including architecture) to a .pth file.
import torch
# a file where I pasted the torchvision.models.vgg code
# and commented out the torchvision dependencies I don't need
# in this case: 'from .._internally_replaced_utils import load_state_dict_from_url'
from torch_save import *
model = vgg16()
model.load_state_dict(torch.load(r'.\state_dict.pth'))
torch.save(model, r'.\entire_model.pth')
When I load this again in a torchvision-free environment, I get no errors.
| https://stackoverflow.com/questions/68813681/ |
How to save parameters just related to classifier layer of pretrained bert model due to the memory concerns? | I fine tuned the pretrained model here by freezing all layers except the classifier layers. And I saved weight file with using pytorch as .bin format.
Now instead of loading the 400mb pre-trained model, is there a way to load the parameters of the just Classifier layer I retrained it? By the way, I know that I have to load the original pretrained model, I just don't want to load the entire fine tuned model. due to memory concerns.
I can access the last layer's parameters from state_dict as below, but how can I save them in a separate file to use them later for less memory usage?
model = PosTaggingModel(num_pos_tag=num_pos_tag)
state_dict = torch.load("model.bin")
print("state dictionary:",state_dict)
with torch.no_grad():
model.out_pos_tag.weight.copy_(state_dict['out_pos_tag.weight'])
model.out_pos_tag.bias.copy_(state_dict['out_pos_tag.bias'])
Here is the model class:
class PosTaggingModel(nn.Module):
def __init__(self, num_pos_tag):
super(PosTaggingModel, self).__init__()
self.num_pos_tag = num_pos_tag
self.model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased")
for name, param in self.model.named_parameters():
if 'classifier' not in name: # classifier layer
param.requires_grad = False
self.bert_drop = nn.Dropout(0.3)
self.out_pos_tag = nn.Linear(768, self.num_pos_tag)
def forward(self, ids, mask, token_type_ids, target_pos_tag):
o1, _ = self.model(ids, attention_mask = mask, token_type_ids = token_type_ids)
bo_pos_tag = self.bert_drop(o1)
pos_tag = self.out_pos_tag(bo_pos_tag)
loss = loss_fn(pos_tag, target_pos_tag, mask, self.num_pos_tag)
return pos_tag, loss
I don't know if this is possible but I'm just looking for a way to save and reuse the last layer's parameters, without the need for parameters of frozen layers. I couldn't find it in the documentation.
Thanks in advance to those who will help.
| You can do it like this
import torch
# creating a dummy model
class Classifier(torch.nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.first = torch.nn.Linear(10, 10)
self.second = torch.nn.Linear(10, 20)
self.last = torch.nn.Linear(20, 1)
def forward(self, x):
pass
# Creating its object
model = Classifier()
# Extracting the layer to save
to_save = model.last
# Saving the state dict of that layer
torch.save(to_save.state_dict(), './classifier.bin')
# Recreating the object of that model
model = Classifier()
# Updating the saved layer of model
model.last.load_state_dict(torch.load('./classifier.bin'))
| https://stackoverflow.com/questions/68814074/ |
How to run Pytorch on Macbook pro (M1) GPU? | I tried to train a model using PyTorch on my Macbook pro. It uses the new generation apple M1 CPU. However, PyTorch couldn't recognize my GPUs.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
Does anyone know any solution?
I have updated all the libraries to the latest versions.
| PyTorch added support for M1 GPU as of 2022-05-18 in the Nightly version. Read more about it in their blog post.
Simply install nightly:
conda install pytorch -c pytorch-nightly --force-reinstall
To use (source):
mps_device = torch.device("mps")
# Create a Tensor directly on the mps device
x = torch.ones(5, device=mps_device)
# Or
x = torch.ones(5, device="mps")
# Any operation happens on the GPU
y = x * 2
# Move your model to mps just like any other device
model = YourFavoriteNet()
model.to(mps_device)
# Now every call runs on the GPU
pred = model(x)
| https://stackoverflow.com/questions/68820453/ |
Passing the output from the last Convolutional layer to the FCC layer - PyTorch | I am trying to pass the output from the last convolutional layer to the FCC layer, but I am struggling with dimensions. By default, the network uses AdaptiveAvgPool2d(output_size=(6, 6)), which does not let me use torch.use_deterministic_algorithms(True) for reproducibility purposes. This is the error I am getting:
*mat1 dim 1 must match mat2 dim 0*
(10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
(classifier): Sequential(
(0): Dropout(p=0.5, inplace=False)
(1): Linear(in_features=9216, out_features=4096, bias=True)
The input tensor is: [10, 3, 350, 350].
The shape of the tensor from the last Conv2d/MaxPool2d layer is: torch.Size([10, 256, 9, 9]). I assume that the number of inputs for the FCC should be 256 x 9 x 9 = 20736, but that does not work either.
Here is also my class for forwarding the output from CONV to FCC layer:
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
print('SHAPE', np.shape(x))
return x
The idea has been taken from video: https://www.youtube.com/watch?v=qaDe0qQZ5AQ&t=301s.
Thank you so much in advance.
| TLDR; The number of neurons in your fully connected layer is fine, your shape is not.
The nn.AdaptiveAvgPool2d layer between your CNN and classifier will output a tensor of shape (10, 256, 6, 6), since you've initialized it with an output_size of (6, 6). This being said, the first fully connected layer should have 256*6*6 neurons.
self.fc = nn.Linear(in_features=9216, out_features=4096)
This matches your current model's setup, not your proposed 20736...
Your classifier input shape should be flattened, this can be done by defining a flatten layer nn.Flatten (or using an inline alternative). First define your layer in the initializer:
self.flatten = nn.Flatten()
Then
>>> x.shape # nn.AdaptiveAvgPool2d output
torch.Size([10, 256, 6, 6])
>>> self.flatten(x)
torch.Size([10, 9216])
| https://stackoverflow.com/questions/68823539/ |
Pytorch Geometric: How do I pass the 2nd or 3rd arguments to from_networkx? | I am trying to use from_networkx() from Pytorch Geometric. I have a networkx Graph object as my first argument, and am trying to feed in a list of strings for the node attribute. I am getting an error that I am giving it 2 positional arguments when it wants 1. How can I make this code function or find a workaround?
The first line below is a list of attributes produced by nx.get_attributes(I, 'spin').
{(0, 0): 1, (0, 1): 1, (0, 2): -1, (0, 3): 1, (1, 0): 1, (1, 1): 1, (1, 2): 1, (1, 3): 1, (2, 0): 1, (2, 1): -1, (2, 2): -1, (2, 3): 1, (3, 0): -1, (3, 1): -1, (3, 2): 1, (3, 3): -1}
Graph with 16 nodes and 32 edges
<class 'networkx.classes.graph.Graph'>
Traceback (most recent call last):
File "pytorch_test.py", line 222, in <module>
print(from_networkx(I, ["spin"]))
TypeError: from_networkx() takes 1 positional argument but 2 were given
| I guess you are running pytorch_geometric with version <= 1.7.2. Then the method from_networkx had only one parameter. Only the latest from_networkx has the additional parameters.
However, before the additional parameters were introduced-and still the default behaviour- all node attributes were transformed:
import networkx as nx
g = nx.karate_club_graph()
print(g.nodes(data=True))
# [(0, {'club': 'Mr. Hi'}), (1, {'club': 'Mr. Hi'}), (2, {'club': 'Mr. Hi'}), ....
import torch
from torch_geometric.utils import from_networkx
data = from_networkx(g)
print(data)
#Data(club=[34], edge_index=[2, 156])
So in your example, data.spin should work if you use from_networkx(I).
| https://stackoverflow.com/questions/68826086/ |
Which seed when using `pytorch.manual_seed(seed)`? | I have trained a model with ImageNet. I got a new GPU and I also want to train the same model on a different GPU.
I want to compare if the outcome is different and therefore I want to use torch.manual_seed(seed).
After reading the docs https://pytorch.org/docs/stable/generated/torch.manual_seed.html it is still unclear, which number I should take for the paramater seed.
How to choose the seed parameter? Can I take 10, 100, or 1000 or even more or less and why?
|
How to choose the seed parameter? Can I take 10, 100, or 1000 or even more or less and why?
The PyTorch doc page you are pointing to does not mention anything special, beyond stating that the seed is a 64 bits integer.
So yes, 1000 is OK. As you expect from a modern pseudo-random number generator, the statistical properties of the pseudo-random sequence you are relying on do NOT depend on the choice of seed.
As for most runs you will probably reuse the seed from a previous run, a practical thing is to have the random seed as a command line parameter. In those cases where you have to come up with a brand new seed, you can peek one from the top of your head, or just play dice to get it.
The important thing is to have a record of the seed used for each run.
OK, but ...
That being said, a lot of people seem uncomfortable with the task of just “guessing” some arbitrary number. There are (at least) two common expedients to get some seed in a seemingly proper “random” fashion.
The first one is to use the operating system official source for genuinely (not pseudo) random bits. In Python, this is typically rendered as os.urandom(). So to get a seed in Python as a 64 bits integer, you could use code like this:
import functools
import os
# returns a list of 8 random small integers between 0 and 255
def get8RandomBytesFromOS():
r8 = os.urandom(8) # official OS entropy source
byteCodes = list(map(ord, r8.decode('Latin-1'))) # type conversion
return byteCodes
# make a single long integer from a list of 8 integers between 0 and 255
def getIntFromBytes(bs):
# force highest bit to 0 to avoid overflow
bs2 = [bs[0] if (bs[0] < 128) else (bs[0]-128)] + bs[1:8]
num = functools.reduce(lambda acc,n: acc*256+n, bs2)
return num
# Main user entry point:
def getRandomSeedFromOS():
rbs8 = get8RandomBytesFromOS()
return (getIntFromBytes(rbs8))
A second common expedient is to hash a string containing the current date and time, possibly with some prefix. With Python, the time includes microseconds. When a human user launches a script, the microseconds number of the launch time can be said to be random. One can use code like this, using a version of SHA (Secure Hash Algorithm):
import hashlib
import datetime
def getRandomSeedFromTime():
prefix = 'dl.cs.univ-stackoverflow.edu'
timeString1 = str(datetime.datetime.now())
timeString2 = prefix + ' ' + timeString1
hash = hashlib.sha256(timeString2.encode('ascii'))
bytes = (hash.digest())[24:32] # 8 rightmost bytes of hash
byteCodes = list(map(ord, bytes.decode('Latin-1'))) # type conversion
return (getIntFromBytes(byteCodes))
But, again, 1000 is basically OK. The idea of hashing the time string, instead of just taking the number of microseconds since the Epoch as the seed, probably comes from the fact that some early random number generators used their seed as an offset into a common and not so large global sequence. Hence, if your program naïvely took two seeds in rapid sequence without hashing, there was a risk of overlap between your two sequences. Fortunately, pseudo-random number generators now are much better than what they used to be.
(Addendum - taken from comment)
Note that the seed is a peripheral thing. The important thing is the state of the automaton, which can be much larger than the seed, hence the very word “seed”. The commonly used Mersenne Twister scheme has ~20,000 bits of internal state, and you cannot ask the user to provide 20,000 bits. There are sometimes ill-behaved initial states, but it is always the responsibility of the random number library to expand somehow the user-provided arbitrary seed into a well-behaved initial state.
| https://stackoverflow.com/questions/68828745/ |
Convert a pth pytorch file to an onnx model | I'm trying to convert a PyTorch model (a .pth file containing weights) to an ONNX file, and then to a TensorFlow model, since I work in TensorFlow, to then fine-tune it.
This is my attempt so far; however, I keep getting errors.
I think the problem is that the weights are for a vision transformer, but I haven't figured out what type of model to use to convert it. I'm assuming a CRNN, but if there is an easier way I would love to know.
P.S.: I did upload the .pth file to my drive; the path is correct.
from torch.autograd import Variable
import torch.onnx
import torchvision
import torch
import onnx
import torch.nn as nn
dummy_input = torch.randn(1, 3, 224, 224)
file_path='/content/drive/MyDrive/VitSTR/vitstr_base_patch16_224_aug.pth'
model = torchvision.models.vgg16()
model.load_state_dict(torch.load(file_path))
model.eval()
torch.onnx.export(model, dummy_input, "vitstr.onnx")
| Thank you all.
I used the same architecture as the one in the model and it worked.
| https://stackoverflow.com/questions/68829209/ |
Optimizing Values that are on GPU | I am trying to optimize a PyTorch tensor which I am also using as input to a network. Let's call this tensor "shape". My optimizer is as follows:
optimizer = torch.optim.Adam(
[shape],
lr=0.0001
)
I also am getting vertice values using this "shape" tensor:
vertices = model(shape)
And my loss function calculates loss as in differences of inferenced vertices and ground truth vertices:
loss = torch.sqrt(((gt_vertices - vertices) ** 2).sum(2)).mean(1).mean()
So what I am doing is actually estimating shape value. I am only interested in shape values. This works perfectly fine when everything is on CPU. However, when I put my shape and models on GPU by calling to("cuda"), I am getting the classic non-leaf Tensor error:
ValueError: can't optimize a non-leaf Tensor
Calling .detach().cpu() on shape inside the optimizer solves the issue, but then gradients cannot flow as they should and my values are not updated. How can I make this work?
| When calling .to('cuda'), e.g. shape_p = shape.to('cuda'), you are making a copy of shape. While shape remains a leaf tensor, shape_p is not, because its 'parent' tensor is shape. Therefore shape_p is not a leaf and returns the error when trying to optimize it.
Sending it to CUDA device after having set the optimizer, would solve the issue (there are certain instances when this can't be possible though, see here).
>>> optimizer = torch.optim.Adam([shape], lr=0.0001)
>>> shape = shape.cuda()
The best option though, in my opinion, is to send it directly on init:
>>> shape = torch.rand(1, requires_grad=True, device='cuda')
>>> optimizer = torch.optim.Adam([shape], lr=0.0001)
This way it remains a leaf tensor.
| https://stackoverflow.com/questions/68830818/ |
Backprop for Repeated Convolution using Grouped Convolution | I have a 3D tensor with each channel to be convolved with one single kernel. From a quick search, the fastest way to do this was to use grouped convolution with the number of groups equal to the number of channels.
Here is a small reproducible example:
import torch
import torch.nn as nn
torch.manual_seed(0)
x = torch.rand(1, 3, 3, 3)
first = x[:, 0:1, ...]
second = x[:, 1:2, ...]
third = x[:, 2:3, ...]
kernel = nn.Conv2d(1, 1, 3)
conv = nn.Conv2d(3, 3, 3, groups=3)
conv.weight.data = kernel.weight.data.repeat(3, 1, 1, 1)
conv.bias.data = kernel.bias.data.repeat(3)
>>> conv(x)
tensor([[[[-1.0085]],
[[-1.0068]],
[[-1.0451]]]], grad_fn=<MkldnnConvolutionBackward>)
>>> kernel(first), kernel(second), kernel(third)
(tensor([[[[-1.0085]]]], grad_fn=<ThnnConv2DBackward>),
tensor([[[[-1.0068]]]], grad_fn=<ThnnConv2DBackward>),
tensor([[[[-1.0451]]]], grad_fn=<ThnnConv2DBackward>))
Which you can see perfectly works.
Now coming to my question. I need to do backprop on this (kernel object). While doing this, each weight of the conv gets its own update. But actually, conv is made up of kernel repeated 3 times. At the end I require only an updated kernel. How do I do this?
PS: I need to optimize for speed
| To give a reply to your own answer, averaging the weights is actually not an accurate method. You can operate on the gradients by summing them (see below) but not on the weights.
For a given convolution layer when using groups, you can think of it as passing a number of groups elements through the kernel. As such the gradient is accumulated, not averaged. The resulting gradient is effectively the sum of gradients:
kernel.weight.grad = conv.weight.grad.sum(0, keepdim=True)
You can verify this with pen and paper: if you average the weights, you end up averaging the weights of the previous step and the gradients of each kernel. This is not even true for more advanced optimizers, which won't rely solely on a simple update scheme like θ_t = θ_{t-1} - lr*grad. Therefore, you should be working with gradients directly, not the resulting weights.
One alternative way you can solve this is by implementing you own shared kernel convolution module. This can be done in the following two steps:
define your single kernel in the nn.Module initializer.
in the forward definition, make a view of your kernel to match the number of groups. Use Tensor.expand instead of Tensor.repeat (the latter makes a copy). You should not make copies; they must remain references to the same underlying data, i.e. your single kernel. Then, you can apply the grouped convolution with more flexibility using the functional variant, torch.nn.functional.conv2d.
From there you can backpropagate anytime, and the gradient will accumulate on the single underlying weight (and bias) parameter.
Let's see it in practice:
class SharedKernelConv2d(nn.Module):
def __init__(self, kernel_size, groups, **kwargs):
super().__init__()
self.kwargs = kwargs
self.groups = groups
self.weight = nn.Parameter(torch.rand(1, 1, kernel_size, kernel_size))
self.bias = nn.Parameter(torch.rand(1))
def forward(self, x):
return F.conv2d(x,
weight=self.weight.expand(self.groups, -1, -1, -1),
bias=self.bias.expand(self.groups),
groups=self.groups,
**self.kwargs)
This is a very simple implementation yet is effective. Let's compare the two:
>>> sharedconv = SharedKernelConv2d(3, groups=3):
With the other method:
>>> conv = nn.Conv2d(3, 3, 3, groups=3)
>>> conv.weight.data = torch.clone(conv.weight).repeat(3, 1, 1, 1)
>>> conv.bias.data = torch.clone(conv.bias).repeat(3)
Backpropagate on the sharedconv layer:
>>> sharedconv(x).mean().backward()
>>> sharedconv.weight.grad
tensor([[[[0.7920, 0.6585, 0.8721],
[0.6257, 0.3358, 0.6995],
[0.5230, 0.6542, 0.3852]]]])
>>> sharedconv.bias.grad
tensor([1.])
Compared to summing the gradient on the repeated tensor:
>>> conv(x).mean().backward()
>>> conv.weight.grad.sum(0, keepdim=True)
tensor([[[[0.7920, 0.6585, 0.8721],
[0.6257, 0.3358, 0.6995],
[0.5230, 0.6542, 0.3852]]]])
>>> conv.bias.grad.sum(0, keepdim=True)
tensor([1.])
With SharedKernelConv2d you don't have to worry about updating the gradient with the sum of kernel gradients each time. The accumulation takes place automatically by having kept the reference to self.weight and self.bias with Tensor.expand.
| https://stackoverflow.com/questions/68841748/ |
binary classifying model in pytorch using cnn | I built a classifying model for phishing sites.
1. First, about my data
num_dataset: I have about 16000 samples
num_feature: my dataset has 12 characteristics.
label: if it's a phishing site, I set its label to -1, else 1
batch_size: set to 128 for the CNN model
kernel_size: 3
2. What I try
from torch.utils.data import DataLoader, TensorDataset
train_dataset = TensorDataset(x_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
#-------------------------------------------------------#
class CNN(nn.Module):
def __init__(self, kernel_size):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv1d(12, 32, 3,
stride=1, padding=1)
...
def forward(self, x):
out = self.layer1(x)
...
#-------------------------------------------------------#
model = CNN()
for epoch in range(epochs):
avg_cost = 0
for x_train, y_train in train_loader:
Hypothesis = model(x_train)
x_train.shape
torch.Size([16072, 12])
y_train.shape
torch.Size([16072, 1])
3. Error
at Hypothesis = model(x_train)
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [32, 12, 3], but got 2-dimensional input of size [128, 12] instead
4. Finally
I think it's because I'm confused between Conv1d and Conv2d, but I can't figure it out...
Please, I want to know the cause of this problem.
| You are using a nn.Conv1d which should receive a 3-dimensional input shaped (batch_size, n_channels, sequence_length). This being said your input has n_channels=12 (since you've initialized your 1d conv with 12 input channels) and a sequence_length=1. To match the requirements, you need to have an additional dimension on your input. Passing something like x_train.unsqueeze(-1) to your network should work.
See the following example:
>>> x = torch.rand(100, 12, 1)
>>> L = nn.Conv1d(12, 32, 3, stride=1, padding=1)
>>> L(x).shape
torch.Size([100, 32, 1])
| https://stackoverflow.com/questions/68844939/ |
install PyTorch CPU-only in Dockerfile | I am fairly new to Docker and containerisation. I want to decrease the size of the my_proj docker container in production.
I prefer installing packages and managing dependencies via Poetry.
How can I specify using CPU-only PyTorch in a Dockerfile?
To do this via a bash terminal, it would be:
poetry add pytorch-cpu torchvision-cpu -c pytorch
(or conda install...)
My existing Dockerfile:
FROM python:3.7-slim as base
RUN apt-get update -y \
&& apt-get -y --no-install-recommends install curl wget\
&& rm -rf /var/lib/apt/lists/*
ENV ROOT /home/worker/python/my_proj
WORKDIR $ROOT
ARG ATLASSIAN_TOKEN
ARG POETRY_HTTP_BASIC_AZURE_PASSWORD
ARG ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$ACCESS_KEY
ARG SECRET_KEY
ENV AWS_SECRET_ACCESS_KEY=$SECRET_KEY
ARG REPO
ENV REPO_URL=$REPO
ENV PYPIRC_PATH=$ROOT/.pypirc
ENV \
PYTHONFAULTHANDLER=1 \
POETRY_VERSION=1.1.4 \
POETRY_HOME=/etc/poetry \
XDG_CACHE_HOME=/home/worker/.cache \
POETRY_VIRTUALENVS_IN_PROJECT=true \
MPLCONFIGDIR=/home/worker/matplotlib \
PATH=/home/worker/python/my_proj/.venv/bin:/usr/local/bin:/etc/poetry/bin:$PATH
ADD https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py ./
RUN python get-poetry.py && chmod +x /etc/poetry/bin/poetry
RUN --mount=type=cache,target=/root/.cache pip install twine keyring artifacts-keyring
RUN --mount=type=cache,target=/root/.cache apt update && apt install gcc -y
FROM base as ws
ARG WS_APIKEY
ARG WS_PROJECTVERSION=
ARG WS_PROJECTNAME=workers-python-my_proj
ARG WS_PRODUCTNAME=HALO
COPY --chown=worker:worker . .
RUN --mount=type=cache,uid=1000,target=/home/worker/.cache poetry install --no-dev
COPY --from=openjdk:15-slim-buster /usr/local/openjdk-15 /usr/local/openjdk-15
ENV JAVA_HOME /usr/local/openjdk-15
ENV PATH $JAVA_HOME/bin:$PATH
RUN --mount=type=cache,uid=1000,target=/home/worker/.cache ./wss_agent.sh
FROM base as test
COPY . .
RUN poetry config experimental.new-installer false
RUN poetry install
RUN cd my_proj && poetry run invoke deployconfluence_server_pass=$ATLASSIAN_TOKEN
FROM base as package
COPY . .
RUN poetry build
RUN python -m pip install --upgrade pip && \
pip install twine keyring artifacts-keyring && \
twine upload -r $REPO_URL --config-file $PYPIRC_PATH dist/* --skip-existing
FROM base as build
COPY . .
RUN poetry config experimental.new-installer false
RUN poetry install --no-dev
RUN pip3 --no-cache-dir install --upgrade awscli
RUN aws s3 cp s3://....tar.gz $ROOT/my_proj # censored url
RUN mkdir $ROOT/my_proj/bert-base-cased && cd $ROOT/my_proj/bert-base-cased && \
wget https://huggingface.co/bert-base-cased/resolve/main/config.json && \
wget https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json && \
wget https://huggingface.co/bert-base-cased/resolve/main/tokenizer_config.json
FROM python:3.7-slim as production
ENV ROOT=/home/worker/python/my_proj \
VIRTUAL_ENV=/home/worker/python/my_proj/.venv\
PATH=/home/worker/python/my_proj/.venv/bin:/home/worker/python/my_proj:$PATH
COPY --from=build /home/worker/python/my_proj/pyproject.toml /home/worker/python/
COPY --from=build /home/worker/python/my_proj/.venv /home/worker/python/my_proj/.venv
COPY --from=build /home/worker/python/my_proj/my_proj /home/worker/python/my_proj
WORKDIR $ROOT
ENV PYTHONPATH=$ROOT:/home/worker/python/
ENTRYPOINT [ "primary_worker", "--mongo" ]
| Installing it via pip should work:
RUN pip3 install torch==1.9.0+cpu torchvision==0.10.0+cpu torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
| https://stackoverflow.com/questions/68845314/ |
Colab PyTorch | ImportError: /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so | On Google Colaboratory, I have tried all 3 runtimes: CPU, GPU, TPU. All give the same error.
Cells:
# NB: Only run in TPU environment
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
!pip -q install pytorch-lightning==1.2.7 transformers torchmetrics awscli mlflow boto3 pycm
import os
import sys
import logging
from pytorch_lightning import LightningDataModule
Error:
ImportError Traceback (most recent call last)
<ipython-input-6-09509a67016b> in <module>()
3 import logging
4
----> 5 from pytorch_lightning import LightningDataModule
6 from torch.utils.data import DataLoader, Dataset
7 from transformers import AutoTokenizer
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py in <module>()
26 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT)
27
---> 28 from pytorch_lightning import metrics # noqa: E402
29 from pytorch_lightning.callbacks import Callback # noqa: E402
30 from pytorch_lightning.core import LightningDataModule, LightningModule # noqa: E402
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py in <module>()
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from pytorch_lightning.metrics.classification import ( # noqa: F401
15 Accuracy,
16 AUC,
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py in <module>()
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from pytorch_lightning.metrics.classification.accuracy import Accuracy # noqa: F401
15 from pytorch_lightning.metrics.classification.auc import AUC # noqa: F401
16 from pytorch_lightning.metrics.classification.auroc import AUROC # noqa: F401
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py in <module>()
16 import torch
17
---> 18 from pytorch_lightning.metrics.functional.accuracy import _accuracy_compute, _accuracy_update
19 from pytorch_lightning.metrics.metric import Metric
20
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/functional/__init__.py in <module>()
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from pytorch_lightning.metrics.functional.accuracy import accuracy # noqa: F401
15 from pytorch_lightning.metrics.functional.auc import auc # noqa: F401
16 from pytorch_lightning.metrics.functional.auroc import auroc # noqa: F401
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/functional/accuracy.py in <module>()
16 import torch
17
---> 18 from pytorch_lightning.metrics.classification.helpers import _input_format_classification, DataType
19
20
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/helpers.py in <module>()
17 import torch
18
---> 19 from pytorch_lightning.metrics.utils import select_topk, to_onehot
20 from pytorch_lightning.utilities import LightningEnum
21
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py in <module>()
16 import torch
17
---> 18 from pytorch_lightning.utilities import rank_zero_warn
19
20 METRIC_EPS = 1e-6
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py in <module>()
46 )
47 from pytorch_lightning.utilities.parsing import AttributeDict, flatten_dict, is_picklable # noqa: F401
---> 48 from pytorch_lightning.utilities.xla_device import XLADeviceUtils # noqa: F401
49
50 _TPU_AVAILABLE = XLADeviceUtils.tpu_device_exists()
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/xla_device.py in <module>()
21
22 if _XLA_AVAILABLE:
---> 23 import torch_xla.core.xla_model as xm
24
25 #: define waiting time got checking TPU available in sec
/usr/local/lib/python3.7/dist-packages/torch_xla/__init__.py in <module>()
126 import torch
127 from ._patched_functions import _apply_patches
--> 128 import _XLAC
129
130
ImportError: /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at11result_typeERKNS_6TensorEN3c106ScalarE
| Searching online, there seem to be many causes for this same problem.
In my case, setting Accelerator to None in Google Colaboratory solved this.
| https://stackoverflow.com/questions/68846290/ |
How do i stream a video with openCV into my pytorch neural network? | I wrote YOLOv3 in PyTorch from scratch. If I send an image through the model with trained weights, it kinda works. The next step is to use my camera to make YOLO do its magic in real time.
I think the correct working pipeline is to catch a single frame of the video and feed it to the network. Then, write the boxes on the very same frame.
checkpoint = torch.load("\my_checkpoint_40.pth.tar")
model = YOLOv3(in_channels = 3, num_classes = 20).to(config.DEVICE)
model.load_state_dict(checkpoint["state_dict"])
ip_camera = "http://192.168.1.70:4500/mjpegfeed?640x480"
outputFile = "yolo_out_py.avi"
In this way, I load the weights into the net. Then, I wrote the function to use my camera (it's DroidCam from my mobile, because my PC has no camera device, so I use the IP of the mobile device) and the code itself works: the video appears on screen.
The outputFile should be the destination path of the wrote video.
Problems arise when I try to load a single frame into the net and do the rest of the process.
def streaming(model, thresh, iou_thresh, anchors, ip_camera):
stream = cv2.VideoCapture(ip_camera)
# Corrective actions printed in the even of failed connection.
if stream.isOpened() is not True:
print('Not opened.')
print('Please ensure the following:')
print('1. DroidCam is not running in your browser.')
print('2. The IP address given is correct.')
# Resizing the image to be in hte same dimension of the YOLOv3 Network
width = 416
height = 416
# Connection successful. Proceeding to display video stream.
while stream.isOpened() is True:
# Capture frame-by-frame
ret, f = stream.read()
dim = (width, height)
image = cv2.resize(f, dim, interpolation = cv2.INTER_AREA)
cv2.imshow('frame', image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
for frame in image:
model.eval()
anchors = torch.tensor(anchors)
anchors = anchors.to(config.DEVICE)
x = torch.tensor(frame)
x = x.to("cuda")
# from this line to the nms_boxes, it's the same code i used for plotting a single image
with torch.no_grad():
out = model(x)
bboxes = [[] for _ in range(x.shape[0])]
for i in range(1):
batch_size, A, S, _, _ = out[i].shape
anchor = anchors[i]
boxes_scale_i = cells_to_bboxes(
out[i], anchor, S = S, is_preds = True
)
for idx, (box) in enumerate(boxes_scale_i):
bboxes[idx] += box
model.train()
for i in range(batch_size):
nms_boxes = non_max_suppression(
bboxes[i], iou_threshold = iou_thresh, threshold = thresh, box_format = "midpoint",
)
# cells_to_boxes and non_max_suppression are functions that return boxes coordinates
# and the "better" box
#now it's time to write things on the frame
frame = cv2.VideoWriter(outputFile, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), 30,
(round(stream.get(cv2.CAP_PROP_FRAME_WIDTH)), round(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))))
frame.write(nms_boxes)
stream.release()
cv2.destroyAllWindows()
The code doesn't work for several reasons: model.eval() gives me an error: missing 1 required positional argument: 'self'
Then, I have several errors that I think are about the right pipeline for working on a video stream. It's the first time I've worked with OpenCV.
If i delete model.eval(), i have another error on:
out = model(x)
This is the traceback
Traceback (most recent call last):
File "C:/Python_Project/YOLOV3/openVid.py", line 97, in <module>
streaming(YOLOv3, 0.6, 0.6, config.ANCHORS, ip_camera)
File "C:/Python_Project/YOLOV3/openVid.py", line 72, in streaming
out = model(x)
File "C:\Python_Project\YOLOV3\model.py", line 106, in __init__
self.layers = self.create_conv_layers()
File "C:\Python_Project\YOLOV3\model.py", line 141, in create_conv_layers
CNNBlock(
File "C:\Python_Project\YOLOV3\model.py", line 46, in __init__
self.conv = nn.Conv2d(in_channels, out_channels, bias = not bn_act, **kwargs)
File "C:\Users\Simone\anaconda3\envs\Pytorch\lib\site-packages\torch\nn\modules\conv.py", line 430, in __init__
super(Conv2d, self).__init__(
File "C:\Users\Simone\anaconda3\envs\Pytorch\lib\site-packages\torch\nn\modules\conv.py", line 83, in __init__
if in_channels % groups != 0:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
I have no clue about what to do next.
I saw I should convert the model to ONNX, but I really don't know what to do. I couldn't find any tutorial on the internet, and I'm stuck.
Could you help me?
| Based on the error message, model is not a class instance. Note, in the Traceback, that
out = model(x)
is calling the __init__ function. Therefore, model is probably YOLOV3 rather than YOLOV3(...). Based on the init signature, x is being taken as in_channels, and as x an image, the
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
makes sense. This also explains the .eval() error. Besides that, I believe you'll need to add a batch dimension to your frame (e.g., x.unsqueeze(0)), otherwise you'll get another error.
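Concretely, reusing the setup from the top of the question, the call should pass an instance (a sketch):
model = YOLOv3(in_channels=3, num_classes=20).to(config.DEVICE)
model.load_state_dict(checkpoint["state_dict"])
streaming(model, 0.6, 0.6, config.ANCHORS, ip_camera)  # an instance, not the class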
| https://stackoverflow.com/questions/68846681/ |
issue while exporting torch model to onnx format | I'm trying to export my PyTorch model to an ONNX format but I keep getting this error:
TypeError: forward() missing 1 required positional argument: 'text'
This is my code:
model = Model(opt)
dummy_input = torch.randn(1, 3, 224, 224)
file_path='/content/drive/MyDrive/VitSTR/vitstr_tiny_patch16_224_aug.pth'
torch.save(model.state_dict(), file_path)
model.load_state_dict(torch.load(file_path))
#model = torch.nn.DataParallel(model).to(device)
#print(model)
torch.onnx.export(model, dummy_input, "vitstr.onnx", verbose=True)
| ViTSTR forward requires two positional arguments, input and text:
def forward(self, input, text, is_train=True, seqlen=25):
# ...
Therefore, you need to pass an additional argument:
# ...
dummy_text = # create a dummy_text as well, with the appropriate shape
torch.onnx.export(model, (dummy_input, dummy_text), "vitstr.onnx", verbose=True)
| https://stackoverflow.com/questions/68847663/ |
Batch size reduces accuracy of ensemble of pretrained CNNs | I'm trying to implement basic softmax-based voting, in which I take a couple of pretrained CNNs, softmax their outputs, add them together and then use argmax as the final output.
So I loaded 4 different pretrained CNNs (vgg11, vgg13, vgg16, vgg19) from "chenyaofo/pytorch-cifar-models" that were trained on CIFAR10 -- I didn't train them.
When I iterate over the test set with DataLoader with batch_size=128/256, I get to 94% accuracy;
When I iterate over the test set with batch_size=1, I get to 69% accuracy.
How could it be?
This is the code:
import torch
from tqdm import tqdm
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import torch.nn as nn
import torch
torch.cuda.empty_cache()
model_names = [
"cifar10_vgg11_bn",
"cifar10_vgg13_bn",
"cifar10_vgg16_bn",
"cifar10_vgg19_bn",
# "cifar10_resnet56",
]
batch_size = 2
test_transform = transforms.Compose([
transforms.ToTensor(),
])
def load_models():
models = []
for model_name in model_names:
model = torch.hub.load("chenyaofo/pytorch-cifar-models", model_name, pretrained=True)
models.append(model)
return models
testset = datasets.CIFAR10(root='./data', train=False,
download=True, transform=test_transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False)
import torch.nn as nn
import torch
class MyEnsemble(nn.Module):
def __init__(self, modelA, modelB, modelC, modelD):
super(MyEnsemble, self).__init__()
self.modelA = modelA
self.modelB = modelB
self.modelC = modelC
self.modelD = modelD
# self.modelE = modelE
def forward(self, x):
out1 = self.modelA(x)
out2 = self.modelB(x)
out3 = self.modelC(x)
out4 = self.modelD(x)
# out5 = self.modelE(x)
# print(out1.shape)
out1 = torch.softmax(out1, dim=1)
out2 = torch.softmax(out2, dim=1)
out3 = torch.softmax(out3, dim=1)
out4 = torch.softmax(out4, dim=1)
out = out1 + out2 + out3 + out4
return out
from EnsembleModule import MyEnsemble
from data import load_models, testloader
import torch
from tqdm import tqdm
device = 'cuda' if torch.cuda.is_available() else 'cpu'
models = load_models()
model = MyEnsemble(models[0], models[1], models[2], models[3])
model.to(device)
total = 0
correct = 0
with torch.no_grad():
for images, labels in tqdm(testloader):
images, labels = images.to(device), labels.to(device)
outputs = model(images)
_, predictions = torch.max(outputs, 1)
total += labels.size(0)
correct += (predictions == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
| You're forgetting to call model.eval():
# ...
model.to(device)
model.eval() # <<<<<<<<<<<<<
total = 0
correct = 0
with torch.no_grad():
for images, labels in tqdm(testloader):
images, labels = images.to(device), labels.to(device)
outputs = model(images)
# ...
As your model has BatchNorm layers, batch_size=1 is particularly degrading.
The pre-processing should also follow the one used for training. As you can see in the repository of the author of the model, you should normalize using the following statistics:
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2023, 0.1994, 0.2010))
])
| https://stackoverflow.com/questions/68849347/ |
How to get output from intermediate encoder layers in PyTorch Transformer? | I have trained a fairly simple Transformer model with 6 TransformerEncoder layers:
class LitModel(pl.LightningModule):
def __init__(self,
num_tokens: int,
dim_model: int = 96,
dim_h: int = 128,
n_head: int = 1,
dropout: float = 0.1,
activation: str = 'relu',
num_layers: int = 2,
lr: float=1e-3):
"""
:param num_tokens:
:param dim_model:
:param dim_h:
:param n_head:
:param dropout:
:param activation:
:param num_layers:
"""
super().__init__()
self.lr = lr
self.embed = torch.nn.Embedding(num_embeddings=num_tokens,
embedding_dim=dim_model)
encoder_layer = torch.nn.TransformerEncoderLayer(d_model=dim_model,
nhead=n_head,
dim_feedforward=dim_h,
dropout=dropout,
activation=activation,
batch_first=True)
self.encoder = torch.nn.TransformerEncoder(encoder_layer=encoder_layer,
num_layers=num_layers)
self.linear = torch.nn.Linear(in_features=dim_model, out_features=num_tokens)
def forward(self, indices, mask):
x = self.embed(indices)
x = self.encoder(x, src_key_padding_mask=mask)
return x
def training_step(self, batch, batch_idx):
x = batch['src']
y = batch['label']
mask = batch['mask']
x = self.embed(x)
x = self.encoder(x, src_key_padding_mask=mask)
x = self.linear(x)
loss = F.cross_entropy(input=x.transpose(1, 2),
target=y,
ignore_index=0)
self.log('train_loss', loss)
return loss
After training the model to predict [MASK] tokens (exactly like BERT), I would like to be able to extract the outputs from the lower layers, specifically, the second to last TransformerEncoderLayer, which may give a better vector encoding than the final layer (according to the original BERT paper). I'm not sure how to go about doing this.
| Just in case it is not clear from the comments, you can do that by registering a forward hook:
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
# instantiate the model
model = LitModel(...)
# register the forward hook
model.encoder.layers[-2].register_forward_hook(get_activation('encoder_penultimate_layer'))
# pass some data through the model (LitModel.forward takes the token indices and the padding mask)
output = model(indices, mask)
# this is what you're looking for
activation['encoder_penultimate_layer']
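As a side note, register_forward_hook returns a handle, so the hook can be removed once you have extracted what you need:
handle = model.encoder.layers[-2].register_forward_hook(get_activation('encoder_penultimate_layer'))
# ... run inference, read activation['encoder_penultimate_layer'] ...
handle.remove()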
| https://stackoverflow.com/questions/68852988/ |
RTX3060 cannot run Pytorch Yolov4 with cuda11.4 | Before, I was using an RTX2070 SUPER to run Pytorch Yolov4, and now my PC has been changed to an RTX3060 (ASUS KO GeForce RTX™ 3060 OC).
I have deleted the existing cuda11.2 and install again with cuda11.4 and Nvidia Driver 470.57.02
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:07:00.0 Off | N/A |
| 0% 42C P8 16W / 170W | 403MiB / 12053MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1236 G /usr/lib/xorg/Xorg 9MiB |
| 0 N/A N/A 1264 G /usr/bin/gnome-shell 6MiB |
| 0 N/A N/A 2124 C python 153MiB |
+-----------------------------------------------------------------------------+
However, with cuda11.4 and the RTX3060, I cannot run Pytorch Yolov4 detection. When I run the detection, it gets stuck after loading the weights (Loading weights from ./data/people.weights... Done!). In the meantime, nvidia-smi shows that a "python" process (PID 2124 above) is using GPU memory, and its used GPU memory keeps increasing.
Does cuda11.4 not support the RTX3060 or Pytorch 1.4 yet?
Environment:
ASUS KO GeForce RTX™ 3060 OC
Ubuntu 18.04.5 LTS
cuda 11.4
nvidia driver 470.57.02
conda 4.8.3
python 3.8.5
pytorch 1.4
| Solved by reinstalling PyTorch in my conda environment.
You may try reinstalling PyTorch, or creating a new conda environment and installing it there.
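For context, and this is my reading of the versions listed rather than something verified on that machine: the RTX 3060 is an Ampere GPU (compute capability sm_86), which is only supported by PyTorch binaries built against CUDA 11.x; PyTorch 1.4 predates Ampere entirely. One known-working combination at the time of writing would be, for example:
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html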
| https://stackoverflow.com/questions/68857035/ |
How to train a deep learning model on a GPU server with laptop closed? | I'm about to train my own ASR model using ESPNet on a GPU server.
If my calculations are right, it's going to take about 4 consecutive days (using about 100G of audio data).
I'm mainly using VScode to remotely connect to the SSH server, and will run the shell file with the VScode terminal.
My question is: will I have to leave my laptop open for four days in order to train my model?
not sure if this is any useful info, but this is my nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
and my nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro RTX 6000 Off | 00000000:00:06.0 Off | 0 |
| N/A 32C P0 41W / 250W | 0MiB / 22698MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Once my data is all prepared, I'll execute the run.sh file.
Espnet github:
https://github.com/espnet/espnet
The model I'm using is located in espnet/egs2/zeroth_korean/asr1.
I'm fairly new to linux servers and models this heavy and large, so any type of feedback would be much appreciated.
| Many Linux versions include the GNU Screen program, which - amongst other things - allow you to keep processes running after you've logged off.
Once connected, simply run the screen command:
[myhost ~]$ screen
Start your long running process inside this screen terminal.
You can now close the terminal. Power off, restart your computer, whatever.
When you want to check up on your process, just re-connect and run the following command to re-attach:
[myhost ~]$ screen -r
I hope this works for you.
screen has lots of other nice tricks. Just google "Linux Screen" for an abundance of articles on this.
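For example, a named session for this training run could look like this (the session name is just an illustration):
[myhost ~]$ screen -S asr-train
[myhost ~]$ ./run.sh
# detach with Ctrl-a d, log off, and later re-attach by name:
[myhost ~]$ screen -r asr-train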
| https://stackoverflow.com/questions/68858284/ |
Custom Images with GAN | I downloaded this image set https://www.kaggle.com/jessicali9530/stanford-dogs-dataset
and extracted the images folder into my data folder.
So now it's like this
Below is my code
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
import os
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
# Creating the transformations
transform = transforms.Compose([transforms.Scale(imageSize), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
input_vector = 100
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Defining the Generator
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 3, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
# Creating the generator
netG = G()
netG.apply(weights_init)
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(3, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.002, betas=(0.5, 0.999))
nb_epochs = 25
for epoch in range(25):
for i, data in enumerate(dataloader, 0):
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
noise = Variable(torch.randn(input.size()[0], 100, 1, 1))
fake = netG(noise)
target = Variable(torch.zeros(input.size()[0]))
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch, 25, i, len(dataloader), errD.data[0], errG.data[0]))
if i % 100 == 0:
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
But when I run that it gives me this error
RuntimeError: stack expects each tensor to be equal size, but got [3, 64, 85] at entry 0 and [3, 64, 80] at entry 1
That means the issue is with the for loop.
How can I change that for loop to match my dataset?
| try using transforms.Resize instead of the deprecated transforms.Scale(imageSize), and pass the size argument as a tuple so that every image comes out with exactly the same shape:
....
transforms.Resize((imageSize, imageSize))
....
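To see why the tuple matters, here is a small illustration (the image size is made up, not from your dataset): with a single int, only the shorter side is resized to that value and the aspect ratio is kept, so differently-shaped photos come out with different widths, which is exactly the [3, 64, 85] vs [3, 64, 80] mismatch in the error:
from PIL import Image
from torchvision import transforms
img = Image.new('RGB', (500, 340))
print(transforms.Resize(64)(img).size)        # (94, 64), width depends on the input
print(transforms.Resize((64, 64))(img).size)  # (64, 64), fixed and batchable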
| https://stackoverflow.com/questions/68858397/ |
Cannot import modules from torch | I have installed torch 1.4.0. It has worked well with all modules for 4 months. But yesterday I ran into a problem in Jupyter Notebook:
import torch
import torch.nn as nn
import torch.optim as optim
And it throws "cannot find module torch.optim". But it could import torch and torch.nn! I can't understand why. I didn't change venv or install other packages. Same problem with torch.nn.Functional and torch.onnx. Does anybody know how to solve this problem?
| The problem was solved by installing torch 1.6.0. I have no idea why this happened with torch 1.4.0. Anyway, it works now.
| https://stackoverflow.com/questions/68858407/ |
Learning Object Detection Detected result showed in discolouration | Breif Description
Recently begin to learning Object Detection, Just starting off with PyTorch, YOLOv5. So I thought why not build a small side project to learn? Using it to train to detect Pikachu.
The Problem
I've successfully trained the model on Pikachu and then used the trained weights with a Python script I wrote myself to detect Pikachu in test images. Now, here's the problem: Pikachus are successfully detected, but all the results show a blue discolouration. What is supposed to be yellow turned blue, and blue turned yellow.
Fig-1: Result images shown in blue discolouration (a few example outputs)
Additional Imformation
I've pushed this project to the GitHub, feel free to download it or pull it to debugging.
GitHub repository where it contains all the files
Any solution/suggestion would be helpful, Thanks.
The Code
"""Object detection using YOLOv5
Pokemon Pikachu detecting
"""
# import os, sys to append YOLOv5 folder path
import os, sys
# import object detection needed modules and libraries
# pillow
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import torch # PyTorch
# YOLOv5 folder path and related folder path settings
cwd = os.getcwd()
root_dir = (cwd + "/yolov5_stable")
sys.path.append(root_dir)
# import methods, functions from YOLOv5
from models.experimental import attempt_load
from utils.datasets import LoadImages
from utils.general import non_max_suppression, scale_coords
from utils.plots import colors
# define a function to show detected pikachu
def show_pikachu(img, det):
labels = ["pikachu"]
img = Image.fromarray(img)
draw = ImageDraw.Draw(img)
font_size = max(round(max(img.size)/40), 12)
font = ImageFont.truetype(cwd + "/yolov5_stable/fonts/times.ttf")
for info in det:
color = colors(1)
target, prob = int(info[5].cpu().numpy()), np.round(info[4].cpu().numpy(), 2)
x_min, y_min, x_max, y_max = info[0], info[1], info[2], info[3]
draw.rectangle([x_min, y_min, x_max, y_max], width = 3, outline = color)
draw.text((x_min, y_min), labels[target] + ':' + str(prob), fill = color, font = font)
# Bug unresolved, pikachu shown in blue discolouration
return img
if __name__ == "__main__":
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print("GPU State: ", device)
data_path = (cwd + "/test_data/")
weight_path = (cwd + "/yolov5_stable/weights/best_v1.pt")
dataset = LoadImages(data_path)
model = attempt_load(weight_path, map_location = device)
model.to(device)
for path, img, im0s, _ in dataset:
img = torch.from_numpy(img).to(device)
img = img.float() # uint8 to fp16/32
img /= 255.0 # 0-255 to 0.0-1.0
if img.ndimension() == 3:
img = img.unsqueeze(0)
pred = model(img)[0]
pred = non_max_suppression(pred, 0.25, 0.45)
for i, det in enumerate(pred):
im0 = im0s.copy()
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
result = show_pikachu(im0, det)
result.show()
| The problem is that Image.fromarray expects image in RGB and you're providing them in BGR. You just need to change that. There are multiple places you could do that, for instance:
Image.fromarray(img[...,::-1]) # assuming `img` is channel-last
Evidence of this is that the red parts of the mouse (red is RGB(255, 0, 0)) are being shown in blue (which is RGB(0, 0, 255)). FYI, yellow is RGB(255, 255, 0) and cyan is RGB(0, 255, 255), which you can also see in your case.
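An equivalent and perhaps more explicit fix is to let OpenCV do the conversion before handing the array to PIL:
import cv2
from PIL import Image
img = Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))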
| https://stackoverflow.com/questions/68862495/ |
Load a saved model on Pytorch AttributeError: 'dict' object has no attribute 'seek' | I trained a model, and I saved the best one, with:
# Store params at the best validation accuracy
if save_param and accuracy_star > best_accuracy:
torch.save({'vgg_a':nets[0].state_dict(),'classifier':nets[3].state_dict()}, f"{model_name}_best_test.pth")
So, at the end of the training I downloaded this "federated_mnist_best_test.pth" on my local computer.
Now, I created a new notebook on google colab, and I tried din this way to upload the model:
from google.colab import files
uploaded = files.upload()
Inside uploaded I put my .pth file, and then:
state_dict = torch.load(uploaded)
gives me this error:
AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
| After having called uploaded = files.upload() (usage examples can be found on this notebook) and interacted with the file explorer, the file will be uploaded to Google Colaboratory's temporary file system. Note that files.upload() returns a dict mapping each uploaded file name to its raw bytes, which is why passing uploaded directly to torch.load raises the 'dict' object has no attribute 'seek' error.
You can find it by looking at the side panel:
At this point, the file has been uploaded to the file system but hasn't yet been loaded on the notebook. You need to load the file using its name (e.g. if the name is best_test.pth):
state_dict = torch.load('best_test.pth')
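Alternatively, since uploaded maps each file name to its raw bytes, you can skip the file system and load straight from memory, which is exactly the buffer approach the error message suggests:
import io
state_dict = torch.load(io.BytesIO(uploaded['best_test.pth']))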
| https://stackoverflow.com/questions/68864023/ |
Extend a tensor of sequences so that every sequence is prepended | I have a tensor of 3 sequences where every sequence has length 2 and consists of vectors of size 2:
import torch
t = torch.Tensor([[[11,12],[21,22]], [[31,32],[41,42]], [[51,52],[61,62]]])
>>> t
tensor([[[11., 12.],
[21., 22.]],
[[31., 32.],
[41., 42.]],
[[51., 52.],
[61., 62.]]])
So t has the structure t[batch, sequencePos, dataPos]. How can I extend every sequence so that it is prepended by a new element [01, 02] (sequences then have length 3) so that I get:
tensor([[[01., 02.],
[11., 12.],
[21., 22.]],
[[01., 02.],
[31., 32.]
[41., 42.]],
[[01., 02.],
[51., 52.],
[61., 62.]]])
| You are looking to concatenate two tensors on axis=1, the first one being t:
>>> t
tensor([[[11., 12.],
[21., 22.]],
[[31., 32.],
[41., 42.]],
[[51., 52.],
[61., 62.]]])
The second one is a tensor built with torch.arange (note: to reproduce the exact [1., 2.] prefix from the question, you would use torch.arange(1, t.size(-1) + 1) instead):
>>> arr = torch.arange(t.size(-1))
tensor([0, 1])
However, we first need to broadcast it to the correct shape using torch.reshape and torch.repeat:
>>> arr = arr.reshape(1, 1, -1).repeat(len(t), 1, 1)
tensor([[[0, 1]],
[[0, 1]],
[[0, 1]]])
At this point, arr.shape is torch.Size([3, 1, 2]).
We are set for concatenating arr and t together, either with torch.cat:
>>> torch.cat((arr, t), dim=1)
or more elegantly with torch.hstack:
>>> torch.hstack((arr, t))
tensor([[[ 0., 1.],
[11., 12.],
[21., 22.]],
[[ 0., 1.],
[31., 32.],
[41., 42.]],
[[ 0., 1.],
[51., 52.],
[61., 62.]]])
Notice how this implementation will work with any 3-dimensional input. In the following example t has three columns instead of two:
>>> t
tensor([[[11., 12., 13.],
[21., 22., 23.]],
[[31., 32., 33.],
[41., 42., 43.]],
[[51., 52., 53.],
[61., 62., 63.]]])
>>> arr = torch.arange(t.size(-1)).reshape(1, 1, -1).repeat(len(t), 1, 1)
>>> torch.cat((arr, t), dim=1)
tensor([[[ 0., 1., 2.],
[11., 12., 13.],
[21., 22., 23.]],
[[ 0., 1., 2.],
[31., 32., 33.],
[41., 42., 43.]],
[[ 0., 1., 2.],
[51., 52., 53.],
[61., 62., 63.]]])
This can even be generalized to n-dimensional tensors. You just have to take care of the reshape and repeat calls where the number of arguments depends on the number of dimensions.
>>> ones = (1,)*(t.ndim-1)
>>> arr = torch.arange(t.size(-1)).reshape(*ones, -1).repeat(len(t), *ones)
>>> torch.cat((arr, t), dim=-2)
| https://stackoverflow.com/questions/68864395/ |
python module not found after executing shell script even though the module is installed | pip3 list
Package Version
------------------- ------------
apipkg 1.5
apparmor 3.0.3
appdirs 1.4.4
asn1crypto 1.4.0
brotlipy 0.7.0
certifi 2021.5.30
cffi 1.14.6
chardet 4.0.0
cmdln 2.0.0
configobj 5.0.6
createrepo-c 0.17.3
cryptography 3.3.2
cssselect 1.1.0
cupshelpers 1.0
cycler 0.10.0
decorator 5.0.9
idna 3.2
iniconfig 0.0.0
isc 2.0
joblib 1.0.1
kiwisolver 1.3.1
LibAppArmor 3.0.3
lxml 4.6.3
matplotlib 3.4.3
mysqlclient 2.0.3
nftables 0.1
notify2 0.3.1
numpy 1.21.1
opi 2.1.1
ordered-set 3.1.1
packaging 20.9
pandas 1.3.1
Pillow 8.3.1
pip 20.2.4
ply 3.11
psutil 5.8.0
py 1.10.0
pyasn1 0.4.8
pycairo 1.20.1
pycparser 2.20
pycups 2.0.1
pycurl 7.43.0.6
PyGObject 3.40.1
pyOpenSSL 20.0.1
pyparsing 2.4.7
pysmbc 1.0.23
PySocks 1.7.1
python-dateutil 2.8.2
python-linux-procfs 0.6
pytz 2021.1
pyudev 0.22.0
requests 2.25.1
rpm 4.16.1.3
scikit-learn 0.24.2
scipy 1.7.1
setuptools 57.4.0
six 1.16.0
sklearn 0.0
slip 0.6.5
slip.dbus 0.6.5
termcolor 1.1.0
threadpoolctl 2.2.0
torch 1.9.0+cu111
torchaudio 0.9.0
torchvision 0.10.0+cu111
tqdm 4.62.1
typing-extensions 3.10.0.0
urllib3 1.26.6
Above shows my installed modules, but when i go into the project folder, and run the shell script, I get:
Traceback (most recent call last):
File "main.py", line 3, in <module>
import torch
ImportError: No module named torch
Even though in the list above it clearly shows that torch is installed.
Please help. My $PATH is /usr/bin/python:/home/anthony/bin:/usr/local/bin:/usr/bin:/bin:/snap/bin
and my printenv PYTHONPATH is :/usr/bin/python
Let me know if you need any other print outs, I've tried everything, and nothing seems to be working. I am working mainly in pycharm.
| It is very likely that pip3 is pointing to a different python instance.
Imagine you had python, python3, python3.6 and python3.8 all installed on your system. Which one would pip3 install packages for? (who knows?)
It is almost always safer to do python3.8 -m pip list/install since you can be sure that python3.8 somefile.py will be using the same files you just saw. (even better, do python3.8 -m venv /path/to/some/virtualenv and then make sure that is activated, then you can be sure pip points to the same python)
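A quick way to check which interpreter each command actually resolves to (the paths will of course differ on your machine):
which python3 pip3
pip3 --version                                  # prints e.g. pip 20.2.4 from ... (python 3.8)
python3 -c "import sys; print(sys.executable)"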
| https://stackoverflow.com/questions/68866686/ |
Where is the "negative" slope in a LeakyReLU? | What does the negative slope in a LeakyReLU function refer to?
The term "negative slope" is used in the documentation of both TensorFlow and Pytorch, but it does not seem to point to reality.
The slope of a LeakyReLU function for both positive and negative inputs is generally non-negative.
The Pytorch and TensorFlow docs provide examples of setting the negative slope, and they both use a positive value. TensorFlow explicitly enforces non-negative values. (See below)
Are they just wrong or am I missing something?
Pytorch docs:
CLASStorch.nn.LeakyReLU(negative_slope=0.01, inplace=False)
Tensorflow LeakyReLU:
Args alpha Float >= 0. Negative slope coefficient. Default to 0.3.
| negative_slope in this context means the negative half of the Leaky ReLU's slope. It is not describing a slope which is necessarily negative.
When naming kwargs it's normal to use concise terms, and here "negative slope" and "positive slope" refer to the slopes of the linear splines spanning the negative [-∞,0] and positive (0,∞] halves of the Leaky ReLU's domain.
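A tiny demonstration of the two linear pieces (slope 1 on the positive half, negative_slope on the negative half):
import torch
import torch.nn as nn
act = nn.LeakyReLU(negative_slope=0.1)
print(act(torch.tensor([-2.0, 3.0])))  # tensor([-0.2000,  3.0000])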
| https://stackoverflow.com/questions/68867228/ |
UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xbe in position 2: invalid start byte | Do you know how I can fix this problem in PyTorch 1.9?
File "main.py", line 138, in main
checkpoint = torch.load(args.resume)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/serialization.py", line 608, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/serialization.py", line 787, in _legacy_load
result = unpickler.load()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 2: invalid start byte
I have:
$ pip freeze
h5py==3.3.0
joblib==1.0.1
numpy==1.21.2
Pillow==8.3.1
scikit-learn==0.24.2
scipy==1.7.1
sklearn==0.0
threadpoolctl==2.2.0
torch==1.9.0
torchaudio==0.9.0
torchvision==0.10.0
typing-extensions==3.10.0.0
| Can you try something like this:
>>> torch.load('model.pt', encoding='ascii') # or latin1, or other encoding
By default, we decode byte strings as utf-8. This is to avoid a common error case UnicodeDecodeError: 'ascii' codec can't decode byte 0x... when loading files saved by Python 2 in Python 3. If this default is incorrect, you may use an extra encoding keyword argument to specify how these objects should be loaded, e.g., encoding='latin1' decodes them to strings using latin1 encoding, and encoding='bytes' keeps them as byte arrays which can be decoded later with byte_array.decode(...).
Read this, the 2nd note.
| https://stackoverflow.com/questions/68868019/ |
The expanded size of the tensor (64) must match the existing size (66) at non-singleton dimension 2 | When I run the following (unchanged) code in stable PyTorch 1.9, I get the following error. How can I diagnose and fix it?
(fashcomp) [jalal@goku fashion-compatibility]$ python main.py --test --l2_embed --resume runs/nondisjoint_l2norm/model_best.pth.tar --datadir ../../../data/fashion/
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torchvision/transforms/transforms.py:310: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
=> loading checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar'
=> loaded checkpoint 'runs/nondisjoint_l2norm/model_best.pth.tar' (epoch 5)
/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
File "main.py", line 312, in <module>
main()
File "main.py", line 149, in main
test_acc = test(test_loader, tnet)
File "main.py", line 244, in test
embeddings.append(tnet.embeddingnet(images).data)
File "/scratch3/venv/fashcomp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch3/research/code/fashion/fashion-compatibility/type_specific_network.py", line 119, in forward
masked_embedding = masked_embedding / norm.expand_as(masked_embedding)
RuntimeError: The expanded size of the tensor (64) must match the existing size (66) at non-singleton dimension 2. Target sizes: [256, 66, 64]. Tensor sizes: [256, 66]
Related issue: https://github.com/mvasil/fashion-compatibility/pull/13
| The problem is that norm is missing that extra dimension: masked_embedding is [256, 66, 64] and norm is [256, 66]. You can fix that by adding this extra dimension ([256, 66, 1]) to norm this way:
masked_embedding = masked_embedding / norm.unsqueeze(-1).expand_as(masked_embedding)
OR, by modifying the call (this one) that generated norm:
norm = torch.norm(masked_embedding, p=2, dim=2, keepdim=True) + 1e-10
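Side note: once keepdim=True is set, broadcasting handles the division on its own, so the expand_as call can simply be dropped:
masked_embedding = masked_embedding / norm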
| https://stackoverflow.com/questions/68868354/ |
How does calculation in a GRU layer take place | So I want to understand exactly how the outputs and hidden state of a GRU cell are calculated.
I obtained the pre-trained model from here and the GRU layer has been defined as nn.GRU(96, 96, bias=True).
I looked at the the PyTorch Documentation and confirmed the dimensions of the weights and bias as:
weight_ih_l0: (288, 96)
weight_hh_l0: (288, 96)
bias_ih_l0: (288)
bias_hh_l0: (288)
My input size and output size are (1000, 8, 96). I understand that there are 1000 tensors, each of size (8, 96). The hidden state is (1, 8, 96), which is one tensor of size (8, 96).
I have also printed the variable batch_first and found it to be False. This means that:
Sequence length: L=1000
Batch size: B=8
Input size: Hin=96
Now going by the equations from the documentation, for the reset gate, I need to multiply the weight by the input x. But my weights are 2-dimensions and my input has three dimensions.
Here is what I've tried, I took the first (8, 96) matrix from my input and multiplied it with the transpose of my weight matrix:
Input (8, 96) x Weight (96, 288) = (8, 288)
Then I add the bias by replicating the (288) eight times to give (8, 288). This would give the size of r(t) as (8, 288). Similarly, z(t) would also be (8, 288).
This r(t) is used in n(t), since Hadamard product is used, both the matrices being multiplied have to be the same size that is (8, 288). This implies that n(t) is also (8, 288).
Finally, h(t) is the Hadamard produce and matrix addition, which would give the size of h(t) as (8, 288) which is wrong.
Where am I going wrong in this process?
| TLDR; This confusion comes from the fact that the weights of the layer are concatenations of the per-gate parameters, stored as input-hidden and hidden-hidden tensors respectively.
- nn.GRU layer weight/bias layout
You can take a closer look at what's inside the GRU layer implementation torch.nn.GRU by peeking through the weights and biases.
>>> gru = nn.GRU(input_size=96, hidden_size=96, num_layers=1)
First the parameters of the GRU layer:
>>> gru._all_weights
[['weight_ih_l0', 'weight_hh_l0', 'bias_ih_l0', 'bias_hh_l0']]
You can look at gru.state_dict() to get the dictionary of weights of the layer.
We have two weights and two biases, _ih stands for 'input-hidden' and _hh stands for 'hidden-hidden'.
For more efficient computation the parameters have been concatenated together, as the documentation page clearly explains (| means concatenation). In this particular example num_layers=1 and k=0:
~GRU.weight_ih_l[k] – the learnable input-hidden weights of the layer (W_ir | W_iz | W_in), of shape (3*hidden_size, input_size).
~GRU.weight_hh_l[k] – the learnable hidden-hidden weights of the layer (W_hr | W_hz | W_hn), of shape (3*hidden_size, hidden_size).
~GRU.bias_ih_l[k] – the learnable input-hidden bias of the layer (b_ir | b_iz | b_in), of shape (3*hidden_size).
~GRU.bias_hh_l[k] – the learnable hidden-hidden bias of the layer (b_hr | b_hz | b_hn), of shape (3*hidden_size).
For further inspection we can get those split up with the following code:
>>> W_ih, W_hh, b_ih, b_hh = gru._flat_weights
>>> W_ir, W_iz, W_in = W_ih.split(H_in)
>>> W_hr, W_hz, W_hn = W_hh.split(H_in)
>>> b_ir, b_iz, b_in = b_ih.split(H_in)
>>> b_hr, b_hz, b_hn = b_hh.split(H_in)
Now we have the 12 tensor parameters sorted out.
- Expressions
The four expressions for a GRU layer: r_t, z_t, n_t, and h_t, are computed at each timestep.
The first operation is r_t = σ(W_ir@x_t + b_ir + W_hr@h + b_hr). I used the @ sign to designate the matrix multiplication operator (__matmul__). Remember W_ir is shaped (hidden_size, H_in=input_size), which is why the code below multiplies by its transpose W_ir.T, while x_t contains the element at step t from the x sequence. Tensor x_t = x[t] is shaped as (N=batch_size, H_in=input_size). At this point, it's simply a matrix multiplication between the input x[t] and the weight matrix. The resulting tensor r is shaped (N, hidden_size):
>>> (x[t]@W_ir.T).shape
(8, 96)
The same is true for all other weight multiplication operations performed. As a result, you end up with an output tensor shaped (N, H_out=hidden_size).
In the following expressions h is the tensor containing the hidden state of the previous step for each element in the batch, i.e. shaped (N, hidden_size=H_out), since num_layers=1, i.e. there's a single hidden layer.
>>> r_t = torch.sigmoid(x[t]@W_ir.T + b_ir + h@W_hr.T + b_hr)
>>> r_t.shape
(8, 96)
>>> z_t = torch.sigmoid(x[t]@W_iz.T + b_iz + h@W_hz.T + b_hz)
>>> z_t.shape
(8, 96)
The output of the layer is the concatenation of the computed h tensors at
consecutive timesteps t (between 0 and L-1).
- Demonstration
Here is a minimal example of an nn.GRU inference manually computed:
| Parameters | Description      | Values |
|------------|------------------|--------|
| H_in       | feature size     | 3      |
| H_out      | hidden size      | 2      |
| L          | sequence length  | 3      |
| N          | batch size       | 1      |
| k          | number of layers | 1      |
Setup:
gru = nn.GRU(input_size=H_in, hidden_size=H_out, num_layers=k)
W_ih, W_hh, b_ih, b_hh = gru._flat_weights
W_ir, W_iz, W_in = W_ih.split(H_out)
W_hr, W_hz, W_hn = W_hh.split(H_out)
b_ir, b_iz, b_in = b_ih.split(H_out)
b_hr, b_hz, b_hn = b_hh.split(H_out)
Random input:
x = torch.rand(L, N, H_in)
Inference loop:
output = []
h = torch.zeros(1, N, H_out)
for t in range(L):
r = torch.sigmoid(x[t]@W_ir.T + b_ir + h@W_hr.T + b_hr)
z = torch.sigmoid(x[t]@W_iz.T + b_iz + h@W_hz.T + b_hz)
n = torch.tanh(x[t]@W_in.T + b_in + r*(h@W_hn.T + b_hn))
h = (1-z)*n + z*h
output.append(h)
The final output is given by the stacking the tensors h at consecutive timesteps:
>>> torch.vstack(output)
tensor([[[0.1086, 0.0362]],
[[0.2150, 0.0108]],
[[0.3020, 0.0352]]], grad_fn=<CatBackward>)
In this case the output shape is (L, N, H_out), i.e. (3, 1, 2).
Which you can compare with output, _ = gru(x).
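As a sanity check, the manual loop should match the built-in layer up to floating point error (same gru and x as above, and both start from a zero hidden state):
ref, _ = gru(x)
print(torch.allclose(ref, torch.vstack(output), atol=1e-6))  # True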
| https://stackoverflow.com/questions/68873625/ |
How to make a custom DataLoader return values and labels separately? | I want to iterate over a custom DataLoader using batches with matching values and labels. A modification of the PandasDataset described below is needed, and since I copied it from online I do not have a great grasp of how it works.
import torch
import pandas as pd
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
class PandasDataset(Dataset):
def __init__(self, dataframe):
self.dataframe = dataframe
def __len__(self):
return len(self.dataframe)
def __getitem__(self, index):
return self.dataframe.iloc[index]
d = {'values': [1, 2], 'values2': [3, 4],'labels': [5, 6]}
df = pd.DataFrame(data=d)
dataset = PandasDataset(df)
loader = DataLoader(torch.tensor(dataset), batch_size=1, shuffle=False)
for batch_index, (values, label) in enumerate(loader):
print(values)
print(label)
| You can change __getitem__ to something like this:
def __getitem__(self, index):
data = self.dataframe.iloc[index].to_numpy()
return data[:-1], data[-1]
Then, you don't need to wrap your dataset with torch.tensor:
loader = DataLoader(dataset, batch_size=1, shuffle=False)
and it'll return:
next(iter(loader))
# >>> [tensor([[1, 3]]), tensor([5])]
| https://stackoverflow.com/questions/68874703/ |
How to do Normalization in CNN? | I am new to CNN, and I am learning it with Food Classification. Here is my code. In the DATASET part, I change the train dataset and validation dataset from numpy to tensor. At that point, the shape of the tensor is ([9866, 128, 128, 3]). Since the 3 channels need to be in the first index, I use the "transpose" method to change the index. Then, I used "Data.TensorDataset" to put the training data and training labels together, and the reason for using "Data.DataLoader" is that I need the batch size to speed things up.
import os
import numpy as np
import pandas as pd
import cv2
import torch
import torch.nn as nn
from torch.nn import functional as F
import torchvision.transforms as transforms
from torch.autograd import Variable
from torch import optim
import pandas as pd
from torch.utils.data import DataLoader, Dataset
import torch.utils.data as Data
'''Initialize Params'''
epochs = 3
learning_rate = 0.0001
momentum = 0.5
batch_size = 128
'''Load Data'''
def readFile(path,label):
image_dir = sorted(os.listdir(path))
# x stores photos
x = np.zeros((len(image_dir),128,128,3),dtype=np.uint8)
# y stores labels
y = np.zeros((len(image_dir)), dtype=np.uint8)
for i, file in enumerate(image_dir):
img = cv2.imread(os.path.join(path, file))
x[i, :, :] = cv2.resize(img,(128, 128))
if label:
y[i] = int(file.split("_")[0])
if label:
return x,y
else:
return x
train_x, train_y = readFile('./food/training',True)
val_x, val_y = readFile('./food/validation',True)
test_x = readFile('./food/testing',False)
# print("Reading data: ")
# print("Size of training data = {}".format(len(train_x)))
# print("Size of validation data = {}".format(len(val_x)))
# print("Size of Testing data = {}".format(len(test_x)))
'''DataSet'''
train_x = torch.tensor(train_x)
# print(train_x.shape)
train_x = train_x.transpose(1,3).float()
train_y = torch.tensor(train_y)
val_x = torch.tensor(val_x)
val_x = val_x.transpose(1, 3).float()
val_y = torch.tensor(val_y)
train_dataset = Data.TensorDataset(train_x,train_y)
val_dataset = Data.TensorDataset(val_x,val_y)
train_loader = Data.DataLoader(dataset=train_dataset,batch_size=batch_size,shuffle=True)
val_loader = Data.DataLoader(dataset=val_dataset,batch_size=batch_size,shuffle=True)
I got 68% accuracy on the training set and I would like to improve it. I searched the net and found that maybe I should add normalization. But I only found it done like this:
transform = transforms.Compose([
transforms.ToTensor(), # range [0, 255] -> [0.0,1.0]
]
)
and I am confused about how to use it with "Data.DataLoader".
I also know there is another way to turn the training data from numpy into a DataLoader, like this (here is the link):
train_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(15),
transforms.ToTensor(),
])
test_transform = transforms.Compose([
transforms.ToPILImage(),
transforms.ToTensor(),
])
class ImgDataset(Dataset):
def __init__(self, x, y=None, transform=None):
self.x = x
self.y = y
if y is not None:
self.y = torch.LongTensor(y)
self.transform = transform
def __len__(self):
return len(self.x)
def __getitem__(self, index):
X = self.x[index]
if self.transform is not None:
X = self.transform(X)
if self.y is not None:
Y = self.y[index]
return X, Y
else:
return X
train_set = ImgDataset(train_x, train_y, train_transform)
val_set = ImgDataset(val_x, val_y, test_transform)
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_set, batch_size=batch_size, shuffle=False)
My way might be silly, but I would like to try it, and any help would be appreciated. I hope I explained it well, and thanks in advance.
Here is the complete code if needed:
import os
import numpy as np
import pandas as pd
import cv2
import torch
import torch.nn as nn
from torch.nn import functional as F
import torchvision.transforms as transforms
from torch.autograd import Variable
from torch import optim
import pandas as pd
from torch.utils.data import DataLoader, Dataset
import torch.utils.data as Data
'''Initialize Params'''
epochs = 3
learning_rate = 0.0001
momentum = 0.5
batch_size = 128
transform = transforms.Compose([
transforms.ToTensor(), # range [0, 255] -> [0.0,1.0]
]
)
'''Load Data'''
def readFile(path,label):
image_dir = sorted(os.listdir(path))
# x stores photos
x = np.zeros((len(image_dir),128,128,3),dtype=np.uint8)
# y stores labels
y = np.zeros((len(image_dir)), dtype=np.uint8)
for i, file in enumerate(image_dir):
img = cv2.imread(os.path.join(path, file))
x[i, :, :] = cv2.resize(img,(128, 128))
if label:
y[i] = int(file.split("_")[0])
if label:
return x,y
else:
return x
train_x, train_y = readFile('./food/training',True)
val_x, val_y = readFile('./food/validation',True)
test_x = readFile('./food/testing',False)
# print("Reading data: ")
# print("Size of training data = {}".format(len(train_x)))
# print("Size of validation data = {}".format(len(val_x)))
# print("Size of Testing data = {}".format(len(test_x)))
'''DataSet'''
train_x = torch.tensor(train_x)
# print(train_x.shape)
train_x = train_x.transpose(1,3).float()
train_y = torch.tensor(train_y)
val_x = torch.tensor(val_x)
val_x = val_x.transpose(1, 3).float()
val_y = torch.tensor(val_y)
train_dataset = Data.TensorDataset(train_x,train_y)
val_dataset = Data.TensorDataset(val_x,val_y)
train_loader = Data.DataLoader(dataset=train_dataset,batch_size=batch_size,shuffle=True)
val_loader = Data.DataLoader(dataset=val_dataset,batch_size=batch_size,shuffle=True)
'''Create Model'''
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# nn.Conv2d(input_channel, output_channel, kernel, stride)
self.conv1 = nn.Conv2d(3,64,5,1,1)
nn.BatchNorm2d(64)
self.conv2 = nn.Conv2d(64,128,5,1,1)
nn.BatchNorm2d(128)
self.conv3 = nn.Conv2d(128,256,5,1,1)
nn.BatchNorm2d(256)
self.conv4 = nn.Conv2d(256,256,5,1,1)
nn.BatchNorm2d(256)
self.conv4_drop = nn.Dropout2d()
self.fc1 = nn.Linear(6*6*256, 1024)
# self.fc1 = nn.Linear(512*4*4, 1024)
self.fc2 = nn.Linear(1024, 512)
self.fc3 = nn.Linear(512, 256)
self.fc4 = nn.Linear(256, 11)
def forward(self, x):
# maxpooling 1
x = self.conv1(x)
x = F.relu(x) # 124*124*64
x = F.max_pool2d(x, 2) # 62*62*20
# maxpooling 2
x = self.conv2(x)
x = F.relu(x) # 58*58*128
x = F.max_pool2d(x, 2) # 29*29*40
# maxpooling 3
x = self.conv3(x)
x = F.relu(x) # 25*25*256
x = F.max_pool2d(x, 2) # 12*12*100
# maxpooling 4
x = self.conv4(x)
x = F.relu(x) # 8*8*256
x = F.max_pool2d(x, 2) # 4*4*256
x = x.view(-1,6*6*256)
# print(x.shape)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
# return F.log_softmax(x)
return F.log_softmax(x)
'''Initialize the network'''
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
'''Train Model'''
if __name__ == '__main__':
# print(train_x.shape) torch.Size([9866, 128, 128, 3])
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
for epoch in range(epochs):
for i,data in enumerate(train_loader):
x,y = data
# print(x)
# print('--------------------')
sum_loss = 0.0
optimizer.zero_grad()
x,y = Variable(x).cuda(), Variable(y).cuda()
outputs = net(x)
loss = criterion(outputs, y.long())
loss.backward()
optimizer.step()
# print(loss.item())
# sum_loss += loss.item()
# if i % 500 == 99:
# print('[%d,%d] loss:%.03f' %
# (epoch + 1, i + 1, sum_loss / 500))
# sum_loss = 0.0
'''Test Model with Validation'''
net.eval()
correct = 0
total = 0
for val_data in val_loader:
x,y = val_data
x, y = Variable(x).cuda(), Variable(y).cuda()
output_test = net(x)
_, predicted = torch.max(output_test, 1)
total += y.size(0)
correct += (predicted == y).sum()
print("correct1: ", correct)
print("Test acc: {0}".format(correct.item() /len(val_loader)))
| The data normalization performed on the model's inputs (whether it'd be your training, validation, or testing set) depends on your training data statistics. More specifically, it corresponds to the channel-wise mean and standard deviation of the images used in the training set.
torchvision.transforms.Normalize
You can do so with torchvision's transforms.Normalize. It is a very easy function, yet a lot of people seem to have trouble understanding it - or fail to read its documentation...
torchvision.transforms.Normalize(mean, std, inplace=False)
Normalize a tensor image with mean and standard deviation. This transform does not support PIL Image. Given mean: (mean[1], ..., mean[n]) and std: (std[1], ..., std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor i.e., output[channel] = (input[channel] - mean[channel]) / std[channel]
You can argue the naming of both arguments 'mean' and 'std' is inaccurate and rather misleading indeed. This can explain why it's so misused (today again).
It is nothing more than a channel-wise shift-scale operator.
Misusage of the transform
Very frequently I see this type of normalization setup.
T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
To clarify, the above won't do the following two things:
it won't normalize your input to [0, 1].
it won't standardize your input (i.e. make mean=0, std=1)
The described transform will map values from [0, 1] to [-1, 1] (since (x - .5) / .5 = 2x - 1).
>>> x = torch.tensor([0., .5, 1.]).reshape(1, 3, 1, 1)
tensor([0.0, 0.5, 1.0])
>>> t = T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
>>> t(x).flatten()
tensor([-1., 0., 1.])
How to properly use transforms.Normalize
In your case, you shouldn't use .5 as the mean and std parameters. This doesn't make any sense. If you're using a T.ToTensor prior to this you are essentially mapping your values to [-1, 1]...
Instead, you should initialize the transform with the actual dataset mean and std, the ones you measure from the dataset directly.
For instance, with Imagenet, the statistics are: [0.485, 0.456, 0.406] and [0.229, 0.224, 0.225] for mean and standard deviation respectively. You should be doing:
>>> t = T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
Having imported torchvision.transforms as T
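If you don't know your dataset's statistics, here is a minimal sketch for estimating them; it assumes train_loader yields (N, 3, H, W) batches already scaled to [0, 1] by T.ToTensor, and averaging the per-batch std is a common approximation:
import torch
mean = torch.zeros(3)
std = torch.zeros(3)
n = 0
for x, _ in train_loader:
    b = x.size(0)
    mean += x.mean(dim=(0, 2, 3)) * b
    std += x.std(dim=(0, 2, 3)) * b
    n += b
mean /= n
std /= n
print(mean, std)  # plug these into T.Normalize(mean, std)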
| https://stackoverflow.com/questions/68874807/ |
How to insert values of one tensor into another? | I am trying to insert tensor y into tensor x's final dimension (y_dim). The final tensor should be of size (100, 16, 16, 1), where the values of y are placed in each of the 100 entries along x's 0th dimension.
import torch
y_dim = 1
x = torch.randn(100, 16, 16, y_dim)
#OR x = torch.randn(100, 16, 16)
y = torch.randn(100)
Xy = torch.cat((x, y), dim=3)
| I think you are missing something in your understanding of tensors and dimensions. The easiest thing is to consider your tensor x as a batch containing 100 maps of width and height 16, i.e. 100 16x16-maps. So you are manipulating a tensor containing 100*16*16 elements. Your y, on the other hand, contains 100 scalar values, it has 100 elements.
I'm turning the question back to you:
How would you concatenate 100 16x16-maps with 100 scalar values?
The above question has no answer. There are certain things that can be done though, assumptions that can be made on y in order to perform a concatenation:
If you had a tensor y containing 16x16 maps as well, then yes this operation would be achievable:
>>> x = torch.rand(100, 16, 16)
>>> y = torch.rand(100, 16, 16)
>>> torch.cat((x, y)).shape
torch.Size([200, 16, 16])
If you consider the y in your question, you could expand the 100 scalar values to 16x16 maps. And, then concatenate with x:
>>> x = torch.rand(100, 16, 16)
>>> y = torch.rand(100)
>>> y_r = y[:, None, None].repeat(1, 16, 16)
>>> torch.cat((x, y_r))
torch.Size([200, 16, 16])
| https://stackoverflow.com/questions/68876300/ |
Setting values of a tensor based on given indices of corresponding rows using pytorch | I've got a tensor A with shape (M, N), and have another tensor B with shape (M, P) and with values of given indices in corresponding rows of A. Now I would like to set the values of A with corresponding indices in B to 0.
For example:
In[1]: import torch
A = torch.tensor([range(1,11), range(1,11), range(1,11)])
A
Out[1]:
tensor([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
In[2]: B = torch.tensor([[1,2], [2,3], [3,5]])
B
Out[2]:
tensor([[1, 2],
[2, 3],
[3, 5]])
The objective is to set the value of the element with index 1,2 in the first row, 2,3 in the second row, and 3,5 in the third row of A to 0, i.e., setting A to
tensor([[ 1, 0, 0, 4, 5, 6, 7, 8, 9, 10],
[ 1, 2, 0, 0, 5, 6, 7, 8, 9, 10],
[ 1, 2, 3, 0, 5, 0, 7, 8, 9, 10]])
I have applied row by row for loop, and also tried scatter:
zeros = torch.zeros(A.shape, dtype=torch.float).to("cuda")
A = A.scatter_(1, B, zeros)
The two methods work fine, but both give quite poor performance. Actually, I infer that some more efficient approach should exist, based on an earlier mistake: I initially used A[:, B] = 0, which sets all the indices that appear in B to 0 in every row, regardless of the row. However, the training speed improved drastically when doing A[:, B] = 0.
Is there any way to implement this more efficiently?
| Here's what i would do:
import torch
A = torch.tensor([range(1,11), range(1,11), range(1,11)])
B = torch.tensor([[1,2], [2,3], [3,5]])
r, c = B.shape
idx0 = torch.arange(r).reshape(-1, 1).repeat(1, c).flatten()
idx1 = B.flatten()
A[idx0, idx1] = 0
output:
A =
tensor([[ 1, 0, 0, 4, 5, 6, 7, 8, 9, 10],
[ 1, 2, 0, 0, 5, 6, 7, 8, 9, 10],
[ 1, 2, 3, 0, 5, 0, 7, 8, 9, 10]])
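The same assignment can also be written as a broadcasting one-liner, with the same A, B, and r as above:
A[torch.arange(r).unsqueeze(1), B] = 0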
| https://stackoverflow.com/questions/68878403/ |
CNN feature Extraction | class ResNet(nn.Module):
def __init__(self, output_features, fine_tuning=False):
super(ResNet, self).__init__()
self.resnet152 = tv.models.resnet152(pretrained=True)
#freezing the feature extraction layers
for param in self.resnet152.parameters():
param.requires_grad = fine_tuning
#self.features = self.resnet152.features
self.num_fts = 512
self.output_features = output_features
# Linear layer goes from 512 to 1024
self.classifier = nn.Linear(self.num_fts, self.output_features)
nn.init.xavier_uniform_(self.classifier.weight)
self.tanh = nn.Tanh()
def forward(self, x):
h = self.resnet152(x)
print('h: ',h.shape)
return h
image_model_resnet152=ResNet(output_features=10).to(device)
image_model_resnet152
Here, after printing the image_model_resnet152, I get:
Here, what is the difference between (avgpool): Linear(in_features=2048)
and (classifier): Linear(in_features=512)?
I am implementing an image captioning model, so which in_features should I take for an image?
| ResNet is not as straightforward as VGG: it's not a sequential model, i.e. there is some model-specific logic inside the forward definition of the torchvision.models.resnet152, for instance, the flattening of features between the CNN and classifier. You can take a look at its source code.
The easiest thing to do in this case is to add a hook on the last layer of the CNN: layer4, and log the result of that layer in an external dict. This is done with register_forward_hook.
Define the hook:
out = {}
def result(module, input, output):
out['layer4'] = output
Attach the hook on the submodule resnet.layer4:
>>> x = torch.rand(1,3,224,224)
>>> resnet = torchvision.models.resnet152()
>>> resnet.layer4.register_forward_hook(result)
After inference you will have access to the result inside of out:
>>> resnet(x)
>>> out['layer4'].shape
torch.Size([1, 2048, 7, 7])
You can look at another answer of mine on a more in-depth usage of forward hooks.
A possible implementation would be:
class NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet152()
        self.out = {}
        # bind the hook method so it writes into this instance's dict
        self.resnet.layer4.register_forward_hook(self.result)
    def result(self, module, input, output):
        self.out['layer4'] = output
    def forward(self, x):
        x = self.resnet(x)
        return self.out['layer4']
You can then define additional layers for your custom classifier and call them inside forward.
| https://stackoverflow.com/questions/68880288/ |
PyTorch resnet bad tensor dimensions | I'm trying to setup a face detection/recognition pipeline using Pytorch.
I load the image using opencv
image = cv2.imread('...')
I load the mtcnn face detection and resnet face recognition models
self.mtcnn = MTCNN(keep_all=True, device=self.device)
self.resnet = InceptionResnetV1(pretrained='vggface2').eval()
Then I run detection and recognition
cropped = detector.mtcnn(image)
detector.resnet(cropped.unsqueeze(0))
And get this error
Expected 4-dimensional input for 4-dimensional weight [32, 3, 3, 3], but got 5-dimensional input of size [1, 1, 3, 160, 160] instead
I also tried resizing the image to 512x512 and passing image_size=512 to the MTCNN constructor but I get a similar error.
| I tried removing the unsqueeze and it worked.
The guide I was following used something other than OpenCV to load the image, which probably returned a single unbatched image. With keep_all=True, MTCNN already returns a batched tensor of shape (n_faces, 3, 160, 160), so the extra unsqueeze(0) is what produced the 5-dimensional input of size [1, 1, 3, 160, 160] in the error.
| https://stackoverflow.com/questions/68882782/ |
Pytorch permute not changing desired index | I am trying to use the permute function to swap the axis of my tensor but for some reason the output is not as expected. The output of the code is torch.Size([512, 256, 3, 3]) but I would expect it to be torch.Size([256, 512, 3, 3]). It doesn't look like I can use flip to switch 0, 1 index. Is there something i am missing? I wish to change the tensor such that the shape is (256, 512, 3, 3).
Reproducible code:
import torch
wtf = torch.rand(3, 3, 512, 256)
wtf = wtf.permute(2, 3, 1, 0)
print(wtf.shape)
| The numbers provided to torch.permute are the indices of the axis in the order you want the new tensor to have.
Having set x = torch.rand(3, 3, 512, 256):
If you want to invert the order of the axes: the initial order is 0, 1, 2, 3 and you want 3, 2, 1, 0:
>>> x.permute(3, 2, 1, 0).shape
torch.Size([256, 512, 3, 3])
Inverting the axis order is essentially the transpose operation:
>>> x.T.shape
torch.Size([256, 512, 3, 3])
If you just want to invert and keep the order of the last two: original order is 0, 1, 2, 3 and resulting order is 3, 2, 0, 1:
>>> x.permute(3, 2, 0, 1).shape
torch.Size([256, 512, 3, 3])
The difference between the two options is that the last two axes of size 3 will be swapped.
| https://stackoverflow.com/questions/68884277/ |
Confused about implementation of skip layers in CNN | I'm reading about AlphaGo Zero's network structure and came across this cheatsheet:
I'm having a hard time understanding how skip connections work dimensionally.
Specifically, it seems like each residual layer ends up with 2 stacked copies of the input it receives. Would this not cause the input size to grow exponentially with the depth of the network?
And could this be avoided by changing the output channel size of the conv2d filter? I see that in_C and out_C don't have to be the same in pytorch, but I don't know enough to understand the implications of these values being different.
| With skip connection, you can indeed end up with twice the number of channels per connection. This is the case when you are concatenating the channels together. However, it doesn't necessarily have to grow exponentially, if you keep the number of output channels (what you refer to as out_C) under control.
For instance, if you have a skip connection providing a total of n channels and the convolutional layer gets in_C channels as input. Then you can define out_C as n as well, such that the resulting number of channels after concatenation is equal to 2*n. Ultimately, you decide on the number of output channels for each convolution, it is all about network capacity and how much it will be capable of learning.
| https://stackoverflow.com/questions/68886509/ |
Will switching GPU device affect the gradient in PyTorch back propagation? | I use the Pytorch. In the computation, I move some data and operators A in the GPU. In the middle step, I move the data and operators B to CPU and continue the forward.
My question is that:
My operator B is very memory-consuming that cannot be used in GPU. Will this affect (some parts compute in GPU and the others are computed in CPU) the backpropagation?
| Pytorch keeps track of the location of tensors. If you use .cpu() or .to('cpu') pytorch's native commands you should be okay.
See, e.g., this model parallel tutorial - the computation is split between two different GPU devices.
| https://stackoverflow.com/questions/68887223/ |
Import transparent images to GAN | I have Images set which has transparency.
I'm trying to train GAN(Generative adversarial networks).
How can I preserve transparency. I can see from output images all transparent area is BLACK.
How can I avoid doing that ?
I think this is called "Alpha Channel".
Anyways How can I keep my transparency ?
Below is my code.
# Importing the libraries
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from generator import G
from discriminator import D
import os
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
input_vector = 100
nb_epochs = 500
# Creating the transformations
transform = transforms.Compose([transforms.Resize((imageSize, imageSize)), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def is_cuda_available():
return torch.cuda.is_available()
def is_gpu_available():
if is_cuda_available():
if int(torch.cuda.device_count()) > 0:
return True
return False
return False
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Creating the generator
netG = G(input_vector)
netG.apply(weights_init)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
if is_gpu_available():
netG.cuda()
netD.cuda()
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
generator_model = 'generator_model'
discriminator_model = 'discriminator_model'
def save_model(epoch, model, optimizer, error, filepath, noise=None):
if os.path.exists(filepath):
os.remove(filepath)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': error,
'noise': noise
}, filepath)
def load_checkpoint(filepath):
if os.path.exists(filepath):
return torch.load(filepath)
return None
def main():
print("Device name : " + torch.cuda.get_device_name(0))
for epoch in range(nb_epochs):
for i, data in enumerate(dataloader, 0):
checkpointG = load_checkpoint(generator_model)
checkpointD = load_checkpoint(discriminator_model)
if checkpointG:
netG.load_state_dict(checkpointG['model_state_dict'])
optimizerG.load_state_dict(checkpointG['optimizer_state_dict'])
if checkpointD:
netD.load_state_dict(checkpointD['model_state_dict'])
optimizerD.load_state_dict(checkpointD['optimizer_state_dict'])
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
if is_gpu_available():
input = Variable(real.cuda()).cuda()
target = Variable(torch.ones(input.size()[0]).cuda()).cuda()
else:
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
if is_gpu_available():
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1)).cuda()
target = Variable(torch.zeros(input.size()[0])).cuda()
else:
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1))
target = Variable(torch.zeros(input.size()[0]))
fake = netG(noise)
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
if is_gpu_available():
target = Variable(torch.ones(input.size()[0])).cuda()
else:
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch, nb_epochs, i, len(dataloader), errD.data, errG.data))
save_model(epoch, netG, optimizerG, errG, generator_model, noise)
save_model(epoch, netD, optimizerD, errD, discriminator_model, noise)
if i % 100 == 0:
create_dir('results')
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
if __name__ == "__main__":
main()
generator.py
import torch.nn as nn
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self, input_vector):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 4, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
discriminator.py
import torch.nn as nn
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(4, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
| Using dset.ImageFolder without explicitly defining the function that reads the image (the loader) results in your dataset using the default pil_loader:
def pil_loader(path: str) -> Image.Image:
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGB')
As you can see, the default loader discards the alpha channel and forces the image to have only three color channels: RGB.
You can define your own loader:
def pil_loader_rgba(path: str) -> Image.Image:
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGBA') # force alpha channel
You can use this loader in your dataset:
dataset = dset.ImageFolder(root='./data', transform=transform, loader=pil_loader_rgba)
Now your images will have the alpha channel.
Note that the transparency ("alpha channel") is an additional channel and is not part of the RGB channels. You need to make sure your model knows how to handle 4-channel inputs, otherwise, you'll run into errors such as this.
| https://stackoverflow.com/questions/68888375/ |
pytorch indexing for pixel probabilities | Suppose I need to classify each pixel into one of 3 classes, and I wish to get the probability of each pixel. Here is a minimal example. The question is: how do I get those probabilities?
import torch
import torch.nn.functional as F
y = torch.randint(0, 3, (2, 1, 5, 5)) # classes
logits = torch.randn(2, 3, 5, 5)
prob = F.softmax(logits, dim=1) # probability map
prob[y] # does not work
| You are looking for torch.gather:
torch.gather(prob, 1, y)
You gather the probabilities along the first dimension according to the indices of y.
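A quick check of the shapes with the tensors from the question:
>>> out = torch.gather(prob, 1, y)
>>> out.shape
torch.Size([2, 1, 5, 5])
>>> out[0, 0, 0, 0] == prob[0, y[0, 0, 0, 0], 0, 0]
tensor(True)
Each out[i, 0, h, w] equals prob[i, y[i, 0, h, w], h, w], i.e. the probability of the labelled class at that pixel.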
| https://stackoverflow.com/questions/68889783/ |
Pytorch Lightning Automatic Logging - AttributeError: 'NoneType' object has no attribute '_results' | Unable to use Automatic Logging (self.log) when calling training_step() on Pytorch Lightning, what am I missing? Here is a minimal example:
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
class LitModel(pl.LightningModule):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(100, 4)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y.long())
self.log("train_loss", loss) # <-- error
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
pl_model = LitModel()
x = torch.rand((10,100))
y = torch.randint(0,4, size=(10,))
batch = (x,y)
loss = pl_model.training_step(batch, 0)
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-34-b9419bfca30f> in <module>
25 y = torch.randint(0,4, size=(10,))
26 batch = (x,y)
---> 27 loss = pl_model.training_step(batch, 0)
<ipython-input-34-b9419bfca30f> in training_step(self, batch, batch_idx)
14 y_hat = self(x)
15 loss = F.cross_entropy(y_hat, y.long())
---> 16 self.log("train_loss", loss)
17 return loss
18
D:\programs\anaconda3\lib\site-packages\pytorch_lightning\core\lightning.py in log(self, name, value, prog_bar, logger, on_step, on_epoch, reduce_fx, tbptt_reduce_fx, tbptt_pad_token, enable_graph, sync_dist, sync_dist_op, sync_dist_group, add_dataloader_idx, batch_size, metric_attribute, rank_zero_only)
405 on_epoch = self.__auto_choose_log_on_epoch(on_epoch)
406
--> 407 results = self.trainer._results
408 assert results is not None
409 assert self._current_fx_name is not None
AttributeError: 'NoneType' object has no attribute '_results'
| This is NOT the correct usage of the LightningModule class. You can't call a hook (namely .training_step()) manually and expect everything to work fine.
You need to set up a Trainer, as suggested by PyTorch Lightning at the very start of its tutorial - it is a requirement. The functions (or hooks) that you define in a LightningModule merely tell Lightning "what to do" in a specific situation (in this case, at each training step). It is the Trainer that actually "orchestrates" the training by instantiating the necessary environment (including the logging functionality) and feeding it into the Lightning Module whenever needed.
So, do it the way Lightning suggests and it will work.
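A minimal sketch of the intended usage (the DataLoader here is just a stand-in for real data):
from torch.utils.data import DataLoader, TensorDataset

x = torch.rand(10, 100)
y = torch.randint(0, 4, size=(10,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=2)

trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitModel(), train_loader)   # self.log() now works inside training_step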
| https://stackoverflow.com/questions/68890203/ |
Cannot install the gpu version of torch and torchvision in poetry due to a dependency problem | I am trying to create a virtual environment for machine learning using poetry. So, I am using pytorch as a framework for deep learning. I will extract the relevant part of my pyproject.toml.
[tool.poetry.dependencies]
python = "^3.8"
torch = { url = "https://download.pytorch.org/whl/cu111/torch-1.8.0%2Bcu111-cp38-cp38-linux_x86_64.whl"}
torchvision = { url = "https://download.pytorch.org/whl/cu111/torchvision-0.9.0%2Bcu111-cp38-cp38-linux_x86_64.whl" }
Since PyTorch uses the GPU, you need to install it by specifying the whl file. If you install it this way, the version of PyTorch will be 1.8.0+cu111. The torchvision corresponding to PyTorch 1.8.0 is 0.9.0. The version of PyTorch that this torchvision depends on is 1.8.0 (without cu111). Therefore, I cannot create a virtual environment using poetry, getting the following error.
SolverProblemError
Because torchvision (0.9.0) depends on torch (1.8.0)
and mdwithpriorenergy depends on torch (1.8.0+cu111), torchvision is forbidden.
So, because mdwithpriorenergy depends on torchvision (0.9.0), version solving failed.
So, because [env name] depends on torchvision (0.9.0), version solving failed.
I also made the following changes to torchvision in pyproject.toml above, but they did not work.
[tool.poetry.dependencies]
python = "^3.8"
torch = { url = "https://download.pytorch.org/whl/cu111/torch-1.8.0%2Bcu111-cp38-cp38-linux_x86_64.whl"}
- torchvision = { url = "https://download.pytorch.org/whl/cu111/torchvision-0.9.0%2Bcu111-cp38-cp38-linux_x86_64.whl"}
+ torchvision = "*"
In this case, I received the following error:
AttributeError
'EmptyConstraint' object has no attribute 'allows'.
Please tell me how to solve this error.
| As far as I know, this is not yet supported in poetry (without ugly hacks); see Issue 2613.
That said, there is a fork I am maintaining called relaxed-poetry. It is a very young fork but it supports what you want with the following configuration:
[tool.poetry.dependencies]
python = "^3.8"
torch = { version = "=1.8.0+cu111", source = "pytorch" }
torchvision = { version = "=0.9.0+cu111", source = "pytorch" }
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu111/"
secondary = true
If you think it can help you, you can install it side by side with poetry and use the command rp instead.
Do note though that the installation is going to take some time, as this is a large dependency and a relatively slow source.
| https://stackoverflow.com/questions/68892660/ |
Importing Transparent images gives RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 | I'm trying to learn AI.
I have GAN (generative adversarial network) code that works with images that have an ALPHA channel (transparency).
All of the images have an alpha channel.
To prove that, I wrote a small image_validator.py program, shown below:
from PIL import Image
import glob
def main():
image_list = []
img_number = 0
for filename in glob.glob('data/*/*.*'):
try:
im = Image.open(filename)
# print(filename)
if str(im.mode) != "RGBA":
print("alpha " + str(im.mode))
img_number = img_number+1
print(str(img_number))
except Exception as e:
print("Error : "+filename)
if __name__ == "__main__":
main()
The above program prints nothing, which means all images have an alpha channel. To test the program, I added a single image WITHOUT an alpha channel, so I can confirm that all images have an alpha channel.
My generator.py is shown below:
import torch.nn as nn
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self, input_vector):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 4, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
My discriminator.py is shown below:
import torch.nn as nn
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(4, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
My main program gan.py is shown below:
# Importing the libraries
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from generator import G
from discriminator import D
import os
from PIL import Image
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
input_vector = 100
nb_epochs = 500
# Creating the transformations
transform = transforms.Compose([transforms.Resize((imageSize, imageSize)), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
def pil_loader_rgba(path: str) -> Image.Image:
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGBA')
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform, loader=pil_loader_rgba)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def is_cuda_available():
return torch.cuda.is_available()
def is_gpu_available():
if is_cuda_available():
if int(torch.cuda.device_count()) > 0:
return True
return False
return False
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Creating the generator
netG = G(input_vector)
netG.apply(weights_init)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
if is_gpu_available():
netG.cuda()
netD.cuda()
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
generator_model = 'generator_model'
discriminator_model = 'discriminator_model'
def save_model(epoch, model, optimizer, error, filepath, noise=None):
if os.path.exists(filepath):
os.remove(filepath)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': error,
'noise': noise
}, filepath)
def load_checkpoint(filepath):
if os.path.exists(filepath):
return torch.load(filepath)
return None
def main():
print("Device name : " + torch.cuda.get_device_name(0))
for epoch in range(nb_epochs):
for i, data in enumerate(dataloader, 0):
checkpointG = load_checkpoint(generator_model)
checkpointD = load_checkpoint(discriminator_model)
if checkpointG:
print("checkpointG")
netG.load_state_dict(checkpointG['model_state_dict'])
optimizerG.load_state_dict(checkpointG['optimizer_state_dict'])
if checkpointD:
netD.load_state_dict(checkpointD['model_state_dict'])
optimizerD.load_state_dict(checkpointD['optimizer_state_dict'])
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
if is_gpu_available():
print("True")
input = Variable(real.cuda()).cuda()
target = Variable(torch.ones(input.size()[0]).cuda()).cuda()
else:
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
if is_gpu_available():
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1)).cuda()
target = Variable(torch.zeros(input.size()[0])).cuda()
else:
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1))
target = Variable(torch.zeros(input.size()[0]))
fake = netG(noise)
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
if is_gpu_available():
target = Variable(torch.ones(input.size()[0])).cuda()
else:
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (
epoch, nb_epochs, i, len(dataloader), errD.data, errG.data))
save_model(epoch, netG, optimizerG, errG, generator_model, noise)
save_model(epoch, netD, optimizerD, errD, discriminator_model, noise)
if i % 100 == 0:
create_dir('results')
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
if __name__ == "__main__":
main()
But when I run my program I'm getting this error
Traceback (most recent call last):
File ".\gans.py", line 178, in <module>
main()
File ".\gans.py", line 109, in main
for i, data in enumerate(dataloader, 0):
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
data = self._next_data()
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\Users\Akila\anaconda3\lib\site-packages\torchvision\datasets\folder.py", line 234, in __getitem__
sample = self.transform(sample)
File "C:\Users\Akila\anaconda3\lib\site-packages\torchvision\transforms\transforms.py", line 60, in __call__
img = t(img)
File "C:\Users\Akila\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Akila\anaconda3\lib\site-packages\torchvision\transforms\transforms.py", line 221, in forward
return F.normalize(tensor, self.mean, self.std, self.inplace)
File "C:\Users\Akila\anaconda3\lib\site-packages\torchvision\transforms\functional.py", line 335, in normalize
tensor.sub_(mean).div_(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0
I did some debugging and found the issue is with this line:
for i, data in enumerate(dataloader, 0):
If I change this line --> return img.convert('RGBA')
to this --> return img.convert('RGB')
the program works fine.
But I can guarantee that all of my images have an alpha channel,
because my image_validator.py program prints nothing.
I even tried running my main program with a SINGLE image which has an alpha channel, and I still get the same error.
What am I doing wrong?
How can I preserve the transparency of my images?
I do NOT want to lose my transparency.
| The error message
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0
suggests that there's a problem with this call: sample = self.transform(sample)
Indeed, the issue is you are using a T.Normalize transform which only expects three channels (you specified a mean and std for three channels only, not four).
transform = transforms.Compose([
transforms.Resize((imageSize, imageSize)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5,0.5))])
Instead, you should provide a four-element tuple for both arguments. For example (this is an example, this might run but won't necessarily make sense... see explanation below):
transforms.Normalize((0.5, 0.5, 0.5, 0.5), (0.5, 0.5,0.5, 0.5))])
Other than that, I should ask: do you know why you are using .5 for both parameters of mean and std? If not, chances are you are not using it properly. Please read about it on this answer and its applications.
| https://stackoverflow.com/questions/68893942/ |
Getting (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same | class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels, down=True, use_act=True, **kwargs):
super().__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_channels, out_channels, padding_mode="reflect", **kwargs)
if down
else nn.ConvTranspose2d(in_channels, out_channels, **kwargs),
nn.InstanceNorm2d(out_channels),
nn.ReLU(inplace=True) if use_act else nn.Identity()
)
def forward(self, x):
return self.conv(x)
class ResidualBlock(nn.Module):
def __init__(self, channels):
super().__init__()
self.block = nn.Sequential(
ConvBlock(channels, channels, kernel_size=3, padding=1),
ConvBlock(channels, channels, use_act=False, kernel_size=3, padding=1),
)
def forward(self, x):
return x + self.block(x)
class Generator(nn.Module):
def __init__(self, image_channels, num_features= 64, num_residuals=9):
super().__init__()
self.initial = nn.Sequential(
nn.Conv2d(image_channels, num_features, kernel_size=7, stride=1, padding=3, padding_mode="reflect"),
nn.ReLU(inplace=True)
)
self.down_blocks = nn.ModuleList = ([
ConvBlock(num_features, num_features*2, kernel_size=3, stride=2, padding=1),
ConvBlock(num_features*2, num_features*4, kernel_size=3, stride=2, padding=1),
])
self.residual_blocks = nn.Sequential(
*[ResidualBlock(num_features*4) for _ in range(num_residuals)]
)
self.up_blocks = nn.ModuleList = ([
ConvBlock(num_features*4, num_features*2, down=False, kernel_size=3, padding=1, stride=2, output_padding=1),
ConvBlock(num_features*2, num_features, down=False, kernel_size=3, padding=1, stride=2, output_padding=1),
])
self.last = nn.Conv2d(num_features, image_channels, kernel_size=7, stride=1, padding=3, padding_mode="reflect")
def forward(self, x):
x = self.initial(x)
for layer in self.down_blocks:
x = layer(x)
x = self.residual_blocks(x)
for layer in self.up_blocks:
x = layer(x)
return torch.tanh(self.last(x))
img_channels = 3
img_size = 256
x = torch.randn((2, img_channels, img_size, img_size))
x = x.to(DEVICE)
gen = Generator(img_channels, 9).to(DEVICE)
print(gen(x).shape)
I have implemented this model for CycleGAN. The input used here is just for demonstration purposes, to shorten the code; however, the actual input throws the same error. The code runs fine on the CPU, but when I shift it to the GPU I get the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-30-a56669856674> in <module>
4 x = x.to(DEVICE)
5 gen = Generator(img_channels, 9).to(DEVICE)
----> 6 print(gen(x).shape)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-25-f9a41f6c9d12> in forward(self, x)
26
27 for layer in self.down_blocks:
---> 28 x = layer(x)
29
30 x = self.residual_blocks(x)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-23-e139087b9df4> in forward(self, x)
11
12 def forward(self, x):
---> 13 return self.conv(x)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
421
422 def forward(self, input: Tensor) -> Tensor:
--> 423 return self._conv_forward(input, self.weight)
424
425 class Conv3d(_ConvNd):
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
416 return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
417 weight, self.bias, self.stride,
--> 418 _pair(0), self.dilation, self.groups)
419 return F.conv2d(input, weight, self.bias, self.stride,
420 self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Both my model and the input are moved to the device (CUDA). I have no idea what the issue is.
| You have typos in your code:
self.down_blocks = nn.ModuleList = ([
...
self.up_blocks = nn.ModuleList = ([
should be:
self.down_blocks = nn.ModuleList([
...
self.up_blocks = nn.ModuleList([
You need to reload your kernel, since at this point you've essentially overwritten nn.ModuleList with a plain list.
| https://stackoverflow.com/questions/68896526/ |
PyTorch's torch.as_strided with negative strides for making a Toeplitz matrix | I am writing a jury-rigged PyTorch version of scipy.linalg.toeplitz, which currently has the following form:
def toeplitz_torch(c, r=None):
c = torch.tensor(c).ravel()
if r is None:
r = torch.conj(c)
else:
r = torch.tensor(r).ravel()
# Flip c left to right.
idx = [i for i in range(c.size(0)-1, -1, -1)]
idx = torch.LongTensor(idx)
c = c.index_select(0, idx)
vals = torch.cat((c, r[1:]))
out_shp = len(c), len(r)
n = vals.stride(0)
return torch.as_strided(vals[len(c)-1:], size=out_shp, stride=(-n, n)).copy()
But torch.as_strided currently does not support negative strides. My function, therefore, throws the error:
RuntimeError: as_strided: Negative strides are not supported at the moment, got strides: [-1, 1].
My (perhaps incorrect) understanding of as_strided is that it inserts the values of the first argument into a new array whose size is specified by the second argument and it does so by linearly indexing those values in the original array and placing them at subscript-indexed strides given by the final argument.
Both the NumPy and PyTorch documentation concerning as_strided have scary warnings about using the function with "extreme care" and I don't understand this function fully, so I'd like to ask:
Is my understanding of as_strided correct?
Is there a simple way to rewrite this so negative strides work?
Will I be able to pass a gradient w.r.t c (or r) through toeplitz_torch?
| > 1. Is my understanding of as_strided correct?
The stride is an interface for your tensor to access the underlying contiguous data buffer. It does not insert values, no copies of the values are done by torch.as_strided, the strides define the artificial layout of what we refer to as multi-dimensional array (in NumPy) or tensor (in PyTorch).
As Andreas K. puts it in another answer:
Strides are the number of bytes to jump over in the memory in order to get from one item to the next item along each direction/dimension of the array. In other words, it's the byte-separation between consecutive items for each dimension.
Please feel free to read the answers over there if you have some trouble with strides. Here we will take your example and look at how it is implemented with as_strided.
The example given by Scipy for linalg.toeplitz is the following:
>>> toeplitz([1,2,3], [1,4,5,6])
array([[1, 4, 5, 6],
[2, 1, 4, 5],
[3, 2, 1, 4]])
To do so they first construct the list of values (what we can refer to as the underlying values, not the actual underlying data): vals, which is constructed as [3 2 1 4 5 6], i.e. the flipped Toeplitz column followed by the tail of the row.
Now notice the arguments passed to np.lib.stride_tricks.as_strided:
values: vals[len(c)-1:] notice the slice: the tensors show up smaller, yet the underlying values remain, and they correspond to those of vals. Go ahead and compare the two with storage_offset: it's just an offset of 2, the values are still there! How this works is that it essentially shifts the indices such that index=0 will refer to value 1, index=1 to 4, etc...
shape: given by the column/row inputs, here (3, 4). This is the shape of the resulting object.
strides: this is the most important piece: (-n, n), in this case (-1, 1)
The most intuitive thing to do with strides is to describe a mapping between the multi-dimensional space: (i, j) ∈ [0,3[ x [0,4[ and the flattened 1D space: k ∈ [0, 3*4[. Since the strides are equal to (-n, n) = (-1, 1), the mapping is -n*i + n*j = -1*i + 1*j = j-i. Mathematically you can describe your matrix as M[i, j] = F[j-i] where F is the flattened values vector [3 2 1 4 5 6].
For instance, let's try with i=1 and j=2. If you look at the Toeplitz matrix above, M[1, 2] = 4. Indeed F[k] = F[j-i] = F[1] = 4
If you look closely you will see the trick behind negative strides: they allow you to 'reference' to negative indices: for instance, if you take j=0 and i=2, then you see k=-2. Remember how vals was given with an offset of 2 by slicing vals[len(c)-1:]. If you look at its own underlying data storage it's still [3 2 1 4 5 6], but has an offset. The mapping for vals (in this case i: 1D -> k: 1D) would be M'[i] = F'[k] = F'[i+2] because of the offset. This means M'[-2] = F'[0] = 3.
In the above I defined M' as vals[len(c)-1:] which basically equivalent to the following tensor:
>>> torch.as_strided(vals, size=(len(vals)-2,), stride=(1,), storage_offset=2)
tensor([1, 4, 5, 6])
Similarly, I defined F' as the flattened vector of underlying values: [3 2 1 4 5 6].
The usage of strides is indeed a very clever way to define a Toeplitz matrix!
> 2. Is there a simple way to rewrite this so negative strides work?
The issue is, negative strides are not implemented in PyTorch... I don't believe there is a way around it with torch.as_strided, otherwise it would be rather easy to extend the current implementation and provide support for that feature.
There are however alternative ways to solve the problem. It is entirely possible to construct a Toeplitz matrix in PyTorch, but that won't be with torch.as_strided.
We will do the mapping ourselves: for each element of M indexed by (i, j), we will find out the corresponding index k which is simply j-i. This can be done with ease, first by gathering all (i, j) pairs from M:
>>> i, j = torch.ones(3, 4).nonzero().T
(tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]),
tensor([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]))
Now we essentially have k:
>>> j-i
tensor([ 0, 1, 2, 3, -1, 0, 1, 2, -2, -1, 0, 1])
We just need to construct a flattened tensor of all possible values from the row r and column c inputs. Negative indexed values (the content of c) are put last and flipped:
>>> values = torch.cat((r, c[1:].flip(0)))
tensor([1, 4, 5, 6, 3, 2])
Finally index values with k and reshape:
>>> values[j-i].reshape(3, 4)
tensor([[1, 4, 5, 6],
[2, 1, 4, 5],
[3, 2, 1, 4]])
To sum it up, my proposed implementation would be:
def toeplitz(c, r):
vals = torch.cat((r, c[1:].flip(0)))
shape = len(c), len(r)
i, j = torch.ones(*shape).nonzero().T
return vals[j-i].reshape(*shape)
> 3. Will I be able to pass a gradient w.r.t c (or r) through toeplitz_torch?
That's an interesting question because torch.as_strided doesn't have a backward function implemented. This means you wouldn't have been able to backpropagate to c and r! With the above method, however, which uses 'backward-compatible' builtins, the backward pass comes free of charge.
Notice the grad_fn on the output:
>>> toeplitz(torch.tensor([1.,2.,3.], requires_grad=True),
torch.tensor([1.,4.,5.,6.], requires_grad=True))
tensor([[1., 4., 5., 6.],
[2., 1., 4., 5.],
[3., 2., 1., 4.]], grad_fn=<ViewBackward>)
This was a quick draft (that did take a little while to write down), I will make some edits. If you have some questions or remarks, don't hesitate to comment! I would be interested in seeing other answers as I am not an expert with strides, this is just my take on the problem.
| https://stackoverflow.com/questions/68896578/ |
How `make_gen_block` is defined after `__init__` in the code attached? | In the code below, self.gen is instantiated while using the make_gen_block function which is only defined later outside the __init__ attribute.
How is this possible?
Shouldn't make_gen_block be defined before using it to instantiate self.gen so when __init__ is called, make_gen_block can be found within __init__ scope?
Thanks
class Generator(nn.Module):
'''
Generator Class
Values:
z_dim: the dimension of the noise vector, a scalar
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(MNIST is black-and-white, so 1 channel is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, z_dim=10, im_chan=1, hidden_dim=64):
super(Generator, self).__init__()
self.z_dim = z_dim
# Build the neural network
self.gen = nn.Sequential(
self.make_gen_block(z_dim, hidden_dim * 4),
self.make_gen_block(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=1),
self.make_gen_block(hidden_dim * 2, hidden_dim),
self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True))
def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a generator block of DCGAN,
corresponding to a transposed convolution, a batchnorm (except for in the last layer), and an activation.
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
# Steps:
# 1) Do a transposed convolution using the given parameters.
# 2) Do a batchnorm, except for the last layer.
# 3) Follow each batchnorm with a ReLU activation.
# 4) If its the final layer, use a Tanh activation after the deconvolution.
# Build the neural block
if not final_layer:
return nn.Sequential(
#### START CODE HERE ####
nn.ConvTranspose2d(input_channels, output_channels,kernel_size,stride),
nn.BatchNorm2d(output_channels),
nn.ReLU())
#### END CODE HERE ####
else: # Final Layer
return nn.Sequential(
#### START CODE HERE ####
nn.ConvTranspose2d(input_channels, output_channels,kernel_size,stride),
#### END CODE HERE ####
nn.Tanh())
def unsqueeze_noise(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns a copy of that noise with width and height = 1 and channels = z_dim.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
return noise.view(len(noise), self.z_dim, 1, 1)
def forward(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns generated images.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
x = self.unsqueeze_noise(noise)
return self.gen(x)
def get_noise(n_samples, z_dim, device='cpu'):
'''
Function for creating noise vectors: Given the dimensions (n_samples, z_dim)
creates a tensor of that shape filled with random numbers from the normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
z_dim: the dimension of the noise vector, a scalar
device: the device type
'''
return torch.randn(n_samples, z_dim, device=device)
| Note that the call to make_gen_block is actually calling self.make_gen_block. The self is important. You can see in the signature of __init__ that self is injected as the first argument. The method can be referenced because self has been passed into the __init__ method (so it is within scope), and self is of type Generator, which has a make_gen_block method defined for it. Keep in mind that the entire class body is executed before any instance is created, and attribute lookup on self happens at call time rather than at class-definition time, so make_gen_block already exists on the class by the time __init__ runs. The self instance of the class has already been constructed prior to the calling of the __init__ method.
When the class is instantiated, the __new__ method is called first, which constructs the instance and then the __init__ method is called, with the new instance injected (as self) into the method.
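A stripped-down sketch of the same mechanism:
class Demo:
    def __init__(self):
        # resolved on self at call time, so the later definition is found
        self.value = self.make_value()

    def make_value(self):
        return 42

print(Demo().value)  # 42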
| https://stackoverflow.com/questions/68900346/ |
Does it make sense to put two convolutional layers sequentially after each other without an activation function? | I am working with one model; here is part of it:
...
nn.Conv2d(1, 1 * filters_multiplier, kernel_size=(3, 1)),
nn.Conv2d(1 * filters_multiplier, 6 * filters_multiplier, kernel_size=(1, 3)),
# no activation layer
nn.Conv2d(6 * filters_multiplier, 6 * filters_multiplier, kernel_size=(3, 1)),
nn.Conv2d(6 * filters_multiplier, 12 * filters_multiplier, kernel_size=(1, 3)),
nn.MaxPool2d((3, 3), stride=(2, 2)),
...
I understand that layers 1-2 and 3-4 are just convs(3,3) factorized for cheaper computation (3x1 and 1x3 is 6 operations while 3x3 is 9). But does it make sense to put two convolutional layers sequentially after each other without an activation function, or is it just a mistake? I mean between convs 1-2 and 3-4 (at the commented place). I thought that a convolution followed by a convolution is just a convolution.
| The purpose of adding an activation at the end of a layer is to make sure that your model can learn non-linear functions. Without activations, you will just be doing linear regression. It is the activation functions that give neural networks the power to model any function given enough capacity (Universal Approximation Theorem).
Thus it is perfectly fine not to add activations, but then you would lose the non-linearity.
If you use a linear activation function, or no activation function at all, then no matter how many layers your neural network has, all your model is doing is computing a linear function, so you might as well not have any hidden layers.
For details, you can refer to this video by Andrew Ng https://youtu.be/NkOv_k7r6no.
For convolutional neural networks: this works the same way, and we don't need to prove it separately (though one obviously can). A convolution is itself a constraint: it restricts the kernel to a particular spatial location, hence preserving spatial structure. A convolutional layer is really just a more restricted version of a fully connected (FC) layer, which is why you can implement convolutions using an FC layer and vice versa; they're fundamentally the same thing. [Source and further details: https://machinethink.net/blog/object-detection/ & https://youtu.be/bNb2fEVKeEo]
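As a quick numerical check (single channel, stride 1, no padding), a sketch showing that two stacked linear convolutions, here the separable 3x1/1x3 pair from the question, collapse into a single convolution whose kernel is the outer product of the two:
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)
w1 = torch.randn(1, 1, 3, 1)   # 3x1 kernel
w2 = torch.randn(1, 1, 1, 3)   # 1x3 kernel

y_two = F.conv2d(F.conv2d(x, w1), w2)  # two convs, no activation in between
w_eff = w1 * w2                        # broadcasting gives the 3x3 outer-product kernel
y_one = F.conv2d(x, w_eff)             # one equivalent conv

print(torch.allclose(y_two, y_one, atol=1e-5))  # True
So without an activation in between, the pair of layers has no more expressive power than a single 3x3 convolution; it only saves parameters and operations.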
| https://stackoverflow.com/questions/68903319/ |
Adding a number to the last dimension of a tensor | I am trying to add a number to a tensor, in such a way that this number is appended as a new column.
The tensor is 2 rows and 7 columns:
x = [1,2,3,4,5,6,7,8,9,10,11,12,13,14]
x = torch.tensor(x)
x = x.reshape(-1,7)
print(x.shape)
print(x)
It results in:
torch.Size([2, 7])
tensor([[ 1, 2, 3, 4, 5, 6, 7],
[ 8, 9, 10, 11, 12, 13, 14]])
The number is a float:
a= 0.19
b= torch.tensor([a])
b.reshape(-1,1)
b= b.unsqueeze(dim=1)
print(b.shape)
b
Which is:
torch.Size([1, 1])
tensor([[0.1900]])
What I want to generate is a [2,8] tensor:
tensor([[1, 2, 3, 4, 5, 6, 7,0.1900],
[8, 9, 10, 11, 12, 13, 14,0.1900]])
So, I thought I could use torch.stack to get a new dimension:
c= torch.stack((x, b), dim=-1)
This gives an error: RuntimeError: stack expects each tensor to be equal size, but got [2, 7] at entry 0 and [1, 1] at entry 1
PS: I tried to reshape x into shape [14,1] and concatenate the [1,1] float tensor to make [15,1], but the value is added only once, so I cannot reshape into [2,8] anymore.
x = [1,2,3,4,5,6,7,8,9,10,11,12,13,14]
x = torch.tensor(x)
x = x.reshape(-1,1)
print(x.shape)
print(x)
torch.Size([14, 1])
tensor([[ 1],
[ 2],
[ 3],
[ 4],
[ 5],
[ 6],
[ 7],
[ 8],
[ 9],
[10],
[11],
[12],
[13],
[14]])
print('b',b)
c= torch.cat((x, b), dim=-2)
print(c.shape)
b tensor([[0.1900]])
torch.Size([15, 1])
I would be happy to have some help!
| You need to expand tensor b before concatenating them:
import torch
x = [1,2,3,4,5,6,7,8,9,10,11,12,13,14]
x = torch.tensor(x)
x = x.reshape(-1,7)
a=0.19
b= torch.tensor([a])
torch.cat((x,b.expand((2,1))),dim=1)
Will give:
tensor([[ 1.0000, 2.0000, 3.0000, 4.0000, 5.0000, 6.0000, 7.0000, 0.1900],
[ 8.0000, 9.0000, 10.0000, 11.0000, 12.0000, 13.0000, 14.0000, 0.1900]])
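To avoid hardcoding the batch size, you can also take it from x itself:
torch.cat((x, b.expand((x.shape[0], 1))), dim=1)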
| https://stackoverflow.com/questions/68905844/ |
How to infer the shape of the output when connecting a convolution layer with dense layers? | I am trying to construct a convolutional neural network using PyTorch and cannot work out the number of input neurons for the first densely connected layer. Say, for example, I have the following architecture:
self.conv_layer = nn.Sequential(
nn.Conv2d(3, 32, 5),
nn.Conv2d(32, 64, 5),
nn.MaxPool2d(2, 2),
nn.Conv2d(64, 128, 5),
nn.Conv2d(128, 128, 5),
nn.MaxPool2d(2, 2))
self.fc_layer = nn.Sequential(
nn.Linear(X, 512),
nn.Linear(512, 128),
nn.Linear(128, 10))
Here X would be the number of neurons in the first linear layer. So, do I need to keep track of the shape of the output tensor at each layer so that I can figure out X?
Now, I could plug the values into the formula (W - F + 2P) / S + 1 and calculate the shape after each layer; that would be somewhat convenient.
Isn't there something even more convenient that might do this automatically?
| An easy solution would be to use the LazyLinear layer: https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html.
According to the documentation:
A torch.nn.Linear module where in_features is inferred ... They will be initialized after the first call to forward is done and the module will become a regular torch.nn.Linear module. The in_features argument of the Linear is inferred from the input.shape[-1].
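A sketch of how it would slot into the fully connected part of the question's architecture (the flattened size here is arbitrary):
import torch
import torch.nn as nn

fc_layer = nn.Sequential(
    nn.LazyLinear(512),   # in_features inferred on the first forward pass
    nn.Linear(512, 128),
    nn.Linear(128, 10))

flat = torch.randn(4, 128 * 9 * 9)  # whatever the flattened conv output turns out to be
print(fc_layer(flat).shape)         # torch.Size([4, 10])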
| https://stackoverflow.com/questions/68907849/ |
Does swapping the batch axis have an effect on performance in PyTorch? | I know that usually the batch dimension is axis zero, and I imagine this has a reason: the underlying memory for each item in the batch is contiguous.
My model calls a function that becomes simpler if I have another dimension in the first axis, so that I can use x[k] instead of x[:, k].
Results from arithmetic operations seem to keep the same memory layout:
x = torch.ones(2,3,4).transpose(0,1)
y = torch.ones_like(x)
u = (x + 1)
v = (x + y)
print(x.stride(), u.stride(), v.stride())
When I create additional variables I am creating them with torch.zeros and then transposing, so that the largest stride goes to the axis 1, as well.
e.g.
a,b,c = torch.zeros(
(3, x.shape[1], ADDITIONAL_DIM, x.shape[0]) + x.shape[2:]
).transpose(1,2)
This will create three tensors with the same batch size x.shape[1].
In terms of memory locality, would it make any difference to have
a,b,c = torch.zeros(
(x.shape[1], 3, ADDITIONAL_DIM, x.shape[0]) + x.shape[2:]
).permute(1,2,0, ...)
instead.
Should I care about this at all?
| TLDR; Slices seemingly contain less information... but in fact share the identical storage buffer with the original tensor. Since permute doesn't affect the underlying memory layout, both operations are essentially equivalent.
Those two are essentially the same, the underlying data storage buffer is kept the same, only the metadata i.e. how you interact with that buffer (strides and shape) changes.
Let us look at a simple example:
>>> x = torch.ones(2,3,4).transpose(0,1)
>>> x_ptr = x.data_ptr()
>>> x.shape, x.stride(), x_ptr
(3, 2, 4), (4, 12, 1), 94674451667072
We have kept the data pointer for our 'base' tensor in x_ptr:
Slicing on the second axis:
>>> y = x[:, 0]
>>> y.shape, y.stride(), x_ptr == y.data_ptr()
(3, 4), (4, 1), True
As you can see, x and x[:, k] shared the same storage.
Permuting the first two axes then slicing on the first one:
>>> z = x.permute(1, 0, 2)[0]
>>> z.shape, z.stride(), x_ptr == z.data_ptr()
(3, 4), (4, 1), True
Here again, you notice that x.data_ptr is the same as z.data_ptr.
In fact, you can even go from y to x's representation using torch.as_strided:
>>> torch.as_strided(y, size=x.shape, stride=x.stride())
tensor([[[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.]],
[[1., 1., 1., 1.],
[1., 1., 1., 1.]]])
Same with z:
>>> torch.as_strided(z, size=x.shape, stride=x.stride())
Both will return a view with the same content as x, sharing the same underlying storage; these two lines were just to illustrate how we can still 'get back' to x from a slice of x, i.e. we can recover the apparent content by changing the tensor's metadata.
| https://stackoverflow.com/questions/68918403/ |
HuggingFace-Transformers --- NER single sentence/sample prediction | I am trying to predict with the NER model, as in the tutorial from huggingface (it contains only the training+evaluation part).
I am following this exact tutorial here : https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb
The training works flawlessly, but the problems that I have begin when I try to predict on a simple sample.
model_checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
loaded_model = AutoModel.from_pretrained('./my_model_own_custom_training.pth',
from_tf=False)
input_sentence = "John Nash is a great mathematician, he lives in France"
tokenized_input_sentence = tokenizer([input_sentence],
truncation=True,
is_split_into_words=False,
return_tensors='pt')
predictions = loaded_model(tokenized_input_sentence["input_ids"])[0]
Predictions is of shape (1,13,768)
How can I arrive at the final result of the form [JOHN <-> 'B-PER', … France <-> 'B-LOC'], where B-PER and B-LOC are two ground-truth labels, representing the tag for a person and a location respectively?
The result of the prediction is:
torch.Size([1, 13, 768])
If I write:
print(predictions.argmax(axis=2))
tensor([613, 705, 244, 620, 206, 206, 206, 620, 620, 620, 477, 693, 308])
I get the tensor above.
However, I would have expected to get a tensor of the ground-truth label ids in [0…8] from the annotations.
Summary when loading the model:
loading configuration file ./my_model_own_custom_training.pth/config.json
Model config DistilBertConfig {
"name_or_path": "distilbert-base-uncased",
"activation": "gelu",
"architectures": [
"DistilBertForTokenClassification"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights": true,
"transformers_version": "4.8.1",
"vocab_size": 30522
}
| The answer is a bit trickier than expected [huge credits to Niels Rogge].
Firstly, loading models in huggingface-transformers can be done in (at least) two ways:
AutoModel.from_pretrained('./my_model_own_custom_training.pth', from_tf=False)
AutoModelForTokenClassification.from_pretrained('./my_model_own_custom_training.pth', from_tf=False)
It seems that, depending on the task at hand, different AutoModel subclasses need to be used. In the scenario I posted, it is AutoModelForTokenClassification that has to be used.
After that, a solution to obtain the predictions would be to do the following:
# forward pass
outputs = model(**encoding)
logits = outputs.logits
predictions = logits.argmax(-1)
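From there, mapping the predicted ids back to tokens and tag names is a short loop (a sketch; it assumes encoding is the tokenizer output for the sentence, and that the fine-tuned model was saved with meaningful id2label entries rather than the generic LABEL_0..LABEL_8 shown in the config above):
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
labels = [model.config.id2label[idx.item()] for idx in predictions[0]]
for token, label in zip(tokens, labels):
    print(token, "<->", label)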
| https://stackoverflow.com/questions/68918962/ |
Does packing BooleanTensors into ByteTensors affect training of an LSTM (or other ML models)? | I am working on an LSTM to generate music. My input data will be a BooleanTensor of size 88xLx3, 88 being the number of available notes, L being the length of each "piece" which will be on the order of 1k - 10k (TBD), and 3 being the parts for "lead melody", "accompaniment", and "bass". A value of 0 would symbolize that that specific note is not being played by that part (instrument) at that time, and a 1 would symbolize that it is.
The problem is that each entry of a BooleanTensor takes 1 byte of space in memory instead of 1 bit, which wastes a lot of valuable GPU memory.
As a solution I thought of packing each BooleanTensor to a ByteTensor (uint8) of size 11xLx3 or 88x(L/8)x3.
My question is: Would packing the data as such have an effect on the learning and generation of the LSTM or would the ByteTensor-based data and model be equivalent to their BooleanTensor-based counterparts in practice?
| I wouldn't really care about the fact that the input is taking X instead of Y number of bits, at least when it comes to GPU memory. Most of it is occupied by the network's weights and intermediate outputs, which will likely be float32 anyway (maybe float16). There is active research on training with lower precision (even binary training), but based on your question, it seems completely unnecessary. Lastly, you can always try Quantization to your production models, if you really need it.
With regards to the packing: it can have an impact, especially if you do it naively. The grouping you're suggesting doesn't seem to be a natural one, therefore it may be harder to learn patterns from the grouped data than otherwise. There'll always be workarounds, but then this answer become an opinion because it is almost impossible to antecipate what could work; an opinion-based questions/answer are off-topic around here :)
| https://stackoverflow.com/questions/68919179/ |
Why was the tensor size not changed? | I made a toy CNN model.
class Test(nn.Module):
def __init__(self):
super(Test, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(3,300,3),
nn.Conv2d(300,500,3),
nn.Conv2d(500,1000,3),
)
self.fc = nn.Linear(3364000,1)
def forward(self, x):
out = self.conv(x)
out = out.view(out.size(0), -1)
out = self.fc(out)
return out
Then I checked the model summary via this code:
model = Test()
model.to('cuda')
for param in model.parameters():
print(param.dtype)
break
summary_(model, (3,64,64))
And I was able to get the following results:
torch.float32
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 300, 62, 62] 8,400
Conv2d-2 [-1, 500, 60, 60] 1,350,500
Conv2d-3 [-1, 1000, 58, 58] 4,501,000
Linear-4 [-1, 1] 3,364,001
================================================================
Total params: 9,223,901
Trainable params: 9,223,901
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 48.20
Params size (MB): 35.19
Estimated Total Size (MB): 83.43
----------------------------------------------------------------
I want to reduce the model size because I want to increase the batch size.
So I changed torch.float32 -> torch.float16 via NVIDIA/apex:
model = Test()
model.to('cuda')
opt_level = 'O3'
optimizer = optim.Adam(model.parameters(), lr=0.001)
model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
for param in model.parameters():
print(param.dtype)
break
summary_(model, (3,64,64))
Selected optimization level O3: Pure FP16 training.
Defaults for this optimization level are:
enabled : True
opt_level : O3
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : False
master_weights : False
loss_scale : 1.0
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O3
cast_model_type : torch.float16
patch_torch_functions : False
keep_batchnorm_fp32 : False
master_weights : False
loss_scale : 1.0
torch.float16
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 300, 62, 62] 8,400
Conv2d-2 [-1, 500, 60, 60] 1,350,500
Conv2d-3 [-1, 1000, 58, 58] 4,501,000
Linear-4 [-1, 1] 3,364,001
================================================================
Total params: 9,223,901
Trainable params: 9,223,901
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.05
Forward/backward pass size (MB): 48.20
Params size (MB): 35.19
Estimated Total Size (MB): 83.43
----------------------------------------------------------------
As a result, the dtype changed from torch.float32 to torch.float16,
but Param size (MB): 35.19 did not change.
Why does this happen? Please explain.
Thanks.
| Mixed precision does not mean that your model becomes half its original size. With native AMP the parameters remain in float32 by default and are only cast to float16 automatically during certain operations of the training; this applies to the input data as well.
torch.cuda.amp provides the functionality to perform this automatic float32 -> float16 conversion for certain training operations, such as convolutions, so your stored model size will remain the same. Note also that torchsummary computes its "Params size" as parameter count times 4 bytes (it assumes float32), so that figure would not change even when the parameter dtype does, as in your apex O3 run. Reducing the model size itself is called quantization, and it is different from mixed-precision training.
You can read more about mixed-precision training on NVIDIA's blog and PyTorch's blog.
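If you switch to the native API, a minimal mixed-precision training sketch could look like this (loader and criterion are placeholders for your own data pipeline and loss; this keeps float32 weights while running eligible ops in float16). If you truly want half-size weights you would call model.half() instead, at the cost of the usual pure-FP16 stability issues:
import torch

scaler = torch.cuda.amp.GradScaler()
for inputs, targets in loader:                # `loader` is a placeholder
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():           # eligible ops run in float16
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()             # loss scaling avoids fp16 underflow
    scaler.step(optimizer)
    scaler.update()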
| https://stackoverflow.com/questions/68919590/ |
How can I simulate this paragraph about changing the learning rate picked from Resnet paper? | While going through this paper, I found they are changing learning rate in between of training/validation iterations.
We start with a learning rate of 0.1, divide it by 10 at 32k and 48k iterations, and terminate training at 64k iterations, which is determined on a 45k/5k train/val split.
| Generally, you're probably looking for
torch.optim.lr_scheduler
Specifically, you can implement the learning-rate reduction after n epochs using lr_scheduler.MultiStepLR.
Following the advised way of using lr_schedulers, you will need to convert the milestones from iterations to epochs, as the updates are applied after whole epochs rather than after individual steps.
If that does not give a satisfactory result, you can "cheat" by calling
scheduler.step() after each batch (step) and then passing the milestones as iteration counts.
Just remember not to confuse yourself, or whoever edits your code some day; at least leave a comment indicating that you're using the library function in a slightly less obvious way :D
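For reference, a sketch of the per-step variant that mirrors the paper's schedule (loader and criterion are placeholders for your own data pipeline and loss):
import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)

for step, (inputs, targets) in enumerate(loader):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()          # stepping per batch keeps the milestones in iterations
    if step + 1 == 64000:     # terminate at 64k iterations, as in the paper
        break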
| https://stackoverflow.com/questions/68920438/ |
Use torch.square() inside a nn.Sequential layer in PyTorch | I want to square the result of a maxpool layer.
I tried the following:
class CNNClassifier(Classifier): # nn.Module
def __init__(self, in_channels):
super().__init__()
self.save_hyperparameters('in_channels')
self.cnn = nn.Sequential(
# maxpool
nn.MaxPool2d((1, 5), stride=(1, 5)),
torch.square(),
# layer1
nn.Conv2d(in_channels=in_channels, out_channels=32, kernel_size=5,
)
Which, to the experienced PyTorch user, surely makes no sense.
Indeed, the error is quite clear:
TypeError: square() missing 1 required positional arguments: "input"
How can I feed the tensor from the preceding layer into square?
| You can't put a bare PyTorch function in an nn.Sequential pipeline; each element needs to be an nn.Module.
You could wrap it like this:
class Square(nn.Module):
def forward(self, x):
return torch.square(x)
Then use it inside your sequential layer like so:
class CNNClassifier(Classifier): # nn.Module
def __init__(self, in_channels):
super().__init__()
self.save_hyperparameters('in_channels')
self.cnn = nn.Sequential(
nn.MaxPool2d((1, 5), stride=(1, 5)),
Square(),
nn.Conv2d(in_channels=in_channels, out_channels=32, kernel_size=5))
| https://stackoverflow.com/questions/68924374/ |
How to find the name of layers in preloaded torchvision models? | I'm trying to use GradCAM with a Deeplabv3 resnet50 model preloaded from torchvision, but in Captum I need to give the name of the layer (of type nn.Module). I can't find any documentation for how this is done; does anyone have ideas on how to get the name of the final ReLU layer?
Thanks in advance!
| You can have a look at its representation and get an idea of where it's located by simply printing it:
>>> model = torchvision.models.segmentation.deeplabv3_resnet50()
>>> model
DeepLabV3(
(backbone): IntermediateLayerGetter(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
...
To get the actual exact name of the layer you can loop over the modules with named_modules and only pick the nn.ReLU layers:
>>> relus = [name for name, module in model.named_modules() if isinstance(module, nn.ReLU)]
>>> relus
['backbone.relu',
'backbone.layer1.0.relu',
'backbone.layer1.1.relu',
'backbone.layer1.2.relu',
'backbone.layer2.0.relu',
'backbone.layer2.1.relu',
'backbone.layer2.2.relu',
'backbone.layer2.3.relu',
'backbone.layer3.0.relu',
'backbone.layer3.1.relu',
'backbone.layer3.2.relu',
'backbone.layer3.3.relu',
'backbone.layer3.4.relu',
'backbone.layer3.5.relu',
'backbone.layer4.0.relu',
'backbone.layer4.1.relu',
'backbone.layer4.2.relu',
'classifier.0.convs.0.2',
'classifier.0.convs.1.2',
'classifier.0.convs.2.2',
'classifier.0.convs.3.2',
'classifier.0.convs.4.3',
'classifier.0.project.2',
'classifier.3']
Then pick the last one:
>>> relus[-1]
'classifier.3'
| https://stackoverflow.com/questions/68924829/ |
Problem with loading Tiny ImageNet via torch DataLoader | I'm using tiny-imagenet-200 and I'm not sure whether loading it with torch.utils.data.DataLoader is possible.
I downloaded tiny-imagenet-200 from the Stanford site, but the validation set comes as images named val_0 to val_9999 in a single directory, with their labels listed in a .txt file.
How can I load this directory via torch.utils.data.DataLoader?
I tried:
datasets.ImageFolder(args.val_dir, transforms.Compose([
OpencvResize(256),
transforms.CenterCrop(224),
ToBGRTensor(),
]))
but it doesn't work.
| You can't do that using ImageFolder directly. There are alternatives, though:
You can read the annotation file and re-structure the directories to enable the usage of ImageFolder, as in here;
You can implement a custom Dataset. Luckily, as Tiny ImageNet is a popular dataset, you can find many implementations online. For instance, this one.
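If you go the custom-Dataset route, here is a minimal sketch for the validation split, assuming the standard Stanford layout (val/images/*.JPEG plus val/val_annotations.txt, whose lines start with the file name and the WordNet class id separated by tabs); the class name TinyImageNetVal is just illustrative:
import os
from PIL import Image
from torch.utils.data import Dataset

class TinyImageNetVal(Dataset):
    def __init__(self, root, transform=None):
        self.img_dir = os.path.join(root, 'val', 'images')
        self.transform = transform
        ann_path = os.path.join(root, 'val', 'val_annotations.txt')
        with open(ann_path) as f:
            pairs = [line.split('\t')[:2] for line in f if line.strip()]
        wnids = sorted({wnid for _, wnid in pairs})             # stable class ids
        self.class_to_idx = {w: i for i, w in enumerate(wnids)}
        self.samples = [(fname, self.class_to_idx[wnid]) for fname, wnid in pairs]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        fname, label = self.samples[idx]
        image = Image.open(os.path.join(self.img_dir, fname)).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        return image, label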
| https://stackoverflow.com/questions/68928265/ |
multiclass sequence classification with fastai and huggingface | I am looking to implement DistilBERT via fastai and huggingface for a multiclass sequence classification problem. I found a useful tutorial that gave a good example of how to do this with binary classification. The code is below:
# !pip install torch==1.9.0
# !pip install torchtext==0.10
# !pip install transformers==4.7
# !pip install fastai==2.4
from fastai.text.all import *
from sklearn.model_selection import train_test_split
import pandas as pd
import glob
from transformers import AutoTokenizer, AutoModelForSequenceClassification
hf_tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
hf_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
"""
train_df and val_df looks like this:
label text
4240 5 whoa interesting.
13 7 you could you could we just
4639 4 you set the goal,
28 1 because ive already agreed to that
66 8 oh hey freshman thats you gona need
"""
print(list(train_df.label.value_counts().index))
"""
[4, 1, 5, 6, 7, 0, 2, 3, 8]
"""
class HF_Dataset(torch.utils.data.Dataset):
def __init__(self, df, hf_tokenizer):
self.df = df
self.hf_tokenizer = hf_tokenizer
self.label_map = {
0:0,
1:0,
2:0,
3:0,
4:1,
5:1,
6:1,
7:1,
8:1
}
def __len__(self):
return len(self.df)
def decode(self, token_ids):
return ' '.join([hf_tokenizer.decode(x) for x in tokenizer_outputs['input_ids']])
def decode_to_original(self, token_ids):
return self.hf_tokenizer.decode(token_ids.squeeze())
def __getitem__(self, index):
label, text = self.df.iloc[index]
label = self.label_map[label]
label = torch.tensor(label)
tokenizer_output = self.hf_tokenizer(text, return_tensors="pt", padding='max_length', truncation=True, max_length=512)
tokenizer_output['input_ids'].squeeze_()
tokenizer_output['attention_mask'].squeeze_()
return tokenizer_output, label
train_dataset = HF_Dataset(train_df, hf_tokenizer)
valid_dataset = HF_Dataset(valid_df, hf_tokenizer)
train_dl = DataLoader(train_dataset, bs=16, shuffle=True)
valid_dl = DataLoader(valid_dataset, bs=16)
dls = DataLoaders(train_dl, valid_dl)
hf_model(**batched_data)
class HF_Model(nn.Module):
def __init__(self, hf_model):
super().__init__()
self.hf_model = hf_model
def forward(self, tokenizer_outputs):
model_output = self.hf_model(**tokenizer_outputs)
return model_output.logits
model = HF_Model(hf_model)
# Manually popping the model onto the gpu since the data is in a dictionary format
# (doesn't automatically place model + data on gpu otherwise)
learn = Learner(dls, model, loss_func=nn.CrossEntropyLoss(), metrics=[accuracy])
learn.fit_one_cycle(3, 1e-4)
This works fine. However, I mapped my multiclass labels to 2 labels to allow this to work. I actually have 9 classes. I tried adjusting the label mapping scheme in HF_Dataset() class to match my actual labels like below:
class HF_Dataset(torch.utils.data.Dataset):
def __init__(self, df, hf_tokenizer):
self.df = df
self.hf_tokenizer = hf_tokenizer
self.label_map = {
0:0,
1:1,
2:2,
3:3,
4:4,
5:5,
6:6,
7:7,
8:8
}
def __len__(self):
return len(self.df)
def decode(self, token_ids):
return ' '.join([hf_tokenizer.decode(x) for x in tokenizer_outputs['input_ids']])
def decode_to_original(self, token_ids):
return self.hf_tokenizer.decode(token_ids.squeeze())
def __getitem__(self, index):
label, text = self.df.iloc[index]
label = self.label_map[label]
label = torch.tensor(label)
tokenizer_output = self.hf_tokenizer(text, return_tensors="pt", padding='max_length', truncation=True, max_length=512)
tokenizer_output['input_ids'].squeeze_()
tokenizer_output['attention_mask'].squeeze_()
return tokenizer_output, label
Every line works until learn.fit_one_cycle.
Here is the full stack trace from this line:
0.00% [0/3 00:00<00:00]
epoch train_loss valid_loss accuracy time
0.00% [0/519 00:00<00:00]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-21-0ec2ff9e12e1> in <module>
----> 1 learn.fit_one_cycle(3, 1e-4)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
111 scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
112 'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
--> 113 self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
114
115 # Cell
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
219 self.opt.set_hypers(lr=self.lr if lr is None else lr)
220 self.n_epoch = n_epoch
--> 221 self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
222
223 def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
161
162 def _with_events(self, f, event_type, ex, final=noop):
--> 163 try: self(f'before_{event_type}'); f()
164 except ex: self(f'after_cancel_{event_type}')
165 self(f'after_{event_type}'); final()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _do_fit(self)
210 for epoch in range(self.n_epoch):
211 self.epoch=epoch
--> 212 self._with_events(self._do_epoch, 'epoch', CancelEpochException)
213
214 def fit(self, n_epoch, lr=None, wd=None, cbs=None, reset_opt=False):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
161
162 def _with_events(self, f, event_type, ex, final=noop):
--> 163 try: self(f'before_{event_type}'); f()
164 except ex: self(f'after_cancel_{event_type}')
165 self(f'after_{event_type}'); final()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _do_epoch(self)
204
205 def _do_epoch(self):
--> 206 self._do_epoch_train()
207 self._do_epoch_validate()
208
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _do_epoch_train(self)
196 def _do_epoch_train(self):
197 self.dl = self.dls.train
--> 198 self._with_events(self.all_batches, 'train', CancelTrainException)
199
200 def _do_epoch_validate(self, ds_idx=1, dl=None):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
161
162 def _with_events(self, f, event_type, ex, final=noop):
--> 163 try: self(f'before_{event_type}'); f()
164 except ex: self(f'after_cancel_{event_type}')
165 self(f'after_{event_type}'); final()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in all_batches(self)
167 def all_batches(self):
168 self.n_iter = len(self.dl)
--> 169 for o in enumerate(self.dl): self.one_batch(*o)
170
171 def _do_one_batch(self):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in one_batch(self, i, b)
192 b = self._set_device(b)
193 self._split(b)
--> 194 self._with_events(self._do_one_batch, 'batch', CancelBatchException)
195
196 def _do_epoch_train(self):
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
161
162 def _with_events(self, f, event_type, ex, final=noop):
--> 163 try: self(f'before_{event_type}'); f()
164 except ex: self(f'after_cancel_{event_type}')
165 self(f'after_{event_type}'); final()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/fastai/learner.py in _do_one_batch(self)
173 self('after_pred')
174 if len(self.yb):
--> 175 self.loss_grad = self.loss_func(self.pred, *self.yb)
176 self.loss = self.loss_grad.clone()
177 self('after_loss')
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
1119 def forward(self, input: Tensor, target: Tensor) -> Tensor:
1120 return F.cross_entropy(input, target, weight=self.weight,
-> 1121 ignore_index=self.ignore_index, reduction=self.reduction)
1122
1123
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2822 if size_average is not None or reduce is not None:
2823 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2825
2826
IndexError: Target 6 is out of bounds.
This seems like it should be a simple fix. Do I need to adjust something in the model architecture to allow it to accept 9 labels? Or do I need to one hot encode my labels? If so, is there a solution prebuilt to do this in the pipeline?
| You need to define num_labels=9 when loading the model:
hf_model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=9)
The default value is 2, which suits the first use case, but breaks when you try to use more classes.
Note that the lib explicitly says that the classifier (which generates the .logits you are interested in) is randomly initialized:
Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.weight', 'pre_classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
| https://stackoverflow.com/questions/68928299/ |
Pytorch ConvNet loss remains unchanged and only one class is predicted | My ConvNet is only predicting a single class and the loss remains unchanged.
I have tried the following:
added class weights to be proportional to the data sizes (1-(class occurrences/total data))
adjusted learning rate attempting to find a sweet spot
adjusted gamma (Multiplicative factor of learning rate decay)
used gamma in the loss function, and not used it in loss function (Mainly have been experimenting with Adam)
Tried a much more complex ConvNet than the 'Simple' one currently in use
I am unsure of where to go from here. It seems no matter what I try the Neural Net always predicts the same class (I have been able to get it to predict the other class by throwing the weights very out of proportion)
Below is the output from running my program. It should have all the relevant information in it to come up with some ideas as to how to fix it. If you need to see some of the source code or are curious about what the dataset looks like, please ask.
Any help is greatly appreciated. I have been stumped on this issue for quite a while now. Thank you!
Train dataset length: 27569
Test dataset length: 4866
Image preprocessing:
None
Input dimensions: 28 X 28
Output dimension: 2
Model: Simple
NeuralNetwork(
(flatten): Flatten(start_dim=1, end_dim=-1)
(linear_relu_stack): Sequential(
(0): Linear(in_features=784, out_features=512, bias=True)
(1): ReLU()
(2): Linear(in_features=512, out_features=512, bias=True)
(3): ReLU()
(4): Linear(in_features=512, out_features=2, bias=True)
(5): ReLU()
)
)
Optimizer: Adam
Learning rate: 0.0001
Loss function: CEL
class weights: tensor([0.3481, 0.6519], device='cuda:0')
Multiplicative factor of learning rate decay: 0.0005
Train Epoch: 1 [0/27569 (0%)] Loss: 3785.907959
Train Epoch: 1 [6400/27569 (23%)] Loss: 0.693147
Train Epoch: 1 [12800/27569 (46%)] Loss: 0.693147
Train Epoch: 1 [19200/27569 (70%)] Loss: 0.693147
Train Epoch: 1 [25600/27569 (93%)] Loss: 0.693147
Test set: Average loss: 0.0110, Accuracy: 3172/4866 (65%)
actual count: [3172, 1694]
predicted count: [4866, 0]
| It is very uncommon to have a ReLU after the last Linear layer (where the logits come from). Consider removing it. In fact, the constant loss of 0.693147 in your log is exactly ln(2), the cross-entropy of a uniform two-class prediction, which is consistent with the final ReLU zeroing out the logits.
In addition, maybe your learning rate is too high; the huge first-batch loss suggests so. Try tweaking it: check whether the loss decreases smoothly between iterations (which is ideal in most cases), and decrease the rate otherwise.
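Concretely, for the 'Simple' head printed above, dropping the trailing activation would give (only the relevant fragment is shown):
self.linear_relu_stack = nn.Sequential(
    nn.Linear(784, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 2),   # raw logits; CrossEntropyLoss applies log-softmax itself
)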
| https://stackoverflow.com/questions/68928529/ |
Implementing Early Stopping in PyTorch without Torchsample | As a PyTorch newbie (coming from TensorFlow), I am unsure of how to implement early stopping. My research has led me to discover that PyTorch does not have a native way to do this. I have also discovered torchsample, but am unable to install it in my conda environment for whatever reason. Is there a simple way to go about applying early stopping without it? Here is my current setup:
class RegressionDataset(Dataset):
def __init__(self, X_data, y_data):
self.X_data = X_data
self.y_data = y_data
def __getitem__(self, index):
return self.X_data[index], self.y_data[index]
def __len__(self):
return len(self.X_data)
train_dataset = RegressionDataset(torch.from_numpy(X_train).float(), torch.from_numpy(y_train).float())
val_dataset = RegressionDataset(torch.from_numpy(X_val).float(), torch.from_numpy(y_val).float())
test_dataset = RegressionDataset(torch.from_numpy(X_test).float(), torch.from_numpy(y_test).float())
# Model Params
EPOCHS = 100
BATCH_SIZE = 1000
LEARNING_RATE = 0.001
NUM_FEATURES = np.shape(X_test)[1]
# Initialize Dataloader
train_loader = DataLoader(dataset = train_dataset, batch_size=BATCH_SIZE, shuffle = True)
val_loader = DataLoader(dataset = val_dataset, batch_size=BATCH_SIZE)
test_loader = DataLoader(dataset = test_dataset, batch_size=BATCH_SIZE)
# Define Neural Network Architecture
class MultipleRegression(nn.Module):
def __init__(self, num_features):
super(MultipleRegression, self).__init__()
# Define architecture
self.layer_1 = nn.Linear(num_features, 16)
self.layer_2 = nn.Linear(16, 32)
self.layer_3 = nn.Linear(32, 25)
self.layer_4 = nn.Linear(25, 20)
self.layer_5 = nn.Linear(20, 16)
self.layer_out = nn.Linear(16, 1)
self.relu = nn.ReLU() # ReLU applied to all layers
# Initialize weights and biases
nn.init.xavier_uniform_(self.layer_1.weight)
nn.init.zeros_(self.layer_1.bias)
nn.init.xavier_uniform_(self.layer_2.weight)
nn.init.zeros_(self.layer_2.bias)
nn.init.xavier_uniform_(self.layer_3.weight)
nn.init.zeros_(self.layer_3.bias)
nn.init.xavier_uniform_(self.layer_4.weight)
nn.init.zeros_(self.layer_4.bias)
nn.init.xavier_uniform_(self.layer_5.weight)
nn.init.zeros_(self.layer_5.bias)
nn.init.xavier_uniform_(self.layer_out.weight)
nn.init.zeros_(self.layer_out.bias)
def forward(self, inputs):
x = self.relu(self.layer_1(inputs))
x = self.relu(self.layer_2(x))
x = self.relu(self.layer_3(x))
x = self.relu(self.layer_4(x))
x = self.relu(self.layer_5(x))
x = self.layer_out(x)
return(x)
def predict(self, test_inputs):
x = self.relu(self.layer_1(test_inputs))
x = self.relu(self.layer_2(x))
x = self.relu(self.layer_3(x))
x = self.relu(self.layer_4(x))
x = self.relu(self.layer_5(x))
x = self.layer_out(x)
return(x)
# Check for GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
model = MultipleRegression(NUM_FEATURES)
model.to(device)
print(model)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr = LEARNING_RATE)
# define dictionary to store loss/epochs for training and validation
loss_stats = {
"train": [],
"val": []
}
# begin training
print("Begin Training")
for e in tqdm(range(1, EPOCHS+1)):
# Training
train_epoch_loss = 0
model.train()
for X_train_batch, y_train_batch in train_loader:
X_train_batch, y_train_batch = X_train_batch.to(device), y_train_batch.to(device)
optimizer.zero_grad()
y_train_pred = model(X_train_batch)
train_loss = criterion(y_train_pred, y_train_batch.unsqueeze(1))
train_loss.backward()
optimizer.step()
train_epoch_loss += train_loss.item()
# validation
with torch.no_grad():
val_epoch_loss = 0
model.eval()
for X_val_batch, y_val_batch in val_loader:
X_val_batch, y_val_batch = X_val_batch.to(device), y_val_batch.to(device)
y_val_pred = model(X_val_batch)
val_loss = criterion(y_val_pred, y_val_batch.unsqueeze(1))
val_epoch_loss += val_loss.item()
loss_stats["train"].append(train_epoch_loss/len(train_loader))
loss_stats["val"].append(val_epoch_loss/len(val_loader))
print(f"Epoch {e}: \ Train loss: {train_epoch_loss/len(train_loader):.5f} \ Val loss: {val_epoch_loss/len(val_loader):.5f}")
# Visualize loss and accuracy
train_val_loss_df = pd.DataFrame.from_dict(loss_stats).reset_index().melt(id_vars=["index"]).rename(columns = {"index":"epochs"})
plt.figure()
sns.lineplot(data = train_val_loss_df, x = "epochs", y = "value", hue = "variable").set_title("Train-Val Loss/Epoch")
# Test model
y_pred_list = []
with torch.no_grad():
model.eval()
for X_batch, _ in test_loader:
X_batch = X_batch.to(device)
y_test_pred = model(X_batch)
y_pred_list.append(y_test_pred.cpu().numpy())
y_pred_list = [a.squeeze().tolist() for a in y_pred_list]
y_pred_list = [item for sublist in y_pred_list for item in sublist]
y_pred_list = np.array(y_pred_list)
mse = mean_squared_error(y_test, y_pred_list)
r_square = r2_score(y_test, y_pred_list)
print("Mean Squared Error :", mse)
print("R^2 :", r_square)
| A basic way to do this is to keep track of the best validation loss obtained so far.
You can have a variable best_loss = float('inf') initialized before your loop over epochs (or you could do other things, like tracking the best loss per epoch, etc.).
After each validation pass then do:
if val_loss < best_loss:
    best_loss = val_loss
    # At this point also save a snapshot of the current model
    torch.save(model, 'my_model_best_loss.pth')
Then, if best_loss does not improve significantly after some number of epochs (that is, val_loss keeps coming in higher), break out of the loop and terminate the training there.
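If you want this in a reusable, patience-based form, a minimal sketch could look like the following (the EarlyStopper name and its interface are illustrative, not a library API):
class EarlyStopper:
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience        # epochs to wait without improvement
        self.min_delta = min_delta      # minimum decrease that counts as improvement
        self.counter = 0
        self.best_loss = float('inf')

    def should_stop(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience

In the posted loop you would create stopper = EarlyStopper(patience=5) once, and at the end of each epoch call: if stopper.should_stop(val_epoch_loss / len(val_loader)): break.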
For implementing algorithms like early stopping (and your training loop in general) you may find it easier to give PyTorch Lightning a try (no affiliation, but it's much easier than trying to roll everything by hand).
| https://stackoverflow.com/questions/68929471/ |
How to apply a mask to image tensors in PyTorch? | Applying a mask with NumPy or OpenCV is a relatively straightforward process. However, if I need to use the masked image in the loss calculations of my optimization algorithm, I need to employ PyTorch exclusively, as doing otherwise interferes with gradient computations.
Assuming that I have an image tensor [1, 512, 512, 3] (batch, height, width, channels) and a mask tensor [1, 20, 512, 512] (batch, channels, height, width) where every channel corresponds to one of 20 segmentation classes, I want to get a masked image tensor that fills every pixel with black (0, 0, 0), except for those belonging to one or more specified segmentation classes.
Here is how it can be done with numpy:
import numpy as np
import torch
# Create dummy image and mask
image_tensor = torch.randn([1, 512, 512, 3])
mask_tensor = torch.randn([1, 20, 512, 512])
# Apply argmax to mask
mask_tensor = torch.max(mask_tensor, 1)[1] # -> 1, 512, 512
# Define mask function
def selective_mask(image_src, mask, dims=[]):
h, w = mask.shape
background = np.zeros([h, w, 3], dtype=np.uint8)
for j_, j in enumerate(mask[:, :]):
for k_, k in enumerate(j):
if k in dims:
background[j_, k_] = image_src[j_, k_]
output = background
return output
# Convert tensors to numpy:
image = image_tensor.squeeze(0).cpu().numpy()
mask = mask_tensor.squeeze(0).cpu().numpy()
# Apply mask function for several classes
image_masked = selective_mask(image, mask, dims=[5, 6, 8])
How should my code be changed to bring it in line with the PyTorch requirements?
| First of all, the definition of the function selective_mask is far from what you may call 'straightforward'. The key point in using numpy (and torch, which is designed to be mostly compatible) is to take advantage of the vectorization of operations and to avoid loops, which are not parallelizable.
If you rewrite said function in this manner:
def selective_mask(image_src, mask, channels=[]):
mask = mask[np.array(channels).astype(int)]
return np.sign(np.sum(mask, axis=0), dtype=image_src.dtype) * image_src
it will turn out that you can actually do the same with PyTorch tensors (here there is no need to squeeze the batch (first) dimension):
def selective_mask_t(image_src, mask, channels=[]):
mask = mask[:, torch.tensor(channels).long()]
mask = torch.sgn(torch.sum(mask, dim=1)).to(dtype=image_src.dtype).unsqueeze(-1)
return mask * image_src
Also, you probably want to produce the mask itself this way
(BTW, the combination of max and sgn used here should actually work faster than setting the elements indexed by argmax):
# Create dummy image and mask
image_tensor = torch.randn([1, 512, 512, 3])
mask_tensor = torch.randn([1, 20, 512, 512])
# Discreticize the mask (set to one in the channel with the highest value) -> 1, 20, 512, 512
mask_tensor = torch.sgn(mask_tensor - torch.max(mask_tensor, 1)[0].unsqueeze(1)) + 1.
Then it should work just fine:
print(selective_mask_t(image_tensor, mask_tensor, [5, 6, 8]))
| https://stackoverflow.com/questions/68929785/ |
How to fix ValueError: too many values to unpack (expected 2)? | Recently faced with such a problem: ValueError: too many values to unpack (expected 2).
import os
import natsort
from PIL import Image
import torchvision
import torch
import torch.optim as optim
from torchvision import transforms, models
from torch.utils.data import DataLoader, Dataset
import torch.nn as nn
import torch.nn.functional as F
root_dir = './images/'
class Col(Dataset):
def __init__(self, main_dir, transform):
self.main_dir = main_dir
self.transform = transform
all_images = self.all_img(main_dir = main_dir)
self.total_imges = natsort.natsorted(all_images)
def __len__(self):
return len(self.total_imges)
def __getitem__(self, idx):
img_loc = os.path.join(self.total_imges[idx])
image = Image.open(img_loc).convert("RGB")
tensor_image = self.transform(image)
return tensor_image
def all_img(self, main_dir):
img = []
for path, subdirs, files in os.walk(main_dir):
for name in files:
img.append(os.path.join(path, name))
return img
model = models.resnet18(pretrained=False)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 256),
nn.ReLU(),
nn.Dropout(p=0.3),
nn.Linear(256, 100),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(100,9))
# model.load_state_dict(torch.load('model.pth'))
for name, param in model.named_parameters():
if("bn" not in name):
param.requires_grad = False
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5457, 0.5457, 0.5457], std=[0.2342, 0.2342, 0.2342])
])
data = Col(main_dir=root_dir, transform=transform)
dataset = torch.utils.data.DataLoader(data, batch_size=130)
train_set, validate_set= torch.utils.data.random_split(dataset, [round(len(dataset)*0.7), (len(dataset) - round(len(dataset)*0.7))])
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
model.to(device)
def train(model, optimizer, loss_fn, train_set, validate_set, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_set:
optimizer.zero_grad()
inputs, targets = batch
inputs = inputs.to(device)
targets = targets.to(device)
output = model(inputs)
loss = loss_fn(output, targets)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_set.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in validate_set:
inputs, targets = batch
inputs = inputs.to(device)
output = model(inputs)
targets = targets.to(device)
loss = loss_fn(output,targets)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(validate_set.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
optimizer = optim.Adam(model.parameters(), lr=0.0001)
But the call to this function
train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
gives this error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_4828/634509595.py in <module>
----> 1 train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
/tmp/ipykernel_4828/1473922939.py in train(model, optimizer, loss_fn, train_set, validate_set, epochs, device)
6 for batch in train_set:
7 optimizer.zero_grad()
----> 8 inputs, targets = batch
9 inputs = inputs.to(device)
10 targets = targets.to(device)
ValueError: too many values to unpack (expected 2)
| The batch doesn't contain both the inputs and the targets. Your problem is that __getitem__ returns only tensor_image (which is presumably the inputs) and not whatever the targets should be.
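A sketch of the fix; the self.labels lookup below is a placeholder for wherever your labels actually live (a CSV column, the folder name, etc.):
def __getitem__(self, idx):
    img_loc = os.path.join(self.total_imges[idx])
    image = Image.open(img_loc).convert("RGB")
    tensor_image = self.transform(image)
    label = self.labels[idx]   # placeholder: fetch the target for this image
    return tensor_image, label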
| https://stackoverflow.com/questions/68930708/ |
TensorFlow vs PyTorch: Memory usage | I have PyTorch 1.9.0 and TensorFlow 2.6.0 in the same environment, and both recognizing the all GPUs.
I was comparing the performance of both, so I did this small simple test, multiplying large matrices (A and B, both 2000x2000) several times (10000x):
import numpy as np
import os
import time
def mul_torch(A,B):
# PyTorch matrix multiplication
os.environ['KMP_DUPLICATE_LIB_OK']='True'
import torch
A, B = torch.Tensor(A.copy()), torch.Tensor(B.copy())
A = A.cuda()
B = B.cuda()
start = time.time()
for i in range(10000):
C = torch.matmul(A, B)
torch.cuda.empty_cache()
print('PyTorch:', time.time() - start, 's')
return C
def mul_tf(A,B):
# TensorFlow Matrix Multiplication
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
with tf.device('GPU:0'):
A = tf.constant(A.copy())
B = tf.constant(B.copy())
start = time.time()
for i in range(10000):
C = tf.math.multiply(A, B)
print('TensorFlow:', time.time() - start, 's')
return C
if __name__ == '__main__':
A = np.load('A.npy')
B = np.load('B.npy')
n = 2000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
PT = mul_torch(A, B)
time.sleep(5)
TF = mul_tf(A, B)
As a result:
PyTorch: 19.86856198310852 s
TensorFlow: 2.8338065147399902 s
I was not expecting these results; I thought they would be similar.
Investigating the GPU performance, I noticed that both use the GPU at full capacity, but PyTorch uses a small fraction of the memory TensorFlow uses. That explains the processing-time difference, but I cannot explain the difference in memory usage. Is it something intrinsic to the methods, or is it my computer configuration? Regardless of the matrix size (at least for matrices larger than 1000x1000), these plateaus are the same.
Thank you for your help.
| It is because you are doing matrix multiplication in PyTorch but element-wise multiplication in TensorFlow. To do matrix multiplication in TF, use tf.matmul, or simply:
for i in range(10000):
C = A @ B
The @ operator does the same thing in both TF and torch. For a fair comparison you also have to call torch.cuda.synchronize() before stopping the clock (GPU kernels launch asynchronously) and move torch.cuda.empty_cache() out of the measured loop.
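For reference, the fair PyTorch timing could look like this sketch (reusing the variables from the question):
torch.cuda.synchronize()              # make sure pending work is done before timing
start = time.time()
for i in range(10000):
    C = A @ B                         # matrix multiplication, same op as tf.matmul
torch.cuda.synchronize()              # wait for the queued kernels before reading the clock
print('PyTorch:', time.time() - start, 's')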
With those fixes, the expected result is that TensorFlow's eager execution comes out slower than PyTorch.
Regarding the memory usage: TF by default claims all available GPU memory up front, so nvidia-smi on Linux (or, similarly, Task Manager on Windows) does not reflect the actual memory used by the operations.
| https://stackoverflow.com/questions/68935366/ |
Why does the activation function degrade the time performance of the CNN model? | I'm testing a CNN model's time and accuracy performance with some small changes. In this model, some activation layers between convolutions were skipped, so I added them. I noticed that the model with the added ELU activations has
performance 3.07 ms per image and 92.26% accuracy on the test set
and the model without them has
performance 3.52 ms per image and 92.34% accuracy on the test set.
The number of model parameters is the same in both cases. So my question is: why do the activations affect the model's time performance so much?
Here, for instance, is part of the code:
...
nn.Conv2d(28 * filters_multiplier, 28 * filters_multiplier, kernel_size=(3, 1), padding=(1, 0)),
nn.Conv2d(28 * filters_multiplier, 28 * filters_multiplier, kernel_size=(1, 3), padding=(0, 1)),
# nn.ELU(),
nn.Conv2d(28 * filters_multiplier, 28 * filters_multiplier, kernel_size=(3, 1), padding=(1, 0)),
nn.Conv2d(28 * filters_multiplier, 40 * filters_multiplier, kernel_size=(1, 3), padding=(0, 1)),
...
So the code with the activation commented out runs faster than with it uncommented.
UPD: I just tested with the ReLU() function, and it works as fast as the first model with no activations. So maybe the problem is just with ELU()?
| Neither ReLU nor ELU has learnable parameters, but they still require compute to execute.
ELU evaluates the exponential function for every input x < 0 (ELU(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise). The exponential is computationally expensive, and so your network is slower.
ReLU is computationally cheap because of the ease of the operation x[x < 0] = 0, so you don't really see a spike in time. This is one of the reasons why ReLU is such a common choice of activation function.
| https://stackoverflow.com/questions/68936595/ |
Syntax Error when calling the name of a model's layer in captum | I'm trying to use the gradCAM feature of captum for PyTorch. Previously, I asked the question of how to find the name of layers in pyTorch (which is done using model.named_modules()). However, since getting the names of the modules (my model name is 'model') I have tried to use it with LayerGradCam from captum and am receiving a syntax error - it seems to always happen on the 'number' within the model name.
I import the function with:
from captum.attr import LayerGradCam
I'm a bit of a Python newbie, so I've tried calling both:
layer_gc = LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
and:
layer_gc = captum.attr.LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
The error message I get is:
File "gradCAM.py", line 120
layer_gc = LayerGradCam(segmentation_wrapper, model.dl.backbone.layer4.2.conv3)
^
SyntaxError: invalid syntax
This is really stumping me, so any help is appreciated! Thanks in advance :)
| Array or list indexing is done using [] syntax, not . attribute access:
model.dl.backbone.layer4[2].conv3
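If you would rather keep the dotted name as a string (for example, as produced by model.named_modules()), one option, sketched with the names from the question, is to look the module up in a dict:
layers = dict(model.named_modules())
layer = layers['dl.backbone.layer4.2.conv3']   # numeric components are fine inside strings
layer_gc = LayerGradCam(segmentation_wrapper, layer)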
| https://stackoverflow.com/questions/68940728/ |
Gradients of loss with respect to random parameters are the same in pytorch | In the simple code below, I perform a simple linear operation on an input tensor of ones and compute its binary cross-entropy loss considering a vector of zeros as the expected output.
When computing the gradient of the loss with respect to w, the rows are the same and equal to the gradient with respect to b. This is counter-intuitive since w and b have random values. What is the reason?
n_input, n_output = 5, 3
x = torch.ones(n_input)
y = torch.zeros(n_output) # expected output
w = torch.randn(n_input, n_output, requires_grad=True)
b = torch.randn(n_output, requires_grad=True)
z = torch.matmul(x,w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
loss.backward()
print(w.grad)
print(b.grad)
Output:
tensor([[0.2179, 0.4337, 0.1959],
[0.2179, 0.4337, 0.1959],
[0.2179, 0.4337, 0.1959],
[0.2179, 0.4337, 0.1959],
[0.2179, 0.4337, 0.1959]])
tensor([0.2179, 0.4337, 0.1959])
| It's because your input is symmetric.
Imagine the issue from the point of view of a single output neuron (you have 3 of them in your setup):
every input is 1.0, so it makes no difference which input a given weight is attached to; the gradient contribution is identical across all of them, hence the identical rows.
If you diversify the input, everything works just fine:
n_input, n_output = 5, 3
x = torch.randn(n_input)
y = torch.ones(n_output)/2. # expected output
w = torch.randn(n_input, n_output, requires_grad=True)
b = torch.randn(n_output, requires_grad=True)
z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
loss.backward()
print(w.grad)
print(b.grad)
tensor([[-0.1939, 0.1657, -0.2501],
[ 0.0561, -0.0480, 0.0724],
[-0.3162, 0.2703, -0.4079],
[ 0.0947, -0.0809, 0.1221],
[-0.0140, 0.0120, -0.0181]])
tensor([-0.1263, 0.1080, -0.1630])
| https://stackoverflow.com/questions/68941009/ |
Pytorch - element 0 of tensors does not require grad and does not have a grad_fn - Adding and Multiplying matrices as NN step parameters | I am a newbie in PyTorch.
I have implemented a custom model (based on a research paper), I get this error when trying to train it.
element 0 of tensors does not require grad and does not have a grad_fn
Here is my code for model:
class Classification(tnn.Module):
def __init__(self, n_classes=7):
super(Classification, self).__init__()
self.feature_extractor1 = VGG16FeatureExtactor() # tnn.Module
self.feature_extractor2 = VGG16FeatureExtactor() # tnn.Module
self.conv = conv_layer_relu(chann_in=512, chann_out=512, k_size=1, p_size=0) # tnn.Sequential
self.attn1 = AttentionBlock1() # tnn.Module
self.attn2 = AttentionBlock2() # tnn.Module
# FC layers
self.linear1 = vggFCLayer(512, 256) # tnn.Sequential
self.linear2 = vggFCLayer(256, 128) # tnn.Sequential
# Final layer
self.final = tnn.Linear(128, n_classes)
def forward(self, x, x_lbp):
features1 = self.feature_extractor1(x)
features2 = self.feature_extractor2(x_lbp)
f3 = torch.add(features1, features2)
# Apply attention block1 to features1
attn1 = self.attn1(features1)
# Create mask using Attention Block 2
attn2 = self.attn2(features2)
mask = attn1 * attn2
add = f3 + mask
out = self.conv(add)
out = out.view(out.size(0), -1)
out = self.linear1(out)
out = self.linear2(out)
out = self.final(out)
return out
Here is my code for training:
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = optim.SGD(model.parameters(), lr=lr,momentum=0.9, nesterov=True)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min')
def train(self, epoch, trainloader):
self.model.train()
for batch_idx, (inputs, targets) in enumerate(trainloader):
lbp = lbp_transform(inputs)
self.optimizer.zero_grad()
with torch.no_grad():
outputs = self.model(inputs, lbp)
loss = self.criterion(outputs, targets)
loss.backward()
self.scheduler.step(loss)
utils.clip_gradient(self.optimizer, 0.1)
self.optimizer.step()
Edited
Here is Full error stacktrace:
RuntimeError Traceback (most recent call last)
<ipython-input-17-b4e6536e5301> in <module>()
1 modelTraining = train_vggwithattention.ModelTraining(model = model , criterion=criterion,optimizer=optimizer, scheduler=scheduler, use_cuda=True)
2 for epoch in range(start_epoch, num_epochs):
----> 3 modelTraining.train(epoch=epoch, trainloader=trainloader)
4 # PublicTest(epoch)
5 # writer.flush()
2 frames
/content/train_vggwithattention.py in train(self, epoch, trainloader)
45 loss = self.criterion(outputs, targets)
46 # loss.requires_grad = True
---> 47 loss.backward()
48 for name, param in self.model.named_parameters():
49 print(name, param.grad)
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
253 create_graph=create_graph,
254 inputs=inputs)
--> 255 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
256
257 def register_hook(self, hook):
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
150
151
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Any help is appreciated. Thanks!
| You are inferring the outputs under the torch.no_grad() context manager; this means the intermediate activations of the layers won't be saved, so backpropagation won't be possible.
Therefore, you must replace the following lines in your train function:
with torch.no_grad():
outputs = self.model(inputs, lbp)
with simply:
outputs = self.model(inputs, lbp)
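torch.no_grad() still has its place: wrap only the evaluation passes, where no backward call follows, e.g.:
self.model.eval()
with torch.no_grad():            # fine here: inference only, no backward pass
    outputs = self.model(inputs, lbp)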
| https://stackoverflow.com/questions/68941491/ |
How to move feature maps across images using slicing? | I am trying to implement the online algorithm of this paper, which is on video classification. This work moves 1/8 of channel feature maps from each image, into the next image, after each convolution operation. The image of the operation has been attached here -
While trying to implement the same, I have succeeded in extracting out the first 1/8 channel feature maps, but I don't know how to add them to the succeeding image. My code has been attached below -
import cv2
import gym
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
N = 1 # Batch Size
T = 5 # Time Steps. This means that there are 5 frames in the video
C = 3 # RGB Channels
H = 144 # Height
W = 144 # Width
foo = torch.randn(N*T, C, H, W)
print("Shape of foo = ", foo.shape)
#torch.Size([5, 3, 144, 144])
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 8, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
print("Shape of x = ", x.shape)
# torch.Size([5, 8, 140, 140])
shape_extract = x[:, :1,:,:]
print("Shape of extract = ", shape_extract.shape)
# torch.Size([5, 1, 140, 140])
# 1/8 of the channels have been extracted out from above. But how do I transfer these channel features to the next image?
return x
net = Net()
output = net(foo)
| Since your whole sequence is inside the batch dimension (N*T with N = 1 here), you can shift the frames with torch.roll along the first axis:
>>> rolled = x.roll(shifts=1, dims=0)
Going from this frame layout on axis=0:
[x_0, x_1, x_2, x_3, x_4]
to this one:
[x_4, x_0, x_1, x_2, x_3]
Then replacing the first frame by x_0, so the last frame does not wrap around:
>>> rolled[0] = x[0]
Resulting in this layout:
[x_0, x_0, x_1, x_2, x_3]
Then you can input tensor rolled into the next layer.
You can implement a custom layer to wrap this logic:
class ShiftLayer(nn.Module):
    def forward(self, x):
        out = x.roll(1, 0)   # shift one step along the time (batch) axis
        out[0] = x[0]        # keep the first frame instead of wrapping the last one
        return out
Then use it inside your model:
class Net(nn.Module):
def __init__(self):
super().__init__()
...
self.shift = ShiftLayer()
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.shift(x)
x = F.relu(self.conv2(x))
return x
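Note that the paper shifts only 1/8 of the channels and leaves the rest in place. A sketch of that variant, assuming N = 1 so the batch axis doubles as the time axis (PartialShiftLayer and its fraction argument are illustrative names, not from the paper's code):
class PartialShiftLayer(nn.Module):
    def __init__(self, fraction=8):
        super().__init__()
        self.fraction = fraction

    def forward(self, x):
        fold = x.size(1) // self.fraction   # number of channels to shift (C // 8)
        out = x.clone()
        out[1:, :fold] = x[:-1, :fold]      # frame t receives frame t-1's features
        return out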
| https://stackoverflow.com/questions/68943189/ |
How to fix AttributeError: 'tuple' object has no attribute 'to'? |
Today i faced with problem
AttributeError: 'tuple' object has no attribute 'to'
I read a data from csv file with 2 columns: Image (where the file path is) and finding (where the photo's label is)
Model:
model = models.resnet18(pretrained=False)
model.fc = nn.Sequential(nn.Linear(model.fc.in_features, 256),
nn.ReLU(),
nn.Dropout(p=0.3),
nn.Linear(256, 100),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(100,9))
# model.load_state_dict(torch.load('model.pth'))
for name, param in model.named_parameters():
if("bn" not in name):
param.requires_grad = False
Transforms:
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.5457, 0.5457, 0.5457], std=[0.2342, 0.2342, 0.2342])
])
Dataset class:
class Col(Dataset):
def __init__(self, csv, main_dir, transform):
self.df = pd.read_csv(csv)
self.main_dir = main_dir
self.transform = transform
def __len__(self):
return self.df.shape[0]
def __getitem__(self, idx):
image = transform(Image.open(self.df.Image[idx]).convert("RGB"))
label = self.df.Finding[idx]
return image, label
Prepairing Data:
data = Col(main_dir=root_dir,csv=csv_file, transform=transform)
dataset = torch.utils.data.DataLoader(data, batch_size=130)
train_set, validate_set= torch.utils.data.random_split(dataset, [round(len(dataset)*0.7), (len(dataset) - round(len(dataset)*0.7))])
Train func:
def train(model, optimizer, loss_fn, train_set, validate_set, epochs=20, device="cpu"):
for epoch in range(1, epochs+1):
training_loss = 0.0
valid_loss = 0.0
model.train()
for batch in train_set:
optimizer.zero_grad()
inputs, labels = batch
inputs = inputs.to(device)
labels = labels.to(device)
output = model(inputs)
loss = loss_fn(output, labels)
loss.backward()
optimizer.step()
training_loss += loss.data.item() * inputs.size(0)
training_loss /= len(train_set.dataset)
model.eval()
num_correct = 0
num_examples = 0
for batch in validate_set:
inputs, labels = batch
inputs = inputs.to(device)
output = model(inputs)
labels = labels.to(device)
loss = loss_fn(output, labels)
valid_loss += loss.data.item() * inputs.size(0)
correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
num_correct += torch.sum(correct).item()
num_examples += correct.shape[0]
valid_loss /= len(validate_set.dataset)
print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss,
valid_loss, num_correct / num_examples))
Optimazer:
optimizer = optim.Adam(model.parameters(), lr=0.0001)
After calling train func
train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
I've got this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_9156/634509595.py in <module>
----> 1 train(model, optimizer,torch.nn.CrossEntropyLoss(), train_set.dataset, validate_set.dataset, epochs=100, device=device)
/tmp/ipykernel_9156/2858123881.py in train(model, optimizer, loss_fn, train_set, validate_set, epochs, device)
8 inputs, labels = batch
9 inputs = inputs.to(device)
---> 10 labels = labels.to(device)
11 output = model(inputs)
12 loss = loss_fn(output, labels)
AttributeError: 'tuple' object has no attribute 'to'
If i try to call this
for batch in train_set.dataset:
inputs, labels = batch
print(labels)
then it will display many large tuples with labels.(like this)
('polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps', 'polyps')
How can I fix this? Help me, please.
| You need to encode your labels into integers (or one-hot encodings depending on your loss_fn).
You should have classes, a list of the 9 class names. If not, you can get it with self.df.Finding.unique().
You can then invert this list into a dictionary, essentially a label-name -> label-id mapping:
self.encode = {k: i for i, k in enumerate(classes)}
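For instance, built inside the posted Col class's __init__ (a sketch; sorting makes the label ids deterministic across runs):
def __init__(self, csv, main_dir, transform):
    self.df = pd.read_csv(csv)
    self.main_dir = main_dir
    self.transform = transform
    classes = sorted(self.df.Finding.unique())
    self.encode = {k: i for i, k in enumerate(classes)}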
Then in your dataset __getitem__ definition convert the label names to label ids with encode:
def __getitem__(self, idx):
image = transform(Image.open(self.df.Image[idx]).convert("RGB"))
label = self.encode[self.df.Finding[idx]]
return image, label
| https://stackoverflow.com/questions/68944323/ |
Count Unique elements in pytorch Tensor | Suppose I have the following tensor: y = torch.randint(0, 3, (10,)). How would you go about counting the 0's 1's and 2's in there?
The only way I can think of is by using collections.Counter(y) but was wondering if there was a more "pytorch" way of doing this. A use case for example would be when building the confusion matrix for predictions.
| You can use torch.unique with the return_counts option:
>>> x = torch.randint(0, 3, (10,))
tensor([1, 1, 0, 2, 1, 0, 1, 1, 2, 1])
>>> x.unique(return_counts=True)
(tensor([0, 1, 2]), tensor([2, 6, 2]))
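If the values are known to be non-negative class indices, torch.bincount is an alternative; with minlength it also reports zeros for absent classes, which is convenient when building a confusion matrix:
>>> x.bincount(minlength=3)
tensor([2, 6, 2])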
| https://stackoverflow.com/questions/68946273/ |