instruction | input | output | source
---|---|---|---|
How can I visualize a single image with only one convolutional layer and one pooling layer? | I wrote this sample code to show only a single image after passing it to my model. The model should have only one convolutional layer and one pooling layer. Or, put another way, how can I visualize a single image by passing it to a simple neural network that has one convolutional and one pooling layer?
import torch
import torch.nn as nn #creating neural network
from PIL import Image
from numpy import asarray
# Set up GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load the image
image = Image.open('./img.png')
# convert image to numpy array
data = asarray(image)
print(type(data))
print(data.shape)
Now building the architecture:
class ConvNet(nn.Module):
def __init__(self):
super().__init__()
#convolutional layer
self.layer = nn.Sequential(
nn.Conv2d(in_channels=3, out_channels=3, kernel_size=2, stride=1, padding=0),
nn.MaxPool2d(kernel_size=2, stride=2))
def forward(self, x):
out = self.layer(x)
return out
convnet = ConvNet().to(device) #set up for GPU if available
convnet
Pass the image to my model:
outputs = convnet(data)
imshow(outputs)
I got the error below:
TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_3184/1768392595.py in <module>
----> 1 outputs = convnet(data)
2 imshow(outputs)
TypeError: conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int)
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, Parameter, tuple, tuple, tuple, int)
I expect to be able to show the image after it has passed through this sample network.
| As GoodDeeds mentioned, the CNN expects the data to be of type Tensor. You have read the image using PIL and then converted it to a NumPy array, so you will need to convert the NumPy array to a Tensor using torch.from_numpy(data).
The code below will solve the issue:
import torch
import torch.nn as nn #creating neural network
from PIL import Image
from numpy import asarray
#Set up GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Here I am loading my image
# load the image
image = Image.open('./img.png')
# convert image to numpy array
data = asarray(image)
data=torch.from_numpy(data)
print(type(data))
print(data.shape)
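Note that converting to a tensor alone is not quite enough to actually run the forward pass and display the result: conv2d also expects a float tensor in (batch, channels, height, width) layout. A hedged continuation (assuming a 3-channel image as in the question, and matplotlib's imshow for display):
data = data.permute(2, 0, 1).unsqueeze(0).float().to(device)  # HWC uint8 -> NCHW float
outputs = convnet(data)
import matplotlib.pyplot as plt
# outputs has shape (1, 3, H', W'); move channels last and detach before plotting
# (the raw activations may need re-scaling to [0, 1] for a clean display)
plt.imshow(outputs.squeeze(0).permute(1, 2, 0).detach().cpu().numpy())
plt.show()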
| https://stackoverflow.com/questions/74452216/ |
How to split multi-dimensional arrays based on the unique indices of another array? | I have two torch tensors a and b:
import torch
torch.manual_seed(0) # for reproducibility
a = torch.rand(size = (5, 10, 1))
b = torch.tensor([3, 3, 1, 5, 3, 1, 0, 2, 1, 2])
I want to split the 2nd dimension of a (i.e. dim = 1 in Python's zero-based indexing) based on the unique values in b.
What I have tried so far:
# find the unique values and unique indices of b
unique_values, unique_indices = torch.unique(b, return_inverse = True)
# split a in where dim = 1, based on unique indices
l = torch.tensor_split(a, unique_indices, dim = 1)
I was expecting l to be a list of n number of tensors where n is the number of unique values in b. I was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1).
However, I get the following:
print(l)
(tensor([[[0.8198],
[0.9971],
[0.6984]],
[[0.7262],
[0.7011],
[0.2038]],
[[0.1147],
[0.3168],
[0.6965]],
[[0.0340],
[0.9442],
[0.8802]],
[[0.6833],
[0.7529],
[0.8579]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.9971],
[0.6984],
[0.5675]],
[[0.7011],
[0.2038],
[0.6511]],
[[0.3168],
[0.6965],
[0.9143]],
[[0.9442],
[0.8802],
[0.0012]],
[[0.7529],
[0.8579],
[0.6870]]]), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([], size=(5, 0, 1)), tensor([[[0.8198],
[0.9971]],
[[0.7262],
[0.7011]],
[[0.1147],
[0.3168]],
[[0.0340],
[0.9442]],
[[0.6833],
[0.7529]]]), tensor([], size=(5, 0, 1)), tensor([[[0.9971]],
[[0.7011]],
[[0.3168]],
[[0.9442]],
[[0.7529]]]), tensor([[[0.6984],
[0.5675],
[0.8352],
[0.2056],
[0.5932],
[0.1123],
[0.1535],
[0.2417]],
[[0.2038],
[0.6511],
[0.7745],
[0.4369],
[0.5191],
[0.6159],
[0.8102],
[0.9801]],
[[0.6965],
[0.9143],
[0.9351],
[0.9412],
[0.5995],
[0.0652],
[0.5460],
[0.1872]],
[[0.8802],
[0.0012],
[0.5936],
[0.4158],
[0.4177],
[0.2711],
[0.6923],
[0.2038]],
[[0.8579],
[0.6870],
[0.0051],
[0.1757],
[0.7497],
[0.6047],
[0.1100],
[0.2121]]]))
Why do I get empty tensors like tensor([], size=(5, 0, 1)) and how would I achieve what I want to achieve?
| From your description of the desired result:
I was also expecting the tensors to have the shape (5, number of elements corresponding to unique_values, 1).
I believe you are looking for the count (or frequency) of unique values, not the inverse indices: return_inverse gives, for every element of b, its index into the sorted unique values, so those are not split positions, and passing them to torch.tensor_split produces repeated and out-of-order boundaries, hence the empty slices you are seeing. If you want to keep using torch.unique, you can provide the return_counts argument combined with a call to torch.cumsum.
Something like this should work:
>>> unique_values, counts = torch.unique(b, return_counts=True)
>>> indices = torch.cumsum(counts, dim=0)
>>> splits = torch.tensor_split(a, indices[:-1], dim = 1)
Let's have a look:
>>> for x in splits:
... print(x.shape)
torch.Size([5, 1, 1])
torch.Size([5, 3, 1])
torch.Size([5, 2, 1])
torch.Size([5, 3, 1])
torch.Size([5, 1, 1])
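Note that this splits a into consecutive chunks whose sizes match the counts; if you also need each chunk to contain exactly the columns where b equals the corresponding unique value, you probably want to reorder a along dim = 1 first (an assumption about your end goal):
>>> splits = torch.tensor_split(a[:, b.argsort(), :], indices[:-1], dim = 1)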
| https://stackoverflow.com/questions/74462683/ |
Add loss in SRGAN | I want to add loss to SRGAN
https://github.com/leftthomas/SRGAN
in train.py
g_loss = generator_criterion(fake_out, fake_img, real_img)
Can I write a function myself like:
def ContentLoss(a, b):
result = 0
for x, y in zip(a, b):
shape = x.shape
k = np.prod(shape[0:])
diff = x - y
#l2 norm
diff = np.sqrt(np.sum(np.square(diff)))
diff = diff*diff
diff = diff / k
result = result + diff
return result
And add it to the original loss as follows:
a = ContentLoss(a,b)
g_loss = generator_criterion(fake_out, fake_img, real_img) + a
Is there a way to calculate the gradient of this loss during training?
| The project you linked uses PyTorch. Assuming that you are also using it, you can just implement your loss using PyTorch instead of numpy and you're covered.
import torch
import math
def ContentLoss(a, b):
result = 0
for x, y in zip(a, b):
shape = x.shape
k = math.prod(shape[0:]) # you can also use np, but not torch
diff = x - y
#l2 norm
# just replacing np with torch
diff = torch.sqrt(torch.sum(torch.square(diff)))
diff = diff*diff # torch differentiates through these as well
diff = diff / k
result = result + diff
return result
# I assume you only want gradients for a (are these your model outputs?)
# This is just test data anyway. If this is your model output,
# it will pass on the gradients to the weights.
a = [
torch.tensor([1.0, 1.0, 1.0], requires_grad=True),
torch.tensor([0.0, -1.0, 1.0], requires_grad=True),
torch.tensor([7.0, 6.0, -5.0], requires_grad=True),
]
b = [
torch.tensor([1.0, 0.0, 1.0]),
torch.tensor([1.0, -2.0, 3.0]),
torch.tensor([0.0, 0.0, 0.0]),
]
loss = ContentLoss(a, b)
loss.backward() # computes the gradients
for x in a:
print(f"a={a}, gradient={a.grad}")
If you need to keep numpy, you have to create a torch.autograd.Function and implement forward and backward: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html.
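For illustration, a minimal sketch of such a Function (a hypothetical example, not taken from the linked project): the forward pass may call numpy, and the backward pass supplies the gradient by hand.
import torch
import numpy as np
class NumpySquaredL2(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, y):
        diff = x - y
        ctx.save_for_backward(diff)
        # numpy is only used here; the result is wrapped back into a tensor
        value = np.sum(np.square(diff.detach().cpu().numpy()))
        return torch.tensor(value, dtype=x.dtype, device=x.device)
    @staticmethod
    def backward(ctx, grad_output):
        (diff,) = ctx.saved_tensors
        # d/dx sum((x - y)**2) = 2 * (x - y); no gradient is needed for y here
        return grad_output * 2.0 * diff, None
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = torch.tensor([0.0, 2.0, 5.0])
loss = NumpySquaredL2.apply(x, y)
loss.backward()
print(x.grad)  # tensor([ 2.,  0., -4.])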
| https://stackoverflow.com/questions/74471691/ |
One hidden layer MLP not training in pytorch | I tried to implement a simple gradient descent without using an optimizer but the following MLP doesn't train, as the loss is always around 2.30. I can't figure out what's wrong with my code. Any advice on what I am doing wrong will be appreciated. (Sorry if the structure of the code is weird, it was implemented in a notebook.)
import torch
import numpy as np
import torchvision.datasets as datasets
from torchvision.transforms import ToTensor
# download train and test set
train = datasets.MNIST(root='.data', train=True, download=True, transform=ToTensor())
test = datasets.MNIST(root='.data', train=False, download=True, transform=ToTensor())
# calculate mean and std
mean = train.data.double().mean()
std = train.data.double().std()
print(f"Mean: {mean}, std: {std}")
# standardize data
train.data = (train.data - mean) / std
test.data = (test.data - mean) / std
from torch.utils.data import DataLoader
train_loader1 = DataLoader(train, shuffle=True, batch_size=1)
train_loader2 = DataLoader(train, shuffle=True, batch_size=256)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
import torch.nn as nn
class SimpleNN(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten() # flatten the 28x28 images to 784 features vectors
self.in_to_hidden = nn.Linear(784, 784) # input to 784 neurons hidden layer
self.hidden_to_out = nn.Linear(784, 10) # 784 to 10 neuron output layer
self.softmax = nn.Softmax(dim=1) # make the output into a probability distribution
def forward(self, x):
activation_fn = nn.Tanh()
x = self.flatten(x)
x = activation_fn(self.in_to_hidden(x))
x = self.softmax(self.hidden_to_out(x))
return x
def train_single_epoch(self, data_loader, loss_fn, lr, device):
for x, y in data_loader:
x, y = x.to(device), y.to(device)
# calculate loss
pred = self(x)
loss = loss_fn(pred, y)
# backpropagate error and update weights
loss.backward()
with torch.no_grad(): # the gradient mustn't be calculated for the weight updates
for weights in self.parameters():
weights -= lr * weights.grad
weights.grad.zero_()
print(f"loss: {loss.item()}")
net = SimpleNN().to(device)
for i in range(2):
net.train_single_epoch(train_loader2, nn.CrossEntropyLoss(), 0.001, device)
| My first observation is that you're printing the wrong loss value. You are printing the last batch loss at the end of each epoch, which doesn't represent the model's performance for the given epoch.
The right way would be to accumulate the batches' losses and then compute the average across all batches at the end of the epoch.
I modified your train_single_epoch method to print the right epoch loss. I also added an attribute self.list_losses to record the losses and plot them later:
class SimpleNN(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten() # flatten the 28x28 images to 784 features vectors
self.in_to_hidden = nn.Linear(784, 784) # input to 784 neurons hidden layer
self.hidden_to_out = nn.Linear(784, 10) # 784 to 10 neuron output layer
self.softmax = nn.Softmax(dim=1) # make the output into a probability distribution
self.list_losses = []
def forward(self, x):
activation_fn = nn.Tanh()
x = self.flatten(x)
x = activation_fn(self.in_to_hidden(x))
x = self.softmax(self.hidden_to_out(x))
return x
def train_single_epoch(self, epoch, data_loader, loss_fn, lr, device):
epoch_losses = []
sample_counts = 0
for x, y in data_loader:
x, y = x.to(device), y.to(device)
# calculate loss
pred = self(x)
loss = loss_fn(pred, y)
epoch_losses.append(loss.item()*y.shape[0])
sample_counts += y.shape[0]
# backpropagate error and update weights
loss.backward()
with torch.no_grad(): # the gradient mustn't be calculated for the weight updates
for weights in self.parameters():
weights -= lr * weights.grad
weights.grad.zero_()
epoch_loss = sum(epoch_losses)/sample_counts
self.list_losses.append(epoch_loss)
print(f"[Epoch]: {epoch+1} \t----\t [loss]: {epoch_loss}")
And then I train for 100 epochs and plot the loss progress across epochs:
net = SimpleNN().to(device)
for epoch in range(100):
net.train_single_epoch(epoch, train_loader2, nn.CrossEntropyLoss(), 0.001, device)
import matplotlib.pyplot as plt
plt.plot(range(len(net.list_losses)), net.list_losses)
plt.show()
As you can see from the above plot, the loss is decreasing, even though quite slowly, but the model is learning.
The model you used here is a toy one (I guess for learning purposes), and that could explain the slow learning we are observing.
There are lots of things you can do to improve the learning performance of a neural network (increase the model size in terms of depth and layer width, change the activation function from tanh to relu, use a convolutional network, change the learning rate, ...), but that is out of the scope of a Stack Overflow post.
As an example of an improvement, I tried changing the learning rate from 0.001 to 0.01, and you can see the model is learning faster now:
net2 = SimpleNN().to(device)
for epoch in range(100):
net2.train_single_epoch(epoch, train_loader2, nn.CrossEntropyLoss(), 0.01, device)
plt.plot(range(len(net2.list_losses)), net2.list_losses)
plt.show()
| https://stackoverflow.com/questions/74471958/ |
AttributeError: 'numpy.float64' object has no attribute 'cpu' | I am trying to run BERT and train a model using PyTorch.
I am not sure why I am getting this error after finishing the first epoch.
I am using the code from this link
history = defaultdict(list)
best_accuracy = 0
for epoch in range(EPOCHS):
# Show details
print(f"Epoch {epoch + 1}/{EPOCHS}")
print("-" * 10)
train_acc, train_loss = train_epoch(
model,
train_data_loader,
loss_fn,
optimizer,
device,
scheduler,
len(df_train)
)
print(f"Train loss {train_loss} accuracy {train_acc}")
# Get model performance (accuracy and loss)
val_acc, val_loss = eval_model(
model,
val_data_loader,
loss_fn,
device,
len(df_val)
)
print(f"Val loss {val_loss} accuracy {val_acc}")
print()
history['train_acc'].append(train_acc.cpu())
history['train_loss'].append(train_loss.cpu())
history['val_acc'].append(val_acc.cpu())
history['val_loss'].append(val_loss.cpu())
# If we beat prev performance
if val_acc > best_accuracy:
torch.save(model.state_dict(), 'best_model_state.bin')
best_accuracy = val_acc
Here is the output and the error message
Image
It is the first time I have worked with PyTorch. Any ideas how to fix the error?
| I checked the Kaggle link and I see that there is no cpu() call there, unlike in the code you posted. It should simply be:
history['train_acc'].append(train_acc)
history['train_loss'].append(train_loss)
history['val_acc'].append(val_acc)
history['val_loss'].append(val_loss)
| https://stackoverflow.com/questions/74473271/ |
Pytorch gradient descent keeps sending me NaNs mean squared errors | I am trying to apply, within the framework of a course, a gradient descent to estimate a linear model. My code is the following :
model = torch.nn.Linear(1,1)
myModel = model(X)
ds = torch.utils.data.TensorDataset(X, Y)
dl = torch.utils.data.DataLoader(ds)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss = torch.nn.functional.mse_loss
for epoch in range(100):
for (Xb, yb) in dl:
yb_pred = model(Xb)
c_loss = loss(yb_pred, yb)
print(c_loss)
optimiser.zero_grad()
c_loss.backward()
optimiser.step()
Yet it keeps printing NaNs, which I do not understand. Have I made a mistake in the implementation? I get the following output (repeated numerous times):
tensor(nan, grad_fn=<MseLossBackward0>)
| There is nothing wrong with your code, but NaN values can be explained by exploding gradients, depending on the data X and Y. You can try a lower learning rate (1e-3 or 1e-4).
For instance, if you test with this toy linear example:
X = torch.randn(100, 1)
Y = X * 2 + 3
The loss will converge to 0 quickly.
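With a smaller learning rate, the only change needed in your original loop would be the optimizer line, for example:
optimiser = torch.optim.SGD(model.parameters(), lr=1e-4)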
| https://stackoverflow.com/questions/74476864/ |
Display variable label instead of value | I created two lists as dropdown menus in streamlit.
list = [2, 3, 5, 7, 11]
list1 = [1, 2, 5, 7, 11]
select_options = [list,list1]
dropdown = st.selectbox('Select Option', options=select_options)
Right now the dropdown menu displays 2,3,5,7,11 & 1,2,5,7,11. Is there a way to display the variable label instead so the dropdown menu would show list & list1?
| You can have two select boxes: the first selectbox makes the choice of which list you want to use, and the second selectbox lets you choose among the elements of the selected list.
Note: it's not a good idea to name a variable list because it shadows a Python built-in, so I renamed your lists to list1 and list2.
import streamlit as st
list1 = [2, 3, 5, 7, 11]
list2 = [1, 2, 5, 7, 11]
get_list = st.selectbox('Select List Option', ("List1", "List2"))
select_options = []
if get_list == "List1":
select_options = list1
elif get_list == "List2":
select_options = list2
dropdown = st.selectbox(f'Select Option of {get_list}:', select_options)
Output:
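A more compact variant of the same idea, if you prefer to avoid the if/elif chain (just a sketch using the same st.selectbox API, with hypothetical dict keys as the labels):
import streamlit as st
lists = {"list1": [2, 3, 5, 7, 11], "list2": [1, 2, 5, 7, 11]}
get_list = st.selectbox('Select List Option', list(lists))
dropdown = st.selectbox(f'Select Option of {get_list}:', lists[get_list])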
| https://stackoverflow.com/questions/74477755/ |
Unexpected calculation of number of trainable parameters (Pytorch) | Consider the following code
from torch import nn
from torchsummary import summary
from torchvision import models
model = models.efficientnet_b7(pretrained=True)
model.classifier[-1].out_features = 4 # because i have a 4-class problem; initially the output is 1000 classes
model.classifier = nn.Sequential(*model.classifier, nn.Softmax(dim=1)) # add softmax
# freeze features
for child in model.features:
for param in child.parameters():
param.requires_grad = False
When I run
model.classifier
I get the below (expected) output
which as per my calculations implies that the total trainable parameters should be (2560 + 1) * 4 output nodes = 10244 trainable params.
However, when i attempt to calculate the total number of trainable params by
summary(model, (3,128,128))
I get
and by
sum(p.numel() for p in model.parameters() if p.requires_grad)
I get
The 2,561,000, in both cases, comes from (2560 + 1) * 1000 classes.
But why does it still consider 1000 classes?
| Resetting an attribute of an initialized layer does not necessarily re-initialize it with the newly-set attribute. What you need is model.classifier[-1] = nn.Linear(2560, 4).
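A short sketch of that fix applied to the code from the question (assuming the same freezing logic; the printed count is the classifier's parameters only):
from torch import nn
from torchvision import models
model = models.efficientnet_b7(pretrained=True)
# replace the whole Linear layer instead of mutating its out_features attribute
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 4)
model.classifier = nn.Sequential(*model.classifier, nn.Softmax(dim=1))
for child in model.features:
    for param in child.parameters():
        param.requires_grad = False
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # 10244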
| https://stackoverflow.com/questions/74479801/ |
TorchVision using pretrained weights for entire model vs backbone | TorchVision Detection models have a weights and a weights_backbone parameter. Does using pretrained weights imply that the model uses pretrained weights_backbone under the hood? I am training a RetinaNet model and am unsure which of the two options I should use and what the differences are.
| The difference is pretty simple: you can either choose to do transfer learning on the backbone only or on the whole network.
RetinaNet from TorchVision has a ResNet50 backbone. With the weight enums imported, you should be able to do both of:
from torchvision.models.detection import retinanet_resnet50_fpn, RetinaNet_ResNet50_FPN_Weights
from torchvision.models import ResNet50_Weights
retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)
retinanet_resnet50_fpn(weights_backbone=ResNet50_Weights.IMAGENET1K_V1)
As implied by their names, the two sets of weights are different. The former were trained on COCO (object detection) while the latter were trained on ImageNet (classification).
To answer your question, pretrained weights implies that the whole network, including the backbone weights, is initialized. However, I don't think that it uses weights_backbone under the hood.
| https://stackoverflow.com/questions/74489594/ |
Enabling CAFFE2 while building pytorch from source on Windows command prompt | So, I was training a model using YOLOv7 on the Windows platform and
C:\Users\LENOVO>python train.py --weights yolov7.pt --data "data/custom.yaml" --workers 4 --batch-size 4 --img 416 --cfg cfg/training/yolov7.yaml --name yolov7 --hyp data/hyp.scratch.p5.yaml
After running the above command, the error stack trace below showed up in my command prompt on Windows. My question is:
How do I follow the suggestion in the error below? How do I enable BUILD_CAFFE2=1 while building PyTorch on my Windows machine? Not using Conda, of course; on my Windows command prompt only.
I installed PyTorch from source using the following instructions:
https://github.com/pytorch/pytorch#from-source
I installed Caffe2 using commands from this source:
https://caffe2.ai/docs/getting-started.html?platform=windows&configuration=compile
But the following error still shows while training my model.
I just need to know the command for enabling BUILD_CAFFE2=1 in the Windows command prompt.
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\__init__.py:5: UserWarning: Caffe2 support is not fully enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn("Caffe2 support is not fully enabled in this PyTorch build. "
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py:17: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py:9: UserWarning: Caffe2 support is not enabled in this PyTorch build. Please enable Caffe2 by building PyTorch from source with `BUILD_CAFFE2=1` flag.
warnings.warn('Caffe2 support is not enabled in this PyTorch build. '
Traceback (most recent call last):
File "C:\Users\LENOVO\train.py", line 8, in <module>
from caffe2.python import core, scope
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\python\__init__.py", line 7, in <module>
from caffe2.proto import caffe2_pb2
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py", line 15, in <module>
from caffe2.proto import caffe2_pb2, metanet_pb2, torch_pb2
ImportError: cannot import name 'metanet_pb2' from partially initialized module 'caffe2.proto' (most likely due to a circular import) (C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\caffe2\proto\__init__.py)
| I have solved the issue by setting BUILD_CAFFE2=1 on the command prompt before installing PyTorch, with the following command:
set BUILD_CAFFE2=1
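For completeness, a rough sketch of the full sequence (assuming you are building from a cloned PyTorch source tree, as in the from-source instructions linked above; the exact build command may differ for your setup):
set BUILD_CAFFE2=1
python setup.py install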
| https://stackoverflow.com/questions/74492524/ |
Why is my Mac M1 GPU not working in PyTorch? | python train_torch.py --train --max_epochs 2
PyTorch version:1.12.1
Built with MPS device support: True
MPS device available: True
INFO:root:Namespace(chat=False, sentiment='0', model_params='model_chp/model_-last.ckpt', train=True, max_len=32, batch_size=40, lr=5e-05, warmup_ratio=0.1, logger=True, checkpoint_callback=True, default_root_dir=None, gradient_clip_val=0, process_position=0, num_nodes=1, num_processes=1, gpus=None, auto_select_gpus=False, tpu_cores=None, log_gpu_memory=None, progress_bar_refresh_rate=None, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=1, max_epochs=2, min_epochs=None, max_steps=None, min_steps=None, limit_train_batches=1.0, limit_val_batches=1.0, limit_test_batches=1.0, limit_predict_batches=1.0, val_check_interval=1.0, flush_logs_every_n_steps=100, log_every_n_steps=50, accelerator=None, sync_batchnorm=False, precision=32, weights_summary='top', weights_save_path=None, num_sanity_val_steps=2, truncated_bptt_steps=None, resume_from_checkpoint=None, profiler=None, benchmark=False, deterministic=False, reload_dataloaders_every_epoch=False, auto_lr_find=False, replace_sampler_ddp=True, terminate_on_nan=False, auto_scale_batch_size=False, prepare_data_per_node=True, plugins=None, amp_backend='native', amp_level='O2', distributed_backend=None, automatic_optimization=None, move_metrics_to_cpu=False, enable_pl_optimizer=None, multiple_trainloader_mode='max_size_cycle', stochastic_weight_avg=False)
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
-> AttributeError: Can't pickle local object 'get_cosine_schedule_with_warmup..lr_lambda'
To do deep learning, I ran
python train_torch.py --train --max_epochs 2
successfully.
However, the following error appears and it says accelerator=None.
I don't know what to do. I ask for your help.
I was able to see the tracing in Google Colab.
| I think the PyTorch version you are using does not have MPS acceleration; you should download the nightly version of PyTorch. PyTorch version 1.13 does have MPS acceleration.
It will have some issues because it is not the stable version.
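If you want to verify that the MPS backend itself is usable from plain PyTorch (independently of the Lightning trainer flags), a small check along these lines should work on a build with MPS support:
import torch
print(torch.backends.mps.is_available())  # True if the MPS device can be used
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.ones(3, device=device)
print(x.device)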
| https://stackoverflow.com/questions/74497849/ |
How to avoid to_tensor() clipping values to 1 | I have a black and white image that needs to be converted into a tensor.
The shape of the image is (400, 600, 3).
Originally, the values of the image have max = 255; for example:
org_img[0]
# result:
array([[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]], dtype=uint8)
but after I convert it into a tensor using to_tensor(), it clips my values to 1.
torchvision.transforms.functional.to_tensor(org_img[0])
# result:
tensor([[[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
...,
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]]])
This makes my image all black. How can I avoid this problem?
Thank you.
| The images are not clipped but instead re-scaled from uint8 0..255 to float32 [0, 1] by ToTensor. Libraries such as matplotlib can naturally handle RGB images with single-precision pixel values within [0, 1] after re-scaling.
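If you really need to keep the original 0..255 range instead, a couple of sketches (assuming org_img is the uint8 H x W x C array from the question, and a reasonably recent torchvision for pil_to_tensor):
import torch
import torchvision.transforms.functional as TF
t = torch.from_numpy(org_img).permute(2, 0, 1)  # stays uint8, values 0..255
# or, starting from the PIL image directly:
# t = TF.pil_to_tensor(image)  # also keeps uint8 0..255
# or simply undo the re-scaling after to_tensor:
# t = TF.to_tensor(org_img) * 255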
| https://stackoverflow.com/questions/74498111/ |
Visualising the last layer node embeddings of a model in torch geometric | I'm doing my first graph convolutional neural network project with torch_geometric. I want to visualize the last layer node embeddings of my model and don't know how I should get it.
I trained my model on the CiteSeer dataset. You can get the full dataset as easily as this:
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures
dataset = Planetoid(root="data/Planetoid", name='CiteSeer', transform=NormalizeFeatures())
My model is a simple two-layered model as this:
class GraphClassifier(torch.nn.Module):
def __init__(self, dataset, hidden_dim):
super(GraphClassifier, self).__init__()
self.conv1 = GCNConv(dataset.num_features, hidden_dim)
self.conv2 = GCNConv(hidden_dim, dataset.num_classes)
def forward(self, data):
x, edge_index = data.x, data.edge_index
x = F.relu(self.conv1(x, edge_index))
x = F.relu(self.conv2(x, edge_index))
return F.log_softmax(x, dim=1)
If you print my model you will get this:
model = GraphClassifier(dataset, 64)
print(model)
>>>
GraphClassifier(
(conv1): GCNConv(3703, 64)
(conv2): GCNConv(64, 6)
)
My model is trained successfully. I only want to visualize its last-layer node embeddings. To visualize that I have this function to use:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import torch
# emb: (nNodes, hidden_dim)
# node_type: (nNodes,). Entries are torch.int64 ranged from 0 to num_class - 1
def visualize(emb: torch.tensor, node_type: torch.tensor):
z = TSNE(n_components=2).fit_transform(emb.detach().cpu().numpy())
plt.figure(figsize=(10,10))
plt.scatter(z[:, 0], z[:, 1], s=70, c=node_type, cmap="Set2")
plt.show()
I don't know how I should extract emb and node_type from my model to give to the visualize function. emb is the last layer of node embeddings of the model. How can I get these from my model?
| It is solved by changing the model to this:
class GraphClassifier(torch.nn.Module):
def __init__(self, dataset, hidden_dim):
super(GraphClassifier, self).__init__()
self.conv1 = GCNConv(dataset.num_features, hidden_dim)
self.conv2 = GCNConv(hidden_dim, dataset.num_classes)
def forward(self, data, do_visualize=False):
x, edge_index = data.x, data.edge_index
x = F.relu(self.conv1(x, edge_index))
x = F.relu(self.conv2(x, edge_index))
if do_visualize: # NEW LINE
visualize(x, data.y) # NEW LINE
return F.log_softmax(x, dim=1)
Now if you call the forward function with do_visualize=True it will visualize, like this:
model = GraphClassifier(dataset, hidden_dim)
model.to(device)
model(dataset[0].to(device), do_visualize=True)
| https://stackoverflow.com/questions/74498230/ |
How can I convert this TensorFlow code to PyTorch? | How can I convert this TensorFlow code to PyTorch?
# tensorflow
Conv2D(
self.filter_1, (1, 64),
activation='elu',
padding="same",
kernel_constraint=max_norm(2., axis=(0, 1, 2))
)
nn.Sequential(
nn.Conv2D(16, (1, 64),
padding="same",
kernel_constraint=max_norm(2., axis=(0, 1, 2)),
nn.ELU()
)
| You need two things:
You need to know what the input channel size is. In your example, you've only given the number of output channels, 16. Keras calculates this on its own during runtime, but you have to specify input channels when making torch nn.Conv2d.
You need to implement the max_norm constraint on the conv kernel yourself.
With this in mind, let's write a simple wrapper around the nn.Conv2d, that just enforces the constraint on the weights each time forward is called:
import torch
from torch import nn
import torch.nn.functional as F
class Conv2D_Norm_Constrained(nn.Conv2d):
def __init__(self, max_norm_val, norm_dim, **kwargs):
super().__init__(**kwargs)
self.max_norm_val = max_norm_val
self.norm_dim = norm_dim
def get_constrained_weights(self, epsilon=1e-8):
norm = self.weight.norm(2, dim=self.norm_dim, keepdim=True)
return self.weight * (torch.clamp(norm, 0, self.max_norm_val) / (norm + epsilon))
def forward(self, input):
return F.conv2d(input, self.get_constrained_weights(), self.bias, self.stride, self.padding, self.dilation, self.groups)
Assuming your input channels are something like 8, we can write:
nn.Sequential(
Conv2D_Norm_Constrained(in_channels=8, out_channels=16, kernel_size=(1, 64), padding="same", max_norm_val=2.0, norm_dim=(0, 1, 2)),
nn.ELU()
)
| https://stackoverflow.com/questions/74498770/ |
LibTorch (PyTorch C++) LNK2001 errors | I was following the tutorial in LibTorch here.
With the following changes:
example-app => Ceres
example-app.cpp => main.cxx
Everything worked until the CMake command cmake --build . --config Release.
It produced the following errors:
main.obj : error LNK2001: unresolved external symbol __imp___tls_index_?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA [D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\build\Ceres.vcxproj]
main.obj : error LNK2001: unresolved external symbol __imp___tls_offset_?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA [D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\build\Ceres.vcxproj]
D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\build\Release\Ceres.exe : fatal error LNK1120: 2 unresolved externals [D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\build\Ceres.vcxproj]
I don't believe these are from the changes I placed, since the problem is with the linking.
I also am trying to replicate this directly into Visual Studio. I am using Visual Studio 17 2022 which the LibTorch extension is not yet compatible with (Visual Studio 16 2019 is no longer available for install from the website).
The replication is through a blank C++ template (no starting files). And I have set the following macros:
LibTorchTarget = CPU specifies libtorch for CPU is to be used (useful for other macros
LibTorchDir = C:/libtorch/ directory where the libtorch installation(s) can be found (for multiple installations)
LibTorchInstall = $(LibTorchDir)libtorch_$(LibTorchTarget)/ expresses to C:/libtorch/libtorch_CPU/
LibTorchInclude = $(LibTorchInstall)include/ expresses to C:/libtorch/libtorch_CPU/include/
LibTorchLib = $(LibTorchInstall)lib/ expresses to C:/libtorch/libtorch_CPU/lib/
And have put the Include and Lib macros at their respective VC++ Directories positions. As well as $(LibTorchLib)*.lib (C:/libtorch/libtorch_CPU/lib/*.lib) in the Linker > Input > Additional Dependencies to specify all the .libs for linking (prevents a lot of LNK2009 errors).
And lastly, I have put start xcopy /s "$(LibTorchLib)*.dll" "$(OutDir)" /Y /E /D /R command at the Build Events > Pre-Link Event > Command Line to replicate the if clause in the CMakeLists.txt in the tutorial (apparently to avoid memory errors).
Result is same exact error with a final LNK1120 error:
Error LNK2001 unresolved external symbol __imp___tls_index_?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA Ceres D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\main.obj 1
Error LNK2001 unresolved external symbol __imp___tls_offset_?init@?1??lazy_init_num_threads@internal@at@@YAXXZ@4_NA Ceres D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\src\Ceres\main.obj 1
Error LNK1120 2 unresolved externals Ceres D:\Silverous Black\CPE42S2-CPE42S2\CPE 406\ProjectDumagan\out\Debug_64\Ceres\Ceres.exe 1
I don't exactly understand the reason for the LNK errors, so if anyone could help that'll be really nice. Thank you in advance.
| See: Updating to Visual Studio 17.4.0 Yields linker errors related to TLS
You most likely need to rebuild PyTorch after the MSVC update.
| https://stackoverflow.com/questions/74501884/ |
Why does pytorch's transforms.Normalize() not do the described action from the documentation? | According to the documentation normalize is supposed to do (tensor - mean)/std, but it doesn't. Why?
Docs:
Normalize a tensor image with mean and standard deviation.
Given mean: (mean[1],...,mean[n]) and std: (std[1],..,std[n]) for n
channels, this transform will normalize each channel of the input
torch.*Tensor i.e.,
output[channel] = (input[channel] - mean[channel]) / std[channel]
a = T.Tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
m = a.mean()
std = a.std()
print((m, std))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).mean())
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).std())
a = (a - m)/std
m = a.mean()
std = a.std()
print((m, std))
Output:
(tensor(5.), tensor(2.7386))
tensor([[[[1.0150, 1.3802, 1.7453],
[2.1105, 2.4756, 2.8408],
[3.2059, 3.5711, 3.9362]]]])
tensor(2.4756)
tensor(1.0000)
(tensor(0.), tensor(1.0000))
The std is correct, but the mean is something random. What gives?
| The mean value of your tensor is stored in the variable m, not mean.
After replacing m with mean on line 2:
a = T.Tensor([[[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
mean = a.mean()
std = a.std()
print((mean, std))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)))
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).mean())
print(transforms.Normalize(mean, std)(T.unsqueeze(a, 0)).std())
a = (a - mean)/std
mean = a.mean()
std = a.std()
print((mean, std))
Output:
(tensor(5.), tensor(2.7386))
tensor([[[[-1.4606, -1.0954, -0.7303],
[-0.3651, 0.0000, 0.3651],
[ 0.7303, 1.0954, 1.4606]]]])
tensor(0.)
tensor(1.0000)
(tensor(0.), tensor(1.0000))
| https://stackoverflow.com/questions/74505520/ |
Using detectron2 for training, pytorch's runtime error 'default process group has not been initialized' | Instructions To Reproduce the Issue:
Full runnable code or full changes you made:
I tried to train DeepLabV3+ architecture with a customized config having ResNet18 (converted to .pkl from https://download.pytorch.org/models/resnet18-f37072fd.pth) as the backbone:
_BASE_: "detectron2/projects/DeepLab/configs/Cityscapes-SemanticSegmentation/deeplab_v3_plus_R_103_os16_mg124_poly_90k_bs16.yaml"
MODEL:
WEIGHTS: "r18.pkl"
BACKBONE:
NAME: "build_resnet_backbone"
RESNETS:
DEPTH: 18
RES2_OUT_CHANNELS: 64
STEM_OUT_CHANNELS: 64
RES5_DILATION: 1
NUM_GROUPS: 1
ROI_HEADS:
NUM_CLASSES: 1
on MoNuSeg 2020 dataset.
What exact command you run:
cfg = get_cfg()
add_deeplab_config(cfg)
cfg.merge_from_file('/kaggle/input/deeplab-v3-plus-models/deeplab_v3_plus_R_18_os16_mg124_poly_90k_bs16.yaml')
cfg.DATASETS.TRAIN = ('monuseg_train',)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 16
cfg.SOLVER.BASE_LR = 0.01
cfg.SOLVER.MAX_ITER = 300
cfg.SOLVER.LR_SCHEDULER_NAME = 'WarmupMultiStepLR'
cfg.SOLVER.STEPS = []
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
I also used the following lines to also try Trainer code (from DeepLab project train_net.py):
cfg.SOLVER.LR_SCHEDULER_NAME = 'WarmupPolyLR'
trainer = Trainer(cfg)
Full logs or other relevant observations:
[11/20 16:16:06 d2.engine.defaults]: Model:
SemanticSegmentor(
(backbone): ResNet(
(stem): BasicStem(
(conv1): Conv2d(
3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
(norm): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(res2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(res3): Sequential(
(0): BasicBlock(
(shortcut): Conv2d(
64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv1): Conv2d(
64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
(norm): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(res4): Sequential(
(0): BasicBlock(
(shortcut): Conv2d(
128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv1): Conv2d(
128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(res5): Sequential(
(0): BasicBlock(
(shortcut): Conv2d(
256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
(norm): SyncBatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv1): Conv2d(
256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
(norm): SyncBatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): Conv2d(
512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
(sem_seg_head): DeepLabV3PlusHead(
(decoder): ModuleDict(
(res2): ModuleDict(
(project_conv): Conv2d(
64, 48, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): SyncBatchNorm(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(fuse_conv): Sequential(
(0): Conv2d(
304, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Conv2d(
256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(res5): ModuleDict(
(project_conv): ASPP(
(convs): ModuleList(
(0): Conv2d(
512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): Conv2d(
512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(2): Conv2d(
512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(12, 12), dilation=(12, 12), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(3): Conv2d(
512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(18, 18), dilation=(18, 18), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(4): Sequential(
(0): AvgPool2d(kernel_size=(16, 32), stride=1, padding=0)
(1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
)
)
(project): Conv2d(
1280, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
(norm): SyncBatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(fuse_conv): None
)
)
(predictor): Conv2d(256, 19, kernel_size=(1, 1), stride=(1, 1))
(loss): DeepLabCE(
(criterion): CrossEntropyLoss()
)
)
)
[11/20 16:16:07 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [RandomCrop(crop_type='absolute', crop_size=[512, 1024]), ResizeShortestEdge(short_edge_length=(512, 768, 1024, 1280, 1536, 1792, 2048), max_size=4096, sample_style='choice'), RandomFlip()]
[11/20 16:16:07 d2.data.build]: Using training sampler TrainingSampler
[11/20 16:16:07 d2.data.common]: Serializing the dataset using: <class 'detectron2.data.common.NumpySerializedList'>
[11/20 16:16:07 d2.data.common]: Serializing 37 elements to byte tensors and concatenating them all ...
[11/20 16:16:07 d2.data.common]: Serialized dataset takes 0.01 MiB
[11/20 16:16:12 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone:
| Names in Model | Names in Checkpoint | Shapes |
|:------------------|:----------------------------------------------------------------------------------|:------------------------------------------|
| res2.0.conv1.* | res2.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (64,) (64,) (64,) (64,) (64,64,3,3) |
| res2.0.conv2.* | res2.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (64,) (64,) (64,) (64,) (64,64,3,3) |
| res2.1.conv1.* | res2.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (64,) (64,) (64,) (64,) (64,64,3,3) |
| res2.1.conv2.* | res2.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (64,) (64,) (64,) (64,) (64,64,3,3) |
| res3.0.conv1.* | res3.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (128,) (128,) (128,) (128,) (128,64,3,3) |
| res3.0.conv2.* | res3.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (128,) (128,) (128,) (128,) (128,128,3,3) |
| res3.0.shortcut.* | res3.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (128,) (128,) (128,) (128,) (128,64,1,1) |
| res3.1.conv1.* | res3.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (128,) (128,) (128,) (128,) (128,128,3,3) |
| res3.1.conv2.* | res3.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (128,) (128,) (128,) (128,) (128,128,3,3) |
| res4.0.conv1.* | res4.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,128,3,3) |
| res4.0.conv2.* | res4.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,256,3,3) |
| res4.0.shortcut.* | res4.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,128,1,1) |
| res4.1.conv1.* | res4.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,256,3,3) |
| res4.1.conv2.* | res4.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (256,) (256,) (256,) (256,) (256,256,3,3) |
| res5.0.conv1.* | res5.0.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,256,3,3) |
| res5.0.conv2.* | res5.0.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,512,3,3) |
| res5.0.shortcut.* | res5.0.shortcut.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,256,1,1) |
| res5.1.conv1.* | res5.1.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,512,3,3) |
| res5.1.conv2.* | res5.1.conv2.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (512,) (512,) (512,) (512,) (512,512,3,3) |
| stem.conv1.* | stem.conv1.{norm.bias,norm.running_mean,norm.running_var,norm.weight,weight} | (64,) (64,) (64,) (64,) (64,3,7,7) |
[11/20 16:16:14 d2.engine.train_loop]: Starting training from iteration 0
ERROR [11/20 16:16:22 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 149, in train
self.run_step()
File "/opt/conda/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 494, in run_step
self._trainer.run_step()
File "/opt/conda/lib/python3.7/site-packages/detectron2/engine/train_loop.py", line 274, in run_step
loss_dict = self.model(data)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/detectron2/modeling/meta_arch/semantic_seg.py", line 108, in forward
features = self.backbone(images.tensor)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 445, in forward
x = self.stem(x)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 356, in forward
x = self.conv1(x)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/detectron2/layers/wrappers.py", line 117, in forward
x = self.norm(x)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 731, in forward
world_size = torch.distributed.get_world_size(process_group)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 867, in get_world_size
return _get_group_size(group)
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 325, in _get_group_size
default_pg = _get_default_group()
File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 430, in _get_default_group
"Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
[11/20 16:16:22 d2.engine.hooks]: Total training time: 0:00:08 (0:00:00 on hooks)
[11/20 16:16:22 d2.utils.events]: iter: 0 lr: N/A max_mem: 7348M
Environment:
Kaggle platform with both accelerators: GPU T4 x2 and GPU P100:
---------------------- -------------------------------------------------------------------------------
sys.platform linux
Python 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0]
numpy 1.21.6
detectron2 0.6 @/opt/conda/lib/python3.7/site-packages/detectron2
Compiler GCC 9.4
CUDA compiler CUDA 11.0
detectron2 arch flags 7.5
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.11.0 @/opt/conda/lib/python3.7/site-packages/torch
PyTorch debug build False
GPU available Yes
GPU 0,1 Tesla T4 (arch=7.5)
Driver version 470.82.01
CUDA_HOME /usr/local/cuda
Pillow 9.1.1
torchvision 0.12.0 @/opt/conda/lib/python3.7/site-packages/torchvision
torchvision arch flags 3.7, 6.0, 7.0, 7.5
fvcore 0.1.5.post20220512
iopath 0.1.9
cv2 4.5.4
---------------------- -------------------------------------------------------------------------------
PyTorch built with:
- GCC 9.4
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- CUDA Runtime 11.0
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_70,code=compute_70;-gencode;arch=compute_75,code=compute_75
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.0, CUDNN_VERSION=8.0.5, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
Testing NCCL connectivity ... this should not hang.
NCCL succeeded.
Google Colab:
---------------------- ----------------------------------------------------------------
sys.platform linux
Python 3.7.15 (default, Oct 12 2022, 19:14:55) [GCC 7.5.0]
numpy 1.21.6
detectron2 0.6 @/usr/local/lib/python3.7/dist-packages/detectron2
Compiler GCC 7.5
CUDA compiler CUDA 11.2
detectron2 arch flags 7.5
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.12.1+cu113 @/usr/local/lib/python3.7/dist-packages/torch
PyTorch debug build False
GPU available Yes
GPU 0 Tesla T4 (arch=7.5)
Driver version 460.32.03
CUDA_HOME /usr/local/cuda
Pillow 7.1.2
torchvision 0.13.1+cu113 @/usr/local/lib/python3.7/dist-packages/torchvision
torchvision arch flags 3.5, 5.0, 6.0, 7.0, 7.5, 8.0, 8.6
fvcore 0.1.5.post20220512
iopath 0.1.9
cv2 4.6.0
---------------------- ----------------------------------------------------------------
PyTorch built with:
- GCC 9.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.3
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.3.2 (built against CUDA 11.5)
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
| It was due to the SyncBatchNorm layers, which I knew can cause problems on a single GPU as a drawback in PyTorch, but seemingly I was not able to find that in the log!
Stated here as a workaround:
print the config
find all keys that have a value of "SyncBN" or similar
Edit the config file or code to set these values to "BN" instead, for example as sketched below
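In config terms, that roughly amounts to something like the following (the exact key names are an assumption based on where the DeepLab project yaml sets SyncBN; verify them against your own printed config):
cfg.MODEL.RESNETS.NORM = "BN"
cfg.MODEL.SEM_SEG_HEAD.NORM = "BN"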
| https://stackoverflow.com/questions/74510433/ |
Pytorch tensor broadcasting along axis | a=torch.rand(20)
b=torch.rand(20, 20)
a+b # Works!
a=torch.rand(32, 20)
b=torch.rand(32, 20, 20)
a+b # Doesn't work!
Does anyone know how broadcasting in the first example could be generalized to the second example along axis 0 with no for loops?
I tried normal addition but broadcasting in Pytorch doesn't seem to work this way!
| The dimensions in the second case are incompatible. You need to insert a unitary dimension into a to achieve the same results as the first case: a.unsqueeze(1) + b.
PyTorch follows the same broadcasting rules as NumPy. See https://numpy.org/doc/stable/user/basics.broadcasting.html
See specifically the first paragraph of the General Broadcasting Rules section.
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing (i.e. rightmost) dimensions and works its way left. Two dimensions are compatible when
they are equal, or
one of them is 1
Furthermore
Arrays do not need to have the same number of dimensions. For example, if you have a 256x256x3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values.
This is effectively saying that if we line up the shapes starting from the right, and then insert ones in any blank spots all the dimensions should be compatible.
Considering the first case. If we line up the tensor shapes starting from the right we have
a: 20
b: 20 x 20
and insert one into the missing spot
a: 1 x 20
b: 20 x 20
we see that the shapes are compatible because the first dimension has a 1 and the second dimension has both values equal. The output shape of the broadcasted operation is 20 x 20, taking the first 20 from the first dimension of b.
Considering the second case, if we try to do the same
a: 32 x 20
b: 32 x 20 x 20
after inserting one into the missing spot we have
a: 1 x 32 x 20
b: 32 x 20 x 20
!!! These shapes are incompatible since the second dimension of a is 32 and the second dimension of b is 20 (since 32 != 20 and neither is equal to 1).
For the second example, one way you could make these shapes compatible would be to reshape a so that it has shape 32 x 1 x 20. I.e. insert an explicit unitary dimension in the middle. This could be done with any of the three methods.
a.reshape(32, 1, 20)+b
or equivalently
a.unsqueeze(1)+b
or equivalently
a[:, None, :]+b
| https://stackoverflow.com/questions/74511803/ |
Slicing a tensor with tensor | I want to make a tensor of moving windows. I'm using a list comprehension, but this is sequential, making it extremely slow.
weight_list = [w[:, :, :, i : i + self.l_c] for i in range(n)]
I want to find
weight = torch.for_example.index_slice(w, start_indices=n, slice_length=self.l_c, dim=-1)
I've seen methods of "indexing". But not slicing. Is there a method for this?
| It seems like you are looking for unfolding of w with kernel size (1, self.l_c).
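One concrete way (my addition) is the Tensor.unfold method rather than nn.Unfold; a minimal sketch with assumed shapes, where windows[..., i, :] corresponds to w[:, :, :, i : i + l_c] from the list comprehension:
import torch
w = torch.rand(2, 3, 4, 10)     # example 4-D tensor; the last dim is windowed
l_c = 4
windows = w.unfold(-1, l_c, 1)  # shape [2, 3, 4, 10 - l_c + 1, l_c]
assert torch.equal(windows[..., 0, :], w[..., 0:l_c])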
| https://stackoverflow.com/questions/74513779/ |
How to save output after two layers of neural network in Pytorch | I wrote a convolutional autoencoder that is supposed to work on the ORL dataset (400 images, size 32*32) in .csv format. What I want is to observe how the data changes as it passes through the autoencoder. That's why I wrote a test1 method in the class that goes through only the first two layers.
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
self.conv1 = nn.Conv2d(1, 3, 3)
self.conv2 = nn.Conv2d(3 ,1, 3)
self.conv3 = nn.Conv2d(1, 3, 3)
self.conv4 = nn.Conv2d(3, 1, 3)
## decoder layers ##
self.t_conv1 = nn.ConvTranspose2d(1, 3, 3)
self.t_conv2 = nn.ConvTranspose2d(3, 1, 3)
self.t_conv3 = nn.ConvTranspose2d(1, 3, 3)
self.t_conv4 = nn.ConvTranspose2d(3, 1, 3)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = F.relu(self.conv4(x))
## decode ##
x = F.relu(self.t_conv1(x))
x = F.relu(self.t_conv2(x))
x = F.relu(self.t_conv3(x))
x = (self.t_conv4(x))
return x
def test1(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
return x
But the problem arises when I really want to check what is in those two layers.
mn_dataset_loader = torch.utils.data.DataLoader(dataset=custom_mnist_from_csv,
batch_size=200,
shuffle=False)
for epoch in range(200):
running_loss = 0
br = 0
for data in mn_dataset_loader:
inputs = data[0].to(device, non_blocking=True)
optimizer.zero_grad()
outputs = model(inputs).to(device)
i1 = model.test1(inputs).to(device)
i1 = torch.squeeze(i1, 0)
i1 = i1.flatten().to(device)
i1=i1.unsqueeze(0)
print(i1.shape)
loss = lossFn(data.to(device), outputs)
loss.backward()
optimizer.step()
running_loss += loss.item()
print('[Epoch %d] loss: %.3f' %
(epoch + 1, running_loss/len(mn_dataset_loader)))
print('Done Training')
My question is: why does i1, the output after only 2 layers, have the shape torch.Size([1, 784])?
Why not torch.Size([400, 784]), since there are actually that many images?
So how can I actually see the output of the first two layers?
| You specify a batch size of 200 but then take only the first element (inputs = data[0])
If you want to run it on all images change the batch size to 400 and don't take only the first element
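Following that advice, an illustrative sketch (my addition; the shapes assume 32*32 inputs and the full batch being kept) of flattening per sample instead of over the whole batch:
inputs = data            # keep the whole batch instead of data[0]
i1 = model.test1(inputs) # e.g. [400, 1, 28, 28] after the two conv layers
i1 = i1.flatten(start_dim=1)  # [400, 784], one row per image
print(i1.shape)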
| https://stackoverflow.com/questions/74516025/ |
How can I get specific columns form txt file and save them to new file using python | I have this txt file sentences.txt that contains texts below
a01-000u-s00-00 0 ok 154 19 408 746 1661 89 A|MOVE|to|stop|Mr.|Gaitskell|from
a01-000u-s00-01 0 ok 156 19 395 932 1850 105 nominating|any|more|Labour|life|Peers
which contains 10 columns
I want to use a pandas DataFrame to extract only the file name (column 0) and the corresponding text (column 10) without the (|) character
I wrote this code
def load() -> pd.DataFrame:
df = pd.read_csv('sentences.txt',sep=' ', header=None)
data = []
with open('sentences.txt') as infile:
for line in infile:
file_name, _, _, _, _, _, _, _, _, text = line.strip().split(' ')
data.append((file_name, cl_txt(text)))
df = pd.DataFrame(data, columns=['file_name', 'text'])
df.rename(columns={0: 'file_name', 9: 'text'}, inplace=True)
df['file_name'] = df['file_name'].apply(lambda x: x + '.jpg')
df = df[['file_name', 'text']]
return df
def cl_txt(input_text: str) -> str:
text = input_text.replace('+', '-')
text = text.replace('|', ' ')
return text
load()
the error I got
ParserError: Error tokenizing data. C error: Expected 10 fields in line 4, saw 11
My expected process.txt results should look like below, without the \n characters:
a01-000u-s00-00 A MOVE to stop Mr. Gaitskell from
a01-000u-s00-01 nominating any more Labour life Peers
| IIUC, you just need pandas.read_csv to read your .txt and then select the two columns :
Try this :
import pandas as pd
df= (
pd.read_csv("test.txt", header=None, sep=r"(\d+)\s(?=\D)", engine="python",
usecols=[0,4], names=["filename", "text"])
.assign(filename= lambda x: x["filename"].str.strip().add(".jpg"),
text= lambda x: x["text"].str.replace(r'[\|"]', " ", regex=True)
.str.replace(r"\s+", " ", regex=True))
)
# Output :
print(df)
filename text
0 a01-000u-s00-00.jpg A MOVE to stop Mr. Gaitskell from
1 a01-000u-s00-01.jpg nominating any more Labour life Peers
2 a01-003-s00-01.jpg large majority of Labour M Ps are likely to
# .txt used:
| https://stackoverflow.com/questions/74518666/ |
torch.nn.functional.binary_cross_entropy and torch.nn.BCEloss() difference | I am trying to train a GAN model on anime face Dataset to generate anime faces. Here's my code-
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
import torchvision.transforms as T
import os
import torch
import torch.nn as nn
from torchvision.utils import make_grid
import matplotlib.pyplot as plt
%matplotlib inline
def denorm(img_tensors):
return img_tensors * stats[1][0] + stats[0][0]
def show_images(images, nmax=64):
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_xticks([]); ax.set_yticks([])
ax.imshow(make_grid(denorm(images.detach()[:nmax]), nrow=8).permute(1, 2, 0))
def show_batch(dl, nmax=64):
for images, _ in dl:
show_images(images, nmax)
break
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
device = get_default_device()
device
train_dl = DeviceDataLoader(train_dl, device)
discriminator = nn.Sequential(
# in: 3 x 64 x 64
nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(64),
nn.LeakyReLU(0.2, inplace=True),
# out: 64 x 32 x 32
nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(128),
nn.LeakyReLU(0.2, inplace=True),
# out: 128 x 16 x 16
nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.LeakyReLU(0.2, inplace=True),
# out: 256 x 8 x 8
nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(512),
nn.LeakyReLU(0.2, inplace=True),
# out: 512 x 4 x 4
nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=0, bias=False),
# out: 1 x 1 x 1
nn.Flatten(),
nn.Sigmoid())
discriminator = to_device(discriminator, device)
latent_size = 128
generator = nn.Sequential(
# in: latent_size x 1 x 1
nn.ConvTranspose2d(latent_size, 512, kernel_size=4, stride=1, padding=0, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(True),
# out: 512 x 4 x 4
nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(True),
# out: 256 x 8 x 8
nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(True),
# out: 128 x 16 x 16
nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(True),
# out: 64 x 32 x 32
nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1, bias=False),
nn.Tanh()
# out: 3 x 64 x 64
)
xb = torch.randn(batch_size, latent_size, 1, 1) # random latent tensors
fake_images = generator(xb)
print(fake_images.shape)
show_images(fake_images)
generator = to_device(generator, device)
def train_discriminator(real_images, opt_d):
# Clear discriminator gradients
opt_d.zero_grad()
# Pass real images through discriminator
real_preds = discriminator(real_images)
real_targets = torch.ones(real_images.size(0), 1, device=device)
real_loss = F.binary_cross_entropy(real_preds, real_targets) # here nn.BCELoss() not working
real_score = torch.mean(real_preds).item()
# Generate fake images
latent = torch.randn(batch_size, latent_size, 1, 1, device=device)
fake_images = generator(latent)
# Pass fake images through discriminator
fake_targets = torch.zeros(fake_images.size(0), 1, device=device)
fake_preds = discriminator(fake_images)
fake_loss = F.binary_cross_entropy(fake_preds, fake_targets) # here nn.BCELoss() not working
fake_score = torch.mean(fake_preds).item()
# Update discriminator weights
loss = real_loss + fake_loss
loss.backward()
opt_d.step()
return loss.item(), real_score, fake_score
def train_generator(opt_g):
# Clear generator gradients
opt_g.zero_grad()
# Generate fake images
latent = torch.randn(batch_size, latent_size, 1, 1, device=device)
fake_images = generator(latent)
# Try to fool the discriminator
preds = discriminator(fake_images)
targets = torch.ones(batch_size, 1, device=device)
loss = F.binary_cross_entropy(preds, targets) # here nn.BCELoss() not working
# Update generator weights
loss.backward()
opt_g.step()
return loss.item()
from torchvision.utils import save_image
sample_dir = 'generated'
os.makedirs(sample_dir, exist_ok=True)
def save_samples(index, latent_tensors, show=True):
fake_images = generator(latent_tensors)
fake_fname = 'generated-images-{0:0=4d}.png'.format(index)
save_image(denorm(fake_images), os.path.join(sample_dir, fake_fname), nrow=8)
print('Saving', fake_fname)
if show:
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_xticks([]); ax.set_yticks([])
ax.imshow(make_grid(fake_images.cpu().detach(), nrow=8).permute(1, 2, 0))
fixed_latent = torch.randn(64, latent_size, 1, 1, device=device)
save_samples(0, fixed_latent)
from tqdm.notebook import tqdm
import torch.nn.functional as F
def fit(epochs, lr, start_idx=1):
torch.cuda.empty_cache()
# Losses & scores
losses_g = []
losses_d = []
real_scores = []
fake_scores = []
# Create optimizers
opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
for epoch in range(epochs):
for real_images, _ in tqdm(train_dl):
# Train discriminator
loss_d, real_score, fake_score = train_discriminator(real_images, opt_d)
# Train generator
loss_g = train_generator(opt_g)
# Record losses & scores
losses_g.append(loss_g)
losses_d.append(loss_d)
real_scores.append(real_score)
fake_scores.append(fake_score)
# Log losses & scores (last batch)
print("Epoch [{}/{}], loss_g: {:.4f}, loss_d: {:.4f}, real_score: {:.4f}, fake_score: {:.4f}".format(
epoch+1, epochs, loss_g, loss_d, real_score, fake_score))
# Save generated images
save_samples(epoch+start_idx, fixed_latent, show=False)
return losses_g, losses_d, real_scores, fake_scores
lr = 0.0002
epochs = 95
history = fit(epochs, lr)
The above code works fine, but previously I was using nn.BCELoss from torch instead of binary_cross_entropy from torch.nn.functional in the 'train_generator()' and 'train_discriminator()' functions above, and I was getting the following error:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
I wonder whether they both perform the same operation. Can you help me understand the problem?
| nn.BCELoss is a class. Unlike nn.functional.binary_cross_entropy, you have to instantiate it first before using it to calculate the loss. In you case,
F.binary_cross_entropy(preds, targets)
is equivalent to
nn.BCELoss()(preds, targets)
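To illustrate the original error (my addition): calling the class with the tensors directly passes them to the constructor, where preds ends up as the weight argument and targets as size_average, which later triggers the ambiguous Tensor-to-bool conversion:
import torch
import torch.nn as nn
preds = torch.sigmoid(torch.randn(4, 1))
targets = torch.ones(4, 1)
loss = nn.BCELoss()(preds, targets)   # correct: instantiate first, then call
# nn.BCELoss(preds, targets)          # wrong: interpreted as nn.BCELoss(weight=preds, size_average=targets)
print(loss)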
| https://stackoverflow.com/questions/74518820/ |
Floating point exception (core dumped) for UNet implementation | I am trying to do an implementation of KiuNet ( https://github.com/jeya-maria-jose/KiU-Net-pytorch ). But when I am executing the train command like so:
python train.py --train_dataset "KiuNet/Train Folder/" --val_dataset "KiuNet/Validation Folder/" --direc 'KiuNet/Results/' --batch_size 1 --epoch 200 --save_freq 10 --modelname "kiunet" --learning_rate 0.0001
I am getting the following error:
Traceback (most recent call last):
File "KiuNet/KiU-Net-pytorch/train.py", line 235, in <module>
loss.backward()
File "/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [847,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [958,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [703,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [831,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [575,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [974,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [77,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [78,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [719,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [720,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [592,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [593,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [209,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [465,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [337,0,0] Assertion `t >= 0 && t < n_classes` failed.
When I am running the train command with CUDA_LAUNCH_BLOCKING=1 I get the following error:
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [840,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [580,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [453,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [326,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [71,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [712,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [198,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [199,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [968,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [959,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [574,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [702,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [191,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [318,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [319,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [446,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [63,0,0] Assertion `t >= 0 && t < n_classes` failed.
Floating point exception (core dumped)
My torch and CUDA version are: '1.13.0+cu117'
My Python version: Python 3.9.12
Any help is much appreciated!
| The repository author mentions the following.
"This bug occurs when the ground truth masks have more classes than the number of classes in prediction. Please make sure you ground truth images have only 0 or 1 labels of pixels if you are training for binary segmentation. The datasets usually have the ground truth as 0 or 255 labels of pixels. So, please convert them to 0's and 1's."
| https://stackoverflow.com/questions/74520038/ |
Pytorch Parameterized Layer not Updated | I have a problem: I want to make a layer where the weight value (and the bias) is based on another, frozen weight. So, let's say I have a frozen weight (FW) as a base value; then my current model layer will have weight W = FW + D, where D is the trainable parameter. Later, when I train the model, I want the only parameter that gets updated to be D.
I made this simple code for illustration:
frozen = nn.Linear(100,10)
frozen.weight.requires_grad = False
frozen.bias.requires_grad = False
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc = nn.Linear(100,10)
self.dw = nn.Parameter(torch.tensor(1.0, requires_grad=True))
self.db = nn.Parameter(torch.tensor(1.0, requires_grad=True))
def forward(self, x):
# the weight (and the bias) of fc layer is from FW and D
self.fc.weight = nn.Parameter(torch.add(frozen.weight, self.dw))
self.fc.bias = nn.Parameter(torch.add(frozen.bias, self.db))
return torch.sigmoid(self.fc(x))
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.rand(100)
y = torch.tensor([0]*9+[1], dtype=torch.float32)
for _ in range(10):
out = model(x)
loss = criterion(out, y)
print(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
But when I run that code, the model doesn't train, and self.dw and self.db don't change. I am not sure whether my concept is wrong (so it's not possible to train D) or I made a mistake in the implementation.
I also tried to implement it using nn.utils.parametrize, but it still doesn't work (I am new to using this, so I am not sure I implemented it correctly).
frozen = nn.Linear(100,10)
frozen.weight.requires_grad = False
frozen.bias.requires_grad = False
class Adder(nn.Module):
def __init__(self, delta, frozen):
super().__init__()
self.delta = nn.Parameter(torch.tensor(delta, requires_grad=True))
self.frozen=frozen
def forward(self, x):
return torch.add(self.frozen, self.delta)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc = nn.Linear(100,10)
def forward(self, x):
nn.utils.parametrize.register_parametrization(self.fc, "weight", Adder(1.0, frozen.weight))
nn.utils.parametrize.register_parametrization(self.fc, "bias", Adder(1.0, frozen.bias))
return torch.sigmoid(self.fc(x))
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.rand(100)
y = torch.tensor([0]*9+[1], dtype=torch.float32)
for _ in range(10):
out = model(x)
loss = criterion(out, y)
print(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
Thank you for any responses.
| Instead of recreating new weight and bias by
self.fc.weight = nn.Parameter(torch.add(frozen.weight, self.dw))
self.fc.bias = nn.Parameter(torch.add(frozen.bias, self.db))
You can utilize nn.functional.linear and intermediate variables
weight = self.weight + frozen.weight
bias = self.bias + frozen.bias
F.linear(x, weight, bias)
Complete version:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self, frozen):
super(Net, self).__init__()
self.weight = nn.Parameter(torch.ones(10, 100, dtype=torch.float32))
self.bias = nn.Parameter(torch.zeros(10, dtype=torch.float32))
self.frozen = frozen
@property
def weight_bias(self):
weight = self.weight + self.frozen.weight
bias = self.bias + self.frozen.bias
return weight, bias
def forward(self, x):
# the weight (and the bias) of fc layer is from FW and D
weight, bias = self.weight_bias
return F.linear(x, weight, bias) # this should return raw logits as required by nn.CrossEntropyLoss
frozen = nn.Linear(100, 10)
frozen.weight.requires_grad = False
frozen.bias.requires_grad = False
model = Net(frozen)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.rand(100).unsqueeze(0)
y = torch.tensor([0]*9+[1], dtype=torch.float32).unsqueeze(0)
for _ in range(10):
out = model(x)
loss = criterion(out, y)
print(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/74522448/ |
How to translate this small part of TensorFlow code into pyTorch? | How to translate this small part of TensorFlow code into pyTorch?
def transforms(x):
# stft returns spectogram for each sample and each eeg
# input X contains 3 signals, apply stft for each
# and get array with shape [samples, num_of_eeg, time_stamps, freq]
# change dims and return [samples, time_stamps, freq, num_of_eeg]
spectrograms = tf.signal.stft(x, frame_length=32, frame_step=4, fft_length=64)
spectrograms = tf.abs(spectrograms)
return tf.einsum("...ijk->...jki", spectrograms)
| You can find the doc for the PyTorch STFT implementation here. The rest is straightforward. Mapping the TensorFlow arguments onto the PyTorch ones, it should be something like:
def transforms(x: torch.Tensor) -> torch.Tensor:
"""Return Fourrier spectrogram."""
spectrograms = torch.stft(x, win_length=32, n_fft=4, hop_length=64)
spectrograms = torch.abs(spectrograms)
return torch.einsum("...ijk->...jki", spectrograms)
| https://stackoverflow.com/questions/74523337/ |
Weighted mean squared error pytorch with sample weights regression | I am trying to use a weighted mean squared error loss function for a regression task with an imbalanced dataset. Basically, I have a different weight assigned to each example and I am using the weighted MSE loss function. Is there a way to sample the weight tensor using TensorDataset along with the input and output batch samples?
def weighted_mse_loss(inputs, targets, weights=None):
loss = (inputs - targets) ** 2
if weights is not None:
loss *= weights.expand_as(loss)
loss = torch.mean(loss)
return loss
train_dataset = torch.utils.data.TensorDataset(x_train, y_train)
weights = torch.rand(len(train_dataset))
for x, y in train_loader:
optimizer.zero_grad()
out = model(x)
loss = weighted_mse_loss(y, out, weights)
loss.backward()
| If you can get the weights before creating the train dataset:
train_dataset = TensorDataset(x_train, y_train, weights)
for x, y, w in train_dataset:
...
Otherwise:
train_dataset = TensorDataset(x_train, y_train)
for (x, y), w in zip(train_dataset, weights):
...
You can also use a DataLoader, but be careful about shuffling with the second method
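For completeness, a small self-contained sketch (my addition, with dummy data) of the first method combined with a DataLoader; because the weight is stored alongside each sample, shuffling stays consistent:
import torch
from torch.utils.data import TensorDataset, DataLoader
x_train = torch.randn(100, 5)
y_train = torch.randn(100, 1)
weights = torch.rand(100, 1)               # one weight per sample
model = torch.nn.Linear(5, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
train_loader = DataLoader(TensorDataset(x_train, y_train, weights), batch_size=16, shuffle=True)
for x, y, w in train_loader:
    optimizer.zero_grad()
    out = model(x)
    loss = torch.mean(w * (out - y) ** 2)  # weighted MSE for this batch
    loss.backward()
    optimizer.step()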
| https://stackoverflow.com/questions/74525618/ |
How do I create a branched AlexNet in PyTorch? | I am attempting to create a near-identical model architecture to AlexNet, except that each channel (Red, Green, and Blue) goes through its own branch and the branches are all concatenated at the end for the classifier.
Similar architecture to this
The base network:
class AlexNet(nn.Module):
def __init__(self, num_classes: int = 1000, dropout: float = 0.5) -> None:
super().__init__()
_log_api_usage_once(self)
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
self.classifier = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(p=dropout),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
Training
def train_epoch(self, epoch, total):
self.model.train()
for batch_idx, (features, targets) in enumerate(self.train_loader):
features = features.to(self.device)
targets = targets.to(self.device)
logits = self.model(features)
loss = self.loss_func(logits, targets)
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
I would like each channel to go through its own feature extraction, but be combined for classification.
red = features[:,0:1,:,:]
green = features[:,1:2,:,:]
blue = features[:,2:3,:,:]
logits = self.model([r,g,b])
I have seen people use groups but I am not sure how to implement it fully.
Any help is greatly appreciated
| Since each branch/head would take an image with one channel you could start by just replacing the 3 in the first CNN layer with 1:
nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2),
Now you can send the three single-channeled images through the self.features layers and concat them before passing them to the self.classifier layers:
import torch
import torch.nn as nn
class AlexNet(nn.Module):
def __init__(self, num_classes: int=1000, dropout: float=0.5) -> None:
super().__init__()
self.features = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.avgpool = nn.AdaptiveAvgPool2d((3, 3))
self.classifier = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(6912, 4096),
nn.ReLU(inplace=True),
nn.Dropout(p=dropout),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
def forward(self, x_r: torch.Tensor, x_g: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
x_r = self.features(x_r)
x_r = torch.flatten(self.avgpool(x_r), 1)
x_g = self.features(x_g)
x_g = torch.flatten(self.avgpool(x_g), 1)
x_b = self.features(x_b)
x_b = torch.flatten(self.avgpool(x_b), 1)
x = torch.concat((x_r, x_g, x_b), -1)
x = self.classifier(x)
return x
model = AlexNet()
img = torch.rand(1, 3, 256, 256)
img_r = torch.rand(1, 1, 256, 256)
img_g = torch.rand(1, 1, 256, 256)
img_b = torch.rand(1, 1, 256, 256)
output = model(img_r, img_g, img_b)
Note that I changed self.avgpool = nn.AdaptiveAvgPool2d((6, 6)) to self.avgpool = nn.AdaptiveAvgPool2d((3, 3)) because the output size of the flattened branches was really big (9216). Now it is 2304 and by concatinating them you get a tensor of size 6912. Hope this helps :)
| https://stackoverflow.com/questions/74526120/ |
How to get Non-contextual Word Embeddings in BERT? | I have already installed BERT, but I don't know how to get non-contextual word embeddings.
For example:
input: 'Apple'
output: [1,2,23,2,13,...] #embedding of 'Apple'
How can i get these word embeddings?
Thank you.
I searched for a method, but no blog describes how to do it.
| Solved.
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
# get the word embedding from BERT
def get_word_embedding(word:str):
input_ids = torch.tensor(tokenizer.encode(word)).unsqueeze(0) # Batch size 1
# print(input_ids)
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
# output[0] is token vector
# output[1] is the mean pooling of all hidden states
return last_hidden_states[0][1]
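A quick usage check (my addition; the size assumes bert-base-uncased):
emb = get_word_embedding("Apple")
print(emb.shape)   # torch.Size([768])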
| https://stackoverflow.com/questions/74527928/ |
Problem loading parallel datasets even after using SubsetRandomSampler | I have two parallel datasets, dataset1 and dataset2, and the following is my code to load them in parallel using SubsetRandomSampler, where I provide train_indices for data loading.
P.S. Even after setting num_workers=0 and seeding np as well as torch, the samples do not get loaded in parallel. Any suggestions are heartily welcome including methods other than SubsetRandomSampler.
import torch, numpy as np
from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler
dataset1 = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
dataset2 = torch.tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
train_indices = list(range(len(dataset1)))
torch.manual_seed(12)
np.random.seed(12)
np.random.shuffle(train_indices)
sampler = SubsetRandomSampler(train_indices)
dataloader1 = DataLoader(dataset1, batch_size=2, num_workers=0, sampler=sampler)
dataloader2 = DataLoader(dataset2, batch_size=2, num_workers=0, sampler=sampler)
for i, (data1, data2) in enumerate(zip(dataloader1, dataloader2)):
x = data1
y = data2
print(x, y)
Output:
tensor([5, 1]) tensor([15, 18])
tensor([0, 2]) tensor([14, 12])
tensor([4, 6]) tensor([16, 10])
tensor([8, 9]) tensor([11, 19])
tensor([7, 3]) tensor([17, 13])
Expected Output:
tensor([5, 1]) tensor([15, 11])
tensor([0, 2]) tensor([10, 12])
tensor([4, 6]) tensor([14, 16])
tensor([8, 9]) tensor([18, 19])
tensor([7, 3]) tensor([17, 13])
| Since I was using a random sampler, the random indices are expected.
To yield the same (shuffled) indices from both DataLoaders, it is better to create the indices first, and then use a custom sampler:
class MySampler(torch.utils.data.sampler.Sampler):
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return iter(self.indices)
def __len__(self):
return len(self.indices)
dataset1 = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
dataset2 = torch.tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
train_indices = list(range(len(dataset1)))
np.random.seed(12)
np.random.shuffle(train_indices)
sampler = MySampler(train_indices)
dataloader1 = DataLoader(dataset1, batch_size=2, num_workers=0, sampler=sampler)
dataloader2 = DataLoader(dataset2, batch_size=2, num_workers=0, sampler=sampler)
for i, (data1, data2) in enumerate(zip(dataloader1, dataloader2)):
x = data1
y = data2
print(x, y)
P.S. got the solution by cross-posting on Pytorch forums but still want to keep it for future readers. Credits to ptrblck.
| https://stackoverflow.com/questions/74537660/ |
Why does the pytorch crossEntropyLoss use label encoding, instead of one-hot encoding? | I'm learning about the CrossEntropyLoss module in PyTorch.
The tutor says you should pass the target value y 'label encoded', not 'one-hot encoded'.
Like this
loss = nn.CrossEntropyLoss()
Y = torch.tensor([0])
Y_pred_good = torch.tensor([[2.0, 1.0, 0.1]])
Y_pred_bad = torch.tensor([[0.5, 1.0, 0.3]])
l1 = loss(Y_pred_good, Y)
l2 = loss(Y_pred_bad, Y)
print(l1.item())
print(l2.item())
But I learned that the cross-entropy loss is calculated with one-hot encoded class information.
Does the PyTorch module transform the label-encoded targets into one-hot encoded ones, or is there another way to calculate the CE loss with label-encoded information?
| There's a difference between the multi-class CE loss, nn.CrossEntropyLoss, and the binary version, nn.BCEWithLogitsLoss.
For the binary case, the implemented loss allows for "soft labels" and thus requires the binary targets to be floats in the range [0, 1].
In contrast, nn.CrossEntropyLoss works with "hard" labels, and thus does not need to encode them in a one-hot fashion.
If you do the math for the multi-class cross-entropy loss, you'll see that it is inefficient to have a one-hot representation for the targets. The loss is -log p_i where i is the true label. One only needs to index the proper entry in the predicted probabilities vector. This can be done via multiplication by the one-hot encoded targets, but it is much more efficient to do it by indexing the right entry.
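A small sketch (my addition) showing that indexing the true-class entry gives the same value as multiplying by a one-hot vector:
import torch
import torch.nn.functional as F
logits = torch.tensor([[2.0, 1.0, 0.1]])
target = torch.tensor([0])                       # hard label
log_probs = F.log_softmax(logits, dim=1)
loss_by_indexing = -log_probs[0, target[0]]      # pick out -log p_i directly
one_hot = F.one_hot(target, num_classes=3).float()
loss_by_one_hot = -(one_hot * log_probs).sum()   # same value, more work
print(loss_by_indexing, loss_by_one_hot, F.cross_entropy(logits, target))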
Note: It seems like recent versions of nn.CrossEntropyLoss also support one-hot encoded targets ("smooth labels").
| https://stackoverflow.com/questions/74541568/ |
overlaying the ground truth mask on an image | In my project, I extracted frames from a video and in another folder I have ground truth for each frame.
I want to map the ground truth image of each frame of a video (in my case, it is saliency prediction ground truth) on its related frame image. As an example I have the following frame:
And the following is the ground truth mask:
and the following is the mapping of the ground truth onto the frame.
How can I do that? Also, I have two folders; inside each of them there are several subfolders, and inside each subfolder the frames are stored. How can I do this operation on this batch of data?
This is the hierarchy of my folders:
frame_folder: folder_1, folder_2, ......
├── frames
│ ├── 601 (601 and 602 and etc are folders that in the inside there are image frames that their name is like 0001.png,0002.png, ...)
│ ├── 602
.
.
.
│ └── 700
├── ground truth
│ ├── 601 (601 and 602 and etc are folders that in the inside there are ground truth masks that their name is like 0001.png,0002.png, ...)
│ ├── 602
.
.
.
│ └── 700
Update:
Using the answer proposed by @hkchengrex, I faced an error. When there is only one folder in the paths it works well, but when I put several folders (frames of different videos), as described in the question, I get the following error. The details are below:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
TypeError: process_video() takes 1 positional argument but 6 were given
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/Video_processing/Saliency_mapping.py", line 69, in <module>
pool.apply(process_video, videos)
File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 357, in apply
return self.apply_async(func, args, kwds).get()
File "/home/user/miniconda3/envs/vtn/lib/python3.10/multiprocessing/pool.py", line 771, in get
raise self._value
TypeError: process_video() takes 1 positional argument but 6 were given
| I need to do similar things pretty often. In my favorite StackOverflow fashion, here is a script that you can copy and paste. I hope the code itself is self-explanatory. There are a few things that you can tune and try (e.g., color maps, overlay styles). It uses multiprocessing.Pool for faster batch-processing, resizes the mask to match the shape of the image, assumes the mask is in .png format, and depends on the file structure that you posted.
import os
from os import path
import cv2
import numpy as np
from argparse import ArgumentParser
from multiprocessing import Pool
def create_overlay(image, mask):
"""
image: H*W*3 numpy array
mask: H*W numpy array
If dimensions do not match, the mask is upsampled to match that of the image
Returns a H*W*3 numpy array
"""
h, w = image.shape[:2]
mask = cv2.resize(mask, dsize=(w,h), interpolation=cv2.INTER_CUBIC)
# color options: https://docs.opencv.org/4.x/d3/d50/group__imgproc__colormap.html
mask_color = cv2.applyColorMap(mask, cv2.COLORMAP_HOT).astype(np.float32)
mask = mask[:, :, None] # create trailing dimension for broadcasting
mask = mask.astype(np.float32)/255
# different other options that you can use to merge image/mask
overlay = (image*(1-mask)+mask_color*mask).astype(np.uint8)
# overlay = (image*0.5 + mask_color*0.5).astype(np.uint8)
# overlay = (image + mask_color).clip(0,255).astype(np.uint8)
return overlay
def process_video(video_name):
"""
Processing frames in a single video
"""
vid_image_path = path.join(image_path, video_name)
vid_mask_path = path.join(mask_path, video_name)
vid_output_path = path.join(output_path, video_name)
os.makedirs(vid_output_path, exist_ok=True)
frames = sorted(os.listdir(vid_image_path))
for f in frames:
image = cv2.imread(path.join(vid_image_path, f))
mask = cv2.imread(path.join(vid_mask_path, f.replace('.jpg','.png')), cv2.IMREAD_GRAYSCALE)
overlay = create_overlay(image, mask)
cv2.imwrite(path.join(vid_output_path, f), overlay)
parser = ArgumentParser()
parser.add_argument('--image_path')
parser.add_argument('--mask_path')
parser.add_argument('--output_path')
args = parser.parse_args()
image_path = args.image_path
mask_path = args.mask_path
output_path = args.output_path
if __name__ == '__main__':
videos = sorted(
list(set(os.listdir(image_path)).intersection(
set(os.listdir(mask_path))))
)
print(f'Processing {len(videos)} videos.')
pool = Pool()
pool.map(process_video, videos)
print('Done.')
Output:
EDIT: Made it work on Windows; changed pool.apply to pool.map.
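For reference, a hypothetical invocation matching the question's folder layout (the script filename is an assumption):
python overlay.py --image_path frames --mask_path "ground truth" --output_path overlays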
| https://stackoverflow.com/questions/74546287/ |
getting AttributeError: 'numpy.ndarray' object has no attribute 'dim' when converting tensorflow code to pytorch | I was translating my TensorFlow code to PyTorch and suddenly faced this error.
What am I doing wrong here?
AttributeError Traceback (most recent call last)
<ipython-input-36-058644576709> in <module>
3 batch_size = 1024
4 Xtrain = torch.concat(
----> 5 [transforms(Xtrain[batch_size*batch:batch_size*(batch +1)]) for batch in range(len(Xtrain)//batch_size+1)],
6 axis=0
7 )
<ipython-input-36-058644576709> in <listcomp>(.0)
3 batch_size = 1024
4 Xtrain = torch.concat(
----> 5 [transforms(Xtrain[batch_size*batch:batch_size*(batch +1)]) for batch in range(len(Xtrain)//batch_size+1)],
6 axis=0
7 )
<ipython-input-22-9fc8aa48e3e2> in transforms(x)
1 def transforms(x: torch.Tensor) -> torch.Tensor:
2 """Return Fourrier spectrogram."""
----> 3 spectrograms = torch.stft(x, win_length=32, n_fft=4, hop_length=64)
4 spectrograms = torch.abs(spectrograms)
5 return torch.einsum("...ijk->...jki", spectrograms)
~\anaconda3\lib\site-packages\torch\functional.py in stft(input, n_fft, hop_length, win_length, window, center, pad_mode, normalized, onesided, return_complex)
565 # this and F.pad to ATen.
566 if center:
--> 567 signal_dim = input.dim()
568 extended_shape = [1] * (3 - signal_dim) + list(input.size())
569 pad = int(n_fft // 2)
AttributeError: 'numpy.ndarray' object has no attribute 'dim'
The following is the approach I have tried already:
#!/usr/bin/env python
# coding: utf-8
# # Import library
# In[1]:
get_ipython().run_line_magic('matplotlib', 'inline')
get_ipython().run_line_magic('load_ext', 'autoreload')
get_ipython().run_line_magic('autoreload', '2')
#%matplotlib qt
# # Load pooled data
# In[2]:
from nu_smrutils import loaddat
import pandas as pd
# In[26]:
import pickle
import mne
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns # Seaborn is a Python
# data visualization library built on top of Matplotlib.
import datetime # module supplies classes for manipulating dates and times.
import nu_smrutils # utils for SMR
import nu_MIdata_loader # MI data loader
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# In[3]:
dname = dict(BNCI2014004 = 'aBNCI2014004R.pickle',
BNCI2014001 = 'aBNCI2014001R.pickle',
Weibo2014 = 'aWeibo2014R.pickle',
Physionet = 'aPhysionetRR.pickle')
# In[4]:
# itemname is one of : ['BNCI2014004', 'BNCI2014001', 'Weibo2014', 'Physionet']
itemname = 'BNCI2014004'
filename = dname[itemname]
iname = itemname + '__'
# In[5]:
data = loaddat(filename)
# In[6]:
data[0]['right_hand'].plot();
# In[7]:
from nu_smrutils import load_pooled, augment_dataset, crop_data
# In[8]:
subjectIndex = list(range(108))
class_name = ['left_hand', 'right_hand']
dat = load_pooled(data, subjectIndex, class_name,
normalize = True, test_size = 0.15)
# # Data augmentation
# In[9]:
print(dat.keys())
dat['xtrain'].shape
# In[10]:
get_ipython().run_line_magic('pinfo', 'augment_dataset')
# In[11]:
augdata = dict(std_dev = 0.01, multiple = 2)
# In[12]:
xtrain, ytrain = augment_dataset(dat['xtrain'], dat['ytrain'],
augdata['std_dev'], augdata['multiple'])
print("Shape after data augmentation :", xtrain.shape)
dat['xtrain'], dat['ytrain'] = xtrain, ytrain
# # Data Cropping
# In[14]:
fs = 80 # sampling frequency
crop_len = 1.5 #or None
crop = dict(fs = fs, crop_len = crop_len)
#if crop['crop_len']:
X_train,y_train = crop_data(crop['fs'],crop['crop_len'],
dat['xtrain'], dat['ytrain'],
xpercent = 50)
X_valid,y_valid = crop_data(crop['fs'],crop['crop_len'],
dat['xvalid'], dat['yvalid'],
xpercent = 50)
X_test, y_test = crop_data(crop['fs'],crop['crop_len'],
dat['xtest'], dat['ytest'],
xpercent = 50)
dat = dict(xtrain = X_train, xvalid = X_valid, xtest = X_test,
ytrain = y_train, yvalid = y_valid, ytest = y_test)
# In[16]:
print('data shape after cropping :',dat['xtrain'].shape)
# # Pytorch dataloaders
# In[18]:
import torch
from torch.utils.data import TensorDataset, DataLoader
def get_data_loaders(dat, batch_size, EEGNET = None):
# convert data dimensions to into to gray scale image format
if EEGNET: ### EEGNet model requires the last dimension to be 1
ff = lambda dat: torch.unsqueeze(dat, dim = -1)
else:
ff = lambda dat: torch.unsqueeze(dat, dim = 1)
x_train, x_valid, x_test = map(ff,(dat['xtrain'], dat['xvalid'],dat['xtest']))
y_train, y_valid, y_test = dat['ytrain'], dat['yvalid'], dat['ytest']
print('Input data shape', x_train.shape)
# TensorDataset & Dataloader
train_dat = TensorDataset(x_train, y_train)
val_dat = TensorDataset(x_valid, y_valid)
train_loader = DataLoader(train_dat, batch_size = batch_size, shuffle = True)
val_loader = DataLoader(val_dat, batch_size = batch_size, shuffle = False)
output = dict(dset_loaders = {'train': train_loader, 'val': val_loader},
dset_sizes = {'train': len(x_train), 'val': len(x_valid)},
test_data = {'x_test' : x_test, 'y_test' : y_test})
return output
# In[19]:
dat = get_data_loaders(dat, batch_size = 64)
dat.keys()
# In[20]:
# Sanity check begin
dset_loaders = dat['dset_loaders']
dset_sizes = dat['dset_sizes']
dset_sizes
dtrain = dset_loaders['train']
dval = dset_loaders['val']
dtr = iter(dtrain)
dv = iter(dval)
# In[21]:
inputs, labels = next(dtr)
print(inputs.shape, labels.shape)
# Sanity check end
# In[29]:
augmentdata = dict(std_dev = 0.01, multiple = 1) # to augment data
fs = 80
crop_length = 1.5 #seconds
crop = dict(fs = fs, crop_length = crop_length) # crop length
class1, class2 = 'left_hand', 'right_hand'
s = list(range(108))
# In[31]:
def convertY(Y):
return np.concatenate([Y[:, None], np.where(Y == 0, 1, 0)[:, None]], axis=-1)
# In[33]:
def convert(d): # converting tran method
Xtrain = d['xtrain'].numpy()
Xval = d['xvalid'].numpy()
Xtest = d['xtest'].numpy()
Ytrain = convertY(d['ytrain'].numpy())
Yval = convertY(d['yvalid'].numpy())
Ytest = convertY(d['ytest'].numpy())
return Xtrain, Xval, Xtest, Ytrain, Yval, Ytest
# In[34]:
files = ['aBNCI2014004R.pickle', ]
# In data we storage sample from different files
Data = []
for file in files:
d = nu_MIdata_loader.EEGDataLoader(file, class_name = [class1, class2])
d1 = d.load_pooled(s, normalize = True, crop = crop, test_size = 0.01, augmentdata = augmentdata)
Data.append(convert(d1))
# In[35]:
# concatenate all data if there more then one file
Xtrain = np.concatenate([d[0] for d in Data])
Xval = np.concatenate([d[1] for d in Data])
Xtest = np.concatenate([d[2] for d in Data])
Xtrain = np.concatenate([Xtrain, Xval], axis=0)
Ytrain = np.concatenate([d[3] for d in Data])
Yval = np.concatenate([d[4] for d in Data])
Ytest = np.concatenate([d[5] for d in Data])
Ytrain = np.concatenate([Ytrain, Yval], axis=0)
# In[22]:
def transforms(x: torch.Tensor) -> torch.Tensor:
"""Return Fourrier spectrogram."""
spectrograms = torch.stft(x, win_length=32, n_fft=4, hop_length=64)
spectrograms = torch.abs(spectrograms)
return torch.einsum("...ijk->...jki", spectrograms)
# In[36]:
# Convert data in batchs
# Cause outofmemort or python crash
batch_size = 1024
Xtrain = torch.concat(
[transforms(Xtrain[batch_size*batch:batch_size*(batch +1)]) for batch in range(len(Xtrain)//batch_size+1)],
axis=0
)
Xtest = torch.concat(
[transforms(Xtest[batch_size*batch:batch_size*(batch +1)]) for batch in range(len(Xtest)//batch_size+1)],
axis=0
)
# Convert to tensorflow tensors
Ytrain = torch.cast(Ytrain, dtype='float32')
Ytest = torch.cast(Ytest, dtype='float32')
| torch.stft is a pytorch function and expects a Tensor as the input. You must convert your numpy array into a tensor and then pass that as the input.
you can use torch.from_numpy to do this.
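A minimal sketch of the conversion (my addition; the variable names follow the question's code):
import torch
Xtrain = torch.from_numpy(Xtrain).float()   # numpy.ndarray -> torch.Tensor
Xtest = torch.from_numpy(Xtest).float()
# now slices like Xtrain[0:1024] are Tensors, so transforms()/torch.stft can call .dim()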
| https://stackoverflow.com/questions/74551169/ |
Select pytorch tensor elements by list of indices | I guess I have a pretty simple problem. Let's take the following tensor of length 6
t = torch.tensor([10., 20., 30., 40., 50., 60.])
Now I would like to to access only the elements at specific indices, lets say at [0, 3, 4]. So I would like to return
# exptected output
tensor([10., 40., 50.])
I found torch.index_select which worked great for a tensor of two dimensions, e.g. dimension (2, 4), but not for the given t for example.
How can I access a set of elements based on a given list of indices in a 1-d tensor without using a for loop?
| You can in fact use index_select for this:
t = torch.tensor([10., 20., 30., 40., 50., 60.])
output = torch.index_select(t, 0, torch.LongTensor([0, 3, 4]))
# output: tensor([10., 40., 50.])
You just need to specify the dimension (0) as the second parameter. This is the only valid dimension to specify for a 1-d input tensor.
| https://stackoverflow.com/questions/74551428/ |
Need help on locating which part of CNN needs to get either .cuda() or .to_device() | Good day!
I've been struggling with the following error: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument target in method wrapper_nll_loss_forward)
I've been unable to locate where I need to add a part of my model to the GPU. From the error message I gather it should be in the loss function but I've tried all places I could think off, related to the loss function and have been unable to solve it.
Would love some help with this.
My full code can be found here:
https://huggingface.co/AFAD85/CNN_apples/blob/main/CNN%20paper%20clone%20pytorch.ipynb
I tried to isolate all possibly relevant code below:
`transformer = transforms.Compose([
transforms.Resize((350,350)),
transforms.ToTensor(),
transforms.Normalize([0.5,0.5,0.5],
[0.5,0.5,0.5])
])`
`class ConvNet(nn.Module):
def __init__(self,num_classes=4):
super(ConvNet,self).__init__()
self.conv1 = nn.Conv2d(in_channels=3,out_channels=128,kernel_size=3,stride=1,padding='valid')
self.bn1 = nn.BatchNorm2d(num_features=128)
self.relu1 = nn.ReLU()
self.pool1 = nn.MaxPool2d(kernel_size=2)
self.conv2 = nn.Conv2d(in_channels=128,out_channels=64,kernel_size=3,stride=1,padding='valid')
self.bn2 = nn.BatchNorm2d(num_features=64)
self.relu2 = nn.ReLU()
self.pool2 = nn.MaxPool2d(kernel_size=2)
self.conv3 = nn.Conv2d(in_channels=64,out_channels=64,kernel_size=3,stride=1,padding='valid')
self.bn3 = nn.BatchNorm2d(num_features=64)
self.relu3 = nn.ReLU()
self.pool3 = nn.MaxPool2d(kernel_size=2)
self.conv4 = nn.Conv2d(in_channels=64,out_channels=32,kernel_size=3,stride=1,padding='valid')
self.bn4 = nn.BatchNorm2d(num_features=32)
self.relu4 = nn.ReLU()
self.pool4 = nn.MaxPool2d(kernel_size=2)
self.conv5 = nn.Conv2d(in_channels=32,out_channels=32,kernel_size=3,stride=1,padding='valid')
self.bn5 = nn.BatchNorm2d(num_features=32)
self.relu5 = nn.ReLU()
self.pool5 = nn.MaxPool2d(kernel_size=2)
self.flat = nn.Flatten()
self.fc1 = nn.Linear(in_features=2592, out_features = 256)
self.fc2 = nn.Linear(in_features=256, out_features = num_classes)
def forward(self,input):
output = self.conv1(input)
output = self.bn1(output)
output = self.relu1(output)
output = self.pool1(output)
output = self.conv2(output)
output = self.bn2(output)
output = self.relu2(output)
output = self.pool2(output)
output = self.conv3(output)
output = self.bn3(output)
output = self.relu3(output)
output = self.pool3(output)
output = self.conv4(output)
output = self.bn4(output)
output = self.relu4(output)
output = self.pool4(output)
output = self.conv5(output)
output = self.bn5(output)
output = self.relu5(output)
output = self.pool5(output)
# output = output.view(-1,32,9,9)
output = self.flat(output)
output = self.fc1(output)
output = self.fc2(output)
return output`
model = ConvNet(num_classes=4).to(device)
optimizer = Adam(model.parameters(),lr=0.001,weight_decay=0.0001)
loss_function = nn.CrossEntropyLoss()
`best_accuracy = 0.0
for epoch in range(num_epochs):
model.train()
train_accuracy = 0.0
train_loss = 0.0
for i, (images,labels) in enumerate(train_loader):
if torch.cuda.is_available():
images = Variable(images.cuda())
lables = Variable(labels.cuda())
optimizer.zero_grad()
outputs = model(images)
loss = loss_function(outputs,labels)
loss.backward()
optimizer.step()
train_loss += loss.cpu().data*images.size(0)
_.prediction = torch.max(outputs.data,1)
train_accuracy += int(torch.sum(prediction==labels.data))
train_accuracy = train_accuracy/train_count
train_loss = train_loss/train_count
#test set evalueren
model.eval()
test_accuracy = 0.0
for i, (images,labels) in enumerate(train_loader):
if torch.cuda.is_available():
images = Variable(images.cuda())
lables = Variable(labels.cuda())
outputs = model(images)
_.prediction = torch.max(outputs.data,1)
test_accuracy = test_accuracy/test_count
print('Epoch: '+str(epoch)+' Train Loss: '+str(int(train_loss)))+' Train Accuracy: '+str(train_accuracy)+' Test Accuracy: '+str(test_accuracy)
if test_accuracy > best_accuracy:
torch.save(model.state_dict(), 'best_checkpoint.model')`
I tried to have the model run an epoch, expecting it to run on the GPU.
I tried adding .cuda() and .to_device() in all places where I expected the problem might lie, but was unable to find the correct one.
| It was due to a typo
lables = Variable(labels.cuda())
where lables should be labels.
| https://stackoverflow.com/questions/74561222/ |
reshaping tensors for multi head attention in pytorch - view vs transpose | I'm learning about the attention operator in the deep learning domain. I understand that to compute multi head attention efficiently in parallel, the input tensors (query, key, value) have to be reshaped properly.
Assuming query, key and value are three tensor of identical shape [N, L, D], in which
N is the batch size
L is the sequence length
D is the hidden/embedding size,
they should be turned into [N*N_H, L, D_H] tensors, where N_H is the number of heads for the attention layer and D_H is the embedding size of each head.
The pytorch code seems to do exactly that. Here below I post the code for reshaping the query tensor (key, value are equally deemed)
q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
I don't get why they perform both a view and a transpose call, when the result would be the same by just doing
q = q.contiguous().view(bsz * num_heads, tgt_len, head_dim)
Other than avoiding an additional function call, using view alone also guarantees that the resulting tensor is still contiguous in memory, whereas this doesn't hold (to the best of my knowledge) for transpose. I suppose working with contiguous data is beneficial whenever possible to make computations potentially faster (may lead to fewer memory accesses, better exploiting of spatial locality of data, etc.).
What's the use case for having a transpose call after a view?
| The results AREN'T necessarily the same:
a = torch.arange(0, 2 * 3 * 4)
b = a.view(2, 3, 4).transpose(1, 0)
#tensor([[[ 0, 1, 2, 3],
[12, 13, 14, 15]],
[[ 4, 5, 6, 7],
[16, 17, 18, 19]],
[[ 8, 9, 10, 11],
[20, 21, 22, 23]]])
c = a.view(3, 2, 4)
#tensor([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7]],
[[ 8, 9, 10, 11],
[12, 13, 14, 15]],
[[16, 17, 18, 19],
[20, 21, 22, 23]]])
| https://stackoverflow.com/questions/74566019/ |
How to solve AttributeError: 'Tensor' object has no attribute 'zero_grad' in pytorch | Still working through video tutorial https://www.youtube.com/watch?v=weQ5pShEVic&list=PLbMqOoYQ3Mxw1Sl5iAAV4SJmvnAGAhFvK&index=2 about pytorch but hit another error.
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
output = net(x)
loss = lossFunc(output, y)
loss.zero_grad()
loss.backward()
for f in net.parameters():
f.data.sub_(learning_rate = f.grad.data)
print(output, loss)
Created the network, loss function and wanted to iterate before backpropogattion
but get this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/v_/yq26pm194xj5ckqy8p_njwc00000gn/T/ipykernel_9995/2476130544.py in <module>
3 output = net(x)
4 loss = lossFunc(output, y)
----> 5 loss.zero_grad()
6 loss.backward()
7
AttributeError: 'Tensor' object has no attribute 'zero_grad'
What gives?
| You should call zero_grad() on the optimizer, not on the loss tensor:
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
lossFunc = torch.nn.MSELoss()
for i in range(epoch):
optimizer.zero_grad()
output = net(x)
loss = lossFunc(output, y)
loss.backward()
optimizer.step()
| https://stackoverflow.com/questions/74567865/ |
Using GPU with fastai and pytorch | I am using fastai and pytorch for image classification. I tried to train it on Google Colab, but it takes a lot of time to train there and I think the problem is that the GPU is not set up properly. This is how I did it.
import os
import torch
import torchvision as tv
import matplotlib.pyplot as plt
import numpy as np
from torchinfo import summary as torchinfo_summary
#cuda configs
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#loading dataset
data_dir='/content/drive/MyDrive/tech_related/machine_learning_related/pytorch-etic/data/cat_deer_dog_horse'
#data_dir = os.path.join('..','data','cat_deer_dog_horse')
print(os.listdir(data_dir))
data_classes = os.listdir(os.path.join(data_dir,'train'))
print(data_classes)
#making torch defined dataloaders with dataset
stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) #means and stds of each channel in images of cifar10
train_tfms = tv.transforms.Compose([
tv.transforms.RandomCrop(32,padding=4,padding_mode='reflect'),
tv.transforms.RandomHorizontalFlip(),
tv.transforms.ToTensor()
])
valid_tfms = tv.transforms.Compose([
tv.transforms.ToTensor()
])
train_ds = tv.datasets.ImageFolder(os.path.join(data_dir,'train'),train_tfms)
valid_ds = tv.datasets.ImageFolder(os.path.join(data_dir,'test'),valid_tfms)
batch_size = 64
train_dl = torch.utils.data.DataLoader(train_ds,batch_size,shuffle=True,pin_memory=True)
valid_dl = torch.utils.data.DataLoader(valid_ds,batch_size,shuffle=True,pin_memory=True)
model_ = tv.models.mobilenet_v2(pretrained=False, num_classes=len(data_classes)).to(device)
#here comes the training part, finding optimum learning rate.
from fastai.vision.all import *
data = DataLoaders(train_dl,valid_dl)
learner = Learner(data, model_, loss_func=torch.nn.functional.cross_entropy, opt_func=Adam, metrics=accuracy)
lr_min,lr_steep,lr_slide,lr_valley = learner.lr_find(suggest_funcs=(minimum,steep,slide,valley))
I checked the runtime I used in colab and it is GPU. Can someone say what am I doing wrong?
| It looks to me like you're using the GPU. You can confirm that you are by running next(learner.model.parameters()).is_cuda immediately prior to lr_find.
What jumps out is that you are not loading pretrained weights (pretrained = False) in your model. This would naturally slow down the total time of training. Set pretrained to True and see if you get good results quicker.
Or, if you are concerned about the per-epoch time of training, the other issue might be if your datasets are very large. Large datasets are usually a good problem to have and, even if the per-epoch time of training is high, the total wall time for training without overfitting is probably similar or better with large datasets.
Just to be thorough, lr_find is not used to actually train. For that, you should use fit or fit_one_cycle with the lr_steep value generated by lr_find.
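For reference, a sketch of the actual training call after lr_find (lr_steep is the value computed in your code above; the number of epochs is just a placeholder):
learner.fit_one_cycle(5, lr_max=lr_steep)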
| https://stackoverflow.com/questions/74569154/ |
What is the error "TypeError: 'int' object is not callable"? | I substitute three NumPy arrays for the audio, combine them, and take the min-max normalization of the result. I am getting an error with this; what should I do?
import torch
import torchaudio
import torchaudio.transforms as T
import os
import requests
import librosa
import matplotlib.pyplot as plt
import numpy as np
# Save the audio sample
_SAMPLE_DIR = "_sample_data"
SAMPLE_WAV_URL = "https://pytorch-tutorial-assets.s3.amazonaws.com/VOiCES_devkit/source-16k/train/sp0307/Lab41-SRI-VOiCES-src-sp0307-ch127535-sg0042.wav"
SAMPLE_WAV_PATH = os.path.join(_SAMPLE_DIR, "speech.wav")
def plot_spectrogram(spec, title=None, ylabel="freq_bin", aspect="auto", xmax=None):
fig, axs = plt.subplots(1, 1)
axs.set_title(title or "Spectrogram (db)")
axs.set_ylabel(ylabel)
axs.set_xlabel("frame")
im = axs.imshow(librosa.power_to_db(spec), origin="lower", aspect=aspect)
if xmax:
axs.set_xlim((0, xmax))
fig.colorbar(im, ax=axs)
plt.show(block=False)
def synthesis(sigList):
maxLength = 0
tmpLength = 0
tmpArray = []
# Find the audio signal with the maximum length
for i, data in enumerate(sigList):
if len(data) > tmpLength:
maxLength = len(data)
tmpLength = len(data)
index = i
# Define a zero-filled array with the length of the longest signal
sig = np.zeros(maxLength)
for i in sigList:
tmp = i.tolist() #numpy→list
# Zero-pad every signal to match the longest one
for data in range(maxLength - len(i)):
tmp.append(0)
tmpArray.append(tmp)
# Combine the three arrays
sig = np.array(tmpArray[0]) + np.array(tmpArray[1]) + np.array(tmpArray[2])
return sig
def min_max(x, axis=None):
min = x.min(axis=axis, keepdims=True)
max = x.max(axis=axis, keepdims=True)
try:
z = (x - min) / (max - min)
except ZeroDivisionError:
z = (x - min) / min
return z
waveform, sample_rate = torchaudio.load(filepath=SAMPLE_WAV_URL)
n_fft = 1024
win_length = None
hop_length = 512
window_fn = torch.hann_window
waveforms = waveform.numpy()
k = waveforms
for i in range(2):
waveforms = np.concatenate([waveforms,k],0)
spectrogram = T.Spectrogram(
n_fft=n_fft,
win_length=win_length,
hop_length=hop_length,
window_fn=window_fn,
power=2.0,
)
sig = min_max(synthesis(waveforms))
spec = spectrogram(sig)
plot_spectrogram(spec[0], title='torchaudio')
spec = spectrogram(sig) This is the line where the error occurs.
Detailed error is TypeError
Traceback (most recent call last)
<ipython-input-44-a0a6c4ba7770> in <module>
70 sig = min_max(synthesis(waveforms))
71
---> 72 spec = spectrogram(sig)
73 plot_spectrogram(spec[0], title='torchaudio')
2 frames
/usr/local/lib/python3.7/dist-packages/torchaudio/functional/functional.py in spectrogram(waveform, pad, window, n_fft, hop_length, win_length, power, normalized, center, pad_mode, onesided, return_complex)
106
107# pack batch
--> 108 shape = waveform.size()
109 waveform = waveform.reshape(-1, shape[-1])
110
TypeError: 'int' object is not callable
| According to the docs for Torchaudio Spectrogram, the parameter that's passed to its return value (spectrogram() in your code) needs to be a PyTorch Tensor. In your code, you're giving it a Numpy array instead, because that's what your function synthesis() returns.
You can convert a Numpy ndarray into a Tensor with torch.from_numpy. For example:
spec = spectrogram(torch.from_numpy(sig))
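Depending on your versions you may also need to match dtypes, because the synthesized NumPy array is float64 while the Hann window created by the transform is float32; if you hit a Double/Float mismatch, casting the input is enough:
spec = spectrogram(torch.from_numpy(sig).float())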
| https://stackoverflow.com/questions/74573351/ |
Can PyTorch L-BFGS be used to optimize a complex parameter? | A brief description of my model:
Consists of a single parameter X of dtype ComplexDouble and shape (20, 20, 20, 3). For reference, this must be complex because I need to perform FFTs etc. on it
X is used to compute a real scalar value, Y as the output
The objective is to minimise the value of Y using autograd to optimize the value of X.
Simple gradient descent-based optimizers like torch.optim.SGD and torch.optim.Adam seem to work fine for this process. I would like to extend this to L-BFGS.
The problem is upon using
optimizer = optim.LBFGS(solver.parameters())
def closure():
optimizer.zero_grad()
Y = model.forward()
Y.backward()
return Y
for i in range(steps):
optimizer.step(closure)
I get the error
File "xx\Python\Python38\lib\site-packages\torch\optim\lbfgs.py", line 410, in step
if gtd > -tolerance_change:
RuntimeError: "gt_cpu" not implemented for 'ComplexDouble'
According to the source file, it's computing the directional derivative to be complex which disrupts the algorithm.
Is there any way to get L-BFGS working for my complex parameter (e.g. using an alternative library) or is this fundamentally impossible? I had some ideas about replacing these "faulty" dot products with something like real(a.conj() * b)) but I wasn't sure whether that would work.
| My intuition was correct. I replaced every occurrence of a.dot(b) in the file with torch.real(a.conj().dot(b)) and L-BFGS is working great!
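For reference, a sketch of the replacement, written as a small helper (real_dot is a name I made up; the stock lbfgs.py does not contain it):
def real_dot(a, b):
    # real part of the complex inner product <a, b>; equals a.dot(b) for real tensors
    return torch.real(torch.dot(a.conj(), b))
Expressions like the gtd dot product from the traceback can then be rewritten with this helper in a copied lbfgs.py, which keeps the directional derivative real.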
| https://stackoverflow.com/questions/74574823/ |
Pytorch torch.linalg.qr is differentiable? | I have a neural network that involves the calculation of the QR decomposition of the input matrix X. Such a matrix is rectangular and it has maximal rank.
My question is if this operation still allows to make the gradients propagate backward during the training or if there can be some issue.
| You are right in your suspicions. It is not always differentiable. Depending on the mode parameter, there are three different cases you need to consider.
From the documentation;
The parameter mode chooses between the full and reduced QR
decomposition. If A has shape (*, m, n), denoting k = min(m, n)
mode='reduced' (default): Returns (Q, R) of shapes (*, m, k), (*, k, n) respectively. It is always differentiable.
mode='complete': Returns (Q, R) of shapes (*, m, m), (*, m, n) respectively. It is differentiable for m <= n.
mode='r': Computes only the reduced R. Returns (Q, R) with Q empty and R of shape (*, k, n). It is never differentiable.
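A quick check of the default mode (a standalone sketch with a tall rectangular matrix):
import torch
A = torch.randn(5, 3, dtype=torch.double, requires_grad=True)  # maximal rank with probability 1
Q, R = torch.linalg.qr(A, mode='reduced')
(Q.sum() + R.sum()).backward()   # works: mode='reduced' is always differentiable
print(A.grad.shape)              # torch.Size([5, 3])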
| https://stackoverflow.com/questions/74576711/ |
Pytorch-Lightning ModelCheckpoint get paths of saved checkpoints | I am using PyTorch Lightning and, among other callbacks, a ModelCheckpoint which saves models with a formatted filename like `filename="model_{epoch}-{val_acc:.2f}"`.
In a process I want to load these checkpoints again, for simplicity lets say I want only the best via save_top_k=N.
As the filename is dynamic, I wonder how I can retrieve the checkpoints easily. Is there a built-in attribute, or something on the trainer, that gives the saved checkpoint paths?
For example like
checkpoint_callback.get_top_k_paths()
I know I can do it with glob and model_dir, but I am wondering if there is a one-line solution built in somewhere.
| You can retrieve the best model path after training from the checkpoint callback:
# retrieve the best checkpoint after training
checkpoint_callback = ModelCheckpoint(dirpath='my/path/')
trainer = Trainer(callbacks=[checkpoint_callback])
model = ...
trainer.fit(model)
checkpoint_callback.best_model_path
To find all the checkpoints you can get the list of files in the dirpath where the checkpoints are saved.
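If you train with save_top_k=N, the callback also keeps track of the retained checkpoints itself. To the best of my knowledge it exposes a best_k_models dict (checkpoint path -> monitored score), but verify this against your Lightning version:
checkpoint_callback = ModelCheckpoint(dirpath='my/path/', save_top_k=3,
                                      monitor='val_acc', mode='max')
# ... after trainer.fit(model):
print(checkpoint_callback.best_k_models)    # dict: checkpoint path -> monitored score
print(checkpoint_callback.best_model_path)  # single best path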
| https://stackoverflow.com/questions/74577562/ |
Why does PyGad fitness_function not work when inside of a class? | I am trying to train a genetic algorithm but for some reason it does not work when it's stored inside of a class. I have two equivalent pieces of code but the one stored inside of a class fails. It returns this..
raise ValueError("The fitness function must accept 2 parameters:
1) A solution to calculate its fitness value.
2) The solution's index within the population.
The passed fitness function named '{funcname}' accepts {argcount} parameter(s).".format(funcname=fitness_func.__code__.co_name, argcount=fitness_func.__code__.co_argcount))
ValueError: The fitness function must accept 2 parameters:
1) A solution to calculate its fitness value.
2) The solution's index within the population.
The passed fitness function named 'fitness_func' accepts 3 parameter(s).
Here is the simplified version of the one that doesnt work.
import torch
import torch.nn as nn
import pygad.torchga
import pygad
class NN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super().__init__()
self.linear1 = nn.Linear(input_size, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, hidden_size)
self.linear4 = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = self.linear1(x)
x = self.linear2(x)
x = self.linear3(x)
x = self.linear4(x)
return x
class Coin:
def __init__(self):
self.NeuralNet = NN(1440, 1440, 3)
def fitness_func(self, solution, solution_idx):
return 0
def trainModel(self):
torch_ga = pygad.torchga.TorchGA(model=self.NeuralNet, num_solutions=10)
ga_instance = pygad.GA(num_generations=10,
num_parents_mating=2,
initial_population=torch_ga.population_weights,
fitness_func=self.fitness_func)
ga_instance.run()
if __name__ == "__main__":
coin = Coin()
coin.trainModel()
Here is the simplified version of the one that does work.
import torch
import torch.nn as nn
import pygad.torchga
import pygad
class NN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super().__init__()
self.linear1 = nn.Linear(input_size, hidden_size)
self.linear2 = nn.Linear(hidden_size, hidden_size)
self.linear3 = nn.Linear(hidden_size, hidden_size)
self.linear4 = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = self.linear1(x)
x = self.linear2(x)
x = self.linear3(x)
x = self.linear4(x)
return x
def fitness_func(solution, solution_idx):
return 0
def trainModel():
NeuralNet = NN(1440, 1440, 3)
torch_ga = pygad.torchga.TorchGA(model=NeuralNet, num_solutions=10)
ga_instance = pygad.GA(num_generations=10,
num_parents_mating=2,
initial_population=torch_ga.population_weights,
fitness_func=fitness_func)
ga_instance.run()
if __name__ == "__main__":
trainModel()
Both of these should work the same but they don't
| When you look at the pygad code you can see it's explicitly checking that the fitness function has exactly two parameters:
# Check if the fitness function accepts 2 paramaters.
if (fitness_func.__code__.co_argcount == 2):
self.fitness_func = fitness_func
else:
self.valid_parameters = False
raise ValueError("The fitness function must accept 2 parameters:\n1) A solution to calculate its fitness value.\n2) The solution's index within the population.\n\nThe passed fitness function named '{funcname}' accepts {argcount} parameter(s).".format(funcname=fitness_func.__code__.co_name, argcount=fitness_func.__code__.co_argcount))
So if you want to use it in a class you'll need to make it a static method so you aren't required to pass in self:
@staticmethod
def fitness_func(solution, solution_idx):
return 0
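Alternatively, if the fitness function really needs access to self, you can keep it as an instance method and pass PyGAD a two-argument wrapper; the check above only inspects the argument count of the object you pass in, and a lambda with two parameters satisfies it (a sketch):
def trainModel(self):
    torch_ga = pygad.torchga.TorchGA(model=self.NeuralNet, num_solutions=10)
    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=2,
                           initial_population=torch_ga.population_weights,
                           fitness_func=lambda solution, solution_idx: self.fitness_func(solution, solution_idx))
    ga_instance.run()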
| https://stackoverflow.com/questions/74578431/ |
Runtime error on WGan-gp algorithm when running on GPU | I am a newbie in PyTorch and am running the WGAN-GP algorithm on Google Colab using the GPU runtime. I encountered the error below. The algorithm works fine on the None runtime, i.e. CPU.
Error generated during training
0%| | 0/3 [00:00<?, ?it/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-18-7e1d4849a60a> in <module>
19 # Calculate gradient penalty on real and fake images
20 # generated by generator
---> 21 gp = gradient_penalty(netCritic, real_image, fake, device)
22 critic_loss = -(torch.mean(critic_real_pred)
23 - torch.mean(critic_fake_pred)) + LAMBDA_GP * gp
<ipython-input-15-f84354d74f37> in gradient_penalty(netCritic, real_image, fake_image, device)
8 # image
9 # interpolated image ← alpha *real image + (1 − alpha) fake image
---> 10 interpolated_image = (alpha*real_image) + (1-alpha) * fake_image
11
12 # calculate the critic score on the interpolated image
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Snippet of my WGan-gp code
def gradient_penalty(netCritic, real_image, fake_image, device=device):
batch_size, channel, height, width = real_image.shape
# alpha is selected randomly between 0 and 1
alpha = torch.rand(batch_size, 1, 1, 1).repeat(1, channel, height, width)
# interpolated image=randomly weighted average between a real and fake
# image
# interpolated image ← alpha *real image + (1 − alpha) fake image
interpolated_image = (alpha*real_image) + (1-alpha) * fake_image
# calculate the critic score on the interpolated image
interpolated_score = netCritic(interpolated_image)
# take the gradient of the score wrt to the interpolated image
gradient = torch.autograd.grad(inputs=interpolated_image,
outputs=interpolated_score,
retain_graph=True,
create_graph=True,
grad_outputs=torch.ones_like
(interpolated_score)
)[0]
gradient = gradient.view(gradient.shape[0], -1)
gradient_norm = gradient.norm(2, dim=1)
gradient_penalty = torch.mean((gradient_norm - 1)**2)
return gradient_penalty
n_epochs = 2000
cur_step = 0
LAMBDA_GP = 10
display_step = 50
CRITIC_ITERATIONS = 5
nz = 100
for epoch in range(n_epochs):
# Dataloader returns the batches
for real_image, _ in tqdm(dataloader):
cur_batch_size = real_image.shape[0]
real_image = real_image.to(device)
for _ in range(CRITIC_ITERATIONS):
fake_noise = get_noise(cur_batch_size, nz, device=device)
fake = netG(fake_noise)
critic_fake_pred = netCritic(fake).reshape(-1)
critic_real_pred = netCritic(real_image).reshape(-1)
# Calculate gradient penalty on real and fake images
# generated by generator
gp = gradient_penalty(netCritic, real_image, fake, device)
critic_loss = -(torch.mean(critic_real_pred)
- torch.mean(critic_fake_pred)) + LAMBDA_GP * gp
netCritic.zero_grad()
# To make a backward pass and retain the intermediary results
critic_loss.backward(retain_graph=True)
optimizerCritic.step()
# Train Generator: max E[critic(gen_fake)] <-> min -E[critic(gen_fake)]
gen_fake = netCritic(fake).reshape(-1)
gen_loss = -torch.mean(gen_fake)
netG.zero_grad()
gen_loss.backward()
# Update optimizer
optimizerG.step()
# Visualization code ##
if cur_step % display_step == 0 and cur_step > 0:
print(f"Step{cur_step}: GenLoss: {gen_loss}: CLoss: {critic_loss}")
display_images(fake)
display_images(real_image)
gen_loss = 0
critic_loss = 0
cur_step += 1
I tried to introduce cuda() at lines 10 and 21 indicated in the error output, but it is not working.
| Here is one approach to solve this kind of error:
Read the error message and locate the exact line where it occured:
... in gradient_penalty(netCritic, real_image, fake_image, device)
8 # image
9 # interpolated image ← alpha *real image + (1 − alpha) fake image
---> 10 interpolated_image = (alpha*real_image) + (1-alpha) * fake_image
11
12 # calculate the critic score on the interpolated image
RuntimeError: Expected all tensors to be on the same device,
but found at least two devices, cuda:0 and cpu!
Look for input tensors that have not been properly transferred to the correct device. Then look for intermediate tensors that have not been transferred.
Here alpha is assigned to a random tensor but no transfer is done!
>>> alpha = torch.rand(batch_size, 1, 1, 1) \
.repeat(1, channel, height, width)
Fix the issue and test:
>>> alpha = torch.rand(batch_size, 1, 1, 1, device=fake_image.device) \
.repeat(1, channel, height, width)
| https://stackoverflow.com/questions/74580743/ |
The shuffling order of DataLoader in pytorch | I am really confused about the shuffle order of DataLoader in pytorch.
Suppose I have a dataset:
datasets = [0,1,2,3,4]
In scenario I, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets,generator=G)
dataloader = DataLoader(dataset=datasets,sampler=ran_sampler)
the shuffling result is 0,4,2,3,1.
In scenario II, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 1,3,4,0,2.
In scenario III, the code is:
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
the shuffling result is 4,1,3,0,2.
Can someone explain what is going on here?
| Based on your code, I made a small modification (to scenario II) and did some inspection:
datasets = [0,1,2,3,4]
torch.manual_seed(1)
G = torch.Generator()
G = G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
# this is different from OP's scenario II because in that case the ran_sampler is not initialized with the right generator.
dataloader = DataLoader(dataset=datasets, shuffle=True, generator=G)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
torch.manual_seed(1)
G = torch.Generator()
G.manual_seed(1)
ran_sampler = RandomSampler(data_source=datasets, generator=G)
dataloader = DataLoader(dataset=datasets, sampler=ran_sampler, generator=G)
print(id(dataloader.generator)==id(dataloader.sampler.generator))
xs = []
for x in dataloader:
xs.append(x.item())
print(xs)
The outputs are:
False
[0, 4, 2, 3, 1]
True
[4, 1, 3, 0, 2]
True
[4, 1, 3, 0, 2]
The reason why the above three seemingly equivalent setups lead to different outcomes is that there are two different generators actually being used inside the DataLoader, one of which is None, in the first scenario.
To make it clear, let's analyze the source. It seems that the generator not only decides the random number generation of the _index_sampler inside DataLoader but also affects the initialization of _BaseDataLoaderIter. See the source code
if sampler is None: # give default samplers
if self._dataset_kind == _DatasetKind.Iterable:
# See NOTE [ Custom Samplers and IterableDataset ]
sampler = _InfiniteConstantSampler()
else: # map-style
if shuffle:
sampler = RandomSampler(dataset, generator=generator) # type: ignore[arg-type]
else:
sampler = SequentialSampler(dataset) # type: ignore[arg-type]
and
self.sampler = sampler
self.batch_sampler = batch_sampler
self.generator = generator
and
def _get_iterator(self) -> '_BaseDataLoaderIter':
if self.num_workers == 0:
return _SingleProcessDataLoaderIter(self)
else:
self.check_worker_number_rationality()
return _MultiProcessingDataLoaderIter(self)
and
class _BaseDataLoaderIter(object):
def __init__(self, loader: DataLoader) -> None:
...
self._index_sampler = loader._index_sampler
Scenario II & Scenario III
Both setups are equivalent. We pass a generator to DataLoader and do not specify the sampler. DataLoader automatically creates a RandomSampler object with the generator and assign the same generator to self.generator.
Scenario I
We pass a sampler to DataLoader with the right generator but do not explicitly specify the keyword argument generator in DataLoader.__init__(...). DataLoader initializes the sampler with the given sampler but uses the default generator None for self.generator and the _BaseDataLoaderIter object returned by self._get_iterator().
| https://stackoverflow.com/questions/74580942/ |
mini-batch gradient descent, loss doesn't improve and accuracy very low | I’m trying to implement mini-batch gradient descent on the popular iris dataset, but somehow I don’t manage to get the accuracy of the model above 75-80%. Also the loss does not decrease and is rather stuck at around 0.45, even when I set the number of iterations to 10000.
Is there something I'm missing here?
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.linear_stack = nn.Sequential(
nn.Linear(4,128),
nn.ReLU(),
nn.Linear(128,64),
nn.ReLU(),
nn.Linear(64,3),
)
def forward(self, x):
logits = self.linear_stack(x)
return logits
training loop, batchsize per epoch = 10.
transform_label maps [0,1,2] to the labels.
lr = 0.01
model = NeuralNetwork()
optim = torch.optim.Adam(model.parameters(), lr=lr)
loss = torch.nn.CrossEntropyLoss()
n_iters = 1000
steps = n_iters/10
LOSS = []
for epochs in range(n_iters):
for i,(inputs, labels) in enumerate(train_loader):
out = model(inputs)
train_labels = transform_label(labels)
l = loss(out, train_labels)
l.backward()
#update weights
optim.step()
optim.zero_grad()
LOSS.append(l.item())
if epochs%steps == 0:
print(f"\n epoch: {int(epochs+steps)}/{n_iters}, loss: {sum(LOSS)/len(LOSS)}")
#if i % 1 == 0:
#print(f" steps: {i+1}, loss : {l.item()}")
output:
epoch: 100/1000, loss: 1.0636296272277832
epoch: 400/1000, loss: 0.5142968013338076
epoch: 500/1000, loss: 0.49906910391073867
epoch: 900/1000, loss: 0.4586030915751588
epoch: 1000/1000, loss: 0.4543738731996598
Is it possible to calculate the loss like that or should I use torch.max()? If I do so I get this Error:
Expected floating point type for target with class probabilities, got Long
| You didn't provide enough data and code to reproduce the problem, so I wrote complete, working code to train your model on the IRIS dataset.
Imports and Classes.
import torch
from torch import nn
import pandas as pd
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import classification_report
class MyDataset(Dataset):
def __init__(self, X, Y):
assert len(X) == len(Y)
self.X = X
self.Y = Y
def __len__(self):
return len(self.X)
def __getitem__(self, item):
x = self.X[item]
y = self.Y[item]
return x, y
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.linear_stack = nn.Sequential(
nn.Linear(4,128),
nn.ReLU(),
nn.Linear(128,64),
nn.ReLU(),
nn.Linear(64,3),
)
def forward(self, x):
logits = self.linear_stack(x)
return logits
Read and Preprocess the data.
# Dataset was downloaded from https://archive.ics.uci.edu/ml/machine-learning-databases/iris/
df = pd.read_csv("iris.data", names=["x1", "x2", "x3", "x4", "label"])
X, Y = df[['x1', "x2", "x3", "x4"]], df['label']
# First, we transform the labels to numbers 0,1,2
encoder = LabelEncoder()
Y = encoder.fit_transform(Y)
# We split the dataset to train and test
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
# Due to the nature of Neural Networks, we standardize the inputs to get better results
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
train_dataset = MyDataset(X_train, Y_train)
test_dataset = MyDataset(X_test, Y_test)
train_loader = DataLoader(train_dataset, batch_size=8)
test_loader = DataLoader(test_dataset, batch_size=8)
Train the model.
lr = 0.01
model = NeuralNetwork()
optim = torch.optim.Adam(model.parameters(), lr=lr)
loss = torch.nn.CrossEntropyLoss()
n_iters = 1000
steps = n_iters/10
LOSS = []
for epochs in range(n_iters):
for i,(inputs, labels) in enumerate(train_loader):
optim.zero_grad()
out = model(inputs.float())
l = loss(out, labels)
l.backward()
optim.step()
LOSS.append(l.item())
if epochs%steps == 0:
print(f"\n epoch: {int(epochs+steps)}/{n_iters}, loss: {sum(LOSS)/len(LOSS)}")
output:
Then, we need to run the model on test data to calculate the metrics.
preds = []
with torch.no_grad():
for i,(inputs, labels) in enumerate(test_loader):
out = model(inputs.float())
preds.extend(list(torch.argmax(out, axis=1).cpu().numpy()))
To get the metrics, you can use "classification_report".
print(classification_report(y_true=Y_test, y_pred=preds))
output:
I hope my answer helps you.
| https://stackoverflow.com/questions/74589556/ |
PyTorch vectorized sum different from looped sum | I am using torch 1.7.1 and I noticed that vectorized sums are different from sums in a loop if the indices are repeated. For example:
import torch
indices = torch.LongTensor([0,1,2,1])
values = torch.FloatTensor([1,1,2,2])
result = torch.FloatTensor([0,0,0])
looped_result = torch.zeros_like(result)
for i in range(indices.shape[0]):
looped_result[indices[i]] += values[i]
result[indices] += values
print('result:',result)
print('looped result:', looped_result)
results in:
result: tensor([1., 2., 2.])
looped result: tensor([1., 3., 2.])
As you can see the looped variable has the correct sums while the vectorized one doesn’t. Is it possible to avoid the loop and still get the correct result?
| The issue here is that you're indexing result multiple times at the same index, which is bound to fail for this inplace operation. Instead what you'd need to use is index_add or index_add_, e.g. (as a continuation of your snippet):
>>> result_ia = torch.zeros_like(result)
>>> result_ia.index_add_(0, indices, values)
tensor([1., 3., 2.])
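An equivalent alternative for this 1-D case is scatter_add_, which also accumulates correctly over repeated indices:
>>> result_sa = torch.zeros_like(result)
>>> result_sa.scatter_add_(0, indices, values)
tensor([1., 3., 2.])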
| https://stackoverflow.com/questions/74593825/ |
Torch: Input type and weight type (torch.cuda.FloatTensor) should be the same | Note: I have already seen similar questions: the same error, tell torch not to use GPU, but the answers do not work for me.
I have installed PyTorch version 1.13.0+cu117 (the latest), and the code structure is as follows (an image classification task):
# os.environ["CUDA_VISIBLE_DEVICES"]="" # required?
device = torch.device("cpu") # use CPU
...
train_set = DataLoader(
torchvision.datasets.ImageFolder(path, transform), **kwargs
)
...
model = myCNN().to(device)
optimizer = SGD(args)
loss = CrossEntropyLoss()
train()
I want to train on CPU.
For dataloader, in accordance to this, I've set pin_memory=True and non_blocking=pin_memory. The error persists even on setting pin_memory=False.
The training loop has the following structure:
for epoch in n_epochs:
model.train()
inputs, labels = inputs.to(device, non_blocking=non_blocking), labels.to(device, non_blocking=non_blocking)
Compute loss, back-propagate
The error traceback (on calling train()):
Traceback (most recent call last):
File "code.py", line 233, in <module>
train()
File "code.py", line 122, in train
outputs = model(inputs)
File "...\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "code.py", line 87, in forward
output = self.network(input)
File "...\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "...\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "...\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "...\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "...\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
Edit: There was a comment regarding possible issues due to the model. The model is roughly:
class myCNN(nn.Module):
def __init__(self, ...other args...):
super().__init__()
self.network = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
nn.ReLU(),
nn.MaxPool2d(kernel_size),
... similar convolutional layers ...
nn.Flatten(),
nn.Linear(in_features, out_features)
)
def forward(self, input):
output = self.network(input)
return output
Since I have transferred both model and data to the same device, what could be the reason of this error? How to correct it?
| The issue was due to incorrect usage of summary from torchinfo. It does a forward pass (if an input size is provided), and the device for that pass is, by default, selected on the basis of torch.cuda.is_available().
If the device (as specified in the question) is passed to summary as an argument, the training runs just fine.
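For reference, a sketch of the corrected call; the input size is a placeholder for your own data shape, and I'm assuming the device keyword of torchinfo's summary, so check it against your installed version:
from torchinfo import summary
# pass the same device the model and data use, so the internal forward pass
# performed by summary does not silently run on CUDA
summary(model, input_size=(1, 3, 224, 224), device=device)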
| https://stackoverflow.com/questions/74609050/ |
manual_seed for mps devices in pytorch | How do I set up a manual seed for mps devices using pytorch?
With cuda devices the code should work like this:
if torch.cuda.is_available():
torch.cuda.manual_seed(0)
I'm using an apple m1 chip. The following statement returns True:
torch.backends.mps.is_available()
But following statement is not possible:
torch.backends.mps.manual_seed(0)
| According to the docs:
You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA):
import torch
torch.manual_seed(0)
| https://stackoverflow.com/questions/74614882/ |
ERROR in CNN Pytorch; shape '[-1, 192]' is invalid for input of size 300000 | I want to change the kernel size to 3 and the output channels of the convolutional layers to 8 and 16 respectively, but when I try to change them I get an error message. The following code is working fine, but the error appears when I change the kernel size and output channels like this:
self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)
self.fc1 = nn.Linear(in_features=16*2*2, out_features=128)
It generates an error about an invalid input size.
working code
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size=5)
self.fc1 = nn.Linear(in_features=12*4*4,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=64)
self.out = nn.Linear(in_features=64,out_features=10)
def forward(self,x):
#input layer
x = x
#first hidden layer
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#second hidden layer
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#third hidden layer
x = x.reshape(-1,12*4*4)
x = self.fc1(x)
x = F.relu(x)
#fourth hidden layer
x = self.fc2(x)
x = F.relu(x)
#output layer
x = self.out(x)
return x
batch_size = 1000
train_dataset = FashionMNIST(
'../data', train=True, download=True,
transform=transforms.ToTensor())
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataset = FashionMNIST(
'../data', train=False, download=True,
transform=transforms.ToTensor())
testloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
model = Network()
losses = []
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
epochs = 1
for i in range(epochs):
batch_loss = []
for j, (data, targets) in enumerate(trainloader):
optimizer.zero_grad()
ypred = model(data)
loss = criterion(ypred, targets.reshape(-1))
loss.backward()
optimizer.step()
batch_loss.append(loss.item())
if i>10:
optimizer.lr = 0.0005
losses .append(sum(batch_loss) / len(batch_loss))
print('Epoch {}:\tloss {:.4f}'.format(i, losses [-1]))
| By changing your kernel size and output size in intermediate filters, you also change the size of your intermediate activations.
I suppose your input data is of size (1,28,28) (the usual size for FashionMNIST).
In your original code, before the layer self.fc1, after two 2D convolutional layers and two maxpools, the shape of your activations will be (12, 4, 4). However, if you change your kernel size to 3 and the output channels of the convolutional layers to 8 and 16, this shape will change. It will now be (16, 5, 5). Thus, you have to change your network. Try the following:
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.conv1 = nn.Conv2d(in_channels=1,out_channels=8,kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=8,out_channels=16,kernel_size=3)
self.fc1 = nn.Linear(in_features=16*5*5,out_features=128)
self.fc2 = nn.Linear(in_features=128,out_features=64)
self.out = nn.Linear(in_features=64,out_features=10)
def forward(self,x):
#input layer
x = x
#first hidden layer
x = self.conv1(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#second hidden layer
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x,kernel_size=2,stride=2)
#third hidden layer
x = x.reshape(-1,16*5*5)
x = self.fc1(x)
x = F.relu(x)
#fourth hidden layer
x = self.fc2(x)
x = F.relu(x)
#output layer
x = self.out(x)
return x
Check Pytorch's documentation for the Conv2D and Maxpool layers.
The output size after a Conv2D layer is:
H_out = ⌊[H_in + 2*padding[0] − dilation[0]*(kernel_size[0]−1)−1]/stride[0] +1⌋
W_out = ⌊[W_in + 2*padding[1] - dilation[1]*(kernel_size[1]-1)-1]/stride[1] +1⌋
As you use the default values, the output size after the first convolutional layer will be:
H_out = W_out = 28+0-2-1+1 = 26
The following maxpool will divide this size by 2, and after the second convolutional layer the size will be:
13+0-2-1+1 = 11
The second maxpool will divide this by 2 again, taking the floor value, which is 5. Thus, the output shape after the second block will be (n, 16, 5, 5). Before the first fully connected layer, this has to be flattened. This is why the number of input features of self.fc1 is 16*5*5.
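A small helper to sanity-check these numbers (a sketch using the default stride, padding and dilation from the question):
def conv_out(size, kernel, stride=1, padding=0, dilation=1):
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
h = conv_out(28, 3)     # 26 after conv1
h = conv_out(h, 2, 2)   # 13 after the first max pool
h = conv_out(h, 3)      # 11 after conv2
h = conv_out(h, 2, 2)   # 5 after the second max pool
print(16 * h * h)       # 400 = 16*5*5 input features for fc1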
| https://stackoverflow.com/questions/74625420/ |
Memory issue in running multiple processes on GPU | This question can be viewed as related to my other question.
I tried running multiple machine learning processes in parallel (with bash). These are written using PyTorch. After a certain number of concurrent programs (10 in my case), I get the following error:
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
As mentioned in this answer,
...it could occur because the VRAM memory limit was hit (which is rather non-intuitive from the error message).
For my case with PyTorch model training, decreasing batch size helped. You could try this or maybe decrease your model size to consume less VRAM.
I tried the solution mentioned here, to enforce a per-process GPU memory usage limit, but this issue persists.
This problem does not occur with a single process, or a fewer number of processes. Since only one context runs at a single time instant, why does this cause memory issue?
This issue occurs with/without MPS. I thought it could occur with MPS, but not otherwise, as MPS may run multiple processes in parallel.
|
Since only one context runs at a single time instant, why does this cause memory issue?
Context-switching doesn't dump the contents of GPU "device" memory (i.e. DRAM) to some other location. If you run out of this device memory, context switching doesn't alleviate that.
If you run multiple processes, the memory used by each process will add up (just like it does in the CPU space) and GPU context switching (or MPS or time-slicing) does not alleviate that in any way.
It's completely expected that if you run enough processes using the GPU, eventually you will run out of resources. Neither GPU context switching nor MPS nor time-slicing in any way affects the memory utilization per process.
| https://stackoverflow.com/questions/74632058/ |
how to change the out_features of densenet121 model? | How to change the out_features of densenet121 model?
I am using the code below to train the model:
from torch.nn.modules.dropout import Dropout
class Densnet121(nn.Module):
def __init__(self):
super(Densnet121, self).__init__()
self.cnn1 = nn.Conv2d(in_channels=3 , out_channels=64 , kernel_size=3 , stride=1 )
self.Densenet_121 = models.densenet121(pretrained=True)
self.gap = AvgPool2d(kernel_size=2, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(1024)
self.do1 = nn.Dropout(0.25)
self.linear = nn.Linear(256,256)
self.bn2 = nn.BatchNorm2d(256)
self.do2 = nn.Dropout(0.25)
self.output = nn.Linear(64 * 64 * 64,2)
self.act = nn.ReLU()
def densenet(self):
for param in self.Densenet_121.parameters():
param.requires_grad = False
self.Densenet_121.classifier = nn.Linear(1024, 1024)
return self.Densenet_121
def forward(self, x):
img = self.act(self.cnn1(x))
img = self.densenet(img)
img = self.gap(img)
img = self.bn1(img)
img = self.do1(img)
img = self.linear(img)
img = self.bn2(img)
img = self.do2(img)
img = torch.flatten(img, 1)
img = self.output(img)
return img
When training this model, I face the following error:
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 64, 62, 62] to have 3 channels, but got 64 channels instead
| Your first conv layer outputs a tensor of shape (b, 64, h, w) while the following layer, the densenet model expects 3 channels. Hence the error that was raised:
"expected input [...] to have 3 channels, but got 64 channels instead"
Unfortunately, this value is hardcoded in the source of the Densenet class, see reference.
One workaround however is to overwrite the first convolutional layer after the densenet has been initialized. Something like this should work:
# First gather the conv layer specs
conv = self.Densenet_121.features.conv0
kwargs = {k: getattr(conv, k) for k in
          ('out_channels', 'stride', 'kernel_size', 'padding')}
# overwrite with identical specs but a new in_channels
self.Densenet_121.features.conv0 = nn.Conv2d(
    in_channels=64, bias=conv.bias is not None, **kwargs)
Alternatively, you can do:
w = self.Densenet_121.features.conv0.weight
w.data = torch.rand(len(w), 64, *w.shape[2:])  # keep out_channels and kernel size, change in_channels to 64
This replaces the underlying convolutional layer weight without updating its metadata (e.g. conv.in_channels remains equal to 3), which could have side effects. So I would recommend following the first approach.
| https://stackoverflow.com/questions/74633673/ |
Pytorch's share_memory_() vs built-in Python's shared_memory: Why in Pytorch we don't need to access the shared memory-block? | Trying to learn about the built-in multiprocessing and Pytorch's multiprocessing packages, I have observed a different behavior between both. I find this to be strange since Pytorch's package is fully-compatible with the built-in package.
Concretely, I'm referring to the way variables are shared between processes. In PyTorch, tensors are moved to shared memory via the in-place operation share_memory_(). On the other hand, we can get the same result with the built-in package by using the shared_memory module.
The difference between the two that I'm struggling to understand is that, with the built-in version, we have to explicitly access the shared memory block inside the launched process, whereas we don't need to do that with the PyTorch version.
Here is a PyTorch toy example showing this:
import time
import torch
# the same behavior happens when importing:
# import multiprocessing as mp
import torch.multiprocessing as mp
def get_time(s):
return round(time.time() - s, 1)
def foo(a):
# wait ~1sec to print the value of the tensor.
time.sleep(1.0)
with lock:
#-------------------------------------------------------------------
# WITHOUT explicitly accessing the shared memory block, we can observe
# that the tensor has changed:
#-------------------------------------------------------------------
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*50)
# create tensor and assign it to shared memory.
a = torch.zeros(2).share_memory_()
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# start child process.
p0 = mp.Process(target=foo, args=(a,))
p0.start()
# modify the value of the tensor after ~0.5sec.
time.sleep(0.5)
with lock:
a[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a}")
time.sleep(1.5)
p0.join()
which outputs (as expected):
Module Time Value
--------------------------------------------------
__main__ 0.0 tensor([0., 0.])
__main__ 0.5 tensor([1., 0.])
__mp_main__ 1.0 tensor([1., 0.])
And here is a toy example with the built-in package:
import time
import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
def get_time(s):
return round(time.time() - s, 1)
def foo(shm_name, shape, type_):
#-------------------------------------------------------------------
# WE NEED TO explicitly access the shared memory block to observe
# that the array has changed:
#-------------------------------------------------------------------
existing_shm = shared_memory.SharedMemory(name=shm_name)
a = np.ndarray(shape, type_, buffer=existing_shm.buf)
# wait ~1sec to print the value.
time.sleep(1.0)
with lock:
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*35)
# create numpy array and shared memory block.
a = np.zeros(2,)
shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
a_shared = np.ndarray(a.shape, a.dtype, buffer=shm.buf)
a_shared[:] = a[:]
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
# start child process.
p0 = mp.Process(target=foo, args=(shm.name, a.shape, a.dtype))
p0.start()
# modify the value of the vaue after ~0.5sec.
time.sleep(0.5)
with lock:
a_shared[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
time.sleep(1.5)
p0.join()
which equivalently outputs, as expected:
Module Time Value
-----------------------------------
__main__ 0.0 [0. 0.]
__main__ 0.5 [1. 0.]
__mp_main__ 1.0 [1. 0.]
So what I'm struggling to understand is why we don't need to follow the same steps in both versions, built-in and PyTorch's, i.e. how PyTorch is able to avoid the need to explicitly access the shared memory block.
P.S. I'm using a Windows OS and Python 3.9
| PyTorch has a simple wrapper around shared memory; Python's shared_memory module is only a wrapper around the underlying OS-dependent functions.
The way it can be done is that you don't serialize the array or the shared memory themselves, and only serialize what's needed to create them, by using the __getstate__ and __setstate__ methods from the docs, so that your object acts as both a proxy and a container at the same time.
The following bar class can double as a proxy and a container this way, which is useful if the user shouldn't have to worry about the shared memory part.
import time
import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
class bar:
def __init__(self):
self._size = 10
self._type = np.uint8
self.shm = shared_memory.SharedMemory(create=True, size=self._size)
self._mem_name = self.shm.name
self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)
def __getstate__(self):
"""Return state values to be pickled."""
return (self._mem_name, self._size, self._type)
def __setstate__(self, state):
"""Restore state from the unpickled state values."""
self._mem_name, self._size, self._type = state
self.shm = shared_memory.SharedMemory(self._mem_name)
self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)
def get_time(s):
return round(time.time() - s, 1)
def foo(shm, lock):
# -------------------------------------------------------------------
# without explicitly accessing the shared memory block, we observe
# that the array has changed:
# -------------------------------------------------------------------
a = shm
# wait ~1sec to print the value.
time.sleep(1.0)
with lock:
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
# global variables.
s = time.time()
if __name__ == '__main__':
lock = mp.Lock() # to work on windows/mac.
print("Module\t\tTime\t\tValue")
print("-" * 35)
# create numpy array and shared memory block.
a = bar()
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
# start child process.
p0 = mp.Process(target=foo, args=(a, lock))
p0.start()
# modify the value of the vaue after ~0.5sec.
time.sleep(0.5)
with lock:
a.arr[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
time.sleep(1.5)
p0.join()
Python just makes it much easier to hide such details inside the class without bothering the user with them.
Edit: I wish they'd make locks non-inheritable so your code would raise an error on the lock; otherwise you'll find out one day that it doesn't actually lock, after it crashes your application in production.
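One caveat worth adding: SharedMemory blocks are not freed automatically. In a real application each process should call close() on its handle when it is done, and exactly one process should call unlink() once nobody needs the data anymore, otherwise the block leaks:
shm.close()    # in every process that opened the block
shm.unlink()   # in exactly one process, when the data is no longer needed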
| https://stackoverflow.com/questions/74635994/ |
Conv1d Pytorch on the columns of a matrix | I would like to compute, in the most efficient way, the following operation:
cc = nn.Conv1d(1,1,3,1,1,bias=False)
U = torch.randn(1,1,10,10)
V = torch.zeros_like(U)
for i in range(10):
V[:,:,:,i] = cc(U[:,:,:,i])
In other words, I would like to apply a 1d convolution on the columns of the input U. Of course the for loop is too slow and I am sure there is a more efficient way to solve the problem. However, I can not find any idea to achieve that.
| You can apply a 2D convolution with a rectangular window that spans only the column (height) dimension, i.e. 3x1, padding only along that dimension:
conv = nn.Conv2d(1, 1, kernel_size=(3, 1), padding=(1, 0), bias=False)
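To check the equivalence against the loop, you can copy the 1-D kernel into the 2-D layer and compare (a sketch continuing the snippet from the question):
with torch.no_grad():
    conv.weight.copy_(cc.weight.unsqueeze(-1))  # (1, 1, 3) -> (1, 1, 3, 1)
V_fast = conv(U)
print(torch.allclose(V, V_fast, atol=1e-6))     # True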
| https://stackoverflow.com/questions/74639953/ |
FashionMNIST Dataset not transforming to Tensor | Trying to calculate the mean and standard deviation of the dataset to normalise it afterwards.
Current Code:
train_dataset = datasets.FashionMNIST('data', train=True, download = True, transform=[transforms.ToTensor()])
test_dataset = datasets.FashionMNIST('data', train=False, download = True, transform=[transforms.ToTensor()])
def calc_torch_mean_std(tens):
mean = torch.mean(tens, dim=1)
std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1))
return(std, mean)
train_mean, train_std = calc_torch_mean_std(train_dataset)
test_mean, test_std = calc_torch_mean_std(test_dataset)
However, i'm getting the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in <module>
8 return(std, mean)
9
---> 10 train_mean, train_std = calc_torch_mean_std(train_dataset)
11
12 test_mean, test_std = calc_torch_mean_std(test_dataset)
/var/folders/16/crymx03s6pzfspm_3qfrlkx00000gn/T/ipykernel_72423/605045038.py in calc_torch_mean_std(tens)
4
5 def calc_torch_mean_std(tens):
----> 6 mean = torch.mean(tens, dim=1)
7 std = torch.sqrt(torch.mean((tens - mean[:, None]) ** 2, dim=1))
8 return(std, mean)
TypeError: mean() received an invalid combination of arguments - got (FashionMNIST, dim=int), but expected one of:
* (Tensor input, *, torch.dtype dtype)
* (Tensor input, tuple of ints dim, bool keepdim, *, torch.dtype dtype, Tensor out)
* (Tensor input, tuple of names dim, bool keepdim, *, torch.dtype dtype, Tensor out)
It should be getting a tensor as i transform the data as it comes in using transforms.ToTensor().
Checked import of transforms and it is okay. Checked parameters for the datasets.FashionMNIST() and transform is correctly used (should work both with and without [ ]).
Expecting no error, and to get the mean and std for both datasets.
| datasets.FashionMNIST returns (image, target) where target is index of the target class. So if you want to take the mean you need to extract just the image.
images = torch.vstack([pair[0] for pair in train_dataset])
images should now be of shape (N, H, W) and you can do whatever you want from there.
Another solution as noted by OP is to use train_dataset.data to directly access the data.
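Using the raw data attribute, a sketch of the whole mean/std computation looks like this (FashionMNIST stores the images as a uint8 tensor of shape (N, 28, 28)):
images = train_dataset.data.float() / 255.0
train_mean = images.mean()
train_std = images.std()
print(train_mean, train_std)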
| https://stackoverflow.com/questions/74644993/ |
conv2d() received an invalid combination of arguments | After the resnet convolution, I want to further compress the 256 dimensions to 20 dimensions. I directly wrote a layer at the end, but after forward propagation there is an error in this layer, and I don't know why.
def forward(self, x):
x = self.conv1(x)
dif_residual1 = self.downsample1(x)
x = self.layer1_1(x)
x =x + dif_residual1
residual = x
x = self.layer1_2(x)
x = x + residual
residual = x
x = self.layer1_3(x)
x = x + residual
if self.out_channel != 256:
x = self.layer2
filters = torch.ones(self.batch_size, self.out_channel, 1, 1).detach().requires_grad_(False).to(self.device)
x = F.conv2d(x, weight=filters, padding=0)
The shape of x before the if statement is:
x = {Tensor: (1, 256, 117, 240)}
But after the if statement is executed, x is no longer a tensor; it becomes an nn.Sequential module (as the attached picture showed).
The error I get is this:
x = F.conv2d(feature, weight=filters, padding=0)
TypeError: conv2d() received an invalid combination of arguments - got (Sequential, weight=Tensor, padding=int), but expected one of:
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
I encountered a new problem:
File "D:\software\Anaconda\envs\torch1.10\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1, 117, 240]], which is output 0 of AddBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
My code:
class VGG(nn.Module):
def __init__(self, in_channel, out_channel=None, init_weights=True, device='gpu',batch_size=1):
super(VGG, self).__init__()
self.batch_size = batch_size
self.out_channel = out_channel
if device == 'gpu':
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
else:
self.device = torch.device("cpu")
modes = 'reflect'
out_channel1 = 64
self.conv1_1 = nn.Sequential(
nn.Conv2d(in_channels=in_channel, out_channels=out_channel1, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel1),
nn.LeakyReLU()
)
self.conv1_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel1, out_channels=out_channel1, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel1),
nn.LeakyReLU()
)
out_channel2 = 128
self.conv2_1 = nn.Sequential(
nn.Conv2d(in_channels=out_channel1, out_channels=out_channel2, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel2),
nn.LeakyReLU()
)
self.conv2_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel2, out_channels=out_channel2, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel2),
nn.LeakyReLU()
)
out_channel3 = 256
self.conv3_1 = nn.Sequential(
nn.Conv2d(in_channels=out_channel2, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
self.conv3_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
if out_channel == None:
self.out_channel = 256
self.conv3_3 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1,
padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
else:
self.conv3_3 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU(),
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel, kernel_size=3, stride=1, padding=1, padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel),
nn.LeakyReLU()
)
if init_weights:
self._init_weight()
def forward(self, x):
x = self.conv1_1(x)
x = self.conv1_2(x)
x = self.conv2_1(x)
x = self.conv2_2(x)
x = self.conv3_1(x)
x = self.conv3_2(x)
x = self.conv3_3(x)
feature = x
filters = torch.ones(self.batch_size, self.out_channel, 1, 1).detach().requires_grad_(False).to(self.device)
x = F.conv2d(x, weight = filters, padding = 0)
return x,feature
out_channel = 20
model = VGG(in_channel=12, out_channel=out_channel, init_weights=True, batch_size=batch_size)
for epoch in range(start_epoch+1,epochs):
# train
model.train()
running_loss = 0.0
train_bar = tqdm(train_loader, file=sys.stdout)
for step, data in enumerate(train_bar):
images, labels = data
optimizer.zero_grad()
outputs,feature = model(images.to(device))
outputs = tonser_nolmal(outputs)
loss = loss_function(outputs, labels.to(device))
loss.backward()
optimizer.step()
running_loss += loss.item()
train_bar.desc = "train epoch[{}/{}] loss:{:.6f}".format(epoch + 1,
epochs,
loss)
checkpoint = {
"net": model.state_dict(),
"optimizer": optimizer.state_dict(),
"epoch": epoch
}
torch.save(checkpoint, save_path + "/model-{}.pth".format(epoch))
# validate
model.eval()
count_acc = 0.0
count_mae = 0.0
with torch.no_grad():
val_bar = tqdm(validate_loader, file=sys.stdout)
for val_data in val_bar:
val_images, val_labels = val_data
outputs,_ = model(val_images.to(device))
# outputs = F.normalize(outputs,dim=3)
outputs = tonser_nolmal(outputs)
loss = loss_function(outputs, val_labels.to(device))
count_acc = count_acc + loss.item()
mae = Evaluation().MAE(outputs, val_labels.to(device))
count_mae = count_mae + mae.item()
| The error is likely to be caused by the following variable assignment:
if self.out_channel != 256:
x = self.layer2
which can be easily fixed by changing it to
x = self.layer2(x)
Update:
As the OP updated the code, I did some tests. There were several things which I found problematic:
self._init_weight was not provided, so I commented it out;
filters = torch.ones(self.batch_size, self.out_channel, 1, 1).detach().requires_grad_(False).to(self.device). The filter weight should have a shape of (c_out, c_in, kernel_size, kernel_size). However, batch_size appeared in the position of out_channels.
The role of filter in the forward was not clear to me. If you wanted to reduce the out_channels further from 256 to 20, then initializing your model with VGG(..., out_channel=20) is sufficient. Basically, self.conv3_3 would do the job.
On my end, I modified the code a little bit and it ran successfully:
import sys
import torch
import torch.nn as nn
from tqdm import tqdm
from torchvision.datasets import FakeData
from torch.utils.data import DataLoader
import torch.nn.functional as F
dataset = [torch.randn(12, 64, 64) for _ in range(1000)]
train_loader = DataLoader(dataset, batch_size=1, shuffle=True)
class VGG(nn.Module):
def __init__(self, in_channel, out_channel=None, init_weights=True, device='cpu', batch_size=1):
super(VGG, self).__init__()
self.batch_size = batch_size
self.out_channel = out_channel
self.device = device
modes = 'reflect'
out_channel1 = 64
self.conv1_1 = nn.Sequential(
nn.Conv2d(in_channels=in_channel, out_channels=out_channel1, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel1),
nn.LeakyReLU()
)
self.conv1_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel1, out_channels=out_channel1, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel1),
nn.LeakyReLU()
)
out_channel2 = 128
self.conv2_1 = nn.Sequential(
nn.Conv2d(in_channels=out_channel1, out_channels=out_channel2, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel2),
nn.LeakyReLU()
)
self.conv2_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel2, out_channels=out_channel2, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel2),
nn.LeakyReLU()
)
self.out_channel3 = out_channel3 = 256
self.conv3_1 = nn.Sequential(
nn.Conv2d(in_channels=out_channel2, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
self.conv3_2 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode = modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
self.out_channel = out_channel
if out_channel == None:
self.conv3_3 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1,
padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU()
)
else:
self.conv3_3 = nn.Sequential(
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel3, kernel_size=3, stride=1, padding=1, padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel3),
nn.LeakyReLU(),
nn.Conv2d(in_channels=out_channel3, out_channels=out_channel, kernel_size=3, stride=1, padding=1, padding_mode=modes, bias=False),
nn.BatchNorm2d(out_channel),
nn.LeakyReLU()
)
# The implementation of _init_weight is not found
# if init_weights:
# self._init_weight()
def forward(self, x):
x = self.conv1_1(x)
x = self.conv1_2(x)
x = self.conv2_1(x)
x = self.conv2_2(x)
x = self.conv3_1(x)
x = self.conv3_2(x)
x = self.conv3_3(x)
feature = x
if x.shape[1] == 256: # self.out_channel is None
filters = torch.ones(20, self.out_channel3, 1, 1).to(self.device)
x = F.conv2d(x, weight = filters, padding = 0)
return x, feature
out_channel = 20
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = VGG(in_channel=12, out_channel=None, init_weights=True, device=device, batch_size=1)
model.to(device)
print(model(next(iter(train_loader)).to(device))[0].shape)
model = VGG(in_channel=12, out_channel=20, init_weights=True, device=device, batch_size=1)
model.to(device)
print(model(next(iter(train_loader)).to(device))[0].shape)
Outputs:
torch.Size([1, 20, 64, 64])
torch.Size([1, 20, 64, 64])
| https://stackoverflow.com/questions/74650187/ |
Load and Retrain PyTorch model… | Hi all! Can you help me? I have a question: "How can I retrain a PyTorch model with just a .pt file?"
I've looked at many guides but haven't found the answer. Everything there has a model class.
I tried to import it using torch.load() and other approaches, but it doesn't work. When I load the model with torch.load() I can't get the parameters.
Do I write the MyModel class myself? Doesn't it have to match the class in the .pt?
| To retrain a PyTorch model with just a .pt file, you will need to use the PyTorch API to create a new model instance and load the .pt file as the initial weights for the model. This can be done using the torch.load() function to load the .pt file, and then using the model.load_state_dict() method to load the weights into the model.
Here is an example of how this might be done:
import torch
# Create a new model instance
model = MyModel()
# Load the .pt file as the initial weights for the model
weights = torch.load('model.pt')
model.load_state_dict(weights)
# Retrain the model using the loaded weights as the starting point
# (a plain nn.Module has no fit() method; run your usual training loop here)
In this example, MyModel is the class for your model, and model.pt is the .pt file containing the initial weights for the model. You then continue training with your usual optimizer, loss function and training loop, using the loaded weights as the starting point.
It is important to note that this approach assumes that the .pt file was created using the same model class (MyModel in this example) and that the model architecture has not changed since the .pt file was created. If the model architecture has changed, you will need to make sure that the new model architecture is compatible with the weights in the .pt file, or you may need to use a different approach to retrain the model.
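For completeness, a minimal sketch of what the retraining loop itself could look like in plain PyTorch (the DataLoader train_loader, the optimizer settings and the loss are illustrative assumptions, not part of the original answer):
import torch
import torch.nn as nn
model = MyModel()                                   # must match the architecture that produced model.pt
model.load_state_dict(torch.load('model.pt'))       # load the saved weights
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()                   # pick the loss that matches your task
for epoch in range(5):                              # continue training for a few more epochs
    for inputs, labels in train_loader:             # train_loader is assumed to already exist
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
torch.save(model.state_dict(), 'model_retrained.pt')  # save the updated weights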
| https://stackoverflow.com/questions/74654315/ |
In pytorch torchscript, how to define multiple entry points | I have a torch model as follows:
MyModel
update(self) : Update some params.
predict(self,X) : Predict with some input tensor.
When exporting to torchscript, is there a way to have 2 entry points:
One for update()
One for predict()
| Yes, you can define multiple entry points in a TorchScript model by using the @torch.jit.export decorator to specify which methods should be exported as entry points.
For example, given a PyTorch model defined as follows:
class MyModel(nn.Module):
def update(self):
# Update some params.
def predict(self, X):
# Predict with some input tensor.
You can use the @torch.jit.export decorator to specify that the update and predict methods should be exported as entry points in the resulting TorchScript module, like this:
class MyModel(nn.Module):
@torch.jit.export
def update(self):
# Update some params.
@torch.jit.export
def predict(self, X):
# Predict with some input tensor.
You can then export the MyModel class to TorchScript using the following code:
model = MyModel()
traced_model = torch.jit.script(model)
The resulting TorchScript module will have two entry points, update and predict, which you can use to call the corresponding methods of your model.
traced_model.update()
traced_model.predict(X)
Alternatively, you can also use the torch.jit.export decorator at the class level to specify that all of the methods in the class should be exported as entry points in the resulting TorchScript module. For example:
@torch.jit.export
class MyModel(nn.Module):
def update(self):
# Update some params.
def predict(self, X):
# Predict with some input tensor.
In this code, the @torch.jit.export decorator is applied to the MyModel class itself, which tells the torch.jit.script function to export all of the methods in the MyModel class as entry points in the resulting TorchScript module.
You can then export the MyModel class to TorchScript using the following code:
model = MyModel()
traced_model = torch.jit.script(model)
The resulting TorchScript module will have two entry points, update and predict, which you can use to call the corresponding methods of your model.
traced_model.update()
traced_model.predict(X)
| https://stackoverflow.com/questions/74664418/ |
Pytorch lightning logger doesn't work as expected | Good evening,
I am a beginner in Pytorch lightning and I am trying to implement a NN and plot the graph (loss and accuracy) on various sets.
The code is this one
def training_step(self, train_batch, batch_idx):
X, y = train_batch
y_copy = y # Integer y for the accuracy
X = X.type(torch.float32)
y = y.type(torch.float32)
# forward pass
y_pred = self.forward(X).squeeze()
# accuracy
accuracy = Accuracy()
acc = accuracy(y_pred, y_copy)
# compute loss
loss = self.loss_fun(y_pred, y)
self.log_dict({'train_loss': loss, 'train_accuracy': acc}, on_step=False, on_epoch=True, prog_bar=True, logger=True)
return loss
def validation_step(self, validation_batch, batch_idx):
X, y = validation_batch
X = X.type(torch.float32)
# forward pass
y_pred = self.forward(X).squeeze()
# compute metrics
accuracy = Accuracy()
acc = accuracy(y_pred, y)
loss = self.loss_fun(y_pred, y)
self.log_dict({'validation_loss': loss, 'validation_accuracy': acc}, on_step=False, on_epoch=True, prog_bar=True, logger=True)
return loss
def test_step(self, test_batch, batch_idx):
X, y = test_batch
X = X.type(torch.float32)
# forward pass
y_pred = self.forward(X).squeeze()
# compute metrics
accuracy = Accuracy()
acc = accuracy(y_pred, y)
loss = self.loss_fun(y_pred, y)
self.log_dict({'test_loss': loss, 'test_accuracy': acc}, on_epoch=True, prog_bar=True, logger=True)
return loss
After training the NN, I run this peace of code:
metrics = pd.read_csv(f"{trainer.logger.log_dir}/metrics.csv")
del metrics["step"]
metrics
and I obtain this
It is okay that on the validation set there's only one accuracy and one loss, because I am performing a hold-out CV. On the test set, I noticed that the value test_accuracy=0.97 is the mean of the accuracies over all epochs. With that I can't see the intermediate values (for each epoch), and so I can't plot any curve. It would also be useful when I do cross-validation with KFold.
Why is it taking the mean, and how can I see the intermediate results? For the training_step it works properly; I can't figure out why the logger doesn't produce the same output for the test_step.
Can someone help me, please?
| It looks like the test_step method is logging the metrics with on_epoch=True, which means the logged values are averaged over the entire epoch and only logged once per epoch. To log the metrics at each step instead, set on_step=True and on_epoch=False in the test_step method, like this:
self.log_dict({'test_loss': loss, 'test_accuracy': acc}, on_step=True, on_epoch=False, prog_bar=True, logger=True)
This will log the loss and accuracy at each step in the test_step method, allowing you to see the intermediate values and plot the curve.
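If you want both views at once, a sketch (mirroring the test_step from the question) is to log with on_step=True and on_epoch=True, so the CSV contains each batch value plus the epoch aggregate:
def test_step(self, test_batch, batch_idx):
    X, y = test_batch
    X = X.type(torch.float32)
    y_pred = self.forward(X).squeeze()
    acc = Accuracy()(y_pred, y)
    loss = self.loss_fun(y_pred, y)
    # on_step=True logs every batch; on_epoch=True additionally logs the averaged epoch value
    self.log_dict({'test_loss': loss, 'test_accuracy': acc},
                  on_step=True, on_epoch=True, prog_bar=True, logger=True)
    return loss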
| https://stackoverflow.com/questions/74669022/ |
PyTorch doesn't import after installing Anaconda | I just installed PyTorch after installing Anaconda, and when I run iPython using Anaconda Python, it won't find PyTorch.
However I can verify PyTorch is in the pkgs directory of my anaconda folder.
What's going on?
| While the framework name is Pytorch, it must be imported using the torch namespace like so import torch. If you look at the official documentation here, you will see any pytorch related module import like nn or autograd is done through the torch namespace. I admit it is a bit confusing but I guess it was done to instill a sense of familiarity for old torch7 users. Also, there is a very good introduction tutorial which you could probably use to familiarize yourself better with the ins and outs of pytorch found here. Hope this helps!
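A quick check you can run in IPython to confirm the install:
import torch           # the package is imported as "torch", not "pytorch"
import torch.nn as nn  # submodules also live under the torch namespace
print(torch.__version__)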
| https://stackoverflow.com/questions/41818618/ |
Explanation behind actor-critic algorithm in pytorch example? | Pytorch provides a good example of using actor-critic to play Cartpole in the OpenAI gym environment.
I'm confused about several of their equations in the code snippet found at https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py#L67-L79:
saved_actions = model.saved_actions
value_loss = 0
rewards = []
for r in model.rewards[::-1]:
R = r + args.gamma * R
rewards.insert(0, R)
rewards = torch.Tensor(rewards)
rewards = (rewards - rewards.mean()) / (rewards.std() + np.finfo(np.float32).eps)
for (action, value), r in zip(saved_actions, rewards):
action.reinforce(r - value.data.squeeze())
value_loss += F.smooth_l1_loss(value, Variable(torch.Tensor([r])))
optimizer.zero_grad()
final_nodes = [value_loss] + list(map(lambda p: p.action, saved_actions))
gradients = [torch.ones(1)] + [None] * len(saved_actions)
autograd.backward(final_nodes, gradients)
optimizer.step()
What do r and value mean in this case? Why do they run REINFORCE on the action space with the reward equal to r - value? And why do they try to set the value so that it matches r?
Thanks for your help!
|
First the rewards are collected for a while, along with the state:action pairs that produced them
Then r - value is the difference between the actual (discounted) return and the value the critic expected
That difference is used to adjust the expected value of that action from that state
So if in state "middle", the expected reward for action "jump" was 10 and the actual reward was only 2, then the AI was off by -8 (2-10). Reinforce means "adjust expectations". So if we adjust them by half, the new expected reward is 10-(8 *.5), or 6, meaning the AI really thought it would get 10 for that, but now it's less confident and thinks 6 is a better guess. So if the AI is not off by much, 10 - ( 2 *.5) = 9, it will adjust by a smaller amount.
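A small numeric sketch of the same idea (the numbers and the log-probability are made up for illustration; the Tensor.reinforce call used in that old example was later removed from PyTorch, so the policy term is written here as -log_prob * advantage, which is the equivalent formulation; in the real example the value comes from the critic network and the advantage is detached for the policy term):
import torch
import torch.nn.functional as F
r = torch.tensor([2.0])         # actual (discounted) return that was observed
value = torch.tensor([10.0])    # critic's expected return for this state
log_prob = torch.tensor([-0.5], requires_grad=True)  # log pi(action|state), hypothetical value
advantage = r - value                                 # -8: reality was worse than expected
policy_loss = -(log_prob * advantage).sum()           # REINFORCE with the advantage as the "reward"
value_loss = F.smooth_l1_loss(value, r)               # pull the critic's estimate toward r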
| https://stackoverflow.com/questions/42636323/ |
Pytorch and Polynomial Linear Regression issue | I have modified the code that I found on the Pytorch github to suit my data, but my loss results are huge and with each iteration they get bigger and later become nan. The code doesn't give me any errors, just no loss results and no predictions.
I have other code that deals with simple linear regression and it all works fine. I guess I'm missing something simple here, but I'm unable to see it. Any help would be appreciated.
Code:
import sklearn.linear_model as lm
from sklearn.preprocessing import PolynomialFeatures
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable
train_data = torch.Tensor([
[40, 6, 4],
[44, 10, 4],
[46, 12, 5],
[48, 14, 7],
[52, 16, 9],
[58, 18, 12],
[60, 22, 14],
[68, 24, 20],
[74, 26, 21],
[80, 32, 24]])
test_data = torch.Tensor([
[6, 4],
[10, 5],
[4, 8]])
x_train = train_data[:,1:3]
y_train = train_data[:,0]
POLY_DEGREE = 3
input_size = 2
output_size = 1
poly = PolynomialFeatures(input_size * POLY_DEGREE, include_bias=False)
x_train_poly = poly.fit_transform(x_train.numpy())
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc = torch.nn.Linear(poly.n_output_features_, output_size)
def forward(self, x):
return self.fc(x)
model = Model()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
losses = []
for i in range(10):
optimizer.zero_grad()
outputs = model(Variable(torch.Tensor(x_train_poly)))
print(outputs)
loss = criterion(outputs, Variable(y_train))
print(loss.data[0])
losses.append(loss.data[0])
loss.backward()
optimizer.step()
if loss.data[0] < 1e-4:
break
print('n_iter', i)
print(loss.data[0])
plt.plot(losses)
plt.show()
output:
[393494300459008.0, inf, inf, inf, nan, nan, nan, nan, nan, nan]
n_iter
9 nan
| There are a couple of things that contribute to the problem. Changing some or all of them will give you reasonable results and make learning possible.
Some of your (polynomial) features have a huge variance and are taking on very large values. Check out np.max(x_train_poly). When your weight matrix is randomly initialised, this causes the initial predictions to be largely off, and the loss to approach infinity quickly. To counteract this, you may want to standardise your features first (i.e. make mean 0 and variance 1 for each feature). Note that in very deep networks a similar idea is used, called "Batch Normalization". If you're interested, you can read more here: https://arxiv.org/abs/1502.03167 You can do the following to fix your example:
means = np.mean(x_train_poly,axis=0,keepdims=True)
std = np.std(x_train_poly,axis=0,keepdims=True)
x_train_poly = (x_train_poly - means) / std
Your current model, doesn't have any hidden layers, which is sort of the point of a neural network and building a non-linear regressor/ classifier. What you're doing right now is applying a linear transformation to the 27 input features to get something that is close to the output. You could add an additional layer like this:
hidden_dim = 50
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer1 = torch.nn.Linear(poly.n_output_features_, hidden_dim)
self.layer2 = torch.nn.Linear(hidden_dim, output_size)
def forward(self, x):
return self.layer2(torch.nn.ReLU()(self.layer1(x)))
Note that I have added a non-linearity after the first linear transformation, because else there's no point having multiple layers.
The problem of initial predictions that are greatly off in the beginning and lead to the loss approaching infinity. You're using squared loss which essentially doubles the order of magnitude of your initial "mistake" in the loss function. And once the loss is infinity, you'll be unable to escape, because the gradient updates are essentially also infinity as you're using squared loss. An easy fix that is sometimes useful is to use the smooth L1 loss instead. Essentially MSE on the interval [0, 1] and L1 loss outside that interval. Change the following:
criterion = torch.nn.SmoothL1Loss()
That already gets you to something sensible (i.e. no infs anymore), but now consider tuning the learning rate and introducing weight_decay. You may also want to change the optimizer. Some suggestions that work alright:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.1)
| https://stackoverflow.com/questions/42795226/ |
undefined symbol: PySlice_AdjustIndices when importing PyTorch | I am trying to use PyTorch, and I think there is some version of something that isn't lining up.
From what little I can suss out, it seems that there are some functions in the newest version of PyTorch (?) that can't be accessed on my system. I suspect it has something to do with Python version 3.6.1 as opposed to 3.6.0. But I can't figure it out. If anyone has any advice on what I can do to rectify this error:
/home/ubuntu/nbs/torch_utils.py in <module>()
----> 1 import torch
2 import torch.nn as nn
3 import torch.nn.parallel
4 import torch.utils.data
5 from torch import optim
/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/__init__.py in <module>()
51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
52
---> 53 from torch._C import *
54
55 __all__ += [name for name in dir(_C)
ImportError: /home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: PySlice_AdjustIndices
Here's the details on my system:
I am running Ubuntu on AWS, Ubuntu 16.04.2 LTS
my Cuda info is Cuda compilation tools, release 8.0, V8.0.61
I have Anaconda, conda 4.3.15
My python version is Python 3.6.0 :: Anaconda custom (64-bit)
Thanks.
| I have the same problem, maybe the build is broken for Ubuntu / Python 3.6.
Anyway, until they fix this problem, you can install PyTorch by downgrading one version:
conda install pytorch=0.1.10 torchvision -c soumith
This version runs just fine on all my tests.
| https://stackoverflow.com/questions/43104252/ |
Pytorch .backward() method without CUDA | I am trying to run the code in the Pytorch tutorial on the autograd module. However, when I run the .backwards() call, I get the error:
cuda runtime error (38) : no CUDA-capable device is detected at torch/csrc/autograd/engine.cpp:359
I admittedly have no CUDA-capable device set up at the moment, but it was my understanding that this wasn't strictly necessary (at least I didn't find it specified anywhere in the tutorial). So I was wondering if there is a way to still run the code without a CUDA-enabled GPU.
| You should transfer you network, inputs, and labels onto the cpu using: net.cpu(), Variable(inputs.cpu()), Variable(labels.cpu())
| https://stackoverflow.com/questions/43367075/ |
about torch.nn.CrossEntropyLoss parameter shape | I'm learning PyTorch and working through the ANPR project, which is based on TensorFlow
(https://github.com/matthewearl/deep-anpr,
http://matthewearl.github.io/2016/05/06/cnn-anpr/)
as an exercise, porting it to the PyTorch platform.
There is a problem: I'm using nn.CrossEntropyLoss() as the loss function:
criterion=nn.CrossEntropyLoss()
the output.data of model is:
- 1.00000e-02 *
- 2.5552 2.7582 2.5368 ... 5.6184 1.2288 -0.0076
- 0.7033 1.3167 -1.0966 ... 4.7249 1.3217 1.8367
- 0.7592 1.4777 1.8095 ... 0.8733 1.2417 1.1521
- 0.1040 -0.7054 -3.4862 ... 4.7703 2.9595 1.4263
- [torch.FloatTensor of size 4x253]
and targets.data is:
- 1 0 0 ... 0 0 0
- 1 0 0 ... 0 0 0
- 1 0 0 ... 0 0 0
- 1 0 0 ... 0 0 0
- [torch.DoubleTensor of size 4x253]
when i call:
loss=criterion(output,targets)
error occured,information is:
TypeError: FloatClassNLLCriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, **torch.DoubleTensor**, torch.FloatTensor, bool, NoneType, torch.FloatTensor), but expected (int state, torch.FloatTensor input, **torch.LongTensor** target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)
'expected torch.LongTensor' ... 'got torch.DoubleTensor', but if I convert the targets into LongTensor:
torch.LongTensor(numpy.array(targets.data.numpy(),numpy.long))
call loss=criterion(output,targets), the error is:
RuntimeError: multi-target not supported at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.10_1488752595704/work/torch/lib/THNN/generic/ClassNLLCriterion.c:20
my last exercise is mnist, a example from pytorch,i made a bit modification,batch_size is 4,the loss function:
loss = F.nll_loss(outputs, labels)
outputs.data:
- -2.3220 -2.1229 -2.3395 -2.3391 -2.5270 -2.3269 -2.1055 -2.2321 -2.4943 -2.2996
-2.3653 -2.2034 -2.4437 -2.2708 -2.5114 -2.3286 -2.1921 -2.1771 -2.3343 -2.2533
-2.2809 -2.2119 -2.3872 -2.2190 -2.4610 -2.2946 -2.2053 -2.3192 -2.3674 -2.3100
-2.3715 -2.1455 -2.4199 -2.4177 -2.4565 -2.2812 -2.2467 -2.1144 -2.3321 -2.3009
[torch.FloatTensor of size 4x10]
labels.data:
- 8
- 6
- 0
- 1
- [torch.LongTensor of size 4]
The label for an input image must be a single element; in the example above there are 253 numbers, while in 'mnist' there is only one number, so the shape of the outputs differs from that of the labels.
I reviewed the TensorFlow manual for tf.nn.softmax_cross_entropy_with_logits:
'Logits and labels must have the same shape [batch_size, num_classes] and the same dtype (either float32 or float64).'
Does PyTorch support an equivalent of this TensorFlow function?
Many thanks
| You can convert the targets that you have to a categorical representation.
In the example that you provide, you would have 1 0 0 0.. 0 if the class is 0, 0 1 0 0 ... if the class is 1, 0 0 1 0 0 0... if the class is 2 etc.
One quick way that I can think of is first convert the target Tensor to a numpy array, then convert it from one hot to a categorical array, and convert it back to a pytorch Tensor. Something like this:
targetnp=targets.numpy()
idxs=np.where(targetnp>0)[1]
new_targets=torch.LongTensor(idxs)
loss=criterion(output,new_targets)
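With current PyTorch you can do the same conversion in one step (a sketch, assuming output, targets and criterion as in the question): take the argmax of the one-hot targets and pass the raw, pre-softmax outputs to nn.CrossEntropyLoss, which expects class indices of shape (batch,):
# output : FloatTensor of shape (batch, num_classes), raw scores (no softmax needed;
#          CrossEntropyLoss applies log-softmax internally)
# targets: one-hot tensor of shape (batch, num_classes)
class_indices = targets.argmax(dim=1)   # LongTensor of shape (batch,)
loss = criterion(output, class_indices)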
| https://stackoverflow.com/questions/43406693/ |
Trying to load a custom dataset in Pytorch | I'm just starting out with PyTorch and am, unfortunately, a bit confused when it comes to using my own training/testing image dataset for a custom algorithm. For starters, I am making a small "hello world"-esque convolutional shirt/sock/pants classifying network. I've only loaded a few images and am just making sure that PyTorch can load them and transform them down properly to 32x32 usable images. My ImageFolder is set up like so:
imgs/socks/sockimages.jpeg
imgs/pants/pantsimages.jpeg
imgs/shirt/shirtimages.jpeg
and a similar setup for my testing images folder. According to my current knowledge, the image loader built into PyTorch should read the labels from the subfolder names within the training/test images. However, I'm getting a TypeError complaining that my iterator is not iterable. Here's my code and the error:
import torch
import torchvision
import torchvision.datasets as dset
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Scale((32,32)),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = dset.ImageFolder(root="imgs",transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,shuffle=True, num_workers=2)
testset = dset.ImageFolder(root='tests',transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,shuffle=True, num_workers=2)
classes=('shirt','pants','sock')
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
Error:
TypeError: 'builtin_function_or_method' object is not iterable
It says that it is in reference to the line containing dataiter.next(), meaning that the compiler believes that I cannot iterate dataiter?
Please help! Thanks in advance,
-David Sillman, new to PyTorch
| I think the error occurs because in transforms.Compose you apply .ToTensor() first, when you should apply .Scale() first. PyTorch has transformations that operate on tensors and transformations that operate on PIL Images, and they are not interchangeable.
Reading the docs it says
class torchvision.transforms.Scale(size, interpolation=2) [...]
Rescale the input PIL.Image to the given size.
Whereas you are converting the image to a PyTorch tensor before scaling it, which makes it crash.
It should be changed to:
transform = transforms.Compose(
[transforms.Scale((32,32)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
You will get this error when applying PIL Image transformations on tensors.
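As a side note on the labels part of the question: ImageFolder does infer the classes from the subfolder names, and you can inspect the mapping it built:
trainset = dset.ImageFolder(root="imgs", transform=transform)
print(trainset.classes)        # e.g. ['pants', 'shirt', 'socks'] (sorted subfolder names)
print(trainset.class_to_idx)   # e.g. {'pants': 0, 'shirt': 1, 'socks': 2}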
| https://stackoverflow.com/questions/43441673/ |
LSTM time sequence generation using PyTorch | For several days now, I am trying to build a simple sine-wave sequence generation using LSTM, without any glimpse of success so far.
I started from the time sequence prediction example
All what I wanted to do differently is:
Use different optimizers (e.g RMSprob) than LBFGS
Try different signals (more sine-wave components)
This is the link to my code. "experiment.py" is the main file
What I do is:
I generate artificial time-series data (sine waves)
I cut those time-series data into small sequences
The input to my model is a sequence of time 0...T, and the output is a sequence of time 1...T+1
What happens is:
The training and the validation losses goes down smoothly
The test loss is very low
However, when I try to generate arbitrary-length sequences, starting from a seed (a random sequence from the test data), everything goes wrong. The output always flattens out
I simply don't see what the problem is. I am playing with this for a week now, with no progress in sight.
I would be very grateful for any help.
Thank you
| This is normal behaviour and happens because your network is too confident of the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable as it was in the training or validation case where it got the true input.
I have two possible solutions for you:
The first is the simplest but less intuitive one: Add a little bit of Gaussian noise to your input. This will force the network to rely more on its hidden state.
The second is the most obvious solution: during training, feed it not the true input but its generated output with a certain probability p. Start out training with p=0 and gradually increase it so that it learns to generate longer and longer sequences independently. This is called scheduled sampling, and you can read more about it here: https://arxiv.org/abs/1506.03099 .
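A minimal sketch of that second idea inside a training step, assuming a model with a step-wise interface out, hidden = model(inp, hidden) that accepts hidden=None on the first step (your actual model may need a different call signature):
import random
import torch
def sequence_loss(model, seq, p):
    """seq: tensor of shape (T, 1); p: probability of feeding the model its own output."""
    criterion = torch.nn.MSELoss()
    hidden, loss = None, 0.0
    inp = seq[0].view(1, 1)
    for t in range(seq.size(0) - 1):
        out, hidden = model(inp, hidden)
        loss = loss + criterion(out, seq[t + 1].view(1, 1))
        # scheduled sampling: sometimes use the prediction, sometimes the ground truth
        inp = out.detach().view(1, 1) if random.random() < p else seq[t + 1].view(1, 1)
    return loss
Start the epoch loop with p = 0 and raise it gradually (for example a little every few epochs) so the network slowly learns to run on its own predictions.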
| https://stackoverflow.com/questions/43459013/ |
Default dilation value in PyTorch | As given in the documentation of PyTorch, the layer Conv2d uses a default dilation of 1. Does this mean that if I want to create a simple conv2d layer I would have to write
nn.Conv2d(in_channels = 3, out_channels = 64, kernel_size = 3, dilation = 0)
instead of simply writing
nn.Conv2d(in_channels = 3, out_channels = 64, kernel_size = 3)
Or is it the case that in PyTorch dilation = 1 means same as dilation = 0 as given here in the Dilated Convolution section?
| From the calculation of H_out and W_out in the PyTorch documentation, we can see that dilation=n means each kernel element (1x1) is expanded into an nxn block, with the original kernel element at the top-left and the remaining positions empty (filled with 0).
Thus dilation=1 is equivalent to the standard convolution with no dilation.
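A quick sanity check (sketch): the default and an explicit dilation=1 produce identical output shapes, confirming that dilation=1 is just the ordinary convolution:
import torch
import torch.nn as nn
x = torch.randn(1, 3, 32, 32)
conv_default = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)               # dilation defaults to 1
conv_explicit = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, dilation=1)
print(conv_default(x).shape)   # torch.Size([1, 64, 30, 30])
print(conv_explicit(x).shape)  # torch.Size([1, 64, 30, 30])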
| https://stackoverflow.com/questions/43474072/ |
How to train a reverse embedding, like vec2word? | how do you train a neural network to map from a vector representation to one-hot vectors? The example I'm interested in is where the vector representation is the output of a word2vec embedding, and I'd like to map onto the individual words which were in the language used to train the embedding, so I guess this is vec2word?
In a bit more detail; if I understand correctly, a cluster of points in embedded space represents similar words. Thus if you sample from points in that cluster, and use it as the input to vec2word, the output should be a mapping to similar individual words?
I guess I could do something similar to an encoder-decoder, but does it have to be that complicated/use so many parameters?
There's this TensorFlow tutorial, how to train word2vec, but I can't find any help to do the reverse? I'm happy to do it using any deeplearning library, and it's OK to do it using sampling/probabilistic.
Thanks a lot for your help, Ajay.
| One easiest thing that you can do is to use the nearest neighbor word. Given a query feature of an unknown word fq, and a reference feature set of known words R={fr}, then you can find out what is the nearest fr* for fq, and use the corresponding fr* word as fq's word.
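A sketch of that lookup in numpy, assuming embeddings is an (N, d) matrix of known word vectors and words is the matching list of N words:
import numpy as np
def vec2word(query_vec, embeddings, words):
    # cosine similarity between the query vector and every known word vector
    emb_norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q_norm = query_vec / np.linalg.norm(query_vec)
    sims = emb_norm @ q_norm
    return words[int(np.argmax(sims))]   # the word whose embedding is closest to the query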
| https://stackoverflow.com/questions/43515400/ |
multi-variable linear regression with pytorch | I'm working on a linear regression problem with Pytorch.
I've had success with the single variable case, however when I perform multi-variable linear regression I get the following error. How should I perform linear regression with multiple variables?
TypeError Traceback (most recent call
last) in ()
9 optimizer.zero_grad() #gradient
10 outputs = model(inputs) #output
---> 11 loss = criterion(outputs,targets) #loss function
12 loss.backward() #backward propogation
13 optimizer.step() #1-step optimization(gradeint descent)
/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py
in call(self, *input, **kwargs)
204
205 def call(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/loss.py
in forward(self, input, target)
22 _assert_no_grad(target)
23 backend_fn = getattr(self._backend, type(self).name)
---> 24 return backend_fn(self.size_average)(input, target)
25
26
/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py
in forward(self, input, target)
39 output = input.new(1)
40 getattr(self._backend, update_output.name)(self._backend.library_state, input, target,
---> 41 output, *self.additional_args)
42 return output
43
TypeError: FloatMSECriterion_updateOutput received an invalid
combination of arguments - got (int, torch.FloatTensor,
torch.DoubleTensor, torch.FloatTensor, bool), but expected (int state,
torch.FloatTensor input, torch.FloatTensor target, torch.FloatTensor
output, bool sizeAverage)
here is code
#import
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable
#input_size = 1
input_size = 3
output_size = 1
num_epochs = 300
learning_rate = 0.002
#Data set
#x_train = np.array([[1.564],[2.11],[3.3],[5.4]], dtype=np.float32)
x_train = np.array([[73.,80.,75.],[93.,88.,93.],[89.,91.,90.],[96.,98.,100.],[73.,63.,70.]],dtype=np.float32)
#y_train = np.array([[8.0],[19.0],[25.0],[34.45]], dtype= np.float32)
y_train = np.array([[152.],[185.],[180.],[196.],[142.]])
print('x_train:\n',x_train)
print('y_train:\n',y_train)
class LinearRegression(nn.Module):
def __init__(self,input_size,output_size):
super(LinearRegression,self).__init__()
self.linear = nn.Linear(input_size,output_size)
def forward(self,x):
out = self.linear(x) #Forward propogation
return out
model = LinearRegression(input_size,output_size)
#Lost and Optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate)
#train the Model
for epoch in range(num_epochs):
#convert numpy array to torch Variable
inputs = Variable(torch.from_numpy(x_train)) #convert numpy array to torch tensor
#inputs = Variable(torch.Tensor(x_train))
targets = Variable(torch.from_numpy(y_train)) #convert numpy array to torch tensor
#forward+ backward + optimize
optimizer.zero_grad() #gradient
outputs = model(inputs) #output
loss = criterion(outputs,targets) #loss function
loss.backward() #backward propogation
optimizer.step() #1-step optimization(gradeint descent)
if(epoch+1) %5 ==0:
print('epoch [%d/%d], Loss: %.4f' % (epoch +1, num_epochs, loss.data[0]))
predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()
plt.plot(x_train,y_train,'ro',label='Original Data')
plt.plot(x_train,predicted,label='Fitted Line')
plt.legend()
plt.show()
| You need to make sure that the data has the same type. In this case x_train is a 32 bit float while y_train is a Double. You have to use:
y_train = np.array([[152.],[185.],[180.],[196.],[142.]],dtype=np.float32)
| https://stackoverflow.com/questions/43559536/ |
IOError: [Errno 28] No space left on device while installing pytorch | I got the following error when installing PyTorch
pip install http://download.pytorch.org/whl/cu80/torch-0.1.11.post5-cp27-none-linux_x86_64.whl
Collecting torch==0.1.11.post5 from http://download.pytorch.org/whl/cu80/torch-0.1.11.post5-cp27-none-linux_x86_64.whl
Downloading http://download.pytorch.org/whl/cu80/torch-0.1.11.post5-cp27-none-linux_x86_64.whl (475.7MB)
100% |████████████████████████████████| 475.7MB 3.5MB/s
the following error
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 335, in run
wb.build(autobuilding=True)
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 821, in unpack_url
hashes=hashes
File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 663, in unpack_http_url
unpack_file(from_path, location, content_type, link)
File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 599, in unpack_file
flatten=not filename.endswith('.whl')
File "/usr/local/lib/python2.7/dist-packages/pip/utils/__init__.py", line 501, in unzip_file
fp.write(data)
IOError: [Errno 28] No space left on device
What's wrong? I don't understand the error coming from /python2.7/dist-packages/pip/
| In my case the system /tmp ran out of space, hence the error despite the installation path having enough space left.
export TMPDIR=/where/you/have/enough/space/left solves the problem for me.
| https://stackoverflow.com/questions/43585263/ |
How can I uninstall PyTorch with Anaconda? | I installed PyTorch with:
conda install pytorch torchvision cuda80 -c soumith
How do I uninstall and remove all PyTorch dependencies?
| From the anaconda docs, you can uninstall with conda uninstall
Try
conda uninstall pytorch torchvision cuda80 -c soumith
Alternatively, the pytorch docs suggest
conda uninstall pytorch
pip uninstall torch
pip uninstall torch # run this command twice
| https://stackoverflow.com/questions/43664444/ |
Numpy outperforms Tensorflow and Pytorch with same hyperparameters | I have made 3 neural networks for Joel's FizzBuzz implementation, one each in NumPy, TensorFlow and PyTorch. With the same hyperparameters and 1k epochs, my NumPy script converges to 0.002 loss, yet my PyTorch and TensorFlow versions are still jumping around 0.6. Could someone please help me figure out what's happening? I don't think Google and [Facebook + Nvidia] made something dumber than NumPy just for a GPU boost. My code is below
Numpy
import numpy as np
input_size = 10
epochs = 1000
batches = 64
lr = 0.01
def sig(val):
return 1 / (1 + np.exp(-val))
def sig_d(val):
sig_val = sig(val)
return sig_val * (1 - sig_val)
def binary_enc(num):
ret = [int(i) for i in '{0:b}'.format(num)]
return [0] * (input_size - len(ret)) + ret
def binary_dec(array):
ret = 0
for i in array:
ret = ret * 2 + int(i)
return ret
def training_test_gen(x, y):
assert len(x) == len(y)
indices = np.random.permutation(range(len(x)))
split_size = int(0.9 * len(indices))
trX = x[indices[:split_size]]
trY = y[indices[:split_size]]
teX = x[indices[split_size:]]
teY = y[indices[split_size:]]
return trX, trY, teX, teY
def x_y_gen():
x = []
y = []
for i in range(1000):
x.append(binary_enc(i))
if i % 15 == 0:
y.append([1, 0, 0, 0])
elif i % 5 == 0:
y.append([0, 1, 0, 0])
elif i % 3 == 0:
y.append([0, 0, 1, 0])
else:
y.append([0, 0, 0, 1])
return training_test_gen(np.array(x), np.array(y))
def check_fizbuz(i):
if i % 15 == 0:
return 'fizbuz'
elif i % 5 == 0:
return 'buz'
elif i % 3 == 0:
return 'fiz'
else:
return 'number'
trX, trY, teX, teY = x_y_gen()
w1 = np.random.randn(10, 100)
w2 = np.random.randn(100, 4)
b1 = np.zeros((1, 100))
b2 = np.zeros((1, 4))
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
for batch in range(no_of_batches):
# forward
start = batch * batches
end = start + batches
x = trX[start:end]
y = trY[start:end]
a2 = x.dot(w1) + b1
h2 = sig(a2)
a3 = h2.dot(w2) + b2
hyp = sig(a3)
error = hyp - y
loss = (error ** 2).mean()
# backward
outerror = error
outgrad = outerror * sig_d(a3)
outdelta = h2.T.dot(outgrad)
outbiasdelta = np.ones([1, batches]).dot(outgrad)
hiddenerror = outerror.dot(w2.T)
hiddengrad = hiddenerror * sig_d(a2)
hiddendelta = x.T.dot(hiddengrad)
hiddenbiasdelta = np.ones([1, batches]).dot(hiddengrad)
w1 -= hiddendelta * lr
b1 -= hiddenbiasdelta * lr
w2 -= outdelta * lr
b2 -= outbiasdelta * lr
print(epoch, loss)
# test
a2 = teX.dot(w1) + b1
h2 = sig(a2)
a3 = h2.dot(w2) + b2
hyp = sig(a3)
outli = ['fizbuz', 'buz', 'fiz', 'number']
for i in range(len(teX)):
num = binary_dec(teX[i])
print(
'Number: {} -- Actual: {} -- Prediction: {}'.format(
num, check_fizbuz(num), outli[hyp[i].argmax()]))
print('Test loss: ', np.mean(teY - hyp))
Pytorch
import numpy as np
import torch as th
from torch.autograd import Variable
input_size = 10
epochs = 1000
batches = 64
lr = 0.01
def binary_enc(num):
ret = [int(i) for i in '{0:b}'.format(num)]
return [0] * (input_size - len(ret)) + ret
def binary_dec(array):
ret = 0
for i in array:
ret = ret * 2 + int(i)
return ret
def training_test_gen(x, y):
assert len(x) == len(y)
indices = np.random.permutation(range(len(x)))
split_size = int(0.9 * len(indices))
trX = x[indices[:split_size]]
trY = y[indices[:split_size]]
teX = x[indices[split_size:]]
teY = y[indices[split_size:]]
return trX, trY, teX, teY
def x_y_gen():
x = []
y = []
for i in range(1000):
x.append(binary_enc(i))
if i % 15 == 0:
y.append([1, 0, 0, 0])
elif i % 5 == 0:
y.append([0, 1, 0, 0])
elif i % 3 == 0:
y.append([0, 0, 1, 0])
else:
y.append([0, 0, 0, 1])
return training_test_gen(np.array(x), np.array(y))
def check_fizbuz(i):
if i % 15 == 0:
return 'fizbuz'
elif i % 5 == 0:
return 'buz'
elif i % 3 == 0:
return 'fiz'
else:
return 'number'
trX, trY, teX, teY = x_y_gen()
if th.cuda.is_available():
dtype = th.cuda.FloatTensor
else:
dtype = th.FloatTensor
x = Variable(th.from_numpy(trX).type(dtype), requires_grad=False)
y = Variable(th.from_numpy(trY).type(dtype), requires_grad=False)
w1 = Variable(th.randn(10, 100).type(dtype), requires_grad=True)
w2 = Variable(th.randn(100, 4).type(dtype), requires_grad=True)
b1 = Variable(th.zeros(1, 100).type(dtype), requires_grad=True)
b2 = Variable(th.zeros(1, 4).type(dtype), requires_grad=True)
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
for batch in range(no_of_batches):
start = batch * batches
end = start + batches
x_ = x[start:end]
y_ = y[start:end]
a2 = x_.mm(w1)
a2 = a2.add(b1.expand_as(a2))
h2 = a2.sigmoid()
a3 = h2.mm(w2)
a3 = a3.add(b2.expand_as(a3))
hyp = a3.sigmoid()
error = hyp - y_
loss = error.pow(2).sum()
loss.backward()
w1.data -= lr * w1.grad.data
w2.data -= lr * w2.grad.data
b1.data -= lr * b1.grad.data
b2.data -= lr * b2.grad.data
w1.grad.data.zero_()
w2.grad.data.zero_()
print(epoch, error.mean().data[0])
TensorFlow
import tensorflow as tf
import numpy as np
input_size = 10
epochs = 1000
batches = 64
learning_rate = 0.01
def binary_enc(num):
ret = [int(i) for i in '{0:b}'.format(num)]
return [0] * (input_size - len(ret)) + ret
def binary_dec(array):
ret = 0
for i in array:
ret = ret * 2 + int(i)
return ret
def training_test_gen(x, y):
assert len(x) == len(y)
indices = np.random.permutation(range(len(x)))
split_size = int(0.9 * len(indices))
trX = x[indices[:split_size]]
trY = y[indices[:split_size]]
teX = x[indices[split_size:]]
teY = y[indices[split_size:]]
return trX, trY, teX, teY
def x_y_gen():
x = []
y = []
for i in range(1000):
x.append(binary_enc(i))
if i % 15 == 0:
y.append([1, 0, 0, 0])
elif i % 5 == 0:
y.append([0, 1, 0, 0])
elif i % 3 == 0:
y.append([0, 0, 1, 0])
else:
y.append([0, 0, 0, 1])
return training_test_gen(np.array(x), np.array(y))
def check_fizbuz(i):
if i % 15 == 0:
return 'fizbuz'
elif i % 5 == 0:
return 'buz'
elif i % 3 == 0:
return 'fiz'
else:
return 'number'
trX, trY, teX, teY = x_y_gen()
x = tf.placeholder(tf.float32, [None, 10], name='x')
y = tf.placeholder(tf.float32, [None, 4], name='y')
lr = tf.placeholder(tf.float32, [], name='lr')
w1 = tf.Variable(tf.truncated_normal([10, 100]))
w2 = tf.Variable(tf.truncated_normal([100, 4]))
b1 = tf.Variable(tf.zeros(100))
b2 = tf.Variable(tf.zeros(4))
a2 = tf.sigmoid(tf.add(tf.matmul(x, w1), b1))
hyp = tf.sigmoid(tf.add(tf.matmul(a2, w2), b2))
cost = tf.reduce_mean(tf.square(hyp - y))
optmizer = tf.train.GradientDescentOptimizer(lr).minimize(cost)
prediction = tf.argmax(hyp, 1)
no_of_batches = int(len(trX) / batches)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(epochs):
p = np.random.permutation(range(len(trX)))
trX = trX[p]
trY = trY[p]
for batch in range(no_of_batches):
start = batch * batches
end = start + batches
input_batch = trX[start: end]
target_batch = trY[start: end]
sess.run(
optmizer, feed_dict={x: input_batch, y: target_batch, lr: learning_rate})
if epoch % 100 == 0:
a = np.argmax(teY, axis=1)
b = sess.run(prediction, feed_dict={x: teX})
acc = np.mean(a == b)
out_cost = sess.run(
cost, feed_dict={x: input_batch, y: target_batch, lr: learning_rate})
print('cost - {} --- accuracy - {}'.format(out_cost.mean(), acc))
| Can't say I've checked all your code (that's too much work), but I noticed that you are not zeroing your bias gradients in PyTorch, so these gradients will keep growing, killing the optimization algorithm. (There may be other problems)
The more idiomatic way to write this would be:
optimizer = torch.optim.SGD(net.parameters(), lr)
for _ in range(steps):
optimizer.zero_grad()
# ...
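If you want to keep the manual update loop from the question instead, the direct fix is to zero the bias gradients as well after each step (a sketch using the variable names from the question):
w1.grad.data.zero_()
w2.grad.data.zero_()
b1.grad.data.zero_()   # these two were missing in the PyTorch version
b2.grad.data.zero_()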
As a general note, it shouldn't matter which framework you use here: numbers are numbers: if you feed the same batches, and initialize the weights the same way, you should get approximately the same gradients.
Edit
Here's another big difference: in NP, you are calculating the mean error (across the batch and output dimensions), while in PT, you sum them. That's a constant-factor difference of batch_size x output_size (64 x 4 = 256 here) that will affect both the loss and the gradient.
| https://stackoverflow.com/questions/43709859/ |
Standard deviation in batches | I am using this function to compute the standard deviation of my output. I am evaluating my model on 2000 examples, in batches of 4. Can I just take the standard deviation per batch and then divide by 2000 ?
| No, you can't. Have a look at this post where batch updates of the mean and the standard deviation are explained in a nice way, with both math and code.
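A sketch of the correct way to combine per-batch results: accumulate the count, the sum and the sum of squares over all batches, then compute the overall mean and standard deviation at the end (batch_outputs is a placeholder for however you iterate over your 2000 outputs):
import numpy as np
n, s, s2 = 0, 0.0, 0.0
for batch in batch_outputs:                # each batch: an array of model outputs
    batch = np.asarray(batch, dtype=np.float64)
    n += batch.size
    s += batch.sum()                       # running sum
    s2 += (batch ** 2).sum()               # running sum of squares
mean = s / n
std = np.sqrt(s2 / n - mean ** 2)          # std over all examples, not an average of per-batch stds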
| https://stackoverflow.com/questions/43716525/ |
How to set different learning rate for different layer in pytorch? | I'm doing fine-tuning with pytorch using resnet50 and want to set the learning rate of the last fully connected layer to 10^-3 while the learning rate of other layers be set to 10^-6. I know that I can just follow the method in its document:
optim.SGD([{'params': model.base.parameters()},
{'params': model.classifier.parameters(), 'lr': 1e-3}],
lr=1e-2, momentum=0.9)
But is there anyway that I do not need to set the parameters layer by layer
| You can group layers. If you want to group all linear layers, the best way to do it is to iterate over modules():
param_grp = []
for idx, m in enumerate(model.modules()):
if isinstance(m, nn.Linear):
param_grp.append(m.weight)
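You then hand the groups to the optimizer with their own learning rates. For the ResNet-50 case from the question, a sketch that splits the parameters by name (the torchvision ResNet-50 calls its final layer fc) avoids listing layers one by one:
fc_params = [p for n, p in model.named_parameters() if n.startswith('fc.')]
base_params = [p for n, p in model.named_parameters() if not n.startswith('fc.')]
optimizer = torch.optim.SGD([
    {'params': base_params, 'lr': 1e-6},   # everything except the last fully connected layer
    {'params': fc_params, 'lr': 1e-3},     # the last fully connected layer
], momentum=0.9)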
| https://stackoverflow.com/questions/43818246/ |
PyTorch network produces constant output | I am trying to train a simple MLP to approximate y=f(a,b,c).
My code is as below.
import torch
import torch.nn as nn
from torch.autograd import Variable
# hyper parameters
input_size = 3
output_size = 1
num_epochs = 50
learning_rate = 0.001
# Network definition
class FeedForwardNet(nn.Module):
def __init__(self, l1_size, l2_size):
super(FeedForwardNet, self).__init__()
self.fc1 = nn.Linear(input_size, l1_size)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(l1_size, l2_size)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(l2_size, output_size)
def forward(self, x):
out = self.fc1(x)
out = self.relu1(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
return out
model = FeedForwardNet(5 , 3)
# sgd optimizer
optimizer = torch.optim.SGD(model.parameters(), learning_rate, momentum=0.9)
for epoch in range(11):
print ('Epoch ', epoch)
for i in range(trainX_light.shape[0]):
X = Variable( torch.from_numpy(trainX_light[i]).view(-1, 3) )
Y = Variable( torch.from_numpy(trainY_light[i]).view(-1, 1) )
# forward
optimizer.zero_grad()
output = model(X)
loss = (Y - output).pow(2).sum()
print (output.data[0,0])
loss.backward()
optimizer.step()
totalnorm = 0
for p in model.parameters():
modulenorm = p.grad.data.norm()
totalnorm += modulenorm ** 2
totalnorm = math.sqrt(totalnorm)
print (totalnorm)
# validation code
if (epoch + 1) % 5 == 0:
print (' test points',testX_light.shape[0])
total_loss = 0
for t in range(testX_light.shape[0]):
X = Variable( torch.from_numpy(testX_light[t]).view(-1, 3) )
Y = Variable( torch.from_numpy(testY_light[t]).view(-1, 1) )
output = model(X)
loss = (Y - output).pow(2).sum()
print (output.data[0,0])
total_loss += loss
print ('epoch ', epoch, 'avg_loss ', total_loss.data[0] / testX_light.shape[0])
print ('Done')
The problem that I have now is, the validation code
output = model(X)
is always producing exactly the same output value (I guess this value is some sort of garbage). I am not sure what mistake I am making in this part. Could someone help me figure out the mistake in my code?
| The reason that network produced random values (and inf later) was the exploding gradient problem. Clipping the gradient (torch.nn.utils.clip_grad_norm(model.parameters(), 0.1)) helped.
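In the training loop, the clipping goes between backward() and the optimizer step (a sketch using the variables from the question):
optimizer.zero_grad()
output = model(X)
loss = (Y - output).pow(2).sum()
loss.backward()
torch.nn.utils.clip_grad_norm(model.parameters(), 0.1)  # rescale gradients before the update
optimizer.step()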
| https://stackoverflow.com/questions/43881353/ |
Training speed on GPU become slower overtime | My model's training speed becomes slower over time. Every epoch takes longer to train.
Here is the full source code with my preprocessed Sentiment Treebank data (put glove.840B.300d.txt into data/glove).
Install some python packages:
pip install meowlogtool
pip install tqdm
Command to run:
python sentiment.py --emblr 0 --rel_dim 0 --tag_dim 0 --optim adagrad --name basic --lr 0.05 --wd 1e-4 --at_hid_dim 0
Model source code for you to read
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable as Var
import utils
import Constants
from model import SentimentModule
from embedding_model import EmbeddingModel
class SimpleGRU(nn.Module):
"""
w[i] : (300, 1)
h[i] : (150, 1)
p[i] : (20, 1)
r[i] : (20, 1)
k[i] : (150, 1)
x[i] : (20 + 150 + 300 + 20 = 490, 1) (490, 1)
Uz, Ur, Uh : (150, 150) => 67500 => (450, 450)
Wz, Wr, Wh : (150, 20 + 150 + 300 + 20) (150, 490)
"""
def __init__(self, cuda, in_dim, hid_dim):
super(SimpleGRU, self).__init__()
self.cudaFlag = cuda
self.Uz = nn.Linear(hid_dim, hid_dim)
self.Ur = nn.Linear(hid_dim, hid_dim)
self.Uh = nn.Linear(hid_dim, hid_dim)
self.Wz = nn.Linear(in_dim, hid_dim)
self.Wr = nn.Linear(in_dim, hid_dim)
self.Wh = nn.Linear(in_dim, hid_dim)
if self.cudaFlag:
self.Uz = self.Uz.cuda()
self.Ur = self.Uz.cuda()
self.Uh = self.Uz.cuda()
self.Wz = self.Wz.cuda()
self.Wr = self.Wr.cuda()
self.Wh = self.Wh.cuda()
def forward(self, x, h_prev):
"""
Simple-GRU(compress_x[v], h[t-1]) :
z[t] := s(Wz *compress_x[t]+ Uz * h[t-1] + bz)
r[t] := s(Wr * compress_x[t] + Ur * h[t-1] + br)
h_temp[t] := g(Wh * compress_x[t] + Uh * h[t-1] + bh)
h[t] := r[t] .* h[t-1] + (1 - z[t]) .* h_temp[t]
return h[t]
:param x: compress_x[t]
:param h_prev: h[t-1]
:return:
"""
z = F.sigmoid(self.Wz(x) + self.Uz(h_prev))
r = F.sigmoid(self.Wr(x) + self.Ur(h_prev))
h_temp = F.tanh(self.Wh(x) + self.Uh(h_prev))
h = r*h_prev + (1-z)*h_temp
return h
class TreeSimpleGRU(nn.Module):
def __init__(self, cuda, word_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, criterion, leaf_h = None):
super(TreeSimpleGRU, self).__init__()
self.cudaFlag = cuda
# self.gru_cell = nn.GRUCell(word_dim + tag_dim, mem_dim)
self.gru_cell = SimpleGRU(self.cudaFlag, word_dim+tag_dim, mem_dim)
self.gru_at = GRU_AT(self.cudaFlag, word_dim + tag_dim + rel_dim + mem_dim, at_hid_dim ,mem_dim)
self.mem_dim = mem_dim
self.in_dim = word_dim
self.tag_dim = tag_dim
self.rel_dim = rel_dim
self.leaf_h = leaf_h # init h for leaf node
if self.leaf_h == None:
self.leaf_h = Var(torch.rand(1, self.mem_dim))
torch.save(self.leaf_h, 'leaf_h.pth')
if self.cudaFlag:
self.leaf_h = self.leaf_h.cuda()
self.criterion = criterion
self.output_module = None
def getParameters(self):
"""
Get flatParameters
note that getParameters and parameters is not equal in this case
getParameters do not get parameters of output module
:return: 1d tensor
"""
params = []
for m in [self.gru_cell, self.gru_at]:
# we do not get param of output module
l = list(m.parameters())
params.extend(l)
one_dim = [p.view(p.numel()) for p in params]
params = F.torch.cat(one_dim)
return params
def set_output_module(self, output_module):
self.output_module = output_module
def forward(self, tree, w_emb, tag_emb, rel_emb, training = False):
loss = Var(torch.zeros(1)) # init zero loss
if self.cudaFlag:
loss = loss.cuda()
for idx in xrange(tree.num_children):
_, child_loss = self.forward(tree.children[idx], w_emb, tag_emb, rel_emb, training)
loss = loss + child_loss
if tree.num_children > 0:
child_rels, child_k = self.get_child_state(tree, rel_emb)
if self.tag_dim > 0:
tree.state = self.node_forward(w_emb[tree.idx - 1], tag_emb[tree.idx -1], child_rels, child_k)
else:
tree.state = self.node_forward(w_emb[tree.idx - 1], None, child_rels, child_k)
elif tree.num_children == 0:
if self.tag_dim > 0:
tree.state = self.leaf_forward(w_emb[tree.idx - 1], tag_emb[tree.idx -1])
else:
tree.state = self.leaf_forward(w_emb[tree.idx - 1], None)
if self.output_module != None:
output = self.output_module.forward(tree.state, training)
tree.output = output
if training and tree.gold_label != None:
target = Var(utils.map_label_to_target_sentiment(tree.gold_label))
if self.cudaFlag:
target = target.cuda()
loss = loss + self.criterion(output, target)
return tree.state, loss
def leaf_forward(self, word_emb, tag_emb):
"""
Forward function for leaf node
:param word_emb: word embedding of current node u
:param tag_emb: tag embedding of current node u
:return: k of current node u
"""
h = self.leaf_h
if self.cudaFlag:
h = h.cuda()
if self.tag_dim > 0:
x = F.torch.cat([word_emb, tag_emb], 1)
else:
x = word_emb
k = self.gru_cell(x, h)
return k
def node_forward(self, word_emb, tag_emb, child_rels, child_k):
"""
Foward function for inner node
:param word_emb: word embedding of current node u
:param tag_emb: tag embedding of current node u
:param child_rels (tensor): rels embedding of child node v
:param child_k (tensor): k of child node v
:return:
"""
n_child = child_k.size(0)
h = Var(torch.zeros(1, self.mem_dim))
if self.cudaFlag:
h = h.cuda()
for i in range(0, n_child):
k = child_k[i]
x_list = [word_emb, k]
if self.rel_dim >0:
rel = child_rels[i]
x_list.append(rel)
if self.tag_dim > 0:
x_list.append(tag_emb)
x = F.torch.cat(x_list, 1)
h = self.gru_at(x, h)
k = h
return k
def get_child_state(self, tree, rels_emb):
"""
Get child rels, get child k
:param tree: tree we need to get child
:param rels_emb (tensor):
:return:
"""
if tree.num_children == 0:
assert False # never get here
else:
child_k = Var(torch.Tensor(tree.num_children, 1, self.mem_dim))
if self.rel_dim>0:
child_rels = Var(torch.Tensor(tree.num_children, 1, self.rel_dim))
else:
child_rels = None
if self.cudaFlag:
child_k = child_k.cuda()
if self.rel_dim > 0:
child_rels = child_rels.cuda()
for idx in xrange(tree.num_children):
child_k[idx] = tree.children[idx].state
if self.rel_dim > 0:
child_rels[idx] = rels_emb[tree.children[idx].idx - 1]
return child_rels, child_k
class AT(nn.Module):
"""
AT(compress_x[v]) := sigmoid(Wa * tanh(Wb * compress_x[v] + bb) + ba)
"""
def __init__(self, cuda, in_dim, hid_dim):
super(AT, self).__init__()
self.cudaFlag = cuda
self.in_dim = in_dim
self.hid_dim = hid_dim
self.Wa = nn.Linear(hid_dim, 1)
self.Wb = nn.Linear(in_dim, hid_dim)
if self.cudaFlag:
self.Wa = self.Wa.cuda()
self.Wb = self.Wb.cuda()
def forward(self, x):
out = F.sigmoid(self.Wa(F.tanh(self.Wb(x))))
return out
class GRU_AT(nn.Module):
def __init__(self, cuda, in_dim, at_hid_dim ,mem_dim):
super(GRU_AT, self).__init__()
self.cudaFlag = cuda
self.in_dim = in_dim
self.mem_dim = mem_dim
self.at_hid_dim = at_hid_dim
if at_hid_dim > 0:
self.at = AT(cuda, in_dim, at_hid_dim)
self.gru_cell = SimpleGRU(self.cudaFlag, in_dim, mem_dim)
if self.cudaFlag:
if at_hid_dim > 0:
self.at = self.at.cuda()
self.gru_cell = self.gru_cell.cuda()
def forward(self, x, h_prev):
"""
:param x:
:param h_prev:
:return: a * m + (1 - a) * h[t-1]
"""
m = self.gru_cell(x, h_prev)
if self.at_hid_dim > 0:
a = self.at.forward(x)
h = torch.mm(a, m) + torch.mm((1-a), h_prev)
else:
h = m
return h
class TreeGRUSentiment(nn.Module):
def __init__(self, cuda, in_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, num_classes, criterion):
super(TreeGRUSentiment, self).__init__()
self.cudaFlag = cuda
self.tree_module = TreeSimpleGRU(cuda, in_dim, tag_dim, rel_dim, mem_dim, at_hid_dim, criterion)
self.output_module = SentimentModule(cuda, mem_dim, num_classes, dropout=True)
self.tree_module.set_output_module(self.output_module)
def get_tree_parameters(self):
return self.tree_module.getParameters()
def forward(self, tree, sent_emb, tag_emb, rel_emb, training = False):
# sent_emb = F.torch.unsqueeze(self.word_embedding.forward(sent_inputs), 1)
# tag_emb = F.torch.unsqueeze(self.tag_emb.forward(tag_inputs), 1)
# rel_emb = F.torch.unsqueeze(self.rel_emb.forward(rel_inputs), 1)
# sent_emb, tag_emb, rel_emb = self.embedding_model(sent_inputs, tag_inputs, rel_inputs)
tree_state, loss = self.tree_module(tree, sent_emb, tag_emb, rel_emb, training)
output = tree.output
return output, loss
|
Why does neural network learning slow down as the error gets lower?
The reasons for the slowdown are not fully understood, but we have some basic ideas.
For classifiers, most training examples start out as incorrectly classified. Over time, more of them become correctly classified. Early in learning, you might have a nearly 100% error rate, so every example in the minibatch contributes to learning. Late in learning, you might have nearly a 0% error rate, so almost none of the examples in the minibatch contribute to learning. This problem can be resolved to some extent by using hard example mining or importance sampling. Both of these are just techniques for training on more difficult examples more often.
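A minimal sketch of in-batch hard example mining for a classifier: compute a per-example loss, keep only the hardest fraction, and backpropagate through those (keep_ratio is an illustrative choice):
import torch
import torch.nn.functional as F
def hard_example_loss(logits, labels, keep_ratio=0.25):
    per_example = F.cross_entropy(logits, labels, reduction='none')  # one loss per example
    k = max(1, int(keep_ratio * per_example.numel()))
    hardest, _ = per_example.topk(k)        # the k largest losses in the batch
    return hardest.mean()                   # only the hard examples drive the update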
There are other more complicated reasons. One of them is that the condition number of the Hessian tends to worsen a lot as learning progresses, so that the optimal step size becomes smaller and smaller.
| https://stackoverflow.com/questions/43906242/ |
How to asynchronously load and train batches to train a DeepLearning model? | I have a 3TB dataset, 64GB RAM, a 12-core CPU and one 12GB GPU. I would like to train a deep learning model on this dataset. How do I load batches asynchronously while training the model? I want to make sure loading data from disk doesn't block the training loop while it waits for the next batch to load into memory.
I am not language-dependent, and the easiest library that can do this without friction wins, but I would prefer one of Torch, PyTorch, or TensorFlow.
| We solved this problem in the way @mo-hossny described above (not "tied to the Imagenet folder structure") with Keras (tensorflow backend) and described it in gory detail here.
A brief summary of that: most ML tutorials show a directory structure where the class of training (and test) examples is implied by the subdirectory. For instance, you might see subdirectories and files like data/train/cats/???.png and data/train/dogs/???.png, etc.
If instead you create a simple Pandas DataFrame to hold the unique id, class label and file path for each train/test sample, then you can shuffle this DataFrame at the start of each epoch, loop over it in mini-batches and use a generator to send each chunk to the GPU. In the background, the CPU is keeping the queue of chunks full, standing by to send each subsequent one to the GPU as soon as it finishes its current batch.
An example of such a DataFrame is:
df
object_id bi multi path
index
0 461756 dog white /path/to/imgs/756/61/blah_461756.png
1 1161756 cat black /path/to/imgs/756/61/blah_1161756.png
2 3303651 dog white /path/to/imgs/651/03/blah_3303651.png
3 3367756 dog grey /path/to/imgs/756/67/blah_3367756.png
4 3767756 dog grey /path/to/imgs/756/67/blah_3767756.png
5 5467756 cat black /path/to/imgs/756/67/blah_5467756.png
6 5561756 dog white /path/to/imgs/756/61/blah_5561756.png
7 31255756 cat grey /path/to/imgs/756/55/blah_31255756.png
8 35903651 cat black /path/to/imgs/651/03/blah_35903651.png
9 44603651 dog black /path/to/imgs/651/03/blah_44603651.png
10 49557622 cat black /path/to/imgs/622/57/blah_49557622.png
11 58164756 dog grey /path/to/imgs/756/64/blah_58164756.png
12 95403651 cat white /path/to/imgs/651/03/blah_95403651.png
13 95555756 dog grey /path/to/imgs/756/55/blah_95555756.png
I've included labels for binomial and multinomial versions of the problem to demonstrate that the same DataFrame and files can be used in different classification settings.
Once you have this going, the Keras generator code is pretty short and sweet:
train_generator = generator_from_df(df, batch_size, target_size)
where df is similar to my example above and the function generator_from_df() is defined here. It simply loops through the df in chunks of a given size; reads, normalizes, and concatenates the pixel data specified in the chunk's rows; and finally yields (hence the generator) the X (pixels) and Y (labels) data. The heart of it is very similar to:
i, j = 0, batch_size
for _ in range(nbatches):
    sub = df.iloc[i:j]
    X = np.array([
        (2 *
         (img_to_array(load_img(f, target_size=target_size))
          / 255.0 - 0.5))
        for f in sub.imgpath])
    Y = sub.target.values
    yield X, Y
    i = j
    j += batch_size
    count += 1
Note the references and code in the post: we aggregated helpful hints from others in the Keras pages and here on Stackoverflow.
| https://stackoverflow.com/questions/43946813/ |
PyTorch + CUDA 7.5 error | I have non-sudo access to a machine with NVIDIA GPUs and CUDA 7.5 installed. I installed PyTorch with CUDA 7.5 support, which seems to have worked:
>>> import torch
>>> torch.cuda.is_available()
True
To get some practice, I followed a tutorial for machine translation using RNNs. When I set USE_CUDA = False and the CPUs are used, everything works quite alright. However, when I want to utilize the GPUs with USE_CUDA = True I get the following error:
Traceback (most recent call last):
...
File "seq2seq.py", line 229, in train
encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)
File "/.../python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "seq2seq.py", line 144, in forward
output, hidden = self.gru(embedded, hidden)
File "/.../python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/.../python2.7/site-packages/torch/nn/modules/rnn.py", line 91, in forward
output, hidden = func(input, self.all_weights, hx)
...
File "/.../python2.7/site-packages/torch/backends/cudnn/rnn.py", line 42, in init_rnn_descriptor
cudnn.DropoutDescriptor(handle, dropout_p, fn.dropout_seed)
File "/usr/lib/python2.7/ctypes/__init__.py", line 383, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python: undefined symbol: cudnnCreateDropoutDescriptor
Exception AttributeError: 'python: undefined symbol: cudnnDestroyDropoutDescriptor' in <bound method DropoutDescriptor.__del__ of <torch.backends.cudnn.DropoutDescriptor object at 0x7fe540efec10>> ignored
I've tried to use Google to search for that error but got no meaningful results. Since I'm rather a newbie with PyTorch and CUDA, I have no idea how to go on from here. The full setup is Ubuntu 14.04, Python 2.7, CUDA 7.5.
| As stated in the comments: your error is with outdated CUDNN, and can be resolved by upgrading.
Install current versions of CUDA, CUDNN, and PyTorch, then you'll be fine.
| https://stackoverflow.com/questions/43961051/ |
Simple LSTM in PyTorch with Sequential module | In PyTorch, we can define architectures in multiple ways. Here, I'd like to create a simple LSTM network using the Sequential module.
In Lua's torch I would usually go with:
model = nn.Sequential()
model:add(nn.SplitTable(1,2))
model:add(nn.Sequencer(nn.LSTM(inputSize, hiddenSize)))
model:add(nn.SelectTable(-1)) -- last step of output sequence
model:add(nn.Linear(hiddenSize, classes_n))
However, in PyTorch, I don't find the equivalent of SelectTable to get the last output.
nn.Sequential(
nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
# what to put here to retrieve last output of LSTM ?,
nn.Linear(hiddenSize, classe_n))
| Define a class to extract the last cell output:
# LSTM() returns tuple of (tensor, (recurrent state))
class extract_tensor(nn.Module):
    def forward(self, x):
        # x is the tuple (output, (h_n, c_n)) returned by nn.LSTM;
        # with batch_first=True, output has shape (batch, seq_len, hidden)
        tensor, _ = x
        # select the last time step -> shape (batch, hidden)
        return tensor[:, -1, :]

nn.Sequential(
    nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
    extract_tensor(),
    nn.Linear(hiddenSize, classe_n)
)
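A quick sanity check of the stack (a hypothetical example assuming inputSize=10, hiddenSize=20 and classe_n=5):
import torch
import torch.nn as nn

inputSize, hiddenSize, classe_n = 10, 20, 5
model = nn.Sequential(
    nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
    extract_tensor(),
    nn.Linear(hiddenSize, classe_n)
)
x = torch.randn(4, 7, inputSize)   # (batch, seq_len, features)
print(model(x).shape)              # torch.Size([4, 5]) - one prediction per sequence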
| https://stackoverflow.com/questions/44130851/ |
Hyperparameter optimization for Pytorch model | What is the best way to perform hyperparameter optimization for a Pytorch model? Implement e.g. Random Search myself? Use Skicit Learn? Or is there anything else I am not aware of?
| Many researchers use RayTune. It's a scalable hyperparameter tuning framework, specifically for deep learning. You can easily use it with any deep learning framework (2 lines of code below), and it provides most state-of-the-art algorithms, including HyperBand, Population-based Training, Bayesian Optimization, and BOHB.
import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test
def train_mnist(config):
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.report(mean_accuracy=acc)

analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
[Disclaimer: I contribute actively to this project!]
| https://stackoverflow.com/questions/44260217/ |
how tf.space_to_depth() works in tensorflow? | I am a pytorch user. I have got a pretrained model in tensorflow and I would like to transfer it into pytorch. In one part of model architecture, I mean in tensorflow-defined model, there is a function tf.space_to_depth which transfers an input size of (None, 38,38,64) to (None, 19,19, 256). (https://www.tensorflow.org/api_docs/python/tf/space_to_depth) is the doc of this function. But I could not understand what this function actually do. Could you please provide some numpy codes to illustrate it for me?
Actually, I would like to make an exactly equivalent layer in pytorch.
Some code in tensorflow reveals another secret.
Here is some code:
import numpy as np
import tensorflow as tf
norm = tf.random_normal([1, 2, 2, 1], mean=0, stddev=1)
trans = tf.space_to_depth(norm,2)
with tf.Session() as s:
norm = s.run(norm)
trans = s.run(trans)
print("Norm")
print(norm.shape)
for index,value in np.ndenumerate(norm):
print(value)
print("Trans")
print(trans.shape)
for index,value in np.ndenumerate(trans):
print(value)
And here is the output:
Norm
(1, 2, 2, 1)
0.695261
0.455764
1.04699
-0.237587
Trans
(1, 1, 1, 4)
1.01139
0.898777
0.210135
2.36742
As you can see above, in addition to data reshaping, the tensor values have changed!
| This tf.space_to_depth divides your input into blocks and concatenates them.
In your example the input is 38x38x64 (and I guess the block_size is 2). So the function divides your input into 4 blocks (of block_size x block_size) and concatenates them, which gives your 19x19x256 output.
You just need to divide each of your input channels into block_size*block_size patches (each patch has a size of width/block_size x height/block_size) and concatenate all of these patches. Should be pretty straightforward with numpy.
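For example, a small numpy sketch of that rearrangement (assuming NHWC layout and block_size=2, like the TensorFlow default; the within-block ordering should match tf.space_to_depth):
import numpy as np

def space_to_depth(x, block_size=2):
    # x has shape (batch, height, width, channels), NHWC as in TensorFlow
    b, h, w, c = x.shape
    x = x.reshape(b, h // block_size, block_size, w // block_size, block_size, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)          # gather each block's pixels together
    return x.reshape(b, h // block_size, w // block_size, c * block_size ** 2)

x = np.random.rand(1, 38, 38, 64)
print(space_to_depth(x).shape)                  # (1, 19, 19, 256)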
Hope it helps.
| https://stackoverflow.com/questions/44264962/ |
Long Sequence In a seq2seq model with attention? | I am following along this pytorch tutorial and trying to apply this principle to summarization, where the encoding sequence would be around 1000 words and decoder target 200 words.
How do I apply seq2seq to this? I know it would be very expensive and almost infeasible to run through the whole sequence of 1000 words at once. So dividing the seq into say 20 seq and running in parallel could be an answer. But I'm not sure how to implement it; I also want to incorporate attention into it.
| You cannot parallelize an RNN in time (1000 steps here) because it is inherently sequential.
You can use a light RNN, something like QRNN or SRU, as a faster alternative (which is still sequential).
Other common sequence-processing modules are TCNs and Transformers, which are both parallelizable in time.
Also, note that all of them can be used with attention and work perfectly fine with text.
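As an illustration of the parallel option, a hedged sketch using the Transformer modules that later PyTorch releases ship (nn.TransformerEncoder did not exist when this question was asked; d_model, nhead and the batch size below are arbitrary choices):
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8)
encoder = nn.TransformerEncoder(layer, num_layers=4)

x = torch.randn(1000, 8, 256)   # (seq_len=1000, batch, d_model)
memory = encoder(x)             # all 1000 positions are attended to in parallel
print(memory.shape)             # torch.Size([1000, 8, 256])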
| https://stackoverflow.com/questions/44351134/ |
Basic multi GPU parallelization of matrix multiplication | I want to parallelize the simple following expression on 2 GPUs: C = A^n + B^n by calculating A^n on GPU 0 and B^n on GPU 1 before summing the results.
In TensorFlow I would go like:
with tf.device('/gpu:0'):
An = matpow(A, n)
with tf.device('/gpu:1'):
Bn = matpow(B, n)
with tf.Session() as sess:
C = sess.run(An + Bn)
However, since PyTorch is dynamic, I'm having trouble doing the same thing. I tried the following but it only takes more time.
with torch.cuda.device(0):
A = A.cuda()
with torch.cuda.device(1):
B = B.cuda()
C = matpow(A, n) + matpow(B, n).cuda(0)
I know there is a module to parallelize models on the batch dimension using torch.nn.DataParallel but here I try to do something more basic.
| You can use cuda streams for this. This will not necessarily distribute it over two devices, but the execution will be in parallel.
s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
with torch.cuda.stream(s1):
    A = torch.pow(A, n)
with torch.cuda.stream(s2):
    B = torch.pow(B, n)
C = A + B
Although I'm not sure whether it will really speed up your computation if you only parallelize this one operation. Your matrices must be really big.
If your requirement is to split it across devices, you can add this before the streams:
A = A.cuda(0)
B = B.cuda(1)
Then after the power operation, you need to get them on the same device again, e.g. B = B.cuda(0). After that you can do the addition.
| https://stackoverflow.com/questions/44371682/ |
OpenNMT issues with Pytorch: cPickle.UnpicklingError: invalid load key, '' | I am trying to run the OpenNMT project using the instruction from the link: http://forum.opennmt.net/t/text-summarization-on-gigaword-and-rouge-scoring/85/6
I am using Python 2.7 and installed pytorch from the github repository.
I am trying to run the program using the prebuild model of the OpenNMT, which I have downloaded from the following: http://opennmt.net/Models/
I tried the command:
python translate.py -model textsum_epoch7_14.69_release.t7 -src data/Giga/input.txt
Got the following error:
Traceback (most recent call last):
File "translate.py", line 151, in <module>
main()
File "translate.py", line 70, in main
translator = onmt.Translator(opt)
File "/home/ubuntu/opennmt/onmt/Translator.py", line 21, in __init__
checkpoint = torch.load(opt.model)
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 229, in load
return _load(f, map_location, pickle_module)
File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 367, in _load
magic_number = pickle_module.load(f)
cPickle.UnpicklingError: invalid load key, ''.
Kindly let me know what I need to do so that I can use the model and check the library OpenNMT.
| The model you downloaded is for the Lua version of OpenNMT.
If you are just a user of the project, I recommend you use this version, as it is the most supported and stable.
| https://stackoverflow.com/questions/44403886/ |
Pytorch: Sparse Matrix multiplication | Given:
self.A = torch.autograd.Variable(random_sparse(n = dim))
self.w = torch.autograd.Variable(torch.Tensor(np.random.normal(0,1,(dim,dim))))
Goal1:
torch.mm(self.A, self.w)
Goal2:
torch.spmm(self.A, self.w)
Result1:
TypeError: Type torch.sparse.FloatTensor doesn't implement stateless method addmm
Result2:
AttributeError: 'module' object has no attribute 'spmm'
My PyTorch version is 0.1.12_2 - would greatly appreciate possible solutions.
| spmm has been moved from torch module to torch.sparse module. For official documentation please check this link. There is also a warning in the beginning of the documentation of torch.sparse module:
This API is currently experimental and may change in the near future.
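For reference, a sketch of the sparse-dense product with the current API (torch.sparse_coo_tensor and torch.sparse.mm; the experimental 0.1.x interface differed, so treat this as illustrative):
import torch

idx = torch.tensor([[0, 1, 2], [2, 0, 1]])        # 2 x nnz COO indices
val = torch.tensor([3.0, 4.0, 5.0])
A = torch.sparse_coo_tensor(idx, val, (3, 3))     # sparse 3x3 matrix
w = torch.randn(3, 3)                             # dense matrix

out = torch.sparse.mm(A, w)                       # sparse @ dense -> dense result
print(out.shape)                                   # torch.Size([3, 3])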
| https://stackoverflow.com/questions/44417500/ |
Parallel way of applying function element-wise to a Pytorch CUDA Tensor | Suppose I have a torch CUDA tensor and I want to apply some function like sin() but I have explicitly defined the function F. How can I use parallel computation to apply F in Pytorch.
| I think it is currently not possible to explicitly parallelize a custom function on a CUDA tensor. A possible solution is to define a Function, in the same way the non-linear activation functions are defined, so you can feed data forward through the net and your function.
The drawback is that it probably won't work out of the box, because you would have to define a CUDA function and recompile pytorch.
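Note, however, that if F can be written in terms of existing tensor operations, no recompilation is needed: each of those kernels already runs element-wise in parallel on the GPU. A rough sketch (F here is just an arbitrary example function):
import torch

def F(x):
    # composed entirely of vectorized CUDA kernels, so it is applied
    # to every element in parallel without any explicit parallel code
    return torch.exp(-x) * torch.sin(2 * x) + x.clamp(min=0)

x = torch.rand(1000, 1000).cuda()
y = F(x)   # still a CUDA tensor, same shape as x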
| https://stackoverflow.com/questions/44432978/ |
PyTorch specific inspection issues in PyCharm | Has anyone been able to resolve PyTorch specific inspection issues in PyCharm? Previous posts for non-PyTorch related issues suggest upgrading PyCharm, but I'm currently at the latest version. One option is to of course disable certain inspections entirely, but I'd rather avoid that.
Example: torch.LongTensor(x) gives me "Unexpected argument...", whereas both call signatures (with and without x) are both supported.
| I believe it's because torch.LongTensor has no __init__ method for pycharm to find.
According to this source that I found thanks to this SO post :
Use __new__ when you need to control the creation of a new instance.
Use __init__ when you need to control initialization of a new instance.
__new__ is the first step of instance creation. It's called first,
and is responsible for returning a new instance of your class. In
contrast, __init__ doesn't return anything; it's only responsible for
initializing the instance after it's been created.
In general, you shouldn't need to override __new__ unless you're
subclassing an immutable type like str, int, unicode or tuple.
Since Tensors are types, it makes sense to define only __new__ and no __init__.
You can experiment this behavior by testing the following classes :
torch.LongTensor(1) # Unexpected arguments
Produces the warning while the following doesn't.
class MyLongTensor(torch.LongTensor):
    def __init__(self, *args, **kwargs):
        pass

MyLongTensor(1)  # No error
To confirm that the absence of __init__ is the culprit try :
class Example(object):
    pass
Example(0) # Unexpected arguments
To find out by yourself, use pycharm to Ctrl+click on LongTensor then _TensorBase and look at the defined methods.
| https://stackoverflow.com/questions/44522661/ |
AttributeError in creating module | I am getting the following error.
AttributeError: cannot assign module before Module.init() call
I have a class as follows.
class Classifier(nn.Module):
def __init__(self, dictionary, embeddings_index, max_seq_length, args):
self.embedding = EmbeddingLayer(len(dictionary), args.emsize, args.dropout)
self.drop = nn.Dropout(args.dropout)
What am I doing wrong here? I am a beginner in PyTorch, please help!
| The first thing you should always do when you create a module is call its super constructor. So, your class should look like this:
class Classifier(nn.Module):
def __init__(self, dictionary, embeddings_index, max_seq_length, args):
super(Classifier, self).__init__()
'''Rest of your code goes here.'''
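Applied to the class from the question, it would look roughly like this (EmbeddingLayer is assumed to be defined elsewhere in your project):
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, dictionary, embeddings_index, max_seq_length, args):
        super(Classifier, self).__init__()   # must run before any module assignment
        self.embedding = EmbeddingLayer(len(dictionary), args.emsize, args.dropout)
        self.drop = nn.Dropout(args.dropout)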
| https://stackoverflow.com/questions/44577501/ |
Stacking copies of an array/ a torch tensor efficiently? | I'm a Python/Pytorch user. First, in numpy, let's say I have an array M of size LxL, and I want to have the following
array: A=(M,...,M) of size, say, NxLxL. Is there a more elegant/memory-efficient way of doing it than:
A=np.array([M]*N) ?
Same question with torch tensor !
Because now, if M is a Variable(torch.tensor), I have to do:
A=torch.autograd.Variable(torch.tensor(np.array([M]*N)))
which is ugly !
| Note, that you need to decide whether you would like to allocate new memory for your expanded array or whether you simply require a new view of the existing memory of the original array.
In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory.
In PyTorch, you can use expand() and repeat() as follows for your purposes:
import torch
L = 10
N = 20
A = torch.randn(L,L)
A.expand(N, L, L) # specifies new size
A.repeat(N,1,1) # specifies number of copies
In Numpy, there are a multitude of ways to achieve what you did above in a more elegant and efficient manner. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the particular elements of an array, while np.tile() is designed to operate on the entire array. Hence,
import numpy as np
L = 10
N = 20
A = np.random.rand(L,L)
np.tile(A,(N, 1, 1))
| https://stackoverflow.com/questions/44593141/ |
How to parallelize RNN function in Pytorch with DataParallel | Here's an RNN model to run character based language generation:
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size, n_layers):
super(RNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.encoder = nn.Embedding(input_size, hidden_size)
self.GRU = nn.GRU(hidden_size, hidden_size, n_layers, batch_first=True)
self.decoder = nn.Linear(hidden_size, output_size)
def forward(self, input, batch_size):
self.init_hidden(batch_size)
input = self.encoder(input)
output, self.hidden = self.GRU(input, self.hidden)
output = self.decoder(output.view(batch_size, self.hidden_size))
return output
def init_hidden(self, batch_size):
self.hidden = Variable(torch.randn(self.n_layers, batch_size, self.hidden_size).cuda())
I instantiate the model using DataParallel, to split the batch of inputs across my 4 GPUs:
net = torch.nn.DataParallel(RNN(n_chars, hidden_size, n_chars, n_layers)).cuda()
Here's the full code.
Unfortunately, DataParallel requires the inputs to have batch_size as the first dimension, but GRU function expects hidden tensor to have batch_size as second dimension:
output, self.hidden = self.GRU(input, self.hidden)
The code as is throws the following error (note the printouts showing that encoder is correctly executed on 4 GPUs):
...
forward function: encoding input of shape: (16L, 1L)
forward function: encoding input of shape: (16L, 1L)
forward function: encoding input of shape: (16L,
forward function: encoding input of shape:
forward function: GRU processing input of shape:
1L)
( (16L, 16L1L, 1L), 100L)
forward function: GRU processing input of shape:
(16L, 1L,
forward function: GRU processing input of shape:100L)
(16L
forward function: GRU processing input of shape:, 1L, 100L) (
16L, 1L, 100L)
Traceback (most recent call last):
File "gru2.py", line 166, in <module>
output = net(c, batch_size)
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 61, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 71, in parallel_apply
return parallel_apply(replicas, inputs, kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 45, in parallel_apply
raise output
RuntimeError: Expected hidden size (2, 16L, 100), got (2L, 64L, 100L)
Here the model has 2 layers, batch_size=64, and hidden_size = 100.
How do I parallelize the GRU operation in the forward function?
| You can simply set the parameter dim=1, e.g.
net = torch.nn.DataParallel(RNN(n_chars, hidden_size, n_chars, n_layers), dim=1).cuda()
| https://stackoverflow.com/questions/44595338/ |
How to convert a list of strings into a tensor in pytorch? | I am working on a classification problem in which I have a list of strings as class labels, and I want to convert them into a tensor. So far I have tried converting the list of strings into a numpy array using the np.array function provided by the numpy module.
truth = torch.from_numpy(np.array(truths))
but I am getting the following error.
RuntimeError: can't convert a given np.ndarray to a tensor - it has an invalid type. The only supported types are: double, float, int64, int32, and uint8.
Can anybody suggest an alternative approach? Thanks
| Unfortunately, you can't right now. And I don't think it is a good idea since it will make PyTorch clumsy. A popular workaround could convert it into numeric types using sklearn.
Here is a short example:
from sklearn import preprocessing
import torch
labels = ['cat', 'dog', 'mouse', 'elephant', 'pandas']
le = preprocessing.LabelEncoder()
targets = le.fit_transform(labels)
# targets: array([0, 1, 3, 2, 4])  (LabelEncoder sorts the classes alphabetically)
targets = torch.as_tensor(targets)
# targets: tensor([0, 1, 3, 2, 4])
Since you may need the conversion between true labels and transformed labels, it is good to store the variable le.
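For example, to map model predictions back to the original string labels later (a small sketch using the fitted encoder above):
predicted = torch.tensor([4, 0, 2])
print(le.inverse_transform(predicted.numpy()))
# -> ['pandas' 'cat' 'elephant'], since the classes were sorted alphabetically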
| https://stackoverflow.com/questions/44617871/ |
How to handle the BatchNorm layer when training fully convolutional networks by finetuning? | Training fully convolutional networks (FCNs) for pixelwise semantic segmentation is very memory intensive, so we often use batchsize=1 for training FCNs. However, when we finetune pretrained networks with BatchNorm (BN) layers, batchsize=1 doesn't make sense for the BN layers. So, how should we handle the BN layers?
Some options:
delete the BN layers (merge the BN layers with the preceding layers for the pretrained model)
Freeze the parameters and statistics of the BN layers
....
which is better and any demo for implementation in pytorch/tf/caffe?
| Having only one element will make the batch normalization output zero if epsilon is non-zero (the variance is zero, and the mean will be the same as the input).
It's better to delete the BN layers from the network and try the activation function SELU (scaled exponential linear units). This is from the paper 'Self-Normalizing Neural Networks' (SNNs).
Quote from the paper:
While batch normalization requires explicit normalization, neuron
activations of SNNs automatically converge towards zero mean and
unit variance. The activation function of SNNs are “scaled
exponential linear units” (SELUs), which induce self-normalizing
properties.
The SELU is defined as:
def selu(x, name="selu"):
    alpha = 1.6732632423543772848170429916717
    scale = 1.0507009873554804934193349852946
    return scale * tf.where(x >= 0.0, x, alpha * tf.nn.elu(x))
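If you would rather keep the pretrained BN layers and go with the second option from the question (freeze their parameters and statistics), a hedged PyTorch sketch (model stands for your FCN):
import torch.nn as nn

def freeze_bn(model):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()                      # use the stored running mean/var, not batch stats
            for p in m.parameters():
                p.requires_grad = False   # do not update gamma/beta during finetuning

model.train()     # train mode for the rest of the network
freeze_bn(model)  # call this after every model.train() so BN stays frozen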
| https://stackoverflow.com/questions/44621731/ |
Torch code producing CUDA Runtime Error | A friend of mine implemented a sparse version of torch.bmm that actually works, but when I try a test, I get a runtime error (that has nothing to do with this implementation) that I don't understand. I have seen a few topics about it but couldn't find a solution. Here is the code, and the error:
if __name__ == "__main__":
tmp = torch.zeros(1).cuda()
batch_csr = BatchCSR()
sparse_bmm = SparseBMM()
i=torch.LongTensor([[0,5,8], [1,5,8], [2,5,8]])
v=torch.FloatTensor([4,3,8])
s=torch.Size([3,500,500])
indices, values, size = i,v,s
a_ = torch.sparse.FloatTensor(indices, values, size).cuda().transpose(2, 1)
batch_size, num_nodes, num_faces = a_.size()
a = a_.to_dense()
for _ in range(10):
b = torch.randn(batch_size, num_faces, 16).cuda()
torch.cuda.synchronize()
time1 = time.time()
result = torch.bmm(a, b)
torch.cuda.synchronize()
time2 = time.time()
print("{} CuBlas dense bmm".format(time2 - time1))
torch.cuda.synchronize()
time1 = time.time()
col_ind, col_ptr = batch_csr(a_.indices(), a_.size())
my_result = sparse_bmm(a_.values(), col_ind, col_ptr, a_.size(), b)
torch.cuda.synchronize()
time2 = time.time()
print("{} My sparse bmm".format(time2 - time1))
print("{} Diff".format((result-my_result).abs().max()))
And the error:
Traceback (most recent call last):
File "sparse_bmm.py", line 72, in <module>
b = torch.randn(3, 500, 16).cuda()
File "/home/bizeul/virtual_env/lib/python2.7/site-packages/torch/_utils.py", line 65, in _cuda
return new_type(self.size()).copy_(self, async)
RuntimeError: cuda runtime error (59) : device-side assert triggered at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorCopy.c:18
When running with the command CUDA_LAUNCH_BLOCKING=1, I get the error :
/b/wheel/pytorch-src/torch/lib/THC/THCTensorIndex.cu:121: void indexAddSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 1, SrcDim = 1, IdxDim = -2]: block: [0,0,0], thread: [0,0,0] Assertion `dstIndex < dstAddDimSize` failed.
THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THCS/generic/THCSTensorMath.cu line=292 error=59 : device-side assert triggered
Traceback (most recent call last):
File "sparse_bmm.py", line 69, in <module>
a = a_.to_dense()
RuntimeError: cuda runtime error (59) : device-side assert triggered at /b/wheel/pytorch-src/torch/lib/THCS/generic/THCSTensorMath.cu:292
| The indices that you are passing to create the sparse tensor are incorrect.
here is how it should be:
i = torch.LongTensor([[0, 1, 2], [5, 5, 5], [8, 8, 8]])
How to create a sparse tensor:
Lets take a simpler example. Lets say we want the following tensor:
0 0 0 2 0
0 0 0 0 0
0 0 0 0 20
[torch.cuda.FloatTensor of size 3x5 (GPU 0)]
As you can see, the number (2) needs to be in the (0, 3) location of the sparse tensor. And the number (20) needs to be in the (2, 4) location.
In order to create this, our index tensor should look like this
[[0 , 2],
[3 , 4]]
And, now for the code to create the above sparse tensor:
i = torch.LongTensor([[0, 2], [3, 4]])
v = torch.FloatTensor([2, 20])
s = torch.Size([3, 5])
a_ = torch.sparse.FloatTensor(i, v, s).cuda()
More comments regarding the assert error by cuda:
Assertion 'dstIndex < dstAddDimSize' failed. tells us that, its highly likely, you've got an index out of bounds. So whenever you notice that, look for places where you might have supplied the wrong indices to any of the tensors.
| https://stackoverflow.com/questions/44661086/ |
Torch: How to shuffle a tensor by its rows? | I am currently working in torch to implement a random shuffle (on the rows, the first dimension in this case) on some input data. I am new to torch, so I have some trouble figuring out how permutation works.
The following is supposed to shuffle the data:
if argshuffle then
local perm = torch.randperm(sids:size(1)):long()
print("\n\n\nSize of X and y before")
print(X:view(-1, 1000, 128):size())
print(y:size())
print(sids:size())
print("\nPerm size is: ")
print(perm:size())
X = X:view(-1, 1000, 128)[{{perm},{},{}}]
y = y[{{perm},{}}]
print(sids[{{1}, {}}])
sids = sids[{{perm},{}}]
print(sids[{{1}, {}}])
print(X:size())
print(y:size())
print(sids:size())
os.exit(69)
end
This prints out
Size of X and y before
99
1000
128
[torch.LongStorage of size 3]
99
1
[torch.LongStorage of size 2]
99
1
[torch.LongStorage of size 2]
Perm size is:
99
[torch.LongStorage of size 1]
5
[torch.LongStorage of size 1x1]
5
[torch.LongStorage of size 1x1]
99
1000
128
[torch.LongStorage of size 3]
99
1
[torch.LongStorage of size 2]
99
1
[torch.LongStorage of size 2]
From the output, I can infer that the function did not shuffle the data. How can I make it shuffle correctly, and what is the common solution in lua/torch?
| I also faced a similar issue. In the documentation, there is no shuffle function for tensors (there are for dataset loaders). I found a workaround to the problem using torch.randperm.
>>> a=torch.rand(3,5)
>>> print(a)
tensor([[0.4896, 0.3708, 0.2183, 0.8157, 0.7861],
[0.0845, 0.7596, 0.5231, 0.4861, 0.9237],
[0.4496, 0.5980, 0.7473, 0.2005, 0.8990]])
>>> # Row shuffling
...
>>> a=a[torch.randperm(a.size()[0])]
>>> print(a)
tensor([[0.4496, 0.5980, 0.7473, 0.2005, 0.8990],
[0.0845, 0.7596, 0.5231, 0.4861, 0.9237],
[0.4896, 0.3708, 0.2183, 0.8157, 0.7861]])
>>> # column shuffling
...
>>> a=a[:,torch.randperm(a.size()[1])]
>>> print(a)
tensor([[0.2005, 0.7473, 0.5980, 0.8990, 0.4496],
[0.4861, 0.5231, 0.7596, 0.9237, 0.0845],
[0.8157, 0.2183, 0.3708, 0.7861, 0.4896]])
I hope it answers the question!
| https://stackoverflow.com/questions/44738273/ |
necessity of transposed convolution when feature maps are not downsampled | I was reading a paper here. The authors in the paper have proposed a symmetric generator network which contains a stack of convolution layers followed by a stack of de-convolution (transposed convolution) layers. It is also mentioned that a stride of 1 with appropriate padding is used to ensure that feature map size is same as input image size.
My question is: if there is no downsampling, then why are transposed convolution layers used? Can't the generator be constructed only with convolution layers? Am I missing something about transposed convolution layers here (are they being used for some other purpose)? Please help.
Update: I am re-opening this question, as I came across this paper which states in section 2.1.1 that "deconvolution is used to compensate the details". However, I am not able to appreciate this because there is no downsampling of feature maps in the proposed model. Can somebody explain why deconvolution is preferred over convolution here? What makes deconvolution layer perform better than convolution in this case?
| In theory spatial convolution can be used as a replacement for fractionally-strided convolution. Typically this is avoided because, even without some type of pooling, convolutional layers can produce outputs that are smaller than their corresponding inputs (see the formulae for owidth and oheight in the docs here). Using nn.SpatialConvolution to produce outputs that are larger than inputs would require a great deal of inefficient zero-padding to reach the original input size. To make the reverse process easier, torch functionality was added for fractionally-strided convolution.
That being said, this case is a bit different since the size at each layer remains constant. So it is quite possible that using nn.SpatialConvolution for the entire generator will work. You'll still want to mirror the encoder's nInputPlane and nOutputPlane pattern to successfully move from feature space back to input space.
Likely the authors referred to the decoder process as using transpose convolution just for clarity and generality.
This article discusses convolution and fractionally-strided convolution, and provides nice graphics that I do not wish to copy here.
| https://stackoverflow.com/questions/44809446/ |
PyTorch Linear Algebra Gradients | I'm looking to back-propagate gradients through a singular value decomposition for regularisation purposes. PyTorch currently does not support backpropagation through a singular value decomposition.
I know that I could write my own custom function that operates on a Variable; takes its .data tensor, applies the torch.svd to it, wraps a Variable around its singular values and returns it in the forward pass, and in the backward pass applies the appropriate Jacobian matrix to the incoming gradients.
However, I was wondering whether there was a more elegant (and potentially faster) solution, where I could overwrite the "Type Variable doesn't implement stateless method svd" Error directly, call Lapack, etc. ?
If someone could guide me through the appropriate steps and source files I need to look at, I'd be very grateful. I suppose these steps would similarly apply to other linear algebra operations which have no associated backward method currently.
| torch.svd with forward and backward pass is now available in the Pytorch master:
http://pytorch.org/docs/master/torch.html#torch.svd
You need to install Pytorch from source:
https://github.com/pytorch/pytorch/#from-source
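Once built, usage is the standard autograd pattern; a minimal sketch (assuming a master build where the SVD backward is implemented):
import torch
from torch.autograd import Variable

A = Variable(torch.randn(5, 3), requires_grad=True)
U, S, V = torch.svd(A)
loss = S.sum()          # e.g. a nuclear-norm style regularizer on the singular values
loss.backward()         # gradients flow back into A
print(A.grad.size())    # torch.Size([5, 3])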
| https://stackoverflow.com/questions/44829420/ |
Failed to compute dot product of torch.cuda.FloatTensor | I used a GPU to compute the dot product of the output of neural networks and a torch.cuda.FloatTensor (both of them are stored in GPU) but got an error saying:
TypeError: dot received an invalid combination of arguments - got (torch.cuda.FloatTensor) but expected (torch.FloatTensor tensor).
the codes are like
p = torch.exp(vector.dot(ht))
here vector is a torch FloatTensor and ht is the output of neural networks.
I've struggled with these things for days but still got no idea. Thanks in advance for any possible solution!
| What does the following error message mean?
TypeError: dot received an invalid combination of arguments - got (torch.cuda.FloatTensor) but expected (torch.FloatTensor tensor).
It means dot function expected cpu tensor but you are providing a gpu (cuda) tensor.
So, how to solve the problem of your code?
p = torch.exp(vector.dot(ht))
As you mentioned vector is a FloatTensor, so ht should be a FloatTensor as well but ht is a cuda.FloatTensor (because, your neural network model is in gpu memory).
So, you should convert vector to cuda.FloatTensor by doing the following.
vector = vector.cuda()
OR, you can convert the cuda.FloatTensor to cpu tensor by doing the following. Please note, .cpu() method is not applicable for Variable. In that case, you can use .data.cpu().
ht = ht.cpu()
It should solve your problem.
| https://stackoverflow.com/questions/44879767/ |
What would be equivalent of pytorch's torch.nn.CosineEmbeddingLoss in tensorflow? | CosineEmbeddingLoss in Pytorch is the perfect function I am looking for in tensorflow, but I can only find tf.losses.cosine_distance. Is there a way or code that writes CosineEmbeddingLoss in tensorflow?
| A TensorFlow version of CosineEmbeddingLoss:
import tensorflow as tf

def CosineEmbeddingLoss(margin=0.):
    def _cosine_similarity(x1, x2):
        """Cosine similarity between two batches of vectors."""
        return tf.reduce_sum(x1 * x2, axis=-1) / (
            tf.norm(x1, axis=-1) * tf.norm(x2, axis=-1))

    def _cosine_embedding_loss_fn(input_one, input_two, target):
        similarity = _cosine_similarity(input_one, input_two)
        return tf.reduce_mean(tf.where(
            tf.equal(target, 1),
            1. - similarity,
            tf.maximum(tf.zeros_like(similarity), similarity - margin)))

    return _cosine_embedding_loss_fn
Running it alongside Torch's version:
import tensorflow as tf
import numpy
import torch
from torch.autograd import Variable
first_values = numpy.random.normal(size=[100, 3])
second_values = numpy.random.normal(size=[100, 3])
labels = numpy.random.randint(2, size=[100]) * 2 - 1
torch_result = torch.nn.CosineEmbeddingLoss(margin=0.5)(
    Variable(torch.FloatTensor(first_values)),
    Variable(torch.FloatTensor(second_values)),
    Variable(torch.IntTensor(labels))).data.numpy()

with tf.Graph().as_default():
    with tf.Session():
        tf_result = CosineEmbeddingLoss(margin=0.5)(
            first_values, second_values, labels).eval()
print(torch_result, tf_result)
Seems to match to within reasonable precision:
array([ 0.35702518], dtype=float32) 0.35702516587462357
| https://stackoverflow.com/questions/44905227/ |
Unique values in PyTorch tensor | I'm trying to find distinct values in a PyTorch tensor. Is there an efficient analogue of Tensorflow's unique op?
| There is a torch.unique() method in 0.4.0
In torch <= 0.3.1 you can try:
import torch
import numpy as np
x = torch.rand((3,3)) * 10
np.unique(x.round().numpy())
| https://stackoverflow.com/questions/44993324/ |
Confused about tensor dimensions and batch sizes in pytorch | So I'm very new to PyTorch and Neural Networks in general, and I'm having some problems creating a Neural Network that classifies names by gender.
I based this off of the PyTorch tutorial for RNNs that classify names by nationality, but I decided not to go with a recurrent approach... Stop me right here if this was the wrong idea!
However, whenever I try to run an input through the network it tells me:
RuntimeError: matrices expected, got 3D, 2D tensors at /py/conda-bld/pytorch_1493681908901/work/torch/lib/TH/generic/THTensorMath.c:1232
I know this has something to do with how PyTorch always expects there to be a batch size or something, and I have my tensor set up that way, but you can probably tell by this point that I have no idea what I'm talking about.
Here's my code:
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import unicodedata
import string
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import random
from torch.autograd import Variable
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
"""------GLOBAL VARIABLES------"""
all_letters = string.ascii_letters + " .,;'"
num_letters = len(all_letters)
all_names = {}
genders = ["Female", "Male"]
"""-------DATA EXTRACTION------"""
def findFiles(path):
return glob.glob(path)
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
# Read a file and split into lines
def readLines(filename):
lines = open(filename, encoding='utf-8').read().strip().split('\n')
return [unicodeToAscii(line) for line in lines]
for file in findFiles("/home/andrew/PyCharm/PycharmProjects/CantStop/data/names/*.txt"):
gender = file.split("/")[-1].split(".")[0]
names = readLines(file)
all_names[gender] = names
"""-----DATA INTERPRETATION-----"""
def nameToTensor(name):
tensor = torch.zeros(len(name), 1, num_letters)
for index, letter in enumerate(name):
tensor[index][0][all_letters.find(letter)] = 1
return tensor
def outputToGender(output):
gender, gender_index = output.data.topk(1)
if gender_index[0][0] == 0:
return "Female"
return "Male"
"""------NETWORK SETUP------"""
class Net(nn.Module):
def __init__(self, input_size, output_size):
super(Net, self).__init__()
#Layer 1
self.Lin1 = nn.Linear(input_size, int(input_size/2))
self.ReLu1 = nn.ReLU()
self.Batch1 = nn.BatchNorm1d(int(input_size/2))
#Layer 2
self.Lin2 = nn.Linear(int(input_size/2), output_size)
self.ReLu2 = nn.ReLU()
self.Batch2 = nn.BatchNorm1d(output_size)
self.softMax = nn.LogSoftmax()
def forward(self, input):
output1 = self.Batch1(self.ReLu1(self.Lin1(input)))
output2 = self.softMax(self.Batch2(self.ReLu2(self.Lin2(output1))))
return output2
NN = Net(num_letters, 2)
"""------TRAINING------"""
def getRandomTrainingEx():
gender = genders[random.randint(0, 1)]
name = all_names[gender][random.randint(0, len(all_names[gender])-1)]
gender_tensor = Variable(torch.LongTensor([genders.index(gender)]))
name_tensor = Variable(nameToTensor(name))
return gender_tensor, name_tensor, gender
def train(input, target):
loss_func = nn.NLLLoss()
optimizer = optim.SGD(NN.parameters(), lr=0.0001, momentum=0.9)
optimizer.zero_grad()
output = NN(input)
loss = loss_func(output, target)
loss.backward()
optimizer.step()
return output, loss
all_losses = []
current_loss = 0
for i in range(100000):
gender_tensor, name_tensor, gender = getRandomTrainingEx()
output, loss = train(name_tensor, gender_tensor)
current_loss += loss
if i%1000 == 0:
print("Guess: %s, Correct: %s, Loss: %s" % (outputToGender(output), gender, loss.data[0]))
if i%100 == 0:
all_losses.append(current_loss/10)
current_loss = 0
# plt.figure()
# plt.plot(all_losses)
# plt.show()
Please help a newbie out!
|
Debugging your bug out:
PyCharm is a helpful Python debugger that lets you set breakpoints and view the dimensions of your tensors.
For easier debugging, do not stack forward operations up like this:
output1 = self.Batch1(self.ReLu1(self.Lin1(input)))
Instead,
h1 = self.ReLu1(self.Lin1(input))
h2 = self.Batch1(h1)
As for the stacktrace, PyTorch also provides a Pythonic error stacktrace. I believe that before
RuntimeError: matrices expected, got 3D, 2D tensors at /py/conda-bld/pytorch_1493681908901/work/torch/lib/TH/generic/THTensorMath.c:1232
there are some Python stacktrace lines that point right into your code. For easier debugging, as I said, don't stack the forward calls.
Use PyCharm to create a breakpoint before the crash point. Then, in the debugger watcher, use Variable(torch.rand(dim1, dim2)) to test the input and output dimensions of the forward pass and to check whether a dimension is incorrect, comparing with the dimension of your actual input. Call input.size() in the debugger watcher.
For example, self.ReLu1(self.Lin1(Variable(torch.rand(10, 20)))).size(). If it shows red text (an error), then the input dimension is incorrect. Otherwise, it shows the size of the output.
Read the docs
The PyTorch docs specify each module's input/output dimensions. They also include example code snippets:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = Variable(torch.randn(5, 3, 10))
>>> h0 = Variable(torch.randn(2, 3, 20))
>>> output, hn = rnn(input, h0)
You may use such code snippets in the PyCharm debugger to explore the input and output dimensions of the specific layers you are interested in (RNN, Linear, BatchNorm1d).
| https://stackoverflow.com/questions/45020108/ |